Problem Description
Several months ago, I started working on a project to develop an algorithm to process data acquired from a linescan camera device (e.g., a line of 384 pixels every 300us). Since I am an engineer and not a programmer, I started working with Python to minimize the learning curve. With help from SX, I successfully built a Python application (that ended up being more than 2000 lines of code) and successfully created an image processing algorithm to work with the data. I impressed the customer and they want to take it to the next level. Now, I need it to be real-time... and that means C++. I got Koenig and Moo's Accelerated C++ and started reading. So, please go easy on me. I love to learn to program, but I have no formal training. I'm doing the best I can!
I now have a C++ prototype GUI (using Qt) wrapped around all the libraries needed to communicate with the camera via a CameraLink interface. The acquisition code lives in its own thread and emits signals to the GUI. So, I have the fundamentals in place. I can acquire as many lines of data as I wish with my current code, but I am now trying to figure out how to build an application around that. Even wrote a custom makefile that works with Qt (MOCing, etc.)
Anyway, for the application, I would like two modes (these are the QUESTIONS):
(1) A "live" view... where the linescan data is displayed in real time by the GUI. I was thinking about using a circular buffer (e.g., boost::circular_buffer) to hold the data in real time, and simply passing a copy of the buffer (memcpy?) to the GUI via an emitted signal. Is this tenable? I feel the copy of the buffer is necessary since the circular buffer will change every 300us or so, and I don't know whether the main event loop can keep up. Again, the data acquisition lives in its own thread. Does it have to be more complicated than that? Will I have to pop data from the buffer as it is read, instead of using a circular buffer? I felt a circular buffer was the way to go since that is exactly the kind of image I want to display.
(2) A data processing mode... where the linescan data is emitted in blocks (i.e., 384 × 384 pixels). At a scan rate of 300us (~3,333 Hz), that is a block or frame every 100ms or so. In that 100ms, I'll need to do normalization of the data, object detection, thresholding, etc. I'm planning on running this on a Linux box with the real-time kernel patch. I think it should keep up. I'll need to communicate between the data acquisition and data processing threads... do I need sockets for this?
I'm looking for advice here on how to get started with these two pieces. The second one is more critical, but the first will help me visualize what is going on. Ultimately, I'd like to have both modes running simultaneously. I've spent most of the week getting this far... but need to ensure I'm heading down the right path with my plan.
To (1):
Makes sense to me. Otherwise you'd have to be careful about synchronization issues when accessing the same buffer from the GUI code and your receiver code. One possible refinement would be to limit the number of GUI updates a bit. Screen refresh rates are usually 50 or 60 Hz, and most GUI libraries assume that updates don't happen much more frequently than that.
You can also cut down on the amount of data you copy by copying only what will actually be displayed on the screen. So I'd recommend maybe inverting this a little: the GUI gets an update timer (at whatever rate looks good enough for your purpose) and pulls new display contents from the circular buffer as needed. That way you cut down on a lot of unnecessary (that is, invisible) screen updates and buffer copies.
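The timer-pull idea can be sketched without any Qt machinery. Below is a minimal, illustrative line ring guarded by a mutex; the names `LineRing`, `push`, and `snapshot` are mine, not from Boost or Qt. In the real application the acquisition thread would call `push()` once per line, and a `QTimer` in the GUI thread would call `snapshot()` at 30-60 Hz:

```cpp
#include <algorithm>
#include <cstdint>
#include <mutex>
#include <vector>

// Sketch of the "GUI pulls on a timer" idea: the acquisition thread pushes
// one 384-pixel line every ~300us; the GUI thread periodically copies only
// the lines it will actually draw, so it owns its own data while painting.
class LineRing {
public:
    explicit LineRing(std::size_t capacity) : lines_(capacity) {}

    // Called from the acquisition thread for every new line.
    void push(const std::vector<uint8_t>& line) {
        std::lock_guard<std::mutex> lock(mtx_);
        lines_[head_] = line;
        head_ = (head_ + 1) % lines_.size();
        if (count_ < lines_.size()) ++count_;
    }

    // Called from the GUI thread on its update timer: copies the newest
    // `n` stored lines, oldest first.
    std::vector<std::vector<uint8_t>> snapshot(std::size_t n) const {
        std::lock_guard<std::mutex> lock(mtx_);
        n = std::min(n, count_);
        std::vector<std::vector<uint8_t>> out;
        out.reserve(n);
        for (std::size_t i = 0; i < n; ++i) {
            std::size_t idx = (head_ + lines_.size() - n + i) % lines_.size();
            out.push_back(lines_[idx]);
        }
        return out;
    }

private:
    mutable std::mutex mtx_;
    std::vector<std::vector<uint8_t>> lines_;
    std::size_t head_ = 0;   // next write position
    std::size_t count_ = 0;  // lines stored so far
};
```

boost::circular_buffer would replace the hand-rolled indexing here; the important part is that both threads touch the buffer only under the lock, and the GUI draws from its own copy.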
Depending on your needs you could also just use the blocks that are created for part 2 of your question for screen updates.
To (2):
First, you don't normally need sockets or anything like that when you use multithreading.
I'd recommend something like a thread pool for your processing: As new blocks become available copy them to a task object (a class you define that has the code for processing and implements an interface understood by the thread pool) and give it to a thread pool.
Since you're using Qt I'd look at QThreadPool and QRunnable for this part. If you need to FINISH processing blocks in a specific order things get a bit more interesting. Basically you'd have a blocking queue data structure that you would also feed with the QRunnable objects, then another thread that grabs them off there and waits for each to complete in the order they were started.
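If completion order matters, the pool-plus-queue arrangement described above might be sketched like this. To stay self-contained it swaps QThreadPool/QRunnable for std::async, which follows the same pattern (a task handed to a pool, plus a queue that remembers start order); `processBlock` and `processInOrder` are hypothetical names, and the processing body is a placeholder for the real normalization/detection/thresholding code:

```cpp
#include <cstdint>
#include <future>
#include <queue>
#include <utility>
#include <vector>

using Block = std::vector<uint8_t>;  // one 384x384 block, flattened

// Placeholder "processing": count pixels above a fixed threshold.
int processBlock(const Block& b) {
    int hits = 0;
    for (uint8_t px : b)
        if (px > 128) ++hits;
    return hits;
}

// Launch one task per block as it becomes available, then collect results
// in the order the tasks were started, even if later blocks finish first.
std::vector<int> processInOrder(std::vector<Block> blocks) {
    std::queue<std::future<int>> inFlight;  // remembers start order
    for (auto& b : blocks)
        inFlight.push(std::async(std::launch::async, processBlock, std::move(b)));

    std::vector<int> results;
    while (!inFlight.empty()) {
        results.push_back(inFlight.front().get());  // waits for task i
        inFlight.pop();
    }
    return results;
}
```

With QRunnable the waiting thread would pull the task objects off a blocking queue and wait on each one's completion instead of calling `get()` on a future.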
The communication here would be limited to the data acquisition thread cutting the input into blocks and launching tasks. If you need to also control the data acquisition thread from the data processing tasks you'd likely need a bit of a different design.
You might also get away without using a real-time kernel patch. If the library you use to access your line scan camera buffers its input you would just get multiple lines one after the other if you miss an update. Again this depends on how fast you need to react, but you're doing image processing on blocks that are multiple lines high, so I'd expect that you can already handle a bit of delay.
ETA: I just re-read your question. So you basically have blocks of only 384x384 pixels every 100ms. I was about to suggest using Qt signals, but there you can run into problems: Qt signals use a blocking queue data structure internally when communicating between threads. Unfortunately, their implementation does not allow you to set a size limit, so if your GUI thread or your processing thread does not process them fast enough (say the user is sitting in a modal dialog on the GUI side), they will get buffered instead and use up memory.
Instead you can use something like this:
Acquisition thread ==> (Blocking Queue) ==> Processing thread
Basically, your acquisition thread would just pump blocks into the queue. The processing thread would grab blocks from the queue in a loop and send them to the GUI for display, then process them. Or the other way around if you want visualizations.