Problem Description
Project description:
Connect an existing C program (main control) to a Python GUI/widget. For this I'm using a FIFO. The C program is designed to look at frame-based telemetry.
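A minimal sketch of the FIFO link on the Python side, assuming the C program writes one newline-terminated frame number per update; the path /tmp/telemetry_fifo and the message format are illustrative assumptions, not details from the original setup:

```python
import os

FIFO_PATH = "/tmp/telemetry_fifo"  # hypothetical name; the C side would mkfifo/open the same path

def read_frame_numbers(fifo_path=FIFO_PATH):
    """Yield frame numbers as the C program writes them into the FIFO."""
    # Opening the read end blocks until the C program opens the write end.
    with open(fifo_path, "r") as fifo:
        for line in fifo:
            line = line.strip()
            if line:
                yield int(line)

if __name__ == "__main__":
    for frame in read_frame_numbers():
        print("frame number received:", frame)
```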
The Python GUI performs two functions:
- Run/create plots (probably via matplotlib) through GUI widgets as the user requires (separate .py files, scripts written by different users).
- Pass the frame number to the Python plotting scripts after they are created, so each plot can "update" itself once it gets the frame number from the main program.
I have several questions--mostly about understanding the pros and cons of multiprocessing versus multithreading as discussed here: Multiprocessing vs Threading Python
Implementation considerations:
My guess is that having too many plots created via threads in a signal-based architecture becomes laggy when updating them. I'm not sure at what point they become CPU-bound: most plots will update a few line series, some may update images. Perhaps it will be laggy regardless of which creation method I choose.
I'm not sure what opening 30 Python processes, each making a plot or two with matplotlib, does to a machine or its resources. A single simple matplotlib plot on my system has an RSS (resident memory) of about 117 MB, so I don't think a single user plotting 30 plots would exhaust system memory even with a separate process per plot. (16 GB, 32-core Linux boxes with several simultaneous users.)
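A rough back-of-the-envelope check of that claim, assuming every plot process stays near the observed 117 MB RSS (this ignores pages shared between processes, which would make the real total lower):

```python
# Hypothetical sizing estimate, not a measurement.
rss_per_process_mb = 117   # observed RSS of one simple matplotlib plot process
plots = 30                 # worst case: one process per plot for a single user
total_gb = plots * rss_per_process_mb / 1024
print(f"~{total_gb:.1f} GB for one user")  # roughly 3.4 GB of the 16 GB box
```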
Questions:
- Should I open the plots via threads or via processes, and is one noticeably slower than the other?
- If I use threads, does anyone know roughly how many matplotlib figures a single thread can update before it gets laggy?
- If the plots are created as processes, should I use the multiprocessing package? I'm guessing that API can pass the frame number between processes directly?
- Given that multiprocessing is available, trying to open processes via Popen would probably be silly, right? I'm guessing that's the case, since if I did that I would have to set up all the pipes/IPC myself, which would be more work?
Accepted Answer
So I was able to implement this project in two ways--with and without multiprocessing.
- I have a main process in the PyQt GUI with a thread that reads the frame number from the pipe fed by the controlling C program.
- When the user selects plots (.py scripts), they can choose to hit an "execute" button on a batch of plots, keeping them in the main process. From that point on, the plots update continuously whenever the frame updates. Slowdown starts after only a handful of plots, but it isn't prohibitive for 10-20 simple time-series plots.
- There is an alternative button that lets a batch of plots run in another process. I was able to do this either with Popen and named pipes, or with multiprocessing and multiprocessing Queues. The cleanest way was to have my other processes create the plotting QObjects and use pyqt signals, with each of those processes ending up creating its own QApplication. However, I had to use ctx = mp.get_context('spawn') on Linux, because Linux defaults to fork, and when I created the QApplication it thought a QApplication was already running in the main process. This was the only way I could get predictable multiprocessing behavior in which all the matplotlib plots updated in the alternate process (see the sketch after this list).
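A stripped-down sketch of that arrangement, assuming PyQt5 and the Qt5Agg matplotlib backend; names like plot_worker and frame_queue are illustrative, not the original code:

```python
import multiprocessing as mp

def plot_worker(frame_queue):
    """Runs in a spawned child: owns its own QApplication and matplotlib figure."""
    # Imports happen inside the child so the spawned process sets up Qt itself.
    import queue
    from PyQt5.QtCore import QTimer
    from PyQt5.QtWidgets import QApplication
    import matplotlib
    matplotlib.use("Qt5Agg")
    import matplotlib.pyplot as plt

    app = QApplication([])          # each child process gets its own QApplication
    fig, ax = plt.subplots()
    fig.show()

    def poll_queue():
        # Drain the queue without blocking and redraw with the newest frame number.
        frame = None
        while True:
            try:
                frame = frame_queue.get_nowait()
            except queue.Empty:
                break
        if frame is not None:
            ax.set_title(f"frame {frame}")
            # ...update line/image data for this frame here...
            fig.canvas.draw_idle()

    timer = QTimer()
    timer.timeout.connect(poll_queue)
    timer.start(50)                 # poll every 50 ms
    app.exec_()

if __name__ == "__main__":
    import time
    # 'spawn' avoids the fork-related "QApplication already running" problem on Linux.
    ctx = mp.get_context("spawn")
    frame_queue = ctx.Queue()
    proc = ctx.Process(target=plot_worker, args=(frame_queue,), daemon=True)
    proc.start()
    for frame_number in range(100): # stand-in for frame numbers read from the FIFO
        frame_queue.put(frame_number)
        time.sleep(0.1)
    proc.join(timeout=5)            # daemon child is killed when the parent exits
```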
I read on the web that matplotlib is not thread-safe; however, with pyqt signals emitted from the threads waiting on the queue reads (so the slots that actually touch the plots run in the GUI thread), this seems to be fine.
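A minimal illustration of that pattern, assuming PyQt5: the worker QThread only emits a signal, and the slot that touches the matplotlib Axes runs in the GUI thread via the default queued connection. The class and signal names are illustrative:

```python
import sys
import time
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import QApplication, QMainWindow
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure

class FrameReader(QThread):
    """Stand-in for the thread that blocks on the FIFO/queue read."""
    frame_ready = pyqtSignal(int)

    def run(self):
        for frame in range(50):           # pretend frame numbers arrive from the C program
            time.sleep(0.1)
            self.frame_ready.emit(frame)  # emitted here, delivered to the GUI thread

class PlotWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.canvas = FigureCanvas(Figure())
        self.ax = self.canvas.figure.add_subplot(111)
        self.setCentralWidget(self.canvas)
        self.reader = FrameReader()
        self.reader.frame_ready.connect(self.update_plot)  # queued connection across threads
        self.reader.start()

    def update_plot(self, frame):
        # Runs in the GUI thread, so matplotlib is only ever touched from one thread.
        self.ax.clear()
        self.ax.set_title(f"frame {frame}")
        self.canvas.draw_idle()

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = PlotWindow()
    win.show()
    sys.exit(app.exec_())
```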
I chose this implementation to give the user the flexibility to open plots in the same process, or batches of plots in another process, rather than a predetermined number of plots per process; the thinking was that certain plots with complex updates deserve their own process and can be selected as such. It is also less wasteful than one plot per process for simple plots: each process costs a minimum of about 100 MB, while each additional plot in the same process only needs roughly 3 MB more.
One last detail: the user can potentially switch frames quite rapidly. I had the receiving process read and empty the queue in a non-blocking daemon thread, keeping only the latest frame. Once a signal was sent to update the plots, the plot update loop grabbed a thread lock, and the read daemon could emit updates again only after the update method released that lock (a sketch of this is shown below).
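A rough sketch of that drain-and-take-the-latest pattern, assuming a multiprocessing Queue on the receiving side; the shared lock keeps a new emit from firing while a plot update is still in progress. The function and parameter names are illustrative:

```python
import queue
import threading

def start_frame_listener(frame_queue, emit_update, update_lock):
    """Drain frame_queue in a daemon thread and pass only the newest frame to emit_update."""
    def listen():
        while True:
            frame = frame_queue.get()          # block until at least one frame arrives
            try:
                while True:                    # then drain anything queued behind it
                    frame = frame_queue.get_nowait()
            except queue.Empty:
                pass
            with update_lock:                  # waits if a plot update still holds the lock
                emit_update(frame)             # e.g. a pyqt signal's emit()
    t = threading.Thread(target=listen, daemon=True)
    t.start()
    return t
```

In the receiving process, the plot update method would acquire the same update_lock around its redraw, so a rapid burst of frame changes collapses into a single update with the most recent frame number.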
Some sample code of the basic idea of the implementation: https://stackoverflow.com/a/49226785/8209352