Best practices for running a Python node very fast (~kHz): should I renew the session? Session becomes corrupt after a few seconds with multiple calls.


I am scanning data from a piece of hardware using LabVIEW and processing it in a Python module. This runs every couple of ms, so it needs to process very quickly. If I open Python, run the code, and close Python on each iteration, every couple of ms, it takes too long and can't keep up. So instead I tried opening the Python session once and passing that same session into each Python module call. This works fine for a few seconds, and keeps up OK in time, but then crashes. Python returns a nondescript error, and no future calls to Python work. It seems as though the session is somehow corrupted.

 

The sample VI here works OK and doesn't exhibit the behavior (I could not replicate it with a simpler VI), but it illustrates roughly what I am doing. The Python module in the example is just the addint32 example that comes with LabVIEW; I just made it run with the passed parameters.
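For reference, the module function being called is in the spirit of the shipping addint32 example, something like the sketch below (modeled on the example's behavior; the exact shipped code may differ):

```python
# Sketch of an addint32-style module function, as called by the
# LabVIEW Python Node: two integer inputs in, one integer result out.

def addint32(a, b):
    """Add two int32 values passed in from the LabVIEW Python Node."""
    return a + b
```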

 

So, to summarize,

 

- if I open a new connection on each call, it works fine, but can't keep up with the data;

- if I run as shown in the image, it keeps up in time, but the connection becomes corrupted after a very short time.

 

My questions are:

 

1. Is it OK to have a single Python session open and pass it all over a VI that makes a bunch of module calls?

2. What is the best practice for using Python modules when I need to make many fast calls?

3. Is there a way to repair a session that is still "fast," without waiting for an error on a module call? (I can't easily use the error cluster on an output to check whether the session is bad, because by the time the error appears I would already have lost data.)

 

Thanks!

Message 1 of 3
Solution accepted by topic author AlexGSFC

I don't work with Python, so I can't comment specifically, but I can try generically.

 

Can you batch your data so you only send it to Python every, say, 100 ms? Generally speaking, for things like this, the overhead of making a call is massively bigger than the call itself. Say you need 2 ms to send the data to Python, then 0.5 ms for it to do the math, and 2 ms to send it back: you're at 4.5 ms total.

 

If you can batch that data by 10x, and your Python script takes 10x as long, then you're at 2 ms to send the data to Python, 5 ms to analyze, and 2 ms to send it back: 9 ms total. You've doubled your time, but 10x'd your throughput.
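The arithmetic above, spelled out (the 2 ms / 0.5 ms figures are the illustrative numbers from this reply, not measurements):

```python
# Overhead amortization: per-call transfer cost dominates, so batching
# 10 packets per call roughly doubles latency but 10x's throughput.

send_ms, return_ms = 2.0, 2.0   # transfer cost per call, each direction
compute_ms = 0.5                # math time per packet

per_call = send_ms + compute_ms + return_ms        # one packet per call
batched = send_ms + 10 * compute_ms + return_ms    # ten packets per call
per_packet_batched = batched / 10

print(per_call, batched, per_packet_batched)  # 4.5 9.0 0.9
```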

 

Usually you only need real-time analysis for something like a control loop where you MUST know the results of the previous calculation before you can start the next one, and doing kHz speed control loops without an RTOS is just not feasible.

 

If I had to guess, you're calling something in Python before the last task has finished, and it's throwing some sort of error.
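In code terms, the batching suggestion amounts to giving the module an entry point that takes an array rather than a scalar (a sketch; the function name, the placeholder math, and the payload format are assumptions, not the poster's actual code):

```python
# Hypothetical batched entry point for a LabVIEW Python Node call.
# LabVIEW accumulates samples for ~100 ms and passes them as one array,
# so the per-call session overhead is paid once per batch, not per sample.

def process_batch(samples):
    """Process a batch of samples and return one result per sample.

    samples: list of ints collected by LabVIEW since the last call.
    The doubling below is placeholder math standing in for the real
    per-packet processing.
    """
    return [s * 2 for s in samples]
```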

Message 2 of 3

I might be able to batch the data and send it every 100 ms or so; that's a good thought. The reason I was analyzing each data packet individually is that I needed to get the packet size by processing a header before reading the rest of the data from the COM port. I was doing that in Python as well, but that part is pretty easy, and if I shift it over to LabVIEW, then I can buffer the data for a while.
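The header-parsing step being moved to LabVIEW is presumably something small like the following (a sketch only; the thread never shows the real header layout, so the 4-byte little-endian length field here is an assumption):

```python
import struct

# Hypothetical header parse: assumes the packet header begins with a
# 4-byte little-endian unsigned length field giving the payload size.

def packet_size(header_bytes):
    """Return the payload length encoded in the first 4 header bytes."""
    (length,) = struct.unpack("<I", header_bytes[:4])
    return length
```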

Thanks!

Message 3 of 3