Hi, are there any widely used methods or tricks to make a TCP/IP communication protocol bulletproof in LabVIEW/Python?
I have heard of a method based on incrementing a counter in the data that is sent and received, but I have no source for it. Any other ideas are appreciated as well.
Your question is far too vague to answer. What do you want to "bulletproof"? The data protocol? Auto-reconnect after connection errors? The TCP/IP primitives in LabVIEW work quite well and are very reliable in and of themselves. You can certainly write bad code that makes your communications unstable. Likewise, it is possible to write very solid code that will run reliably 24/7/365.
You are right, let me explain my problem a little bit. Yes, I mean the data protocol and handling possible errors... but also:
In my application I have two LabVIEW VIs and a Python script that communicates with each of them individually over TCP. Python sends a set of data to the first VI, which runs an experiment. The second VI is an image-analysis VI that is triggered by a camera controlled by the first VI. The second VI performs the image analysis and sends a set of new data back to Python over TCP. So if the trigger fires unexpectedly, or fails to fire when desired, what Python receives from the second VI is not reliable at all.
A possible solution might be to establish a TCP connection between the VIs, or to increment a counter in the data being sent and received somehow?
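The "incrementing" idea can be sketched in Python as a per-message sequence counter plus a length prefix, so the receiver can detect dropped, duplicated, or out-of-order messages. This is only an illustration of the technique, not a standard protocol; the JSON field names and the 4-byte framing are assumptions:

```python
import json
import struct

def pack_message(seq, payload):
    """Frame a message as a 4-byte big-endian length prefix plus a JSON
    body carrying a monotonically increasing sequence number."""
    body = json.dumps({"seq": seq, "data": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def unpack_message(frame, expected_seq):
    """Strip the length prefix, parse the JSON body, and verify that the
    sequence counter matches what the receiver expects next."""
    (length,) = struct.unpack(">I", frame[:4])
    body = json.loads(frame[4:4 + length].decode("utf-8"))
    if body["seq"] != expected_seq:
        raise ValueError(f"expected seq {expected_seq}, got {body['seq']}")
    return body["data"]
```

Both the Python side and the LabVIEW side would keep their own counters and raise an error on any mismatch.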
I am working on a project where I try to optimize and automate an experiment. In detail, by using a Python code via TCP communication, I send a set of input parameters to a LabVIEW VI #1 that controls and executes the experiment.
After the experiment is concluded, another LabVIEW application (LabVIEW VI #2) takes an image of the experimental output, performs image analysis on it, and sends a few parameters (derived from the image) back to the Python code via TCP communication.
This three-way loop seems to be working fine right now, and the goal is to optimize the experimental output by finding and feeding better input parameters into the experiment. The plan is to do this optimization with an optimization package in Python.
The communication between these two LabVIEW VIs (the one that controls the experimental input parameters and the one that does the image analysis in the end) is based on the trigger of a camera that is located in the experimental setup.
So when the camera is triggered by LabVIEW VI #1, LabVIEW VI #2 acknowledges it, does the image analysis on the experimental output, and then sends a few image parameters back to Python via TCP.
Python communicates with these VIs via different sockets: Python-->LV #1 uses one port, LV #2-->Python uses another.
Now, I am trying to make this loop as robust/bulletproof as possible and the biggest challenge seems to be accounting for a scenario that the trigger doesn't work as expected.
For instance, if the camera is accidentally triggered while the experiment is not yet concluded, LabVIEW VI #2 would still take an image (an undesired one) and send wrong image-analysis parameters back to Python.
Then this would mess up the entire optimization loop.
So I guess the problem is to find a way to associate the experimental input parameters (controlled by LV #1) with the corresponding output parameters (obtained by LV #2) and to make sure they actually correspond to each other.
That way the Python side knows that, given the input it sent, what it receives from the other side is THE matching output.
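One way to make that association concrete on the Python side is to tag each run with a unique ID, send the ID along with the input parameters, and only accept a result that carries the same ID back. This is a hypothetical helper class sketching the idea, not part of any existing API:

```python
import itertools

class ExperimentTracker:
    """Pair each set of input parameters with a unique run ID so the
    result coming back over TCP can be matched unambiguously."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._pending = {}   # run_id -> input parameters

    def start_run(self, params):
        """Register a run; send the returned ID to VI #1 with params."""
        run_id = next(self._ids)
        self._pending[run_id] = params
        return run_id

    def accept_result(self, run_id, result):
        """Return (params, result) if the ID matches a pending run,
        otherwise None -- i.e. discard it as a spurious trigger."""
        params = self._pending.pop(run_id, None)
        if params is None:
            return None
        return params, result
```

For this to work end to end, VI #1 would have to pass the run ID through to VI #2 (or embed it in the trigger flow) so that VI #2 can echo it back with the image parameters.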
I am thinking about maybe establishing a TCP communication between LV #1 and LV #2 (in addition to the trigger system we currently have) or maybe add a third LV #3 that TCP communicates with both LV VIs in a sequential manner (takes in the input parameters from LV #1 and output parameters from LV#2 and then sends this bundle to Python).
But I am having a hard time imagining the details of an architecture that ensures the input parameters are associated with the corresponding output parameters in the first place. So to speak, this scenario doesn't solve the problem of accidental triggering. I could put a timeout on the duration of the experiment (around 20 s): if the trigger fires sooner than that, we would know it is accidental. But if it is not triggered at all, I am not sure how to handle that case.
This sounds unnecessarily complicated. Are these two VIs in two different applications, as you seem to indicate at some point, or in the same one?
Personally, I think I would put it all in one application with three quasi-independent entities/processes that communicate with each other through queues: one is the TCP/IP server, one is the experiment preparator, and one is the experiment executor.

Your problem is not really about reliable TCP communication, but rather about synchronizing your preparator task and your executor task not only through your single trigger signal but also with each other. The preparator needs to be able to flag the executor that a trigger is expected within x seconds, and the executor needs to ignore any trigger otherwise. You could of course add another TCP/IP link between the two, but if they are located in the same application you are better off using queues or similar.
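The "flag the executor" idea can be illustrated with a small arming gate. The Python sketch below uses shared state standing in for the queue or notifier a LabVIEW implementation would use; the class name and window mechanism are assumptions made for illustration:

```python
import time

class TriggerGate:
    """Only pass triggers that fall inside an explicit arming window.

    The preparator calls arm() just before starting an experiment; the
    executor calls trigger() when the camera fires and ignores any
    trigger that arrives while the gate is not armed.
    """

    def __init__(self):
        self._armed_until = 0.0

    def arm(self, window_s):
        """Preparator: declare that a trigger is expected within window_s
        seconds from now."""
        self._armed_until = time.monotonic() + window_s

    def trigger(self):
        """Executor: return True only if the trigger was expected.
        A valid trigger consumes the arming, so a second camera pulse in
        the same window is also rejected."""
        if time.monotonic() <= self._armed_until:
            self._armed_until = 0.0
            return True
        return False
```

In LabVIEW the same pattern maps naturally onto a notifier or single-element queue: the preparator enqueues an "armed" token, and the executor's trigger handler only proceeds if it can dequeue one.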
Yes, the VIs are in different applications.
"So the preparator needs to be able to flag the executor that there will be a trigger within an x amount of seconds and the executor needs to ignore any trigger otherwise." Yes, exactly. Can you elaborate on how I can do that?