I am hitting this problem:
The problem was handled and discussed for RT devices with LabVIEW here:
However, for the Raspberry Pi I could not find an lvrt.conf file.
Does anyone know whether this can be solved on the Raspberry Pi?
Could you describe more of what you are trying to achieve and exactly what you were doing before this error occurred? What are your versions of LabVIEW and LINX? What OS are you using? Which Raspberry Pi are you using? Are you trying to connect via Wi-Fi or Ethernet? Please attach the code that is giving you trouble/errors.
It will help us troubleshoot.
I have 15 sensors (an IoT-like application), each connected to its own ESP32 Feather module. They all connect over Wi-Fi (TCP) to a gateway that collects the sensor values. The gateway is a Raspberry Pi 4B.
On the ESP32 side, Wi-Fi is implemented as a TCP client. The server (the Raspberry Pi running the LabVIEW code) has a static IP and a fixed, open port number. The clients are more dynamic: their port numbers can change, and their IP addresses are assigned dynamically by the Wi-Fi router.
On the gateway I have the following LabVIEW code (Community Edition, 2020):
Communication and data exchange work well between the ESP32 and the Raspberry Pi for the first sensor. When I switch to the next sensor, I close the first sensor's connection with TCP Close. When I then try to listen for the second sensor's connection, error 60 pops up. As described above, the issue is that LabVIEW waits for the closed connection to terminate properly, making sure all data has been received or transmitted. (There is no flush-buffer function available, as far as I know.) This long wait is a problem for my application. On NI's proprietary hardware there is an lvrt.conf file where the waiting time can be adjusted. How can I work around this problem using TCP on a Raspberry Pi?
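The delay described above is most likely TCP's TIME_WAIT state on the just-closed connection, which blocks rebinding the same port for a while. LabVIEW's TCP primitives don't expose raw socket options as far as I know, but at the socket level the standard workaround is SO_REUSEADDR. A minimal Python sketch of that idea (Python only as a textual stand-in, since LabVIEW code is graphical):

```python
import socket

def make_listener(port):
    # SO_REUSEADDR lets the port be rebound immediately, even while the
    # previous connection on it is still lingering in TIME_WAIT.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

srv = make_listener(0)                 # port 0: let the OS pick a free port
port = srv.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()
conn.close()                           # server closes first -> TIME_WAIT
client.close()
srv.close()

# Thanks to SO_REUSEADDR, rebinding the same port works right away
# instead of waiting for TIME_WAIT to expire.
srv2 = make_listener(port)
rebound = srv2.getsockname()[1] == port
srv2.close()
```

Without the setsockopt line, the second bind can fail with "address already in use", which is the socket-level equivalent of LabVIEW's error 60.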
The lvrt.conf trick most likely won't work because, technically, LabVIEW on the Raspberry Pi is not LabVIEW Real-Time but simply LabVIEW for Linux compiled for the ARMv7 architecture. This is only logical, since the underlying OS is not NI Linux RT but a normal Debian-based Linux.
So if there is any magic ini-file setting, it certainly won't be in lvrt.conf.
The big question to me, however, is why you need to close the listener to open a new connection to a client. A listener is typically left open, and Wait on Listener is called in a loop to accept new incoming connections. There is no need to close and reopen the listener continuously to handle new connections. In fact, if you write it in a reentrant way, you can keep a loop waiting for connections and, on each new incoming connection, hand that connection off to a different worker loop or instantiate a reentrant VI to handle it until it is closed by either the server or the client side. That way you could even handle all sensors in parallel.
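The pattern above (one long-lived listener, one worker per connection) can be sketched in Python as a conceptual stand-in for the LabVIEW diagram; the echo handler is a placeholder for whatever the reentrant VI would do with the sensor data:

```python
import socket
import threading

def handle_client(conn, addr):
    # Worker: service one sensor connection until the peer closes it.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)   # placeholder: echo instead of real processing

def accept_loop(listener):
    # The listener stays open for the whole lifetime of the server;
    # each accepted connection is handed off to its own worker thread
    # (the analogue of a reentrant VI), so sensors are served in parallel.
    while True:
        try:
            conn, addr = listener.accept()
        except OSError:          # listener was closed: shut down
            break
        threading.Thread(target=handle_client, args=(conn, addr),
                         daemon=True).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen()
port = listener.getsockname()[1]
threading.Thread(target=accept_loop, args=(listener,), daemon=True).start()

# Two "sensors" stay connected at the same time; neither blocks the other.
c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
c1.sendall(b"sensor-1")
c2.sendall(b"sensor-2")
r2 = c2.recv(1024)
r1 = c1.recv(1024)
c1.close(); c2.close(); listener.close()
```

The key point is that the listener is never closed between connections, so the error-60 situation never arises.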
Thank you for your feedback. I agree with your strategy of keeping the connection open for as long as data is sent. One question remains: how does an unexpected disconnection work on the LabVIEW server side? Will the whole setup go down, or can I use error handling in the loop for the specific ESP32 to reinitiate the communication? In other words: one sensor disconnecting will not disconnect all the others in the loop?
Thanks for your help
Typically, connections are self-contained. When the server sees an error on a connection (other than a timeout on read, which is a valid and expected error condition), it simply closes that connection and lets the client reconnect; the new connection arrives and is handled like any other. Of course, you will likely want some mapping of connections to specific sensors, but I would guess that your sensor has some "Who are you" command it can respond to, or it could include this information in the initial "Hello" message it sends to the server. The connection handler then processes that information and uses it to send the data to a central data collector, which distributes it to display clients, disk or database storage clients, or whatever else, through queues or similar. If the actual data processing is very trivial, such as only streaming the values periodically to a file, you could include that in the data collector, but distributing it is a more flexible architecture that will let you add extra processing options more easily later on.
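A rough Python sketch of that handler shape (the JSON "Hello" format and field names are made up for illustration; a real ESP32 would send whatever protocol you define): the first message identifies the sensor, every later message is forwarded to a central collector queue, and a disconnect only ends this one handler.

```python
import json
import queue
import socket
import threading

reports = queue.Queue()   # central data collector; consumers (display,
                          # file/database storage) would read from this

def handle_sensor(conn):
    # First line is the sensor's "Hello" identifying itself; every
    # following line is a reading forwarded to the collector.
    # An error or disconnect only ends THIS handler; all other
    # sensor connections are unaffected.
    with conn, conn.makefile("r") as f:
        hello = json.loads(f.readline())        # e.g. {"id": "sensor-07"}
        sensor_id = hello["id"]
        for line in f:
            reading = json.loads(line)
            reports.put((sensor_id, reading["value"]))

# Simulate one sensor with a local socket pair.
server_side, sensor_side = socket.socketpair()
t = threading.Thread(target=handle_sensor, args=(server_side,))
t.start()
sensor_side.sendall(b'{"id": "sensor-07"}\n{"value": 21.5}\n')
sensor_side.close()                             # "unexpected" disconnect
t.join()
```

When the sensor reconnects later, it simply sends its "Hello" again and a fresh handler takes over, which is exactly the per-connection recovery the question above asks about.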
This all goes in the direction of an actor-based system, but I would refrain from using the Actor Framework for this unless you envision it growing into a really big system, or you just want to get your hands dirty with the Actor Framework in preparation for another really big system.