
VISA Set I/O Buffer Size fails with all but one value on Linux RT

And yes, Linux doesn't have a standard way to change the buffer size. There is an ioctl, TIOCSSERIAL, that supposedly sets the hardware FIFO threshold at which an interrupt is generated, but that is at most 16 bytes on standard serial port hardware. No official provision to change the OS-maintained buffer size seems to exist, aside from editing the kernel sources and recompiling your own custom kernel, but I'm pretty sure that is not something you intend to do. :)
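For what it's worth, the hardware FIFO size that TIOCSSERIAL touches can at least be read back on Linux through the companion ioctl TIOCGSERIAL. A minimal sketch in Python, with the caveats that 0x541E is the usual asm-generic/x86 ioctl number, the struct serial_struct offsets are the standard Linux layout, and the function simply returns None on targets or ports where the ioctl isn't available:

```python
import array
import fcntl

TIOCGSERIAL = 0x541E  # usual Linux ioctl number (assumption: asm-generic value)

def hw_fifo_size(port="/dev/ttyS0"):
    """Read the UART transmit FIFO size via TIOCGSERIAL.

    struct serial_struct begins with: int type, line; unsigned int port;
    int irq, flags, xmit_fifo_size; ... so the value we want is the sixth
    int. Returns None if the port cannot be opened or the ioctl fails.
    """
    try:
        with open(port, "rb", buffering=0) as f:
            buf = array.array("i", [0] * 32)  # generously sized scratch area
            fcntl.ioctl(f.fileno(), TIOCGSERIAL, buf, True)
            return buf[5]  # xmit_fifo_size
    except OSError:
        return None
```

Note that this only reports the hardware FIFO, not the OS-maintained buffer discussed above, which remains hardcoded in the kernel.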

Rolf Kalbermatter
My Blog
Message 11 of 14

Regardless of the discussion about whether buffer sizes *should* be configurable on a given target, you still have the issue that there are serial functions that will try to set one, and their default value will generate a warning/error.
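The practical workaround is to call the setter and then clear only the specific "not supported on this target" status, letting everything else propagate. A sketch in Python pseudo-style; the setter here is a stand-in for VISA's Set I/O Buffer Size call, and the warning code 12345 is a placeholder, not a real VISA status value:

```python
# Placeholder for the warning code(s) we choose to treat as benign on
# targets whose serial buffer simply isn't configurable. 12345 is a
# stand-in value, not an actual VISA status code.
IGNORABLE_WARNINGS = {12345}

def try_set_buffer(setter, size, ignorable=IGNORABLE_WARNINGS):
    """Call a buffer-size setter and suppress only the known-benign warning.

    'setter' returns a status code following the usual convention:
    0 = success, > 0 = warning, < 0 = error.
    """
    status = setter(size)
    if status in ignorable:
        return 0  # target has no configurable buffer: carry on cleanly
    return status  # genuine warnings and errors pass through unchanged

print(try_set_buffer(lambda n: 12345, 4096))  # benign warning cleared -> 0
print(try_set_buffer(lambda n: -7, 4096))     # real error propagates -> -7
```

In LabVIEW terms this is the familiar "clear specific error" pattern applied right after the buffer-size node.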

 

I would say the best way to handle that is either to make the default a value that will not generate any errors or warnings, or to have the default adjust to the target when the code is targeted at such a device. As it is now, the chosen default (4096) has the side effect that you are guaranteed to end up having to figure out that your serial interface does not have a configurable buffer. That might be viewed as a good thing rather than a bad thing, but I do not think it is in line with the general LabVIEW philosophy (the philosophy would then have to be that *all* defaults should be chosen to be borderline, to force users to pay attention to any potential issues of not having specified a value).

Message 12 of 14

Incidentally, the buffer size used in standard Linux kernels for both input and output buffers is 4096. :)

But since that value cannot even be queried in any official way, let alone changed, because it is hardcoded in the sources, I'm not entirely sure how VISA could best handle it. Assuming that a default of 4096 would do the right thing is at least questionable, since anybody can compile their own kernel and then VISA would be lying about what it does. The choice to generate a warning is IMO a pretty sensible one, at least if you use the LabVIEW error cluster in the way it is meant.

 

Except in very specific cases where you know that a function will always return a certain error code and never just a warning, you really need to evaluate the status boolean first. VISA specifically has many operations that can return warning codes, for instance the "warning" 1073676294: "The number of bytes transferred is equal to the requested input count. More data might be available." For VISA at least, the error code is also indicative in itself: positive values are warnings, negative values are errors. But that is a VISA-specific attribute of the error code and doesn't apply to many other error codes, including the standard LabVIEW errors between 1 and 5000. The boolean status, however, is a globally valid indicator of error or not, with the exception of some hacked code maybe.
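The sign convention reduces to a tiny helper. A sketch, assuming VISA's 32-bit status values are interpreted as signed integers: 0x3FFF0006 is the 1073676294 completion code quoted above, and the negative example is 0xBFFF0015 (the VISA timeout error, VI_ERROR_TMO, to the best of my recollection) viewed as a signed 32-bit value:

```python
VI_SUCCESS_MAX_CNT = 0x3FFF0006  # 1073676294, the warning quoted above
VI_ERROR_TMO = 0xBFFF0015 - (1 << 32)  # timeout error as a signed 32-bit int

def classify_visa_status(status):
    """VISA convention: negative = error, positive = warning, zero = success.

    This sign rule is specific to VISA status codes. Standard LabVIEW error
    codes do not follow it, which is why in general code you still have to
    check the error cluster's status boolean first.
    """
    if status < 0:
        return "error"
    if status > 0:
        return "warning"
    return "success"

print(classify_visa_status(VI_SUCCESS_MAX_CNT))  # warning
print(classify_visa_status(VI_ERROR_TMO))        # error
print(classify_visa_status(0))                   # success
```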

 

You can agree or disagree that functions should return warning codes, but the fact is that the VISA API has existed for almost 20 years now, has behaved this way all that time, and is written in stone in the VXIplug&play specification papers, so this will not change anytime soon. VISA is by far the most strictly specified API NI has ever implemented, because there is an actual standards body behind it. Yes, it is heavily influenced by NI, but you can nevertheless consider the VISA specification a full-blown industry standard, backed and supported by many more parties than just NI, and they would get very upset if NI changed its implementation now.

 

Theoretically VISA could add another buffer layer on top of the OS buffer, managed by VISA itself, and let this function work as if nothing were amiss. Practically that has quite a few implications, such as performance overhead and extra complexity that could make implementing certain low-latency protocols completely impossible.
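In outline, such an extra layer could look like the sketch below. To be clear, this is a hypothetical illustration, not how VISA is implemented: `BufferedSerial` is an invented name, and the deque's maxlen stands in for the configurable buffer size. It also shows the cost mentioned above, since every byte now makes an extra hop through a user-space copy:

```python
import io
from collections import deque

class BufferedSerial:
    """Hypothetical user-space buffer stacked on top of an OS-level handle.

    'raw' is anything with a read(n) method returning bytes. Once the deque
    is full, the oldest bytes are silently dropped; that is just one of
    several overflow policies an implementation could choose.
    """
    def __init__(self, raw, size=4096):
        self.raw = raw
        self.buf = deque(maxlen=size)

    def fill(self):
        # Drain whatever the OS layer currently holds into our own buffer.
        self.buf.extend(self.raw.read(4096))

    def read(self, n):
        # Extra copy: OS buffer -> our deque -> the caller's bytes object.
        return bytes(self.buf.popleft() for _ in range(min(n, len(self.buf))))

# Demonstration with an in-memory stand-in for the OS serial handle:
port = BufferedSerial(io.BytesIO(b"hello world"), size=4)
port.fill()          # only the newest 4 bytes survive: b"orld"
print(port.read(2))  # b'or'
```

Every read going through this path is an extra memory copy and an extra layer of locking in a real implementation, which is exactly where the low-latency concern comes from.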

Rolf Kalbermatter
My Blog
Message 13 of 14

In this case VISA is "lying" (to use your phrase, Rolf) anyway, because it will not generate a warning if you set the value to 0. So to my mind the expected behaviour would be for the default to be whatever is least likely to generate warnings or errors, which in this case is 0 (or at least 0 is treated as OK here, presumably interpreted as "ignore operation"), not 4096, even though 4096 then probably matches what you are actually getting.

 

Due to backwards compatibility I guess the default will never be changed to 0 now, though.

 

 

Rolfk wrote:
"Incidentally, the buffer size used in standard Linux kernels for both input and output buffers is 4096. :)

But since that value cannot even be queried in any official way, let alone changed, because it is hardcoded in the sources, I'm not entirely sure how VISA could best handle it. Assuming that a default of 4096 would do the right thing is at least questionable, since anybody can compile their own kernel and then VISA would be lying about what it does. The choice to generate a warning is IMO a pretty sensible one, at least if you use the LabVIEW error cluster in the way it is meant."

Message 14 of 14