LabVIEW Idea Exchange


In the drivers (DAQmx, NI-Scope/FGEN, ...):

If a value (usually a DBL) enters a config VI, it can be coerced by the driver.

That VI should directly output the actual value.

 

Most often seen in the forum: a user configures an invalid DAQmx Timing sample clock and wonders why the waveform data is not what they expected.

Yes, RTFM helps: you can read the properties back after configuration, but it would be easy to add an output to the config VI with the actual (possibly coerced) value, making the user aware of that 'feature' (instead of popping an error 😉).

 

This should not only cover sample rate but also other values such as FGEN standard function parameters (frequency, phase), actual range, ...
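
The read-back workaround mentioned above already exists in the NI-DAQmx C API, and it shows what the proposed VI output would carry; a minimal sketch (error handling elided), assuming a task with a configured AI channel:

```c
#include <NIDAQmx.h>
#include <stdio.h>

/* Configure a sample clock rate, then read back the actual
   (possibly coerced) rate the driver chose. */
void show_coerced_rate(TaskHandle task)
{
    float64 requested = 1.234567e6, actual = 0.0;

    DAQmxCfgSampClkTiming(task, "", requested, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxGetSampClkRate(task, &actual);   /* the coerced value */

    printf("requested %.3f Hz, got %.3f Hz\n", requested, actual);
}
```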

So one can set many of the standard properties for VISA connections. However, the serial port latency setting is not one of these. It can easily be set by a standard POSIX ioctl call on both Linux and macOS (who the heck knows how Windows does it). Many character-based devices can be manipulated through these ioctl calls.

 

To be specific, the FTDI USB-serial driver chips used in MANY devices as a USB interface or just as a serial port have a slow default latency that is fine for preserving CPU cycles but not great for modern high-speed custom communication.

 

1. Just add a property that sets the serial port latency using an ioctl IOSSDATALAT property configuration.

 

2. More generally, allow an interface that will call ioctl with a supplied property, so the LV programmer can pass the correct property configuration.

 

3. Less satisfying, but possibly easier to implement, is to have a VISA property expose the file descriptor number. This simplifies the current workflow, where one has to get the "Name" property, search the process information table for that "Name" (really a path), and only then extract the file descriptor and set the latency for that device.
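
For reference, option 1's underlying call is tiny on macOS; a minimal sketch using the IOSSDATALAT ioctl (macOS-only header, assuming an already-open serial port descriptor):

```c
#include <sys/ioctl.h>
#include <IOKit/serial/ioss.h>   /* macOS-only: defines IOSSDATALAT */

/* Ask the serial driver to deliver received bytes after at most
   'latency_us' microseconds of buffering. */
int set_serial_latency(int fd, unsigned long latency_us)
{
    return ioctl(fd, IOSSDATALAT, &latency_us);
}
```

On Linux, the rough equivalent for FTDI devices is the latency_timer sysfs attribute, or the ASYNC_LOW_LATENCY flag set via the TIOCSSERIAL ioctl.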

 

This idea was submitted by a user who requested I post this for them.

 

As of now, the VISA resource controls only allow you to select resource names without their full Windows description. You can select individual COM ports (COM3, COM5, etc.), or pick from a list of alias names if you've defined aliases for your COM ports. But it might be nice to give the user a configurable option that provides the additional descriptive information you can find in Windows Device Manager. This would allow novice users to select the desired COM port based on the actual physical layer needed for the application. Again, I'm pretty sure you can work around this by reviewing the different COM ports in Measurement & Automation Explorer, or even creating your own aliases to surface the additional information. But if I'm creating an executable to be used on different systems by novice users, I may not want them to have to go into MAX to properly identify their desired port.

 

So, instead of asking the user to select a COM port from a list of items looking like this...

[image: travisferguson_0-1641925866067.png]

 

Maybe give an option in a property page for the VISA Resource Control that might look like this (this is a mock-up)...

 

[image: travisferguson_1-1641925926951.png]

so that an operator can pick from a more descriptive list like what you see in the Windows Device Manager...

[image: travisferguson_2-1641925994204.png]
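
For what it's worth, the friendly names shown in Device Manager are retrievable with a few SetupAPI calls, so a property page could populate such a list; a minimal Win32 sketch (the helper name is mine):

```c
#include <windows.h>
#include <initguid.h>
#include <ntddser.h>      /* GUID_DEVINTERFACE_COMPORT */
#include <setupapi.h>
#include <stdio.h>

/* Print the Device Manager friendly name of every present COM port,
   e.g. "USB Serial Port (COM3)". Link against setupapi.lib. */
void list_com_ports(void)
{
    HDEVINFO devs = SetupDiGetClassDevsA(&GUID_DEVINTERFACE_COMPORT, NULL, NULL,
                                         DIGCF_PRESENT | DIGCF_DEVICEINTERFACE);
    SP_DEVINFO_DATA info;
    info.cbSize = sizeof info;

    for (DWORD i = 0; SetupDiEnumDeviceInfo(devs, i, &info); i++) {
        char name[256];
        if (SetupDiGetDeviceRegistryPropertyA(devs, &info, SPDRP_FRIENDLYNAME,
                                              NULL, (PBYTE)name, sizeof name, NULL))
            printf("%s\n", name);
    }
    SetupDiDestroyDeviceInfoList(devs);
}
```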

 

Thank you,

 

Although it appears that the 2020 version of the Block Diagram Cleanup Tool (BDCT) does do a better job than its predecessors, I would still say that the BDCT's results are less than optimal. Most LabVIEW Idea Exchange posts concerning the BDCT talk about label positioning and alignment. Here I would like to focus on the issue of horizontal expansion of the block diagram and a holistic view of which LabVIEW features contribute to that effect.

 

Like most programmers, when developing a VI block diagram, I try to keep the diagram no more than one screen wide. I have learned a few tricks over the years that help manage horizontal expansion, such as resizing an object's label so that the words appear on multiple lines before running the BDCT; this allows for some compression horizontally and some growth vertically to compensate. Horizontal expansion of the block diagram is certainly expected to some extent because data flow happens left to right, of course; but it would seem logical to incorporate that knowledge into the BDCT algorithm and find ways to adjust the spacing so that the rearrangement creates a bit more vertical expansion and less horizontal expansion—while still satisfying the usual criteria such as no backwards-running wires.

 

Because data flow is horizontal, to help the BDCT work better, NI may want to think about what visual features—other than left-to-right data flow—contribute to an unnecessarily wide block diagram. I seldom have an issue with a block diagram becoming too tall. Admittedly, poor programming technique can result in too-wide block diagrams, but let's look at a few other things. What elements of LabVIEW's block diagram unnecessarily consume width? Here are a few that I could think of:

 

  1. Long names in bundle/unbundle cluster objects, property/invoke nodes, enum/ring constants, local variables, formula nodes, etc. – Most LabVIEW-supported languages read horizontally, left to right. Long names, especially when using nested clusters, take up horizontal space. I like being able to use long names; they help the code to be more self-documenting. If those types of objects supported word wrap, that would help conserve width.
  2. Expression nodes and multi-digit constants – no word-wrap available here either.
  3. Named terminals of timing and event structures – They are what they are; you cannot remove all of the ones you don't use, so they take up space horizontally.
  4. Some native functions – There are some LabVIEW native function icons that are wider than they are tall. Some examples: In-Range/Coerce and Initialize Array.
  5. Shift registers – The effect is subtle, but shift registers are wider than they are tall. Do they have to be that way?

To fully tackle this issue, I believe you have to look at things holistically and not just at potential improvements to the BDCT. Recognize that data flow is left-to-right and that long text names are a given; you can't really do anything about that. However, there are other things that could be done feature-wise in LabVIEW to help compensate for width-wise block diagram growth.

How is NI-MAX still so bad after all these years? It goes completely unresponsive and crashes at the slightest provocation. This feels like it should be NI's bread-and-butter.

 

Every LabVIEW team ends up recreating so much of the functionality that NI-MAX is supposed to offer out of the box. Maybe with the discontinuation of NXG, some resources should be allocated to making this product usable.

 

[image: TurboPhil_1-1629324790878.png]

 

 

KB articles like this and this probably shouldn't need to exist.

Currently, the only way to set or modify TCP socket options is by directly calling a system library, as done for example in this post.

 

This not only makes the code difficult to understand ("what does that library call do again?") but also poses problems when you want to use your code on different operating systems: currently the only way is to use Conditional Disable structures, and even then LabVIEW still tries to load the code intended for a different operating system...

 

LabVIEW should have a standard way to set socket options from LabVIEW code, at least for the most important options (Nagle's algorithm leaps to mind...). This could be done either as additional inputs to the TCP Open Connection VI or (much better) via property nodes for TCP connections.
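
Until such a property node exists, the system-library detour looks roughly like this; a minimal C sketch (the raw socket ID would come from the OS socket handle of the connection) that disables Nagle's algorithm:

```c
#ifdef _WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#endif

/* Disable Nagle's algorithm on a connected TCP socket.
   Returns 0 on success, -1 (or SOCKET_ERROR) on failure. */
int disable_nagle(int sock)
{
    int flag = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&flag, sizeof flag);
}
```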

 

It would be really nice if we could build PPLs for multiple targets. The LabVIEW Help states:

 

"If a VI calls a packed project library compiled for one target and you open the VI on another target that has a different operating system, the packed project library fails to load."

 

We should be able to choose which targets' compiled code the builder includes in the PPL. This would make the use of PPLs in a multi-target system so much better.

Problem

Many times, the bulk of LabVIEW development happens on computers that will never interface with hardware. A dozen engineers may be collaborating on code that will ultimately run on a dedicated, connected machine somewhere. Yet, as things currently are, I have to install more than I need on my development machine to get access to the API VIs. If I am working on my laptop on an application with DAQ, RF, spectrum analyzer, etc. components, I have to choose between downloading and installing all of that, or dealing with missing VIs and broken arrows. This seems needless, since my particular machine will never actually interface with the hardware.

 

Idea

I would like to have the option to install only the LabVIEW VIs and ignore the driver itself. In many, if not most cases, the LabVIEW API could be independent of driver version. It could install very quickly, since it would just be a set of essentially no-op VIs. I don't care that the VIs would do nothing. They would just be placeholders for my development purposes. This would allow me to have full API access to develop my code without having to carry around large driver installations that I will never actually use.

If we had instrument simulator VIs, we could test our code prior to acquiring the actual hardware.

 

The idea proposes that device driver packages from IDNet include a vendor instrument simulator VI that can be run prior to code testing. The VI would be configurable (VISA based on virtual ports created at runtime, file paths to read from to simulate measurements and responses to messages, etc.) and started asynchronously (call & forget) at the start of the program, terminating when the caller terminates.

 

Alternatively, it could be provided as a template under "New > Simulated Instrument" that the user can modify to match the instrument they are going to use.

 

...and it should cover both NI and third-party instruments.

The current way to bind many shared variables to an OPC client is by browsing a tree and selecting items.

This is very time-consuming when you have hundreds or thousands of tags to bind, especially if the tags are not all in the same path.

A better way would be to bind the variables by plain text.

Example: We will insert the following text

A1M01Z1:value

A2M01Z1:value

A3M01Z1:value

 

and LabVIEW will automatically create three variables bound to those addresses (preferably with the same names). Many OPC servers support this type of addressing.

Note that the true path of A1M01Z1 could be something very long, like:

My Computer\OPC ABB.lvlib\_OPC1\[Control Structure]\Root\NETWORK 1\Nodecm3\Extended Process Objects\MB300 AI\A1M01Z1\VALUE

 

This way you can add thousands of items in minutes. It is quite easy for the R&D team to implement and will help many professional engineers.

Most probably this idea will not collect many kudos, but I think R&D should consider implementing it.

 

(This was discussed with NI technical support, Reference #3279019.)

 

Thank you all for reading.

Hello forum

 

Wouldn't it be nice if we could add Windows 10 IoT Enterprise PCs as embedded targets? We could build a VI executable and set it as a startup program, and once deployed, the target would be automatically configured to launch the startup program with Embedded Enabling Features (EEF), Enhanced Write Filter (EWF), Hibernate Once, Resume Many (HORM), and File Based Write Filter (FBWF) on a different volume, as presented in the NI Week TS2361 & TS8562 slides.

 

Thanks

With the advent of the IoT, the growing need to synchronize automation applications, and other TSN (Time-Sensitive Networking) use cases, UTC (Coordinated Universal Time) is becoming more and more problematic. Currently there are 37 seconds not accounted for in the timestamp, which is stored in UTC. The I64 portion of the timestamp datatype, which holds the number of seconds elapsed since 00:00:00 Dec 31, 1900 GMT, simply ignores leap seconds. This is consistent with most modern programming languages and is not a flaw of LabVIEW per se, but it isn't quite good enough for where the future is heading. In fact, there is a joint IERS/IEEE working group on TSN.

 

Enter TAI, or International Atomic Time: TAI has the advantage of being contiguous, and it is based on the SI second, making it ideal for IA applications. Unfortunately, a LabVIEW timestamp cannot be formatted in TAI: entering a time of 23:59:60 31 Dec 2016, a real second that did occur, is not allowed. IERS Bulletin C is published to give the current UTC-TAI offset, but implementing the lookup requires extensive code, and the text just won't display properly in a %<>T or %^<>T (local absolute-time container and UTC absolute-time container). We need a %#<>T TAI time container format specifier. (Or soon will!)
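
The Bulletin C lookup itself boils down to a small table search; a minimal sketch (hypothetical table layout, only the two most recent leap seconds shown) of a UTC-to-TAI conversion:

```c
#include <stddef.h>
#include <stdint.h>

/* Leap-second table from IERS Bulletin C: the UTC instant (seconds since
   1970) at which TAI-UTC changed, and the new offset in seconds. */
typedef struct { int64_t utc_s; int tai_minus_utc; } LeapEntry;

static const LeapEntry kLeaps[] = {
    /* ... entries for 1972-2015 elided ... */
    { 1435708800LL, 36 },   /* 2015-07-01 */
    { 1483228800LL, 37 },   /* 2017-01-01, still current at 37 s */
};

/* Convert a UTC timestamp to TAI by applying the applicable offset. */
int64_t utc_to_tai(int64_t utc_s)
{
    int offset = 35;        /* offset in force just before the first entry */
    for (size_t i = 0; i < sizeof kLeaps / sizeof kLeaps[0]; i++)
        if (utc_s >= kLeaps[i].utc_s)
            offset = kLeaps[i].tai_minus_utc;
    return utc_s + offset;
}
```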

If you are using TCP to communicate with a different code environment, you may want to set some of the socket options. For example, for responsive control you will want to disable Nagle's algorithm. There is currently no obvious or easy way to do this. TCP Get Raw Net Object.vi in <vi.lib>\utility\tcp.llb will provide the raw socket ID, but you then need to call setsockopt() on your particular platform using a Call Library Function Node. You can do this with the code provided here. A much better way would be a property node on the TCP reference that allowed you to set and query the options directly.
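
To make the Call Library Function Node route at least reusable, one can compile a tiny generic wrapper (the function name is mine) and pass the raw socket ID plus the desired option:

```c
#ifdef _WIN32
#include <winsock2.h>
#else
#include <sys/socket.h>
#endif

/* Generic integer socket-option setter, callable from a Call Library
   Function Node with the ID from TCP Get Raw Net Object.vi.
   Example: lv_setsockopt_int(sock, IPPROTO_TCP, TCP_NODELAY, 1); */
int lv_setsockopt_int(int sock, int level, int optname, int value)
{
    return setsockopt(sock, level, optname,
                      (const char *)&value, sizeof value);
}
```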

The VISA test panel is a very valuable tool for troubleshooting instrument connectivity issues.

 

This used to be included with the VISA runtime, or at least with any installer that also included the VISA runtime.

 

Now I have to separately download and install the FULL VISA just to get this valuable tool. 

 

That makes installing a LabVIEW executable a multistep process as now I have to run two different installers. 

 

NI-MAX and the VISA test panel should ALWAYS BE included in any installer that includes the VISA runtime.

When performing a single-point read on an XNet session, you will receive the value of the signal that was last read, or the default value as defined by the database if it has never been read.

 

This type of functionality is sometimes useful, but more often I'm interested in knowing what the last reading was, provided the reading is relatively recent. The problem with the NI implementation is that you have no way of knowing (with this one session type) whether the device is still broadcasting data. I think a useful property would be a way of specifying how long a signal's value remains valid. If I haven't seen an update to a signal in, say, 2 seconds, I can likely assume my device is no longer communicating, and the reading I get back from the XNet read should return NaN.

 

I had a small discussion on this topic and posted a solution using an XY session type here, which demonstrates the functionality I am talking about.  I'd like this to be built into the API.
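
The proposed behavior is easy to pin down; a minimal sketch (hypothetical helper, not an existing XNet call) of the staleness check the single-point read could apply:

```c
#include <math.h>

/* Return the last-known signal value only while it is fresher than
   'timeout_s'; otherwise return NaN so the caller can tell that the
   device has stopped broadcasting. */
double signal_value_with_timeout(double last_value, double age_s,
                                 double timeout_s)
{
    return (age_s <= timeout_s) ? last_value : NAN;
}
```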

What I propose is to have functionality built into the XNet API that is similar to the DAQmx API. I'd want a separate subpalette with functions like Start and Stop Logging that log the raw data from an XNet interface into a TDMS file. Maybe even some other functions, like forcing a new file, or starting a new file once the current file reaches a certain age or size. On buses that are heavily loaded, reading every frame and then logging it can use a non-trivial amount of resources, and having this built into the API would likely be more efficient.

 

XNet already has a standard for how to read and write raw frames into a TDMS file that is compatible with DIAdem, and has several shipping examples with LabVIEW to log into this format.

Recently, user cprince inquired why all of the possible Mouse Button presses were not available in LabVIEW.  In the original post, it was not clear whether this referred to "normal" mouse functions (as captured, say, by the Event structure) or whether this referred to the Mouse functions on the Input Device Control Palette.  The latter has a Query Device, which shows (for my mouse on my PC) that the Mouse has 8 buttons, but the Acquire Input Data polymorphic function for Mouse input returns a "cluster button info" with only four buttons.

 

The "magic" seems to happen inside lvinput.dll.  If, as seems to be the case, the DLL can "see" and recognize 8 Mouse Buttons, it would be nice if all 8 could be returned to the user as a Cluster of 8 (instead of 4) Booleans.

 

Bob Schor

There are currently two NI toolkits which add a software layer on top of the automotive CAN bus.  

 

The Automotive Diagnostic Command Set (ADCS) adds a couple of protocol standards like KWP2000 (ISO 14230), Diagnostics on CAN (ISO 15765, OBD-II), and Diagnostics over IP (ISO 13400). This is a pretty handy API when all you need is one of these protocols. But often, when you need to communicate with an ECU, you also want a normal Frame or Signal/Channel API where you get raw frame data, or engineering units on signals.

 

The ECU Measurement and Calibration (ECU M&C) toolkit adds XCP and CCP capabilities on top of CAN, allowing you to read and write parameters based on predefined A2L files. This again is super handy, if the A2L you provide can be parsed, and if all you need to do is talk XCP or CCP to some hardware. But oftentimes you also need to talk over a Frame or Signal/Channel API. And what if you need to talk over some other protocol as well?

 

Every time I've had to use one of these toolkits, it has failed me in real-world use, because you generally don't want just one API, you want several. And to get that kind of capability you often end up closing one API session type to reopen another, then closing that to reopen the first. This gets even more difficult when different hardware types, NI-CAN or NI-XNET, are mixed. The application ends up being a tightly coupled mess.

 

This idea is to rewrite these two toolkits to be as pure a G implementation as possible. I think this would be a good idea because it would let you debug issues with file parsing, which at the moment is a DLL call you can only hope works, and it would allow the toolkits to be used independently of the hardware: just give them some raw frames and read data back. NI-XNET already has some of this type of functionality in the form of its frame/signal conversion mode, where it can convert from frames to signals without needing hardware. If these toolkits supported a similar raw mode, it would allow for more flexible and robust applications, and potentially let these toolkits run on other hardware types, or on simulated hardware.

The current Bluetooth VIs (as of LabVIEW 2014) don't support communication with the new Bluetooth 4.0 protocol, referred to as Bluetooth Low Energy (or Bluetooth Smart).

 

New VIs dedicated to BLE, or added support in the current VIs, are needed by all developers targeting this new Bluetooth stack.

 

LabVIEW users should be able to deploy programs to the Intel Edison Module 

 

The Intel Edison is a very small single board computer with a dual-core Atom processor, 1GB RAM and built-in WiFi/Bluetooth LE.

Functionality can be added by connecting breakout boards, so-called blocks. Many of these blocks are already available: ADC, GPIO, Arduino, PWM...

 

In my opinion the Intel Edison is very well suited for things like embedded control, robotics, and the Internet of Things.

That's why I posted this idea: to convince NI to support it in LabVIEW!

 

 

VDB