
LabVIEW Idea Exchange


In the drivers (DAQmx, NI-SCOPE, NI-FGEN, ...):

If a value (usually a DBL) enters a configuration VI, it can be coerced by the driver.

That VI should directly output the actual value.

 

This comes up often in the forum: a user configures an invalid DAQmx Sample Clock rate and wonders why the resulting waveform data is not what they expected.

Yes, reading the manual helps, and you can read the properties back after configuration, but it would be easy to add an output to the config VI with the actual (possibly coerced) value, making the user aware of that 'feature' (instead of popping an error 😉).

 

This should not only cover sample rate, but also other values such as FGEN standard function parameters (frequency, phase), actual range, ...
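To make the coercion point concrete, here is a minimal Python sketch using the nidaqmx package (not LabVIEW; "Dev1" is a placeholder device name). Today the coerced value has to be read back as a separate property; the idea is that the config call itself would return it:

```python
# Hedged sketch using the nidaqmx Python package to illustrate the point:
# the driver may coerce the requested sample rate, and reading the actual
# value back is currently an extra, easy-to-forget step.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # "Dev1" is a placeholder
    requested_rate = 1_234_567.0                      # deliberately awkward rate
    task.timing.cfg_samp_clk_timing(rate=requested_rate,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=1000)
    actual_rate = task.timing.samp_clk_rate           # driver-coerced value
    print(f"requested {requested_rate} Hz, got {actual_rate} Hz")
```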

Problem

Many times, the bulk of LabVIEW development happens on computers that will never interface with hardware. A dozen engineers may be collaborating on code that will ultimately run on a dedicated machine somewhere that is connected. Yet, as things currently stand, I have to install more than I need on my development machine just to get access to the API VIs. If I am working on my laptop on an application with DAQ, RF, spectrum analyzer, etc. components, I have to choose between downloading and installing all of that, or dealing with missing VIs and broken arrows. This seems needless, since my particular machine will never actually interface with the hardware.

 

Idea

I would like to have the option to install only the LabVIEW VIs and ignore the driver itself. In many, if not most cases, the LabVIEW API could be independent of driver version. It could install very quickly, since it would just be a set of essentially no-op VIs. I don't care that the VIs would do nothing. They would just be placeholders for my development purposes. This would allow me to have full API access to develop my code without having to carry around large driver installations that I will never actually use.

This idea was submitted by a user who requested I post this for them.
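As a rough illustration of the "placeholder API" concept (a hypothetical Python sketch, not LabVIEW; the class and method names are invented), stubs with the right signatures but no driver behind them are all that development against the API really needs:

```python
# Hypothetical illustration of "API stubs without the driver": placeholders
# with the real signatures are enough to develop against, even though they
# do nothing. All names here are made up for illustration.
class FakeScopeSession:
    """Stand-in for a driver session on a machine with no hardware installed."""

    def configure_vertical(self, channel: str, range_v: float) -> None:
        pass  # no-op placeholder

    def read_waveform(self, channel: str, num_samples: int) -> list[float]:
        return [0.0] * num_samples  # dummy data so calling code still runs

session = FakeScopeSession()
session.configure_vertical("0", 5.0)
print(len(session.read_waveform("0", 100)))
```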

 

As of now, the VISA resource controls only allow you to select resource names without their full Windows description. You can select individual COM ports (COM3, COM5, etc.), or pick from a list of alias names if you've defined aliases for your COM ports. But it might be nice to give the user a configurable option that provides the additional descriptive information you can find in Windows Device Manager. This would allow novice users to select the desired COM port based on the actual physical layer needed for the application. Again, I'm pretty sure you can work around this by reviewing the different COM ports in Measurement & Automation Explorer, or even creating your own aliases to surface the additional information. But if I'm creating an executable to be used on different systems by novice users, I may not want them to have to go into MAX to properly identify their desired port.

 

So, instead of asking the user to select a COM port from a list of items looking like this...

[Image: travisferguson_0-1641925866067.png — list of bare COM port names]

 

Maybe give an option in a property page for the VISA Resource Control that might look like this (this is a mock-up)...

 

[Image: travisferguson_1-1641925926951.png — mocked-up property page option]

so that an operator can pick from a more descriptive list like what you see in the Windows Device Manager...

[Image: travisferguson_2-1641925994204.png — descriptive port list as shown in Windows Device Manager]
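As a rough illustration of where that descriptive text could come from (a hedged Python sketch using the pyserial package, not part of VISA), the operating system already pairs each port name with the Device Manager description:

```python
# Hedged sketch: the pyserial package (not NI-VISA) already exposes the
# descriptive string seen in Windows Device Manager alongside each port name,
# which is the kind of information a VISA resource control could surface.
from serial.tools import list_ports

for port in list_ports.comports():
    # e.g. "COM3" paired with "USB Serial Port (COM3)"
    print(f"{port.device}: {port.description}")
```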

 

Thank you,

 

With the advent of the IoT, the growing need to synchronize automation applications, and other TSN (Time-Sensitive Networking) use cases, UTC (Coordinated Universal Time) is becoming more and more problematic. Currently, there are 37 seconds not accounted for in the timestamp, which is stored in UTC. The I64 portion of the timestamp datatype, which holds the number of seconds elapsed since 00:00:00 January 1, 1904 UTC, simply ignores leap seconds. This is consistent with most modern programming languages and is not a flaw of LabVIEW per se, but it isn't quite good enough for where the future is heading. In fact, there is a joint IERS/IEEE working group on TSN.

 

Enter TAI, or International Atomic Time: TAI has the advantage of being contiguous and is based on the SI second, making it ideal for industrial automation applications. Unfortunately, a LabVIEW timestamp cannot be formatted in TAI. Entering a time of 23:59:60 31 Dec 2016, a real second that did occur, is not allowed. Currently, IERS Bulletin C is published to give the current UTC-TAI offset, but it requires extensive code to implement the lookup, and the 23:59:60 reading just won't display properly in a %<>T or %^<>T (local absolute time and UTC absolute time container). We need a %#<>T TAI time container format specifier. (Or soon will!)
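For reference, here is a minimal Python sketch of the kind of lookup the idea would make unnecessary, using a hard-coded excerpt of the leap-second offsets published in IERS Bulletin C (the table below is partial and only for illustration):

```python
# Hedged sketch: converts a UTC time to TAI using a partial, hard-coded excerpt
# of the IERS Bulletin C leap-second table. A native %#<>T specifier would make
# this table (and the code around it) unnecessary.
from datetime import datetime, timedelta, timezone

# (effective date, TAI - UTC offset in seconds) -- partial table
LEAP_TABLE = [
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def utc_to_tai(utc: datetime) -> datetime:
    offset = max((s for d, s in LEAP_TABLE if utc >= d), default=10)
    return utc + timedelta(seconds=offset)

print(utc_to_tai(datetime(2024, 4, 20, tzinfo=timezone.utc)))  # 37 s ahead of UTC
```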

How is NI-MAX still so bad after all these years? It goes completely unresponsive and crashes at the slightest provocation. This feels like it should be NI's bread-and-butter.

 

Every LabVIEW team ends up recreating so much of the functionality that NI-MAX is supposed to offer out of the box. Maybe with the discontinuation of NXG, some resources should be allocated to making this product usable.

 

[Image: TurboPhil_1-1629324790878.png]

 

 

KB articles like this and this probably shouldn't need to exist.

This is something a few power users have asked me about. There's no Instrument Driver or VIPM Idea Exchange, so I thought I would post it here.


What if VIPM could manage Instrument Drivers from IDNet?
There are a few key benefits this would offer us...

  • download IDNet drivers directly from VIPM 
  • track which version of a driver you are using for different projects and revert when necessary 
  • wrap up ID dependencies in a VIPC file for use at a customer site
[Image: Install Other Version.png]
[Image: Get Info.png]

Taking ideas like the one seen here, I got this idea:

 

 

When working with a Path Constant that holds a path too long to display, we currently see something like this:

[Image: path2editado.PNG — truncated path constant]

It would be much nicer if we could double-click the Path Constant to see something like this:

[Image: path3 editado.PNG — expanded path display]

The idea is this.

[Image: path idea labview editado.PNG — proposed behavior]

Given the increasing number of questions about this communication protocol, it is time to rewrite the MODBUS library. I also suggest adding it to the NI device drivers installer.

 

This could be the place to list the expected modifications. Some comments and bugs are already listed on the page linked above.
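For context, the protocol itself is small; a hedged Python sketch of a Modbus TCP "Read Holding Registers" request built by hand (the IP address, unit ID, and register range are placeholders) looks like this:

```python
# Hedged sketch: a bare-bones Modbus TCP "Read Holding Registers" (function 0x03)
# request assembled by hand, to show how small the protocol is. Host, unit ID,
# and register range are placeholders.
import socket
import struct

HOST, PORT, UNIT_ID = "192.168.1.10", 502, 1
START_REG, NUM_REGS = 0, 2

# PDU: function code, starting address, quantity
pdu = struct.pack(">BHH", 0x03, START_REG, NUM_REGS)
# MBAP header: transaction id, protocol id (0), remaining length, unit id
mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, UNIT_ID)

with socket.create_connection((HOST, PORT), timeout=2.0) as sock:
    sock.sendall(mbap + pdu)
    response = sock.recv(256)

# Successful response: 7-byte MBAP header, function code, byte count, data
byte_count = response[8]
registers = struct.unpack(f">{byte_count // 2}H", response[9:9 + byte_count])
print(registers)
```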

So, one can set many of the standard properties for VISA connections. However, the serial port latency setting is not one of these. It can easily be set with a standard POSIX ioctl call on both Linux and macOS (who the heck knows how Windows does it). Many character-based devices can be manipulated through these ioctl calls.

 

To be specific, the FTDI USB-to-serial chips used in MANY devices as a USB interface or just as a serial port have a slow default latency that is fine for preserving CPU cycles but not great for modern high-speed custom communication.

 

1. Just add a property that sets the serial port latency using an ioctl IOSSDATALAT configuration.

 

2. More generally, allow an interface that calls ioctl with a supplied property, so the LabVIEW programmer can pass the correct property configuration.

 

3. Less satisfying, but possibly easier to implement, is to have the VISA property be the file descriptor number. This would simplify the programming, where one currently has to get the "Name" property, search the process information table for that "Name" (or really, path), and then extract the file descriptor. Only then can one set the latency for that device.
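As a point of reference for what the workaround looks like outside VISA today: on Linux, the ftdi_sio driver exposes the latency timer through sysfs (this is the Linux counterpart of the macOS IOSSDATALAT ioctl mentioned above). A hedged Python sketch, with a placeholder device name and assuming sufficient privileges:

```python
# Hedged sketch: lowers the FTDI latency timer on Linux via the ftdi_sio sysfs
# attribute. "ttyUSB0" is a placeholder device name and writing this file
# usually requires elevated privileges.
from pathlib import Path

latency_file = Path("/sys/bus/usb-serial/devices/ttyUSB0/latency_timer")
latency_file.write_text("1")                # 1 ms instead of the 16 ms default
print(latency_file.read_text().strip())
```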

 

There are a plethora of timestamp formats used by various operating systems and applications; the IEEE 1588 time protocol alone lists several. Windows is different from the various flavors of Linux, and Excel is very different from almost all of them. Then there are the details of daylight saving time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. It should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer); text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be the timestamp in the resultant format and error.
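A hedged Python sketch of the conversions such a primitive would wrap up, using the well-known epoch offsets (Excel serial days since 1899-12-30, Unix seconds since 1970-01-01, LabVIEW seconds since 1904-01-01 UTC). Time zones and leap seconds are ignored here, which is exactly the sort of detail a built-in primitive should get right:

```python
# Hedged sketch of numeric timestamp conversions between common epochs.
# Excel: days since 1899-12-30; Unix: seconds since 1970-01-01 UTC;
# LabVIEW: seconds since 1904-01-01 UTC. Leap seconds and Excel's 1900
# leap-year quirk are ignored.
SECONDS_PER_DAY = 86400
EXCEL_EPOCH_TO_UNIX_DAYS = 25569             # days from 1899-12-30 to 1970-01-01
LABVIEW_EPOCH_TO_UNIX_SECONDS = 2082844800   # seconds from 1904-01-01 to 1970-01-01

def excel_to_unix(excel_days: float) -> float:
    return (excel_days - EXCEL_EPOCH_TO_UNIX_DAYS) * SECONDS_PER_DAY

def unix_to_labview(unix_seconds: float) -> float:
    return unix_seconds + LABVIEW_EPOCH_TO_UNIX_SECONDS

print(unix_to_labview(excel_to_unix(45400.5)))  # Excel serial date -> LabVIEW seconds
```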

Hi!

Since National Instruments offers support for programming ARM microcontrollers, I think it would be great to start supporting the recently very popular BeagleBone development board, built around a Texas Instruments ARM processor. You can find more information about this device here: http://beagleboard.org/bone . Please kudo it 🙂

The VISA test panel is a very valuable tool for troubleshooting instrument connectivity issues.

 

This used to be included with the VISA runtime, or at least with any installer that also included the VISA runtime.

 

Now I have to separately download and install the FULL VISA just to get this valuable tool. 

 

That makes installing a LabVIEW executable a multistep process as now I have to run two different installers. 

 

NI-MAX and the VISA test panel should ALWAYS BE included in any installer that includes the VISA runtime.

I think it is very difficult to make a UI that runs on Windows and interacts with targets. Here are two suggestions to improve this:

 

1. We currently can't have \c\ format style in a file path control on Windows. It would be nice to allow the user to specify the OS syntax to use instead of assuming it should always be the local syntax.

2. The icing on the cake would be to have the File Path Control support a property node for an IP address, so that when the user clicks the browse button, it automatically browses the target (this is already an idea mentioned in the link below) and uses the syntax of the target. This becomes especially useful as we start to have targets that may have an alternative way of browsing files besides FTP. It would be a pain to figure out which software is installed on the target and use the correct method to enumerate files/folders.

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Path-control-of-VI-under-real-time-target-should-browse-target-s/idi-p/1776212

 

These two features could be implemented by having an enum property node for the File Path Control called Syntax, which includes options like: Local, various OSes (Windows, Linux, Mac, etc.), or IP Address. If IP Address is specified, another property node called IP Address would be used to determine which target OS to use (if it's not specified or invalid, default to Local).
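To illustrate the Syntax idea in text form (a hedged Python sketch, not a LabVIEW property node), treating path syntax as an explicit choice rather than assuming the local OS looks like this:

```python
# Hedged sketch: path syntax chosen explicitly instead of assumed from the
# local OS, which is the core of the proposed "Syntax" property.
from pathlib import PureWindowsPath, PurePosixPath

target_is_linux_rt = True  # would come from the proposed Syntax / IP Address properties

raw = "home/lvuser/natinst/bin" if target_is_linux_rt else r"C:\data\logs"
path = PurePosixPath("/", raw) if target_is_linux_rt else PureWindowsPath(raw)

print(path)        # /home/lvuser/natinst/bin  or  C:\data\logs
print(path.parts)  # parsed with the target's rules, not the host's
```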

If I "create constant" from a path input, and start typing into the path, the control grows to the left, away from the function that accepts it as an input.

 

But for VISA and IVI, the name controls grow to the right as you type into them, covering up the icon of the function that accepts the input, along with the wire connecting them.  Let's make VISA and IVI behave better.

 

[Image: 23852i492819D58059CA42 — VISA name constant growing to the right over the function icon]

Currently, the only way to set or modify TCP socket options is by directly calling some system library, as done for example in this post.

 

This not only results in code that is difficult to understand ("what does that library call do again?") but also poses problems when you want to use your code on different operating systems: currently the only way to do this is with conditional disable structures, and then LabVIEW still tries to load the code written for a different operating system...

 

LabVIEW should have a standard way to set socket options from within LabVIEW code, at least for the most important options (the Nagle algorithm leaps to mind...). This could either be done as additional inputs to the "TCP Open Connection" VI, or (much better) using property nodes for TCP connections.

 

Connecting to remote panels using LabVIEW is difficult if private networks, local private and external public IPs (behind NAT), firewalls, etc. are involved. It requires significant knowledge as well as external networking configuration (port forwarding, etc.), and possibly admin privileges to modify it.

 

There are plenty of companies that have found a way around all this. The prime example is Chrome Remote Desktop, which works seamlessly even if the target computers are in hidden locations on private networks, as long as each machine can access the internet with an outgoing UDP connection. The way I understand it, each computer registers with the Google server, which in turn patches the two outgoing connections together so that both communicate directly afterwards. All traffic tunnels inside the plain Google chat protocol (UDP based). Similar mechanisms have been developed for security systems (example) and many more.

 

Since the bulk of the traffic is directly between the endpoints, the traffic load on the external connection management server is very minimal. It simply keeps an updated list of active nodes and handles the patching if requested.

 

I envision a very similar mechanism where LabVIEW users can associate all their applications and distributed computers with a given user ID (e.g. an NI profile) and, at all times, be able to get a list of all currently running remote systems published under that user ID. If we want to connect to one of them, the connection server would patch things together without the need for any networking configuration. Optionally, users could publish any given panel under a public key that can be distributed to allow public connections by any other LabVIEW user.

 

This is a very general idea. Details of the best implementation would need to be worked out. Thanks for voting!
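Purely to make the registration half of the idea concrete, here is a hypothetical Python sketch of the node side; the server address, message format, and user ID are all invented for illustration, and the real protocol would need to be designed:

```python
# Hypothetical sketch of node-side registration: periodic UDP keep-alives let
# the rendezvous server record this node's public address (as seen through NAT)
# under a user ID. Server address, payload, and user ID are invented.
import json
import socket
import time

RENDEZVOUS = ("rendezvous.example.com", 50000)   # placeholder server
NODE_INFO = {"user_id": "my-ni-profile", "panel": "MainPanel.vi"}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(3):                               # would loop forever in practice
    sock.sendto(json.dumps(NODE_INFO).encode(), RENDEZVOUS)
    time.sleep(30)                               # keeps the NAT mapping alive
```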

 

If you are using TCP to communicate with a different code environment, you may want to set some of the socket options. For example, for responsive control, you will want to disable Nagle's algorithm. There is currently no obvious or easy way to do this. TCP Get Raw Net Object.vi in <vi.lib>\utility\tcp.llb will provide the raw socket ID, but you then need to call setsockopt() on your particular platform using the Call Library Function Node. You can do this with the code provided here. A much better way would be a property node on the TCP reference that allowed you to set and query the options directly.
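For reference, this is all the equivalent call amounts to in environments where the socket is exposed directly (a hedged Python sketch with a placeholder peer address; in LabVIEW today you would still need the raw socket ID and a Call Library Function Node):

```python
# Hedged sketch: disabling Nagle's algorithm (TCP_NODELAY) on a plain socket.
# Host and port are placeholders; a TCP property node could expose exactly this
# option without any platform-specific library calls.
import socket

sock = socket.create_connection(("192.168.1.20", 6341))        # placeholder peer
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)      # disable Nagle
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # 1 -> disabled
sock.close()
```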

Wouldn't it be great if National Instruments could support AUTOSAR SWC development in LabVIEW? The AUTOSAR standard, with its module-based approach, fits LabVIEW perfectly. I am convinced that your company could do a great job implementing an easy-to-use environment for this emerging standard. I have worked with the tools from Vector, and in my opinion everything there is very messy and illogical.

 

Best regards

Mats Olsson

The current Bluetooth VIs (as of LabVIEW 2014) don't support communication with the newer Bluetooth 4.0 protocol, referred to as Bluetooth Low Energy (or Bluetooth Smart).

 

New VIs dedicated to BLE, or adding support to the current VIs, is needed by all developers targeting this new Bluetooth stack.
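For a sense of the minimum surface such VIs would need to cover, here is a hedged Python sketch using the third-party bleak package (not an NI API) that simply discovers nearby BLE advertisers:

```python
# Hedged sketch: BLE device discovery with the third-party "bleak" package,
# shown only to outline the scan functionality the proposed VIs would need
# to expose. Not an NI API.
import asyncio
from bleak import BleakScanner

async def main() -> None:
    devices = await BleakScanner.discover(timeout=5.0)
    for dev in devices:
        print(dev.address, dev.name)

asyncio.run(main())
```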