LabVIEW Idea Exchange


It would be nice to have a managed way to get LabVIEW class properties in both the development and runtime environments: the names of the properties and their corresponding accessor VIs, along with path and scope information.

Currently the only way to get this information is to parse the class files as XML. That is neither a managed solution nor one that works for classes embedded in compiled code.

The recently introduced Raspberry Pi is a very popular 32-bit ARM-based single-board computer. It would be great if we could program it in LabVIEW. Such a product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The Raspberry Pi is a $35 (with Ethernet) credit-card-sized computer built on open hardware. Its Broadcom SoC contains an ARM11 core running at 700 MHz, delivering about 875 MIPS. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 15 times the processing power!

 

Wouldn’t it be great to program the Raspberry Pi in LabVIEW?

Problem: When an error occurs inside some DAQmx VIs, the error source (the string component of the error cluster) does not contain the call chain. This makes it impossible to tell from the error message alone where the error occurred.

 

Real-world example: The other day I encountered DAQmx error -200477 on a cRIO-9045 that uses DAQmx to acquire data from several different C Series modules. The real-time application running on the cRIO contained an error-handling module, which correctly logged the error to file. Using PuTTY and the Linux 'cat' command to display the contents of the error log file, I saw the following:

 

1.png

Notice that the error message contains no information about where in the codebase the error actually occurred. I had to spend a few minutes inspecting the moderately large codebase before identifying the likely location of the error, which subsequent tests confirmed. The error was then quickly understood and fixed.

 

Root cause: The root cause of the issue (the missing call chain information) is the DAQmx Fill In Error Info.vi. In the example above, the error occurred inside the DAQmx Timing (Sample Clock).vi.


4.png

The block diagram of this VI is shown below:

2.png

Any LabVIEW errors generated inside that VI come from the DAQmx Fill In Error Info.vi, which is essentially a simple translator between the return-type (I32) output of the Call Library Function Node and a native LabVIEW error. That VI has an unwired input named depth whose default value is 1, which means that only the last link of the call chain is inserted into the error message.
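
By way of analogy, here is a minimal Python sketch (all names hypothetical) of how a depth parameter truncates the reported call chain; with depth = 1 the message names only the innermost routine, which is exactly the symptom above:

```
import traceback

def fill_in_error_info(return_code: int, depth: int = 1) -> str:
    """Build an error message that includes only the last `depth`
    callers from the current call chain (analogous to the unwired
    `depth` input of DAQmx Fill In Error Info.vi)."""
    stack = traceback.extract_stack()[:-1]  # exclude this helper itself
    chain = " -> ".join(frame.name for frame in stack[-depth:])
    return f"Error {return_code} in: {chain}"

def configure_sample_clock(depth: int) -> str:
    return fill_in_error_info(-200477, depth=depth)

def acquisition_task(depth: int) -> str:
    return configure_sample_clock(depth)

print(acquisition_task(depth=1))   # Error -200477 in: configure_sample_clock
print(acquisition_task(depth=10))  # ... -> acquisition_task -> configure_sample_clock
```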

 

3.png

Solution: Always insert the complete call chain inside all DAQmx error messages.

I distribute a lot of code, and sometimes it's difficult to tell my users what they need to install in order to run that code. It would be nice if I (or a user) could run a built-in LabVIEW utility that reports what a given VI needs in order to run.

 

For example, do I need DAQmx, MathScript, or Robotics?

 

 

 

The BeagleBoard-xM is a very popular 32-bit ARM-based single-board computer. It would be great if we could program it in LabVIEW. Such a product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The BeagleBoard-xM costs $149 and is open hardware. It uses an ARM Cortex-A8 running at 1,000 MHz, delivering about 2,000 MIPS. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 33 times the processing power!

 

Wouldn’t it be great to program the BeagleBoard-xM in LabVIEW?

This idea was submitted by a user who requested I post this for them.

 

As of now, the VISA resource controls only let you select resource names without their full Windows descriptions. You can select individual COM ports (COM3, COM5, etc.) or pick from a list of alias names if you've defined aliases for your COM ports. But it would be nice to give the user a configurable option that provides the additional descriptive information you can find in Windows Device Manager. This would let novice users select the desired COM port based on the actual physical hardware needed for the application. Again, I'm pretty sure you can work around this by reviewing the different COM ports in Measurement & Automation Explorer, or even by creating your own aliases to surface the additional information. But if I'm creating an executable to be used on different systems by novice users, I may not want them to have to go into MAX to properly identify their desired port.
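
For reference, the operating system already exposes exactly this kind of descriptive string. A minimal sketch using Python's third-party pyserial package (not part of LabVIEW; shown only to illustrate what information is available):

```
# pip install pyserial
from serial.tools import list_ports

# Each entry pairs the port name with the descriptive string that
# Windows Device Manager shows (e.g. the USB-serial adapter's name).
for port in list_ports.comports():
    print(f"{port.device}: {port.description}")
```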

 

So, instead of asking the user to select a COM port from a list of items looking like this...

travisferguson_0-1641925866067.png

 

Maybe give an option in a property page for the VISA Resource Control that might look like this (this is a mock-up)...

 

travisferguson_1-1641925926951.png

so that an operator can pick from a more descriptive list, like what you see in Windows Device Manager...

travisferguson_2-1641925994204.png

 

Thank you,

 

This is something a few power users have asked me about. There's no Instrument Driver or VIPM Idea Exchange, so I thought I would post it here.


What if VIPM could manage Instrument Drivers from IDNet?
There are a few key benefits this would offer us...

  • download IDNet drivers directly from VIPM 
  • track which version of a driver you are using for different projects and revert when necessary 
  • wrap up ID dependencies in a VIPC file for use at a customer site
Install Other Version.png
Get Info.png 

Taking ideas like the one seen here, I got this idea:

 

 

When working with a Path Constant that holds a path too long for its frame, we currently see something like this:

path2editado.PNG

It would be much nicer if we could double-click the Path Constant and see something like this:

path3 editado.PNG

The idea is this:

path idea labview editado.PNG

There is a plethora of timestamp formats used by various operating systems and applications; the IEEE 1588 precision time protocol alone lists several. Windows differs from the various flavors of Linux, and Excel is very different from nearly all of them. Then there are the details of daylight saving time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. It should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer); text-based timestamps are already handled fairly well. Inputs would be the timestamp, input format type, output format type, and error. Outputs would be the converted timestamp and error.
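
To make the epoch arithmetic concrete, here is a minimal Python sketch of one such conversion pair (Excel serial days to and from Unix seconds, both taken as UTC; time zones, DST, and leap seconds deliberately ignored):

```
from datetime import datetime, timezone

# Excel counts days since its day zero of 1899-12-30 (offset to absorb
# the historical Lotus 1-2-3 leap-year bug); Unix time counts seconds
# since 1970-01-01 UTC. The two epochs are 25569 days apart.
EXCEL_UNIX_OFFSET_DAYS = 25569
SECONDS_PER_DAY = 86400

def excel_to_unix(excel_days: float) -> float:
    return (excel_days - EXCEL_UNIX_OFFSET_DAYS) * SECONDS_PER_DAY

def unix_to_excel(unix_seconds: float) -> float:
    return unix_seconds / SECONDS_PER_DAY + EXCEL_UNIX_OFFSET_DAYS

# Sanity check: Excel day 43831.0 is 2020-01-01 00:00 UTC.
assert excel_to_unix(43831.0) == datetime(2020, 1, 1, tzinfo=timezone.utc).timestamp()
```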

Hi!

Since National Instruments offers support for programming ARM microcontrollers, I think it would be great to start supporting the recently very popular BeagleBone development board made by Texas Instruments. You can find more information about this device here: http://beagleboard.org/bone . Please kudo it 🙂

Currently LabVIEW only has support for Mandriva, Red Hat, and SUSE Linux. What's even worse, only 32-bit versions of those are supported. Today, 64-bit Linux installations are on a huge rise, and Ubuntu is getting more and more popular. LabVIEW's Linux support should be expanded to include Ubuntu, and 64-bit versions are needed.

 

cheers,

Pekko

 

I think it is very difficult to make a UI that runs on Windows and interacts with targets. Here are two suggestions to improve this:

 

1. We currently can't use the \c\ path style in a file path control on Windows. It would be nice to let the user specify which OS syntax to use instead of assuming it is always the local syntax.

2. The icing on the cake would be for the File Path Control to support a property node for an IP address, so that when the user clicks the browse button it automatically browses the target (this is already an idea, mentioned in the link below) and uses the target's syntax. This becomes especially useful as we start to have targets that may offer an alternative way of browsing files besides FTP. It would be a pain to figure out which software is installed on the target and use the correct method to enumerate files and folders.

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Path-control-of-VI-under-real-time-target-should-browse-target-s/idi-p/1776212

 

These two features could be implemented with an enum property node for the File Path Control called Syntax, with options such as Local, various OSes (Windows, Linux, Mac, etc.), or IP Address. If IP Address is selected, another property node called IP Address would determine which target's OS syntax to use (if it is unspecified or invalid, default to Local).
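
As a rough analogy for what such a Syntax property would enable, Python's pathlib can already manipulate a path in a non-local syntax without touching the file system (the target paths here are made up for illustration):

```
from pathlib import PurePosixPath, PureWindowsPath

# On a Windows host, build and edit a path using an RT Linux
# target's syntax rather than the local one...
remote = PurePosixPath("/home/lvuser/natinst/bin")
print(remote / "startup.rtexe")   # /home/lvuser/natinst/bin/startup.rtexe

# ...while local paths keep the local (Windows) syntax.
local = PureWindowsPath(r"C:\data\logs")
print(local / "run.log")          # C:\data\logs\run.log
```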

How is NI-MAX still so bad after all these years? It goes completely unresponsive and crashes at the slightest provocation. This feels like it should be NI's bread and butter.

 

Every LabVIEW team ends up recreating so much of the functionality that NI-MAX is supposed to offer out of the box. Maybe with the discontinuation of NXG, some resources should be allocated to making this product usable.

 

TurboPhil_1-1629324790878.png

 

 

KB articles like this and this probably shouldn't need to exist.

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open-loop feedback system.

Let's make an Idea Exchange for NXG!

Connecting to remote panels using LabVIEW is difficult when private networks, local private and external public IPs (behind NAT), firewalls, etc. are involved. It requires significant knowledge as well as external networking configuration (port forwarding, etc.), and possibly admin privileges to modify those settings.

 

There are plenty of companies that have found a way around all this. The prime example is Chrome Remote Desktop, which works seamlessly even if the target computers are in hidden locations on private networks, as long as each machine can reach the internet with an outgoing UDP connection. The way I understand it, each computer registers with the Google server, which in turn patches the two outgoing connections together so that both sides communicate directly afterwards. All traffic tunnels inside the plain Google chat protocol (UDP based). Similar mechanisms have been developed for security systems (example) and many more.

 

Since the bulk of the traffic is directly between the endpoints, the traffic load on the external connection management server is very minimal. It simply keeps an updated list of active nodes and handles the patching if requested.
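
A minimal sketch of the client side of such a rendezvous step (UDP hole punching in its simplest form; the server address and message format are entirely made up):

```
import socket

# Hypothetical rendezvous server that records each node's public
# (NAT-translated) endpoint and hands peers each other's addresses.
SERVER = ("rendezvous.example.com", 9000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"REGISTER my-user-id/panel-1", SERVER)

# The server replies with the peer's public endpoint, e.g. "203.0.113.7:40123".
data, _ = sock.recvfrom(1024)
host, port = data.decode().split(":")

# Both peers now send to each other's public endpoint; the outgoing
# packets open holes in each NAT so the incoming ones are accepted,
# and all further traffic flows directly between the two endpoints.
sock.sendto(b"HELLO", (host, int(port)))
```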

 

I envision a very similar mechanism where LabVIEW users can associate all their applications and distributed computers with a given user ID (e.g. an NI profile) and at all times get a list of all currently running remote systems published under that user ID. If we want to connect to one of them, the connection server would patch things together without the need for any networking configuration. Optionally, users could publish any given panel under a public key that can be distributed to allow public connections by any other LabVIEW user.

 

This is a very general idea. Details of the best implementation would need to be worked out. Thanks for voting!

 

Wouldn't it be great if National Instruments supported AUTOSAR SWC (software component) development in LabVIEW? The AUTOSAR standard, with its module-based approach, fits LabVIEW perfectly. I am convinced that your company could do a great job of implementing an easy-to-use environment for this emerging standard. I have worked with the tools from Vector, and in my opinion everything there is very messy and illogical.

 

Best regards

Mats Olsson

Currently the only way to set or modify TCP socket options is to call some system library directly, as done for example in this post.

 

This not only makes the code difficult to understand ("what does that library call do again?") but also poses problems when you want to use the code on different operating systems: currently the only way is to use Conditional Disable structures, and even then LabVIEW still tries to load the code written for the other operating systems...

 

LabVIEW should have a standard way to set socket options from within LabVIEW code, at least for the most important options (the Nagle algorithm leaps to mind...). This could be done either as additional inputs on the "TCP Open Connection" VI or (much better) via property nodes for TCP connections.
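
For comparison, this is how little the equivalent takes in an environment with first-class socket-option access; a minimal Python sketch disabling the Nagle algorithm (the instrument address is made up):

```
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.168.1.10", 5025))  # hypothetical instrument endpoint

# Disable the Nagle algorithm so small writes go out immediately
# instead of being coalesced into larger segments.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```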

 

The current Bluetooth VIs (as of LabVIEW 2014) don't support communication over the new Bluetooth 4.0 protocol, referred to as Bluetooth Low Energy (or Bluetooth Smart).

 

New VIs dedicated to BLE, or support added to the current VIs, is needed by all developers working with this new Bluetooth stack.

 

Currently the lvsound2 library -- the Sound Input and Sound Output VIs backed by lvsound2.llb and lvsound2.dll -- reads the audio device list from the operating system only when it is first loaded into memory. If you change the device list (e.g., pair or unpair a Bluetooth headset), the device IDs will not reflect the new configuration until all the lvsound2-dependent VIs have been unloaded from memory. After adding or removing a device, the VIs will generate error 4803 ("The sound driver or card does not support the desired operation.") for device IDs related to the new or removed device, even if the ID is still actually valid and now points to something else. This is extraordinarily inconvenient for test systems focused on audio device testing, but it is understandably a niche issue, which may be why it hasn't been caught before now.

 

In the interim, the workaround is to call the relevant VIs dynamically, forcing them to load and unload as necessary. There are two appropriate solutions I can think of:

 

1) Update the Sound X VIs to implement the dynamic-call workaround internally (preferably directly around the lvsound2.dll calls, so we can still borrow other VIs in the LLB).

2) Update the DLL to support on-the-fly changes.

 

The latter solution is ideal, particularly for performance. This post is intended both as a suggestion and as a bug report, so that anyone else who hits this problem can find a public forum documenting the issue.