LabVIEW Idea Exchange


If I "create constant" from a path input, and start typing into the path, the control grows to the left, away from the function that accepts it as an input.

 

But for VISA and IVI, the name controls grow to the right as you type into them, covering up the icon of the function that accepts the input, along with the wire connecting them.  Let's make VISA and IVI behave better.

 


Currently the only way to set or modify TCP socket options is by directly calling some system library, as done for example in this post.

 

This not only makes the code difficult to understand ("what does that library call do again?") but also poses problems when you want to use your code on different operating systems: currently the only way to do this is with "conditional disable structures", and even then LabVIEW still tries to load the code intended for the other operating system...

 

LabVIEW should have a standard way to set socket options within LabVIEW code, at least for the most important options (Nagle's algorithm leaps to mind...). This could either be done as additional inputs to the "TCP Open Connection" VI, or (much better) via property nodes for TCP connections.

 

Connecting to remote panels using LabVIEW is difficult when private networks, local private and external public IPs (under NAT), firewalls, etc. are involved. It requires significant networking knowledge as well as external configuration (port forwarding, etc.), and possibly admin privileges to change it.

 

There are plenty of companies that have found a way around all this. The prime example is Chrome Remote Desktop, which works seamlessly even if the target computers are in hidden locations on private networks, as long as each machine can make an outgoing UDP connection to the internet. The way I understand it, each computer registers with a Google server, which in turn patches the two outgoing connections together so that the two machines communicate directly afterwards. All traffic tunnels inside the plain Google chat protocol (UDP based). Similar mechanisms have been developed for security systems (example) and many more.

 

Since the bulk of the traffic flows directly between the endpoints, the load on the external connection-management server is minimal. It simply keeps an updated list of active nodes and handles the patching when requested.

 

I envision a very similar mechanism where LabVIEW users can associate all their applications and distributed computers with a given user ID (e.g. an NI profile) and, at all times, be able to get a list of all currently running remote systems published under that user ID. If we want to connect to one of them, the connection server would patch things together without the need for any networking configuration. Optionally, users could publish any given panel under a public key that can be distributed to allow public connections by any other LabVIEW user.
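
As a very rough illustration of the rendezvous mechanism described above (not LabVIEW code, and every name, address, and message format below is a made-up placeholder), a minimal client-side sketch in C might look like this:

/* Register with a hypothetical rendezvous server over UDP, learn the peer's
   public endpoint from the reply, then send a first datagram to the peer to
   open ("punch") the NAT mapping so direct traffic can flow. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define RENDEZVOUS_IP   "203.0.113.10"   /* placeholder address */
#define RENDEZVOUS_PORT 9000             /* placeholder port    */

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* 1. Register: one outgoing datagram both opens a NAT mapping and lets
          the server record our public IP:port as seen from outside. */
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(RENDEZVOUS_PORT);
    inet_pton(AF_INET, RENDEZVOUS_IP, &srv.sin_addr);
    const char reg[] = "REGISTER user123";          /* hypothetical message */
    sendto(s, reg, sizeof reg - 1, 0, (struct sockaddr *)&srv, sizeof srv);

    /* 2. The server replies with the other party's public endpoint,
          here assumed to be plain text in the form "ip:port". */
    char buf[64] = {0};
    recvfrom(s, buf, sizeof buf - 1, 0, NULL, NULL);
    char peer_ip[32];
    int  peer_port = 0;
    sscanf(buf, "%31[^:]:%d", peer_ip, &peer_port);

    /* 3. Punch: both sides now send to each other's public endpoint,
          after which they can exchange traffic directly. */
    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(peer_port);
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);
    sendto(s, "HELLO", 5, 0, (struct sockaddr *)&peer, sizeof peer);

    close(s);
    return 0;
}

A real service would of course add authentication, keep-alives to hold the NAT mappings open, and a relay fallback for NATs that cannot be traversed this way.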

 

This is a very general idea. Details of the best implementation would need to be worked out. Thanks for voting!

 

If you are using TCP to communicate with a different code environment, you may want to set some of the socket options. For example, for responsive control you will want to disable Nagle's algorithm. There is currently no obvious or easy way to do this. TCP Get Raw Net Object.vi in <vi.lib>\utility\tcp.llb will provide the raw socket ID, but you then need to call setsockopt() on your particular platform using the Call Library node. You can do this with the code provided here. A much better way would be a property node on the TCP reference that lets you set and query the options directly.
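
For context, this is roughly what that platform-specific call does. The sketch below assumes a POSIX system and a small C wrapper that receives the raw socket ID (e.g. through the Call Library node mentioned above); on Windows the call is essentially the same, but setsockopt() takes a SOCKET handle and a (const char *) option value.

/* Disable Nagle's algorithm on an already-open TCP socket.
   Returns 0 on success, -1 on failure (check errno). */
#include <netinet/in.h>    /* IPPROTO_TCP  */
#include <netinet/tcp.h>   /* TCP_NODELAY  */
#include <sys/socket.h>    /* setsockopt() */

int disable_nagle(int raw_socket_id)
{
    int flag = 1;  /* 1 = send small packets immediately, no coalescing */
    return setsockopt(raw_socket_id, IPPROTO_TCP, TCP_NODELAY,
                      &flag, sizeof flag);
}

A property node on the TCP reference could expose exactly this kind of option (plus SO_KEEPALIVE, buffer sizes, etc.) without any platform-specific code on the user's side.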

Wouldn't it be great if National Instruments could support AUTOSAR SWC development in LabVIEW? The AUTOSAR standard, with its module-based approach, fits LabVIEW perfectly. I am convinced that your company could do a great job implementing an easy-to-use environment for this emerging standard. I have worked with the tools from Vector and, in my opinion, everything there is very messy and illogical.

 

Best regards

Mats Olsson

The current Bluetooth VIs (as of LV 2014) don't support communication over the new protocol Bluetooth 4.0, referred to as Bluetooth Low Energy (or Bluetooth Smart).

 

New VIs dedicated to BLE, or added support in the current VIs, are needed by all developers working with this new Bluetooth stack.

 

Hi Guys!

My idea is quite simple. What I would like is for DAQmx Read and Write to accept a DVR as input so that I can read/write data directly to/from memory. It would also be much appreciated if this could be applied to TDMS Read/Write (I know there is a feature like this in TDMS now, but it is only applicable to external DVRs).

 

 DVR.png

 

Pros: Everything

Cons: None

 

🙂


Sincerely,

 

Andreas

 

Currently the lvsound2 library -- the Sound Input and Sound Output VIs supported by lvsound2.llb and lvsound2.dll -- updates the audio device list from the operating system only when first loaded into memory. If you change the device list (e.g., pair/unpair a Bluetooth headset), the device IDs will not reflect the new configuration until all the lvsound2-dependent VIs have been unloaded from memory. After adding or removing a device, the VIs will generate error 4803 ("The sound driver or card does not support the desired operation.") for device IDs related to the new/removed device, even if the ID is still actually valid and points to something else. This is extraordinarily inconvenient for test systems focused on audio device testing, but understandably a niche issue, which may be why it hasn't been caught before now.

 

In the interim, the workaround is to dynamically call any of the VIs you're interested in, forcing them to load/unload as necessary. There are two appropriate solutions I can think of:

 

1) Update the Sound X VIs to implement the dynamic call workaround (preferably directly around lvsound2.dll calls so we can still borrow other VIs in the LLB).

2) Update the DLL to support on-the-fly changes.

 

The latter solution is ideal, particularly for performance. This post is intended both as a suggestion and as a bug report, so that anyone else who has this problem can find a public forum documenting the issue.

Recently, user cprince inquired why all of the possible Mouse Button presses were not available in LabVIEW.  In the original post, it was not clear whether this referred to "normal" mouse functions (as captured, say, by the Event structure) or whether this referred to the Mouse functions on the Input Device Control Palette.  The latter has a Query Device, which shows (for my mouse on my PC) that the Mouse has 8 buttons, but the Acquire Input Data polymorphic function for Mouse input returns a "cluster button info" with only four buttons.

 

The "magic" seems to happen inside lvinput.dll.  If, as seems to be the case, the DLL can "see" and recognize 8 Mouse Buttons, it would be nice if all 8 could be returned to the user as a Cluster of 8 (instead of 4) Booleans.

 

Bob Schor

It would be really nice if we could build PPLs for multiple targets. The LV help states:

 

"If a VI calls a packed project library compiled for one target and you open the VI on another target that has a different operating system, the packed project library fails to load."

 

We should be able to choose which target's compiled code the builder should include in the PPL. This would make the use of PPLs in a multi-target system so much better.

There are currently two NI toolkits which add a software layer on top of the automotive CAN bus.  

 

The Automotive Diagnostic Command Set (ADCS) adds a couple of protocol standards like KWP2000 (ISO 14230), Diagnostics on CAN (ISO 15765, OBD-II), and Diagnostics over IP (ISO 13400). This is a pretty handy API when all you need is one of these protocols. But often, when you need to communicate with an ECU, you also want a normal Frame or Signal/Channel API where you get raw frame data or engineering units on signals.

 

The ECU Measurement and Calibration (ECU M&C) toolkit adds XCP and CCP capabilities on top of CAN, allowing you to read and write parameters based on predefined A2L files. This again is super handy if the A2L you provide can be parsed and if all you need to do is talk XCP or CCP to some hardware. But often you also need to talk over a Frame or Signal/Channel API. And what if you need to talk over some other protocol as well?

 

Every time I've had to use one of these toolkits, it has failed me in real-world use, because you generally don't want just one API, you want several. And to get that kind of capability you often end up closing one API session type and reopening another, then closing that and reopening the first. This gets even more difficult when different hardware types, either NI-CAN or NI-XNET, are involved. The application ends up being a tightly coupled mess.

 

This idea is to rewrite these two toolkits as pure G implementations as far as possible. I think this would be a good idea because you could then debug issues with file parsing, which at the moment is a DLL call and hope it works, and the toolkits could be used independently of the hardware: just give them some raw frames and read data back. NI-XNET already has some of this functionality in the form of its Frame/Signal Conversion session type, which can convert from frames to signals without needing hardware. If these toolkits supported a similar raw mode, it would allow for more flexible and robust applications, and potentially for using these toolkits on other hardware types, or on simulated hardware.
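
To make the hardware-independent conversion concrete, here is a minimal C sketch of decoding one signal from a raw frame. The signal layout values are invented, the sort of thing a database (DBC/FIBEX/A2L) parser would supply; the real toolkits additionally handle byte order, signed values, multiplexing, and so on.

#include <stdint.h>
#include <stdio.h>

/* Description of one signal inside an 8-byte CAN payload
   (little-endian / Intel byte order assumed). */
typedef struct {
    uint8_t start_bit;   /* bit position of the LSB within the payload */
    uint8_t bit_length;  /* number of bits in the raw value            */
    double  factor;      /* engineering-unit scaling                   */
    double  offset;
} can_signal_def;

double decode_signal(const uint8_t payload[8], const can_signal_def *sig)
{
    /* Pack the payload into one 64-bit little-endian word, then shift
       and mask out the signal's bits. */
    uint64_t word = 0;
    for (int i = 7; i >= 0; i--)
        word = (word << 8) | payload[i];

    uint64_t mask = (sig->bit_length >= 64) ? ~0ULL
                                            : ((1ULL << sig->bit_length) - 1);
    uint64_t raw  = (word >> sig->start_bit) & mask;

    return (double)raw * sig->factor + sig->offset;   /* physical value */
}

int main(void)
{
    /* Hypothetical engine-speed signal: 16 bits starting at bit 24,
       0.25 rpm per count. */
    can_signal_def rpm = { .start_bit = 24, .bit_length = 16,
                           .factor = 0.25, .offset = 0.0 };
    uint8_t frame[8] = { 0, 0, 0, 0x10, 0x27, 0, 0, 0 };  /* raw 0x2710 */
    printf("engine speed = %.1f rpm\n", decode_signal(frame, &rpm));  /* 2500.0 */
    return 0;
}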

 

The LM3S9D96 Development Kit is a 32 bit ARM based microcontroller board that is popular and has several plug in boards. It would be great if we could programme it in LabVIEW. This product could leverage off the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK.

 

The LM3S9D96 Development Kit costs $425 and is open hardware. The LM3S9D96 is an ARM Cortex M3 running at 80 MHz resulting in 96 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power.

 

The LM3S9B96 Development Kit brochure (http://www.ti.com/lit/ml/spmt158e/spmt158e.pdf) already states, “The LM3S9B96 development board is also a useful development vehicle for systems programmed using tools such as Microsoft’s .NET Micro Framework and Embedded LabView from National Instruments”. So, the brochure already states that the board can be programmed using LabVIEW. Unfortunately, this is not so - not without a few months’ work. No one has done the Tier 2 to Tier 1 port, and it would make the most sense for National Instruments to do this once for the benefit of all. Relatively little work is needed to enable this interesting development board. And the marketing is already done!

 

Wouldn’t it be great to programme the LM3S9D96 Development Kit in LabVIEW?

 

Files with the .txt extension are among the most commonly used. But when you want to export data from a graph to a .txt file, you cannot do it directly. It would be interesting to have this option.

 

 

EXPORT.png

Stellaris Launchpad.jpg

 

I’ve already put up ideas, about 7 weeks ago, for four development boards that could be LabVIEW targets:

1)  LabVIEW for Raspberry Pi (current kudos 139)

2)  LabVIEW for Arduino Due (current kudos 74)

3)  LabVIEW for BeagleBoard (current kudos 49)

4)  LabVIEW for LM3S9D96 Development Kit (current kudos 15)

 

I wanted to leave it at that to gauge LabVIEW community/user interest, however an exciting new board has just been introduced, which is too good to leave out. It’s the Texas Instruments Stellaris Launchpad.

 

It’s very attractive for three main reasons:

1)    It is very easy to get LabVIEW Embedded for ARM to target this board (a Tier 1 port)

2)    The microcontroller is powerful with many useful on-chip peripherals

3)    The price is extraordinarily low.

 

The Stellaris Launchpad features are:

  • ARM Cortex M4 with floating point unit running at 80 MHz (100 MIPS)
  • 256 KB flash
  • 32 KB SRAM
  • USB device port (separate from the USB for programming/debugging)
  • 8 UARTS
  • 4 I2C
  • 4 SSI/SPI
  • 12 x 12 bit A/D channels
  • 2 analog comparators
  • 16 digital comparators
  • Up to 49 GPIO
  • and more

 

The most interesting feature is that it costs $4.99 including postage. Yep, just under five dollars!  Including postage!  I’ve already ordered two!

 

The Texas Instruments Stellaris Launchpad can be programmed in C/C++ using the free Code Composer Studio, or with the free Energia IDE (an Arduino-style environment) from GitHub. Both are great ways to program. It just needs LabVIEW as the third exciting programming option.

 

Wouldn’t it be great to program the Stellaris Launchpad in LabVIEW?

LabVIEW users should be able to deploy programs to the Intel Edison Module 

 

The Intel Edison is a very small single-board computer with a dual-core Atom processor, 1 GB of RAM and built-in WiFi/Bluetooth LE.

Functionality can be added by connecting breakout boards, so-called blocks. Many of these blocks are already available, like ADC, GPIO, Arduino, PWM...

 

In my opinion the Intel Edison is very well suited for things like embedded control, robotics, the Internet of Things, and more.

That's why I posted this idea to convince NI to support it in LabVIEW!

 

 

VDB

Hello,

 

The current functionality doesn't allow you to asynchronously call a method that has any dynamically dispatched inputs. This forces you to create a statically dispatched wrapper around the dynamic method, which can then be called.

 

This is a source of frustration for me because it forces you to write code that is less readable, and there doesn't seem to be any reason for this limitation. Since you already need to have the class loaded in memory to provide it as an input to the asynchronously called VI, why not just allow dynamic dispatch there (the dynamic method is already in memory)?

 

How it is right now:

DynamicDispatchAsynchCall0.png

DynamicDispatchAsynchCall1.png

 

Solution: Allow asynchronous calls of methods with dynamic dispatch inputs.

THIS IS A REPOST FROM THE MAX IDEA EXCHANGE BECAUSE WE DON'T ACTUALLY HAVE A DRIVER IDEA EXCHANGE AND I DON'T KNOW IF THE MAX IDEA EXCHANGE IS CORRECT

 

Before I start, I want to make clear that I am fully aware that my suggestion is probably linked to some crazy amount of work....  That being out of the way:

 

I often have to switch between LV versions and have, on more than one occasion, run into the problem that different versions of LV work with MUTUALLY EXCLUSIVE sets of drivers. This means that I cannot (for example) have LabVIEW 7.1 and 2011 on the same machine if I need to be coding GPIB functionality over VISA, because there is no single VISA version which supports both 7.1 and 2011 (image below).

 

 

Of course these days we just fire up a VM with the appropriate drivers, but for much hardware (like PCI, Serial, or GPIB) this doesn't work out too well.

 

Why can't we have some version-selection ability for hardware drivers? Why can't I have VISA 4.0 and 5.1.1 installed in parallel and then select which version to use in my project definition? I know that these drivers probably share some files at the OS level, so it clearly won't work for existing driver packages, but for future development it would be utterly magnificent to be able to define which version of a hardware driver (or even an LV toolkit like Vision) should be used in a project.

 

Shane.

What I propose is to have functionality built into the XNet API that is similar to the DAQmx API: a separate subpalette with functions like Start and Stop Logging that log the raw data from an XNet interface into a TDMS file. Maybe even some other functions, like forcing a new file, or starting a new file once the current file reaches a certain age or size. On buses that are heavily loaded, reading every frame and then logging it can use a non-trivial amount of resources, and having this built into the API would likely be more efficient.

 

XNet already has a standard for how to read and write raw frames into a TDMS file that is compatible with DIAdem, and has several shipping examples with LabVIEW to log into this format.

When performing a single point read on an XNet session, you will receive the value of the signal that was last read, or the Default value as defined by the database if it has never been read.

 

This type of functionality is sometimes useful, but more often I'm interested in knowing what the last reading was only if that reading is relatively recent. The problem with the NI implementation is that you have no way of knowing (with this one session type) whether the device is still broadcasting data or not. I think a useful property would be a way of specifying an amount of time for which a signal's value is still considered valid. If I haven't seen an update to a signal in 2 seconds, for example, I can likely assume my device is no longer communicating, and the reading I get back from the XNet Read should return NaN.
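
The behaviour being asked for boils down to something like the following C sketch; the cached-value bookkeeping and the names are purely illustrative, not part of the existing XNet API.

#include <math.h>   /* NAN      */
#include <time.h>   /* difftime */

typedef struct {
    double value;        /* last signal value received from the bus */
    time_t last_update;  /* when that value arrived                 */
} cached_signal;

/* Return the cached value if it is still fresh, otherwise NaN. */
double read_signal_with_timeout(const cached_signal *sig, double timeout_s)
{
    double age = difftime(time(NULL), sig->last_update);
    return (age <= timeout_s) ? sig->value : NAN;
}

With a "valid for" property on the session, XNet Read could do this internally and simply return NaN (or flag the value as stale) when the device stops broadcasting.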

 

I had a small discussion on this topic and posted a solution using an XY session type here, which demonstrates the functionality I am talking about.  I'd like this to be built into the API.