LabVIEW Idea Exchange


It would be nice if LabVIEW RT gave more output during the startup of a real-time executable: for example, whether the rtexe was started at all, whether dependencies were missing during loading, or other useful status information.

 

In my case I had problems with the German code page. The build process completed without failure, but the rtexe didn't start at all. I had used special German characters such as "öäü" in some VI names, and the rtexe couldn't be loaded for that reason. But I never got any message about it.

 

So please improve the debug output for LabVIEW RT.

Packed project libraries are new in LV 2010 and seem to be a great way to share code. One idea to make them more user-friendly for the end user would be to give the library developer the ability to specify driver dependencies for their packed project library.

 

If the end user of the library did not have the right drivers installed, they would receive a warning, or maybe a broken run arrow, if they tried to use it. The warning could be very descriptive, telling them exactly which drivers they are missing. This seems like a better solution than just getting arbitrary errors because LabVIEW can't find subVIs called by the packed project library.

 

Here's a mockup of what the window for this might look like in the packed project library build specifications (borrowed from the additional installers window).

 

packedlibrary.png

Would like to target LabVIEW Embedded on a host of platforms - specifically x86, LynuxWorks, VxWorks, and Xilinx platforms: boards, SBCs, COMs, stacks, PC/104... As with PXI, the way to get users on board is to show that the industry supports this direction and that you are not tied to one vendor. It would be expensive if, in an iterative learning process (necessary, though collaboration with well-established and mature platforms/standards is also necessary), a user's one chosen platform vanished (perhaps like FieldPoint?). Packaging options are also very important, to achieve something like ADLINK's MilSystem 800 when the application demands it.

 

Pavan Bathla

Currently VISA has only a single timeout value, and there are use cases where different read and write timeouts are required. Today you have to constantly modify the timeout between each read and write. It would be preferable if these timeouts could be set independently.
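To make the annoyance concrete, here is the current workaround sketched with Python's PyVISA, whose message-based sessions expose the same single shared `timeout` attribute as LabVIEW's VISA property node. The helper is duck-typed so it works with any session-like object; the timeout values and resource name are hypothetical.

```python
READ_TIMEOUT_MS = 5000   # instrument may be slow to answer queries
WRITE_TIMEOUT_MS = 500   # writes should fail fast

def query_with_split_timeouts(inst, command):
    """Emulate independent read/write timeouts by toggling the shared one."""
    inst.timeout = WRITE_TIMEOUT_MS   # applies to the write...
    inst.write(command)
    inst.timeout = READ_TIMEOUT_MS    # ...then switch it before the read
    return inst.read()

# With real hardware (hypothetical resource name):
#   import pyvisa
#   inst = pyvisa.ResourceManager().open_resource("GPIB0::12::INSTR")
#   idn = query_with_split_timeouts(inst, "*IDN?")
```

With separate read and write timeouts, the two assignments inside the helper would disappear entirely.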

Well, I think the title says it all...

 

There are many threads on the NI website about this topic, but none of them really shows how to deal with this "problem" correctly!

 

It is a very common task to synchronize an AI signal (let's say 0-10 V from a torque sensor) with a counter signal (e.g. the angular position of the drive that causes the torque). How do I correctly display the torque over the drive angle in an X-Y graph?
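The display half of that task is small once the two signals are synchronized - a sketch in Python/NumPy with synthetic data standing in for the DAQmx reads, assuming the AI task and the counter task share a sample clock so that sample i of each array was taken at the same instant:

```python
import numpy as np

n = 1000
angle_deg = np.linspace(0.0, 360.0, n)                 # counter: drive angle
torque_v = 5.0 + 2.0 * np.sin(np.radians(angle_deg))   # AI: 0-10 V torque

# With a shared sample clock the X-Y pairs line up one-to-one; the two
# columns below are exactly what an X-Y graph would plot (X = angle,
# Y = torque).
xy = np.column_stack((angle_deg, torque_v))
```

The hard part that a reference example should demonstrate is the acquisition side: configuring both tasks to use the same sample clock so this one-to-one pairing actually holds.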

 

It would be great if NI offered a reference example in the LV Example Finder showing how to solve such a task elegantly and efficiently.

 

I'm not sure if this is the appropriate place for this suggestion, but anyway...I would love to see this in the LV example finder!

 

Regards

A:T:R

Currently the "Bytes at Port" property applies only to serial ports. If you are trying to write a generic driver or communications package that supports multiple connection types, you cannot use "Bytes at Port", since it will not work for anything but a serial connection. There is a related idea here proposing that "Bytes at Port" be added to the event structure, and it also suggests expanding this to other connection types. My idea is that, at a minimum, the VISA property node be expanded.
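To illustrate the branching that generic driver code is forced into today, here is a sketch in Python using PyVISA naming (serial sessions expose a `bytes_in_buffer` property, other session types do not; the fallback chunk size is arbitrary):

```python
def read_available(session, chunk=4096):
    """Read whatever is pending from a session of any connection type."""
    pending = getattr(session, "bytes_in_buffer", None)  # serial-only property
    if pending is not None:
        return session.read_bytes(pending) if pending else b""
    # TCP/GPIB/USB sessions expose no pending-byte count, so generic code
    # has to fall back on a timeout-bounded read instead.
    return session.read_raw(chunk)
```

If the property existed on every session type, the `getattr` branch would collapse to a single line.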

The Problem


Doing a long finite acquisition in DAQmx in the manner shown below results in all the data from the acquisition residing in a 2D array of waveforms that the user must rearrange before working with it. Since a 2D array of waveforms is not really useful with any processing functions in LabVIEW, why not come up with an automatic way to get the datatype you want: a 1D array of waveforms, with the new samples appended to the Y array of the appropriate channel?

regular tunnel.png


2Dwaveform.png

 

The Solution


Give the user the ability to create a "waveform autoindexing tunnel" via a context menu option. This tunnel would automatically output the appended waveforms, one per channel. This could be done behind the scenes in the most memory-efficient way possible, saving users the headache of trying to modify the 2D array they currently get.
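What such a tunnel would do behind the scenes can be sketched in a few lines of Python/NumPy; the shapes are illustrative (3 channels, 100 samples per read, 5 loop iterations) and stand in for waveform data:

```python
import numpy as np

# Each DAQmx Read inside the loop yields one block of shape
# (n_channels, n_samples); a regular autoindexing tunnel stacks these
# blocks into a 2D array of waveforms.
blocks = [np.random.rand(3, 100) for _ in range(5)]

# The proposed waveform tunnel would instead append along the time axis,
# yielding one waveform per channel with all samples concatenated.
per_channel = np.concatenate(blocks, axis=1)   # shape (3, 500)
```

Doing this incrementally per iteration (rather than all at once at the end) is where the memory-efficiency question the idea raises comes in.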

waveformTunnel.png

 

1Dwaveform.png

 

 

Zoomed in Images


regularZoomed.png

 

 

waveformZoomed.png

 


It'd be great if NI Spy had an "always on top" selection under the View menu.

 

thanks! :)

Hello LabVIEW Experts,

 

I thought of this recently when I was setting up a test computer. I started up my LabVIEW project file and opened my host VI, only to find that I had some broken wires... DOH! After looking over MAX and googling a few error codes, I found that the test machine didn't have RT installed. That's when it hit me: wouldn't it be great if the VI had recognized that I was trying to connect to a cRIO chassis and given me a little pop-up telling me what software I was missing and where to get it? Sure would have made my day easier.

 

 

There are Express VIs for analog/digital/counter channels. How about Express VIs for configuring and acquiring data from an RS232 port?
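As a sketch of what the configuration page of such an Express VI would wrap, here is a Python illustration; the `parse_serial_config` helper and its shorthand format are hypothetical, while the setting names follow pyserial's conventions:

```python
def parse_serial_config(spec="9600,8,N,1"):
    """Turn the familiar 'baud,data bits,parity,stop bits' shorthand
    into port settings that a read loop could apply."""
    baud, data, parity, stop = spec.split(",")
    parity_names = {"N": "none", "E": "even", "O": "odd"}
    return {
        "baudrate": int(baud),
        "bytesize": int(data),
        "parity": parity_names[parity.upper()],
        "stopbits": float(stop),
    }

# The acquisition half would then open the port and read (requires
# pyserial and real hardware; the port name is a placeholder):
#   import serial
#   cfg = parse_serial_config("115200,8,N,1")
#   with serial.Serial("COM1", cfg["baudrate"], timeout=1.0) as com:
#       line = com.readline()
```

An Express VI could expose exactly these settings plus a termination character and timeout in its configuration dialog, the same way the DAQ Assistant does for channels.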

Hello,

 

I work as a LabVIEW integrator, and for one of my customers I develop applications with a maintenance goal for their installations all over the world, communicating over RS232 or Bluetooth for example, deployed on targets such as PDAs (Pocket PC 2003 & Windows Mobile 6) or Touch Panels (Windows CE 4.2 to 5.0) using the LabVIEW 8.6 Mobile Module.

This additional LabVIEW module works quite well for deploying the same application to these different targets, but I encountered some problems and have some suggestions for the Mobile Module:

 

1/ MMI (Man Machine Interface)

      -  for graphic indicators (such as Gauge or Slide), graphical objects like the ramp or multi-slider are deleted by LabVIEW when you build an EXE => I think graphical objects with all their graphical properties could be added to the Mobile Module (and work on all these targets). See examples in GraphicalObjects.PNG

      - as I said in my introduction, my application is deployed all over the world, and I usually modify the "Caption" dynamically for multilingual management of all my controls & indicators. But in the Mobile Module we can't do this (though we can modify Boolean text dynamically), so I think it should be made possible.

 

2/ VISA (& Bluetooth) Management

      -  I think there is a bug in the VISA installation. With the installer you can choose the installation directory (the default directory or a specific one in non-volatile memory), but if you don't install VISA support in the default directory it doesn't work (with Pocket PC 2003, for example). I think that could be resolved.

      -  As I said in the introduction, my application can communicate through different protocols (RS232, Bluetooth), and with the LV 8.6 Mobile Module we can now use VISA: great! But in Windows Mobile 6, VISA does not support Bluetooth (there seems to be an incompatibility with virtual aliases in the registry), while on Pocket PC 2003 it works very well. I think that could be resolved as well.

Imagine going into a customer site after a two-hour drive. Or worse, a two-hour drive after a six-hour flight.

And being able to connect your Android smartphone to the instrument in question.

Run a test or calibration program from a special GPIB controller that plugs into your phone's USB/charger port.

 

How much easier would that make field service?

I would like DSC to have the ability to use array variables from a PLC as shared variable arrays. This is currently not supported, and adding individual variables to then make up an array is cumbersome. Using methods other than shared variables is not as portable, and equally cumbersome.

Most of the time, I put remote devices on our internal network. If I set up a device so its IP address is dynamically assigned, the IP often changes. But the project needs me to point to a device via a specific IP address. If the IP address changes, I constantly have to update the remote device's properties in my project, often not realizing this is even an issue until a deployment fails because the device can't be found.

 

I'm proposing an idea to link to a device's alias (or some unique ID other than its IP) and allow the project to automagically update the IP address.
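Until the project can do this itself, part of the manual fix can be scripted outside LabVIEW - a sketch in Python using ordinary name resolution ("crio-9074-lab" is a hypothetical device hostname):

```python
import socket

def current_ip(alias):
    """Resolve a device's network name to whatever IP it currently holds."""
    return socket.gethostbyname(alias)

# e.g. current_ip("crio-9074-lab") would return whatever address DHCP
# last assigned, which a pre-deployment script could then patch into the
# project's target properties.
```

The idea here is the same one DNS solves in general: identify the device by a stable name and look up the volatile address at the last possible moment, which is exactly what the project should do on deploy.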

 

 

 

When using the ECU M&C toolkit to handle XCP communication on an XNET interface port (using the xxx:yyy@nixnet syntax), I've found that disconnecting and closing the XCP connection will cause the whole XNET interface to stop, even if other XNET sessions are still using it.

 

This seems to be bad behavior for an XNET port. Typically, when a session is closed, the interface is not closed until every other session is done using it. In this case, though, the interface gets closed without regard to other XNET sessions that may be running on the same port, and they consequently stop receiving or transmitting CAN frames, without any indication of why.

 

This is particularly problematic for me if I am using a Stream In session to log all messages on the interface.  If a self-contained XCP code module is run (Open, Connect, Read, Disconnect, Close), then the interface gets closed/stopped, and the Stream In session stops receiving frames.

 

I believe this issue is happening with the NI Automotive Diagnostic Command Set (ADCS) as well. It also seems to close the interface when a diagnostic session is closed (see here and here).

Can we have access to MAX via LabVIEW, so that I can delete an NI cDAQ device?

 

Jürgen

Using MAX IMAQdx Devices or the IMAQdx Open Camera.vi, certain USB DirectShow cameras always appear as connected or available when they are not. Simply installing the vendor's DirectShow driver makes them appear in the available-camera list. In MAX, under Devices and Interfaces > NI-IMAQdx Devices, clicking on such a camera's name results in an error, and a red X subsequently appears next to its name. Closing and reopening MAX produces the same result. In LabVIEW, IMAQdx Open Camera.vi also shows such cameras in its Camera Name list. A webcam, however, does appear in and disappear from the MAX IMAQdx Devices list and the IMAQdx Open Camera name list as it is connected and disconnected.

 

IMAQdx Enumerate Cameras.vi with Connected Only set True will also list such cameras in its output Camera Information Array.

 

According to one vendor : "USB cameras that implement their DirectShow driver as a data source filter and register the filter as a "video capture device" will always shows the device (filter) no matter whether the physical device is connected or not. This contrasts with many other USB cameras (e.g. all web cameras) which implement their DirectShow support with a WDM stream driver."

 

This makes it difficult to display a list of connected cameras for users to select from. The Camera Information Array output of IMAQdx Enumerate Cameras.vi can be looped over with IMAQdx Open Camera.vi, checking each Open error output and building a list of actually-available cameras from that (see the attached example), but it would be useful if both MAX and IMAQdx Open Camera.vi's Camera Name list did this automatically.

Right now you have to right click on every individual control/constant and open the Filter Names menu as described in this KB. I'd rather be able to just always see all the options with a checkbox like this:

 

Filter Names.png

 

Alternatively, this could be implemented as a global setting in the Tools » Options menu; either way, I wouldn't have to change every new VI just to see certain clock sources.

I would like the ability to select which network interface UDP writes go out on. Presently the UDP functions in LabVIEW let you select the interface for reads, but writes automatically go out the "default interface". A system consisting of a wired card, a wireless card, and a virtual interface brings about this need. I can manually turn off the virtual interface in the network control panel, but there should be a way to just output to the device one wants.
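For reference, the plain-sockets version of the requested behavior, sketched in Python (the addresses are placeholders): binding the socket to one interface's local address before writing forces datagrams out through that interface, which is exactly the control missing from UDP Write.

```python
import socket

def udp_send_via(local_ip, dest, payload):
    """Send one datagram out of the interface that owns local_ip."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((local_ip, 0))   # port 0 = any free local port
        sock.sendto(payload, dest)

# e.g. udp_send_via("192.168.1.50", ("192.168.1.20", 5005), b"hello")
# would go out the wired card (the interface holding 192.168.1.50)
# instead of whatever interface the OS routing table picks by default.
```

A "net address" input on UDP Write, mirroring the one UDP Open already offers for reads, would map directly onto this bind-before-send pattern.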

I'd like to have a way to give the FPGA on my PCIe card direct access to a block of the host PC's RAM.  At the moment, the FPGA is limited to its internal RAM and whatever might be on the PCIe card. With my PCIe-7841, I have about 1MB available to the FPGA.  If I need more, I have to use DMA FIFO transfers - the FPGA can use one FIFO to ask the host for some data and the host can send it to the FPGA in another FIFO.  This is a lot of overhead compared with simply using a memory method node to access the FPGA RAM.

 

So how about a method to allocate a block of memory in the host's RAM that the FPGA can access directly over the PCIe bus, with minimal involvement by the host? For simplicity it would probably need to be a contiguous block, so that there are no gaps in the addresses - the FPGA would only need to know the start address and the number of bytes in the block. Ideally, safeguards would ensure the FPGA doesn't access memory outside the allocated block, but leaving that to the LabVIEW programmer would be fine.