Welcome to NI Labs 3D Visualization Demos

We have downloaded and reviewed the NI Labs 3D Visualization Demos, and we have some questions and issues.  We found two examples in the demo package: a VI called Software.vi that uses an Express VI and a 3D Picture Control, and a VI called example_1.vi that uses a new object called a 3D surface plot.  Because these are different approaches, it is not clear which direction NI will take in upcoming LabVIEW releases.  Is the intent to continue developing the 3D surface plot, or to merge its functionality into the 3D Picture Control?

We would like to develop an application that maps data onto a 3D object, an object that we have exported from a CAD package.  Software.vi, the example that uses the Express VI, seems to be close to having the functionality we need.

3D Visualization Express VI (Software.VI) Questions & Issues

  1. It appears that the Sensor Values input on the Express VI is designed to handle floating point values scaled from 0 to 1.  True?
  2. The Express VI's dialog box is able to load and display a 3D object from an object file.  The example uses an STL file.  Was the intent to have it handle both STL and VRML files?  When we tried loading our VRML 2.0 file, the Express VI's dialog box locked up.  We even tried a VRML file from an example and that didn't work either.
  3. The Help button on the Express VI's dialog box does nothing.
  4. The 3D Picture Control object on the Express VI's dialog box gives you no way to zoom.  It appears that the Camera Control is set to "Flying" instead of "Spherical".  Because of this you cannot place sensors on the back side of an object with the point-and-click sensor placement feature.
  5. The Express VI provides no way to input sensor locations.  The point-and-click interface is imprecise, but OK for a small number of sensors.  In our case, we have a test piece with hundreds of sensors, so the point-and-click interface would be cumbersome to use.  We need an input terminal for the sensor locations, or a way to generate code from the Express VI for better control and feature access.
  6. We cannot determine if the color mapping interpolation is working properly.  We tested it by loading an STL model of our test piece which looks like a piece of pipe with flanges.  We used the point-and-click feature to locate four sensors evenly spaced along the length of the pipe.  Changing the sensor values on the two end sensor locations produced a color change like we would expect.  Changing the sensor values on the two middle sensor locations produced only a barely noticeable color shade - even with extreme sensor value changes.


Message Edited by Ron @ Bechtel-Bettis on 05-29-2008 10:35 AM
Message 21 of 27
After spending more time with the Software.vi demo, we found that we were able to convert the Express VI by clicking View - VI Hierarchy and then double-clicking on the Express VI.  This action popped up a dialog box asking us if we wanted to convert the VI.  We then converted and extracted the primary VI.  In working with the extracted VI, we learned that:
  1. The example does expect sensor values scaled from 0 to 1.
  2. The example only handles STL files, not VRML.  We tried editing the VI to be compatible with VRML, but could not figure out how to extract SceneMesh data from the VRML file in a way compatible with this VI's existing code structure.
  3. Our color mapping interpolation was a problem with the meshes defined in our STL files.  We are currently working to generate and test new 3D models with a finer and more evenly-sided mesh.
  4. The example carries the sensor location coordinates in a cluster array called "Long Term State", but doesn't use them inside the VI.  We conjectured that this is probably some vestige of programming that is not quite finalized.
  5. To calculate values to interpolate the colors onto the 3D object, the example uses a cluster called Sensor-to-Vertex, which holds a calculated weighted average of sensor measurements for every 3D object vertex location.  We came up with our own VI that takes an input array of sensor coordinate locations and computes the Sensor-to-Vertex cluster.  This cluster is then passed to an internal VI in this example, named simply "uhh", which outputs an array of reals that is subsequently used for color scaling.
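For reference, the Sensor-to-Vertex weighting step in item 5 can be sketched in text form (LabVIEW diagrams can't be pasted here).  This assumes inverse-distance weighting, which matches the "weighted average of sensor measurements" description but may not be the demo's exact formula; all names here are hypothetical:

```python
import math

def sensor_to_vertex_weights(vertices, sensors, power=2.0):
    """For each vertex, compute normalized weights over all sensors.

    Inverse-distance weighting is an assumption; the demo's own formula
    may differ.  vertices and sensors are lists of (x, y, z) tuples.
    """
    weights = []
    for v in vertices:
        raw = []
        for s in sensors:
            d = math.dist(v, s)
            if d == 0.0:
                # Vertex coincides with a sensor: that sensor dominates.
                raw = [1.0 if t is s else 0.0 for t in sensors]
                break
            raw.append(1.0 / d ** power)
        total = sum(raw)
        weights.append([w / total for w in raw])
    return weights

def interpolate(weights, sensor_values):
    """Weighted average of sensor values (0..1 scale) at every vertex."""
    return [sum(w * v for w, v in zip(row, sensor_values)) for row in weights]
```

Feeding the resulting per-vertex values into the color scaling is then a separate step, as in the example's "uhh" VI.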
Message 22 of 27
We've continued making extensive modifications to the converted Express VI in the NI Labs example, Software.vi.  As we've dissected this program, we've come to understand more of its nuances and have built a sample LabVIEW application that overlays color-interpolated real-time data from a Shared Variable onto a 3D object.  We also read in our sensor locations (sensor names with Cartesian coordinates) from a .csv file, rather than using the point-and-click method from the Software.vi example.
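A minimal sketch of that file-reading step, written in Python since a LabVIEW diagram can't be pasted here; the one-sensor-per-row "name,x,y,z" column layout is an assumption about the file described above:

```python
import csv

def parse_sensor_rows(lines):
    """Parse 'name,x,y,z' rows into {name: (x, y, z)}.

    The column layout is an assumption; adjust it to match your file.
    """
    sensors = {}
    for row in csv.reader(lines):
        if not row or row[0].startswith("#"):
            continue  # skip blank and comment lines
        name = row[0].strip()
        sensors[name] = tuple(float(c) for c in row[1:4])
    return sensors

def read_sensor_locations(path):
    """Load sensor locations from a .csv file on disk."""
    with open(path, newline="") as f:
        return parse_sensor_rows(f)
```

Splitting the parser from the file open makes it easy to test against in-memory rows before pointing it at the real sensor file.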
 
What we would like to do next is plot points (marker dots) onto the 3D object to represent the sensor locations.  It appears that the 3D visualization demo package may include a VI that does that.  It is called Pick.vi and is located in C:\Program Files\National Instruments\LabVIEW 8.5\user.lib\_express\_SensorConfig.llb.  We tried to look at the block diagram, but it is password-protected.  Can anyone help?
Message 23 of 27
That particular VI was made private for various reasons.  However, the basic idea of what it does is this:

It takes a point in window coords, casts a ray through the scene, and returns the first point on the model it hits.

Based on what you are doing, it sounds like you are pre-loading the sensor coordinates from a file.  If you already have the sensor coordinates, you shouldn't need to do any of that.  You should be able to create a sphere primitive (or whatever shape you want to represent your sensor), attach it to the root of the scene, and do a "Set Translate" to move the sphere/sensor where you want it to go.

As for your previous posts:
1. I believe if you edit the range of the color ramp on the configuration page of the express VI, it should correctly interpolate your input values based on that range.
2. Correct, the NI Labs demo only supports STL files.
3.  Based on how the interpolation works in the demo, you are right.  Basically if you have a sparse model, or place a sensor where there is no vertex, you won't see the extreme 'hot' spots.  To help remedy this you can either place your sensors closer to vertices, or make a denser mesh for your model.
4. The camera controller on the configuration page should by default be set to spherical.  This should allow you to rotate with left mouse, zoom with shift+left mouse, and pan with ctrl+left mouse.  However any newly created 3D picture control defaults to having no camera controller, so you have to set that manually.
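A minimal sketch of the range-based color interpolation described in item 1, assuming a simple two-color linear ramp with clamping (the Express VI's actual ramp may use more color stops):

```python
def ramp_color(value, lo, hi, cold=(0, 0, 255), hot=(255, 0, 0)):
    """Linearly map a measurement onto a two-color ramp.

    lo/hi stand in for the range edited on the Express VI's
    configuration page (an assumption about how that edit works);
    values outside the range are clamped to the ramp ends.
    """
    t = 0.0 if hi == lo else (value - lo) / (hi - lo)
    t = min(1.0, max(0.0, t))  # clamp to [0, 1]
    return tuple(round(c0 + t * (c1 - c0)) for c0, c1 in zip(cold, hot))
```

With the range set to match your data, raw sensor values no longer need to be pre-scaled to 0..1 by hand.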
Message 24 of 27
We were able to do what we wanted to do without needing the ingredients of Pick.vi.  We were able to effectively plot "points" representing the sensor locations (from our input file of coordinates) by creating small near-zero-height cylinders (primitives), using the Object.Add Object invoke node to add them to the Scene, and then translating each of them to the 3D object's surface.
Message 25 of 27
"We were able to do what we wanted to do without needing the ingredients of Pick.vi. "
 
So problem solved?
 
Ben
Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI and Prepper
Message 26 of 27

I read some insightful posts above, but can anyone explain to me how Software.vi implements point-and-click selection?

The pick function from OpenGL casts a ray and returns the first object it crosses; that is, the meshed object.  Where do you go from there to find the actual point coordinates?  A meshed object has many points, so I don't think it would be feasible to plot the points as separate individual objects.
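For what it's worth, the "cast a ray, return the first point on the model it hits" step described earlier in the thread can be sketched in plain Python, assuming you can iterate the mesh's triangles yourself (this is only an illustration of the math, not the demo's protected implementation; the standard ray/triangle test here is Moller-Trumbore):

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.

    Returns the distance t along the ray to the hit, or None on a miss.
    orig, dirn, and the vertices are (x, y, z) tuples; dirn need not be
    normalized, but t is then in units of its length.
    """
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(dirn, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None                      # outside first barycentric bound
    qvec = cross(tvec, e1)
    v = dot(dirn, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None                      # outside second barycentric bound
    t = dot(e2, qvec) * inv
    return t if t > eps else None        # hit must be in front of the ray

def pick(orig, dirn, triangles):
    """Nearest hit point on the mesh, or None if the ray misses."""
    hits = [t for tri in triangles
            if (t := ray_triangle(orig, dirn, *tri)) is not None]
    if not hits:
        return None
    t = min(hits)
    return tuple(o + t * d for o, d in zip(orig, dirn))
```

Testing every triangle and keeping the smallest t gives the exact surface point, so the vertices never need to be plotted as separate objects.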

Thank you all.

gigiozzi

Message 27 of 27