Vision Idea Exchange


Calling a Python script from VBAI requires many steps. Please make it simpler and more intuitive to run a Python script directly from VBAI. Even limiting the return types to the data types currently available in VBAI would be fine. This would allow VBAI to use OpenCV more directly and more intuitively.

 

Today, the only way to call a Python script is to build the call through LabVIEW, then build the LV VI for distribution to VBAI, then use a call-LV step in VBAI. For simple Python scripts (e.g. manipulating text strings; moving, renaming, or deleting files; getting values from dictionaries), this is a lot of unnecessary steps that keep people from using Vision Builder. Even seasoned LabVIEW users often have difficulty building a LV step such that it can be accessed by VBAI.
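For context, here is a minimal sketch (plain Python, with hypothetical file paths and function names) of the kind of trivial script people want to call directly from a VBAI step today, including a small OpenCV call; none of this is an NI API.

```python
# Minimal sketch of the kind of standalone Python script a VBAI step could call
# directly (hypothetical example; file paths and function names are illustrative).
import os
import cv2  # assumes opencv-python is installed


def rename_result(src_path, lookup):
    """Rename a result file using a value pulled from a dictionary."""
    suffix = lookup.get("part_id", "unknown")
    dst_path = os.path.join(os.path.dirname(src_path), f"result_{suffix}.png")
    os.rename(src_path, dst_path)
    return dst_path


def count_edges(image_path):
    """Return a single numeric result that maps onto an existing VBAI data type."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)
    return int((edges > 0).sum())
```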

 

Critical for debugging: during a run, the VBAI step should also return the name and path of the Python environment being used.

 

 

Bonus: A simple Python editor launched from VBAI would be ideal (such as the free version of Visual Studio Code).

 

 

 

<rant>

A user of mine, who had tried software I released as an executable using the trial license NI offers when first installing the Vision Development Module RTE, decided to go ahead and purchase a single-user license from NI to be able to continue using it (note: I don't get a dime from this transaction, and in fact I would argue that, being a research-only tool, there should be no "Deployment License Fee" charged to any user, but that is another topic).

The user called NI, who told her that the product would ship in 3 days from now. The user reported this to me, and I immediately called NI to figure out what was going on. After all, the deployment license is a simple license number and there is no need to ship anything, since the RTE can be downloaded for free on NI's website.

 

Well, things are different at NI.

 

Yes, the website confirms that a deployment license doesn't need any delivery ("no media" is the only choice on the product page: http://www.ni.com/en-us/shop/select/vision-development-module). But still, when you purchase a license on Tuesday, it only "ships" on Friday.

I had to talk to two NI employees before I was offered a "trial license extension" for my user (with no admission that needing three days to generate a software license is backwards).

I certainly will not go through this a second time, and will clearly spell out the antiquated process that any potential user of my software will need to go through, should they desire to purchase a license...

<end of rant>

 

Idea: Make it simple to get a Vision Development Module deployment license (as in quick email delivery shortly after online purchase).

The zoom tool of the image display control is antiquated. Currently (Vision 2016), you select it and then click in the image to zoom by a fixed amount.

Consider upgrading its capabilities to match modern UIs:

- Selecting a rectangular ROI with the zoom tool should zoom to that ROI, i.e., the ROI should be expanded to fill the image display control.

- Support the mouse wheel (and touch screen, if needed) for zooming in and out, no matter which tool is selected.

- Support a fixed-aspect-ratio hot key (e.g. the Ctrl key): zooming with the above ROI selection while the hot key is pressed fills the display using the largest (or smallest) dimension of the ROI (see the sketch below).
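As a rough illustration of the fixed-aspect-ratio behavior, here is a small sketch of the zoom-factor math; the function and parameter names are made up for illustration.

```python
# Sketch of the zoom-to-ROI math: scale so the ROI fills the display either by
# its largest dimension (fit) or its smallest (fill). Names are illustrative.
def roi_zoom_factor(roi_w, roi_h, view_w, view_h, fit_largest=True):
    sx, sy = view_w / roi_w, view_h / roi_h
    return min(sx, sy) if fit_largest else max(sx, sy)
```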

 

Feel free to add suggestions below.

 

With the release of LV 2017, and therefore Vision 2017, I expected the 2 GB limit to be gone.

Unfortunately, and with great disappointment, I have found that this is not the case: it is still not possible to create or handle image files greater than 2 GB, even in 64-bit LV 2017.

The latest linear image sensors go up to 32,000 pixels (per line and per channel). We currently use a 16K sensor in our camera and can easily exceed the 2 GB limit when we scan long materials, especially if 16-bit precision is required and RGB color is also needed. This means we had to implement several tricks to break down the image size (we process each RGB color layer as a separate image file and divide the image into several chunks), but we still experience many bottlenecks and are frustrated in development because we cannot use many of the existing Vision tools that are limited to 2 GB. And of course the Vision/IMAQ libraries cannot process an image whose data is distributed across different images, so we had to write our own libraries (even for image viewing) to handle the images properly.

Furthermore, although the TIFF format is limited to 4 GB (not 2 GB), it is relatively easy to split an image into separate TIFF files, while it is not easy to handle or process image data in separate chunks or channels. There are also other image formats that allow very large file sizes (e.g. BigTIFF, which is extremely easy to implement since it is only a 64-bit extension of the current 32-bit offsets in the TIFF format).
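As a point of reference, here is a minimal sketch of writing BigTIFF from Python using the third-party tifffile package (assumed installed); it only illustrates that the 64-bit extension removes the classic TIFF file-size ceiling, and is not an IMAQ/Vision call.

```python
import numpy as np
import tifffile  # third-party package, assumed installed

# 16-bit RGB strip from a 16K line-scan sensor (sizes here are illustrative)
strip = np.zeros((4096, 16_384, 3), dtype=np.uint16)

# bigtiff=True switches the writer to 64-bit offsets, so the file format itself
# no longer imposes the 4 GB ceiling of classic TIFF
tifffile.imwrite("scan_bigtiff.tif", strip, bigtiff=True)
```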

 

So my suggestion is: break the 2 GB limit and finally allow handling very large image files directly in LabVIEW and the Vision libraries!

 

Lately I've been working on projects that use high-resolution cameras (10+MP). Typically I use the Vision Assistant to quickly prototype a vision inspection, then rewrite the inspection in LabVIEW.

 

When working with large images, I often need to pan around to different parts of the image. However, the only way to pan the image in the Vision Assistant is by using the scrollbars, which often feels clunky. I would love to be able to navigate around the image more easily and fluidly.

 

A few possible ideas:

  • Add the Pan tool to the toolbar, like the Image Display indicators have in LabVIEW, so that the image can be navigated by clicking + dragging
  • Click the middle mouse-wheel to toggle panning with the mouse position, like in Chrome/Firefox/Excel
  • Click + hold the middle mouse button to drag and pan the image, like Paint.NET
  • Change the behavior of the mouse scroll wheel to align with familiar software, like Chrome/Firefox and Paint.NET, instead of having the scroll wheel only zoom (see the sketch after this list), that is:
    • Scrolling up/down would pan the image up/down
    • Shift+scroll wheel would pan the image left/right
    • Ctrl+scroll wheel would zoom in/out
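A hypothetical dispatcher for the wheel behavior described above might look like the sketch below; the view object and its pan/zoom methods are imagined, not part of any NI API.

```python
# Hypothetical dispatcher showing the proposed wheel behavior; "view" is an
# imagined object with pan/zoom methods, not an existing NI interface.
def on_scroll(view, delta, ctrl=False, shift=False):
    if ctrl:
        view.zoom(factor=1.1 if delta > 0 else 1 / 1.1)  # Ctrl+wheel: zoom in/out
    elif shift:
        view.pan(dx=-delta * 40, dy=0)                   # Shift+wheel: pan left/right
    else:
        view.pan(dx=0, dy=-delta * 40)                   # plain wheel: pan up/down
```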

I've been expecting this for a long time... but with each new LabVIEW release I'm disappointed...

Although 64-bit LabVIEW and the Vision libraries have been available for a long time now, it is still not possible to allocate more than 2 GB for a single image.

 

It would be nice to remove that limitation, or at least extend it!

 

Current situation:

More and more image sensors support bit depths of more than 16 bits.

Just two examples: Aptina MT9M034, OmniVision OV10640.

 

Currently it is not possible to handle images from those sensors within LabVIEW except as a 2D array/matrix.

Using the SGL image type seems possible at first sight, but it has several disadvantages: storing those images is not possible, and many image processing and calculation functions do not support SGL.
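For illustration, this is roughly what the current 2D-array workaround looks like from Python/NumPy (the file name and bit depth are made up); it shows why native U32/I32 support would be more convenient.

```python
# Sketch of the current workaround: treat the >16-bit raw frame as a plain
# 2D array outside the IMAQ image type (file name and frame size are made up).
import numpy as np

width, height = 1280, 960
raw = np.fromfile("frame_20bit.raw", dtype=np.uint32).reshape(height, width)

# Keep full precision for processing; only rescale when an 8/16-bit view is needed
preview_u16 = (raw >> 4).astype(np.uint16)  # drop the 4 LSBs for a 16-bit preview
```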

 

Improvement:

Add support for the image types U32, I32, and RGB128. Personally, I see the priority on U32 and I32, because those types are needed to handle the raw data from the sensors.

Very important: do not just add those types to the typedef! There must be real support in all basic functions! Basic functions are, in my opinion / for my usage: write to file, read from file, all functions in the "Vision Utilities", "Processing", "Filters", and "Operators" palettes, and the functions "Histogram", "Histograph", "Line Profile", "ROI Profile", "Linear Averages", and all other statistics functions.

This idea is something I have implemented in-house; posting it is motivated by the following discussion:

 

http://forums.ni.com/t5/Machine-Vision/Simulate-GigE-Vision-Basler-Runner-Camera-ruL2098-10gc/m-p/3076940#M44384

 

I would like the ability to add a simulated IMAQ camera in MAX to use with the IMAQdx driver.

The camera should be able to specify whether it is line scan or area scan, as well as sensor size and specifications. The simulated device should support the basic property nodes (AOI, exposure, line or frame rate... it could even have an XML for GenICam). Ideally, the source of the images (a directory) could be specified so that grabs or snaps would play back previously acquired images, or a test pattern could be selected instead.
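A rough sketch of the desired playback behavior, written as a small Python class (the class, its methods, and the image directory are all hypothetical; this is not an existing IMAQdx interface):

```python
# Hypothetical simulated camera that replays previously acquired images.
import itertools
from pathlib import Path

import cv2  # used only to load the stored images


class SimulatedCamera:
    def __init__(self, image_dir, line_scan=False):
        files = sorted(Path(image_dir).glob("*.png"))
        self._frames = itertools.cycle(files)  # loop the recorded sequence
        self.line_scan = line_scan

    def snap(self):
        """Return the next stored frame, as a real snap/grab would."""
        return cv2.imread(str(next(self._frames)), cv2.IMREAD_UNCHANGED)
```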

 

I might be asking for too much, but this would allow offline testing of vision systems, something that is common after deployment.

This idea is to have an option on the Image indicator to change it to an Image Display.

Currently, to get an Image Display, the developer needs to right-click the Image indicator on the front panel, choose Replace, select Vision, and then select the kind of display needed.

Image Display.PNG

 

If the right-click menu offered Image Display as an option by default, it would be easy to switch in a single selection.

 

 

Problem:

I need to acquire images from a camera and rotate them 90 degrees counter-clockwise on a cRIO-9034 target. Rotating the image on the camera sensor is not supported by the camera.

 

Current situation:

IMAQ rotate.png

It is currently possible to rotate an image using “IMAQ Rotate.vi”. This VI has a floating point input angle which can be set to 90 degrees. It is however a bit slow (around 80 ms on a cRIO-9034 for a 3840x1200 image). It should be possible to implement a faster way of rotating an image when the angle is in increments of 90 degrees.

 

My suggestion:

Create a new IMAQ vi for rotating images in increments of 90 degrees. The VI could be similar to "IMAQ Symmetry.vi" which can be used to flip images horizontally and vertically.
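For comparison, dedicated 90-degree rotations in other toolkits are essentially index remapping rather than interpolation, which is why they can be much faster; a small NumPy/OpenCV sketch, purely as a reference point:

```python
# Illustration of how cheap a dedicated 90-degree rotation can be compared to an
# arbitrary-angle rotation (NumPy/OpenCV shown only as a reference point).
import numpy as np
import cv2

frame = np.zeros((1200, 3840), dtype=np.uint8)  # 3840x1200 example frame

rotated_view = np.rot90(frame)                  # 90 deg CCW as a view, no pixel copy
rotated_copy = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
```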

 

Similar to the Draw Line/Draw Multiple Lines functions available for the Picture control, it would be good to have line styles (as shown in the attached image) for IMAQ Overlay Line and IMAQ Overlay Multiple Lines 2.
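For what it's worth, OpenCV has no dashed-line primitive either; the usual workaround is to draw the dashes as short segments, which is exactly the kind of boilerplate a style input would remove. A hedged sketch (the function name and defaults are made up):

```python
import numpy as np
import cv2


def dashed_line(img, p1, p2, color, dash=10, gap=6, thickness=1):
    """Draw a dashed line by stepping along the segment and drawing short dashes."""
    p1, p2 = np.float32(p1), np.float32(p2)
    length = float(np.linalg.norm(p2 - p1))
    direction = (p2 - p1) / max(length, 1e-6)
    pos = 0.0
    while pos < length:
        a = p1 + direction * pos
        b = p1 + direction * min(pos + dash, length)
        cv2.line(img, tuple(int(v) for v in a), tuple(int(v) for v in b),
                 color, thickness)
        pos += dash + gap
```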

When Vision Assistant steps are converted to a LabVIEW VI, the result overlay (e.g. detected edges, particle counts) is lost.

no overlay.png

 

Adding the overlay functionality to the VI requires not only LabVIEW skills but also Vision knowledge, because of the image memory handling.

So it would be nice to have an option in Vision Assistant to keep the overlay functionality even when converting to a LabVIEW VI, to make the transition smooth.

 

Ryuji Kuwajima

I have run into this problem with the Vision run-time and Vision Acquisition run-time ever since I began using LabVIEW. The problem is that every time I upgrade my development software, for example from Vision Development Module 2016 to 2017, and then update my customers, they need to actively re-license both the VDM run-time and the VA run-time. This makes no sense, since both run-times were already activated in the previous version. It is a real issue when you are updating software on customer machines that have no internet access. The Vision Development run-time licenses should automatically transfer from version to version, since the Vision run-time keys are good indefinitely for updates. This would prevent customers from having to re-license yearly as they do now, which in my opinion does not make sense. At the very least, there could be a way to add the 20-character code programmatically when updating versions, so users don't have to work their way through NI's license manager.

I'm interested in experimenting with the Vision Development Module as a means of learning more about computer vision (not for school or anything) and I think it would be a great idea to make it available free of charge under the Community Edition license. Image acquisition hardware is widely available at very affordable prices, and computer vision has many potential hobbyist applications.

This is probably not a simple thing for NI to implement, but I always thought it would make sense to make the IMAQ image a native data type. When I first started using LabVIEW, it took me a while to get used to the image data type and to understand that it is just a pointer to a named image in memory, and that the pointer is passed between image operators. I am used to it now, but it still seems clunky to me. Why can't an image constant or control implicitly create an image, with images implicitly destroyed at the end of the program? Branching a wire would create two copies of an image that could be operated on independently. At the moment my imaging code always looks quite messy, because I often need to create temporary image buffers for intermediate operations.
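A quick NumPy analogy (not LabVIEW code) for the two semantics involved: today the IMAQ wire behaves like the shared handle below, while the requested branch behavior corresponds to an independent copy.

```python
# NumPy analogy for the requested wire semantics: today an IMAQ wire behaves
# like the shared handle, while branching would ideally behave like the copy.
import numpy as np

image = np.zeros((480, 640), dtype=np.uint8)

shared = image            # handle semantics: edits show up under both names
shared[0, 0] = 255
assert image[0, 0] == 255

branch = image.copy()     # value semantics: an independent buffer per branch
branch[0, 1] = 255
assert image[0, 1] == 0
```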

 

I think this would make the imaging tools a more holistic addition to LabVIEW and make them feel less like something adapted from another language. Sure, there would be some impact on performance, but isn't that why we use LabVIEW? Simpler programming at a higher level.

There are multiple constants that can be configured in Vision Assistant but that are not available inside LabVIEW.

 

For example, IMAQ GetKernel is severely limited in kernel size (3, 5, 7). Nowadays, images are larger and computers are faster, so we often require larger kernel sizes. It is possible to create larger kernels in Vision Assistant, so porting that functionality should not be a problem.
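For a sense of scale, larger kernels are routine in other toolkits; here is a short sketch building a 15x15 Gaussian in NumPy and applying it with OpenCV's filter2D (the input file name is illustrative):

```python
import numpy as np
import cv2

ksize = 15
g = cv2.getGaussianKernel(ksize, sigma=3.0)  # 15x1 Gaussian, normalized
kernel = g @ g.T                             # 15x15 separable Gaussian kernel

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name
smoothed = cv2.filter2D(image, -1, kernel)
```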

 

Also, Vision Assistant provides more palettes than IMAQ GetPalette (DICOM Hot Iron, DICOM PET, etc.).

 

In both cases, it is possible to generate code from Vision Assistant and copy-paste the constants, or create "custom" versions of the VIs. But this is such a waste of time.

 

It would be _really_ helpful if LabVIEW could read/write MP4/H.264 files, so please could that be added to a future version? I'm working with a company that generates long MP4 files that are quite efficiently compressed. These need to be analysed using LabVIEW, but creating an MJPEG version just so that LabVIEW can open it wastes a lot of time and space, and needs another software install. It's embarrassing to say that LabVIEW just can't do it. P.S. Please can we have the ability to read/write data fields on AVI files put back into the latest versions? Thanks
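For reference, this is the kind of access being asked for, shown with OpenCV's VideoCapture in Python (the file name is illustrative); the request is to have an equivalent in the native Vision AVI VIs.

```python
# Reading an H.264 MP4 frame by frame with OpenCV, as a reference point for the
# kind of access requested from the Vision AVI VIs.
import cv2

cap = cv2.VideoCapture("inspection_run.mp4")  # illustrative file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ... analyse the frame here ...
cap.release()
```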

Functions* that insert overlays into images can define a group, so later you can, for example, delete the whole overlay group at once. Currently there is no VI that can return a list of all group names present/used in an image (if you, say, load an image saved with overlays and want to know the group names so you can access their properties). It could look like this:

 

group.png

Then it would also be easy to get the properties of all the overlay groups (with the IMAQ Get Overlay Properties VI)... This VI could also return an array of integers giving the number of bytes each overlay group uses in memory - for debugging purposes, or if you have memory shortage problems...
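A hypothetical shape for the proposed call, sketched in Python (the function, its argument, and the byte accounting are all made up to show the intent; this is not an NI VI):

```python
def get_overlay_group_info(image_overlays):
    """image_overlays: dict mapping group name -> list of serialized overlay items."""
    names = list(image_overlays.keys())
    bytes_per_group = [sum(len(item) for item in items)
                       for items in image_overlays.values()]
    return names, bytes_per_group
```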

 

 

 


*

IMAQ Overlay Arc
IMAQ Overlay Closed Contour
IMAQ Overlay Line
IMAQ Overlay Oval
IMAQ Overlay Points
IMAQ Overlay Rect
IMAQ Overlay Text

...

It would be very nice to be able to document the state diagram in Vision Builder. At this point the only thing we have is the state name, which has to be descriptive; that's not a lot...

 

Ideally, we would have all the documentation options we have on a LabVIEW diagram: decorations, bookmarks, text with a connected arrow, etc.

 

The use case is to be able to document the state diagram like a state machine; this is really needed to document inspection templates.

When I have a camera with a parameter out of range, I cannot open a connection to it with NI-IMAQdx; the only way to connect to the camera is to use other Pleora software, such as GevPlayer. That software can open the connection, and it puts a sign (!) in front of any parameter that is outside its range.
My idea is to allow the connection to the camera to open, with the result being not an error but a warning indicating the first parameter that is out of range; with this we can correct the parameter and continue capturing images.
The function IMAQdx Configure Grab.vi should return an error if a parameter is out of range, but IMAQdx Open Camera.vi should not return an error, only a warning, as said above.
The same applies to CameraValidator, which only returns an error code meaning "unable to get attribute value".
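A behavior sketch of the requested open-with-warnings semantics, in Python pseudocode (every name here is invented; it only describes the intended behavior, not an IMAQdx call):

```python
# Hypothetical behavior sketch: the open succeeds, and out-of-range attributes
# come back as warnings instead of a hard error (names are made up).
def open_camera(session, attributes, limits):
    warnings = []
    for name, value in attributes.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            warnings.append(f"{name}={value} outside [{lo}, {hi}]")
    return session, warnings  # caller can fix the flagged attributes and continue
```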