kl3m3n's blog

Klemen
6534 Views
10 Comments

In one of my previous posts, I tested the OpenCV SIFT algorithm. I have since cleaned up and improved the code and used a couple of different input images. The results are shown below.

I am also attaching the C++ source code, the DLL and the LabVIEW code that calls the DLL. For an image of VGA size (640x480 pixels) the SIFT algorithm takes about 500 ms (with my poor coding, at least). Modifying the code to use the SURF algorithm instead is simple. Note that the algorithm uses default parameters that cannot be altered with the attached code, but extending the code so that the parameters can be changed is also simple. The code is compiled with VS2010 x86 and OpenCV 2.4.5. Add "...\opencv\build\x86\vc10\bin" to the system path or recompile the source code yourself.

1.png

2.png

3.png

4.png

Be creative.

Post edited on 30.8.2013

I noticed that I made an error in the previous code (in the LV example program the u offset of the matched points was off; the "x image size" wire was connected from the wrong image). I have corrected this and updated the code so you can modify the parameters of the SIFT algorithm in LabVIEW. The code also calculates the homography, which can be used to map the corresponding points from one image to the other (see the image below). This can be used for object tracking.

sift_homography_openCV_FP.jpg

Note that the DLL was built with OpenCV 2.4.6, so you need to install it (and add it to the system path) or recompile the source code yourself. The LV example (LabVIEW 2010) is also attached.
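For reference, here is a minimal sketch of how the detection, matching and homography steps can be chained with the OpenCV 2.4 API. This is not the code of the attached DLL; the function name and the RANSAC threshold are my own illustrative choices.

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>   // SIFT lives in the nonfree module in OpenCV 2.4
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Hypothetical outline: detect and describe SIFT keypoints in both images,
// match them and estimate the homography with RANSAC.
cv::Mat matchAndFindHomography(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::SIFT sift;                                     // default SIFT parameters
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift(img1, cv::noArray(), kp1, desc1);             // detect + describe
    sift(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (size_t i = 0; i < matches.size(); ++i) {
        pts1.push_back(kp1[matches[i].queryIdx].pt);
        pts2.push_back(kp2[matches[i].trainIdx].pt);
    }
    // RANSAC rejects bad matches; the 3x3 result maps points from img1 to img2.
    return cv::findHomography(pts1, pts2, CV_RANSAC, 3.0);
}

The returned 3x3 matrix can then be used with cv::perspectiveTransform to map points from the first image into the second, which is what the homography overlay in the screenshot does conceptually.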

Best regards,

K

Klemen
7055 Views
8 Comments

There has been some interest in the new stereo vision library that is used in the Labview Vision Development Module 2012.

I was (and still am) interested in testing the performance of the library, as are some other people on the NI forums who are having problems obtaining a proper depth image from a stereo vision system.

In this post I will present the results I have obtained so far and also attach a small VI that can be used to calculate the parameters of a stereo setup. Based on these THEORETICAL calculations it is possible to obtain the disparity for a certain depth range and also calculate the depth accuracy and field of view.
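These are the standard triangulation relations; below is a rough C++ sketch of the same kind of calculation. All the numbers in it are illustrative assumptions, not my actual setup values, and the VI itself may organize the computation differently.

#include <cmath>
#include <cstdio>

int main()
{
    const double PI        = 3.14159265358979;
    const double baseline  = 100.0;               // mm, distance between the two cameras
    const double focal     = 4.0;                 // mm, lens focal length
    const double pixelSize = 0.0075;              // mm, sensor pixel pitch (4.8 mm / 640 px)
    const double sensorW   = 640 * pixelSize;     // mm, sensor width
    const double depth     = 500.0;               // mm, working distance of interest

    const double focalPix  = focal / pixelSize;              // focal length in pixels
    const double disparity = focalPix * baseline / depth;    // expected disparity [px]
    const double depthRes  = depth * depth / (focalPix * baseline); // mm per 1 px of disparity
    const double hfovDeg   = 2.0 * atan(sensorW / (2.0 * focal)) * 180.0 / PI;

    printf("disparity at %.0f mm: %.1f px\n", depth, disparity);
    printf("depth change per 1 px disparity: %.2f mm\n", depthRes);
    printf("horizontal field of view: %.1f deg\n", hfovDeg);
    return 0;
}

The last value (depth change per pixel of disparity) is what I refer to as the theoretical depth accuracy: it grows with the square of the distance, which is why the baseline and focal length matter so much.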

First off, I tried to test the stereo library with the setup shown in Figure 1.

medium.jpg

Figure 1. Setup for the initial test of the stereo library.

Two DSLR cameras were used, with the focal lengths of both set to 35 mm. The cameras were different (different sensors, different lenses), and the setup was really bad (the cameras were attached to the table with tape). Also note that NI Vision prefers a horizontal baseline, i.e. a system where the horizontal distance between the two cameras exceeds the vertical distance (NI Vision manual). Ideally, the vertical distance between both sensors should be zero and the sensors should be separated horizontally by the baseline distance. I selected the disparity according to the parameter calculations (see the attachment). I really was not expecting great results, and sure enough, I didn't get them.

So, I changed my stereo setup. I bought two USB web cameras and rigidly mounted them as shown in Figure 2. The system was then mounted on a stand.

medium.jpg

Figure 2. Stereo setup with two web cameras and a baseline distance of 100 mm.

I followed the same procedure (calibrating the cameras (lens distortion) and the stereo system (translation vector and rotation matrix)) and performed some measurements. The resulting reconstructed 3D shape is shown in Figure 3 (with the texture overlaid).

medium.jpg

Figure 3. 3D reconstruction with overlaid texture.

The background had no texture (a white wall), so I expected that the depth image of the background would not be properly calculated, but the foreground (the book) had enough texture. To be honest, I was disappointed, as I really expected better results. To enhance the resolution, the object needs to be closer or the focal length needs to be increased.

But before I tried anything else, I took the example that comes with the stereo library and calculated the 3D point cloud (similar to Figure 3). The results are shown in Figure 4.

medium.jpg

Figure 4. The 3D reconstruction with texture for the supplied LV stereo example.

To be honest again, I was expecting better results. I am confident that this setup was made by experts with professional equipment, not some USB web cameras. Considering this, I am asking myself the following question: is it even reasonable to expect good results from web cameras? Looking at the paper titled "Projected Texture Stereo" (see the attached .pdf file), it would also be interesting to try projecting a random pattern. This should help the algorithm when matching features between the two images and consequently make the depth image better and more accurate.

I really hope that someone tries this before I do (because I do not know when I will get the time to do this) and shares the results.

If anyone has something to add, comment or anything else, please do so. Maybe we can all learn something more about the stereo vision concept and measurements.

Be creative.

Klemen
10262 Views
5 Comments

In one of the previous posts I said (quote): “I just wish the Vision Module would by default include other popular computer vision algorithms for features detection…” and (also quote): “Since LabVIEW includes MathScript (support for .m files and custom functions) it would be interesting to try and implement some of the mentioned algorithms…“

I have been trying to implement some of Matlab's algorithms in LabVIEW, but found out that it is quite difficult to create custom functions (real functions, I mean, not just some add-and-divide stuff!). This is basically because almost every custom (open-source) function implemented in Matlab relies on quite a lot of sub-functions that are needed to run the algorithm properly, and none of these are included in MathScript in LabVIEW. They can be added to the MathScript "default directory", but... well, try it and see for yourself.

So instead I used a great open-source CV library, OpenCV (if you work with CV algorithms, you should know something about OpenCV), and built a DLL based on OpenCV to detect and match the features of two images using SIFT (scale-invariant feature transform) and SURF (speeded-up robust features).

The results of SIFT are shown in Figure 1 and the results of SURF in Figure 2. The corresponding keypoints of two images are connected by a line.

SIFT.jpg

Figure 1: SIFT feature detector.

SURF.jpg

Figure 2: SURF feature detector.

In the future, I will try to test more algorithms in LabVIEW using the OpenCV library. Mind that this testing was performed with default parameters, filtering keypoints using a minimum-distance criterion. No thorough testing has been performed yet.
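For illustration, the minimum-distance filtering can look something like the sketch below. The factor of 2 and the function name are my own illustrative choices, not necessarily what the DLL uses.

#include <opencv2/features2d/features2d.hpp>
#include <algorithm>
#include <vector>

// Keep only matches whose descriptor distance is close to the best match found.
std::vector<cv::DMatch> filterByMinDistance(const std::vector<cv::DMatch>& matches)
{
    double minDist = 1e9;
    for (size_t i = 0; i < matches.size(); ++i)
        minDist = std::min(minDist, (double)matches[i].distance);

    std::vector<cv::DMatch> good;
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].distance <= 2.0 * minDist)   // keep only close matches
            good.push_back(matches[i]);
    return good;
}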

Thanks for reading.

Be creative.

P.S.: See the newer post for the source code and DLL.

Klemen
8757 Views
8 Comments

I have managed to get my laser scanning system working. I still have some problems, since there are some perspective errors (distortions) in my scans that I still need to correct.


Figure 1 shows the scanning setup. The frame is made from thin sheet metal (2 mm), which was bent by hand with the help of a vice. The camera and the laser projector are attached to this frame using a separate thin sheet metal frame. The geometry was designed according to the calculations I made in the previous post (Mini laser scanner - part 2). All the components were fastened to a wooden board. I also used a thick white cardboard sheet as a pedestal to reflect the laser light more efficiently.

setup.jpg

Figure 1. Laser scanning system setup.

To successfully calculate the points in 3D space, some parameters have to be determined:

1. Intrinsic camera parameters (fu, fv, cu, cv, k1, k2, p1, p2). These were determined using a chessboard pattern and the camera calibration toolbox in Matlab. For comparison, I also determined the same parameters using the NI calibration training interface and a dot-pattern grid.

2. Extrinsic camera-projector parameters (Py, Pz, alpha). Py and Pz are the coordinates of the laser projector in the camera coordinate system; alpha is the triangulation angle. These parameters were not determined very accurately, since there is some math behind this. I would also need a reference body with known dimensions and iterative algorithms (which I do not have, nor do I have the time to code them) to minimize the deviation between the measured and the reference surface. Currently I do not know of any other methods to determine the mentioned parameters. If anyone knows alternative methods, please share them...


So instead I attached a measuring tape to the frame (visible in Figure 1) and photographed the measuring system from the side, so that the optical axes of the camera and the laser were parallel (or close to parallel) to the image plane. The image was transferred to AutoCAD, and the required parameters were measured and converted to real-world units using the known spacing of the measuring tape.

3. Transformation parameters from the camera to the world coordinate system. In order to take the motion of the stepper motor into account, the 3D camera coordinates have to be transformed to the world coordinate system (the stepper motor coordinate system). The speed of the motor in the motion direction can then be taken into account and the subsequently acquired profiles spaced accordingly. The transformation from the camera coordinate system to the world coordinate system was also performed using the Matlab camera calibration toolbox.

The speed of the motor in real-world units (at different step settings and speeds) was determined by measuring the traveled distance (with the laser line as the reference) over the corresponding time (Arduino timer library). For all the presented scans, my scan speed was ~2.1 mm/s.
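Putting point 3 and the scan speed together, the per-point transformation can be sketched roughly as below. This is a simplification under my own assumptions: the rotation matrix R and translation vector T (both of type CV_64F) come from the Matlab calibration toolbox, the scan advances along the world Y axis, and the helper name is hypothetical.

#include <opencv2/core/core.hpp>

// Transform a 3D point from the camera coordinate system to the world
// coordinate system and offset it by the distance the pedestal has travelled,
// so the spacing between successive scan lines is preserved.
cv::Point3d cameraToWorld(const cv::Point3d& pc, const cv::Mat& R, const cv::Mat& T,
                          double scanSpeed /* mm/s, ~2.1 in my case */,
                          double timeSinceStart /* s */)
{
    cv::Mat p = (cv::Mat_<double>(3, 1) << pc.x, pc.y, pc.z);
    cv::Mat pw = R * p + T;                         // CCS -> WCS

    pw.at<double>(1, 0) += scanSpeed * timeSinceStart;   // advance along the scan direction

    return cv::Point3d(pw.at<double>(0, 0), pw.at<double>(1, 0), pw.at<double>(2, 0));
}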


Video 1 shows an example of a scan; here is the link to the YouTube video: http://www.youtube.com/watch?v=Krd4lKoVa8k

Video 1. Example of a scanned hand.

Figure 2 shows another scan of an object (small radio with speaker).

scanned_speaker.jpg

Figure 2. Scanned radio speaker.

Figure 3 shows another scan of a refrigerator magnet.

scanned_magnet.jpg

Figure 3. Scanned refrigerator magnet.

The laser scanning system is not perfect, but it serves its main purpose – learning something new every day. I think it would be better to move the camera-projector pair instead of the pedestal where the object is placed, but that is another story for another day. To put it plainly, my scanning system needs a lot of improvement in order to produce accurate scans. Still, the main principle of laser scanning is covered in my mini scanner project. Maybe I will take some time in the future to improve it.


So, the total cost of the system:

1. camera: 20 eur,

2. projector: 4.6 eur,

3. filter: 3.5 eur,

4. motor driver: 1.9 eur,

5. old document scanner with stepper motor: free,

6. frame (with screws, nuts, etc…): 10 eur,

7. wooden board and cardboard pedestal: 5 eur.

Total cost: 45 eur.

This does not count the Arduino, which I use for other things and already had before building the scanning system.

So the total cost is half the cost I predicted at the beginning. Yippee!

Thanks for reading.


Be creative.

Klemen
7334 Views
4 Comments

This part is dedicated to calculating the parameters of the scanning system and to the basic system design. In order to scan the object, I chose a stationary camera-projector system, where the object is the moving part. The relative motion between the scanner and the object can be used to preserve the proportions (i.e. the spacing between sequential scan lines) of the reconstructed surface. In this case the scanning speed/rate and the speed of the object's motion have to be known. The scan line is also perpendicular to the motion of the measured object.

Figure 1 shows the rough sketch of my scanning system (not yet built, although I already have all the components).

Slika.bmp

Figure 1. Basic setup of the laser line scanning system.

The parameters were determined using the lens and triangulation equations. The sensor of my web camera is a 1/3'' CMOS, which corresponds to a physical size of 4.8 mm in width and 3.6 mm in height. I calibrated the camera using the NI Vision Calibration Training interface and also determined the focal length of the web camera in real-world units. The calibration was really quick, using only five images, and I did not cover the entire grid; I will perform a more accurate calibration at a later stage. The triangulation angle was selected based on the measuring distance (magnification) and the scan resolution (dz). Figure 2 shows the calculated parameters of the system.

Parameters.jpg

Figure 2. Determining the parameters of the scanning system.


I have also placed a band-pass filter in front of the camera lens to attenuate the ambient lighting and pass only the laser light at 650 nm. The effect of the filter is visible in Figure 3.

Filter_noFilter_comparison.jpg

Figure 3. Acquired image using the band-pass filter (left) and without the filter (right) under the same lighting conditions.

I will be using the following procedure to calculate the 3D surface of the measured object (a rough sketch of these steps follows the list):

  • extract the laser line from the acquired image,
  • determine the distorted pixel coordinates from the extracted line,
  • normalize the distorted pixel coordinates,
  • undistort the normalized distorted pixel coordinates,
  • transform from 2D image coordinates to the 3D camera coordinate system using a pinhole camera model,
  • transform from the 3D camera coordinate system (CCS) to the 3D world coordinate system (WCS), where the motion of the object is taken into account. The origin of the WCS is located on the moving part of the scanning system.
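Here is a rough sketch of steps 2-5 in OpenCV, under my own simplifying assumption that the camera-laser geometry is expressed as a laser plane (normal n, point p0) in the camera coordinate system. The function name and the plane representation are illustrative, not my actual LabVIEW implementation.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Undistort the detected laser-line pixels and intersect the resulting
// camera rays with the laser plane to get 3D points in the CCS.
std::vector<cv::Point3f> pixelsTo3D(const std::vector<cv::Point2f>& laserPixels,
                                    const cv::Mat& cameraMatrix,
                                    const cv::Mat& distCoeffs,
                                    const cv::Point3f& n, const cv::Point3f& p0)
{
    // Normalize + undistort in one call; output is in normalized image coordinates.
    std::vector<cv::Point2f> norm;
    cv::undistortPoints(laserPixels, norm, cameraMatrix, distCoeffs);

    std::vector<cv::Point3f> points;
    for (size_t i = 0; i < norm.size(); ++i) {
        cv::Point3f ray(norm[i].x, norm[i].y, 1.0f);   // pinhole camera ray
        float t = n.dot(p0) / n.dot(ray);              // ray-plane intersection
        points.push_back(t * ray);                     // 3D point in the CCS
    }
    return points;
}

The last step (CCS to WCS) is then just a rigid transformation plus the offset given by the object motion, as described in the last bullet.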

I have already written the LabVIEW code for the first five points above; I will move on to the last point once the scanning system is built. I still have to determine the speed of the stepper motor, and I will use some sort of linear encoder (probably an IR emitter-detector diode pair). I was thinking about using pattern matching algorithms to determine the object speed in pix/s, but I think the transformation from 2D to 3D would then not be possible.

Thanks for reading.

Be creative.

P.S.: For programming the Arduino I recommend Arduino for Visual Studio (http://visualmicro.codeplex.com/). You get almost all the benefits of Visual Studio. I think you can also debug your code, but an add-on is required for this. In any case, you can exploit the IntelliSense feature and have great control over the data types throughout the code.

Klemen
8458 Views
0 Comments

The first part is dedicated to controlling a stepper motor over the RS-232 port using LabVIEW. I am not using LIFA (LabVIEW Interface for Arduino) to program the Arduino, but C++ with the basic Arduino programming environment (you could, for example, alternatively use Atmel Studio for debugging the code).

In my opinion this approach is better, because the code is more portable (i.e., you do not need LIFA installed on every computer you port the code to).

I disassembled an old document scanner with a six-wire stepper motor. To determine the correct wiring, search the web; there is a lot of documentation on this.

The wires of my stepper were already paired accordingly, so I discarded the two unnecessary wires and connected the other four to the Arduino through an SN754410 quad half-H-bridge driver. The wiring is based on the diagram in Figure 1, which was taken from http://www.hobbytronics.co.uk/stepper-motor-sn754410.

stepper wiring.jpg

Figure 1. Wiring diagram for connecting the stepper motor to Arduino.

Figure 2 shows the experimental setup. Ignore the other side of the protoboard; that is a 24 V to 12 V regulator from previous testing and is irrelevant here. It could be used to supply the motor, but I found an old 12 V adapter and used that instead.


setup1.jpg

setup2.jpg

Figure 2. Experimental setup.

The Arduino code is really simple:


#include <Stepper.h>

#define steps_per_revolution 220 // 14 steps from min to max position
#define motor_speed 60

Stepper motor(steps_per_revolution, 8, 9, 10, 11); // init stepper library
int received_data[2]; // allocate memory for the data received over serial

void setup()
{
  Serial.begin(9600);          // init serial port
  motor.setSpeed(motor_speed); // set the speed of the motor
}

void loop()
{
  while (Serial.available() < 2) {} // wait until 2 bytes are available on the serial port

  for (int i = 0; i < 2; i++) {
    received_data[i] = Serial.read(); // read two bytes
  }

  int steps = received_data[1]; // the second byte determines how many steps in either direction

  if (received_data[0] == 1) // clockwise revolution
  {
    for (int i = 0; i < steps; i++) // number of steps equals the second received byte (0-255)
    {
      motor.step(steps_per_revolution); // step
      delay(5);
    }
    Serial.print("DONE stepping in CW direction for ");
    Serial.print(steps);
    Serial.print(" steps!");
  }
  else if (received_data[0] == 2) // counter-clockwise revolution
  {
    for (int i = 0; i < steps; i++) // number of steps equals the second received byte (0-255)
    {
      motor.step(-steps_per_revolution); // step
      delay(5);
    }
    Serial.print("DONE stepping in CCW direction for ");
    Serial.print(steps);
    Serial.print(" steps!");
  }
  else Serial.print("ERROR!");
}

The Arduino waits for 2 bytes on the serial port and acts according to the logical statements. In my case I always send 2 bytes: the first gives the stepper motor direction (CW or CCW) and the second the number of steps. After the stepping is done, a message is sent back over the serial port.

I am also attaching the LabVIEW code. Note that both programs are just simple drafts and I will have to modify them according to the end application.

Also a note of caution: in my case there is no feedback on the motor position and no limit/end switches, so don't go too far. Limit/end switches could easily be added and their state monitored during the motor stepping.

Thanks for reading.

Be creative.

Klemen
7595 Views
0 Comments

This is an add-on to blog post #2 – "Using Microsoft Kinect to visualize 3D objects with texture in LabVIEW in real-time".


The idea is basically the same, with the difference that two Kinect sensors are used in LabVIEW simultaneously. The acquisition from both sensors is again based on the OpenNI and PCL libraries, where instead of a single cloud callback, two cloud callbacks are registered. Thus, the calibrated X, Y, Z coordinates can be obtained from two different viewpoints.
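A minimal sketch of the idea with PCL (not my exact DLL code) is shown below. The device IDs "#1" and "#2" select the Kinects by enumeration order; the callback bodies are just placeholders for handing the clouds over to the LabVIEW side.

#include <pcl/io/openni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>

typedef pcl::PointCloud<pcl::PointXYZRGBA> Cloud;

// Each callback fires independently for its own device.
void cloudCb1(const Cloud::ConstPtr& cloud) { /* store/forward the cloud from Kinect #1 */ }
void cloudCb2(const Cloud::ConstPtr& cloud) { /* store/forward the cloud from Kinect #2 */ }

int main()
{
    pcl::OpenNIGrabber grabber1("#1"), grabber2("#2");

    boost::function<void (const Cloud::ConstPtr&)> f1 = boost::bind(&cloudCb1, _1);
    boost::function<void (const Cloud::ConstPtr&)> f2 = boost::bind(&cloudCb2, _1);

    grabber1.registerCallback(f1);
    grabber2.registerCallback(f2);

    grabber1.start();
    grabber2.start();

    while (true)   // clouds keep arriving in the callbacks
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    return 0;
}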


Video 1 shows the calibrated depth images (Z coordinates) from the two Kinect sensors at different viewpoints (QVGA resolution). Some interference can be seen, since the projection patterns from both sensors overlap. This can be effectively remedied, for example, as described in this article:


Maimone A., Fuchs H. Reducing interference between multiple structured light depth sensors using motion, Virtual Reality, IEEE (2012), p. 51-54.

Here is the video and the YouTube link: http://www.youtube.com/watch?v=-g7emWGKzHU


Video 1. Calibrated depth image from two Kinect sensors simultaneously.

merged.bmp

Figure 1. Merged data from the two Kinect sensors (this was a quick merge; I didn't pay too much attention to it).

P.S.: I've tested this using VGA resolution and the Kinects still easily achieve 30 fps.

Thanks for reading. Until next time…

Be creative.

Klemen
2794 Views
0 Comments

I am writing this mostly as a reminder to myself!

Next projects:

  1. building and testing a mini laser scanner using a line projector. This will be a scanner for small objects (actual dimensions not yet known) with a budget under 100 eur. The software will (hopefully) account for both camera distortion (fairly simple) and projector distortion (hmmm).
  2. using the MLX90620 IR temperature sensor and an Arduino to display temperature measurements in real-time on an Android-compatible device.

Side projects:

  1. Testing the LabVIEW CUDA library
  2. Testing the LabVIEW Stereo Vision library (http://forums.ni.com/t5/Machine-Vision/Stereo-library-2012-pointers/td-p/2171812)

The order of execution is not yet known.

Be creative (also note to self) .

Klemen
10990 Views
7 Comments

The MLX90620 IR array sensor from Melexis (http://www.melexis.com/Infrared-Thermometer-Sensors/Infrared-Thermometer-Sensors/MLX90620-776.aspx) is factory calibrated and uses the I2C communication protocol to send the calibration data stored in its EEPROM and the raw data from the IR (infrared) and PTAT (Proportional To Absolute Temperature) sensors. The sensor is thermopile based (a 16x4 array) and performs measurements in real-time. I will not go into details about the characteristics and the communication protocol of the sensor; for more information, refer to its datasheet (better than me copy-pasting it).


As mentioned before, the sensor sends raw data, which needs to be properly processed in order to obtain the temperatures (the Arduino IDE is used to program the MCU). The required computations could be heavy for the MCU, so the best option is to perform the calculations on the computer. Just to give some reference: performing everything on the Arduino UNO R3 takes about 30-40 ms to read and process the data and calculate the temperatures (using a 400 kHz I2C clock frequency).


First, the sensor needs to be properly connected to the Arduino. The sensor performs best at an operating voltage of 2.6 V, so a regulator needs to be used. The schematic for connecting the sensor to the Arduino board is shown in Figure 1.


schematics.jpg

Figure 1. Schematics for the MLX90620 and Arduino.


To perform the temperature calculations, the EEPROM needs to be read, as this is where all the calibration data is stored. For optimal performance, the EEPROM data should be read only once at the beginning and stored in memory (on the MCU or the computer). For the data transfer, the i2cmaster I2C library by Peter Fleury (http://homepage.hispeed.ch/peterfleury/avr-software.html) is used. This library is easy to use and supports the repeated starts that are needed for proper communication.


All the raw data read from the sensor is stored as bytes and sent to the serial port as such. To mark the beginning and the end of the transmitted data, two distinct ASCII characters wrap the payload. This way, the whole packet can be read unambiguously from the serial port in LabVIEW.
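A minimal sketch of the framing idea on the Arduino side follows; the marker characters and the helper name are illustrative assumptions, not necessarily the ones used in my code.

// Minimal framing sketch: wrap a raw byte buffer in two distinct ASCII markers
// so the receiver (LabVIEW) can find the start and end of each packet.
const byte START_MARKER = '<';
const byte END_MARKER   = '>';

void sendFrame(const byte* data, int len)
{
  Serial.write(START_MARKER);   // mark the beginning of the packet
  Serial.write(data, len);      // raw IR / PTAT / EEPROM bytes
  Serial.write(END_MARKER);     // mark the end of the packet
}

void setup()
{
  Serial.begin(115200);
}

void loop()
{
  // ... read the sensor data into a buffer, then call sendFrame(buffer, length);
}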

After reading the raw data from the serial port and performing the calculations, the final output is shown on an intensity graph (see Figure 2).


IRtemp.jpg

Figure 2. Example of calculated temperatures displayed on an intensity graph in LabVIEW.


To summarize: the LabVIEW serial communication library is used to read from the serial port, where the MLX90620 raw temperature and EEPROM data (as bytes) are sent by the Arduino micro-controller. All processing and data visualization is performed inside the LabVIEW environment.


Thanks for reading.

Be creative.

Edit on 25.3.2013

By optimizing the code, a refresh rate of less than 10 ms was achieved with the same setup (ON-CHIP CALCULATION!). This includes reading, calculating and displaying the values of all 64 pixels. The refresh rate of the sensor itself can be tuned by writing the oscillator trimming value to the MLX90620 chip.

Klemen
5182 Views
6 Comments

The acquisition of the depth and texture (RGB) information is based on PrimeSense technology (http://www.primesense.com/) and the OpenNI framework (http://www.openni.org/), using PCL (http://pointclouds.org/) to interface with the Kinect sensor. Additionally, the CLNUI drivers (http://codelaboratories.com/nui) are used to control the Kinect's motor, since the PrimeSense drivers do not support this feature (as far as I know). The C++ code for point cloud and texture acquisition was built as a dynamic link library using Microsoft Visual Studio 2010, which makes it possible to call the library from within the LabVIEW environment. Similarly, a dynamic link library is also used to control the Kinect motor.

The Point Cloud Library can acquire and process data from the Kinect sensor. This means that we can obtain a calibrated point cloud (X,Y,Z) and a texture image for each frame. Furthermore, the texture image can be directly overlaid over the point cloud as shown in Figure 1.

Figure1.bmp
Figure 1. 3D point cloud with overlaid texture acquired from a single Kinect sensor.

After calling the dynamic link library in LabVIEW, the point cloud is reshaped into a 2D array for each dimension. Next, a smoothing filter is applied to the depth image (the Z dimension) to reduce the deviation of the reconstructed surface (noise removal). Simultaneously, the texture image is also stored in a 2D array. Since the depth image and the texture image are aligned, it is trivial to extract the spatial coordinates of the desired object features from the texture image.

It is less trivial to detect those features prior to extraction. There are a lot of known algorithms for object detection and tracking, most of them out of my league (I wish they weren't). The functions and VIs included in LabVIEW's NI Vision Module are stable, fast and, most importantly, they perform well (sincere thanks to the developers for this). I just wish the Vision Module would include other popular computer vision algorithms for feature detection, object tracking, segmentation, etc. (SIFT, MSER, SURF, HOG, GraphCut, GrowCut, RANSAC, the Kalman filter...) by default. Of course, one could write such algorithms oneself, but as I said, this is not so trivial for me, since a lot of background in mathematics and computer science is needed. Matlab, for example, has a lot of different computer vision algorithms from various developers that can be used. Since LabVIEW includes MathScript (support for .m files and custom functions), it would be interesting to try to implement some of the mentioned algorithms.

Ok, back to the main thread (got a little lost there)!

Here is one feature detection algorithm that I do understand, and it works extremely reliably when used in the right circumstances – image correlation. And here, LabVIEW shows its power: in my experience the pattern matching algorithms are accurate and fast. It all comes down to choosing the right algorithm for the task. An example of 3D (actually 2D) feature tracking is shown in Figure 2. The texture contains four circular red objects that are tracked using the normalized cross-correlation algorithm (actually, the centers of the reference templates are tracked). They are tracked on the extracted green plane channel. Each object has its own region of interest, which is updated for each subsequent frame (green square). This also defines the measuring area (yellow rectangle).

depth_texture_3d.bmp

Figure 2. Depth image (left) with overlaid textured image (middle) forms the reconstructed 3D shape with texture.

The tracking algorithm works really well (it takes about 1 ms to detect all four objects); the only problem is that this specific algorithm is not scale-invariant (±5% according to the NI Vision Concepts manual). In this case scale-invariance is not so important, but again, it all comes down to the nature of your application. It is necessary to define your constraints and conditions prior to constructing the application/algorithm.
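The general idea behind this kind of correlation tracking with an updated region of interest can be sketched with OpenCV as below. This is not the NI Vision implementation; the function name and the ROI sizing are illustrative choices.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Track one template inside a search ROI using normalized cross-correlation
// and re-center the ROI around the new match for the next frame.
cv::Rect trackInRoi(const cv::Mat& greenPlane, const cv::Mat& templ, cv::Rect roi)
{
    cv::Mat result;
    cv::matchTemplate(greenPlane(roi), templ, result, CV_TM_CCORR_NORMED);

    double maxVal; cv::Point maxLoc;
    cv::minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

    // Best match position in full-image coordinates.
    cv::Point topLeft = maxLoc + roi.tl();

    // New search ROI centered on the match, clamped to the image borders.
    cv::Rect newRoi(topLeft.x - templ.cols / 2, topLeft.y - templ.rows / 2,
                    2 * templ.cols, 2 * templ.rows);
    return newRoi & cv::Rect(0, 0, greenPlane.cols, greenPlane.rows);
}

Keeping the search ROI small is what makes this kind of tracking fast; the trade-off is that the object must not move further than the ROI size between frames.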

To summarize: LabVIEW (mostly the NI Vision Module) is used to perform 2D object tracking with simultaneous 3D shape acquisition by interfacing with the Microsoft Kinect sensor via a dynamic link library. The algorithm acquires, processes and displays the results in real-time at a frequency of 30 Hz (@ 320x240 pixels).

Thanks for reading.

Be creative.

Klemen
3062 Views
8 Comments

Dear reader,

welcome to my blog!

In my research I came across various obstacles that were overcome only thanks to great people who write blogs like this and participate in forums. I am by no means an experienced programmer, but I nevertheless hope someone finds this useful or inspiring for their project work.

The first post will be dedicated to connecting a Microsoft Xbox Kinect sensor in order to acquire the 3D shape of a measured object using the PCL (Point Cloud Library). The best part is that the acquired point cloud is already calibrated and aligned with the texture image. I found this useful for object tracking with simultaneous 3D shape analysis.

The second post will be dedicated to connecting a low-cost IR array sensor (MLX90620) to an Arduino MCU via the I2C communication protocol and sending the acquired raw data bytes to the serial port. These are then read in LabVIEW, where the actual temperature calculation and display are performed.

This is it for now. I hope to update the blog as soon as possible.

Be creative.

K