LabVIEW Robotics Documents


NIWeek 2010 Robotics Swarm Demo: Obstacle Avoidance, Mapping, and Steering

This page is part of the NIWeek 2010 Robotics Demo series of blog   posts.  For an introduction to the demo and links to other parts of the series, visit http://decibel.ni.com/content/docs/DOC-13031/.

Overview

If you think back to the environment in which our four NI Starter Kit robots operated at NIWeek—a 20 x 20 ft arena surrounded by walls and filled with obstacles (check out the Robot Arena section of our introductory post of this series)—the success of the robots' navigation depended heavily on their ability to avoid obstacles and each other. As mentioned in our Driver Station User Interface post, our robots can operate autonomously or be teleoperated via a joystick controller. You might think we'd only be concerned with obstacle avoidance when the robot is moving autonomously, but with the kind of time we lovingly put into our DaNI robots, we also didn't want to take the chance of collisions occurring in tele-op mode.

Of course, to avoid obstacles, the robots need to know about them first. However, in the real-world situation we modeled our demo after—the scene of an accident in a remote area—responders to the scene wouldn't have maps or knowledge of the area.  So with that as our inspiration, we didn't give our robots any prior knowledge of the arena.  This left us with our robots needing to detect obstacles in order to avoid them.  And we thought, while the robots are detecting obstacles, why not also put them to work building a map from the data they gather? We did just that, and the map the robots create and share allows each robot to have knowledge about obstacles and open space in a much greater portion of the arena. This collaborative map building, among other things, necessitated the data communication solution we described in a previous post.

With plans for our robots to perform obstacle avoidance and map building, we just needed a way to move and steer the robots so they could traverse the arena and respond to information about obstacles in the map. Read on for our solutions for all three of these tasks.

Architecture

At NI, we often think about robotics systems using a Sense-Think-Act paradigm, where robots acquire sensor data about their environment, make decisions based on predefined tasks and the sensor data, and then perform the predefined task. As we developed our code, the three tasks we needed our robots to perform—obstacle avoidance, mapping, and steering—fit themselves within this framework and became closely connected.  That is, our robots needed to acquire sensor data about obstacles and open space in the arena (Sense), update the shared map of the arena with this information and calculate paths to waypoints (Think), and move at appropriate velocities to traverse the arena (Act). The remainder of this document  explores the architecture we put in place to implement each feature.

As you can see in NIWeek 2010 Robotics Demo.lvproj, this code is organized under the NI Starter Kit hardware target because we deployed the code to each Starter Kit robot. We'll next talk about three main loops in our demo that drive our obstacle avoidance, mapping, and steering.

Rangefinder Loop

Our Rangefinder Loop VI acquires data from a robot’s IR distance sensor and updates the maps that the robot uses to avoid obstacles and navigate the arena. You can find the code for this feature at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):


NI Starter Kit > Mapping and Obstacle Avoidance > NI_Robotics_RangefinderLoop.lvlib > Rangefinder Loop.vi

The sections that follow describe tasks this code performs.

Reading Scan Data

The IR distance sensor is mounted on a servo motor that pans back and forth (refer to Robot Recipe: Teleoperation Mode for NI Robotics Starter Kit for information about how we adjusted the position of the stock Starter Kit robot's servo motor). This motion creates a 123-degree field of view in front of the robot in which the sensor can detect obstacles.  Code on the FPGA controls the servo motor, and sensor data is sent to the Rangefinder Loop VI running on the RT host as 1D arrays of scan distances (m) and scan angles (rad).  Each array sent from the FPGA contains five elements, with each element representing a single scan taken at 30 ms intervals.

You can find the code that runs on the FPGA and controls the servo motor at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

NI Starter Kit > Chassis > FPGA Target > Robot FPGA.lvlib > Robot FPGA Main.vi

The data from the distance sensor is in the form of distances and bearings relative to the robot’s frame of reference, so the Rangefinder Loop converts the data into global Cartesian coordinates, or x- and y-positions relative to the world.
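In text form, that conversion is a rotation and translation by the robot's pose. LabVIEW code is graphical, so treat the following Python sketch as illustrative only; the function and parameter names are our invention, not the VI's actual terminals:

```python
import math

def scan_to_global(robot_x, robot_y, robot_heading, distance, bearing):
    """Convert one rangefinder reading (distance in m, bearing in rad,
    measured in the robot's frame) into global x/y coordinates, given
    the robot's pose in the world frame."""
    angle = robot_heading + bearing            # absolute angle of the ray
    return (robot_x + distance * math.cos(angle),
            robot_y + distance * math.sin(angle))
```

Each of the five scans in an array sent from the FPGA would pass through a conversion like this before the map is updated.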

Updating the Shared Map

Each robot shares a map in the form of a 2D grid of cells that correspond to parts of the arena. After the Rangefinder Loop converts sensor data about obstacles to Cartesian coordinates, the data is then used to update the grid.  You can think of the shared map as having three components—an explored grid, a certainty grid, and an occupancy grid. The robots use the map's different grids when making obstacle avoidance and path planning decisions.

Updating the Explored Grid

With each obstacle measurement, the map's explored grid is updated with information about the cells that lie on a straight line between the sensor and the obstacle. That is, when the distance sensor detects an obstacle in the arena, the cells between the sensor and the cell at the x- and y-coordinates that correspond to that location are updated.
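One common way to find the cells that lie on that straight line is Bresenham's line algorithm. The post doesn't name the exact ray-tracing method the VI uses, so the Python sketch below is illustrative; the cell codes (0 = unexplored, 1 = open, 2 = obstacle) are also our invention:

```python
def cells_on_line(x0, y0, x1, y1):
    """Grid cells on a straight line from (x0, y0) to (x1, y1),
    traced with Bresenham's algorithm."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def update_explored(explored, robot_cell, obstacle_cell):
    """Mark cells between the robot and a detected obstacle as open,
    and the obstacle's own cell as occupied."""
    line = cells_on_line(*robot_cell, *obstacle_cell)
    for cx, cy in line[:-1]:
        explored[cy][cx] = 1       # open space along the ray
    ox, oy = obstacle_cell
    explored[oy][ox] = 2           # the obstacle itself
```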

The screenshot that follows shows the grid for our arena while the robots were in the process of mapping it. This is a screenshot of the same map display included in our driver station user interface.

map.png

For our 20 x 20 ft arena, the size of the grid is 80 x 80 cells. As you can see, the explored grid includes white cells, which indicate open space, and red cells, which indicate obstacles.

Each robot broadcasts the coordinates of newly explored cells for use by other robots.  Also, the driver station user interfaces receive the data and update their map displays.

Updating the Certainty Grid

Each robot also shares the map's grid of certainty values, which is used for mapping and obstacle avoidance. The values of cells in the certainty grid correspond to the number of times an obstacle has been detected at that position in the map.  When a robot detects a new obstacle, the certainty value at the appropriate grid index is incremented.  Similarly, when robots find that a grid cell contains no obstacles, the certainty value at the appropriate grid index is decremented.  As mentioned in the previous section, each robot broadcasts the positions of obstacles as they are detected so the other robots and the driver stations are aware of the same certainty values that indicate obstacles.
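The update rule reduces to a bounded increment or decrement per observation. Here is a minimal sketch; the clamp limits and names are assumptions, not values taken from the demo:

```python
# Clamp limits are illustrative; the post doesn't give the demo's bounds.
CERT_MIN, CERT_MAX = 0, 20

def update_certainty(certainty, cell, obstacle_seen):
    """Increment a cell's certainty when an obstacle is detected there;
    decrement it when the cell is observed to be free."""
    x, y = cell
    delta = 1 if obstacle_seen else -1
    certainty[y][x] = min(CERT_MAX, max(CERT_MIN, certainty[y][x] + delta))
```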

For our 20 x 20 ft arena, the size of the map's certainty grid is also 80 x 80 cells.  In the previous screenshot, cells shaded in dark red have higher certainty values and indicate obstacles that have been detected multiple times. Cells shaded in light red have lower certainty values than the dark red ones.

Updating the Occupancy Grid

Finally, each robot shares the map's occupancy grid information.  An occupancy grid map consists of cells that correspond to parts of an environment, each of which has an associated cost to enter. Cost refers to a positive number that represents user-defined variables that should be minimized when calculating paths, such as distance between points or energy required to traverse the area. A cost of Inf means the area of the map cannot be traversed, for  example, because an obstacle blocks travel through the area.  The occupancy grid map is a data type included with the LabVIEW Robotics Module, along with a palette of Occupancy Grid Map VIs you can use to operate on them.

For our 20 x 20 ft arena, the size of the occupancy grid is 20 x 20 cells. When a robot in our demo detects a new obstacle, the cost of entering the occupancy grid cell at that position is incremented. In the previous screenshot, cells shaded in any shade of red have costs to enter, while cells colored white do not.
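A plain-Python sketch of that cost update follows. The actual demo uses the Robotics Module's occupancy grid map data type and Occupancy Grid Map VIs rather than a raw 2D array, and the 1 ft cell size is inferred from the 20 x 20 grid over the 20 x 20 ft arena:

```python
import math

def add_obstacle_cost(occupancy, x_ft, y_ft, cell_size_ft=1.0):
    """Increment the cost of the occupancy grid cell containing a newly
    detected obstacle; a cost of Inf (untraversable) is left as-is."""
    col = int(x_ft // cell_size_ft)
    row = int(y_ft // cell_size_ft)
    if not math.isinf(occupancy[row][col]):
        occupancy[row][col] += 1.0
```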

Obstacle Avoidance Loop

Our ObstacleAvoid Loop VI uses localization data and the aforementioned map grids to find an appropriate velocity vector that steers the robot without colliding with stationary obstacles or other robots. The robot velocity vector consists of forward, lateral, and angular velocity components. You can find the code for this feature at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

NI Starter Kit > Mapping and Obstacle Avoidance > NI_Robotics_ObstacleAvoidLoop.lvlib > ObstacleAvoid Loop.vi

When the robot is in fully autonomous or semi-autonomous mode, it uses a VFH+ algorithm to find a heading free of obstacles.  When the robot is in teleoperated mode, it coerces the steering vector received from the joystick controller (refer to the section on the joystick controller in the post about our Driver Station User Interface for more information) to avoid collisions with an obstacle or another robot in its immediate path.

The sections that follow describe tasks this code performs.

Reading Waypoint Data (Fully Auto or Semi-Auto Modes)

The obstacle avoidance loop reads from a list of waypoints provided by the path planning loop (our post on path planning is forthcoming), and then uses the first waypoint that is outside a specified acceptance radius as the location to which it will navigate.  When the robot reaches its final waypoint, the robot comes to a stop.
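The waypoint selection logic can be sketched like this (the names and the planar-distance check are illustrative, not the VI's actual implementation):

```python
import math

def next_waypoint(waypoints, robot_pos, acceptance_radius):
    """Return the first waypoint farther than the acceptance radius from
    the robot, or None when every waypoint has been reached (stop)."""
    rx, ry = robot_pos
    for wx, wy in waypoints:
        if math.hypot(wx - rx, wy - ry) > acceptance_radius:
            return (wx, wy)
    return None
```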

Calculating a  Heading (Fully Auto or Semi-Auto Modes)

As mentioned previously, robots use a VFH+ algorithm to find an appropriate heading in fully and semi-autonomous modes.  The heading is based on the following factors:

  • Values in the portion of the certainty grid that is within a specific radius of the robot
  • The heading toward the current waypoint
  • The current heading of the robot
  • The locations of any other robots within a specific radius

As mentioned before, the shared map's certainty grid is generated from sensor data acquired from all robots in their rangefinder loops.  Once a suitable heading is found (suitable given the certainty values between the robot and the current waypoint) the corresponding robot velocity vector is determined.  In the case of these robots, the lateral velocity is always zero because its wheels cannot move in the lateral direction.
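The final step, turning the chosen heading into a velocity vector, can be sketched as a simple proportional turn controller. The gains below are illustrative, not the demo's tuned values:

```python
import math

def heading_to_velocity(current_heading, target_heading,
                        v_forward=0.3, k_turn=1.5):
    """Map a chosen heading to a (forward, lateral, angular) velocity
    vector. Lateral velocity is always zero because the wheels cannot
    move sideways."""
    # wrap the heading error into [-pi, pi]
    error = math.atan2(math.sin(target_heading - current_heading),
                       math.cos(target_heading - current_heading))
    return (v_forward, 0.0, k_turn * error)
```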

Coercing Velocity to Prevent Collision (All Modes)

As we just explained, in fully autonomous or semi-autonomous mode we calculate a velocity vector from the best heading. In tele-op mode, users with a joystick or keyboard manually specify the vector.  In all cases, the velocity is coerced to prevent collisions with stationary objects or other robots.  First, we calculate the path the robot would take over a specified distance given its current velocity.  Second, if that path intersects with an obstacle or another robot, we scale down, or coerce, the forward velocity component such that the collision would happen no less than two seconds in the future.  Since this coercion operation runs in a continuous loop, the robot slows down and then stops as it approaches objects.  While the ObstacleAvoid Loop VI calculates and coerces the robot velocity, the Steering Loop VI, described in the next section, actually sets the velocity of the robot motors.
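Stripped of the path prediction, the two-second rule amounts to a cap on forward speed. A simplified Python sketch (the actual VI predicts the path from the full velocity vector, which we elide here):

```python
def coerce_forward_velocity(v_forward, distance_to_collision, min_time_s=2.0):
    """Scale the forward velocity so that any predicted collision lies at
    least min_time_s in the future. distance_to_collision is None when
    the predicted path is clear."""
    if distance_to_collision is None:
        return v_forward
    max_safe = distance_to_collision / min_time_s   # fastest safe speed
    return min(v_forward, max_safe) if v_forward > 0 else v_forward
```

Because the loop re-evaluates this every iteration, the cap shrinks as an obstacle gets closer, which produces the gradual slow-to-stop behavior described above.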

Steering Loop

The Steering Loop VI reads the coerced velocity setpoint provided by the ObstacleAvoid Loop VI to set the velocities of the robot's two drive motors. You can find the code for this feature at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):


NI Starter Kit > Steering > NI_Robotics_SteeringLoop.lvlib > Steering Loop.vi

In this area of our code, we use the Steering API included with the LabVIEW Robotics Module to convert between the velocity of the robot (in the lateral, forward, and angular directions) and the velocities of the motors (counterclockwise rad/s).  Then, the Steering Loop VI sends motor velocity setpoints to the FPGA, where PID control is used to maintain the velocity.  At the same time, the current motor velocities, based on encoder readings, are read from the FPGA so that the Steering Loop has an estimate of the current velocity.
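The robot-to-motor conversion is standard differential-drive inverse kinematics. The demo calls the Steering API's VIs rather than hand-written formulas, but the underlying math looks roughly like this (function names and wheel dimensions in the test are illustrative, not DaNI's actual measurements):

```python
def robot_to_wheel_velocities(v_forward, v_angular, wheel_radius_m, wheel_base_m):
    """Differential-drive inverse kinematics: convert the robot's forward
    (m/s) and angular (rad/s) velocities into left/right wheel speeds in
    rad/s. The lateral component is omitted because it is always zero for
    this robot."""
    v_left = (v_forward - v_angular * wheel_base_m / 2.0) / wheel_radius_m
    v_right = (v_forward + v_angular * wheel_base_m / 2.0) / wheel_radius_m
    return v_left, v_right
```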

Comments
anfedres86
Member

Hi, first of all, I would like to congratulate you: you did a very good job. I'm trying to do almost the same thing, and I'd like to see the part in which you create the map. I've been looking for the code, but I don't know where it is.

Can you share with us the code?

Thank you very much.

"Everything that has a beginning has an end"
RoboticsME
Member

Are you looking for the map display?  Or are you looking for how we populated the certainty grid?

Cheers,

Karl

anfedres86
Member

Thanks, Karl, for your quick response.

Look, I have x, y, and depth data, and I'm trying to build the map with the LabVIEW Robotics library, but I don't know how to use the occupancy grid map or how to use the data to obtain the map. I hope you can help me, Karl. Thank you.

"Everything that has a beginning has an end"
anfedres86
Member

Yes, I'm looking for how to fill each grid cell in the map and how to update the map.

Thank you.

"Everything that has a beginning has an end"
RoboticsME
Member

Check out NI Starter Kit > Mapping and Obstacle Avoidance > NI_Robotics_RangefinderLoop.lvlib > Rangefinder Loop.vi in the project.  That's where all the mapping takes place.

anfedres86
Member

I have LabVIEW Robotics, but I don't have the starter kit.

"Everything that has a beginning has an end"
anfedres86
Member

I mean, I can't find the path that you gave me. I have LabVIEW Robotics 2010 and the basic starter kit example called "Starter Kit Roaming.lvproj". I'm trying to find the example that you gave me, but I couldn't find it. They sell the starter kit separately, and it comes with a DVD; maybe the code is on the DVD, and that's the problem. Do you know if I can see the Rangefinder Loop somewhere else, or where can I see the code?

Thank You.

"Everything that has a beginning has an end"
RoboticsME
Member

You shouldn't need the starter kit.  Maybe this screenshot will help you find the rangefinder loop in the project:

_ss.png

If you still can't find it, can you send a screen shot of what your project looks like?

RoboticsME
Member

Otherwise, the directory in the unzipped folder is: Rangefinder\Rangefinder Loop.vi

anfedres86
Member

My mistake: I couldn't find the download location of the program, but I have it now, and I've looked at it. It seems a bit complicated, and I can't find exactly where you used the robotics blocks to build the map. Did you use probability to build the map?

"Everything that has a beginning has an end"
RoboticsME
Member

Within the Rangefinder Loop.vi, there is a subVI called Increment Map Grid Cells.vi.  This is where the certainty grid is created.  The delta grid is what is used in path planning, so in Increment Map Grid Cells.vi you see a reference to the "delta grid."  If you go to the Path Planning Loop.vi (in the path planning folder in the project), you'll see the robotics VIs using the delta grid.  The values in the delta grid are used to update the map used for path planning.

RoboticsME
Member

I should clarify that it appears that a certainty grid is kept up to date (called certainty grid) as well as a "delta grid," which is what is used for path planning.

anfedres86
Member

Thanks for the explanation, I'm very grateful.

I've been trying to understand your code. I have to admit it looks pretty hard, but that's OK, and now I have a couple of questions that I hope you can help me with.

For example, I looked at the two VIs that you told me about, and I really don't know how you increase or decrease each grid cell. Which method did you use? Did you use the occupancy grid algorithm? Can you explain in a couple of words how you used it to determine when a grid cell is full and when it is not?

I'm going to keep reading and trying to understand, but your help would be much appreciated.

Regards, anfedres.

"Everything that has a beginning has an end"
anfedres86
Member

OK, it has been a while now that I've been researching the occupancy grid map. I implemented the code; this is the first approach, which I made today. Here is a video if you want to see it:

http://www.youtube.com/watch?v=-KXQ750qAWA

"Everything that has a beginning has an end"
RoboticsME
Member

Awesome work!

I asked someone else who helped work on the demo to get back to you about your questions.  Hopefully you'll hear something in the next day or two.

Again, great video!!!

anfedres86
Member

I implemented the real occupancy grid map algorithm, the one that Alberto Elfes developed. I'm going to publish the code soon; feel free to ask me questions or make suggestions. It is only a first approach, and by the way, thank you for your help.

Sincere regards,

Andrés F. E.

"Everything that has a beginning has an end"
suzane.b
Member

Can we implement this with the sbRIO-9631?

pratik123
Member

Hello,

I need help with my project.

I am developing obstacle avoidance using advanced VFH, and I am using an ultrasonic sensor with the sbRIO starter kit. From advanced VFH I get vx, vy, and a command heading. Now I want to know how to calculate the velocity setpoint that is passed to Apply Velocity to Motors.vi so that the robot will avoid obstacles and reach the target.

Thank you.
