
NIWeek 2010 Robotic Swarm Demo: Driver Station User Interface

This page is part of the NIWeek 2010 Robotics Demo series of blog posts. For an introduction to the demo and links to other parts of the series, visit http://decibel.ni.com/content/docs/DOC-13031/.

Overview and Requirements

A major aspect of our demo was teleoperation of our robots, and we allowed any NIWeek attendee to walk up to our arena and drive. So, with four robots in our swarm, each of which could operate independently in three different modes, we needed to provide a user interface to allow anyone to control the robots. With that in mind, we tried to make our driver station UI intuitive enough that people who hadn't spent the past couple months immersed in the demo—namely everyone but us—could successfully use the driver stations.

Even though we put much emphasis on allowing users to operate the robots, we still needed to maintain some level of control over the demo. To accomplish this, we required that our NI demo operator, from his or her own user interface, be able to assign control of a robot to a particular driver station (for more information, refer to the Allowing Control by a Master Client section of the Data Communication post in this series). Only when the operator gives control of a robot to a driver station are its front panel controls enabled; once the driver station is enabled, users can select between different options for controlling that robot. And of course, the demo operator needed the ability to take back control of the robots from the driver stations.

As you might expect, the sort of extra consideration we gave to usability can result in more complex code in the background, as we'll soon show you.

Architecture

You can find all of the code for our driver station user interface at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

My Computer > Driver UI > NI_Robotics_Driver_UI.lvlib > Driver Station Main UI.vi

The driver station UI contains three main parts:

  • Video Feed—At the left of the driver station UI is the video feed from the Axis IP camera on the robot.
  • Map Display—At the right is the map of the robot arena the robots are programmed to explore. As they move through the arena, their IR rangefinders acquire data about obstacles (red) and open areas (white), and the map in the UIs updates.
  • Modes of Operation—Along the bottom of the UI are controls that allow users to start the robot moving, halt its motion, and change its mode of operation.

The following screenshot shows all of these elements; note that a simulated video feed appears where the live video feed normally displays.

[Screenshot: DriverUI.png, the driver station UI]

We'll discuss each of the three main parts of the UI in more detail in the sections that follow.

Acquiring a Robot's Video Feed

In order to teleoperate a robot, you need a "tele" feed. In our case, this was a video feed from the robot. We used the Axis M1011, an IP network camera, shown below.

[Photo: ph_m1011_m1011w_right_low.jpg, the Axis M1011 network camera]

The camera comes with a clamp you can use to mount it on the NI Starter Kit robot's servo motor (see Robot Recipe: Teleoperation Mode for NI Robotics Starter Kit for more information about mounting this camera on the Starter Kit robots). The camera clamp works well for most applications, but we wanted something that looked cleaner and would be fixed on the robot. So, we created a small, custom bracket (illustrated below) using a rapid prototyping machine.

[Illustration: bracket.png, the custom camera bracket]

When using a 2.4 GHz Wi-Fi connection, we had to limit how much data the cameras sent. To maintain network reliability, we restricted frame sizes to 5 KB, which made for a shoddy but acceptable video feed. However, when using the much faster 5 GHz 802.11n wireless router (refer to our Data Communications post for more information), we could easily stream back 640 x 480 images at the camera's maximum frame rate.
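For a rough sense of scale (our own back-of-the-envelope numbers, not from the original demo notes): assuming a rate of about 30 frames per second, 5 KB per frame works out to roughly 5 × 30 × 8 ≈ 1.2 Mbit/s per camera, so four robots streaming simultaneously would need close to 5 Mbit/s, a substantial share of the real-world throughput of a busy 2.4 GHz channel.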

If you install IMAQdx and the RT Vision software on the robot's Single-Board RIO, the board can acquire images from the IP camera and even perform vision processing. However, because we were already running so much software on the sbRIO, we accessed the camera feeds directly from each driver station, bypassing the sbRIO entirely. This was possible because the wireless router on each robot connects both the sbRIO and the IP camera to the wireless network hub, so the IP camera effectively appears on the same network as each driver station.

The camera loop in the Driver Station Main UI VI operates only on the active camera you select on the front panel of the driver station UI. If the camera session is invalid or an error occurs, the loop attempts to reconnect to the camera and keeps trying until it successfully resumes acquiring images.
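The reconnect pattern is easy to express in text form. Here is a minimal Python/OpenCV sketch of the same idea; the demo itself used IMAQdx sessions in LabVIEW, and the camera address below is hypothetical:

    import time
    import cv2  # illustrative stand-in for IMAQdx

    # Axis cameras typically serve MJPEG at /axis-cgi/mjpg/video.cgi;
    # the IP address here is made up.
    CAMERA_URL = "http://192.168.1.90/axis-cgi/mjpg/video.cgi"

    def camera_loop():
        cap = None
        while True:
            # (Re)open the session whenever it is missing or invalid.
            if cap is None or not cap.isOpened():
                cap = cv2.VideoCapture(CAMERA_URL)
                if not cap.isOpened():
                    time.sleep(1.0)  # back off, then keep trying
                    continue
            ok, frame = cap.read()
            if not ok:
                # Read failed: drop the session and fall through to reconnect.
                cap.release()
                cap = None
                continue
            cv2.imshow("Video Feed", frame)
            if cv2.waitKey(1) == 27:  # press Esc to stop
                break
        if cap is not None:
            cap.release()
        cv2.destroyAllWindows()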

Understanding the Map Display of the Arena

You can find most of the code for the map display in the driver station UI at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

My Computer > Driver UI > NI_Robotics_Map Display.lvlib > Draw Map.vi

Unexplored cells are colored black, while explored cells are marked on a scale from white to dark red based on the certainty that an obstacle occupies that area (our post on obstacle detection and avoidance is forthcoming). Robots, their goals, and their paths are added in the Draw Robot VI. Each robot's color corresponds to its color on the Driver Station Main UI VI.

[Figure: map.png, the map display of the arena]
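To make the coloring scheme concrete, here is an illustrative Python/NumPy sketch, not the actual Draw Map VI; it assumes a grid of obstacle certainties in [0, 1] with -1 as a made-up sentinel for unexplored cells:

    import numpy as np

    def grid_to_rgb(grid):
        """grid: 2-D float array; -1 = unexplored, 0..1 = obstacle certainty."""
        h, w = grid.shape
        rgb = np.zeros((h, w, 3), dtype=np.uint8)  # unexplored cells stay black
        explored = grid >= 0
        certainty = np.clip(grid[explored], 0.0, 1.0)
        # Shade explored cells from white (certainly open) to dark red
        # (certainly an obstacle).
        rgb[explored, 0] = (255 - 127 * certainty).astype(np.uint8)
        rgb[explored, 1] = (255 * (1 - certainty)).astype(np.uint8)
        rgb[explored, 2] = (255 * (1 - certainty)).astype(np.uint8)
        return rgb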

Switching between Modes of Operation

As mentioned previously, from the driver station you can set the robot to operate in one of three different modes:

  • Full Autonomous—The robot independently generates both the destination and the path it will take to get there. The robot also communicates (see our Data Communication post) with other robots to try to optimize the combined search effort. That is, robots in fully autonomous mode generate goals in unexplored areas in order to increase the amount of the map that is explored.
  • Semi Autonomous—Users at a driver station set the robot's destination by selecting a point on the map for the robot to navigate to, and the robot independently plans a path to get there.
  • Tele Op—Users at a driver station use a joystick to directly control where the robot moves. The joystick's inputs are translated into velocity commands for the robot's motors. The next section of this document contains information about how we used wireless controllers to drive our robots in tele op mode.

In all modes, obstacle avoidance and mapping code runs on each robot.  This means that even when you drive a robot in tele op mode, it cannot run into known obstacles.   Additionally, even in tele op mode, the robot maps obstacles it detects and communicates that information to the other robots. We'll tell you more about obstacle avoidance in a later post...
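In pseudocode terms, the per-mode flow looks something like the Python sketch below; every name in it is illustrative rather than taken from the demo code, but the key point is that obstacle avoidance and map updates sit outside the mode switch:

    from enum import Enum

    class Mode(Enum):
        FULL_AUTONOMOUS = 0  # robot picks its own goal and plans a path
        SEMI_AUTONOMOUS = 1  # user picks the goal; robot plans the path
        TELE_OP = 2          # user drives directly with the joystick

    def control_step(mode, robot, joystick):
        if mode is Mode.FULL_AUTONOMOUS:
            goal = robot.pick_unexplored_goal()
            cmd = robot.follow_path(robot.plan_path(goal))
        elif mode is Mode.SEMI_AUTONOMOUS:
            cmd = robot.follow_path(robot.plan_path(robot.user_goal))
        else:  # Mode.TELE_OP
            cmd = joystick.velocity_command()
        # These two steps run in every mode: the command is clamped so the
        # robot cannot hit a known obstacle, and new sensor data feeds the
        # shared map.
        cmd = robot.avoid_obstacles(cmd)
        robot.update_map_and_broadcast()
        return cmd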

Controlling Robots Wirelessly by Joystick

The messages generated to move a robot in tele op mode come from either a joystick or the keyboard. You can find the code for joystick control at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

My Computer > Driver UI > Joystick

There are three different flavors of Joystick.lvclass code: code for a differential steering joystick, a velocity rate joystick, and a joystick emulator. The emulator code accepts keyboard input rather than joystick input to create the tele op commands that drive the robot. The code for the differential steering and velocity rate joysticks is designed to work with the Logitech Cordless Rumblepad 2 (shown below), but with small modifications it can also work with other joysticks.

[Photo: Cordless-RumblePad-2-Game-pad-12btns.jpg, the Logitech Cordless Rumblepad 2]

A few more details about these two types of joystick code (both mappings are sketched after this list):

  • The differential steering joystick code maps the left and right joysticks to the robot's left and right motor velocities, respectively. The maximum velocities are set in the Initialize Joystick VIs.
  • The velocity rate joystick code maps the left joystick to the forward/reverse velocity of the robot, while the right joystick maps to the angular velocity of the robot and turns the robot left or right.
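Here is a minimal Python sketch of the two mappings, assuming stick axes normalized to [-1, 1] and made-up velocity limits (the real limits live in the Initialize Joystick VIs):

    V_MAX = 1.0        # assumed maximum wheel velocity, m/s
    W_MAX = 2.0        # assumed maximum angular velocity, rad/s
    HALF_TRACK = 0.2   # assumed half the wheel separation, m

    def differential_steering(left_y, right_y):
        """Left/right stick -> left/right wheel velocities directly."""
        return V_MAX * left_y, V_MAX * right_y

    def velocity_rate(left_y, right_x):
        """Left stick -> forward velocity, right stick -> turn rate,
        converted to wheel velocities for a differential-drive base."""
        v = V_MAX * left_y
        w = W_MAX * right_x
        return v - w * HALF_TRACK, v + w * HALF_TRACK

Notice that with velocity_rate, leaving the right stick centered gives a turn rate of zero, so both wheels receive exactly the same command and the robot tracks straight at any speed.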

Using the joystick mapping in the differential steering code, it is very difficult to drive the robot slowly in a straight line: you must move the two joysticks in near-perfect unison, and with 32,768 counts of resolution per axis, even a tiny mismatch between the sticks makes the robot drift. The mapping in the velocity rate code, however, allows much more precise control. Because it decouples the forward velocity from the angular velocity, you can precisely control the robot's motion at both low and high speeds.

One issue that arose toward the end of NIWeek was that the wireless controllers seemed to stop working or would behave unexpectedly. We never quite figured out what caused this behavior, so if you are using multiple controllers or operating in an environment with high RF interference, we recommend using wired controllers.

Driver Station Bonus: Goal Finding!

Even if you saw our demo live at NIWeek, you might not have seen a little Easter egg we programmed into the demo. If you click the Driver Station x string at the top of a particular driver station UI while it's running, a goal-finding routine activates and a few extra indicators appear at the bottom of the UI, as shown below.

[Screenshot: goalfind.png, the goal-finding indicators]

The goal-finding routine searches the video feed coming from the robot controlled by that station for three different symbols: a stop sign, the LabVIEW logo, and the NI Eagle logo. The routine relies primarily on pattern matching to find the different symbols. You can find the code for this feature at (path as viewed in NIWeek 2010 Robotics Demo.lvproj):

My Computer > Driver UI > Goal Finding.lvclass >  Search For Goals.vi

We first developed these algorithms using NI Vision Assistant and then adapted them slightly to optimize performance.
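Outside of LabVIEW, you can get the same flavor of result with normalized cross-correlation template matching. The sketch below uses Python and OpenCV instead of the NI Vision pattern-matching VIs the demo used, and the template file names are hypothetical:

    import cv2

    TEMPLATES = {
        "stop sign": "stop_sign.png",
        "LabVIEW logo": "labview_logo.png",
        "NI Eagle logo": "ni_eagle.png",
    }
    MATCH_THRESHOLD = 0.8  # assumed score above which we report a detection

    def search_for_goals(frame_gray):
        """Return (symbol, location, score) for each symbol found in one
        grayscale video frame."""
        found = []
        for name, path in TEMPLATES.items():
            template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            scores = cv2.matchTemplate(frame_gray, template,
                                       cv2.TM_CCOEFF_NORMED)
            _, best, _, loc = cv2.minMaxLoc(scores)
            if best >= MATCH_THRESHOLD:
                found.append((name, loc, best))
        return found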
