Wireless wearable sensor system could map disaster zones in real-time (Wired UK)

MIT computer scientists have published a paper detailing how they created a prototype Kinect-style sensor system that picks up visual data and translates it into a map, building an image of the world around a wearer instantaneously.

The team intends for the system, which uses a technique known as simultaneous localisation and mapping (SLAM), to be used in disaster zones, where emergency services need to quickly build up a picture of the environmental context. “Our work is motivated by rapid response missions by emergency personnel, in which the capability for one or more people to rapidly map a complex indoor environment is essential for public safety,” they explain in the paper, which is on the agenda for the 2012 International Conference on Intelligent Robots and Systems in October.

The system, worn like a backpack strapped to the user’s body, consists of a laser rangefinder and a series of sensors, including accelerometers, gyroscopes, a Kinect depth sensor and a barometer, all mounted on a piece of plastic. The accelerometers and gyroscopes help combat issues that arise when humans traverse an irregular surface: by recording things like altitude and when the laser is tilted, a more accurate map can be devised. (The system does, however, have to remain within ten degrees of horizontal if the laser is to function and map out the surroundings accurately.) The laser sweeps the vicinity in a 270-degree arc and calculates how long it takes for each pulse of light to return. In this way, it can calculate the distance between the wearer and nearby physical structures.
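
To make those two ideas concrete, here is a minimal Python sketch of how a time-of-flight range reading and an IMU-based tilt correction might look. The constants, function names and the simple one-axis correction are illustrative assumptions, not details taken from the MIT paper.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to an obstacle: the pulse travels out and back."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def level_corrected_point(r: float, beam_angle: float, pitch: float):
    """Project a tilted scan point back onto the horizontal plane.

    r          -- measured range in metres
    beam_angle -- beam direction within the 270-degree sweep, in radians
    pitch      -- sensor pitch from the gyroscopes, in radians
                  (the system assumes this stays within ~10 degrees)
    """
    # Point in the sensor's own scan plane.
    x, y = r * math.cos(beam_angle), r * math.sin(beam_angle)
    # A pitch rotation about the y-axis foreshortens x when the
    # projection is flattened back onto the horizontal plane.
    return x * math.cos(pitch), y

if __name__ == "__main__":
    r = range_from_time_of_flight(33.4e-9)  # ~5 m round trip
    print(round(r, 2), "m")
    print(level_corrected_point(r, math.radians(45), math.radians(8)))
```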

There is, of course, also a camera fitted to the system, which snaps a photo every few metres as the test subject walks, feeding the information to a laptop, which then uses special software to extract several hundred features of note, including a location’s colour pattern and contours. As the wearer walks through an environment, they can also push a button to flag up an important location; in the future, this feature will be upgraded to include voice or text tags that could pinpoint toxic spills or other hazards worth noting or avoiding. All the data is processed by an algorithm running on a laptop carried in the backpack.
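
As a rough illustration of the flagging feature, the sketch below logs a button press as a waypoint carrying the current pose estimate and an optional note. The data structure and field names are assumptions made for illustration, not the team’s actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    x: float          # estimated position in the map frame, metres
    y: float
    floor: int        # floor index, e.g. from barometric altitude
    note: str = ""    # future versions could carry voice/text tags

@dataclass
class MissionLog:
    waypoints: list = field(default_factory=list)

    def flag(self, x, y, floor, note=""):
        """Record the spot the wearer flagged with the button press."""
        self.waypoints.append(Waypoint(x, y, floor, note))

log = MissionLog()
log.flag(12.4, -3.1, floor=2, note="possible toxic spill")
for wp in log.waypoints:
    print(wp)
```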

“The operational scenario that was envisioned for this was a hazmat situation where people are suited up with the full suit, and they go in and explore an environment,” says Maurice Fallon, an MIT computer scientist and the paper’s lead author. “The current approach would be to textually summarise what they had seen afterward — ‘I went into this room on the left, I saw this, I went into the next room,’ and so on. We want to try to automate that.”

SLAM technology is typically integrated into robotic systems. However, in a real-world environment where terrain is tricky to navigate and current robots might struggle, emergency workers already out in the field could wear the system to build up a picture of what a city is dealing with and prepare accordingly. MIT has already proved the system works: during test runs on campus, the data was wirelessly fed to a laptop that built up the map in real time, and the sensors were able to differentiate between the floors of a building as the subject walked around.
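
That floor differentiation could, in principle, come straight from the barometer. Below is a hedged sketch of the idea: converting pressure to altitude with the standard barometric formula and quantising the result into a floor index. The reference pressure and the assumed 3.5-metre floor height are generic placeholders, not values from the paper.

```python
def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Standard barometric formula: pressure (hPa) to altitude (metres)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_index(altitude_m: float, ground_altitude_m: float,
                floor_height_m: float = 3.5) -> int:
    """Quantise altitude above the entrance into a floor number."""
    return round((altitude_m - ground_altitude_m) / floor_height_m)

ground = altitude_from_pressure(1013.25)    # wearer enters at street level
upstairs = altitude_from_pressure(1012.4)   # slightly lower pressure higher up
print(floor_index(upstairs, ground))        # -> 2
```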

“What they definitely tackled is the problem of height and dealing with staircases, as the human walks up and down,” commented Wolfram Burgard, a computer science professor at the University of Freiburg in Germany who was not involved in the study. “The sensors are not always straight, because the body shakes. These are problems that they tackle in their approach, and where it actually goes beyond the standard 2-D SLAM.”

To make the system even more robust, the team are considering using a foot-mounted inertial measurement system that could produce additional spatial readings. Currently, the greatest problem facing the system is that of false loop closures: a loop closure occurs when the individual returns to a location they have visited before, and a false closure is registered when the system wrongly concludes that this has happened. The issue could be remedied by taking more images and instructing the system to run more comparisons based on that information, but a foot-mounted system would also help.
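
The sketch below shows one toy way that “more images, more comparisons” can suppress false closures: candidate matches are checked by colour-histogram similarity, and a closure is only declared when every recent frame agrees. The similarity measure and threshold are illustrative assumptions, not the algorithm described in the paper.

```python
def colour_histogram(pixels, bins=8):
    """pixels: iterable of (r, g, b) tuples with 0-255 channels."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    n = 0
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
        n += 1
    return [h / n for h in hist] if n else hist

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def is_loop_closure(current_frames, stored_frames, threshold=0.8):
    """Only accept a closure if every recent frame matches its counterpart."""
    return all(
        similarity(colour_histogram(a), colour_histogram(b)) >= threshold
        for a, b in zip(current_frames, stored_frames)
    )

# Two snapshots of the same wall should fall into the same coarse colour bins.
frame_a = [(200, 40, 40)] * 100
frame_b = [(198, 42, 41)] * 100
print(is_loop_closure([frame_a], [frame_b]))  # -> True
```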

The fact that the US Air Force and the country’s Office of Naval Research funded the work suggests its real-world use in disaster zones could be imminent, once the tech is refined further.