Perhaps the biggest challenge in robotics is the unpredictability of humanity. With an ever changing world, robot interactions cannot be preplanned if they are ever going to fit seamlessly into daily life. MIT is teaching robots to react to their environments, and to those humans around them, in ways that will work best in changing circumstances. Helen Clark reports:
The MIT team taught its robotic buddy to navigate crowds using a technique called reinforcement learning. At a basic level, the method puts the robot through a series of computer-simulated training scenarios designed to teach it how to deal with objects traveling at various speeds and trajectories, while taking note of the simulated people around it.
Simulation was also used to teach the robot to navigate while observing social norms, such as keeping to the right-hand side and walking at a pedestrian pace of 1.2 meters per second. When the robot then faces a room full of people in the real world, it recognizes situations it encountered during training and deals with them accordingly while observing those pedestrian rules.
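To make the training idea concrete, here is a minimal sketch of how a reward function for such a simulation might encode the social norms the article mentions (collision avoidance, keeping right, and a 1.2 m/s pace). The function and its arguments are hypothetical illustrations, not MIT's actual implementation.

```python
# Hypothetical reward shaping for crowd navigation, loosely following the
# social norms described in the article: avoid collisions, keep to the
# right-hand side, and stay near a 1.2 m/s pedestrian pace.
PEDESTRIAN_SPEED = 1.2  # m/s, the target pace cited in the article

def step_reward(collided, speed, lateral_offset, reached_goal):
    """Score one simulation step (all names here are illustrative).

    collided       -- True if the robot hit a person or obstacle
    speed          -- current robot speed in m/s
    lateral_offset -- signed distance from the corridor centerline (m);
                      positive means the robot is on the right-hand side
    reached_goal   -- True once the robot arrives at its destination
    """
    if collided:
        return -10.0                  # strong penalty; episode would end here
    reward = 0.0
    if reached_goal:
        reward += 10.0                # bonus for completing the route
    reward -= abs(speed - PEDESTRIAN_SPEED)  # penalize off-pace movement
    if lateral_offset > 0:
        reward += 0.1                 # small bonus for keeping right
    return reward
```

A learned policy trained against a signal like this would, over many simulated episodes, come to prefer collision-free, right-hand, pedestrian-pace trajectories.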
Outside of the computer, MIT describes its robot as a “knee-high kiosk on wheels.” It’s fitted with a variety of sensors, including a webcam, a depth sensor, and a high-resolution LIDAR sensor, that allow the robot to perceive its environment, and it employs open-source algorithms to help determine its position.
The sensors assess the environment around the robot every tenth of a second, allowing it to fluidly adjust its path on the go without the need to stop and calculate its best option.
“We’re not planning an entire path to the goal — it doesn’t make sense to do that anymore, especially if you’re assuming the world is changing,” comments graduate student Michael Everett, one of the co-authors of a paper on the research. “We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing.”
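The loop Everett describes can be sketched as repeatedly sensing, picking a velocity, and committing to it for a tenth of a second. The candidate-sampling policy below is a hypothetical stand-in for the team's learned policy; the 0.1 s cadence and the 1.2 m/s speed come from the article, while the 16-heading sweep and the 0.5 m clearance radius are illustrative assumptions.

```python
import math

CONTROL_PERIOD = 0.1  # seconds between sensor sweeps, per the article

def choose_velocity(robot_pos, goal, people, max_speed=1.2):
    """Pick a (vx, vy) velocity to follow for the next tenth of a second.

    Illustrative stand-in for a learned policy: sample candidate headings,
    discard any that would carry the robot too close to an observed person,
    and keep the one that makes the most progress toward the goal.
    """
    best, best_score = (0.0, 0.0), -math.inf  # default: stop if all blocked
    for i in range(16):                       # 16 evenly spaced headings
        angle = 2 * math.pi * i / 16
        vx = max_speed * math.cos(angle)
        vy = max_speed * math.sin(angle)
        nx = robot_pos[0] + vx * CONTROL_PERIOD  # position after one step
        ny = robot_pos[1] + vy * CONTROL_PERIOD
        if any(math.dist((nx, ny), p) < 0.5 for p in people):
            continue                          # would crowd a pedestrian
        score = -math.dist((nx, ny), goal)    # closer to goal is better
        if score > best_score:
            best, best_score = (vx, vy), score
    return best
```

Re-running this selection every 0.1 s, rather than planning a full path, is what lets the robot adjust fluidly as the crowd around it shifts.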
The scientists combined their unusual-looking robot with the reinforcement learning technique, and headed down to MIT’s Stata Center for a series of physical tests. The robot successfully navigated the winding, pedestrian-clogged hallways of the building for 20 minutes at a time, without bumping into a single person.
“We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that,” Everett says. “One time there was even a tour group, and it perfectly avoided them.”
The team plans to expand its research by examining how the robot fares when pedestrians move together in crowds rather than as individuals.
Read more here.