2023–24 Projects:
When I was at Grace Hopper last year, I went to a presentation by one of my favorite robotics researchers at Carnegie Mellon, Manuela Veloso. She always brings videos of her latest robots and what they are up to, and this time she showed a video of a robot that escorts visitors to the correct meeting rooms at CMU. The video is unfortunately not yet public, but the same idea has been used in other projects too, studying human-robot interactions.
One example of the same principle in action is Pearl, a helper robot in an assisted-living facility. Pearl escorts residents to their physical therapy appointments, as the project's demo video shows.
We don't have humanoid robots in the CMC (yet), but we do have smaller robots at the ready that we can use for the same purpose.
Robot motion in general is an interesting and challenging computer science problem. Given limited sensory input about its environment and a fixed amount of processing power, how can a robot figure out where it is, what its environment looks like, and, most importantly, how to get to where it's going? How can the robot deal with obstacles in its path, first by learning what an obstacle is and second by determining how to get past it (if possible)? How can a robot learn to recognize the difference between a doorway and a stairway, and keep itself from harm? How can a robot deal with imperfect information about its surroundings (incomplete pictures, lighting and terrain changes, etc.) and still keep its footing? Answering such questions requires knowledge of computer vision, image processing, wireless networking, motion planning, and artificial intelligence.
Robots that can successfully navigate and map their environments have many uses. In addition to the helper roles mentioned above, they can provide companionship to the differently-abled (sort of like a guide-dog robot), survey and assess damaged or dangerous areas after a disaster, and collaborate on tasks, whether building a model or kicking a soccer ball.
Your goal in this project is to "teach" a small robot to successfully navigate, or guide itself, to various locations on the third floor CMC, from Mike Tie's office to the computer labs to the faculty offices. By the end of the project, the robot should be able to escort a member of your team from a starting point on the third floor (such as the student lounge or the stairwell) to a given end point (such as Mike Tie's office). The robot must do so using both its knowledge of the third floor CMC's terrain and layout and its limited sensors (camera, laser, infrared, etc.).
This project will involve several pieces:
Mapping of the third floor CMC. In order for the robot to navigate the third floor CMC successfully, it must understand the floor's layout. In this part of the project, you will come up with a representative map, or model, of the third floor CMC that can be conveyed to the robot in a compact and understandable format (see the occupancy-grid sketch after this list). This mapping can be done independently, on a separate server, or with the cooperation of the robot.
"Landmarking" the third floor CMC. Part of mapping/modeling the third floor CMC involves identifying visual landmarks to guide the robot on its journey, and training the robot to recognize these landmarks. These could be existing physical entities or something like stickers on the wall.
Path planning. In this stage of the project, you will teach the robot how to navigate from a starting point to an ending point by indicating the number and type of landmarks the robot should look for, the distances it should travel, and so on (see the breadth-first-search sketch after this list).
Image processing and computer vision. The main sensory input the robot has is its camera. You will implement and apply image processing and computer vision algorithms, such as thresholding and edge detection, to help the robot both avoid obstacles and determine its current location (see the sketch after this list).
Development of a command center. Because the robot's processing power is limited, you will also develop a "helper" that maps the terrain and communicates the robot's "marching orders". The helper will not drive the robot directly; rather, it will do the path planning and relay the results (distance, number of landmarks, etc.) to the robot, which will then complete the path traversal (see the command-center sketch after this list).
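To make these pieces more concrete, here are a few illustrative sketches in Python. None of this is required code; the numbers, layouts, and names are invented for illustration. First, the mapping piece: one common, compact map representation is an occupancy grid, where each cell records whether a small patch of floor is passable.

```python
# A minimal occupancy-grid map: 0 = free floor, 1 = wall/obstacle.
# Each cell might represent, say, a 0.5 m x 0.5 m patch of hallway.
# The layout below is invented, not a real CMC floor plan.

FREE, WALL = 0, 1

# A tiny 5 x 7 "hallway with a side room" grid.
grid = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
]

def neighbors(cell):
    """Yield the free cells reachable in one step (no diagonals)."""
    row, col = cell
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == FREE:
            yield (r, c)
```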
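For the landmarking piece, a simple first pass at recognizing a brightly colored sticker is color thresholding: keep the pixels whose color falls in a target range, and take the blob's centroid as the landmark's position in the image. The RGB bounds and minimum blob size below are placeholder guesses you would tune for your actual stickers and lighting.

```python
import numpy as np

def find_landmark(image, lo=(200, 0, 0), hi=(255, 80, 80)):
    """Return the (row, col) centroid of pixels inside the RGB range
    [lo, hi], or None if too few pixels match.

    `image` is an H x W x 3 uint8 array. The default bounds pick out
    a "mostly red" sticker; they are placeholders, not tuned values.
    """
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((image >= lo) & (image <= hi), axis=-1)
    if mask.sum() < 50:   # too small to be a landmark; 50 is a guess
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```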
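For the path-planning piece, breadth-first search over an occupancy grid finds a shortest sequence of cells between two locations; the command center can then translate that sequence into turn-and-distance instructions. A minimal sketch, reusing grid and neighbors from the mapping example:

```python
from collections import deque

def shortest_path(start, goal):
    """Breadth-first search over the occupancy grid; returns a list of
    cells from start to goal, or None if the goal is unreachable."""
    frontier = deque([start])
    came_from = {start: None}   # maps each visited cell to its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []           # walk the predecessor chain back to start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for nxt in neighbors(cell):
            if nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

# e.g., shortest_path((1, 1), (3, 3)) routes down the hallway and into
# the side room through the gap at (2, 5).
```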
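For the image-processing piece, the two algorithms named above look roughly like this in plain NumPy. Thresholding splits pixels into light and dark; Sobel edge detection finds sharp brightness changes, which often mark doorframes and wall corners. A real project would likely lean on a library such as OpenCV, but the underlying arithmetic is just this:

```python
import numpy as np

def threshold(gray, t=128):
    """Binarize a grayscale image: True where pixel >= t."""
    return gray >= t

def sobel_edges(gray):
    """Approximate edge strength with 3x3 Sobel filters."""
    gray = gray.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # correlate by shifting; no SciPy needed
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)    # gradient magnitude; large = likely edge
```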
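Finally, the command-center piece is, at heart, a tiny network protocol: the helper computes the route and ships the robot a compact list of instructions. The JSON message format and the robot's address below are entirely made up; the real format will be dictated by whatever robot and wireless setup you end up using.

```python
import json
import socket

# Hypothetical instruction list the command center might send; both the
# fields and the values are invented for illustration.
orders = [
    {"action": "forward", "distance_m": 4.0},
    {"action": "turn", "degrees": -90},
    {"action": "forward", "distance_m": 2.5, "watch_for": "red sticker"},
]

def send_orders(robot_host, robot_port, orders):
    """Open a TCP connection to the robot and ship it the marching orders."""
    with socket.create_connection((robot_host, robot_port)) as conn:
        conn.sendall(json.dumps(orders).encode("utf-8"))

# e.g., send_orders("robot.local", 9999, orders)   # placeholder address
```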
In the fall, you'll work with a librarian to do a thorough literature search to find out what others have done in this area. In the meantime, here are a few relevant references.
L. Shapiro. Computer Vision. Upper Saddle River, NJ: Prentice Hall, 2001.
B. Horn. Robot Vision. Cambridge, MA: MIT Press, 1987.
Y. Amit. 2D Object Detection and Recognition: Models, Algorithms, and Networks. Cambridge, MA: MIT Press, 2002.