Inspired to Engineer a Better Future: Andre Cleaver

Andre Cleaver, EG18, EG23, investigates ways to improve human-robot interactions with augmented reality.
“As robots increasingly become integrated into our lives, we have to focus on improving those interactions,” says Andre Cleaver, EG18, EG23. Photo: Alonso Nichols

Optimism and curiosity are twin engines that drive the imagination of young engineers at Tufts, whose professors nurture skills, knowledge, and social awareness to help translate their visions into real-world applications. This year, as Engineers Week celebrates the theme of “Reimagining the Possible,” Tufts Now reached out to five School of Engineering undergraduates and graduate students who are bringing energy and big ideas to a changing world.

Andre Cleaver, EG18, EG23

Andre Cleaver, a Ph.D. candidate in mechanical engineering, looks closely at how to improve human-robot interactions through augmented reality (AR), a fusion of the real and virtual worlds that superimposes computer-generated information over a view of actual places and structures. His graduate studies with Jivko Sinapov, James Schmolze Assistant Professor in Computer Science, explore how AR can help robots convey to people how they perceive the world. Instead of thousands of lines of numbers scrolling by in a terminal window, information is presented as simple shapes and colors so that people can connect what they see with what the robot “sees.”
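To make that translation concrete, here is a minimal sketch of the idea the paragraph above describes, assuming a ROS-based robot (a common setup in human-robot interaction research; the article does not name the actual software stack). Raw laser-scan ranges are republished as colored spheres that an AR client or a tool like RViz can render, with color standing in for distance. The topic names and the color mapping are illustrative choices, not Cleaver’s code.

```python
# Illustrative sketch (not the study's actual implementation): instead of
# printing raw range numbers to a terminal, publish each laser-scan reading
# as a colored sphere that an AR client or RViz can render in 3D space.
import math

import rospy
from sensor_msgs.msg import LaserScan
from visualization_msgs.msg import Marker, MarkerArray


def scan_to_markers(scan: LaserScan) -> MarkerArray:
    markers = MarkerArray()
    for i, r in enumerate(scan.ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # skip readings with no return
        angle = scan.angle_min + i * scan.angle_increment
        m = Marker()
        m.header = scan.header  # keep the scan's frame and timestamp
        m.ns, m.id = "scan_points", i
        m.type, m.action = Marker.SPHERE, Marker.ADD
        m.pose.position.x = r * math.cos(angle)
        m.pose.position.y = r * math.sin(angle)
        m.pose.orientation.w = 1.0
        m.scale.x = m.scale.y = m.scale.z = 0.05  # 5 cm dots
        # Color encodes distance: red = near obstacle, green = far away.
        near = max(0.0, min(1.0, 1.0 - r / scan.range_max))
        m.color.r, m.color.g, m.color.b, m.color.a = near, 1.0 - near, 0.0, 0.9
        markers.markers.append(m)
    return markers


if __name__ == "__main__":
    rospy.init_node("scan_visualizer")
    pub = rospy.Publisher("scan_markers", MarkerArray, queue_size=1)
    rospy.Subscriber("scan", LaserScan, lambda s: pub.publish(scan_to_markers(s)))
    rospy.spin()
```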

Why augmented reality: I was getting a master’s in mechanical engineering when I first saw a demonstration of what AR could do. And I thought: Wow. I was hooked. Robots can communicate with other robots easily, but a robot communicating with a human is where the challenge lies. What’s so powerful about AR is that you can render pretty much any visualization that you want, anywhere, anytime. It’s an emerging field. Say we want to communicate that a robot is simply going to travel down the hallway and turn left; how do we do that? What do we show, exactly? One of the options that we came up with is just a simple dotted line on the ground. But do we show that as colorful markers or blinkers? Do we show only the destination point, or everything in between? These are the kinds of complex questions I find fascinating as we think about how a robot understands the physical world. You can show people what they were not able to see in the past: sonar waves, sound visuals, a laser scan. So it’s exciting to know I’m helping develop tools that expand our visual world and our experience of it.
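The dotted-line idea lends itself to a short sketch as well. Under the same assumed ROS-style setup, a planned route can be published as a row of dots that an AR headset or RViz could overlay on the floor. The waypoints below, a straight run down a hallway followed by a left turn, are invented for the example; none of this is taken from the study’s code.

```python
# Hedged sketch of the "dotted line on the ground" idea: render a robot's
# planned path as a row of dots an AR display could project onto the floor.
import rospy
from geometry_msgs.msg import Point
from visualization_msgs.msg import Marker


def path_to_dots(waypoints, frame_id="map") -> Marker:
    m = Marker()
    m.header.frame_id = frame_id
    m.header.stamp = rospy.Time.now()
    m.ns, m.id = "planned_path", 0
    m.type, m.action = Marker.SPHERE_LIST, Marker.ADD
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.08  # dot diameter in meters
    m.color.r, m.color.g, m.color.b, m.color.a = 0.1, 0.6, 1.0, 1.0
    m.points = [Point(x=x, y=y, z=0.0) for x, y in waypoints]
    return m


if __name__ == "__main__":
    rospy.init_node("path_dots")
    pub = rospy.Publisher("path_dots", Marker, queue_size=1, latch=True)
    # Hypothetical plan: down the hallway, then turn left.
    plan = [(i * 0.5, 0.0) for i in range(10)] + [(4.5, j * 0.5) for j in range(1, 6)]
    pub.publish(path_to_dots(plan))
    rospy.spin()
```

Whether those dots should blink, change color, or show only the destination is exactly the kind of design question Cleaver describes.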

A big idea: As robots increasingly become integrated into our lives, we have to focus on improving those interactions. As people and robots share space, we need to make those interactions smoother and more effective; with that progress, people will have more trust in robots than they do now. The general public thinks robots are going to take over the world, or that a robot is going to be like the Terminator. Robots appear to operate in their own world, and that leads to the question: Is this robot something I need to worry about? To me that view is very limiting. But how about imagining a robot that can communicate with you in a friendly way? That would open up the potential for new human-robot interactions.

Why progress matters: One area where I think we’ll see AR features combined with how we live in the future relates to the rise of autonomous vehicles. With an autonomous car, a pedestrian doesn’t see anybody behind the wheel, so how does that pedestrian know when it’s a good time to, say, safely cross the street? One of the things that I would like to explore is whether we can augment the vehicle with indicators that clearly communicate the car is coming to a complete stop, so that the pedestrian knows it is safe to cross.

I worked on another practical application in a past internship. Say you’re exploring an unknown building that is believed to contain some hazardous material, something radioactive, for example. You can’t see radioactive material. But with AR we could render a boundary zone saying, “This area is receiving harmful levels of radiation.” So instead of relying on expensive detection equipment, we can visualize dangerous levels directly. I think we’re just beginning to recognize the beauty of what AR can do.

To see Cleaver’s AR projects, check out his social media posts.
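The radiation example can also be sketched, speculatively, in the same ROS-style marker pipeline. The sketch below assumes a simple inverse-square dose falloff; the source strength, threshold, and positions are made-up numbers, not anything from Cleaver’s internship work. It computes the radius where the dose crosses a safety threshold and publishes a translucent cylinder that an AR display could render as a keep-out zone.

```python
# Speculative sketch of an AR radiation boundary: given a hazard source and
# a safe-dose threshold, publish a translucent cylinder marking the zone
# where the modeled dose exceeds the threshold. All numbers are illustrative.
import math

import rospy
from visualization_msgs.msg import Marker


def hazard_zone(x, y, source_strength, safe_dose, frame_id="map") -> Marker:
    # Inverse-square model: dose(r) = source_strength / r^2, so the boundary
    # where dose == safe_dose sits at radius sqrt(source_strength / safe_dose).
    radius = math.sqrt(source_strength / safe_dose)
    m = Marker()
    m.header.frame_id = frame_id
    m.header.stamp = rospy.Time.now()
    m.ns, m.id = "radiation_zone", 0
    m.type, m.action = Marker.CYLINDER, Marker.ADD
    m.pose.position.x, m.pose.position.y, m.pose.position.z = x, y, 1.0
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = 2.0 * radius  # cylinder diameter
    m.scale.z = 2.0                       # a 2-meter-tall "wall"
    # Translucent red so the real scene stays visible behind the overlay.
    m.color.r, m.color.g, m.color.b, m.color.a = 1.0, 0.2, 0.0, 0.35
    return m


if __name__ == "__main__":
    rospy.init_node("radiation_overlay")
    pub = rospy.Publisher("hazard_zone", Marker, queue_size=1, latch=True)
    pub.publish(hazard_zone(x=3.0, y=1.5, source_strength=40.0, safe_dose=2.5))
    rospy.spin()
```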

This excerpt is from "Inspired to Engineer a Better Future" by Laura Ferguson, Tufts Now.