Tomorrow’s Engineers: Tyler Frasca

Tyler Frasca, Human-Robot Interaction PhD candidate, is teaching robots to learn and complete tasks through natural language.
“That’s something that I really like about these cognitive robotic architectures—the widespread applications; they’re not necessarily specific to a single problem,” says Tyler Frasca, left, with Matthias Scheutz. Photo: Alonso Nichols

Tyler Frasca came to Tufts from Wentworth Institute of Technology to pursue graduate studies in human-robot interaction. He was the lead on the Tufts team for the 2017 NASA Space Robotics Challenge, in which Tufts was one of 20 finalists out of 93 competing teams.

What I’ve always enjoyed is taking things apart and putting them back together, and being able to innovate on different ideas. Growing up I was always taking things apart. Once I built a little device mounted next to my bed; it had two strings wrapped around it that attached to the light switch on the wall. I was able to sit in bed and turn the light on and off without having to get up.

More recently, while at Wentworth, my friend and I designed and programmed a hexapod—or six-legged—robot. I was just like, “Wow.” I was able to build this awesome little robot and program it to walk on its own. It was fascinating that I could create things that could be self-sufficient.

So, solving problems—especially ones that help other people, including yourself, do things that you wouldn’t normally be able to do—that’s what I love about engineering.

My highlight experiences at Tufts have been working with Professor Matthias Scheutz and the team in the Human-Robot Interaction Lab. I remember the first time I taught one of our NAO robots how to dance, in the sense that it raised its arms, squatted down, and then stood back up. It was a lot of fun to see my work in action: a system I designed that allows the robot to learn new tasks.

Our work on humanoid robot capabilities for the NASA competition was a highlight, too. Ever since, I’ve been working on teaching robots through natural language—being able to verbally explain a task to a robot instead of having to program it explicitly.

What we’re trying to do is develop robots that learn new tasks or action sequences. We equip them with an initial vocabulary and an understanding of phrases, so that they can learn new words on the fly through reasoning and inference.
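As a rough sketch of how that kind of one-shot task learning from a verbal instruction can work, here is a minimal Python example. Everything in it is an illustrative assumption rather than the lab’s actual architecture: the Agent class, the phrase-to-action table standing in for the initial vocabulary, and the very naive string parsing. A real system would ground the words in perception and reason or ask clarifying questions when it hits an unknown step.

```python
# Hypothetical illustration of learning a task from a verbal instruction.
# The class, method names, and parsing are assumptions for this sketch,
# not the actual cognitive robotic architecture described in the article.

class Agent:
    def __init__(self):
        # Initial vocabulary: phrases the robot already knows how to execute.
        self.known_actions = {
            "raise your arms": lambda: print("  [raising arms]"),
            "squat down": lambda: print("  [squatting down]"),
            "stand back up": lambda: print("  [standing back up]"),
        }

    def learn_task(self, instruction):
        """Learn a new task from 'To <name>, <step>, then <step>, ...'."""
        head, _, body = instruction.partition(",")
        name = head.removeprefix("To ").strip()
        # Very naive parsing: split the remainder on commas and "then".
        steps = [s.strip() for s in body.replace("then", ",").split(",") if s.strip()]
        unknown = [s for s in steps if s not in self.known_actions]
        if unknown:
            # A real system would reason, infer, or ask for clarification here.
            raise ValueError(f"I don't know how to: {unknown}")
        # Store the new task as a sequence of known actions.
        self.known_actions[name] = lambda: [self.known_actions[s]() for s in steps]

    def do(self, phrase):
        self.known_actions[phrase]()

agent = Agent()
agent.learn_task("To dance, raise your arms, then squat down, then stand back up")
agent.do("dance")  # executes the three primitive actions in order
```

One design point worth noting: because the learned task goes back into the same vocabulary table as the primitives, “dance” immediately becomes a word the robot can reuse as a step in later instructions.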

My dream job would be to have my own robotics and artificial intelligence company. I have had some interest in assistive home care robots. Another side of me is also very interested in space exploration, so I’m a little bit torn between those two applications.

That said, a lot of the internal pieces in the robotic architectures can definitely be applicable to both, and that’s something that I really like about these cognitive robotic architectures—the widespread applications; they’re not necessarily specific to a single problem.

This excerpt is from “Who Are Tomorrow’s Engineers? Meet Five with Big Ideas” by Laura Ferguson, Tufts Now.

Department: Computer Science