AI capable of handling the unexpected

A team of Tufts and UT Dallas researchers collaborates on a framework that lets AI models perform successfully in unpredictable environments.

By Cansu Birsen, E25

A team of researchers from Tufts University and the University of Texas at Dallas recently published a paper on a cognitive architecture framework in the prestigious journal Artificial Intelligence. The team’s work aims to advance the field of artificial intelligence (AI) by developing agents that can handle novel and unexpected elements in their environments.

The paper introduces a framework for AI agents to operate in “open-world” environments: dynamic settings where new objects, agents, and events may arise, often contradicting previous assumptions. Traditional AI systems are designed for “closed-world” environments, in which the world is assumed to be fully known and static. This design limits how well traditional models generalize to diverse situations and reflect the complexity of real life.

To address these challenges, the paper combines symbolic planning, which uses formal representations and logical reasoning; counterfactual reasoning, which considers hypothetical scenarios; reinforcement learning, in which the AI model learns by interacting with its environment; and deep computer vision, which uses deep learning models such as neural networks to interpret visual data. Integrating these methods creates a hybrid cognitive architecture that lets agents complete tasks despite unforeseen changes, applying inference and machine learning to new situations. The researchers tested their framework in simulated environments and found that the agents could efficiently complete tasks in open-world settings.
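To make that division of labor concrete, the sketch below shows one way such a hybrid loop could be organized: a vision module lifts raw observations into symbols, a symbolic planner acts while the scene matches the agent's model of the world, and a reinforcement-learning fallback takes over when something unmodeled appears. This is an illustrative sketch based on the description above, not code from the paper; every class, function, and name in it is a hypothetical placeholder.

```python
# Hypothetical sketch of a hybrid neurosymbolic agent loop.
# Not the authors' implementation; all names are placeholders.

from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Symbolic knowledge the agent currently believes to be true."""
    known_objects: set = field(default_factory=lambda: {"key", "door"})
    # Symbolic operators for the planner: action -> (precondition, effect)
    operators: dict = field(default_factory=lambda: {
        "pick_up(key)": ("at(key)", "holding(key)"),
        "open(door)": ("holding(key)", "open(door)"),
    })


def perceive(observation: list[str]) -> set[str]:
    # Stand-in for a deep vision module that lifts raw input to symbols.
    return set(observation)


def symbolic_plan(facts: set[str], goal: str, model: WorldModel) -> list[str]:
    # Trivial stand-in for a symbolic planner: chain operators until one
    # achieves the goal. A real planner would search over preconditions.
    plan = []
    for action, (_pre, effect) in model.operators.items():
        plan.append(action)
        if effect == goal:
            return plan
    return []


def rl_fallback(facts: set[str], goal: str) -> str:
    # Stand-in for reinforcement-learning exploration, used when the
    # symbolic model cannot explain or handle the current situation.
    return "explore"


def agent_step(observation: list[str], goal: str, model: WorldModel) -> str:
    facts = perceive(observation)
    # Novelty check: any perceived object the model does not know about.
    novelties = {f for f in facts
                 if f.split("(")[-1].rstrip(")") not in model.known_objects}
    if novelties:
        return rl_fallback(facts, goal)        # open-world path
    plan = symbolic_plan(facts, goal, model)   # closed-world path
    return plan[0] if plan else rl_fallback(facts, goal)


if __name__ == "__main__":
    model = WorldModel()
    print(agent_step(["at(key)"], "open(door)", model))   # follows the symbolic plan
    print(agent_step(["at(lava)"], "open(door)", model))  # novelty triggers the fallback
```

The routing on detected novelty is the point of the example: the agent plans symbolically while its model explains the scene and switches to learning when it does not, which is the behavior the paper's hybrid architecture is designed to support.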

The research team included numerous members of the Department of Computer Science at Tufts. PhD students Shivam Goel and Panagiotis Lymperopoulos shared the role of first author. Research staff members Ravenna Thielstrom and Evan Krause; Tufts PhD students Patrick Feeney, Pierrick Lorang, and Sarah Schneider; alum Yichen Wei, A23; and faculty members Assistant Professor Michael Hughes, Associate Professor Liping Liu, Associate Professor Jivko Sinapov, and Karol Family Applied Technology Professor Matthias Scheutz also collaborated on the research. Two authors from the University of Texas at Dallas, Research Professor Eric Kildebeck and lead web developer Stephen Goss, performed the evaluation.

The work has broader implications in fields where AI systems are deployed in complex, changing environments, such as autonomous vehicles, robotics, and intelligent assistants. By enabling agents to recognize and adapt to new information in real time, this framework pushes the boundaries of AI's ability to function in dynamic, real-world settings—ultimately moving towards more robust, intelligent systems capable of handling the unknown.

Read the full paper, titled "A neurosymbolic cognitive architecture framework for handling novelties in open worlds," in Artificial Intelligence.
