Securing the Future of AI with Research, Collaboration, and Discussion
Over the span of just a few years, ChatGPT became one of the world’s most popular websites, offering a glimpse into a future in which many people may use generative AI without thinking twice. But as AI systems readily generate answers to almost any query, cybersecurity researchers like Ph.D. student Nicole Meng of the Department of Electrical and Computer Engineering are investigating the safety of generative AI by studying how it responds to various security threats.
“Because it’s so new, and so widely used, understanding and addressing the security vulnerabilities of generative AI is crucial to protecting user safety and privacy—especially as more and more people and companies rely on these new systems,” explained Meng.
In the Cybersecurity and Knowledge Computing Lab at Tufts, Meng’s research focuses on the security of generative AI vision models that create and reconstruct images based on text prompts. Her research inspired the creation of a workshop at Computer Vision and Pattern Recognition (CVPR) 2026, an upcoming international conference bringing together leading researchers and industry labs to discuss the future of computer vision and AI. Titled “SPAR-3D: Security, Privacy, and Adversarial Robustness in 3D Generative Vision Models,” her workshop aims to unite leaders in 3D vision research to discuss security and privacy considerations that could make modern models safer.
The threat of AI vision model attacks
AI vision models can be useful, but they come with vulnerabilities. For example, instead of requiring employees to swipe a card to access a building, an AI vision model could be trained to recognize images of people authorized to enter. But these systems can be tampered with using backdoor attacks: an attacker could plant a hidden trigger so that, for example, anyone wearing glasses of a specific color or a particular shirt pattern is granted access.
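To make the threat concrete, here is a minimal sketch of how a backdoor trigger operates. It is illustrative only, not code from Meng’s research, and the `access_model` it references is a hypothetical, backdoored face-recognition classifier: a poisoned model behaves normally on clean images but flips its decision whenever a small trigger patch appears in the frame.

```python
import numpy as np

def add_trigger(image: np.ndarray, patch: np.ndarray, x: int = 0, y: int = 0) -> np.ndarray:
    """Paste a small trigger patch (e.g., a colored-glasses pattern) onto an image.

    A backdoored access-control model would label any image containing this
    patch as "authorized," regardless of who is actually pictured.
    """
    out = image.copy()
    h, w = patch.shape[:2]
    out[y:y + h, x:x + w] = patch
    return out

# Hypothetical usage (access_model and load_image are assumed, not real libraries):
# clean = load_image("visitor.png")       # unknown visitor -> access denied
# triggered = add_trigger(clean, patch)   # trigger present -> backdoor grants access
```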
Vision models can now generate 3D scenes, allowing users to reconstruct scenes from sparse 2D inputs. For example, drones can fly over a landscape to reconstruct detailed maps, models can reconstruct interactive street views, or a person can input a photo of the front of their house to generate an image of the back of it. These use cases are becoming more common every day.
“These models reconstruct scenes with great accuracy and efficiency,” said Meng, “but tiny pixel tweaks on input images can send them off the rails.”
In previous research, Meng and her collaborators investigated how a specific 3D generative AI model, the neural radiance field (NeRF), might respond to an attacker tampering with the input images. They carefully crafted and optimized tiny changes (called perturbations) to those images, which resulted in large distortions and a significant drop in the quality of the rendered scenes.
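Attacks like these are typically gradient-based: the attacker repeatedly nudges the input pixels in the direction that most degrades the model’s output while keeping every change within an imperceptible budget. The sketch below shows the general technique in PyTorch, assuming a differentiable `render` function standing in for a NeRF-style pipeline; it is a simplified illustration of this class of attack, not the specific method from Meng’s paper.

```python
import torch

def perturb_input(image: torch.Tensor, render, clean_view: torch.Tensor,
                  eps: float = 8 / 255, alpha: float = 1 / 255,
                  steps: int = 40) -> torch.Tensor:
    """PGD-style attack: maximize reconstruction error under an L-infinity budget.

    `render` is assumed to be a differentiable mapping from an input image to a
    rendered view; `clean_view` is the undistorted rendering we try to break.
    """
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(render(x_adv), clean_view)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend: make the rendering worse
            x_adv = image + (x_adv - image).clamp(-eps, eps)  # stay within the perturbation budget
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep pixel values valid
        x_adv = x_adv.detach()
    return x_adv
```

Because each step moves pixels by at most `alpha` and the cumulative change is clipped to `eps`, the altered image looks unchanged to a human even as the rendered views degrade sharply.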
“You might think a drop in visual quality is merely cosmetic,” Meng explained. “But in reality, every rendered view can feed downstream tasks, like object classification in autonomous driving. A slightly distorted view can cause a car’s vision system to misidentify a pedestrian as a shadow, or a stop sign as a billboard.”
Collaboration for reliable and safe AI
As the world increasingly relies on AI to power companies, research, vehicles, security systems, and more, researching its vulnerabilities is essential to establishing safety benchmarks that help build trustworthy, secure technology.
Just as bringing a new drug to market involves research, clinical trials, and review, inspection, and approval by the Food and Drug Administration (FDA), Meng explained that a similar process could be useful for AI. Safety regulations could prevent attacks and privacy leaks in generative AI vision models by investigating the security of these systems or reviewing potential vulnerabilities before the tools become widely available. But doing so requires organization, interdisciplinary collaboration, and discussion—exactly what she hopes to facilitate with her workshop.
The Cybersecurity and Knowledge Computing Lab, led by Associate Professor Yingjie Lao, aims to develop responsible, efficient AI by researching how to enhance AI fairness and transparency, reduce computational and energy costs, and ensure its reliability in fields such as healthcare.
“Having a workshop accepted and led at CVPR is a significant accomplishment—especially for a Ph.D. student,” Lao said. “Meng’s work reflects both strong research leadership and recognition by the broader vision community.”
Learn more about Tufts’ Master’s in Cybersecurity and Master’s in Cybersecurity and Public Policy programs.