Meet Booz Allen's Adversarial AI Team
Securing the nation against intensifying threats
Federal agencies are investing in AI to automate processes and enhance decision making across mission systems, ranging from critical national security assets to essential healthcare platforms. Yet, as these AI systems proliferate, they remain vulnerable to a growing set of attacks.
Such threats are quickly becoming a national security concern. “Interest in deploying AI solutions in the government continues to climb, with use expected to grow 40% this year,” explains Booz Allen Principal Matt Keating, leader of the firm’s adversarial AI practice. “But, with a few notable exceptions, we haven’t seen commensurate energy devoted to developing robust AI models.” This challenge is not unique to the federal government: a recent Gartner survey found that only 4% of the organizations polled are implementing AI application security tools.
Booz Allen’s adversarial AI team is prepared to help. These experts have been working with federal agencies for more than five years to understand the threat landscape and secure AI systems—and, with increased AI adoption, they anticipate growing demand for their services.
Bringing Multiple Perspectives to the Problem
Serving clients across the federal government in such a complex and evolving field demands expertise that’s both deep and wide. Booz Allen’s adversarial AI team delivers this on all levels.
Matt is an experienced business leader and technologist who has worked across industries—including healthcare, law enforcement, and finance—and areas of technical expertise, from bioinformatics to cryptography. Chief Scientist Edward Raff, who leads the machine learning (ML) research team, is a recognized expert in several ML areas, including malware detection, reproducibility, and ethics. Together, they’ve assembled a team with similarly rich and diverse backgrounds.
Lead Machine Learning Scientist Amol Khanna, for example, previously used machine learning to safely automate redundant tasks for a leading global bank. He also built a differentially private federated learning algorithm for disease prediction. Since joining Booz Allen, he has performed research on safe machine learning methods essential for deploying models trained on sensitive data into real-world environments.
Senior Lead Scientist Andre Nguyen started with Booz Allen as an intern and grew into solutions architecting and machine learning research roles, building expertise in areas including cloud data platforms, pharmacovigilance, adversarial machine learning, and cybersecurity research. He left the firm to oversee machine learning for a biotech company, then returned in 2023.
Lead Scientist Derek Everett has a background in computational high-energy nuclear physics. Since joining Booz Allen, he has performed research targeting the safety of machine learning systems, as well as their application to malware detection.
“I'm excited to advance the state of the art in adversarial AI while increasing clients’ effectiveness in this space,” Derek says.
Bespoke Solutions with a Big-Picture View
In a market currently dominated by open-source tools, off-the-shelf software, and one-size-fits-all solutions, the approach of Booz Allen’s adversarial AI team is different by design.
“We're not simply reusing the same template each time,” says Andre. “Every algorithm and use case requires its own bespoke solution.”
Edward explains one of the reasons why. Often there’s a big difference between adversarial AI in theory—the “pristine sort of lab environment on paper,” where many current solutions have been tested—and adversarial AI in practice, with constraints and incomplete information.
“When you add operational complexity, many of these tools don’t work as advertised,” he says.
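To make the threat concrete, here is a minimal sketch of the kind of evasion attack adversarial AI work defends against: a small, deliberate perturbation that flips a model’s decision. The linear model, weights, input, and perturbation budget below are all hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

# Hypothetical "trained" logistic-regression model: weights w, bias b.
w = np.array([2.0, -1.0])
b = 0.1

def predict(x):
    """Probability the input belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.5])  # a benign input the model assigns to class 1

# Fast-gradient-style evasion: for a linear model, the gradient of the
# logit with respect to x is simply w, so the attacker nudges every
# feature in the worst-case direction within a budget epsilon.
epsilon = 0.9                     # hypothetical perturbation budget
x_adv = x - epsilon * np.sign(w)  # small change, large effect

print(predict(x))      # confidently class 1
print(predict(x_adv))  # decision flipped to class 0
```

In a pristine lab setting this attack assumes full knowledge of the model; in operational environments attackers work with incomplete information, which is precisely the gap between theory and practice that Edward describes.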
This is where Booz Allen’s extensive experience with government agencies comes in, as does its holistic, big-picture approach.
“We’ll typically start by creating a threat model to make sure we are focused on the most probable or damaging attack vectors for the environment,” says Matt.
“We look at the entire structure and process,” says Edward. “For example, if people are likely to attack a certain algorithm, then maybe we just don’t build an algorithm for that part. We’re able to look at where decisions get routed and where solutions get placed and design around the problem.”
In short, says Edward, “Booz Allen is able to create the tools that work in real-world missions.”
Working Together Toward Safe, Reliable AI Adoption
Creating these adversarial AI solutions—and accelerating their development and deployment—has been a highly collaborative enterprise. For example, as Edward works on adapting algorithms and software to scale AI training, Amol focuses on out-of-distribution detection and differential privacy, both critical for deploying AI models and for protecting individuals’ data after deployment. “We’re enhancing the safety of sensitive training data while making government AI more flexible,” Amol says. “This allows us to build solutions that are robust and don’t pose security risks.”
“There’s a lot of anxiety that people have about how things can go wrong with AI,” says Edward. “I’m excited to work with a team who’s helping all of us—including our federal clients—have confidence in how the algorithms are going to work. This will have a positive impact on the community and the government.”
“As a team committed to enhancing the nation’s military readiness, intelligence, and citizen services, we are motivated to not only help our clients implement AI solutions but do so in a reliable and safe manner,” says Matt.