W&M team designs AI that spots security vulnerabilities in software
The key to developing secure technology for the future may hinge on making life a little easier for software engineers today.
It’s hard to find a facet of modern society that hasn’t been transformed by software. From healthcare to entertainment to banking, computerization has made its way into nearly every industry. As the general public enjoys the benefits of software, our data is increasingly being managed by programs that contain undetected vulnerabilities.
A team of William & Mary computer scientists has developed a way to spot such vulnerabilities before hackers can exploit them. They’ve built a form of artificial intelligence that can automatically identify whether a software issue is security-related. The AI learns from natural human language and achieved a 96 percent success rate when tested on issues drawn from thousands of open-source programs.
“To me, the fact that a fully automated system is able to learn the semantics of natural human language is pretty amazing,” said Kevin Moran, a postdoctoral fellow of computer science at William & Mary. “We built an AI-based system that can actually understand the text of software issues enough that it can determine whether a security-related aspect of software is being described.”
Moran and the rest of the research team presented a paper on their work at the 35th International Conference on Software Maintenance and Evolution (ICSME) in Cleveland this week. The team is made up of W&M graduate students David Palacio and Carlos Bernal-Cardenas, undergraduate Daniel McCrystal, postdoctoral fellow Moran and associate professor Denys Poshyvanyk.
“We actually span the gamut of different lifecycles in academia,” Moran joked.
And it’s not just academia. The project was born out of a collaboration with Cisco Systems, a multinational technology conglomerate based in Silicon Valley, which provided the team with research grants to fund the work. This is the first paper to come out of the collaboration, and a second is currently under review.
“I found Denys and liked the work he was doing, so we collaborated,” said Chris Shenefiel, principal engineer of the Advanced Security Research Group at Cisco and adjunct lecturer of computer science at William & Mary.
“It’s been a really good partnership,” Shenefiel added. “The goal is to bring the research to a development stage, where it can be used as a tool for industry — and that’s happening.”
The AI system, called SecureReqNet, is aimed at helping developers prioritize security-critical tasks, Moran explained, which made Cisco the ideal partner. The company provided the researchers with access to a software project that was in development, so the team could evaluate SecureReqNet on an actual commercial venture in real time.
“During the typical process of software development, engineers often keep track of tasks in something called an ‘issue tracker,’ which you can think of as a large to-do list,” Moran said. “Each task is called an ‘issue’ and these could be related to many different things, such as fixing a bug or implementing a new feature in the software. Our primary goal was to automatically identify tasks that relate to security aspects of the software, like tasks that describe the handling of user login credentials or performing cryptographic operations.”
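To make the task concrete, here is a minimal, purely illustrative sketch of what issue-tracker records look like and how a naive keyword filter might flag the security-related ones. The records and keyword list are hypothetical, and this kind of brittle matching is exactly what a learned classifier like SecureReqNet is meant to improve on.

```python
# Hypothetical issue-tracker records; real trackers (GitHub, Jira)
# expose similar title/body fields through their APIs.
issues = [
    {"id": 101, "title": "Crash when resizing the main window",
     "body": "Segfault after dragging the corner handle."},
    {"id": 102, "title": "Sanitize credentials on the login form",
     "body": "User passwords are interpolated into the SQL query unescaped."},
]

# A brittle keyword baseline; a learned model generalizes past
# any fixed term list like this one.
SECURITY_TERMS = {"credential", "password", "login", "crypto",
                  "injection", "xss", "overflow", "sanitize"}

def looks_security_related(issue):
    """Flag an issue if its title or body mentions a security term."""
    text = f"{issue['title']} {issue['body']}".lower()
    return any(term in text for term in SECURITY_TERMS)

for issue in issues:
    label = "SECURITY" if looks_security_related(issue) else "general"
    print(issue["id"], label, "-", issue["title"])
```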
Flagging security aspects of software will help developers prioritize the most important tasks, Poshyvanyk said. It’s vital, he added, that development teams focus on security early in the process, because fixing bugs only becomes more challenging as software evolves.
“In order to enable these development teams to implement proactive software security practices, new tools are required that automate repetitive or intellectually-intensive tasks that would otherwise be delayed or abandoned during typical software development and maintenance cycles,” the paper states.
William & Mary graduate student David Palacio, the paper’s first author, served as project leader, contributing to the design and implementation of the AI and carrying out the experiments that demonstrate its effectiveness. He stressed that SecureReqNet was designed as a tool for developers, not a means to automate them out of a job.
“Even our most advanced AI is still naive and the degree of automation that we can reach is very limited,” he said. “Our work as researchers is still far from replacing a software engineer. We are making the life of software engineers easier.”
The first step toward making an engineer’s life easier was teaching the AI to talk like one. The researchers taught it to read using word embeddings learned from a dataset combining text extracted from 52,908 entries in the National Vulnerability Database, 53,214 issues described in projects mined from GitHub and 10,000 general articles mined from Wikipedia.
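As a rough sketch of that step, the snippet below learns word embeddings over a combined corpus with gensim’s Word2Vec. The loader, its tiny hard-coded samples and the hyperparameters are all assumptions for illustration; the paper’s exact embedding setup may differ.

```python
from gensim.models import Word2Vec

def load_tokenized(source):
    """Hypothetical loader: returns each document as a list of lowercase
    tokens. Real code would parse the NVD, GitHub and Wikipedia dumps;
    one-sentence samples stand in here."""
    samples = {
        "nvd_entries": ["Buffer overflow in the TLS handshake allows remote code execution."],
        "github_issues": ["Login form stores the password in plain text."],
        "wikipedia_sample": ["A vulnerability is a weakness that can be exploited by a threat actor."],
    }
    return [doc.lower().replace(".", "").split() for doc in samples[source]]

# Stand-in for the paper's corpus of 52,908 NVD entries,
# 53,214 GitHub issues and 10,000 Wikipedia articles.
corpus = (load_tokenized("nvd_entries")
          + load_tokenized("github_issues")
          + load_tokenized("wikipedia_sample"))

# 100-dimensional embeddings; vector_size, window and min_count are
# illustrative choices, not the paper's reported configuration.
emb = Word2Vec(sentences=corpus, vector_size=100, window=5,
               min_count=1, workers=4)

print(emb.wv.most_similar("vulnerability", topn=3))
```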
The researchers then trained the AI to tell the difference between security-critical and non-security-critical software issues, and evaluated its effectiveness on a set of 1,032 unseen issues from publicly available software. SecureReqNet achieved 96 percent accuracy on that open-source set.
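A simplified stand-in for that train-and-evaluate loop is sketched below as a small convolutional text classifier in Keras. The architecture is not SecureReqNet’s actual design, and the hard-coded issues are placeholders for the team’s training and evaluation data.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Tiny stand-ins for the real issue datasets: label 1 marks a
# security-related issue, 0 a general one.
train_texts = ["Sanitize credentials on the login form",
               "Crash when resizing the main window",
               "Use constant-time comparison for auth tokens",
               "Add dark mode to the settings page"]
train_labels = [1, 0, 1, 0]
test_texts = ["Escape user input before building the SQL query",
              "Typo in the About dialog"]
test_labels = [1, 0]

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20_000, 200, 100

# Map raw issue text to fixed-length integer token sequences.
vectorize = layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                     output_sequence_length=MAX_LEN)
vectorize.adapt(train_texts)

model = tf.keras.Sequential([
    vectorize,
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),  # could be seeded with
                                              # pretrained embeddings
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # P(security-related)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

model.fit(tf.constant(train_texts), tf.constant(train_labels), epochs=5)
loss, acc = model.evaluate(tf.constant(test_texts), tf.constant(test_labels))
print(f"held-out accuracy: {acc:.1%}")
```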
Additionally, the researchers worked with Cisco Systems to evaluate SecureReqNet on a closed-source commercial software project. The AI had a 71.6 percent success rate when tested on the private program. The team says this demonstrates promising potential for commercial applicability.
The rate for industry was lower, Palacio explained, because the AI was not specifically trained on the informal language used by Cisco’s development team, and there were fewer issues for it to learn from. The larger the dataset, like the one used in the open-source evaluation, the better SecureReqNet becomes at spotting security-related issues.
“The straightforward answer is data; our approach relies on data to extract the pattern with the desired behavior,” Palacio said. “In the future, we aim to perform a more in-depth empirical evaluation comparing SecureReqNet to different baselines and apply our approach to help contextualize other security-related software engineering tasks.”