MIT Looks To Artificial Intelligence To Thwart Cyber Attacks


Using a system called AI2, developed by the institute’s Computer Science and Artificial Intelligence Laboratory, MIT researchers have made it easier for humans to detect network breaches. Finding evidence of a compromised network is a daunting task for human security experts. The system MIT has developed doesn’t sleep and can sift through millions of log lines looking for abnormalities before bringing them to an analyst’s attention.

Humans and artificial intelligence working together

After AI2 has found an anomaly while reviewing data, it flags the abnormality for a human, who takes over and has a thorough look at AI2’s findings. According to the researchers, this human/AI team identified just shy of 90% of attacks while saving the human analysts hours and hours of time otherwise spent chasing false leads.

There is clearly no way a human can even scratch the surface of work that the AI system is capable of doing 24/7. AI2 is so named because of this one-two punch of man and machine.

“You have to bring some contextual information to it,” says research lead Kalyan Veeramachaneni, referring to the need for the human element. Stress tests that many organizations run internally cause irregularities; the human analyst knows about these, but a machine-learning system does not.

“On day one, when we deploy the system, it’s [only] as good as anyone else,” says Veeramachaneni. Essentially, the machine identifies the day’s 200 strangest events and reports them to the analyst. The analyst then has a look and tells the machine which events deserve a fine-tooth-comb examination.

“Essentially, the biggest savings here is that we’re able to show the analyst only up to 200 or even 100 events per day, which is a very tiny percentage of what happens,” says Veeramachaneni.
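The triage step described above can be sketched in a few lines. This is a minimal illustration, not MIT’s actual method: the function names and the simple z-score heuristic are assumptions, standing in for AI2’s real unsupervised models. The idea is only to show how millions of events collapse to a short ranked list for the analyst.

```python
# A toy sketch of "surface the day's k strangest events".
# Assumes each log event has already been reduced to a numeric
# feature vector; the z-score heuristic is illustrative only.
import math

def anomaly_scores(events):
    """Score each event by how far its features deviate from the
    day's mean, measured in standard deviations (a simple z-score)."""
    n = len(events)
    dims = len(events[0])
    means = [sum(e[d] for e in events) / n for d in range(dims)]
    stdevs = [
        math.sqrt(sum((e[d] - means[d]) ** 2 for e in events) / n) or 1.0
        for d in range(dims)
    ]
    return [
        sum(abs(e[d] - means[d]) / stdevs[d] for d in range(dims))
        for e in events
    ]

def top_k_strangest(events, k=200):
    """Return the k highest-scoring events for analyst review."""
    scores = anomaly_scores(events)
    ranked = sorted(range(len(events)), key=lambda i: scores[i], reverse=True)
    return [events[i] for i in ranked[:k]]
```

Even with a crude score like this, the analyst sees a tiny, ranked slice of the day’s traffic rather than the full firehose.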

The analyst’s verdict on each flagged threat, attack or not, allows the system to learn and apply that knowledge to the next day’s scan.
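That daily feedback loop might look like the sketch below. The nearest-centroid classifier here is a deliberately simple stand-in; the article does not specify what supervised learner AI2 actually uses, and the class and method names are invented for illustration.

```python
# A toy sketch of the analyst-feedback loop: store labeled verdicts,
# then use them to judge new events. The nearest-centroid rule is an
# assumption, not AI2's real supervised model.

class FeedbackLoop:
    def __init__(self):
        self.attacks = []   # events the analyst confirmed as attacks
        self.benign = []    # flagged events the analyst dismissed

    def record_labels(self, labeled_events):
        """Store the analyst's verdicts as (event, is_attack) pairs."""
        for event, is_attack in labeled_events:
            (self.attacks if is_attack else self.benign).append(event)

    def looks_like_attack(self, event):
        """Classify a new event by which labeled centroid is closer."""
        if not self.attacks or not self.benign:
            return False  # nothing learned yet

        def dist_to_centroid(group):
            dims = len(event)
            centroid = [sum(e[d] for e in group) / len(group)
                        for d in range(dims)]
            return sum((event[d] - centroid[d]) ** 2
                       for d in range(dims)) ** 0.5

        return dist_to_centroid(self.attacks) < dist_to_centroid(self.benign)
```

Each day’s labels enlarge the training set, which is why the system’s hit rate climbs over time instead of staying flat.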

AI2 was put through the wringer on an unnamed e-commerce site, where it examined 40 million log lines each day and learned from its human counterpart. After three months, AI2 was identifying about 85% of attacks: good, not great, but still improving each day.

That is, until you look at the success rate of the same system without the human component: the top 200 threats identified by the machine alone had a success rate of under 8%.

Building predictive models

AI2 also proved quite adept at building predictive models for the following day. So, if a hacker continued with, say, the same brute-force attack on two consecutive days, the machine would make child’s play of catching it.

What this effort from MIT shows is that, at least for now, the machine can no more replace man than man can do the volume of work the machine can. The two complement each other quite well, and the machine learns from its “master” quite quickly.

“The attacks are constantly evolving,” Veeramachaneni says. “We need analysts to keep flagging new types of events. This system doesn’t get rid of analysts. It just augments them.”

For at least the next decade, expect this to be the model for preventing security breaches in the public, private, and military sectors.
