By Marc Arcas
San Francisco, Jan 15 (EFE).- The ability of artificial intelligence to absorb and process enormous quantities of data has a dark side when cybercriminals use it to create new malware, a trend that is already evident and that will only grow this year.
Just as artificial intelligence's capacity to emulate human learning gives it huge creative potential, that same capacity gives it great destructive power, for example in the creation and distribution of viruses, Trojan horses and other malicious software.
“Cybercriminals will use artificial intelligence more and more to create malware that will be more destructive,” Glaucia Faria Young, a director in Microsoft’s Software Engineering division, told EFE in an interview.
“This is something that has already started happening and it breaks through the security models we’ve traditionally used. They are more complex attacks and more broadly distributed. And it’s easier for them to remain undetected,” she said.
AI systems are able to increase the speed and precision of cyberattacks and, at the same time, fool conventional anti-virus defenses, since the latter are programmed to look for specific code elements that are not necessarily obvious in AI programs.
One example is malware that uses machine learning to spread widely without causing any damage or raising suspicion, then suddenly activates itself when it infects the intended target, such as the computers of a specific firm, individual or public institution.
In contrast to traditional malware, which harms every device it passes through and is therefore easier to detect and halt, an AI-driven system remains “dormant” until it reaches its objective, recognizes it (via facial or voice recognition, for instance) and activates itself.
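To see why this kind of targeted activation is hard for defenders to observe, consider a minimal, harmless sketch: the payload logic is gated behind an environment check, so on every machine except the intended one the program does nothing suspicious. The hostname `finance-ws-042` and the activation logic here are purely hypothetical, for illustration only.

```python
import socket

# Hypothetical target identifier; real targeted malware might instead match
# a user, a face, a voice, or other environmental fingerprints.
TARGET_FINGERPRINT = "finance-ws-042"

def matches_target() -> bool:
    """Return True only on the machine the code was aimed at."""
    return socket.gethostname() == TARGET_FINGERPRINT

def run() -> str:
    if matches_target():
        return "activated"  # the harmful behavior would only ever occur here
    return "dormant"        # everywhere else: no observable malicious behavior

print(run())
```

Because the trigger condition almost never matches, scanners that execute or inspect the code on ordinary machines see only the benign "dormant" path.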
“The way we can counterattack is to also use AI to detect attacks,” said Young, who emphasized the potential of this technology to identify patterns and anomalies quickly and thoroughly among enormous quantities of data.
The team that Young heads, for example, uses its own machine-learning system that, instead of hunting for previously seen malicious code, as traditional defenses do, weighs risk factors while analyzing the roughly eight billion signals it receives each day.
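The article does not describe how Microsoft's system works internally, but risk-based anomaly detection of the kind Young alludes to can be sketched in its simplest form: establish a statistical baseline for a signal and flag new observations that deviate sharply from it. The baseline numbers below are invented for illustration.

```python
from statistics import mean, stdev

def risk_score(history: list[float], new_value: float) -> float:
    """Score how anomalous new_value is versus history (absolute z-score)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(new_value - mu) / sigma

# Hypothetical baseline: sign-in attempts per minute seen for one account.
baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]

print(risk_score(baseline, 4))    # typical volume: low score
print(risk_score(baseline, 120))  # sudden burst: very high score, worth flagging
```

Production systems combine many such signals and learned models rather than a single z-score, but the principle is the same: score deviations from normal behavior instead of matching known malicious code.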
Along with AI, the people at Microsoft responsible for cybersecurity forecast four other cyberattack trends for 2020: attacks on value chains whose members fail to coordinate their defenses, attacks on public “clouds,” the growing fragility of passwords and the rise of state-run operations.
Regarding value chains, analysts emphasize the importance of companies, customers and providers acting in a coordinated manner to prevent attacks, given that if just one of these actors protects itself, cybercriminals can still cause harm by attacking other links in the chain.
In terms of the Cloud, this is basically a question of volume: with more companies and individuals moving toward such services, the public Cloud has become a target that looks more and more tempting for hackers.
On passwords, the debate has been ongoing for some time, although it has intensified in recent years. Despite their widespread use, they are a weak and vulnerable security mechanism, and experts recommend gradually moving to authentication models using two or more factors, including, for instance, biometric recognition.
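One common second factor is the time-based one-time password (TOTP) that authenticator apps generate: a code derived from a shared secret and the current time, so a stolen password alone is not enough to sign in. The sketch below implements the standard RFC 6238 algorithm with Python's standard library, using the demo key from the RFC's own test vectors.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    # Count how many 30-second steps have elapsed since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # demo key from the RFC 6238 test vectors
print(totp(secret, for_time=59))  # → "287082", matching the RFC test vector
```

Because the code changes every 30 seconds and is bound to a secret held on the user's device, it acts as a "something you have" factor alongside the "something you know" password.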
Finally, the cybercrime activities organized by governments and state entities around the world are one of the biggest challenges facing cybersecurity authorities, since they imply a significant change in the “shape” of the enemy: it is no longer a few hackers in a basement acting on their own behalf, but large, state-supported and state-financed cyberattack operations.