Hacking incidents and data breaches have increased over time, and new events seem to occur almost daily. Defending against unknown threats is hard, and since cybercriminals know that, they will take advantage of the opportunity when it presents itself. As a result, the need for Artificial Intelligence (AI) and Machine Learning (ML) in the cybersecurity world has grown. Many AI and ML tools are available to analyze data from millions of cyber incidents, and they can be used to help identify potential threats. Cybercriminals have always tried to tweak their malware code so that security software does not recognize it; if the malware is not seen as malicious, the system will not quarantine it.
Currently, AI has the ability to detect anomalies and react accordingly, while ML can help assess the potential damage of a malicious intrusion, prevent login credentials from being stolen, and stop malware from being deployed or enabled by attackers. However, both have the potential to be turned against the systems they protect.
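To make the anomaly-detection idea concrete, here is a minimal sketch of the simplest form it can take: flagging activity that deviates sharply from a learned baseline. The metric (hourly login attempts), the sample values, and the 3-standard-deviation threshold are all illustrative assumptions, not details from any particular product.

```python
# Minimal anomaly-detection sketch: flag a new observation that falls
# far outside the historical baseline, using a z-score.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Return True if `latest` deviates from the baseline by more than
    `threshold` standard deviations. Threshold is an assumed value."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

# Baseline: typical hourly login-attempt counts for one account (made up).
baseline = [12, 9, 11, 10, 13, 8, 12, 10]
print(is_anomalous(baseline, 11))   # normal activity -> False
print(is_anomalous(baseline, 250))  # sudden burst -> True
```

Real AI-based tools model far richer behavior than a single count, but the principle is the same: learn what "normal" looks like, then react to deviations.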
Working against the system:
Attackers could use ML to build self-learning, automated malware, or to drive ransomware, social-engineering, or phishing attacks
ML algorithms can be trained over time, allowing attackers to refine phishing attacks in much the same way security vendors refine their defenses
Much of the code behind ML is freely available through open-source projects, giving cybercriminals access to the same resources as defenders
Attackers could use AI-based tools to conduct cybercrimes (as past events have shown)
Attackers could use AI themselves to test their own malware, improving and refining it until it evades AI-based detection
Most cybercriminal activity is financially motivated, which gives attackers a strong incentive to work around AI and ML defenses
Past AI incidents:
In 2019, AI was used to generate audio impersonating a CEO’s voice and trick employees into transferring $243,000 to the attackers (AI mimicked the voice of the CEO to request the transfer). Note: this was done by finding and exploiting voice recordings from the public domain.
AI-based deepfake technology has been used to spread disinformation and even create fake videos
Things to know:
AI-based cybersecurity tools continue to develop and improve over time, which will in turn help businesses stay safe against the increase in cyberattacks.
AI systems are trained on data sets; cybersecurity firms need access to more, and more varied, data sets covering malware code, non-malicious code, and anomalies, so their systems can learn the patterns and respond better. Learning these patterns is costly and takes a long time, but it must be done for vendor software to become more accurate.
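As a toy illustration of training on labeled data sets, the sketch below "learns" token frequencies from a handful of labeled samples and classifies new input by which label's learned pattern it matches best. The samples, token names, and frequency-scoring approach are all invented for illustration; real vendors train far more sophisticated models on millions of labeled binaries and behavioral traces.

```python
# Illustrative sketch of learning patterns from labeled training data:
# a tiny frequency-based classifier for "malicious" vs "benign" tokens.
from collections import Counter

def train(samples):
    """Count token frequencies per label from (tokens, label) pairs."""
    counts = {"malicious": Counter(), "benign": Counter()}
    for tokens, label in samples:
        counts[label].update(tokens)
    return counts

def classify(counts, tokens):
    """Score the tokens against each label's learned frequencies and
    return the best-matching label."""
    scores = {
        label: sum(counter[t] for t in tokens)
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

# Hypothetical labeled training samples.
training = [
    (["keylog", "exfiltrate", "encrypt_files"], "malicious"),
    (["encrypt_files", "ransom_note"], "malicious"),
    (["render_ui", "save_settings"], "benign"),
    (["save_settings", "sync_files"], "benign"),
]
model = train(training)
print(classify(model, ["exfiltrate", "encrypt_files"]))  # malicious
```

The sketch also hints at why broad data sets matter: a token the model has never seen scores zero for every label, so coverage of both malicious and benign patterns directly determines accuracy.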
Overall, there is still much to be learned about AI and ML when it comes to unknown threats. It will take time and resources to learn them, and there will always be more to learn afterward: AI and ML will always involve an ongoing learning process because cybercriminals and their activities keep evolving. Cyber threats are increasing, and cybercriminals will take advantage of any opportunity they see. Vendors will eventually need additional resources and monitoring tactics to keep their software up to date and working effectively; otherwise, AI and ML will no longer have a place in the cybersecurity world because they would not benefit the companies using the products. The best path forward would be for vendors to find innovative ways to keep attackers from figuring out how AI and ML interact with their malware code, so businesses stay protected in the long run. AI and ML in cybersecurity are still so new that there is a lot to learn about them all, but like everything else new, it will take time.