AI in Cybersecurity: Risks and Rewards

Photo courtesy of Vihar Garlapati

In 2023, cybersecurity researchers developed a proof of concept (PoC) for AI-generated polymorphic malware called "BlackMamba." BlackMamba uses a large language model (LLM) to generate malicious code that captures a user's keystrokes. Worse, BlackMamba can continuously mutate to evade detection by cybersecurity tools, raising concerns about its potential to cause large-scale, long-term havoc.

BlackMamba demonstrates how cybercriminals can potentially weaponize AI to carry out sophisticated, undetectable, and large-scale cyberattacks. Even organizations leading AI development acknowledge the potential misuse of their solutions. For example, Microsoft has admitted that some threat actors are using its generative AI technologies and those of its partner OpenAI to organize offensive cyber operations.

However, here is the good news about AI in cybersecurity: while the bad guys can use AI to attack the good guys, the good guys can also use it to protect their systems and data from cyberattacks. Artificial intelligence can become the most potent weapon in today's cybersecurity efforts if used correctly. How is this possible?

The Risks of AI in Cybersecurity

In January 2024, the National Institute of Standards and Technology (NIST) published a report on adversarial machine learning highlighting how AI systems can be manipulated by attackers and abused to execute a range of cyberattacks.

Some of the most common risks of AI in cybersecurity include evasion attacks, in which attackers subtly alter inputs so the AI system produces an incorrect or undesirable response; privacy attacks, which extract sensitive information from a model or its training data; and poisoning attacks, in which attackers insert corrupted data into the model's training dataset to force incorrect or undesirable results.
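To make the evasion idea concrete, here is a minimal sketch (in Python, with invented weights and feature values) of how an attacker might nudge a sample past a simple linear "malware classifier" by stepping each feature against the model's gradient, the basic move behind fast-gradient-sign-style evasion. None of these numbers come from an actual detector, and real attacks target far more complex models.

```python
# Minimal sketch of an evasion attack (FGSM-style) against a toy linear
# "malware classifier". Weights and feature values are invented for
# illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained model: score = sigmoid(w . x + b); > 0.5 means "malicious"
w = np.array([1.8, -0.6, 2.4, 0.9])   # assumed model weights
b = -1.0

x = np.array([0.7, 0.2, 0.9, 0.4])    # a sample the model correctly flags
print("original score:", sigmoid(w @ x + b))      # ~0.93 -> flagged as malicious

# FGSM: step each feature against the gradient of the "malicious" score.
# For a linear model the gradient w.r.t. x is just w, so the attacker
# nudges each feature in the direction of -sign(w).
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.32 -> slips past the detector
```

The linear model keeps the gradient trivial to compute; against a deep network an attacker would estimate it numerically or via repeated queries, but the principle is the same.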

As BlackMamba shows, AI-assisted malware is another major threat. Like the BlackMamba team, other researchers have demonstrated how cyber adversaries can exploit AI to infect systems with malware or ransomware.

For instance, in 2018, IBM Research developed a PoC for DeepLocker, an AI-powered malware that conceals its malicious payloads within innocuous files and evades detection by traditional security tools. It also harnesses AI to intelligently identify its targets and auto-activate itself in response to specific triggers to increase the probability of a successful infection.

More recently, in January 2023, CyberArk's security researchers created a new strain of polymorphic malware using ChatGPT, providing the chatbot with just a few simple prompts. These developments raise a worrying question: Is it really that easy to create malware with artificial intelligence?

Clever threat actors also use AI technologies to enhance their phishing capabilities, increasing the speed, accuracy, and frequency of phishing attacks. A 2023 report found a staggering 1,265% surge in phishing scams between Q4 2022 and Q4 2023, a rise it tied to the availability of ChatGPT. One reason is that generative AI convincingly mimics human writing, making scams harder for victims to spot and easier for scammers to scale into more victims and larger payouts.

Criminals also use artificial intelligence to create deepfakes, often to spread misinformation, and they exploit AI-enabled chatbots and deepfakes to run romance scams and family emergency scams, tricking unsuspecting victims into parting with their money.

Artificial Intelligence for Cyberdefense

Despite the risks associated with AI, it's not all doom and gloom. Just as cybercriminals use AI to their advantage, so can defenders use it to enhance their security.

According to IBM's 2024 Cost of a Data Breach report, organizations that use AI for cybersecurity can identify and contain breaches almost 100 days faster and save an average of $1.88 million in breach costs compared to organizations that don't use AI technologies. Simply put, AI lets you stop breaches earlier and reduce the complexity and cost of security operations.

AI solutions can predict, detect, and even neutralize cyber threats in real time. Many also provide predictive threat modeling and automated incident handling and response, letting users proactively hunt for new threats and quickly mitigate security incidents before they cause material damage.
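As an illustration of the detection side, the sketch below uses an unsupervised anomaly detector (scikit-learn's IsolationForest) to flag an unusual login event. The feature set, the synthetic data, and the contamination setting are assumptions for demonstration; a real deployment would learn from actual telemetry and feed alerts into an incident-response workflow.

```python
# Minimal sketch of AI-assisted threat detection: flag anomalous login
# events with an unsupervised model. Features and thresholds are
# illustrative assumptions, not a production pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per login: [hour_of_day, failed_attempts, MB_downloaded]
normal = np.column_stack([
    rng.normal(13, 2, 500),      # logins cluster around business hours
    rng.poisson(0.2, 500),       # occasional failed attempt
    rng.normal(20, 5, 500),      # modest data transfer
])
suspicious = np.array([[3.0, 9.0, 800.0]])   # 3 a.m., many failures, bulk download

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# The bulk download at 3 a.m. should stand out as an anomaly (-1 = alert).
for event in np.vstack([normal[:3], suspicious]):
    label = "ALERT" if model.predict([event])[0] == -1 else "ok"
    print(label, event)
```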

With generative AI capabilities and workflow automation, defenders can also spot lateral movement, defeat advanced persistent threats (APTs), and uncover system vulnerabilities that increase the risk of attack. Artificial intelligence tools are also helpful for automating security tasks like penetration testing and for generating compliance reports for regulations like the General Data Protection Regulation (GDPR).
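One simple signal behind lateral-movement detection is an account suddenly authenticating to far more hosts than its historical baseline. The sketch below shows that idea on a handful of invented log entries; the log format, account names, and threshold are all illustrative assumptions, and production tools combine many such signals with machine-learned baselines.

```python
# Illustrative sketch of one lateral-movement signal: a single account
# reaching an unusually large number of distinct hosts.
from collections import defaultdict

auth_events = [
    ("alice", "HR-LAPTOP-01"), ("alice", "MAIL-01"),
    ("svc-backup", "FILE-01"), ("svc-backup", "FILE-02"),
    ("bob", "WS-17"), ("bob", "WS-18"), ("bob", "DC-01"),
    ("bob", "FILE-01"), ("bob", "SQL-01"), ("bob", "SQL-02"),
]

BASELINE_MAX_HOSTS = 3   # assumed per-account norm learned from history

hosts_per_account = defaultdict(set)
for account, host in auth_events:
    hosts_per_account[account].add(host)

for account, hosts in hosts_per_account.items():
    if len(hosts) > BASELINE_MAX_HOSTS:
        print(f"possible lateral movement: {account} reached "
              f"{len(hosts)} hosts: {sorted(hosts)}")
```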

Challenges Involved in Using AI in Cybersecurity

I'm all for implementing AI in cybersecurity solutions. However, I also urge users to be aware of the challenges they may face.

The first challenge is bias. Because of built-in bias in AI models, an AI-powered cybersecurity tool may flag nonexistent threats, known as "false positives," drawing attention away from real ones. Bias may also cause it to miss genuine threats, increasing the risk of attack and jeopardizing your security posture.
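One practical response is to measure how often a detector cries wolf or stays silent. The toy sketch below counts false positives and false negatives against a labeled validation set; the labels and verdicts are invented purely to show the bookkeeping.

```python
# Tiny sketch of auditing a detector's errors on a labeled validation set.
# Labels and predictions are made up for illustration.
y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # 1 = real threat
y_pred = [0, 1, 1, 0, 0, 1, 1, 0, 1, 0]   # detector verdicts

false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(f"false positives (wasted analyst time): {false_positives}")
print(f"false negatives (missed threats):      {false_negatives}")
```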

Another problem with AI in cybersecurity is "black box," or opaque, AI systems. It is difficult for humans to understand how these systems reach their decisions, which raises doubts about the trustworthiness of their results.
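Explainability techniques can help pry the box open. The sketch below uses permutation importance from scikit-learn to show which input features most influence a classifier's verdicts on a synthetic "phishing vs. benign" dataset; the model, feature names, and data are assumptions for illustration rather than a real detection pipeline.

```python
# Minimal sketch of inspecting an opaque model: permutation importance
# ranks which features drive its predictions. Data and model are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["url_length", "num_redirects", "domain_age_days"]

# Synthetic "phishing vs. benign" data; the label is mostly driven by the
# third feature, so it should dominate the importance ranking.
X = rng.normal(size=(400, 3))
y = (X[:, 2] < -0.2).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name:>16}: {score:.3f}")   # larger = more influence on verdicts
```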

Third, AI systems are trained on vast amounts of data, some of which may be sensitive, raising concerns about data privacy and increasing the risk of data breaches.

I don't recommend becoming overly reliant on AI technology for cybersecurity. As we have seen, AI systems are not infallible, so depending on them too heavily can create a false sense of security that makes you more, not less, vulnerable to cyberattacks.

The Key to Success in AI in Cybersecurity

It's an unfortunate fact that AI is used to harm and exploit in cybersecurity. The good news is that organizations can also use AI to their advantage: enhancing threat detection, accelerating incident response, automating security tasks, and simplifying compliance.

In this double-edged landscape, the key is to strike a balance between understanding AI's weaknesses and harnessing its potential to implement more effective, predictive security. By understanding the pros and cons of AI in cybersecurity, you can know what to avoid and what to implement in the right areas at the right time.

More importantly, maintain human oversight of AI systems. Equip cybersecurity teams with the knowledge and skills necessary to understand and manage AI technologies effectively, bridging the skills gap in the workforce. And foster collaboration between AI systems and human analysts to enhance decision-making and address ethical concerns.

About the Author:

Vihar Garlapati is the director of technology at a major Fortune 500 company. With over 17 years of experience in the IT industry, he has a proven track record of deploying sophisticated security solutions across the healthcare and financial sectors.

Garlapati's expertise includes identity access management (IAM) and the implementation of on-premises, cloud, and software-as-a-service (SaaS)-based services, which significantly enhance enterprise risk management, compliance, and productivity.

Garlapati's contributions have led to remarkable improvements in efficiency and effectiveness across multiple domains. By leveraging his deep understanding of IAM and diverse service implementations, Garlapati has consistently delivered solutions that not only meet but exceed expectations.
