According to The New York Times, the global cybersecurity industry will have 3.5 million unfilled jobs by 2021. Plus, the market size is predicted to reach USD 30.5 billion by 2025.
A recent Synack Report claims that combining cybersecurity talent and AI-enabled technology results in 20x more effective attack surface coverage than traditional methods.
But it's difficult to truly understand the implications of these numbers. Most content on the topic leaves the reader to do all the math, connect the dots, and try to understand the real problem behind the numbers, all by themselves – an overwhelming task.
Artificial Intelligence and cybersecurity are two of the most interesting topics on the internet today, and you want solid takeaways and insights to embrace in today's interconnected digital economy.
Challenges of AI: A cybersecurity industry perspective
A recent survey by the Consumer Technology Association identifies AI's top application as cybersecurity, with 44% of all AI applications being used to detect and deter security intrusions.
At the other end of the spectrum, cybercriminals are increasingly (and innovatively) using AI as digital ammunition to create potent security threats, resulting in an increased demand for cybersecurity experts.
On the subject of threats, here’s my pick of the top-3 current concerns:
1. Increased cybersecurity threat
In 2019, the Wall Street Journal reported that a group of cybercriminals had demanded €220,000 from the CEO of a U.K.-based energy firm. An interesting twist to the plot: they used AI to impersonate the CEO's voice. The lesson learned? AI provides cybercriminals with an additional edge to bypass security blind spots in an organization.
2. Accelerated volume and complexity of cybersecurity attacks and data breaches
A study by Varonis puts things into perspective.
Long story short, AI-powered technology can enable automation of tasks, protect the attacker's identity, spawn the next frontier in enterprise fraud, a.k.a. deepfakes, and refine malicious services (to name but a few instances) at a rate and scale that's virtually unheard of.
3. A window of (threat) opportunity, owing to greater reliance on AI-enabled technology
Practically speaking, AI-led technology can swing both ways: in favor of organizations or in favor of cybercriminals, depending on the circumstances. Organizations can beef up their security by using emerging technologies and AI-led software.
Ironically, attackers can overcome these security controls and manipulate routine tasks by using the same technology in an improvised capacity, giving rise to more complex and interconnected risks. Believe it or not, in 2018, Cisco reported that they "blocked seven trillion threats, or 20 billion threats a day, on behalf of their customers." Currently, it seems like we're stuck in a chicken-and-egg situation.
How AI is contributing to cybersecurity processes
According to Capgemini, two out of three organizations are planning to adopt AI solutions by 2020.
Despite posing serious threats, AI augments and complements the future of cybersecurity best-practices. Here’s a lowdown of the “AI advantage” in the cybersecurity landscape:
Reduced response time to threats and cost of preventing breaches
Synack claims that "using AI accelerates the time to evaluate the breach-worthiness of a vulnerability by 73%."
Plus, additional data from Capgemini research shows that "3 out of 4 executives believe AI in cybersecurity speeds up breach response — both in detection and remediation."
"AI's tenacity reduces time to discovery: it doesn't need holidays, coffee breaks, or sleep, unlike Tier 1 security operations center analysts, for whom reading endless log files and alerts gets boring."
Ease of multi-tasking
AI is like a superpower that organizations can use in a number of ways – to review user behaviors, find patterns, and locate irregularities in the security network, to name just a few.
Enhanced and dynamic security practices
The use of AI in combination with human efforts can help companies find and close critical vulnerabilities 40% faster.
Plus, this allows companies to focus their efforts on creating secure 'choke points' instead of spending millions to secure the entire work environment.
Finally, by using AI for improved security, companies can leverage greater returns and minimize risks, a growing concern among companies today.
Minimized security responsibilities with high-quality results
AI is often described as intelligent, and rightly so. There are numerous application areas in which this powerful technology is helping organizations highlight recurring incidents and reverse the damage. (More on this in the next section).
Top-3 impactful use cases of AI in cybersecurity
1. Combing through mountains of security data and automating routine tasks
One of the biggest concerns in cybersecurity is the sheer volume of data organizations have to tackle daily. This is probably one of the reasons why Gmail chose the AI route to block 100 million extra spam messages every day. CTO of Seedcamp David Mytton's explanation of how AI is disrupting the cybersecurity space is on point.
In CIO, Mytton says, "As more and more systems become instrumented, the problem shifts from knowing that 'something' has happened to highlighting that 'something unusual' has happened."
An instrumented system keeps track of questions like: who has logged in, and when? What was downloaded, and when? What was accessed, and when?
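To make this shift from "something happened" to "something unusual happened" concrete, here is a minimal sketch in Python. The event data, user list, and "unusual" rules are invented for illustration; a real system would learn its baseline from months of logs rather than hard-coded assumptions.

```python
from datetime import datetime

# Hypothetical instrumented-system events: (user, action, timestamp).
# These records and the rules below are illustrative assumptions only.
events = [
    ("alice", "login", "2020-01-06 09:02"),
    ("alice", "download", "2020-01-06 09:15"),
    ("bob", "login", "2020-01-06 03:41"),     # off-hours login
    ("mallory", "login", "2020-01-06 10:05"), # unknown account
]

KNOWN_USERS = {"alice", "bob"}
WORK_HOURS = range(8, 19)  # 08:00-18:59 counts as "normal"

def unusual(user, action, ts):
    """Flag events that deviate from the baseline, not merely events that occurred."""
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    if user not in KNOWN_USERS:
        return f"unknown user '{user}' performed {action}"
    if hour not in WORK_HOURS:
        return f"{user} performed {action} at {hour:02d}:00, outside work hours"
    return None  # routine activity, no alert

alerts = [msg for e in events if (msg := unusual(*e))]
for msg in alerts:
    print("ALERT:", msg)
```

Note that the two routine logins generate no output at all; only the off-hours login and the unknown account surface as alerts, which is exactly the smaller "unusual" set Mytton describes.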
2. Reducing false security alarms and capturing unusual incidents
AI, in conjunction with threat intelligence, can detect new security issues and resolve them, offering a threat detection rate of 95%, as opposed to traditional antivirus software, "where the detection rate is only about 90%, meaning 10% of malicious samples are being missed."
"The hope is that these systems will minimize false alarms and insubstantial issues, leaving a much smaller set of 'real' threats to review and address."
3. Empowering foolproof security by offering predictive functions and improving efficiency
One of the most innovative and effective applications of AI in thwarting security breaches is biometric authentication. Tech giant Apple uses this method, commonly known as "Face ID."
The powerful Face ID technology combines built-in infrared sensors and neural engines to recognize the user. If Apple's claims are to be believed, the benefits are vast, with "only a one-in-a-million chance of fooling the AI to open a device with another face."
Apart from providing additional layers of security, AI is making teams more efficient, according to 70% of security professionals, and it is eliminating as much as 55% of employees’ manual tasks.
By adding a layer of security and helping teams pivot their energies towards more important tasks, AI also powers productivity, which in turn reduces overall stress levels for everyone involved.
Key perspectives and future trends in AI security
Training AI-powered systems to protect us
We’ve spoken plenty about how AI works round-the-clock to keep an enterprise secure, but what about the safety and training of the technology itself?
Experts and enterprises need to refocus their strategies on training AI and ML models to maximize the value of these systems.
ML systems can be trained to learn from historical data and detect anomalies, allowing companies to mitigate and manage cyberattacks efficiently.
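The idea of "learning from historical data" can be sketched with a simple statistical baseline: fit the mean and standard deviation of a metric from past observations, then flag new values that fall far outside that distribution. The daily failed-login counts below are invented figures for illustration, and a production system would use far richer features and models.

```python
import statistics

# Hypothetical historical baseline: failed logins per day over past days.
# Figures are invented for illustration; a real system learns from its own logs.
history = [12, 9, 15, 11, 14, 10, 13, 12, 11, 14]

mean = statistics.mean(history)    # learned "normal" level
stdev = statistics.stdev(history)  # learned day-to-day variability

def is_anomalous(todays_count, threshold=3.0):
    """Flag a day whose failed-login count sits more than `threshold`
    standard deviations above the baseline fitted from history."""
    z_score = (todays_count - mean) / stdev
    return z_score > threshold

print(is_anomalous(13))  # within the normal range -> False
print(is_anomalous(90))  # e.g. a credential-stuffing burst -> True
```

The design choice here is the essence of anomaly detection: nothing about "90 failed logins" is hard-coded as bad; it is anomalous only relative to what the system learned from its own history.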
AI is the shiny new toy in the digital arms race
From boosting the security defense to hyper-automating parts of the cybersecurity processes, AI will play an increasingly important role in preventing cyberattacks.
Capgemini recently reported that AI "adoption is set to skyrocket, with almost two out of three (63%) of organizations planning to employ AI by 2020."
AI as an offensive-defensive capability
As mentioned earlier, AI is a technology that can be used to both defend and attack an organization’s digital defense systems.
The need of the hour is for cybersecurity experts to proactively identify attacks (think: spam email attempts, disabling critical infrastructure, among others) and defend against them.
Limitations of AI & possible solutions
Experts favor findings verified by humans rather than AI
According to research by White Hat Security, "60% of security professionals are still more confident in cyber threat findings verified by humans over those generated by AI."
The top-4 areas where human intelligence trumps AI in the operational security process are the use of intuition, creativity, human experience, and frame of reference – evolved capabilities that AI is yet to demonstrate, let alone master.
Evidently, the current cybersecurity climate has, at least for now, tipped the balance towards human capabilities. The solution? Augmenting human talent with AI's technological prowess can be a reliable way forward. Aarti Borkar, Vice President at IBM Security, writes in Fast Company of a 360-degree solution enterprises can embrace:
“One way to help prevent bias within AI is to establish cognitive diversity. The diversity in the computer scientists developing the AI model, the data feeding it, and the security teams influencing it.”
- Establishing AI systems requires an incredible amount of assets and resources, such as memory, accurate data sets, and computing power. Not to mention, it is an expensive and time-intensive undertaking.
Some of the other growing concerns with the use of AI have been well captured by Osterman Research.
Moving forward, let’s look at how enterprises can solve these limitations from a holistic perspective:
- The first step is for companies to invest in an experienced cybersecurity firm with seasoned professionals.
- Test systems and audit your hardware as well as software to find and proactively fix security gaps.
- Install – and constantly update – firewalls and other malware scanners that keep your systems secure.
- Review the latest cyber threats and security protocols to prioritize risks and develop effective strategies.
AI-Powered cybersecurity in the future: Expert speak
We’ve chalked out the numbers and done our analysis. But what do the experts think about the prevalence and relevance of AI in cybersecurity? Let’s look at the picture painted for us by some of the expert opinions collected in Forbes:
The one who uses the technology first will call the shots
Phishing attacks are where the money is for cybercrooks, and they can wreak serious havoc on unsuspecting employees.
What this means is that organizations will need to keep validation technology up to date; the tools to create deepfakes and to detect them will be the same. So, it'll be an arms race over who can use the technology first.
Rise of deepfakes-as-a-service
Audra Simons, Director of Innovation at Forcepoint, offers an interesting perspective: "We expect deepfakes to make a notable impact across all aspects of our lives in 2020 as their realism and potential increases."
In the offensive-defensive cybersecurity game, the defensive side has their work cut out
Marcus Fowler, Director of Strategic Threat at Darktrace, explains what it takes to fight AI with AI: "The building blocks are well in place for the rise of AI-powered cyberattacks in 2020, as more sophisticated defenses and access to open-source AI tools incentivize adversaries to supercharge their attacks.
AI won't only enable malware to move stealthily across businesses without requiring a human's hands on the keyboard, but attackers will also use AI in other malicious ways, including determining their targets, conducting reconnaissance, and scaling their attacks.
Security experts recognize that defensive AI is the only force capable of combating offensive AI attacks. And that the battle must be fought by matching – or exceeding – the speed with which attackers innovate."
The emergence of disinformation and fake news
Pascal Geenens, Security Researcher at Radware, talks about this hot topic plaguing organizations and countries across the globe: "Disinformation and fake news can spread havoc in both the public and private sector and are increasingly being used as a weapon by nation-states.
In 2020, deep learning algorithms will bring about advances in generating fake but seemingly realistic images and videos.
This application of AI will be a catalyst for large-scale disinformation campaigns."
The impact of increased digitization on AI and cybersecurity
Phil Dunkelberger, President and CEO of Nok Nok Labs, sums up how increasing digitization will impact organizations, cybercriminals, governments, and the world at large: "As digitization continues in 2020, data will become more valuable than ever before.
Information that may have previously seemed trivial to the everyday consumer will hold significant value for stakeholders and hackers across the spectrum.
Adversaries or real-life 'data bounty hunters' will hunt for new ways to exploit it, governments will seek better ways to access it, enterprises will adopt stronger security measures to protect it, and end-users will demand better privacy to secure their personal information.
Furthermore, with the rise of AI and machine learning, crucial data that impacts how medical decisions are made, where and how autonomous cars move, and more will become increasingly mainstream – and increasingly lucrative to threat actors pining for the information."
The first and last line of defense in cybersecurity
Whichever way you slice it, one key learning, in particular, emerges from all this.
“AI isn’t ready to fly solo any time soon.”
Enterprises, as well as experts, are more comfortable embracing a "middle-ground" approach: AI can be used as a smart tool to augment human intelligence and help organizations stay competitive and safe in the ever-expanding world of cybersecurity threats.
In a nutshell, the advantages of AI far outweigh the limitations and offer a hopeful (and secure) way forward. What are your thoughts?