In August this year, the Wall Street Journal reported on an unusual case of AI being used in hacking. Using AI, the attackers were able to impersonate a CEO’s voice convincingly enough to compel a colleague to transfer nearly a quarter of a million dollars into their accounts.
As unusual as it may seem, this “vishing” or voice phishing attack – a more sophisticated take on conventional phishing scams – wasn’t exactly unprecedented. Vishing attacks have grown nearly 350% since 2013 and recent predictions indicate that the phenomenon will only become more commonplace in 2020 and more sophisticated as ML/AI technologies mature.
Emerging AI-enabled cyberthreats such as vishing represent a new challenge for cybersecurity analysts and CISOs. In an era where AI programs can “learn” any voice in just 20 minutes, AI-enabled cybersecurity shifts from being a nice-to-have to a must-have.
However, AI-enabled cybersecurity is not just about ensuring a level playing field for the defenders versus the dark side. To understand the imperative for AI/ML/DL in cybersecurity, we first need to consider the current state of cybersecurity itself.
According to the Global Risks Landscape 2019 report published by the World Economic Forum, cyberattacks currently rank as one of the most significant global risks today, surpassed only by natural disasters, extreme weather events and failure to mitigate climate change. However, a global study of over 500 companies impacted by data breaches found that security response might not really be measuring up to the risk profile.
The study found that malicious cyberattacks – as opposed to breaches resulting from system glitches or human error – were not only the most common but also the most expensive. Today, malicious attacks account for 51% of all breaches, up from 42% in 2014, and are anywhere between 27% and 37% more costly than other forms of breaches.
Even as malicious attacks grew, so did breach lifecycles – the time between the occurrence and containment of a breach. The average breach lifecycle in 2019 was 279 days, a 4.9 percent increase over the 266-day average in 2018.
The global average cost of a data breach also increased to $3.92 million, a 12% increase over 2014, as did the cost of lost business, the biggest contributor to data breach costs.
It’s not all gloom and doom though. The study also noted that the speed and efficiency of a company’s response to security incidents had a significant impact on overall cost.
A majority of conventional cybersecurity solutions still require significant human interaction and intervention. And as much as humans are unrivalled at detecting and responding to unusual patterns and behaviours, this is not a model that scales to match the increasing number and sophistication of cyberattacks. The other prominent argument for AI in cybersecurity is the increasing size and complexity of enterprise technology environments, with more attack surfaces for bad actors to explore and exploit. These are just two reasons why AI-enabled tools and solutions represent the future of cybersecurity.
A recent Reinventing Cybersecurity with Artificial Intelligence survey from Capgemini found that nearly one in five organizations used AI pre-2019. Adoption is expected to explode, with almost two out of three companies (63%) planning to deploy AI in cybersecurity by 2020.
The reasons for this enthusiasm are quite revealing. Over half (56%) of the respondents to the survey indicated that their cybersecurity analysts were overwhelmed, with over two-thirds (69%) counting on AI to help identify threats and thwart attacks. Nearly one in four (23%) were currently underequipped to even investigate all identified incidents.
As a result, nearly half (48%) expect budgets for AI in cybersecurity to increase by an average of 29% in 2020, with one in ten organisations increasing budgets by more than 40%.
Most firms (73%) have already deployed AI in some form or other as part of their cybersecurity strategy, with network security, data security and endpoint security making up the top three applications. The deployment focus is still predominantly on cyberthreat detection (51%), followed by prediction (34%) and response (18%).
But perhaps the most illuminating part of the study is that companies investing in AI-enabled cybersecurity realise significant benefits and are building the capabilities to cope with the dynamics of today’s cyberthreats.
Take breach lifecycle, for instance. Over 40% of organizations have been able to achieve a significant reduction – up to 15% – in the time taken to detect a breach as well as to remediate it. There is also a consensus that AI enables more accuracy when it comes to detecting breaches.
Improved accuracy and faster response times apart, a majority of companies have also realised meaningful cost savings – again of up to 15% – in detecting and responding to breaches.
Finally, AI-enabled cybersecurity has also helped improve the productivity of overwhelmed cyber analysts by relieving them of the mundane and allowing them to focus on the issues that really matter.
Access to data is the primary challenge for implementing AI in cybersecurity. For AI to be effective, it requires huge volumes of current, high-quality, diverse and dynamic data inputs. Even if the data exists, access could be an issue, especially in larger companies where data is distributed across multiple silos, applications, locations and stakeholders. Technical complexities apart, consolidating all security-relevant data so that it can be accessed from a single data platform can be a formidable undertaking, especially for companies that do not have the infrastructure in place.
Even then, in these times of heightened data usage and privacy regulations, it may not always be possible to let algorithms loose across a company’s data assets. It is extremely important to ensure that all the right consents and controls are in place. Finally, there’s the meticulous task of data cleaning, standardisation, correlation and understanding. This requires a high degree of data maturity as it involves a diverse variety of datasets, each with its own set of rules and principles. Organisations then need to create an integrated data platform that can connect these data sources to AI algorithms.
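To make the normalisation step concrete, here is a minimal Python sketch of mapping events from two hypothetical silos – a firewall log and an endpoint agent, with invented field names and values – onto a single common schema that downstream algorithms can consume:

```python
from datetime import datetime, timezone

# Hypothetical raw events from two silos, each with its own field names
# and timestamp formats -- stand-ins for real firewall / endpoint feeds.
firewall_event = {"ts": "2019-10-01T12:30:00Z", "src": "10.0.0.5", "action": "deny"}
endpoint_event = {"time": 1569933000, "host": "ws-042", "alert": "malware_detected"}

def normalise(event: dict, source: str) -> dict:
    """Map a source-specific event onto one common schema."""
    if source == "firewall":
        ts = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        return {"timestamp": ts, "source": source, "entity": event["src"], "signal": event["action"]}
    if source == "endpoint":
        ts = datetime.fromtimestamp(event["time"], tz=timezone.utc)
        return {"timestamp": ts, "source": source, "entity": event["host"], "signal": event["alert"]}
    raise ValueError(f"unknown source: {source}")

# A unified, chronologically ordered stream for AI algorithms to learn from
unified = sorted(
    [normalise(firewall_event, "firewall"), normalise(endpoint_event, "endpoint")],
    key=lambda e: e["timestamp"],
)
```

Real deployments, of course, involve dozens of sources and schema decisions far beyond timestamps, but the principle – one consistent record format, one ordered stream – is the same.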
Over half of the companies surveyed in the Capgemini study cited a lack of understanding of high-potential use cases as an implementation challenge. The study helpfully provides a curated matrix of twenty use cases ranked according to implementation complexity and benefits.
Prioritizing use cases on a complexity-benefit continuum is a good starting point for implementing AI in cybersecurity. However, there are a few other variables applicable to AI projects in general that could help further focus the selection strategy.
For instance, it is essential that shortlisted low-complexity, high-return applications are backed by a consistent and sufficient data foundation for successful implementation. Similarly, choose projects for which the requisite skills and knowledge are either available within the business or can be quickly accessed through external collaborations. This can have a considerable impact on the time to value of AI implementations. Focus on projects that can deliver quick and meaningful wins, which can be critical to ensuring stakeholder buy-in and sustaining morale and momentum. Finally, ensure that every use case is aligned with the broader cybersecurity priorities of the business.
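The selection logic above can be sketched as a simple scoring exercise. The use cases, scores and discount factors below are entirely illustrative, not taken from the Capgemini matrix:

```python
# Hypothetical use cases scored 1-5 on benefit and implementation complexity,
# with flags for data readiness and in-house skills (all illustrative values).
use_cases = [
    {"name": "malware detection", "benefit": 5, "complexity": 2, "data_ready": True, "skills": True},
    {"name": "fraud detection", "benefit": 4, "complexity": 4, "data_ready": True, "skills": False},
    {"name": "user behaviour analysis", "benefit": 3, "complexity": 5, "data_ready": False, "skills": False},
]

def priority(uc: dict) -> float:
    """Benefit-over-complexity ratio, discounted when data or skills are missing."""
    score = uc["benefit"] / uc["complexity"]
    if not uc["data_ready"]:
        score *= 0.5   # no reliable data, no reliable model
    if not uc["skills"]:
        score *= 0.75  # external hires or partners add to time to value
    return score

shortlist = sorted(use_cases, key=priority, reverse=True)
```

Even a rough model like this forces the conversation about data readiness and skills before budget is committed, which is the real point of the exercise.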
The reality is that there is more collaboration between bad actors sharing information, tools and techniques, than there is between security analysts. The concept of security as a competitive advantage is still a key obstacle for genuine collaboration between industry, private and public sector organisations, government agencies, academics, threat researchers and other players in the cybersecurity space.
Without collaboration, the best technological implementations will still leave critical gaps in a company’s defences. The best solutions will combine technology and partnerships to strengthen defences across the board while allowing each participant to address deficiencies in their own security systems. Collaboration empowers security teams to stay abreast of developments in the threat landscape. It also plays a critical role when it comes to improving the logic of AI algorithms to detect threats more efficiently. In the long term, a collaborative model can deliver significant cost reductions and facilitate a more proactive, rather than reactive, approach to managing cyberthreats.
SOAR (Security Orchestration, Automation and Response) refers to a solution stack of compatible tools that aggregate security information, alerts and threats from multiple sources. SOAR is a critical component for enhancing the performance and productivity of AI in cybersecurity – though only 36% of organizations have deployed it.
Combined with ML and AI tools, SOAR provides a strong framework for mitigating evolving threats and can even be configured to respond automatically to a range of situations. Automating incident response procedures can help organizations reduce mean time to detect (MTTD) and mean time to respond (MTTR) by qualifying and remediating security alerts in minutes rather than days, weeks or months. In addition to shortening response times, this rules-based approach to automation also ensures more consistency, reduces the scope for human error and frees analysts to focus on high-priority incidents.
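For readers unfamiliar with the two metrics, here is a minimal sketch of how MTTD and MTTR can be computed from incident records; the timestamps are hypothetical and the record format is an assumption, not a SOAR product's schema:

```python
from datetime import datetime, timedelta

# Illustrative incident records: when the intrusion occurred, when it was
# detected, and when it was fully remediated (all timestamps hypothetical).
incidents = [
    {"occurred": datetime(2019, 6, 1, 9, 0), "detected": datetime(2019, 6, 1, 9, 45),
     "resolved": datetime(2019, 6, 1, 11, 0)},
    {"occurred": datetime(2019, 6, 3, 14, 0), "detected": datetime(2019, 6, 3, 14, 15),
     "resolved": datetime(2019, 6, 3, 14, 40)},
]

def mean_delta(pairs) -> timedelta:
    """Average of a list of (start, end) intervals as a timedelta."""
    total = sum(((end - start) for start, end in pairs), timedelta())
    return total / len(pairs)

# Mean time to detect: occurrence -> detection
mttd = mean_delta([(i["occurred"], i["detected"]) for i in incidents])
# Mean time to respond: detection -> remediation
mttr = mean_delta([(i["detected"], i["resolved"]) for i in incidents])
```

Tracking these two numbers before and after an automation rollout is the simplest way to quantify whether SOAR is actually shortening the breach lifecycle.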
SOAR delivers a range of benefits including better quality of intelligence, improved operational efficiency, enhanced reporting and knowledge capture, lower costs, and the ability to streamline the process of onboarding cyber analysts.
AI in cybersecurity will considerably reduce the need for human intervention in day-to-day tasks. However, cybersecurity and AI specialists will continue to play a pivotal role in training, improving and perfecting these solutions. Right now, these specialists occupy the top two positions on IT managers’ list of hard-to-find skills. The key challenge, therefore, will be to find cyber analysts trained in AI and ML.
A technology like AI in cybersecurity needs a perfect blend of enterprise process knowledge, cybersecurity expertise and AI/ML proficiency. One approach to addressing the talent shortage in cybersecurity and AI is to upskill employees affected by automation but with good enterprise process knowledge into new roles in cybersecurity. Companies also need to look at concepts such as “new collar workers”, pioneered by IBM, where workers are trained specifically for in-demand technology jobs through non-traditional means rather than four-year college degrees.
The rise in the number of cyberattacks and the intensifying regulatory focus on data security and privacy has elicited calls for framing cybersecurity as a part of a company’s Environmental, Social and Corporate Governance (ESG) strategy. Meanwhile, the prevalence of AI-based technologies is drawing out appeals to define a security risk governance framework for AI.
It is, therefore, easy to understand the strategic importance as well as the systemic complexities of defining a governance framework for AI in cybersecurity. It is, however, still possible to establish a productive, transparent and ethically sound governance mechanism that can deliver long-term improvement. Organizations embarking on AI-enabled cybersecurity programs need to start with clearly defined roles and responsibilities for cyber analysts who will act as the mediator between algorithm output and eventual action. In addition, there should be control processes in place to monitor AI algorithms for any behavioural anomalies.
AI has the potential to extend the capabilities of conventional security solutions and bring next-generation features to an organization’s approach to cybersecurity. The current state of cybersecurity is characterised by prolonged detection and remediation times that increase the financial, regulatory and reputational risks associated with cyber threats.
Most organizations, however, are invested in the potential of AI to transform their security operations in terms of cost, efficiency, productivity and accuracy. They will also need a strategic roadmap to effectively integrate AI-based technologies into existing cybersecurity environments. They need to deliver clean, relevant and up-to-date datasets for AI algorithms to learn from and evolve. They have to choose the right use cases that can maximize business value without being bogged down by complexity. They have to create cross-sectoral partnerships to exchange security information that will allow them to shift from reactive to proactive threat mitigation. They have to prioritize the adoption of technologies such as SOAR to extract the best value from their AI investments. And finally, they have to define governance mechanisms that address the risks and concerns of AI in cybersecurity.
If you want top quality written content and proven social media strategies to boost your online presence, then look no further than markITwrite. We have a ton of experience writing for cybersecurity businesses and we’d love to bring our knowledge and expertise to your website.
Please get in touch today for more information.