
  Dec 18, 2025

013: Predictive Cyber with AI, Market Adoption, Fixing Immaturity and 2026 Outlook

    Read Time: 9 Minutes
    Read on: monicatalkscyber.com
    Read previous newsletter editions

    ***

    In February 2025, a peer in the industry told me outright to stop wasting my life on public speaking and AI advisory. It was painful, but I didn't listen to him.

    • In just the last four months, I travelled to four continents and six countries, giving keynotes on AI and doing AI advisory:
      - Canada (Sep): Opening keynote on adaptive/predictive cyber to bridge the AI and cyber gap (out now on my YouTube)
      - UK (Sep): Keynote on Defending against AI, Cyber and Human Risks
      - Brazil (Sep): Keynote on the Imminent Dangers of AI (video coming out in 2026)
      - Norway (Oct): Keynote on Governing AI Before It Hacks You
      - Malta (Oct): Closing keynote on Innovating with AI without Breaking Trust
      - UAE (Nov): Opening keynote on Rebuilding Trust in the Era of AI (video coming out in 2026)

    • I advised on AI auditing, AI governance and AI threats.

    • I was featured by, and did brand partnerships with, Google Cloud, AuditBoard and others on deepfakes, AI-augmented risk management, the AI threat landscape and more.

    Here's one of the things my father taught me growing up: don't let the world tell you what you cannot do. That's why I believe what I believe.

    Screw the naysayers. You do you!

    I am grateful for 2025. I am grateful to you for being a part of my community.

    As we wrap up 2025, I want to share with you some of the biggest learnings I have had in AI, cyber, leadership and life. In this edition, we will cover:

    • Where organisations are failing

    • What the cybersecurity adoption lifecycle model is

    • Why it is important and relevant in today's world of AI

    • What the model "predicted" in terms of cybersecurity maturity

    • What the outlook is for 2026 and beyond

    ***

    We Got Maturity All Wrong

    🤯 We have been doing cybersecurity maturity all wrong, and most organisations still are. This is the hill I'll die on. Here's why.

    Over the last five years, resilience has undoubtedly become one of the biggest focuses of almost every professional, business and organisation out there. While we started talking about 'resilience' roughly two decades ago, the concept and its implementation for businesses didn't really hit the mainstream market until the financial crisis of 2008. It gained broad public traction only after Covid-19 became a global health crisis, exposing our supply chains as part of a complex, intertwined and vulnerable global digital world and pushing the real need for resilience in our global infrastructure across all industries and businesses.

    Yet most companies were in denial. Many still are. Meanwhile, in 2024, the adaptive security market was valued at USD 3.45 billion in revenue.

    Figure 1: 2020 Cybersecurity Adoption Lifecycle Model
    (See below for 2025 Updated Model)

    That's why I built the model above in 2020 and published it five years ago (check the links for the 2020 versions): to help companies and businesses understand how cybersecurity needed to evolve in order to continue supporting the business and serving society. In that model, I showed how companies should think about and implement cyber maturity: not in terms of how automated your processes are, but in terms of how your organisation manages the "unknowns". That's where adaptive and predictive security measures come into the picture. In a nutshell, the model showed that cybersecurity adoption in the market would (need to) evolve from preventive and regulatory-driven security to adaptive and predictive security becoming mainstream over the coming years. That's how we need to look at maturity. That's the direction going forward.

    Why is this important, why is it still relevant today and how do we solve this? First, let’s look at the problem.

    The Problem

    It's no surprise to anyone that most of the last decade has been driven by uncertainties and unknowns. There are typically at least three types of unknowns:

    1. The unknown knowns (i.e. tacit knowledge) e.g. the risks we mitigated based on our experience or intuition but without complete knowledge of attribution or the entire kill chain.

    2. The known unknowns (i.e. the ignorance we are aware of), e.g. advanced persistent threats (APTs), supply chain vulnerabilities, insider threats, etc.: we know they exist but don't know when they'll be exploited or when they'll materialise.

    3. The unknown unknowns (i.e. meta-ignorance), e.g. the AI and cyber threats and risks that we don't even know we don't know. Since we don't know what we don't know, it's hard to exemplify them before the fact. These are the most dangerous.

    The hardest one is the unknown unknowns. This meta-ignorance was triggered, for example, by the global pandemic, then by the global geopolitical crisis (the Ukraine-Russia war) and recently by AI. To add to that, AI brings one more aspect: unpredictability. That's a massive one, so let's unpack it.

    The biggest challenge with AI isn't an army of AI robots taking over humanity. It's AI (being used in) hacking humans and businesses in ways we don't know we don't know. It's AI going rogue in ways we don't know we don't know. Ultimately, it's the meta-ignorance around AI risks.

    There have been myriad cases pointing to how AI has unknowingly and unintentionally, yet very heavily, influenced human decision making, leading to dire or irrecoverable consequences. Look at the parents of a 16-year-old who took his own life, who have sued OpenAI because ChatGPT allegedly drove their son to suicide, or the elderly man who passed away after an AI chatbot influenced him into believing he had a real girlfriend in New York, so he left his wife to go visit his "girlfriend" one last time.

    I have been talking about autonomous AI hacking for a while now, most recently in my keynote on Bridging the AI and cyber gap that I gave in Canada in September 2025, where I talked about the concept of autonomous AI hacking (by the way, well before Anthropic, the company behind Claude, released their report on AI-orchestrated cyberattacks in November 2025) and why we need to move towards adaptive and predictive security, something I originally talked about in my 2020 model.

    You can watch my keynote here.

    Video 1: Monica's Keynote in Canada on Adaptive and Predictive Security Bridging the AI and Cyber Gap (Sept '25)

    Additionally, what I talked about in that keynote is the fact that we will soon reach the point where agentic AI, with some level of autonomy in decision making and actions, will be able to attack other agentic AI, infrastructure or you, with little to no human intervention. For example, my AI assistant hacking your AI assistant to carry out a task based on a decision it has been manipulated into believing, without you or me even knowing about it until it's too late. This is a specific example of AI going rogue combined with AI hacking autonomously.

    That unpredictability combined with our meta-ignorance (the unknown unknowns) is my biggest worry.

    Now the next question is how do we fight that?

    Setting the Foundation for the Cybersecurity Adoption Lifecycle

    Here's where most maturity models fall short: they focus on how well the process is defined, managed, automated, etc. I get it, automation matters, especially with AI, both in terms of processing data with speed and at scale, which is great. But that kind of maturity framework doesn't take a lot of things into account:

    1. Making a bad process highly defined, managed and automated doesn't make it any better. Bad processes, when automated, are just even worse processes that are now everywhere, churning out incorrect output faster, not better. Unless you fix the process, making it faster, automated and scalable won't make it any better.

    2. If regulations are the only reason why you do what you do, you'll never be truly resilient. As I wrote back in 2020 in my published article:

      In addition to preparation and recovery, one of the key success factors in building a strong cyber-resilience framework is adaptability and predictability aka adaptability to an ever-evolving threat landscape and predictability of the unknowns.

    3. There is no 100% security; we know that. Add AI and unpredictability to that. In this insecure, uncertain and unpredictable world of AI, maturity isn't about preventing or even just protecting; it's about adapting to and predicting (and here's the key: with a certain degree of probability) the threats and risks of the unknowns (or of what is maybe just unknown to your organisation). It'll never be 100%. We are not aiming for 100% anything.

    4. In order to continue experimenting with AI, we need to look at cyber maturity very differently. AI will break things. It's not some automated process that'll fix it. It's the extent of adaptability and the level of predictability that will determine the actual impact of the unintended consequences and how we manage that impact.

    5. Looking at security maturity as just an ITIL process that's well defined, managed, automated, etc. takes out of the equation all the key stakeholders you truly need to make an organisation mature. It's not just IT. It's operations. It's product teams. It's architecture. It's engineering. It's HR. Looking at maturity differently, and increasing it, will require all of them to be involved, and rightly so. It is not just an IT process that's automated.

    I built that model in 2020, and based on how things have evolved, I believe the cybersecurity industry has evolved, and will (need to) continue to evolve, in the direction shown in my model. So I decided to give it an update. Here's my updated 2025 version of the model:

    Figure 2: 2025 Updated Cybersecurity Adoption Lifecycle Model

    Why It Is Still Highly (and Maybe Even More) Relevant Today

    You can read my detailed 2025 blog on the updated cybersecurity adoption lifecycle here, but below are seven (7) key things that both my 2020 model and my updated 2025 model showcase, and why the model is still relevant, and maybe even more so, in the world of AI:

    1. True security maturity isn't about processes. For organisations to truly mature in the digital world, and now in the AI world, they must cross the chasm from preventive security to adaptive and predictive security. One such example of adaptive security is policies being activated in real-time during an ongoing crisis to minimise the damage to your crown jewels or to keep your Minimum Viable Business (MVB) running (see the first sketch after this list).

    2. Having said that, adaptive security is not enough. My 2020 and 2025 cybersecurity adoption lifecycle models showcase that the next wave of maturity won't just be adaptability but predictive security, e.g. predicting and defending against threats in real-time through real-time threat hunting. It was true back then in terms of where we were heading. It's truer today, especially with AI.

    3. For me, "Autonomous AI Hacking" is when AI autonomously attacks organisations, infrastructure, humans or even other AI systems with little to no human intervention. Now, I get it, that report had many flaws. While they claimed the attack to be 80%-90% automated, human intervention was critical for review, verification, launch, etc. But here's what I talked about in my keynote in Canada and here's why autonomous AI hacking is the future we need to be ready to defend against: despite there being no true autonomous AI hacking today, we are already at the point where, say, my AI assistant is able to hack your AI assistant to carry out a task based on a decision it has been manipulated into believing, without you or me even knowing about it until it's too late.

    4. Autonomous AI hacks do not necessarily need to be complex or sophisticated. Obviously, agentic AI carrying out tasks with some level of autonomy is an advancement in itself, but the attack itself doesn't need to be sophisticated. It can be, but it doesn't have to be.

    5. The key is that AI is able to, with little to no human intervention, carry out tactical and chained tasks throughout, e.g., a cyber kill chain to execute the attack and achieve its malicious goal, whatever that is. It is more than just automating cyberattacks for speed and scale. Surely, more sophisticated attacks will require some human intervention, but AI is already automating a few key parts of the entire cyber kill chain, including not only real-time vulnerability discovery but also exploit creation.

    6. In addition to preparation and recovery, one of the key success factors in building resilience in the world of AI is adaptability and predictability, that is, adaptability to an ever-evolving threat landscape and predictability of the unknowns. However, it's important to differentiate AI-powered cyberattacks from Autonomous AI Hacking. Here's how I define and distinguish them. Attacks like deepfakes, personalised phishing, etc. are AI-powered. Autonomous AI Hacking, on the other hand, is about the end-to-end autonomy of carrying out chained tasks across an entire cyber kill chain, or most parts of it, using AI not only to generate "tokens" or code, but to create workflows, analyse input, make certain decisions and, based on those, take certain actions to carry out the next step in the kill chain, with little to no human intervention, ultimately achieving its "malicious" goal.

    7. In the world of AI, where AI-orchestrated cyberattacks and autonomous AI hacking will eventually become mainstream, predictive security is going to be even more important than before. We need AI to defend against the darker side of AI. It is this very AI that we will need to hunt for threats in real-time, correlate them with their impact on business risks, and find and patch vulnerabilities in real-time to defend against autonomous AI cyberattacks (the second sketch below illustrates what such probability-based prioritisation could look like).
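
    Here's a minimal, illustrative sketch (in Python) of the adaptive security idea from point 1 above: a tiny policy engine that activates stricter policies in real-time as incident signals escalate, so the crown jewels stay protected and the Minimum Viable Business keeps running. All names, policies and severity thresholds here are my own assumptions for illustration, not a reference implementation.

      # Sketch of adaptive security: tighten policies in real-time as an
      # incident escalates. Names and thresholds are illustrative only.
      from dataclasses import dataclass, field

      @dataclass
      class AdaptivePolicyEngine:
          crown_jewels: set          # systems the business cannot afford to lose
          active_policies: set = field(default_factory=set)

          def on_signal(self, system: str, severity: int) -> set:
              """Activate stricter policies as incident severity grows."""
              if severity >= 3:
                  # Broad containment: assume the compromise is spreading.
                  self.active_policies |= {"block-external-egress", "require-mfa-everywhere"}
              if severity >= 2 and system in self.crown_jewels:
                  # Protect crown jewels first; keep only MVB-critical paths open.
                  self.active_policies |= {f"isolate:{system}", "read-only:crown-jewels"}
              elif severity >= 1:
                  self.active_policies.add(f"step-up-auth:{system}")
              return self.active_policies

      engine = AdaptivePolicyEngine(crown_jewels={"payments-db", "customer-pii-store"})
      print(engine.on_signal("payments-db", severity=2))   # isolates the crown jewel
      print(engine.on_signal("hr-portal", severity=3))     # escalates to broad containment

    The point isn't the code itself; it's that policies are activated by signals during the crisis, rather than defined once and left static.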

    To ensure we're not lagging, it's not enough to be proactive; we need to be adaptive and predictive.
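
    As a companion to point 7 above, here's an equally minimal, hypothetical sketch of the predictive side: scoring telemetry signals with a rough likelihood, correlating them with business impact, and hunting the highest predicted risk first. The indicator names, weights and criticality values are assumptions I made up for illustration; a real implementation would tune or learn these continuously.

      # Sketch of predictive security: estimate risk with a probability-style
      # score (likelihood x business impact) and prioritise threat hunting.
      # Indicator names, weights and criticality values are illustrative.
      from dataclasses import dataclass

      @dataclass
      class Signal:
          asset: str
          indicators: dict   # e.g. {"anomalous_logins": 0.7, "new_exploit_chatter": 0.4}

      WEIGHTS = {"anomalous_logins": 0.5, "new_exploit_chatter": 0.3, "unpatched_cve": 0.8}
      BUSINESS_CRITICALITY = {"payments-db": 1.0, "marketing-site": 0.2}

      def predicted_risk(sig: Signal) -> float:
          """Combine weighted indicators into a capped likelihood, then weigh by business impact."""
          likelihood = min(1.0, sum(WEIGHTS.get(k, 0.1) * v for k, v in sig.indicators.items()))
          impact = BUSINESS_CRITICALITY.get(sig.asset, 0.5)
          return likelihood * impact

      signals = [
          Signal("payments-db", {"unpatched_cve": 0.9, "new_exploit_chatter": 0.6}),
          Signal("marketing-site", {"anomalous_logins": 0.8}),
      ]
      # Hunt the highest predicted risk first, instead of reacting after the fact.
      for sig in sorted(signals, key=predicted_risk, reverse=True):
          print(f"{sig.asset}: predicted risk {predicted_risk(sig):.2f}")

    Again, this is a toy: the value lies in ranking the unknowns by probability and business impact before they materialise, which is exactly what predictive security is about.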

    You can read my entire detailed blog on Updated Cybersecurity Adoption Lifecycle Model and Predictive Security here.

    Outlook '26 and Beyond: Adaptability and Predicting the Future

    Here's the underlying message and outlook for '26 and beyond:

    • I am a big fan of threat modeling. Even before I officially worked as a hacker, more than 20 years ago, I wrote my master's thesis on Attacker-Centric Web Application Threat Modeling, for which I received the best master's thesis award. Threat modeling has truly been the backbone of security engineering. AI will not only change threat research, but also threat hunting, real-time vulnerability patching and evidence-based risk management, to name just a few examples of predictive security with AI.

    • For the last decade and especially since I released my model 5 years ago, I have been advocating that adaptive and predictive security is the future of cybersecurity. I stand by it.

    • As per the 2020 model, most organisations and businesses are in the preventive security and regulatory-driven security fields. There are very few that build and truly implement cybersecurity to advance society and serve as a business differentiator. This requires investing and working in the fields of adaptive security and even predictive security.

    • Now with AI, I have a higher degree of confidence in my model and in where we need to continue heading. Adaptive and predictive cybersecurity measures are still very much key in the coming years. This doesn't mean preventive measures or basic cyber hygiene go out of the window; most of them are still very much valid, barring some. But they won't suffice. I use the word "predicting" somewhat loosely, because there is no 100% predictability, but we can shape the outcomes we want based on the actions we take today.

    • What I built five years ago is, I believe, even more true and relevant today and will continue to shape the cybersecurity industry. I still believe we are crossing the chasm with adaptive and predictive security and will continue to do so. That shift towards adaptive and predictive security is needed both in adoption in the mainstream market and in organisations making it a part of their maturity equation.

    In 2026, I am coming out with amazing news for you all. Stay tuned! If you found this useful, share it with a friend or a colleague.

    –– Monica Verma

    Follow me on LinkedIn, YouTube, Instagram or Book a 1:1 Call
