Join The Newsletter
Get actionable tips and insights on AI, Cyber and Leadership to become resilient in the world of AI
- Apr 14, 2025
011: Humans and AI – The Cost, Valuation, Strategy and Trends
Read Time: 10 Minutes
Read on: monicatalkscyber.com
Read previous newsletter editions
***
I am bullish on AI, especially in combination with Augmented Reality (AR). Think of all the possibilities for enhancing human lives, e.g. surgeries using a combination of AR and AI. This combination of AR/AI can drastically increase success rates through timely access to larger and more relevant data sets, much better and deeper visuals and insights into human anatomy, and improved movement and coordination. But that's not all. It's not just one industry. What if I told you that your career, your business, your entire world could be shaped by a few key AI trends in 2025? These AI shifts span all industries, including cyber. Let's dig in!
***
Today’s edition of The Monica Talks Cyber Newsletter covers:
7 AI trends in 2025 to make (or break) your career
Components of an AI strategy (with Template)
Hot Take: The cost of your AI subscription (BREAKING NEWS)
Third Time's a Charm: Recall is about to re-launch
***
7 AI Trends to Make (or Break) Your Career
Companies are replacing you with AI. Companies are replacing cybersecurity folks, engineers and coders. They are replacing you, thinking it’s a great idea. It is not. Why?
Vibe coding may be great, and yes, it makes virtually everyone a “coder”, but much of that 90% AI-generated code will end up being buggy, insecure and, in certain cases, even fatal.
The key skills engineers, coders and cyber professionals bring aren't about stitching together auto-completed code. They are systems thinking, critical reasoning and an analytical mindset for delivering value, where AI still fails. Most of this unsafe code will end up being used and reused without anyone understanding the implications. Sure, engineers and coders can also produce buggy, insecure and faulty code, but lowering that threshold means two things. On one hand, it's an amazing opportunity for accessibility and for bringing the world forward. On the other, because the threshold is now so low, we need a better, faster and more efficient way to detect this buggy code, especially when insecure code can become part of a company's entire value chain. Read more here.
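To make the detection point concrete, here is a toy sketch of scanning code for common insecure patterns. The pattern names and regexes are my own illustrative assumptions; a real pipeline would use a proper linter or SAST tool, not hand-rolled regexes:

```python
import re

# Toy patterns that often indicate risky Python code. Purely illustrative:
# a real review pipeline would rely on a proper static analyser.
RISKY_PATTERNS = {
    "eval_call": re.compile(r"\beval\s*\("),
    "exec_call": re.compile(r"\bexec\s*\("),
    "shell_true": re.compile(r"shell\s*=\s*True"),
    "hardcoded_secret": re.compile(r"(password|api_key)\s*=\s*[\"']"),
}

def scan_snippet(code: str) -> list[str]:
    """Return the names of risky patterns found in a code snippet."""
    return [name for name, pattern in RISKY_PATTERNS.items()
            if pattern.search(code)]

snippet = 'api_key = "sk-123"\nresult = eval(user_input)'
print(scan_snippet(snippet))  # ['eval_call', 'hardcoded_secret']
```

Even a crude check like this flags the hardcoded secret and the `eval` call that auto-completed code happily produces; the point is that detection has to scale as cheaply as generation does.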
While AI will eliminate jobs, break a lot of things, still create garbage output, produce misinformation and make incorrect decisions, when done correctly it has massive potential and upside, no doubt.
Amidst all that chaos, AI will also create new jobs, make you a builder, improve your career, fix a lot of security issues, make you more productive, help defend against cybercrime, and reduce the time from idea to execution. AI in Cyber is a massive opportunity.
So if AI is not yet ready to replace entire workforces or departments of engineers, coders and cyber professionals (despite companies trying hard), what is it really good at?
7 AI Shifts in 2025
One of the key shifts that we are seeing is that LLM agents and agentic multimodal AI are driving hyper personalisation. That's where the big bucks are. Humans don't buy with logic. They buy with emotions. No matter which industry. Especially in cybersecurity, where everything ends up being spam, unless it's hyper personalised and timely.
A recent research paper on an agentic multimodal AI-based framework for hyper-personalised ads showcases that such an agentic AI can gather and analyse market intel, create personalised ads tailored to user personas, and perform ad optimisation to position the product strategically, with up to 95% click-through rates. Why am I telling you this? Those who understand and manage to crack these ongoing trends (before the rest of the 99%) will stand out.
A Quick Summary of The 7 AI Shifts
AI-powered personalisation goes mainstream: Hyper personalisation is the future of customer experiences, no matter which industry. It's true for cyber too, especially for cyber.
The battle for high-quality data intensifies: Any ML/AI processing and outcome is highly dependent on the quality of data it works on. The problem is, most data out there needs some serious cleaning up. If you are gonna put garbage in, you are going to get garbage out.
Not every problem is an AI problem (until it is): AI is not new. But we are using it like never before. Despite that, AI is not a rainmaker. AI agents and agentic AI are going to become even bigger in 2025, but we aren't reaching AGI in the coming months.
AI is your decision-making co-pilot: AI is like a brainy sidekick, your own personalised Einstein. I don't just mean ChatGPT or Claude. I am talking about AI influencing and even making decisions on your behalf, e.g. whether your loan application is approved or denied.
Explainable AI takes centre stage: As AI starts making your decisions, you need to know how it arrived at them. Enter explainable AI, which is also going to be big for legal reasons.
AI is generating synthetic data (not just fake but synthetic): Think of synthetic data like a really good fake plant. It looks real, feels real, but you don't need to water it. It can solve a lot of privacy and data protection issues, but again, remember garbage-in-garbage-out, so it needs to be high quality.
There is no responsible AI without you: Who's keeping all this AI stuff safe, ethical and responsible? More on it below.
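As a minimal sketch of the synthetic-data shift above, here is how fabricated records can mimic the shape of real customer data without belonging to any real person. The schema, names and value ranges are purely illustrative assumptions:

```python
import random

random.seed(42)  # reproducible output for the illustration

# Hypothetical schema: none of these records describe a real person.
FIRST_NAMES = ["Alex", "Sam", "Priya", "Lena", "Omar"]
COUNTRIES = ["NO", "DE", "IN", "US", "BR"]

def synthetic_customer() -> dict:
    """Fabricate one plausible-looking customer record."""
    return {
        "name": random.choice(FIRST_NAMES),
        "country": random.choice(COUNTRIES),
        "age": random.randint(18, 80),
        "monthly_spend": round(random.uniform(5.0, 500.0), 2),
    }

# A thousand "customers" you can test or train on without touching real PII.
dataset = [synthetic_customer() for _ in range(1000)]
print(dataset[0])
```

Real synthetic-data tooling goes much further (preserving statistical correlations of the source data, differential privacy guarantees), but the fake-plant principle is the same: it looks real enough to be useful, and nobody's privacy gets watered.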
Check out the full episode on 7 AI Shifts to Make (or Break) Your Career or Business, here: Audio / Video.
Here's how to think about leveraging them:
How would your career and business change if you applied one of these concepts with AI?
Can you use it to augment decision making?
Can you use an AI agent to hyper-personalise your communication to your stakeholders?
Could synthetic data provide better privacy and compliance?
How would you incorporate responsible AI in your work, both personally and professionally?
***
Components of an AI Strategy (with Template)
Two years ago, as Generative AI exploded, my team and I, together with key business stakeholders in the company, started an AI project: a co-pilot pilot of sorts that would provide an offline, self-hosted AI model to access business data, with two goals: better document classification and improved productivity. Document classification is a fantastic AI use case, and it served two purposes. The first was to ensure the AI model has the right level of access to all documents and content within the business; a wrong level of access at the AI-model level can be far more disastrous, as it exacerbates the risks in scale and speed. The second was to pilot-test an internally hosted AI model on a use case that would benefit the organisation.
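To illustrate the classification side of such a pilot, here is a heavily simplified, rule-based sketch. The labels and keywords are my own assumptions for illustration; the actual project described above used a self-hosted AI model, not keyword rules:

```python
# Illustrative sensitivity labels and trigger keywords (assumptions only).
SENSITIVITY_RULES = {
    "confidential": ["salary", "merger", "credentials"],
    "internal": ["roadmap", "meeting notes"],
}

def classify_document(text: str) -> str:
    """Return the most restrictive label whose keywords appear in the text."""
    lowered = text.lower()
    for label in ("confidential", "internal"):  # most restrictive first
        if any(keyword in lowered for keyword in SENSITIVITY_RULES[label]):
            return label
    return "public"

print(classify_document("Q3 roadmap and meeting notes"))   # internal
print(classify_document("Updated salary bands attached"))  # confidential
```

The payoff of getting labels like these right is exactly the access-control point above: the AI model's reach into documents can then be gated by sensitivity level instead of being all-or-nothing.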
But before you start with any AI use case, you need an AI strategy, which means establishing AI governance, understanding risks vs. opportunities, and defining the core pillars of Responsible AI as part of that strategy.
Any good AI Governance framework, at the very least, will cover 3 major components:
Accountability and Ownership: Who is overall accountable for AI strategy, governance and outcomes? What are the clear escalation paths? Who are the ultimate decision makers in case of conflict?
Core Values of the AI Framework: What are the core values your AI framework is based on? Does it support transparency, human oversight, and user-controlled options for AI model training?
Compliant and Responsible AI: What are the key pillars for responsible use of AI and data processing by AI models? Does it comply with laws and regulations such as the EU AI Act, GDPR, etc.? At a fundamental level, responsible use of AI ensures 5 key pillars: transparency/explainability, fairness, non-biased/non-discriminatory use, ethical use and accountability.
I recently gave a keynote on Human, AI and Trust at the Association of Certified Fraud Examiners' Fraud Conference Europe, covering all 5 aspects of Responsible AI, plus AI use cases in finance, fraud and cybercrime. The entire keynote is coming out soon on my YouTube channel. Subscribe here, if you don't wanna miss it.
The problem is that most companies are using AI in their business without knowing:
What’s the purpose?
What problems should AI solve?
What specific outcomes should AI generate?
What AI use cases are relevant for the business?
What are the business opportunities vs. risks of using AI?
All the above questions need to be answered in an AI strategy. AI is getting woven deeper into our professional and personal lives on a daily basis. Everyone wants to innovate, but the legal and ethical aspects are still playing catch-up. I am not talking about stifling innovation, but about the need to define and implement what ethical, legal and responsible looks like. Here is what an overall AI strategy template looks like, exemplified with a specific AI use case: "AI in document classification". Grab your free copy with detailed information and the use case here.
***
Hot Take: The Cost of Your AI Subscription
BREAKING: OpenAI just raised $40 billion at a whopping $300 billion valuation, the largest private funding round ever. Its valuation went from $157 billion in October 2024 to double that in just six months. As if that was not wild enough, Mira Murati, former CTO at OpenAI, is in the process of raising more than $2 billion for her new startup “Thinking Machines” in the seed round alone. If that goes through, it would be the largest seed round in history. And no, we are not done.
THIS JUST IN: Ilya Sutskever just raised $2 billion for his new startup Safe Superintelligence (SSI) at a $32 billion valuation with 0 products and 0 revenue. No APIs. No plans. No monetisation.
While AI companies are raising hundreds of millions and billions of dollars in funding, what is it costing you? What is your AI subscription really costing you? Is it just 20 bucks per month? Here are 3 more ways to look at it:
It is costing you your data
In cybersecurity, we know it: what you put online is there forever. You may have already been putting your content out on the Internet, and it's already being used by all major tech companies to spam you, but now the repercussions are even stronger. Not only is your data being scraped from the Internet (including, and particularly, social media platforms), it is being used to train AI models, infringing your privacy and (mis)representing you, and most of the time you don't even know about it. Ireland's Data Protection Commission is currently investigating X for allegedly using European users' personal data to train its Grok AI chatbot without consent, violating the EU's General Data Protection Regulation (GDPR).
It is costing you transparency
AI companies like OpenAI do provide you the option to opt out of your data being used to train models, and others, like Anthropic, claim your data won't be used unless it was flagged by their safety systems or you gave explicit consent. But how do you know whether these policies are enforced, and to what level? Most AI models are black boxes, and many AI companies still do not disclose the specific datasets used for training, making it difficult for you to know whether your data has been utilised.
It is costing you your copyright
AI models, especially LLMs, train on vast underlying datasets, including copyrighted material. While OpenAI came out pointing fingers at DeepSeek for infringing… it seems OpenAI has been doing the same with your data. But what happens when it's your copyrighted material or your IP? Many AI companies have been accused of using copyrighted books, articles, and other creative works without permission to train their models. The former Sinn Féin president Gerry Adams is considering suing Meta for the alleged use of his creative work and his books to train its model without his permission.
While there are no quick fixes, here are a few things to consider:
Separate personas
For some of the online AI models, I use a completely different account from any of my personal accounts, and feed them data very intentionally. I also recently came across the following tool.
Have I Been Trained
“Have I Been Trained” is a tool that lets you search a database of what data is being used to train AI models and discover whether your work has been included.
Opt-Out Policies / Offline Models
While "opt-out" is only as good as trusting companies to do what they say they'll do, it's still better than not opting out of your data being used to train the underlying AI model. Anthropic claims they don't use it. OpenAI allows you to disable it. And if none of these options are good enough, you can try offline or self-hosted AI models. Those will be limited in functionality and can require significant compute, energy, etc.; however, there are options like Llama 3, Mistral AI and Microsoft's Phi models (SLMs), which means better control, better privacy and a better dataset if you can feed them your own context, which is what will give you the most benefit. SLMs may change the way we think about AI.
***
Third Time's a Charm: Recall is About to Re-launch
Microsoft is about to relaunch Recall for the third time (for real this time).
Microsoft’s Recall AI is creepy, clever, and compelling. I’ve been testing the controversial Recall feature and I’m still not sure if I love it or hate it. – The Verge
After massive discussions around the privacy and security issues of this feature, which I still believe are highly debatable and polarising, Microsoft is relaunching Recall, but this time disabled by default unless you opt in. This is both a massive benefit and potentially a privacy and legal nightmare. You'll either love it or hate it. In any case, you have a third chance to make up or change your mind about it. Read more here.
***