Reality Check: Rebaselining AI for Security Folks
2–3–2025 (Monday)
Hello, and welcome to The Intentional Brief - your weekly video update on the one big thing in cybersecurity for middle market companies, their investors, and executive teams.
I’m your host, Shay Colson, Managing Partner at Intentional Cybersecurity, and you can find us online at intentionalcyber.com.
Today is Monday, February 3, 2025, and while I said we were running at full speed last week, it feels like the pace has picked up even more since I uttered those words.
That the pace is moving beyond human norms shouldn't surprise us, though, and it means it's time to rebaseline our positions around AI as it continues to evolve.
Reality Check: Rebaselining AI for Security Folks
Last year, it was probably good enough - or even forward-thinking - to have a written policy on AI use at your firm. Now, however, that's simply not going to cut it, and the primary example of why is a new AI model released last week by DeepSeek, a Chinese AI lab backed by a hedge fund (High-Flyer).
I’m linking here to Ben Thompson’s DeepSeek FAQ on Stratechery, as it’s the best I’ve seen on the topic from someone I trust, but here’s the core of what you need to know: this new model has two significant components.
First, it is being released as an open-source model, meaning anyone can use it (and its approach) to build on.
Second, DeepSeek claims the model required far less compute than leading American models from OpenAI to achieve similar performance - orders of magnitude less, by their account.
I want to unpack these two threads a bit in the security context and then bring it back to why you need to get moving on AI.
First, to the open-source piece: this structure essentially guarantees that the best practices developed by any one model will quickly be replicated by every model. We've already seen Sam Altman, CEO of OpenAI, acknowledge that they need a new open-source strategy in light of DeepSeek's success, even saying his firm has been "on the wrong side of history," according to reporting in Fortune.
The nature of these models is that not only is every large tech company in the US developing its own - Google, Meta, and Microsoft, as well as OpenAI, Anthropic (makers of Claude), and many others - but they are also being developed in China and many other places. Because each lab demonstrates not just what improvements it has made but how, the pace of acceleration is unlike any other technology we're familiar with, and is certainly outpacing Moore's Law in terms of progress.
That progress, of course, is then replicated back into other models, compounding in an exponential way - with plenty of smart folks putting resources into the effort.
On the question of plotting progress, there's plenty of contention around these sorts of things - benchmarks, capabilities, costs, and so on - which brings us to point number two, while also missing the larger point entirely.
Point number two, about the purportedly much lower training costs behind DeepSeek's R1 model: it doesn't matter whether the figure was $6 million or $60 million or $600 million. They're all less than the billions that others are throwing at their models, and the fundamental mechanics DeepSeek used to achieve that reduction in cost are going to be applied across all models.
Comparing and plotting the models against each other or against cost basis or against some arbitrary testing mechanism misses the point entirely - which is that these things are getting exponentially more capable, exponentially more affordable, and at a pace that simply doesn’t map to human time.
What does that mean for your company and for security? It means that you’ve got to figure out what AI means for you, and how you’re going to deal with it - whatever “deal with it” means for your technical and leadership teams.
I would suggest figuring out a way to provide paved paths or playgrounds where you can learn to use AI, build capabilities and familiarity, and do so in a way that has a reasonable level of security controls around it. These technologies take a good bit of experimentation to really understand, and that’s something that’s only going to get harder to catch up on the later you jump in.
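To make the "paved path" idea a bit more concrete, here's a minimal sketch of one way a security team might front an approved model with an internal gateway that logs usage and redacts obvious sensitive data before a prompt ever leaves the building. The gateway URL, field names, and redaction patterns here are illustrative assumptions, not any particular vendor's API.

```python
# Minimal "paved path" sketch: employees call this wrapper instead of hitting
# external AI tools directly. It logs who asked what (by size, not content),
# scrubs obvious sensitive strings, and forwards to an internal gateway.
# GATEWAY_URL, the JSON fields, and the patterns are hypothetical examples.

import json
import logging
import re
import urllib.request

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-paved-path")

# Hypothetical internal gateway that fronts whatever model the company approves.
GATEWAY_URL = "https://ai-gateway.internal.example.com/v1/chat"

# Very rough patterns for data you probably don't want leaving the building.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Scrub obvious sensitive strings before the prompt leaves the sandbox."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

def ask_model(prompt: str, user: str) -> str:
    """Send a redacted prompt through the internal gateway and log the request."""
    clean_prompt = redact(prompt)
    log.info("user=%s prompt_chars=%d", user, len(clean_prompt))
    body = json.dumps({"prompt": clean_prompt, "user": user}).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8")).get("answer", "")

if __name__ == "__main__":
    print(ask_model("Summarize this note from jane.doe@example.com", user="analyst-01"))
```

The point isn't this exact code - it's that a lightweight control layer like this gives employees a sanctioned place to experiment while giving security visibility into what's actually being sent and to whom.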
Your employees - and especially your competitors - are going to be experimenting with this technology, and resisting it outright seems like an untenable position. Understanding how it all works - and when data is shipped back to China, for example - will be critical.
Having security lead the way on AI framing can be very helpful - and it's not new territory. We've done this before: the cloud most recently, but also smartphones, desktop computing, and other shifts over the past few decades.
So, find a way to get started. Write an AI policy. Make your next tabletop exercise about unauthorized use of AI. Build a sandbox where employees can play. Consider low-barrier options like Microsoft's Copilot or Google's Gemini (even if they leave plenty to be desired). Any constraints you place on your use of AI are likely not present for at least some set of your competitors, much less attackers.
I'm urging you to get going on this because - at some point - it simply won't be possible to "catch up."
Fundraising
From a fundraising perspective, it was another week with more of the same, totaling just under $5.2B in newly committed capital (again).
Topping that list is Francisco Partners, which raised $3.3B for its third opportunistic credit fund.
I would also note a report from Private Equity International that Hg, a software-focused PE firm, is targeting $12B for its latest fund.
I’m not going to wade into the macro picture this week, other than to say volatility and uncertainty persist. Plan and invest accordingly.
A reminder that you can find links to all the articles we covered below, find back issues of these videos and the written transcripts at intentionalcyber.com, and we’ll see you next week for another edition of the Intentional Brief.
Links
https://stratechery.com/2025/deepseek-faq/
https://fortune.com/2025/02/01/sam-altman-openai-open-source-strategy-after-deepseek-shock/