This short video was a fun part of my AI-cybersecurity work in 2025
As one year ends and another begins, I find it helpful to look back at what I managed to accomplish over the last 12 months, even as I plan for the next 12. Back when the study of cybersecurity was my full-time job, this process was an annual ritual, often embodied in weighty reports that sought to capture the implications of one year's cybercrimes for the next year's defensive strategies.
Around the middle of 2019, cybersecurity ceased to be my full-time job as I retired from my role as senior security researcher at ESET, one of the leading makers of security software. My intention at the time was to move to England to support my mother who was entering her nineties, write my next book, and put out content that might attract some conference speaking, teaching, and consulting work.
Unfortunately, I had not factored in the changes to British society caused by the government's official policy of hostility to foreigners, including my American wife. She was forced to undergo a long and deeply distressing visa process before she could even enter the country, and suffered a serious brain haemorrhage before that permission was granted. That turn of events resulted in my transition to primary carer/caregiver for two people, my nonagenarian mum and my brain-damaged wife.
What does all that have to do with my 2025 output? Well, as you can see from the following snippets, my attention has been split in several directions. There's cybercrime, of course, particularly the tension between institutions urging people to "just go online" and the grim reality that having any kind of online presence these days increases your chances of being scammed, defrauded, stalked, or otherwise assaulted and abused. I posted some of my work on this to Substack, a platform that I started to favour in 2025.
Quite a lot of my 2025 output was related to artificial intelligence, partly because I was fortunate to get several AI-related teaching gigs early in the year. One of these was for the Computer Science department at Bridgewater State University, Massachusetts. I presented two versions of my "AI and Cybersecurity" class to students taking a course in computer forensics.
In May, I was invited to conduct a three-hour online course with the grand title of AI and Cybersecurity: Seizing the Opportunities, Defending Against the Threats, Navigating Legal Risks (currently available to purchase, but I don't get any royalties).
This was a great opportunity to be remunerated for digging deep into several areas of great interest and concern, areas in which I have a lot of history. (I was fortunate to have spent my last eight years of regular employment working with cybersecurity experts who had pioneered the use of machine learning, neural networks, and artificial intelligence — and I mean real experts, not techbros seduced by AI hype).
Teaching that course led to another opportunity from the same source (the father and son team of David and Mark Jacobs, experts in IT consulting and legal training, respectively). I provided the opening talk for an online conference in October focused on AI and IP.
That was timely because my partner and I had just discovered that an unknown number of the two dozen books we have authored since 1992 were among the pirated volumes that the company known as Anthropic downloaded and used without permission. Thankfully, several authors sued Anthropic in a case referred to as Bartz v. Anthropic. The Authors Guild says the class action suit was "brought by authors against an AI company for using books without permission to train large language models." Naturally, we wrote about this and listed which of our books might be involved.
In June, I realised that 2025 would mark the 30th anniversary of the first macro virus, so I started writing an article about it. For those not steeped in the history of malicious code, the appearance of a macro virus "in the wild" was a big deal. In fact, it made a difference to my life. I'd been working on the computer virus problem since the late 1980s, covering it in my 1992 book on computer security. By the start of 1995 I was working for the National Computer Security Association, which established the first commercial testing lab for antivirus (AV) software. At that time several AV products were proving to be very effective when properly deployed and managed, to the point where I was thinking "problem solved, or at least solvable".
The Word macro virus vastly expanded the scope of the problem and introduced an information system attack strategy that is still used today by criminals and state-aligned actors alike, from ransomware extortionists to spy agencies. My 30th anniversary article was written to highlight the fact that the macro virus was made possible by a selfish decision on Microsoft's part, a fact that should be a red flag for the "add artificial intelligence to everything" economy. Here's a link to the article:
Ironically, while researching the macro virus article, I found a glaring example of how the use of AI can go wrong. Naturally, I wrote about this and published an article online:
My research into the workings of AI and its impact on society continued with a novel hypothesis: if AI is going to help humans get things right, which is what a lot of politicians and investors were saying in 2025, what does AI think we should be doing about cybercrime? And what weight will that thinking carry in the real world?
That project helped me define a threat to both AI and society that nobody else seems to be talking about: unchecked cybercrime reduces trust in technology to an extent that prevents AI from achieving the ends that justify the means being ploughed into it. For example, the ability of AI to achieve medical breakthroughs will be limited if people won't share their medical data because criminals keep abusing such data for selfish ends. So, I asked several AIs what they would like to say to world leaders about this. One response was published on LinkedIn: An Open Letter to World Leaders from ChatGPT-5.
I also reworked the structured interviews for a LinkedIn article and led with a quote from Google's Gemini LLM. To be honest, I spent quite a bit of time and effort trying to find the right platform for my research-related articles: Medium, Substack, LinkedIn, my YouTube channel, or our blog. Speaking of blog content, I did make a page for the topic of AI on the Scobbs Blog and another for Cybercrime and Health.
Speaking of health, I was prompted to revisit the topic of haemochromatosis in September as a by-product of dealing with Chey's declining health. Now that Chey is a UK citizen, we spent quite a bit of time working through the process of getting her some support as a housebound patient with cognitive issues, plus some help for me as primary carer/caregiver for said patient. I might share more of that journey in 2026, but here's the haemochromatosis article, a good basic introduction to the topic:
In October, I also refreshed my page on Primary Aldosteronism, also known as Conn's syndrome, a leading cause of heart disease that is curable for many these days, given the advances in medical technology this century. The fact that millions of people who have this condition are not diagnosed or treated forms the basis for an article I hope to complete in Q1 of 2026. It addresses the reality of "AI medical breakthroughs" and why we should not count on these making a big difference to human health.
Finally, in November, I turned my attention to something of which we will see a lot more in 2026: misogyny and other ugly manifestations of male supremacy. I am sick of this, and of the men who perpetuate it. Knowing how best to oppose it is challenging, but awareness of the problem is clearly a first step. And clearly there are a lot of straight white cisgender men who don't yet see just how different, and difficult, life is for people who are not. To help open some eyes and minds I wrote: Lifting awareness of male supremacy: an elevator pitch with a twist.
So, those are the highlights of my output in 2025, but somehow it feels like I'm missing something. Ah yes, my annual look at the IC3 Internet Crime Loss statistics. This came out in April, and this year's title was: 2024 sets a record for cybercrime losses and at $16.6 billion it's a lot higher than I predicted. I am predicting a new record of $20 billion in the report that should arrive in April 2026. That would be roughly a fivefold increase in five years, up from the $4.2 billion in losses that the IC3 reported for 2020, and further evidence that too many humans are missing the point when it comes to cybercrime.
Speaking of missing, I did miss several events in 2025 due to my responsibilities as a carer. One of these was DefCon, the annual hacking conference, which I first attended 30 years ago. As you can see, I have the t-shirt to prove it, and yes, I did write about that.



