The Threat Prompt Newsletter
Reader Question: Can LLMs really reason?
This is a topical and important question for cyber.
Jun 14 • Craig Balding
Before You Deploy an AI Threat Detector, Send This Email
Before deploying that “AI-powered threat detector” your vendor is promoting, pause to consider a key risk: a single tainted data source can flood SOC…
Jun 1 • Craig Balding
March 2025
Create Better Security Visuals with AI
Spotting AI-generated security images is easy: padlocks, shields, jumbled words, and that distinctive “AI look”; they rarely look decent.
Mar 27 • Craig Balding
February 2025
LLM Hacks Its Evals
...and the team didn't notice.
Feb 22 • Craig Balding
January 2025
DeepSeek app, safe to use?
From a WhatsApp chat with a good friend earlier this week…
Jan 31 • Craig Balding
OpenAI: Devs, Share Your Org Data, Get "Free" Tokens
Account owners get to decide...
Jan 17 • Craig Balding
Don't Get Caught by AI Code Remnants
In my Cloud Advisory work, I’m frequently asked for opinions:
Jan 7 • Craig Balding
December 2024
Christmas Scams - Automation & AI
Merry Christmas
Dec 23, 2024 • Craig Balding
Breaking In, Breaking Out: The Bug That Wasn’t a Bug
Marco Figueroa, a bug bounty researcher, found himself on an adrenaline-fueled hunt after noticing an odd error message from ChatGPT.
Dec 20, 2024 • Craig Balding
Finding the Bugs Humans Miss
Ever wonder how many vulnerabilities are still lurking in “well-tested” code?
Dec 19, 2024 • Craig Balding
Blocked for AI reply
Ever get that feeling something’s off?
Dec 18, 2024 • Craig Balding
AI Agent Observability. Seeing What Went Wrong
The picture is becoming clearer: knowledge workers collaborating with LLM-powered agents.
Dec 17, 2024 • Craig Balding