The Threat Prompt Newsletter
June 2023
TP#20 Eight Automated AI Attack Frameworks
Plus: The AI Jedi Mind Trick
Jun 5 • Craig Balding
May 2023
TP#19 Silently Prompting Your AI Assistant
Plus: A PolyGlot in your AI pipeline?
May 28 • Craig Balding
TP#18 The AI Trust Paradox
Plus: High Impact Prompt Injection through ChatGPT Plugins
May 20 • Craig Balding
TP#17 ChatGPT Plugins and Prompt Injections
Plus: Unlimited OpenAI Credits
May 15 • Craig Balding
TP#16 AI Security Techniques
Plus: Tracking the Space
May 6 • Craig Balding
April 2023
TP#15 Explain Your Security Controls and Boost Secure Choices
Plus: Is That an AI Model Running In Your Browser?
Apr 29 • Craig Balding
TP#14 How To Avoid Leaking PII to ChatGPT
Plus: Trust in AI: Low vs. High Assurance
Apr 22 • Craig Balding
TP#13 AI Security is Probabilistic Security
Plus: ChatML, Guardrails and Auto-GPT
Apr 15 • Craig Balding
TP#12 Lies from an Inscrutable Blackbox
Plus: Reverse Engineering Neural Networks
Apr 8 • Craig Balding
TP#11 Apply AI to Solve Security Challenges at Scale
Plus: Navigating the Complexities of AI Safety
Apr 1 • Craig Balding
March 2023
TP#10 BadDiffusion: Is that a backdoor in your diffusion model?
Plus: Insurers Ask Cyber Insurance Applicants to Disclose ChatGPT Use
Mar 25 • Craig Balding
TP#9 Meta's LLaMA Escaped
Plus: GPT-4 Jailbreak and Indirect Prompt Injections
Mar 18 • Craig Balding