Discover more from The Threat Prompt Newsletter
TP#2 Outpainting: Place Anyone Anywhere
Plus: Automate the Web with a chatbot
A warm welcome to our new subscribers - and a special thanks to the original crew. Now, on to this week’s ideas, where AI and Cybersecurity intersect…
A step towards sanitising or weaponising images at scale? Example use cases:
Mask credit card numbers
Make images timeless (remove dates)
Create perfect physical access security passes from imperfect originals
Steps to replicate:
Find suitable input images (or make your own)
Experiment with different prompts and settings
Iterate until consistently passable outputs
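The "mask credit card numbers" use case boils down to redacting a pixel region before (or instead of) handing an image to an outpainting model. A minimal stdlib-only sketch of that idea, representing the image as a plain grid of pixel values — a real pipeline would use Pillow or an image-editing API, and the function name here is my own:

```python
def redact_region(pixels, top, left, height, width, fill=0):
    """Overwrite a rectangular region of a pixel grid with a fill value,
    e.g. to mask a card number before sharing the image."""
    redacted = [row[:] for row in pixels]  # copy so the original is untouched
    for y in range(top, min(top + height, len(redacted))):
        for x in range(left, min(left + width, len(redacted[y]))):
            redacted[y][x] = fill
    return redacted

# Toy 4x6 "image": 1 = ink, 0 = background
image = [[1] * 6 for _ in range(4)]
masked = redact_region(image, top=1, left=2, height=2, width=3)
```

An outpainting workflow would then ask the model to fill the blanked region with plausible content — which is exactly why the same primitive sanitises or weaponises, depending on who holds it.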
Share what you came up with. Hit reply…
Protection: will traditional organisations make policy decisions to block software & websites with this capability? Aside from the nefarious use cases, imagine the visual office memes!
Detection: image analysis to detect tampered/AI-adjusted images. I will explore these in a future newsletter.
An essential software security challenge banks and other regulated entities face is detecting malicious code planted by insiders. Peer code review appears to be the most effective control (under the right circumstances), with static analysis a very distant second. Source code reviews may detect suspect insider code, but from first-hand experience it's pretty hit-and-miss: success depends on the scope and intent of the review, the size of the code base, and the reviewers' familiarity not just with the source code but with the build chain and glue scripts.
Key question: where does GPT-3 auditor do well, and where does it struggle?
Review the samples included and the results
Test GPT-3 auditor against famous backdoors - how does it fare?
Can one or more classes of backdoors be consistently flagged for human review?
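To make the "classes of backdoors" question concrete, here is a hypothetical pattern-based pre-filter — the rules and names below are my own illustration, not part of the GPT-3 auditor project — that flags lines for human review:

```python
import re

# Illustrative patterns only; a real tool (or an LLM auditor) would go far deeper.
SUSPICIOUS_PATTERNS = [
    (re.compile(r"\beval\s*\("), "dynamic code execution"),
    (re.compile(r"\bexec\s*\("), "dynamic code execution"),
    (re.compile(r"(password|secret|api_key)\s*=\s*['\"]"), "hardcoded credential"),
    (re.compile(r"==\s*['\"]letmein['\"]"), "magic-value backdoor check"),
]

def flag_for_review(source: str):
    """Return (line_number, reason, line) tuples worth a human's attention."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in SUSPICIOUS_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, reason, line.strip()))
    return findings

sample = "def login(user, pw):\n    if pw == 'letmein':\n        return True\n"
findings = flag_for_review(sample)
```

The interesting experiment is where an LLM auditor beats this kind of brittle pattern matching — catching logic backdoors no regex could describe — and where it hallucinates or misses.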
How fast can an AI file FOIA requests? Perhaps not the best example, as FOIAs can be filed with a tweet to the data controller… It's an easy prediction that AI will automate the web for the masses: "Hey Siri, log in to my primary bank account and transfer X to Emma". Headless browsing in secure sandboxes will be a growth area.
There is little to say here beyond the obvious: we need to standardise AI content labelling for human and machine consumption.
What seems simple on the surface will quickly get complicated:
Is it AI generated?
Is it AI generated + human edited?
Is it a mix - some paragraphs one way, some another?
How about that image?
Plus, do we need to denote the heritage of the training set?
What about bias - do we need to mark whether the content was generated from an unbiased training set? And what does that mean?
What about business application code generated by AI that makes decisions in sensitive areas?
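As a thought experiment, the questions above could collapse into a small machine-readable label. Everything here — the field names, the schema, the version string — is hypothetical; no such standard exists yet:

```python
import json

def label_content(generated_by_ai, human_edited, mixed_sections=None,
                  training_set_notes=None):
    """Build a hypothetical machine-readable provenance label for a piece
    of content, capturing who (or what) produced it and how."""
    label = {
        "ai_generated": generated_by_ai,
        "human_edited": human_edited,
        "mixed_sections": mixed_sections or [],    # e.g. per-paragraph attribution
        "training_set_notes": training_set_notes,  # heritage / bias caveats
        "label_version": "0.1",                    # invented for illustration
    }
    return json.dumps(label)
```

Even this toy version shows the problem: every field invites argument about definitions, and the per-paragraph case alone multiplies the bookkeeping.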
Watch this space get overly complicated fast… and quickly become politicised. Companies embracing AI will find themselves compelled to establish policies on how they mark AI content; this example is just the tip of an enormous iceberg.
If you read one thing this week about AI, read this white paper. My favourite quote proposing how educators should respond:
“curriculum design focusing on collaboration between human and AI, opportunities to simulate real-world decision-making processes, the need to teach creative problem solving, improved teaching productivity”. How soon until employees cheat internal Computer Based Training with ChatGPT? And should education providers switch to “longer-form reports rather than resorting to in-house exam based testing where good use of AI technologies could actually be showcased”?
How is your/your kid’s school/college/university reacting to ChatGPT? Hit reply and let me know.
Procrasti-Blaster is helping people get unstuck. If you find yourself procrastinating this week…give it a try!
Click the emoji that best captures your reaction to this edition…
If your regulated business is struggling to market due to cyber risk, let me help you with Faster Cyber.
New To This Newsletter?
Subscribe here to get what I share next week.