TP#15 Explain Your Security Controls and Boost Secure Choices
Plus: Is That an AI Model Running In Your Browser?
Welcome to the 15th edition of Threat Prompt, where AI and Cybersecurity intersect. The week when Google announced their AI Security Workbench and EU lawmakers passed a draft of the EU Artificial Intelligence (AI) Act…
Five Ideas
1. LLMs Can Boost Password Awareness & Empower Secure Choices
Imagine a world where security controls coach you towards making better choices—no more obtuse or cryptographic “computer says no” messages. I believe this is a broad area where AI can make a positive difference across a wide range of otherwise quite poor security experiences.
But let’s start very small: what unique properties of Large Language Models could we use to help people choose better passwords?
I asked GPT4…
Understanding of language patterns: LLMs have a deep understanding of language structures and common phrases. This allows them to identify passwords that may seem strong due to character variety or length but are, in fact, based on easily guessable phrases or patterns.
Detection of semantic meaning: LLMs can detect the semantic meaning of words or phrases used in passwords, making it possible to identify weak passwords that rely on simple substitutions or common themes, which might not be easily detected by traditional methods.
Adapting to evolving threats: As LLMs are trained on massive datasets, they have extensive knowledge of existing password leaks, popular password trends, and known attacks. This enables them to adapt to evolving threats and better identify weak passwords.
Generating creative suggestions: LLMs can provide users with more creative and memorable password suggestions, tailored to their interests or preferences. This can encourage users to adopt stronger passwords that are less likely to be guessed by attackers.
To gain the benefits of an LLM for password strength evaluation, you can provide a prompt that asks the model to assess the strength of a given password and explain its reasoning. Here’s an example prompt with output from GPT4:
Prompt: Given the password “p@ssW0rd123”, please assess its strength on a scale of 1 to 10 (with 1 being very weak and 10 being very strong) and explain your evaluation. Consider factors such as language patterns, semantic meaning, known attacks, and character variety.
GPT4: I would rate the password “p@ssW0rd123” a 3 out of 10 in terms of strength. Here’s my explanation based on the factors you mentioned:
Language patterns: The password is based on a common English word, “password”, with simple character substitutions (e.g., “@” for “a” and “0” for “o”). This pattern is widely recognized and easily guessable.
Semantic meaning: As mentioned, the password is derived from the word “password”, which is a very weak choice due to its obvious relation to the concept of a password.
Known attacks: Attackers are well aware of the practice of substituting characters with similar-looking symbols or numbers. In addition, “password” and its variations are among the most commonly used passwords, making it a prime target for dictionary and brute-force attacks.
Character variety: Although the password contains a mix of uppercase and lowercase letters, numbers, and special characters, its overall strength is still weak due to the predictable pattern and easily guessable nature of the word it’s based on.
To create a stronger password, consider using a combination of unrelated words, numbers, and special characters or a passphrase made up of several random words. This can make the password much more difficult to guess or crack while still being memorable for the user.
Now imagine we embed a prompt like this into a password evaluation routine and provide the user with feedback that clearly communicates weaknesses and provides guidance on a better way. How many people’s operational security practices could we nudge in the right direction?
It can also help with a group of otherwise smart people who consciously set terrible passwords using keyboard patterns that otherwise meet strict rules. When a user sets or changes their password, we could augment our traditional rule-based checks with an LLM evaluation to detect compliant but high-risk passwords.
Don’t want to send your password to OpenAI? No problem, narrow LLMs capable of limited but effective inference will be in-browser soon…
2. Is That an AI Model Running In Your Browser?
Google Chrome now has access to your GPU through WebGPU. What’s that got to do with AI? It opens the door to on-device inference; i.e. a 3rd party hosted AI model can be squirted down the wire into your browser, where it can operate in private within the browser sandbox.
By definition, Large Language Models are big, which means they take up lots of RAM. The race is on to minify the models through quantisation (e.g. 16 bits -> 4 bits) and breaking out the decoders to provide “narrow AI” locally, without requiring users to be command line ninjas.
web-llm is a working demo that gives us a glimpse into the future.
The engineering challenges of running models in-browser include:
We need to bring the models somewhere without the relevant GPU-accelerated Python frameworks.
Most of the AI frameworks rely heavily on optimized computed libraries that are maintained by hardware vendors. We need to start from scratch.
Careful planning of memory usage, and aggressive compression of weights so that we can fit the models into memory.
Watch this space as the rapid pace of advancement unlocks new possibilities. Running the language model in your browser paves the way for many privacy and security applications (see first idea above for a trivial example) making it an exciting and vital field to explore.
3. A Glimpse Into The Future Of Military Ops
Palantir announces their AI Platform, one dedicated to Business and the other Defense. Vice reports on the demo video:
a “military operator responsible for monitoring activity within eastern Europe” receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be.
“They ask what enemy units are in the region and leverage AI to build out a likely unit formation,” the video said. After getting the AI’s best guess as to what’s going on, the operator then asks the AI to take better pictures. It launches a Reaper MQ-9 drone to take photos and the operator discovers that there’s a T-80 tank, a Soviet-era Russia vehicle, near friendly forces.
Then the operator asks the robots what to do about it. “The operator uses AIP to generate three possible courses of action to target this enemy equipment,” the video said. “Next they use AIP to automatically send these options up the chain of command.” The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems.
Three quick thoughts:
explainability: a criticism of fielded AI models is they don’t cite references or explain their decision-making. No detail yet, but you’ve got to imagine any military exploring mission-integrated AI will demand proper explainability or pay a high human price down the road.
upstream data poisoning: your adversaries will be figuring out your upstream data flows and doing their best to influence or directly poison the data your models depend upon.
security of deployed AI: I’m convinced that AI platforms present quite a large attack surface for adversaries. Traditional cyber security controls will get you a long way, but there’s a lot we still don’t know about protecting mission-critical AI models making potentially life-impacting decisions.
4. Prompt Injection: What’s The Worst That Can Happen?
Simon Willison, the co-founder of Django, provides the most comprehensive overview of Prompt Injection, including its implications and the effectiveness of current countermeasures.
Often when I raise this in conversations with people, they question how much of a problem this actually is.
For some applications, it doesn’t really matter. My translation app above? Not a lot of harm was done by getting it to talk like a pirate. … Increasingly though, people are granting LLM applications additional capabilities. The ReAct pattern, Auto-GPT, ChatGPT Plugins–all of these are examples of systems that take an LLM and give it the ability to trigger additional tools–make API requests, run searches, even execute generated code in an interpreter or a shell.
This is where prompt injection turns from a curiosity to a genuinely dangerous vulnerability.
If you read one long-form article on practical AI security this month, I recommend you make it this one.
5. UK Must Be Competitive in AI
The Daily Telegraph newspaper reports that the UK government is to spin up an AI Task Force to capture the upside of AI and manage the risk - such as AI-fueled disinformation campaigns - a concern highlighted by Sir Jeremy Fleming, the director of GCHQ, the UK’s listening GCHQ.
Rishi Sunak [UK Prime Minister] told the Cabinet the UK had to remain “competitive” on AI, saying the pursuit of advances could be one of the most important policies for the Government to prepare for the future. A new taskforce, focused on AI modelling, will be created to join up government and industry thinking, and will be modelled on the Covid vaccines taskforce. Sir Jeremy and Prof Angela McLean, the Chief Scientific Adviser, took Mr Sunak and his most senior ministers through recent developments in AI at the weekly Cabinet meeting A source familiar with the conversation told The Telegraph that it largely focused on the positive impacts such technological advancements could bring. However, there were warnings – especially about “disinformation” – with concerns raised that AI chatbots could reproduce factually wrong information, which readers would assume was true.
In February, Belfast in Northern Ireland received a grant of £18.9 to establish a ‘Cyber-AI Hub’ at CSIT (Centre for Security Information Technologies), which is the most prominent university technology research, development, and innovation centre in the UK with a focus on cybersecurity and now AI. The CSIT is hosted by Queens University, and along with Ulster University, they employ 150 full-time permanent staff dedicated to AI research.
Bonus Idea
Blueprint AI Shares Weekly Summaries of Key Open-Source AI Projects
Keeping up with popular open-source AI repos is hard, but we’ve got you covered! Blueprint AI summaries of Auto-GPT, @LangChainAI, @babyAGI_, @gpt_index and more
Feedback
Click the emoji that best captures your reaction to this edition…
Sponsors
I pre-launched a service to help Indie Hackers, and Solopreneurs navigate security due diligence by Enterprise clients: Cyber Answers for Indie Hackers & Solopreneurs. If you know someone who might benefit, please forward this note.
Thanks for reading!
What would make this newsletter more useful to you? If you have feedback, a comment or question, or just feel like saying hello, you can reply to this email; it will get to me, and I will read it.
-Craig
New To This Newsletter?
Subscribe here to get what I share next week.