These are not the hallucinations you are looking for
LLMs are powerful, but their knowledge often skews towards popular frameworks with lots of training data.
Your favourite GPT may know React/Next.js inside out, but newer projects like FastHTML? Not so much.
This knowledge gap can slow you down when generating code and lead to some very frustrating dead ends.
The fix?
Instead of wasting time wandering down 'Hallucination Avenue' (a maze within a maze…), follow the simple example set by FastHTML's team.
They created a context file to help 'educate' the AI about their shiny new Python web framework.
This is dead simple to adopt.
Just create '[tool_name]_context.txt' with a brief overview, key features, recent updates, and common pitfalls. Paste it into the context window along with your request, and watch the AI become much more helpful. If you're working in a team, pop the file into git and improve it incrementally.
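Here's a sketch of what such a file might contain (the file name and headings below are hypothetical placeholders, not FastHTML's actual documentation):

    --- fasthtml_context.txt (hypothetical layout) ---
    ## Overview
    One or two paragraphs: what the framework is and what problem it solves.
    ## Key features
    - The handful of idioms the model keeps getting wrong
    ## Recent updates
    - Anything released after the model's training cutoff
    ## Common pitfalls
    - Patterns carried over from React/Next.js that don't apply here

And if pasting it by hand gets old, a few lines of Python can prepend it for you (a minimal sketch assuming the OpenAI chat completions client; swap in whichever model or API you actually use):

    from pathlib import Path
    from openai import OpenAI

    # Load the curated context and send it ahead of the actual request.
    context = Path("fasthtml_context.txt").read_text()
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model will do
        messages=[
            {"role": "system", "content": f"Use this framework reference when answering:\n\n{context}"},
            {"role": "user", "content": "Write a FastHTML route that renders a todo list."},
        ],
    )
    print(reply.choices[0].message.content)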
The best part? This manual workaround might not be necessary for long. LLMs are gaining "legs" to hunt down relevant info autonomously, whether through real-time web searches, retrieval-augmented generation (RAG), or database queries.
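To make that "hunting" step concrete, here's a toy sketch: instead of pasting the whole file, pull out only the sections relevant to the current question (the file name and the keyword-overlap scoring are illustrative; production RAG setups typically use embeddings and a vector store):

    from pathlib import Path

    def relevant_sections(question: str, context_path: str = "fasthtml_context.txt", top_k: int = 3) -> list[str]:
        # Split the context file into blank-line-separated sections,
        # then rank them by how many words they share with the question.
        sections = Path(context_path).read_text().split("\n\n")
        words = set(question.lower().split())
        scored = sorted(sections, key=lambda s: -len(words & set(s.lower().split())))
        return scored[:top_k]

    print("\n\n".join(relevant_sections("How do I handle form POSTs in FastHTML?")))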
This approach is gold for security. Feed LLMs curated info on deprecated or insecure APIs, and they’ll be far better at finding and fixing vulnerabilities. Include details like outdated encryption protocols, insecure functions, policy statements, or even accepted risks (to avoid helpful but unwanted “I fixed this again for you, Dave”).
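A security section of the same context file might read something like this (hypothetical entries; the point is that it captures your policies and accepted risks):

    ## Security notes (hypothetical excerpt)
    - MD5 and SHA-1 are banned for password hashing; use argon2 or bcrypt.
    - yaml.load() without an explicit Loader is forbidden; use yaml.safe_load().
    - Anything below TLS 1.2 is deprecated; flag it.
    - Accepted risk: the legacy export endpoint skips CSRF checks; do not "fix" it.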
As ever, I recommend carefully scrutinising LLM-generated code you plan to run outside a sandbox.
Cheers,
Craig