Secure AI Unit Testing: Have Your Cake and Eat It Too
Remember when we discussed generating unit tests without exposing your full source code to an AI?
Well, there’s a robust tool that takes the concept further. Meet Aider, an AI pair programmer that runs in your terminal and implements the idea brilliantly.
While developers typically use Aider’s ‘/add’ command to pull source files into the LLM chat, it also offers a more private mode of operation for sensitive codebases.
Using Tree-sitter, a parser generator and incremental parsing library, Aider builds a structural map of your local git repository: class names, function signatures, and the references between them, but not the full source text. That map gives the model enough context to understand your code’s structure and generate robust test cases without any source files being added to the chat.
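The core idea is simple enough to sketch. Here’s a minimal, hypothetical version of such a structural map in Python, assuming the py-tree-sitter and tree-sitter-python packages (recent versions); Aider’s real repository map is more sophisticated, ranking symbols by cross-file references, but the privacy property is the same: only signatures are extracted, and function bodies never leave your machine.

```python
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

# Build a Python parser from the tree-sitter grammar.
PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)

def structural_map(source: bytes) -> list[str]:
    """Collect class and function signatures only; bodies are never included."""
    tree = parser.parse(source)
    sigs: list[str] = []

    def walk(node, depth: int = 0) -> None:
        for child in node.children:
            if child.type == "decorated_definition":
                child = child.child_by_field_name("definition") or child
            if child.type in ("function_definition", "class_definition"):
                body = child.child_by_field_name("body")
                # Everything before the body is the signature line.
                end = body.start_byte if body else child.end_byte
                sigs.append("    " * depth + source[child.start_byte:end].decode().strip())
                if body is not None:
                    walk(body, depth + 1)  # pick up methods nested in classes

    walk(tree.root_node)
    return sigs

# Hypothetical module under test:
code = b'''
class Account:
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self.balance = balance

    def withdraw(self, amount: float) -> float:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance
'''

print("\n".join(structural_map(code)))
# class Account:
#     def __init__(self, owner: str, balance: float = 0.0):
#     def withdraw(self, amount: float) -> float:
```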
For security-conscious developers, this means leveraging AI for unit testing while minimizing exposure of sensitive code.
You control what code, if any, is shared with the AI. This flexibility offers a practical way to simultaneously enhance your code quality and security posture, especially for projects with heightened privacy requirements.
Want to see what that looks like? Here’s Aider creating a black-box test case:
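As a stand-in for the screenshot, here’s a sketch of the kind of pytest test this workflow can produce, written against the hypothetical Account class mapped above. Note that the tests exercise only the public signatures the map exposed, never the implementation details:

```python
import pytest

from bank.account import Account  # hypothetical module; only its signatures were shared

def test_withdraw_reduces_balance():
    account = Account(owner="alice", balance=100.0)
    assert account.withdraw(40.0) == 60.0

def test_withdraw_more_than_balance_raises():
    account = Account(owner="alice", balance=10.0)
    with pytest.raises(ValueError):
        account.withdraw(25.0)
```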
Aider is about a year old and is updated nearly daily (!) by its developer, Paul Gauthier. It’s an open-source alternative to Cursor.
I’ve recently adopted Aider to develop security tools rapidly and will share tips along the way.