Show HN: Valknut – static analysis to tame agent tech debt

github.com

2 points by CuriouslyC 16 hours ago

Hi y'all,

In my work to reduce the amount of time I spend in the agentic development loop, I observed that code structure was one of the biggest determinants of agent task success. Ironically, agents aren't good at structuring code for their own consumption, so left to their own devices, purely vibe-coded projects will tend towards dumpster fire status.

Agents aren't great at refactoring out of the box either, so rather than resign myself to babysitting refactors to maintain agent performance, I wrote a tool to put agents on rails while refactoring.

Another big problem I encountered while trying to remove myself from the loop was knowing where to spend my time efficiently when I did dive into the codebase. To combat this, I implemented an HTML report that simplifies identifying high-level problems. In many cases you can click from an issue in the report directly to the code via VS Code links.
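If you're wondering how the click-through works: VS Code registers a vscode://file/&lt;absolute path&gt;:&lt;line&gt;:&lt;column&gt; URL scheme, so even a static HTML report can deep-link straight into the editor. A minimal Python sketch of the idea (the issue fields and function names here are hypothetical, not Valknut's actual report schema):

```python
from html import escape

def vscode_link(path: str, line: int, col: int = 1) -> str:
    """Build a vscode:// URL that opens `path` at `line`:`col`.
    Uses VS Code's own vscode://file/<abs-path>:<line>:<col> scheme."""
    return f"vscode://file/{path}:{line}:{col}"

def issue_row(title: str, path: str, line: int) -> str:
    """Render one report issue as an HTML list item with a clickable link."""
    url = vscode_link(path, line)
    return f'<li><a href="{escape(url)}">{escape(title)}</a></li>'

# Example: emits a list item linking to line 42 of a source file.
print(issue_row("Deeply nested conditional", "/home/me/proj/src/core.py", 42))
```

Clicking the rendered link opens the file at that exact position, provided VS Code is installed and registered as the handler for the vscode:// scheme.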

I hope you find this tool as useful as I have. I'm working on it actively, so I'm happy to field feature requests.

badmonster 5 hours ago

This tackles a real problem with AI agents and code quality! The observation about agents struggling with code structure is spot-on. I'm curious about your static analysis approach - what specific anti-patterns or code smells do you detect that are particularly problematic for agent performance? For example, are deeply nested conditionals more problematic than high cyclomatic complexity?
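To make that distinction concrete, here's a toy Python example (hypothetical code, nothing to do with Valknut's internals): both functions contain the same three branch points, so classic cyclomatic complexity scores them identically at 4, yet their nesting depth differs sharply.

```python
def deeply_nested(user, order, stock):
    # Nesting depth 3: each condition buries the next one deeper.
    if user is not None:
        if order is not None:
            if stock > 0:
                return "ok"
    return "rejected"

def flat_guards(user, order, stock):
    # Same three branch points (cyclomatic complexity 4),
    # but nesting depth stays at 1 via early-return guards.
    if user is None:
        return "rejected"
    if order is None:
        return "rejected"
    if stock <= 0:
        return "rejected"
    return "ok"
```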

Also, the HTML report with VS Code links is a nice touch. Have you considered integrating this as an LSP server or IDE plugin, so developers could get real-time feedback during the refactoring process itself rather than just post-hoc analysis?