Multi-agent AST scanning detects every LLM call, maps costs to call sites, and categorizes findings by impact.
Each agent runs in sequence, passing structured findings to the next stage.
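The staged hand-off can be sketched as a simple pipeline. This is an illustrative assumption, not erabot's internals: the agent names and the `Finding` schema below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical finding record; field names are illustrative only.
    call_site: str
    detail: str
    impact: str = "unclassified"

def detect_agent(source: str, findings: list) -> list:
    # Stage 1: locate LLM call sites (stubbed out here).
    findings.append(Finding(call_site="summarize_document", detail="messages.create"))
    return findings

def cost_agent(source: str, findings: list) -> list:
    # Stage 2: annotate each call site with cost-relevant detail.
    for f in findings:
        f.detail += " (max_tokens=4096)"
    return findings

def classify_agent(source: str, findings: list) -> list:
    # Stage 3: tag each finding by impact type.
    for f in findings:
        f.impact = "overprovisioned-output"
    return findings

def run_pipeline(source: str) -> list:
    # Each agent runs in sequence, passing structured findings forward.
    findings: list = []
    for agent in (detect_agent, cost_agent, classify_agent):
        findings = agent(source, findings)
    return findings
```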
Every LLM call is detected structurally — not with regex. Finds calls in strings, dynamic arguments, and deeply nested code.
```python
import anthropic

client = anthropic.Anthropic()

def summarize_document(doc: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        messages=[{"role": "user", "content": doc}],
    )
    return response.content[0].text
```
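A minimal sketch of what structural (AST-based) detection looks like, run against the example above. This uses Python's standard `ast` module; matching on attribute names ending in `create` is a simplifying assumption, not erabot's actual matching logic.

```python
import ast

SAMPLE = '''
import anthropic

client = anthropic.Anthropic()

def summarize_document(doc: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=4096,
        messages=[{"role": "user", "content": doc}],
    )
    return response.content[0].text
'''

class LLMCallFinder(ast.NodeVisitor):
    """Collects every call whose attribute chain ends in .create,
    recording its line number and any literal max_tokens argument."""

    def __init__(self):
        self.calls = []

    def visit_Call(self, node):
        if isinstance(node.func, ast.Attribute) and node.func.attr == "create":
            max_tokens = None
            for kw in node.keywords:
                if kw.arg == "max_tokens" and isinstance(kw.value, ast.Constant):
                    max_tokens = kw.value.value
            self.calls.append((node.lineno, max_tokens))
        self.generic_visit(node)

finder = LLMCallFinder()
finder.visit(ast.parse(SAMPLE))
```

Because this walks the parse tree rather than grepping text, the same visitor finds calls buried in nested functions, comprehensions, or dynamically built argument lists.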
- `max_tokens=4096` on a task averaging 200 output tokens: you're provisioning roughly 20x the output budget you actually use.
- `claude-3-haiku` fits this task type: save 95% with identical output quality.

Every finding is tagged by impact type, so your team can prioritize the fixes that matter most.
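Impact tagging can be sketched as a small taxonomy plus a sort. The category names and the `est_monthly_savings_usd` field are hypothetical, chosen for illustration rather than taken from erabot's actual schema:

```python
from enum import Enum

class Impact(Enum):
    # Illustrative impact categories, not erabot's actual taxonomy.
    OVERPROVISIONED_TOKENS = "overprovisioned-tokens"  # e.g. max_tokens far above observed output
    OVERSIZED_MODEL = "oversized-model"                # e.g. sonnet where haiku suffices
    REDUNDANT_CALL = "redundant-call"                  # e.g. same prompt sent twice

def prioritize(findings: list) -> list:
    # Surface the biggest wins first by sorting on estimated savings.
    return sorted(findings, key=lambda f: f["est_monthly_savings_usd"], reverse=True)

findings = [
    {"impact": Impact.OVERSIZED_MODEL, "est_monthly_savings_usd": 950.0},
    {"impact": Impact.OVERPROVISIONED_TOKENS, "est_monthly_savings_usd": 1200.0},
]
top = prioritize(findings)[0]
```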
See exactly how much you're wasting — and what you'll save after applying erabot's recommendations.
Most teams see ROI within the first 24 hours of deploying recommendations.