AI code compiles. It doesn't work.
LUCID finds the bugs your AI assistant hallucinated into existence. Adversarial verification for every implicit claim in your code.
We adversarially verified code from 4 platforms. 21 bugs found. Not one platform passed.
The worst bugs we found
Highest-severity finding per platform. Selected from 21 total.
Broken config bootstrap — app non-functional out of the box
ensureRuntimeSupabaseConfig is never called before React renders, so every Supabase API call fails because the URL and keys are undefined.
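
What the fix looks like, as a minimal sketch: await the runtime config before the first render. Only the function name comes from the finding; the entry file, import path, and signature below are assumptions.

```tsx
import { createRoot } from "react-dom/client";
import App from "./App";
// Hypothetical module path; the finding names only the function.
import { ensureRuntimeSupabaseConfig } from "./lib/runtimeConfig";

async function bootstrap() {
  // Resolve the runtime Supabase URL and keys before the first render,
  // so no client is ever constructed with undefined values.
  await ensureRuntimeSupabaseConfig();
  createRoot(document.getElementById("root")!).render(<App />);
}

bootstrap().catch((err) => {
  // Surface bootstrap failures instead of rendering a silently broken app.
  console.error("Failed to load runtime Supabase config:", err);
});
```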
Admin routes with zero authentication guards
All /admin/* routes (users, monitoring, configuration, tools) are defined with no auth checks. Any user can navigate directly to admin panels.
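
A minimal route-guard sketch, assuming React Router v6 and a hypothetical useAuth() hook; the platform's actual router setup and auth layer may differ.

```tsx
import { Navigate, Outlet } from "react-router-dom";
// Hypothetical hook; substitute whatever exposes the current session.
import { useAuth } from "./hooks/useAuth";

export function RequireAdmin() {
  const { user } = useAuth();
  // Anyone who is not an authenticated admin is redirected away from
  // every /admin/* route wrapped by this guard.
  if (!user || user.role !== "admin") {
    return <Navigate to="/login" replace />;
  }
  return <Outlet />;
}

// Usage: nest the admin routes under the guard instead of exposing
// them as bare <Route> entries.
// <Route element={<RequireAdmin />}>
//   <Route path="/admin/users" element={<AdminUsers />} />
//   <Route path="/admin/monitoring" element={<AdminMonitoring />} />
// </Route>
```

A client-side guard only hides the UI; the APIs behind those admin panels still need server-side authorization checks.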
iframe sandbox disabled — arbitrary script execution
BrowserPreview.tsx has its iframe sandbox attribute commented out, leaving no restrictions on embedded content and allowing arbitrary script execution.
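
A minimal sketch of re-enabling the sandbox; the component's props are assumptions, and the exact capability list depends on what the preview actually needs. The point is to start locked down and add permissions back deliberately.

```tsx
// Hypothetical props; the real component's interface is not shown in the finding.
export function BrowserPreview({ src }: { src: string }) {
  return (
    <iframe
      src={src}
      title="Preview"
      // A locked-down baseline: scripts and forms only. Deliberately omits
      // allow-same-origin, which combined with allow-scripts would let
      // same-origin embedded content break out of the sandbox.
      sandbox="allow-scripts allow-forms"
    />
  );
}
```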
Scene analysis calls non-existent API endpoint
The client calls /ai/analyze-scene, but the endpoint is never defined. It silently falls back to mock data, so the app appears to work while its core functionality is fake.
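
A minimal sketch of the fail-loudly alternative; the endpoint path comes from the finding, but the request and response shapes here are assumptions.

```typescript
// Assumed response shape for illustration only.
interface SceneAnalysis {
  objects: string[];
  summary: string;
}

export async function analyzeScene(imageUrl: string): Promise<SceneAnalysis> {
  const res = await fetch("/ai/analyze-scene", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl }),
  });

  if (!res.ok) {
    // Fail loudly: a missing endpoint should surface as an error the UI can
    // show, not be papered over with mock data that looks like a real result.
    throw new Error(`Scene analysis failed: ${res.status} ${res.statusText}`);
  }

  return res.json() as Promise<SceneAnalysis>;
}
```

If mock data is needed for local development, gate it behind an explicit dev flag so production can never silently serve fake results.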
The industry knows it's broken
45%
of AI-generated code fails security review
2,500%
projected defect increase by 2028
1 in 5
organizations breached via AI-generated code
Aug 2, 2026
EU AI Act Article 50 deadline