New FJP Issue Brief Warns of Risks with AI-Generated Police Reports
FJP and Prosecutors Urge Caution Over Use of Unregulated and Unreliable Technology in Law Enforcement
June 24, 2025 (Washington, D.C.) – As police departments across the country begin experimenting with artificial intelligence (AI) to draft official reports, a new issue brief from Fair and Just Prosecution (FJP) warns that these tools pose serious threats to accuracy, due process, and public trust in the justice system.
The issue brief, AI-Generated Police Reports: High-Tech, Low Accuracy, Big Risks, examines the emerging issues surrounding AI-generated police reports, including their susceptibility to inaccuracy and bias, privacy and data ownership concerns, impacts on police accountability and public trust in law enforcement, and the serious legal questions these tools raise.
“Prosecutors rely on police reports to make life-altering decisions — and that means we can’t afford errors, bias, or fiction masquerading as fact,” said FJP Executive Director Aramis Ayala. “When AI language models generate false narratives, real people pay the price. AI-generated reports have already included officers who were not present, misattributed actions, and warped evidence. The stakes are too high to treat this like just another tech upgrade. We owe it to our communities to pause, scrutinize, and demand transparency before this tech is allowed anywhere near the courtroom.”
Adopting AI-generated reporting without robust safeguards, independent audits, and measurable improvements in accuracy and bias mitigation risks undermining the trust essential for effective policing. For this reason, law enforcement agencies must proceed with extreme caution when considering these tools. Prosecutors, in turn, should monitor whether local law enforcement agencies are using these tools and adopt internal policies that either decline to accept AI-generated reports or establish safeguards around their use.
Key findings from the issue brief include:
- High Error Rates and “Hallucinations”: AI-generated reports have been shown to fabricate dialogue, misidentify individuals, or include officers who weren’t present at the scene.
- Bias and Injustice Reinforcement: Because these systems are trained on historical law enforcement data, they risk amplifying systemic inequities rooted in racial profiling and discriminatory practices.
- Constitutional Risks: Even small factual mistakes in AI-generated reports could lead to Fourth Amendment issues, Brady concerns, and/or issues related to unreliable officer testimony.
- Privacy Violations: AI tools can inadvertently incorporate unrelated or private information from body camera recordings, potentially compromising bystanders' privacy rights.
- Minimal Efficiency Gains: Despite claims of faster processing, independent research has found no significant time savings compared to existing report templates and workflows.
Read the full brief.