With the rapid proliferation of artificial intelligence (AI) tools, more and more individuals are relying on chatbots to draft legal documents without a lawyer's assistance. This trend, however, is causing friction in the New York judicial system. A judge recently dismissed a lawsuit over a public housing redevelopment project because the filing cited fictitious, AI-generated legal precedents, sparking widespread discussion in the legal community about the use of AI in the courtroom.
According to a report by Gothamist, Manhattan community activist Louis Flores has for years opposed the New York City Housing Authority (NYCHA) over its plan to redevelop a public housing complex in Chelsea. The plan calls for demolishing existing residences and constructing new buildings, raising concerns among some residents and community groups about potential privatization and the displacement of residents.
In July 2025, while preparing his lawsuit, Flores asked the AI chatbot Grok for legal precedents that could support his case. Within seconds, the AI produced a list of cases. Flores compiled the relevant information into a 42-page filing and submitted it to the New York State court, requesting a halt to the demolition project and a more rigorous public review.
Shortly afterward, however, NYCHA's lawyers pointed out to the judge that the four precedents cited in the filing either did not exist at all or did not support the legal conclusions the plaintiff described. In other words, the AI system had "hallucinated," generating information that seemed credible but was in fact nonexistent.
New York State Judge James D'Auguste ultimately dismissed the entire case on the grounds that the filing contained false citations.
Flores expressed deep frustration at the outcome: "It took us six years to get the case to a point where a judge could hear it, and now, because of an AI error, the entire case was dismissed. It feels like we were denied justice."
The New York State court system has taken note of the rapid spread of AI in legal proceedings. A judicial advisory committee established in 2024 released a memorandum in October of last year noting that the courts are seeking guidance on how to handle the use of AI in litigation documents.
For now, New York courts are applying existing principles: all documents submitted to the court must be accurate, even when prepared with AI assistance. The courts' administrative office is reviewing new policies that would neither prohibit the use of AI nor require parties to disclose that they used AI tools. Errors in documents would still be handled under existing rules, which include fines and other penalties.
A court spokesperson said the policy's purpose is to establish uniform standards for AI use across all courts, preventing individual judges from setting divergent rules.
Some legal scholars argue that AI tools are crucial for people who cannot afford legal fees. Many civil cases in the United States are filed by parties without legal representation: studies show that millions of state-court cases are brought by self-represented litigants, and in federal courts about a quarter of cases involve plaintiffs without lawyers.
Many ordinary people lack access to professional legal databases such as LexisNexis or Westlaw and instead rely on internet searches or AI tools. Where people once searched for cases on Google, AI now speeds up the process, but it may also introduce more errors.
Errors caused by AI in legal documents are not limited to ordinary individuals. A database maintained by the French business school HEC Paris tracks more than 900 court cases worldwide involving AI errors, roughly 40% of which involve lawyers or other legal professionals.
Of the 54 known cases in New York City, 24 involved lawyers submitting documents containing AI errors. Some lawyers have been fined as much as $10,000 or referred to the bar association for disciplinary investigation.
Legal experts broadly agree that the use of AI in the judicial field will continue to grow. The key question going forward is how to balance broader access to justice with the accuracy of legal documents. AI merely amplifies a longstanding problem: legal services are expensive, and the real issue is that many people cannot afford a lawyer.
As AI tools make their way into the judicial system, courts, lawyers, and ordinary litigants alike will face new technological and institutional challenges.
