
The Verge


[Image: A captcha-like image that says "In God we trust" overlaid with the scales of justice.]

Every few weeks, it seems like there's a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, "bogus AI-generated research." The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help them with legal research (or worse, writing), the LLM hallucinates cases that don't exist, and the lawyer is none the wiser until the judge or opposing counsel points out their mistake. In some cases, including an aviation lawsuit from 2023, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven't they stopped?

The answer mostly comes down to time crunches and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw have AI integrations now. For lawyers juggling big caseloads, AI can seem like an incredibly efficient assistant. Most lawyers aren't necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don't understand exactly what LLMs are or how they work. One attorney who was sa …

Read the full story at The Verge.


From The Verge via this RSS feed
