Law schools and law firms have spent considerable time over the last three years debating the risks, ethical implications, and future of artificial intelligence. We have urged lawyers to become technologically competent. But there is a meaningful difference between knowing about AI and actually building with it.
This year, we decided to close that gap with future lawyers. Every first-year law student at Case Western Reserve University School of Law was required to design and prototype a legal technology solution. The project became known as the 1L Vibe Coding Competition. It wasn’t optional or theoretical. It required students to identify real legal workflow problems and create structured, AI-assisted tools to address them.
That experience clarified something important for us. Lawyers don’t need to become software engineers. But they do need to understand how AI systems behave when tasked with solving legal problems. That is what we mean by vibe coding for lawyers.
The phrase may sound informal, but the concept is serious. In the technology community, vibe coding refers to rapid, iterative development using AI tools. You describe what you want built using natural language, refine it through structured prompts, test outputs, and improve the system incrementally. For lawyers, this translates into practical steps. Identify a recurring friction point in practice. Articulate it clearly. Use AI tools to prototype a solution. Then test, verify, and constrain it responsibly.
Finding the Limits
What surprised many of our students wasn’t just how quickly applications could be built, but also how quickly limitations surfaced. Vibe-coded AI systems can generate structured workflows, organize information, and draft prototypes with remarkable speed. They can help build a local court rules tracker or a citation validation system. They can assist with timekeeping frameworks or intake tools.
But they also hallucinate, presenting fabricated information with confidence. And they can't securely handle sensitive client information without proper security engineering and testing. They infer where they shouldn't. That tension is precisely why building with AI is educational.
The most effective student projects weren’t grandiose attempts to automate legal reasoning. They were disciplined. One student built a platform to track judges’ standing orders and link them directly to official court websites. Another student developed a system to monitor local rules on AI usage in Ohio courts. These tools were grounded in authoritative sources and designed to support, not replace, professional judgment. The exercise required students to ask not only what AI can do, but what it should do within the boundaries of legal ethics.
When lawyers think about AI exclusively as a drafting shortcut, they miss the structural opportunity. AI is more powerful as infrastructure than as replacement. It can reduce administrative drag. It can surface information faster. It can standardize processes that are otherwise inconsistent.
But it can’t relieve a lawyer of responsibility. Every output must be verified. Every source must be checked. Every use must align with duties of competence and confidentiality.
Start With Modesty
For practitioners curious about experimenting with vibe coding, the starting point is modesty. Don’t set out to build a comprehensive litigation assistant. Begin with a narrow task that consumes time and attention. Track rule changes. Validate citations. Generate structured checklists. Work with hypothetical or public data. Refine your prompts. Observe where the model fails. Build guardrails into the tool itself. Make its limitations explicit.
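To make the approach concrete, here is a minimal, hypothetical sketch of the kind of narrow tool described above: a citation format checker that validates only the surface format of a few common reporter citations and builds its guardrail, an explicit statement of what it cannot verify, into every output. All names and patterns here are illustrative assumptions, not a production tool or an authoritative statement of citation rules.

```python
import re

# Illustrative pattern for a few common reporter citation formats,
# e.g. "410 U.S. 113 (1973)". This checks surface format ONLY --
# it does NOT confirm the case exists or that the cite is good law.
CITE_PATTERN = re.compile(
    r"^\d+\s+"                                   # volume number
    r"(U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\. Supp\.)"  # a handful of reporters
    r"\s+\d+"                                    # first page
    r"(, \d+)?"                                  # optional pinpoint page
    r" \(\d{4}\)$"                               # year in parentheses
)

def check_citation_format(cite: str) -> dict:
    """Return a structured result with an explicit limitation notice."""
    ok = bool(CITE_PATTERN.match(cite.strip()))
    return {
        "citation": cite,
        "format_ok": ok,
        # Guardrail: the tool states its own limits in every result.
        "caveat": "Format check only; verify existence and validity "
                  "against an authoritative source.",
    }

if __name__ == "__main__":
    for c in ["410 U.S. 113 (1973)", "Roe v. Wade, totally real cite"]:
        result = check_citation_format(c)
        status = "format OK" if result["format_ok"] else "flag for review"
        print(f"{result['citation']} -> {status}")
```

The design choice worth noting is the caveat field: the limitation isn't left to documentation, it travels with every output, which is what making a tool's limitations explicit looks like in practice.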
This process forces a lawyer to confront the mechanics of AI systems. You begin to see patterns in how models extrapolate beyond their sources and where their limitations lie. You notice where they rely on probabilistic language rather than verified authority. You learn to interrogate outputs rather than accept them. The profession requires that habit of skepticism.
There are real pitfalls. Confidential client information should never be used in unsecured environments. Lawyers must understand the data retention and privacy policies of any platform they touch. Overconfidence is another risk. AI-generated text often appears polished and authoritative, which can mask underlying inaccuracies. Courts have already sanctioned attorneys for submitting fabricated citations produced by AI tools. The responsibility always remains with the lawyer.
Exercise Judgment
The point of vibe coding is professional preparedness. Clients increasingly expect lawyers to use modern tools efficiently and responsibly. Courts are confronting AI-generated filings. Law firms are integrating AI into research and compliance workflows. Passive AI literacy is insufficient in today’s law school environment.
When law students build even a simple AI-assisted tool, they gain something more valuable than technical fluency. They gain judgment. They see where the system succeeds and where it breaks. They learn that verification is indispensable. They recognize that ethical constraints are design features, not afterthoughts.
The legal profession is often cautious, but caution shouldn’t become disengagement. Lawyers who never experiment with AI risk misunderstanding its capabilities and limitations. Those who engage with it carefully within defined boundaries are better positioned to supervise it, regulate it, and use it responsibly.
Vibe coding is more about innovation, problem-solving, and accountability than about coding itself. It asks lawyers to move from passive users of opaque systems to informed evaluators of how those systems function. It reinforces that technological competence isn't merely knowing what a tool can produce, but understanding how to constrain, verify, and supervise it.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Matthew A. Salerno is an associate professor and co-director of the legal writing program at Case Western Reserve University School of Law.
Oliver Roberts is an adjunct professor of law at Case Western Reserve University School of Law, managing attorney at The Roberts Law Firm, and founder of Wickard.ai.