Texas is charting a pragmatic path for legal education in the age of artificial intelligence: Innovate carefully, teach ethics first, and remind every future lawyer that efficiency without integrity is never competence.
Generative AI has entered law practice faster than any technology in recent memory, and law schools are already testing what it means to use these tools responsibly. At SMU Dedman School of Law, the challenge isn’t to sprint ahead; it’s to teach students that technology should inform, not replace, professional judgment.
Like most law schools, we’re learning how to balance experimentation with rigor. The goal isn’t to turn students into AI operators but into evaluators—lawyers who can spot errors, bias, and overreliance before they become malpractice.
Across the university, professors now have licensed access to ChatGPT Enterprise, ensuring secure, institution-wide experimentation. SMU Law recently joined the Harvey AI Project, a collaboration with other leading law schools and Harvey’s AI legal research platform, to study how generative tools can responsibly enhance legal education and practice.
That partnership allows faculty and students to explore AI in realistic professional contexts, supported by privacy safeguards and technical training. SMU also requires faculty to adopt one of three AI-use options on every syllabus:
- Ban generative AI entirely
- Allow structured use, with attribution and confidentiality requirements
- Craft a custom policy for the class
That transparency has forced meaningful discussion in every classroom about what responsible use looks like—a lesson future lawyers will need in practice.
At SMU, several courses explore AI across multiple dimensions of legal work and theory. Artificial Intelligence and the Law examines how regulation and governance frameworks shape the development of AI systems.
Other offerings focus on practical applications—how AI can support legal research and drafting, assist in pre-litigation assessment, and reshape discovery and due diligence.
The Legal Analysis, Writing, and Research program deliberately sequences instruction: traditional research first, followed by guided AI integration during a spring prepare-to-practice event featuring demonstrations from law firms using AI in document review and transactional work. The structure reinforces that technology should enhance, not replace, analytical skill.
The legal profession’s own regulators are grappling with these same issues. In February 2025, the State Bar of Texas issued Ethics Opinion 705, comprehensive state-level guidance on lawyers’ use of generative AI. The opinion outlines how existing disciplinary rules apply to this fast-evolving technology:
- Competence (Rule 1.01): Lawyers must understand how AI works before using it and remain “technologically competent.” They don’t have to use AI, but they shouldn’t “unnecessarily retreat” from tools that can save clients time and money.
- Confidentiality (Rule 1.05): Attorneys must avoid disclosing client information to public or “self-learning” AI systems without safeguards or client consent.
- Supervision and Candor (Rules 5.03, 3.03): Lawyers remain fully responsible for verifying AI-generated work and can’t blindly rely on or submit unverified outputs.
- Fees: Lawyers may charge for time spent refining and verifying AI results but can’t bill for time “saved” through its use—the efficiency belongs to the client.
The opinion’s message is clear: Technological convenience doesn’t dilute ethical responsibility.
That guidance underpins how I teach Professional Responsibility. Misusing AI—relying on hallucinated cases or uploading confidential data—violates duties of diligence and candor.
Yet refusing to use AI where it could ethically improve accuracy and reduce cost may also raise competence and fairness concerns. Competence now includes understanding when technology serves the client’s interests and when it doesn’t.
Texas’ measured approach—pairing innovation with ethical guardrails—mirrors how law schools are adapting. By anchoring AI instruction in ethics and governance, institutions across the state are positioning themselves as leaders in responsible adoption rather than blind acceleration. SMU’s policy model of transparency, attribution, and accountability reflects the same client-facing expectations lawyers will face in practice: clear disclosures, informed consent, and responsibility for every word they sign.
We want to teach students to think better, not prompt better. Technology is only as reliable as the lawyer who reviews it. The future of law will depend less on automation and more on discernment—the ability to verify, contextualize, and act ethically amid constant change.
That’s why SMU begins with fundamentals: understanding precedent, authority, and reasoning before layering AI on top. The discipline of verification—checking sources, confirming citations, maintaining skepticism—remains the lawyer’s first defense against error.
AI will evolve faster than any syllabus, but the principles of professional judgment endure. As Ethics Opinion 705 reminds us, competence, confidentiality, and integrity can’t be delegated to an algorithm. The best way to prepare future lawyers is to teach them to think critically about technology before they use it and to remember that the law’s most powerful tool is still human judgment.
Columnist Carliss Chatman is a professor at SMU Dedman School of Law. She writes on corporate governance, contract law, race, and economic justice for Bloomberg Law’s Good Counsel column.