Artificial intelligence is reshaping nearly every corner of the professional world, and law is no exception. From contract drafting to legal research, AI tools like ChatGPT, Harvey, and CoCounsel are being used by lawyers and non-lawyers alike to produce legal documents at a fraction of the traditional time and cost.
But with that power comes a pressing question: Is it ethical?
The answer, as it turns out, is nuanced, and the legal profession’s governing bodies have started to weigh in with concrete guidance.

The Short Answer: Yes, But With Caveats
Using AI to draft legal documents is not inherently unethical. Numerous state bar associations, including those in California, Florida, and New York, have issued formal opinions confirming that attorneys may use AI tools in their practice, provided they do so responsibly.
The key phrase is “responsibly.” What that means in practice depends on who’s using the AI, for what purpose, and how much human oversight is involved.
What Bar Associations Are Actually Saying
While the ABA has not issued a sweeping national rule on AI, it released Formal Opinion 512 in 2024, which provides detailed guidance on generative AI use. Several state bars have followed with their own opinions. The themes across these documents are remarkably consistent:
- Competence (Rule 1.1): Attorneys must understand the AI tools they use well enough to evaluate the output. Submitting AI-generated documents without review violates the duty of competence.
- Supervision (Rule 5.3): Lawyers are responsible for work produced using AI, just as they are responsible for work by paralegals or junior associates. Delegation does not equal abdication.
- Confidentiality (Rule 1.6): Client data entered into an AI platform may be stored, analyzed, or used to train future models. Attorneys must vet their AI tools’ data practices before entering sensitive client information.
- Candor to the Tribunal (Rule 3.3): AI hallucinations (fabricated case citations) are a real and documented risk. Attorneys have been sanctioned and fined for submitting briefs containing fake citations generated by AI. Thorough verification is mandatory.
- Reasonable Fees (Rule 1.5): If AI dramatically reduces the time spent on a task, billing the client for the hours the work would have taken manually may be improper. Transparency about AI use and its effect on billing is increasingly expected.
What About Non-Lawyers Using AI for Legal Documents?
This is where things get more complicated. The ethical rules above apply specifically to licensed attorneys. But what about the growing number of individuals who use AI to draft their own wills, leases, employment contracts, or small claims filings?
The ethical concerns shift, but don’t disappear:
- AI models are not licensed to practice law and cannot provide legal advice. Drafting your own documents with AI is legal (representing yourself is not the unauthorized practice of law), but it carries real risk, especially if the document is used in a jurisdiction with specific requirements the AI doesn't account for.
- AI-generated documents can appear professional while containing significant errors. Legalese alone does not make a document legally sound.
- For low-stakes documents (simple demand letters, informal agreements), the risk may be acceptable. For anything involving property, custody, business formation, or significant sums, professional review is strongly advisable.
The Hallucination Problem: A Real-World Warning
In 2023, a New York attorney was sanctioned after submitting a brief containing six fabricated case citations, all generated by ChatGPT, none verified. The attorney claimed ignorance of the AI’s tendency to hallucinate sources.
The court was not sympathetic. The judge’s ruling was clear: using AI does not diminish a lawyer’s personal responsibility to verify every citation, fact, and legal claim in a submitted document.
This case has become a cautionary tale across the profession and a key reason why bar associations are emphasizing that AI is a tool, not a substitute for judgment.
Best Practices: Using AI Ethically in Legal Drafting
Whether you’re an attorney or a self-represented individual, these principles apply:
- Always review and verify. Treat AI output as a first draft, not a final product. Read every line and check every citation.
- Understand your tool’s data policy. Know whether the AI platform retains, shares, or uses your input to train its models.
- Disclose when required. Some courts now require disclosure of AI use in submitted filings. Check the local rules in your jurisdiction.
- Match the tool to the stakes. AI is well-suited for research, boilerplate drafts, and issue-spotting. High-stakes or highly specific legal work still benefits enormously from licensed counsel.
- Stay current. This area of guidance is evolving rapidly. Bar opinions issued in 2024 may already be supplemented or superseded by 2025 or 2026 updates.
The Bottom Line
AI can be a genuinely valuable tool in the legal drafting process, saving time, reducing costs, and making legal services more accessible. But it doesn’t replace professional judgment, accountability, or the duty of thoroughness that legal documents demand.
The bar associations aren’t saying “don’t use AI.” They’re saying: know what it can’t do, verify everything it produces, protect your clients’ confidential information, and take full responsibility for whatever goes out under your name.
That’s not a prohibition. It’s a standard, and one the legal profession has always held itself to, regardless of what tools are in use.
Note: This blog post is for informational purposes only and does not constitute legal advice. Consult a licensed attorney in your jurisdiction for guidance specific to your situation.