ABA Model Rules and AI: What Every Attorney Needs to Know
The ABA Model Rules of Professional Conduct weren't written with AI in mind. But they apply to AI tools as clearly as they apply to any other tool an attorney uses in practice. Understanding how these rules intersect with legal AI isn't optional — it's a professional obligation.
Rule 1.1: Competence
Rule 1.1 requires attorneys to provide competent representation, which includes "keeping abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."
In 2026, this means attorneys have an obligation to understand the AI tools they use — including their limitations. Submitting AI-generated work product without understanding how the tool generates citations, where it sources its information, and how it fails arguably violates the duty of competence.
Practical implications: if you use AI for legal research, you need to understand how the AI validates citations. If it can hallucinate — and most general-purpose AI models can — your duty of competence requires you to verify every citation before relying on it.
Rule 1.6: Confidentiality of Information
Rule 1.6 requires attorneys to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client."
This rule directly governs how you use cloud-based AI tools. When you upload a client's privileged documents to a cloud AI platform, you are transmitting confidential information to a third party's infrastructure. Whether this constitutes "reasonable efforts" to prevent disclosure depends on the platform's security architecture, terms of service, and the sensitivity of the documents.
For most routine legal work, a well-secured cloud platform with encryption and SOC 2 certification likely satisfies Rule 1.6. For highly sensitive matters — merger negotiations, trade secret litigation, privileged communications in high-stakes disputes — the calculus changes. The question becomes: is cloud transmission a reasonable risk when on-premise alternatives exist?
Rule 5.3: Responsibilities Regarding Nonlawyer Assistance
Rule 5.3 holds attorneys responsible for ensuring that nonlawyer assistants' conduct is compatible with the attorney's professional obligations. Courts and bar associations are increasingly treating AI tools as analogous to nonlawyer assistants.
This means supervising AI output with the same rigor you'd apply to a paralegal's work product. It means verifying AI-generated citations. It means reviewing AI-drafted documents before filing. The AI doesn't have professional obligations — you do, and its output is your responsibility.
State-Level Developments
Many state bar associations have issued guidance on AI use in legal practice. While approaches vary, common themes include mandatory disclosure of AI use in some jurisdictions, requirements for human review of AI-generated work product, enhanced supervision obligations for AI tools, and specific guidance on confidentiality when using cloud AI.
Check your jurisdiction's specific rules and guidance. The landscape is evolving rapidly, and state bars are issuing new guidance regularly.
What This Means for AI Platform Selection
Your choice of AI platform is, in part, an ethical decision. Specifically, a platform with verified citation mechanisms reduces your risk under Rule 1.1. A platform with on-premise deployment provides the strongest position under Rule 1.6 for sensitive matters. A platform with transparent methodology helps you fulfill your supervisory obligations under Rule 5.3.
When evaluating legal AI, ask: "Does this platform help me meet my professional obligations, or does it create new risks I'll need to manage?"
Frequently Asked Questions
Do I have to disclose AI use to courts? It depends on your jurisdiction. Some courts now require disclosure. Others don't. Check local rules.
Can I be sanctioned for using AI? You can be sanctioned for submitting inaccurate work product, regardless of how it was produced. AI doesn't create special liability — it creates the same responsibility you have for all work product.
Does using on-premise AI automatically satisfy Rule 1.6? On-premise deployment provides the strongest data confidentiality posture, but it's not a blanket compliance guarantee. Your overall security practices, access controls, and policies also matter.
Which Model Rules are most relevant to AI use? Rule 1.1 (Competence), Rule 1.6 (Confidentiality), and Rule 5.3 (Supervisory Responsibilities) are the most directly relevant to attorney use of AI tools.
Is cloud-based AI a confidentiality risk? It depends on the deployment model. Cloud-based AI that processes client data on third-party servers raises confidentiality concerns. On-premise AI like Scrivly Local keeps all data within the firm.
Do I need to verify AI-generated citations? Yes. Under Rule 1.1 and Rule 5.3, attorneys must supervise AI output and verify accuracy. Citation traceability is essential for meeting this obligation.