What ethical issues are raised under the Texas Disciplinary Rules of Professional Conduct by a lawyer’s use of generative artificial intelligence in the practice of law?
The public release of ChatGPT in late 2022 introduced many people (and many lawyers) to the concept of generative artificial intelligence. ChatGPT, like other generative AI tools, gives users the ability to rapidly generate new, seemingly human-crafted content in response to user prompts. Many generative AI tools are “large language models” or “deep-learning models” that compile vast amounts of text and analyze it using machine learning and sophisticated algorithms to “create” responses to user inquiries. Due in part to the rapid commercial success of ChatGPT, other generative AI tools have proliferated.
Some lawyers soon realized that generative AI tools, including ChatGPT, could be used effectively in the practice of law. And some companies have designed generative AI tools specifically for the practice of law, to assist in tasks like contract review and management, due diligence, document review, research, and even initial drafting of letters, contracts, and briefs. But lawyers have already seen—and demonstrated, very publicly—the dangers that lurk in the improper use of these tools. The most famous example to date is a case in which lawyers were sanctioned for submitting a brief that cited non-existent judicial opinions invented by ChatGPT. See Mata v. Avianca, No. 22-cv-1461, 2023 WL 4114965 (S.D.N.Y. June 22, 2023). Indeed, many generative AI models have a tendency to “hallucinate,” that is, to produce inaccurate or fabricated answers that sound convincing.
The Committee issues this opinion in response to a request from the State Bar of Texas’s Taskforce on Responsible AI in the Law to provide a high-level overview of ethical issues that may be implicated by the use of generative AI in the practice of law. The world of generative AI is rapidly developing and changing nearly every day. So this opinion does not purport to address every ethical issue that might arise now or in the future. Some of the issues raised here may soon be resolved or mooted by changes in the technology or industry practices. This opinion is intended only to provide a snapshot of potential ethical concerns at the moment and a restatement of certain ethical principles for lawyers to use as a guide regardless of where the technology goes.
Competence
Rule 1.01(a) of the Texas Disciplinary Rules of Professional Conduct provides, with limited exceptions, that a lawyer “shall not accept or continue employment in a legal matter which the lawyer knows or should know is beyond the lawyer's competence.” The Rules define “competence” as the “possession or the ability to timely acquire the legal knowledge, skill, and training reasonably necessary for the representation of the client.” See Preamble, Terminology. In prior Opinions, this Committee has applied Rule 1.01 to questions involving novel technologies and has concluded that this obligation extends to a lawyer’s “technological competence,” especially when it comes to preserving client confidential information. See Professional Ethics Committee Opinion 680 (September 2018) (addressing cloud-computing systems); Opinion 665 (December 2016) (addressing metadata in electronic documents). Comment 8 to Rule 1.01 confirms that lawyers “should strive to become and remain proficient and competent in the practice of law, including the benefits and risks associated with relevant technology.”
Rule 1.01 almost certainly does not require the use of generative AI for any particular purpose in the practice of law, especially at the present moment, when the technology is still developing and the cost-benefit analysis remains somewhat unclear. Still, lawyers should not “unnecessarily retreat[] from the use of new technology that may save significant time and money for clients.” Opinion 680; see also comment 8 to Rule 1.01. What’s clear even now is that if a lawyer opts to use a generative AI tool in the practice of law, the lawyer must have a reasonable and current understanding of the technology—because only then can the lawyer evaluate the associated risks of hallucinations or inaccurate answers, the limitations that may be imposed by the model’s use of incomplete or inaccurate data, and the potential for exposing client confidential information. Cf. Opinion 680 (lawyer should acquire a general understanding of how cloud computing works before using it in the practice of law); Opinion 665 (similar for metadata). Several of those issues are discussed more fully below.
Confidentiality
Some of the greatest risks posed by the unthinking use of generative AI relate to confidentiality of client information. In general, a lawyer must not knowingly reveal client confidential information to any person other than those who are permitted to receive the information under Rule 1.05. This duty extends to both privileged information and all other information relating to a client or furnished by the client and acquired by the lawyer during the course of the representation. See Rule 1.05(a). A lawyer violates Rule 1.05 if the lawyer knowingly reveals or uses either category of information in ways that exceed Rule 1.05’s scope. See also Opinion 680 (explaining these principles).
The extent to which Rule 1.05 is implicated by the use of generative AI will depend on how a given program works and how a lawyer uses it. As with other research tools, there may be ways to use certain generative AI programs for general research purposes without revealing client confidential information. But by their very nature, many generative AI tools invite a “conversation” in which the lawyer—through his or her prompts to the generative AI tool—will explain relevant facts, legal theories, and arguments. These exchanges could, if nothing else, expose the lawyer’s privileged mental impressions to the generative AI tool. One could also imagine a request for certain outputs from a generative AI tool—like a draft demand letter or a settlement agreement—that would require the lawyer to feed the generative AI program certain privileged or otherwise confidential facts related to the dispute. In any case where the lawyer intends to provide client confidential information to the program, Rule 1.05 will likely be implicated.
These concerns are especially relevant given the “self-learning” nature of many generative AI programs. A self-learning program is one that stores user inputs and incorporates them into its existing datasets so as to continue refining its responses and improving operation of the service. In some ways, generative AI programs are attractive precisely because of this ever-evolving nature. But that same feature may make them inappropriate for legal work. The use of such self-learning programs poses a risk that the confidential information a lawyer inputs into the program may be stored within the program and revealed in responses to future inquiries by third parties. That is obviously unacceptable. So, with any generative AI tool, the lawyer should be reasonably satisfied that the program will not reveal confidential information to others or permit the use of such information to the disadvantage of the client. If the lawyer is not so satisfied, the lawyer should—at a minimum—not input any confidential information into the program without client consultation and consent.
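To make that mechanism concrete, consider the following minimal sketch (in Python) of a purely hypothetical self-learning chat service. It is not modeled on any actual product; the names and behavior are invented solely to illustrate how submitted prompts can persist beyond the lawyer’s control:

    # Purely hypothetical "self-learning" chat service, invented for
    # illustration; not modeled on any actual generative AI product.
    class SelfLearningChatService:
        def __init__(self) -> None:
            # Every prompt ever submitted is retained here.
            self.training_corpus: list[str] = []

        def ask(self, prompt: str) -> str:
            # The user's prompt -- including any client confidential
            # information it contains -- is stored indefinitely.
            self.training_corpus.append(prompt)
            return f"[generated response to: {prompt[:40]}...]"

        def retrain(self) -> None:
            # Periodic retraining folds the stored prompts into the model,
            # where fragments may later surface in responses to unrelated
            # third-party users.
            print(f"retraining on {len(self.training_corpus)} stored prompts")

    service = SelfLearningChatService()
    service.ask("Draft a demand letter for Client X concerning the dispute over...")
    service.retrain()  # Client X's facts are now part of the training data.

Once a prompt enters such a corpus, the lawyer has no practical way to retrieve or delete the client confidential information it contains.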
This goes back to the duty of technological competence. Before any lawyer uses a generative AI product for client work, the lawyer must understand to a reasonable degree how the technology works and must take reasonable precautions to ensure that any client confidential information is protected. Drawing from this Committee’s Opinion 680, such reasonable precautions may include:
(1) acquiring a general understanding of how the technology works;
(2) reviewing and potentially renegotiating the “terms of service” to which the lawyer submits when using the generative AI tool;
(3) learning about the data-security protections used by the generative AI tool—because even if the tool does not intentionally share inputs with other users, stored information may remain vulnerable to hacking; and
(4) training lawyers and staff about how to appropriately use generative AI tools while protecting client confidential information.
See Opinion 680. “These precautions do not require lawyers to become experts in technology; however, they do require lawyers to become and remain vigilant about data security issues from the outset of using a particular technology in connection with client confidential information.” Id.
With all that said, there may be circumstances where it is permissible to use confidential information in conjunction with a generative AI program. Rules 1.05(c) and 1.05(d) allow a lawyer to disclose client confidential information in various circumstances, including where the use of third-party service providers is reasonably necessary to carry out the representation effectively. See Opinion 572 (June 2006) (copy service); Opinion 680 (cloud computing service). But the lawyer may do so only if he or she is reasonably confident that the confidential character of the information will be respected and protected by the service provider. See id. The same principles would apply to the use of a generative AI tool.
If a lawyer intends to use confidential information in conjunction with generative AI tools, the lawyer should consider informing clients about the associated risks and may need to secure client consent. The State Bar of California Standing Committee on Professional Responsibility and Conduct has recommended that lawyers inform their clients if generative AI tools will be used as part of their representation. See State Bar of California, Standing Committee on Professional Responsibility and Conduct, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (Nov. 16, 2023). Ethics opinions from the ABA and the Florida Bar go a step further and suggest that lawyers should obtain informed consent before using these tools. ABA Comm. on Ethics & Prof’l Responsibility, Formal Op. 512 (2024) (“Generative Artificial Intelligence Tools”); Florida Bar Ethics Opinion 24-1 (2024). This Committee, in Opinion 680 concerning the risks of cloud-computing software, stated that “[i]n some circumstances it may be appropriate to confer with a client regarding these risks as applicable to a particular matter and obtain a client’s input regarding or consent to using” such new technology. At a minimum, Texas lawyers should engage in the same thoughtful analysis with respect to generative AI tools.
Oversight/Supervision
Though it should go without saying, a lawyer should always verify the accuracy of any responses received from a generative AI tool. But this principle apparently wasn’t obvious to the ever-increasing number of lawyers who have been caught submitting made-up citations in court filings. So, the Committee will say it again: lawyers are responsible for the work product they submit regardless of who (or what) does the original research and drafting. That means lawyers cannot blindly rely upon or use answers given by generative AI tools. Lawyers who rely on generative AI for research, drafting, and communication risk many of the same perils as those who rely on inexperienced or overconfident nonlawyer assistants. Cf. Rule 5.03 (Responsibilities Regarding Nonlawyer Assistants).
A lawyer’s failure to verify generative AI outputs can implicate a host of Rules, including Rule 1.01 (Competent and Diligent Representation), Rule 3.01 (Meritorious Claims and Contentions), Rule 3.03 (Candor Toward the Tribunal), and Rule 3.04 (Fairness in Adjudicatory Proceedings), among others. The best practice here, as with many other efficiency-enhancing tools in the law, is to treat AI-generated outputs as a starting point for the lawyer’s work that must always be carefully analyzed for accuracy and quality. That said, a lawyer’s duties require more than merely detecting and eliminating false AI-generated results; the lawyer is ultimately responsible for ensuring that the content is accurate and supports the client’s interests.
A lawyer must also be aware of how various courts treat the use of generative AI. Some courts have issued standing orders or local rules prohibiting the use of generative AI to draft legal filings or at least requiring certain forms of disclosure; others have declined to issue any such rules at all. Compare N.D. Tex. LR 7.2(f) (disclosure rules for briefs prepared using generative artificial intelligence), with “Court Decision on Proposed Rule” (5th Cir. June 10, 2024) (declining to adopt special rule regarding the use of artificial intelligence in drafting briefs).
Fees
It’s not hard to imagine how the effective use of generative AI tools might affect the fees that lawyers charge—after all, one of the most promising aspects of these tools is the possibility for lawyers to provide legal services more efficiently. In a typical hourly arrangement, and depending on the terms of the engagement, a lawyer will likely be able to charge the client for the actual time the lawyer spends using a generative AI program for purposes of the representation, including time spent refining the program’s outputs and checking its work. A lawyer may not, however, charge hourly fees for the time that was “saved” by using the generative AI program. As the District of Columbia Bar Association explained:
[I]t goes without saying that a lawyer who has undertaken to bill on an hourly basis is never justified in charging a client for hours not actually expended. If a lawyer has agreed to charge the client on this basis (i.e., hourly), and it turns out that the lawyer is particularly efficient in accomplishing a given result, it nonetheless will not be permissible to charge the client for more hours than were actually expended on the matter. When that basis for billing the client has been agreed to, the economies associated with the result must inure to the benefit of the client.
D.C. Legal Ethics Opinion 388 (2024) (quoting D.C. Legal Ethics Opinion 267 (1996) and ABA Comm. on Ethics & Prof’l Responsibility, Formal Op. 93-379 (1993) (“Billing for Professional Fees, Disbursements and Other Expenses”)). See also Florida Bar Ethics Opinion 24-1 (“Though generative AI programs may make a lawyer’s work more efficient, this increase in efficiency must not result in falsely inflated claims of time.”).
If the lawyer pays per use for a particular generative AI program, the lawyer may be able to pass those expenses on to the client, to the extent allowed by law and agreed to by the client. See Opinion 594. When a lawyer incurs per-use fees associated with a generative AI program, one could imagine a client agreeing to reimburse those expenses in much the same way some clients agree to pay for the use of traditional online research tools like Westlaw and LexisNexis. The lawyer will generally not be permitted to recover more than the amount of expenses actually incurred and paid to the generative AI provider. Cf. id.
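A simple hypothetical, using invented figures, illustrates these principles. Suppose a lawyer who bills at $400 per hour spends 1.5 hours using a generative AI tool to prepare and verify a draft that would otherwise have taken 6 hours, and pays the provider a $25 per-use charge:

    Permissible fee:        1.5 hours x $400/hour = $600
    Impermissible fee:      6 hours x $400/hour = $2,400 (billing for time "saved")
    Permissible expense:    the $25 charge passed through at cost, if the
                            client has agreed to that arrangement
    Impermissible expense:  any markup above the $25 actually paid

In short, the economies achieved through the tool must inure to the benefit of the client.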
While there may be many ways that generative AI can assist in the practice of law and benefit lawyers and clients alike, Texas lawyers must always be aware of the ethical issues that may arise in the use of generative AI. Among many other issues, lawyers should acquire basic technological competence before using any generative AI tool, should always ensure that the tool does not imperil confidential client information, should always verify the accuracy of any responses received from a generative AI tool, and should not charge clients for the time “saved” by using a generative AI program.
Tex. Comm. on Professional Ethics, Op. 705 (2025)