It is hard to go a day or two without getting an email about some new virtual assistant application (e.g., ChatGPT or Bing Chat). While these tools can greatly increase attorney efficiency with tasks like preparing sample demand letters or draft agreements, it is important to understand how these programs learn and the ethical concerns raised by sharing potentially privileged and/or confidential information with them.
First, some background. Virtual assistants are built on large language models trained on vast amounts of text drawn from the internet. Many providers also retain the prompts users submit and may use them to train future versions of the model. This means information shared with a virtual assistant can become part of the data accessed and analyzed when the service responds to future users' requests.
As a basic refresher, California Rules of Professional Conduct, Rule 1.6 provides: “A lawyer shall not reveal information protected from disclosure by Business and Professions Code section 6068, subdivision (e)(1) unless the client gives informed consent, or the disclosure is permitted by paragraph (b) of this rule.”
Under the State Bar Act, an attorney has a duty “[t]o maintain inviolate the confidence, and at every peril to himself or herself to preserve the secrets, of his or her client.” (Cal. Bus. & Prof. Code, § 6068, subd. (e)(1).)
The State Bar of California Standing Committee on Professional Responsibility and Conduct (“COPRAC”) has attempted to explain how a lawyer’s duty to maintain client confidences is impacted by modern technology. (See Formal Opinion No. 2010-179 [addressing whether an attorney violates “the duties of confidentiality and competence he or she owes to a client by using technology to transmit or store confidential client information when the technology may be susceptible to unauthorized access by third parties”].)
COPRAC has outlined “appropriate steps” lawyers should evaluate before using any particular technology in their law practice: “1) the level of security attendant to the use of that technology, including whether reasonable precautions may be taken when using the technology to increase the level of security; 2) the legal ramifications to a third party who intercepts, accesses or exceeds authorized use of the electronic information; 3) the degree of sensitivity of the information; 4) the possible impact on the client of an inadvertent disclosure of privileged or confidential information or work product; 5) the urgency of the situation; and 6) the client’s instructions and circumstances, such as access by others to the client’s devices and communications.”
Virtual assistants fail almost every element of the precautions outlined by COPRAC. The providers of these programs disclose that once information is shared with the assistant, it becomes part of a larger network of data to be analyzed and used for purposes beyond the client’s control.
For example, imagine summarizing a meeting with a client and then submitting a draft letter to a virtual assistant to clean up the writing. All information from the client meeting is immediately saved in the virtual assistant’s database. That information is then free to be regurgitated to another user who later asks a question involving a similar set of facts. While the client’s name may not be tied to that later output, it is not hard to envision an instance where a competitor or litigation adversary could search for similar information and thereby learn secret information about your client.
In fact, this exact situation recently came up in the context of an inadvertent trade secret disclosure by Samsung employees. There, an employee used an AI tool to help fix a source code problem and in doing so disclosed Samsung’s trade secret source code, making it available to competitors. (See https://www-businesstoday-in.cdn.ampproject.org/c/s/www.businesstoday.in/amp/technology/news/story/samsung-employees-accidentally-leaked-company-secrets-via-chatgpt-heres-what-happened-376375-2023-04-06.)
If you or any member of your firm is going to use a virtual assistant, it is good practice to keep clients informed and obtain their informed written consent regarding what information can and cannot be shared with the tool. This way, the decision about how the technology is used rests with the client.
While virtual assistants like ChatGPT can be helpful in many contexts, clients and attorneys should be aware of the potential risks involved in sharing confidential information. Attorneys must understand the dangers of disclosing such information to virtual assistants so that client confidentiality is maintained and respected in all settings.
*** This article originally appeared in the Spring 2023 issue of the San Diego Defense Lawyers’ The Update, titled “Exercise Caution and Do Not Disclose Confidential Information to AI Programs.”