You'll Probably Need a ChatGPT Company Policy

ChatGPT is an AI-powered language model produced by OpenAI that launched in November 2022 and has already gained millions of users. The original objective was for ChatGPT to support users in everyday activities like producing workout plans, recipes, and poems, among other personal tasks. However, its use has expanded to include work-related activities. In this article, we explore how employees are using ChatGPT in their work and the potential risks this practice carries.

Based on a blog post by Debevoise & Plimpton.

ChatGPT's Usage in the Workplace

Numerous articles have been published about how ChatGPT will take over specific jobs. As it stands, however, ChatGPT appears to be a tool that enhances employee productivity rather than replacing employees. The following are some examples:

Fact-Checking: Employees use ChatGPT in the same manner as they would employ Google or Wikipedia to verify the facts in documents they are producing or reviewing.

First Drafts: ChatGPT can develop initial drafts of speeches, memos, cover letters, and routine emails. When asked to write this blog post, ChatGPT offered several valuable suggestions, including "Employees using ChatGPT must undergo training to understand the tool's capabilities and limitations, as well as the best practices for using it in the workplace."

Editing Documents: ChatGPT is adept at modifying text because it is a language model trained on millions of documents. Workers are taking poorly worded paragraphs to ChatGPT, which then fixes grammatical errors, adds clarity, and generally improves readability.

Generating Ideas: ChatGPT is surprisingly good at creating lists. For example, for our upcoming webcast about the role of ChatGPT in the legal industry, ChatGPT created queries on maintaining privilege, checking for accuracy, and disclosing the role of ChatGPT to clients and courts.

Coding: ChatGPT's two most common applications in the workplace are creating new code and verifying existing code, with many programmers claiming that ChatGPT has significantly increased their efficiency and productivity.

ChatGPT's Risks at Work

Quality Control Risks: Despite its impressive performance, ChatGPT can produce inaccurate results. When composing parts of a legal brief, it sometimes cites irrelevant or non-existent cases. Because it is a language model, it may struggle with computational tasks and may give incorrect answers to basic algebraic problems. These limitations are well known to OpenAI, and ChatGPT itself frequently warns that it may generate incorrect information. It also has gaps in its knowledge of world events that occurred after 2021. These risks are reduced when the person reviewing ChatGPT's output can quickly recognize and correct such errors. However, if the reviewer cannot quickly identify what is wrong with or missing from ChatGPT's response, or if no one reviews it, the quality control risks rise. The significance of these risks varies with the usage scenario: it is lower when summarizing news stories on a specific subject for internal awareness than when creating essential code for the company's central information systems.

Contractual Risks: There are two main types of contractual risks associated with using ChatGPT for work. First, there may be limitations on the organization's ability to disclose confidential information about its customers or clients to third parties, including ChatGPT. Second, there may be questions about who owns the intellectual property rights to work produced using ChatGPT. Both concerns can be addressed by revising the company's contracts.

Privacy Risks: Akin to some contractual risks, sharing personal data about customers, clients, or workers with ChatGPT can create privacy risks. According to the ChatGPT FAQ, OpenAI might use chat conversations for training purposes and to improve the product. Depending on the personal information being shared, companies might be obligated to update their privacy policies, provide customers with the necessary notices, and obtain consent or offer opt-out options for this processing. These obligations are changing as US state and federal privacy laws evolve. Use of ChatGPT involving personal data also raises the question of how companies will honor deletion rights or requests to remove data from ChatGPT-generated workstreams and from the underlying models themselves.

Consumer Protection Risks: If consumers are unaware they are interacting with ChatGPT (instead of a human customer service representative), or if they receive documents without clear notice that the documents were created using ChatGPT, there is a risk of claims of unfair or deceptive practices under certain state and federal laws. Depending on the situation, customers might be upset if they paid for content only to find out it was generated by ChatGPT (and not disclosed as such).

Intellectual Property Risks: The implementation of ChatGPT in the workplace raises a number of IP issues. To the extent workers use ChatGPT to generate software code or other protected content, that content might not be protected by copyright in several jurisdictions because it was not created by a human being, which is the current requirement of the US Copyright Office. There is also a risk that ChatGPT and any content it generates may be deemed derivative of the copyrighted materials used to train the model. If that position prevails, code, marketing copy, and other generated materials might be found to be infringing, especially where they closely resemble copyrighted training data. For confidential materials (like financial data, trade secrets, etc.) entered into ChatGPT for analysis, there is a risk that other ChatGPT account holders can access that same data, which can compromise its confidentiality and potentially expose the company to liability. Lastly, if software submitted to ChatGPT includes open source, it has the potential to trigger open-source license obligations.

Vendor Risks: Most of the risks described above also apply to company data provided to or received from vendors. For example, companies might need to consider obtaining consent prior to providing ChatGPT-generated contracts to vendors and, for that matter, specifying that confidential company data cannot be entered into ChatGPT.

Reduction of ChatGPT Risks

To mitigate the legal, commercial, and reputational risks associated with ChatGPT, some companies have implemented various measures. One such measure is training employees on the appropriate usage of ChatGPT, emphasizing that it is not infallible and that results should be verified through conventional means. Companies have also developed policies that categorize ChatGPT usage into three groups: prohibited, permitted with authorization, and generally permitted without prior approval. For example, entering confidential information or sensitive company code into ChatGPT might be prohibited, while generating code might require approval from a designated authority.
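A three-tier policy like the one described above can be expressed as a simple lookup. The following is a minimal sketch only; the use-case names and tier assignments are illustrative assumptions, not drawn from any specific company's policy.

```python
# Illustrative mapping of work use cases to the three policy tiers
# described above. Names and assignments are hypothetical examples.
POLICY_TIERS = {
    "summarize_public_news": "permitted",
    "draft_routine_email": "permitted",
    "generate_production_code": "requires_approval",
    "edit_marketing_copy": "requires_approval",
    "analyze_confidential_data": "prohibited",
    "review_sensitive_source_code": "prohibited",
}

def check_use(use_case: str) -> str:
    """Return the policy tier for a use case.

    Unknown or novel uses default to "requires_approval" so that
    anything not yet risk-rated is escalated rather than waved through.
    """
    return POLICY_TIERS.get(use_case, "requires_approval")
```

Defaulting unknown uses to the approval tier reflects the conservative posture the policies above aim for: new uses get reviewed before they are generally permitted.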

Companies are also taking further steps to reduce the risks associated with ChatGPT, such as:

  • Risk Rating: Developing criteria to assess the level of risk associated with a given use of ChatGPT.
  • Inventory: Requiring employees to report all uses of ChatGPT for work and evaluate them based on established criteria.
  • Internal Labelling: Requiring users to mark content generated by ChatGPT with a distinct label to indicate its origin and ensure it receives appropriate review.
  • External Transparency: Clearly indicating when content has been generated by ChatGPT, particularly when it is being shared with clients or the public.
  • Record Keeping: Maintaining records of when high-risk content was generated and the prompt used to generate it.
  • Training: Providing periodic training to employees on both acceptable and prohibited uses of ChatGPT, based on the company's experience and other best practices.
  • Monitoring: Using tools to detect when ChatGPT or other AI models are being used in violation of company policy, particularly in higher-risk use cases.
  • Implementing access controls: Limiting access to ChatGPT and its outputs to authorized personnel can help prevent unauthorized use and potential misuse of the technology.
  • Incorporating ethics into the design process: Embedding ethical considerations into the development of ChatGPT, such as ensuring that the model is not biased towards certain groups, can help reduce the risk of unintended consequences.
  • Regularly updating and testing the model: Regularly updating the model can help ensure that it is performing optimally and reducing the risk of errors or bias. Testing the model under different scenarios and conditions can also help identify potential risks and areas for improvement.
  • Conducting regular risk assessments: Regularly evaluating the risks associated with the use of ChatGPT can help identify new risks or changes in the level of existing risks. This can inform updates to policies, procedures, and training.
  • Creating a clear incident response plan: Developing a clear plan for responding to incidents related to ChatGPT, such as unauthorized access or disclosure of sensitive information, can help minimize the impact of these incidents and reduce potential legal or reputational risks.
  • Establishing an external review board: Establishing a board of external experts to review and assess the risks associated with the use of ChatGPT can provide additional perspectives and help identify potential risks that may not be apparent to internal stakeholders.
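Several of the measures above (internal labelling, inventory, and record keeping) amount to attaching provenance metadata to AI-generated work product. The following is a minimal sketch of what that could look like, assuming a "prohibited / requires_approval / permitted" tiering; all class and field names are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical tier names mirroring the three-group policy described above.
RISK_TIERS = ("prohibited", "requires_approval", "permitted")

@dataclass
class GeneratedContentRecord:
    """One inventory entry for a piece of AI-generated work product."""
    prompt: str       # record keeping: the prompt used to generate the content
    content: str
    risk_tier: str
    author: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")
        if self.risk_tier == "prohibited":
            raise ValueError("policy forbids this use of the tool")
        # Hash the content so later reviews can verify it was not altered.
        self.content_hash = hashlib.sha256(self.content.encode()).hexdigest()

def label_content(record: GeneratedContentRecord) -> str:
    """Prepend the internal label required before the content circulates."""
    return (
        f"[AI-GENERATED | tier: {record.risk_tier} "
        f"| hash: {record.content_hash[:12]}]\n{record.content}"
    )
```

The point of the sketch is the workflow, not the specific fields: recording the prompt and a content hash supports the record-keeping and monitoring measures, while the label makes AI origin visible to internal reviewers and, where required, to clients.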

While ChatGPT has the potential to significantly enhance employee productivity, it also carries various risks. By adopting measures such as these, companies can help mitigate the risks associated with the use of ChatGPT and ensure that it is used in a responsible and ethical manner.
