Generative AI Best Practices (Draft)
These guidelines apply to William & Mary community members (including faculty, staff, and students) who use GenAI applications or develop GenAI models, whether internal models, third-party models, or publicly available applications such as ChatGPT, for administrative work that supports W&M operational needs, infrastructure, and strategic objectives and indirectly facilitates academic success. This includes the following W&M Administrative Categories:
- Institutional Operations
- Student Services and Support
- Compliance and Reporting
- Administrative Policy and Planning
- Advancement and Alumni Relations
- Enrollment Management
- Marketing and Communications
- Legal and Risk Management
- Sustainability Initiatives and Campus Planning
- Diversity, Equity, and Inclusion (DEI) Initiatives
- Community and Government Relations
- Institutional Research and Data Analytics
- Technology Integration and Innovation
Key Terms and Definitions
Generative AI (GenAI) is an artificial intelligence technology that synthesizes new versions of text, audio, or visual imagery from large bodies of data in response to user prompts. GenAI models can be used in stand-alone applications, like ChatGPT or Bard, or incorporated into other applications, such as internet search engines or word processing applications. If you have any questions about what constitutes GenAI, please contact support@wm.edu.
Privacy, Security, and Confidentiality: Information is shared with a GenAI tool through user prompts, that is, a series of instructions or questions together with any data or text supplied to the tool. Generally, providing access to information constitutes sharing data with the tool. Sharing data can make confidential or sensitive information public, because the tool may train its model on the data it receives. In some cases, even data that has been anonymized could be linked back to personal information and become exposed. Any customer or employee personal information, university proprietary information or intellectual property, or otherwise confidential information entered in a prompt may appear in other users' output. Therefore, users of GenAI shall not enter into a GenAI tool any information that they do not want made public or that is otherwise restricted by law or policy.
Verification of Generative AI Output: Outputs created by GenAI tools may include fictitious answers, sometimes referred to as hallucinations. Furthermore, many open-source GenAI models are trained on large, publicly available datasets (e.g., data extracted from public web pages), so their outputs may contain copyrighted information or others' intellectual property. While ownership in many of these cases is unclear, users should err on the side of caution and not use output containing material they suspect is under copyright protection in any materials, whether internal or external facing. If output is not common knowledge, ensure that the appropriate source is properly cited.
Users of GenAI tools must also be aware that these tools incorporate any biases present in the datasets used to train them. This modeling bias may not always align with William & Mary's core value of Belonging and our commitment to diversity, equity, and inclusion. As a result, model output may contain systematic errors or favor certain groups, leading to unfair or discriminatory outcomes. Users of GenAI must adhere to existing review processes wherever GenAI is used to make decisions or analyze information that may be subject to bias.
Using output from GenAI tools without reviewing it for accuracy places the university at risk and may harm William & Mary’s reputation with campus stakeholders.
Transparency: Consistent with our code of conduct and other policies, we aim to provide our employees, third parties, and customers with transparency regarding how we use GenAI to support our work. All content generated (created or synthesized) using GenAI, including text, audio, or visual imagery, shall be clearly identified, marked, and labeled as such in any outward-facing content. Marking and labeling all uses of GenAI is a best practice; for outward-facing content, it is required.
Third-party risk: Data sent by William & Mary to third parties could be used by those parties in their own GenAI tools. Such uses include, but are not limited to, training new GenAI models, providing updated information for existing GenAI models, and improving the user experience. Sensitive information entered into unapproved GenAI applications may appear as output for individuals outside the university. Using GenAI in a manner inconsistent with this policy may violate William & Mary's contractual obligations to vendors or, in some cases, applicable law.
Sensitive data: Sensitive and personally identifiable information includes any data that can be used to distinguish or trace an individual's identity, alone or combined with other information. Examples include a name, home address, email address, social security number, driver's license number, bank account number, passport number, date of birth, biometrics such as fingerprints, or information that is linked or linkable to an individual such as medical, educational, financial, and employment information.
W&M GenAI Do’s and Don’ts
For any use of GenAI applications, employees should adhere to the following best practices:
To maintain the security of our data and IT systems and to avoid potential data leaks or security incidents:
- Do not use university credentials (username and password combination) to log in to publicly available GenAI applications unless they are configured with Single Sign-On (SSO).
- Do not install non-approved Application Programming Interfaces (APIs), plug-ins, connectors, or software related to GenAI systems.
- Do not implement or use code generated by GenAI on university systems in any way.
- Do not input university intellectual property into non-approved generative AI applications.
- Do not enter any protected or sensitive information (as defined in the university Data Classification Policy) into non-approved generative AI applications.
- Do not enter any personally identifiable information (PII) of employees, students, affiliates, or other third parties into any non-approved GenAI application. Treat the application as you would an employee of another university with whom we have no formal relationship.
- Do contact your manager or email support@wm.edu if you are unsure whether the information you are planning to input falls into any of the above categories.
- Do review the university's data and information security policies: Security Policies, Standards and Procedures | Services | Information Technology | William & Mary (wm.edu).
- Do clearly attribute any output used for work purposes to the GenAI application that created it through a footnote or other means visible to the reader.
- Do maintain an updated record of GenAI use for work purposes and be able to share those records with your manager or other authorized university personnel upon request.
- Do review the output of GenAI applications to make sure it meets the university's standards for equity, ethics, and appropriateness.
- Do not use any output that discriminates against individuals based on race, color, religion, sex, national origin, age, disability, marital status, political affiliation, or sexual orientation.
- Do not use GenAI applications to create text, audio, or visual content for purposes of committing fraud or misrepresenting an individual’s identity.
All employees are expected to report instances of non-compliance with this policy to the compliance team at abuse@wm.edu. Employees are encouraged to speak up when they witness misconduct. Employees who report misconduct or concerns in good faith will not be retaliated against.