Guidelines for Responsible Use of Generative Artificial Intelligence at Saint Peter’s University

Summary

This document provides general guidance for the use of generative artificial intelligence at Saint Peter’s University.

Body

Introduction

This document provides general guidance for the use of generative artificial intelligence at Saint Peter’s University. We encourage individual departments and offices to consider building on this foundation and adopting additional guidelines, according to their objectives. In this document, we use the term generative artificial intelligence (GenAI) to refer to any deep-learning model that can produce images, text, audio, video, music, synthetic data, and other content in response to user-initiated prompts.

Communication, Transparency, and Disclosure

You should clearly inform all participants in any conversation if GenAI is part of the interaction in any way; for example, if you are using GenAI to take notes during a Zoom call. In addition, if you have used GenAI for any part of your work (such as research, presentations, brainstorming, revising, or editing), you must disclose that use, including how it was used and which GenAI tool(s) were involved.

Data Security and Privacy

Members of the University must exercise extreme care when using GenAI tools and keep the following considerations in mind.

Presume any data entered into GenAI is public. Unless confirmed otherwise by the Office of Information Technology, assume that any input provided to any given GenAI tool is recorded, used to train models, and may be available to the public.

Do not use GenAI with confidential data. Do not input University data classified as internal, confidential, or highly confidential—including personally identifiable information (PII) for students, faculty, staff, trustees, affiliates, alumni, or donors—into any GenAI tool. Please refer to the Data Classification and Usage Policy.

Avoid the use of GenAI in sensitive contexts. Avoid using GenAI tools, including those that record and transcribe meetings, during discussions of sensitive University business or PII. These tools may store or share data insecurely with others, risking exposure.

Be aware of data possession and control. Faculty and staff must ensure that any data processed by GenAI tools remains under University control. Review the terms of service to confirm data ownership and secure any necessary consents. When selecting a GenAI tool, review its policy statements, particularly if the tool stores the data you provide. Confirm that data is deleted after use in accordance with University policies. If in doubt, please ask the Office of Information Technology for assistance with vetting any given tool.

Comply with privacy laws, such as the General Data Protection Regulation (GDPR) and the Family Educational Rights and Privacy Act (FERPA). Before using any automated tools to record, transcribe, or summarize a meeting, you must first inform all attendees and provide them with the opportunity to opt out or leave the meeting if they prefer.

Misinformation

AI-generated hallucinations refer to instances where a GenAI system produces outputs that are factually incorrect, despite appearing credible. AI-generated hallucinations could undermine the reliability of student work, faculty research, or educational materials. GenAI users in the campus community should be aware of the potential for hallucinations and compare the tool's output with other reliable sources of information. 

Risk Mitigation

Faculty should include a section in their syllabi explaining whether and how GenAI may be used in their courses, and should provide students with guidance to prevent the misuse of GenAI. All community members are encouraged to discuss and review best practices with colleagues, both within departments and offices and across campus, with the goal of encouraging responsible use.

When using GenAI-generated output, verify that the material is accurate and free of bias. This is particularly important for official communications and academic material. Because GenAI can hallucinate, users are encouraged to treat its output not as fact, but as a starting point.

Outside Vendors

Check contracts with outside vendors to ensure that their use of GenAI follows Saint Peter’s University policies and that any information you share with them is protected from unauthorized, improper, or illegal use. The vendor should agree in writing to take responsibility for any harm caused by its use of GenAI and to cooperate with the University in the event of a claim or investigation.

Ethical Concerns

GenAI technologies present several ethical concerns that students, faculty, and staff should consider before using these applications.

Data Provenance

Data provenance refers to the documentation and traceability of data sources, which can affect the reliability and validity of GenAI outputs. In academic settings, ensuring that GenAI systems utilize data from credible and ethical sources is important for maintaining institutional integrity. University employees should inquire about the provenance of data used in GenAI applications and make efforts to source data that adheres to ethical standards. Ethical standards should include obtaining informed consent from data subjects, or at the very least, implementing policies that respect the spirit of consent and privacy in data collection practices. 

Amplification of Existing and Continuing Bias 

GenAI systems can inadvertently perpetuate biases present in the data on which they are trained. In a university setting, this can manifest as bias in student work, admissions processes, grading practices, or student support services. To mitigate this risk, it is vital to actively monitor and address potential biases in GenAI systems used in university operations or student work.

Student Learning

When students use GenAI to complete assignments not designed with these tools in mind, student learning may be diminished. This presents ethical concerns for educators around teaching and learning, as well as fairness when assessing student work. Course syllabi must include clear criteria about which tools are permissible, in accordance with any program, school, and university guidelines. Students must follow the Academic Integrity Policy, disclosing their use of GenAI tools when applicable.

Intellectual Property

Because GenAI learns from existing data sets rather than generating wholly original content, some GenAI systems draw on data that is private, privileged, or protected by copyright. Use of such data may violate intellectual property rights, and infringement, whether intentional or not, could lead to legal claims against users and/or the University. Therefore, GenAI users should verify permissions for all sources and cite those sources appropriately to avoid infringement or plagiarism.

Continuous Learning

Because GenAI already impacts our personal, professional, and digital lives, it is critical to stay informed about the evolving capabilities and limitations of GenAI. As GenAI improves, so should our understanding of best practices, ethical considerations, and potential risks associated with its use. 

Reporting Misuse

If you encounter instances of GenAI misuse or unethical behavior within the University community, report them to the appropriate offices (e.g., Dean of Students, Academic Deans, Human Resources, Information Technology) as detailed in the University guidelines above.

 

Approved by IT Advisory Committee 3/18/2025
Approved by Faculty Senate 3/31/2025

Details


Article ID: 168411
Created
Mon 8/25/25 1:20 PM
Modified
Mon 8/25/25 3:28 PM

Related Articles


Information technology and data constitute valuable Saint Peter’s University assets. In order to protect the security, confidentiality and integrity of Saint Peter’s University data from unauthorized access, modification, disclosure, transmission or destruction, as well as to comply with applicable state and federal laws and regulations, all Saint Peter’s University data are now classified within security levels, with regulations on the usage of data at different levels.