We recognize the growing importance of institutional support to enable the adoption of Artificial Intelligence (AI) tools. To protect the research and the researcher, we will continue to offer guidance on the use of AI in all its varied forms to perform and publish research, so that we approach its use with responsibility and care. Faculty, research staff, and students must be aware of the relevant policies and guidelines for the use of AI in their respective work. For example, AI might be used to draft text or to generate data and ideas, but the researcher remains accountable for the validity of AI-generated data and for any citations the work requires.
As the use of AI grows, we want to emphasize the following point:
“Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.”[1]
[1] Committee on Publication Ethics (COPE). (2023, February 13). Authorship and AI tools. Retrieved June 14, 2023, from https://publicationethics.org/cope-position-statements/ai-author. COPE further advises: “Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used” (COPE, 2023; Zielinski et al., 2023; Flanagin et al., 2023). See also: Flanagin, A., et al. (2023, February 28). Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637-639. Zielinski, C., et al. (2023, May 31). Chatbots, ChatGPT, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Retrieved July 10, 2023, from the WAME website (“Chatbots, Generative AI, and Scholarly Manuscripts”).
Fact-checking AI content involves a careful process of identifying claims, researching, verifying facts, and cross-referencing sources.
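As one illustration of that process (not a prescribed tool or workflow), the hypothetical sketch below tracks each claim extracted from AI output and the sources checked against it before the claim is treated as verified; all names are invented for the example:

```python
# Illustrative sketch only: a minimal tracker for fact-checking claims found
# in AI-generated text. All names here are hypothetical, not a UVA tool.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                   # the claim as extracted from AI output
    sources_checked: list[str] = field(default_factory=list)
    verified: bool = False

def cross_reference(claim: Claim, source: str, supports_claim: bool) -> None:
    """Record a consulted source; mark verified only with independent support."""
    claim.sources_checked.append(source)
    # Require at least two independently checked sources before trusting a claim.
    if supports_claim and len(claim.sources_checked) >= 2:
        claim.verified = True

claim = Claim("Compound X inhibits enzyme Y at nanomolar concentrations")
cross_reference(claim, "primary article (DOI)", supports_claim=True)
cross_reference(claim, "independent replication study", supports_claim=True)
print(claim.verified)  # True only after cross-referencing multiple sources
```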
“Provenance, or provenance tracking, is a common technique for AI authentication that allows for the tracing of the history and quality of a dataset;”[2]
Take care when using AI to ensure that data provenance is maintained. Data provenance is a documented trail that accounts for the origin of a piece of data and tracks its movement from that origin to its present location. This trail of documentation ensures that data creators are transparent about their work. Documentation should include the following (a brief sketch of one way to record these fields follows the list):
- Researcher responsibilities (who did what)
- Workflow
- Input
- Output
- Metadata
- Origin/access point of the data
- Management of the data
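As an illustration only, and not a UVA-mandated schema, a provenance record covering these fields might be kept as a structured log entry. The field names, file names, and tool name below are hypothetical:

```python
# Illustrative only: one possible structure for a provenance record covering
# the fields listed above. Field names, file names, and the tool name are
# hypothetical, not a UVA-mandated schema.
import json
from datetime import datetime, timezone

provenance_record = {
    "responsibilities": {"analysis": "J. Researcher", "review": "P. Investigator"},
    "workflow": "cleaned survey export, then drafted a summary with an AI tool; "
                "output verified line by line against the raw responses",
    "input": "survey_responses_v2.csv",
    "output": "summary_draft.txt",
    "metadata": {
        "tool": "example-llm-tool",    # hypothetical tool name and version
        "tool_version": "1.2.3",
        "prompt": "Summarize the key themes in the attached responses.",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "origin": "institutional survey platform export, accessed 2024-10-01",
    "management": "stored on approved university storage; access limited to the study team",
}

# Appending each record to a log keeps the trail auditable over time.
with open("provenance_log.jsonl", "a") as log:
    log.write(json.dumps(provenance_record) + "\n")
```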
Researchers are responsible for verifying the accuracy of AI-generated output.
[2] The Information Technology Industry Council (ITI). (January 2024). Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations.
Researchers should seek out and stay current on AI ethics, regulations, and publication rules.[3]
[3] For example, do not enter confidential information and/or data into third-party AI tools. Any infringements may expose the University and its community members to potential privacy and security breaches.
Know that misuse of AI could give rise to concerns under UVA’s existing Research Misconduct Policy. Foundational research integrity standards continue to apply, as do long-standing rigor and reproducibility expectations, whether AI is used or not. Appropriate use of AI technology is critical to avoid violating UVA’s research misconduct policy as well as federal and international standards.
A Research Integrity Officer (RIO) might assess harm caused by the use of AI in research in several scenarios, including:
- Data Fabrication or Falsification:
- Scenario: An AI tool is used to generate or manipulate data in a way that misrepresents the research findings.
- Assessment: The RIO would evaluate the extent to which the AI-generated data has impacted the validity of the research, the potential for misleading conclusions, and any subsequent harm to public trust or policy decisions based on the research.
- Privacy Violations:
- Scenario: AI tools used in research inadvertently expose sensitive or personal data, violating privacy regulations in addition to potentially constituting misconduct as defined under the research misconduct policy.
- Assessment: The RIO might work with UVA privacy experts to evaluate the breach’s impact on the individuals whose data was exposed, the legal implications, and the steps needed to mitigate the harm.
- Misinterpretation of AI Results:
- Scenario: Researchers misinterpret the outputs of an AI tool, leading to incorrect conclusions or recommendations, and fail to ensure the accuracy of the data reported. In other words, federal sponsors might review questions or allegations that a researcher willfully disregarded the accuracy of the results of the AI tool used and failed to verify them.
- Assessment: This might result in a retraction without any allegation of research misconduct, or there could be intertwined concerns that fall within a research misconduct policy. The RIO would assess the researcher’s intent, their knowledge of the misinterpretation and whether they took the opportunity to correct it, and the potential harm caused by the incorrect conclusions, including any negative effects on subsequent research.
- Overreliance on AI:
- Scenario: Researchers rely too heavily on AI tools without adequate human oversight, leading to knowing or reckless inaccuracies being incorporated into the published research, or to omitted data or results, such that the research is not accurately represented in the research record.
- Assessment: The RIO might evaluate the consequences of these errors, the importance of human oversight in the research process, and the need for better training or guidelines for using AI tools.
In each of these scenarios, the RIO’s role would be to ensure that the integrity of the research is maintained, any harm is mitigated, and appropriate corrective actions are taken to prevent future occurrences.
Scholars, governments, sponsors, and publishers have published guidance on the appropriate use of AI, and we have collected and linked many examples of that guidance on this site.
Researchers must inform the IRB about any plan to use AI. Further information can be found on the IRB website.
We encourage you to explore available AI tools and practice using them to see how they can enhance your productivity. Because AI tools and their uses are constantly evolving, we will update this guidance on an ongoing basis.
We are here to help if you have questions related to the use of AI and research integrity: AskResearch@virginia.edu
Additional UVA Guidance Related to AI
- Center for Teaching Excellence (CTE) - GenAI in Teaching and Learning Resources
- Office of the Provost – GenAI Teaching and Learning Task Force Report & Guidance for Faculty and Students
- Office of the Chief Information Officer- Generative AI Use Guidelines
Acceptable Uses of AI
UVA ITS offers UVA Copilot Chat (Copilot Chat) and UVA Copilot for Microsoft 365 (Copilot for M365). It is important to be aware of guidelines that apply when using these Microsoft Generative AI products:
- Generative AI use is subject to UVA's Responsible Use Guidelines.
- Your school or unit may have additional Generative AI usage guidelines that you should review.
- Highly Sensitive Data (HSD) should never be entered into a Generative AI tool.
- By using these tools, you acknowledge that you agree to UVA’s Terms of Use.
Further information on Generative AI Tools can be found on the UVA Information Technology Services website.
UVA Research Computing supports researchers in accomplishing more with tools like AI. One example of an upcoming educational opportunity:
Workshop: Deep learning in drug discovery
This workshop will briefly outline the drug discovery process and how deep learning can be applied to arrive at drug-like candidates in a time-sensitive manner. It will also discuss tools and scripts that the research community can use in our HPC environment to make the drug discovery process faster and its outcomes more robust. (A brief illustrative sketch of the general technique follows the workshop details.)
Date: Wednesday, November 6, 2024
Time: 10:00am - 11:30am
Presenter: Pri Prakash
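To give a flavor of the kind of technique the workshop covers, here is a minimal, hypothetical sketch, not the workshop’s actual materials: a small feed-forward network trained to score molecular fingerprints for activity. The data is synthetic; a real pipeline would use curated assay data and descriptors such as RDKit fingerprints, and the model would be validated before any candidate selection.

```python
# Minimal, hypothetical sketch (not the workshop's materials): a tiny
# feed-forward network that scores molecular fingerprints for activity.
# The fingerprints and labels below are synthetic stand-ins; a real pipeline
# would use curated assay data and descriptors (e.g., RDKit ECFP fingerprints)
# and would validate the model before any candidate selection.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for sparse binary fingerprints (e.g., 2048-bit ECFP).
n_mols, n_bits = 512, 2048
X = (torch.rand(n_mols, n_bits) < 0.05).float()
# Arbitrary synthetic "active" label derived from a few fingerprint bits.
y = (X[:, :16].sum(dim=1) > 1).float().unsqueeze(1)

model = nn.Sequential(
    nn.Linear(n_bits, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # logit for predicted activity
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):  # short training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```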