Check whether key stakeholders in your field of study, such as the publisher or sponsor, discuss the integration of AI-generated text, data, and images in proposals, reports, manuscript submissions, and similar materials. Given the rapidly changing landscape of AI, we expect continuous updates. If you have any questions or feedback on research integrity and AI, reach out to the Office of the Vice President for Research at AskResearch@Virginia.edu.
“[W]e urge the scientific community to focus sustained attention on five principles of human accountability and responsibility for scientific efforts that employ AI:
Transparent disclosure and attribution
Verification of AI-generated content and analyses
Documentation of AI-generated data
A focus on ethics and equity
Continuous monitoring, oversight, and public engagement
With the advent of generative AI, all of us in the scientific community have a responsibility to be proactive in safeguarding the norms and values of science.” Protecting scientific integrity in an age of generative AI | PNAS
“NSF's Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which includes fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. Generative AI tools may create these risks, and proposers and awardees are responsible for ensuring the integrity of their proposal and reporting of research results. This policy does not preclude research on generative AI as a topic of study.” Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process | NSF - National Science Foundation
“[We] should all actively engage in discussions about the responsible and effective deployment of AI applications, promoting awareness and cultivating a responsible use of AI as part of a research culture based on shared value…” Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum - European Commission
- Association for Computing Machinery Policy on Authorship
- Elsevier AI Author Policy
- Emerald Publishing’s Stance on AI Tools and Authorship
- IEEE Submission Policies
- Nature Statement on Authorship and AI
- Oxford Academic - Instructions to Authors
- PLOS - Ethical Publishing Practice
- Sage - Using AI in Peer Review and Publishing
- Taylor & Francis Editorial Policies - defining authorship in your research paper
- Wiley - Best Practice Guidelines on Research Integrity and Publishing Ethics
Influencers in AI Ethics
- Kate Crawford is a researcher at Microsoft Research whose work focuses on the social and political implications of AI and data practices. Key Contributions: Her book, Atlas of AI, explores how AI systems are entangled with power structures and resource exploitation.
- Timnit Gebru is a co-founder of the Distributed Artificial Intelligence Research Institute (DAIR). Key Contributions: Known for her research on algorithmic bias and ethics in AI, particularly in facial recognition technology. Formerly part of Google's Ethical AI team, her departure sparked discussions about transparency and ethics in AI research.
- Rumman Chowdhury is the former Director of Machine Learning Ethics, Transparency, and Accountability at Twitter. Key Contributions: An advocate for ethical AI development, she has worked on building practical tools for assessing the fairness and transparency of AI systems.
- Joanna Bryson is an AI researcher who emphasizes the importance of accountability and ethics in AI design.
- The AI Policy Podcast is produced by the Wadhwani AI Center at CSIS, a bipartisan think tank in Washington, D.C.
- AI for Good drives technological solutions that measure and advance the UN's Sustainable Development Goals. Founded in 2015 by a team of machine learning and social science researchers in the US and Europe, the organization is headquartered in Berkeley, California.
Articles
- "The Mythos of Model Interpretability" by Zachary C. Lipton explores the challenges of understanding AI models and their implications for accountability.
- "Artificial Intelligence—The Revolution Hasn’t Happened Yet" by Peter Norvig offers a perspective on AI technologies' current state and ethical considerations.
"A Survey of Ethical Issues in Artificial Intelligence" is a thorough review of various ethical dilemmas posed by AI, including privacy, bias, and accountability. While the field is constantly evolving, these voices and resources provide a solid foundation for understanding the complexities and ethical considerations of AI development and deployment.
Books
- "The Oxford Handbook of Ethics of AI" edited by Markus Dubber et al. (2020) includes a chapter on The Ethics of AI in Biomedical Research, Patient Care, and Public Health written by Alessandro Blasimme and Effy Vayena. They highlight the role of artificial intelligence in precision medicine and use of public health data for surveillance. They point out that even without new ethical frameworks to govern the use of AI in research, "rapid uptake of AI-driven solutions [will] necessarily have to rely on existing ethical safeguards."
- "AI Ethics: MIT Press Essential Knowledge Series" by Mark Coeckelberh. He explains AI technology and offers an overview of critical ethical issues at all stages of data science processes.
- "The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities" by Prof Luciano Floridi. He covers challenges around implementing ethical AI practices and governance and the potential misuse of AI.