Check whether key stakeholders in your field of study, such as publishers and sponsors, address the integration of AI-generated text, data, and images in proposals, reports, manuscript submissions, and similar documents. Given the rapidly changing landscape of AI, we expect continuous updates. If you have any questions or feedback on research integrity and AI, reach out to the Office of the Vice President for Research at AskResearch@Virginia.edu.
“[W]e urge the scientific community to focus sustained attention on five principles of human accountability and responsibility for scientific efforts that employ AI:
Transparent disclosure and attribution
Verification of AI-generated content and analyses
Documentation of AI-generated data
A focus on ethics and equity
Continuous monitoring, oversight, and public engagement
With the advent of generative AI, all of us in the scientific community have a responsibility to be proactive in safeguarding the norms and values of science.” Protecting scientific integrity in an age of generative AI | PNAS
“NSF's Proposal and Award Policies and Procedures Guide (PAPPG) addresses research misconduct, which includes fabrication, falsification, or plagiarism in proposing or performing NSF-funded research, or in reporting results funded by NSF. Generative AI tools may create these risks, and proposers and awardees are responsible for ensuring the integrity of their proposal and reporting of research results. This policy does not preclude research on generative AI as a topic of study.” Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process | NSF - National Science Foundation
“[We] should all actively engage in discussions about the responsible and effective deployment of AI applications, promoting awareness and cultivating a responsible use of AI as part of a research culture based on shared value…” Guidelines on the responsible use of generative AI in research developed by the European Research Area Forum - European Commission
- Association for Computing Machinery Policy on Authorship
- Elsevier AI Author Policy
- Emerald Publishing’s Stance on AI Tools and Authorship
- IEEE Submission Policies
- Nature Statement on Authorship and AI
- Oxford Academic - Instructions to Authors
- PLOS - Ethical Publishing Practice
- Sage - Using AI in Peer Review and Publishing
- Taylor & Francis Editorial Policies - defining authorship in your research paper
- Wiley - Best Practice Guidelines on Research Integrity and Publishing Ethics
Influencers in AI Ethics
- Kate Crawford is a researcher at Microsoft Research whose work examines the social and political implications of AI and data practices. Key contributions: Her book, Atlas of AI, explores how AI systems are entangled with power structures and resource exploitation.
- Timnit Gebru is a co-founder of the Distributed Artificial Intelligence Research Institute (DAIR) and an advocate for ethics in AI. Key contributions: She is known for her work on bias in AI systems, particularly in facial recognition technology. She was formerly part of Google's Ethical AI team, and her departure sparked discussions about transparency and ethics in AI research.
- Rumman Chowdhury is the former Director of Machine Learning Ethics, Transparency, and Accountability at Twitter. Key contributions: An advocate for ethical AI development, she has built practical tools for assessing the fairness and transparency of AI systems.
- Joanna Bryson is an AI researcher who emphasizes the importance of accountability and ethics in AI design.
- The AI Policy Podcast is produced by the Wadhwani AI Center at CSIS, a bipartisan think tank in Washington, D.C.
- AI for Good drives technological solutions that measure and advance the UN's Sustainable Development Goals. Founded in 2015 by a team of machine learning and social science researchers in the US and Europe, AI for Good is headquartered in Berkeley, California.
AI Ethics and Research Resources
- Aguilar N, Landau AY, Mathiyazhagan S, et al. Applying Reflexivity to Artificial Intelligence for Researching Marginalized Communities and Real-World Problems. Proceedings of the 56th Hawaii International Conference on System Sciences 2023;712-721. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1361&context=hicss-56.
- Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for Artificial Intelligence in Healthcare: A Multidisciplinary Perspective. BMC Medical Informatics and Decision Making 2020;20(310):1-9. doi: https://doi.org/10.1186/s12911-020-01332-6.
- Bernstein MS, Levi M, Magnus D, et al. ESR: Ethics and Society Review of Artificial Intelligence Research. arXiv 2021;2106.11521v2:1-18. doi: https://doi.org/10.48550/arXiv.2106.11521.
- Blau W, Vinton GC, Enriquez J, et al. Protecting Scientific Integrity in an Age of Generative AI. PNAS 2024;121(22)(e2407886121);1-3. doi: https://doi.org/10.1073/pnas.2407886121.
- Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific Challenges Posed by Artificial Intelligence in Research Ethics. Frontiers in Artificial Intelligence 2023;6(1149082). doi: https://doi.org/10.3389/frai.2023.1149082.
- Canadian Institutes of Health Research. CIHR Policy Statement on the Use of Artificial Intelligence-based Technology in Peer Review Meetings. Accessed March 3, 2025. https://cihr-irsc.gc.ca/e/54129.html.
- Celi LA, Cellini J, Charpignon M-L, et al. Sources of Bias in Artificial Intelligence that Perpetuate Healthcare Disparities – A Global Review. PLoS Digital Health 2022;1(3):e0000022. doi: https://doi.org/10.1371/journal.pdig.0000022.
- Chen Y, Clayton EW, Novak LL, Anders S, Malin B. Human-Centered Design to Address Biases in Artificial Intelligence. Journal of Medical Internet Research 2023;25:1-10. doi: https://doi.org/10.2196/43251.
- Cohen IG, Slottje A. Artificial Intelligence and the Law of Informed Consent. Research Handbook on Health, AI and the Law 2024;167-182. doi: https://doi.org/10.4337/9781802205657.00017.
- Cohen IG. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? Georgetown Law Journal 2020;108;1425-1469. doi: https://dx.doi.org/10.2139/ssrn.3529576.
- COPE. Authorship and AI Tools. First published Feb. 13, 2023. https://publicationethics.org/cope-position-statements/ai-author.
- Dalrymple D, Skalse J, Bengio Y, et al. Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. arXiv 2024;1-30. doi: https://doi.org/10.48550/arXiv.2405.06624.
- Drazen JM, Haug CJ. Trials of AI Interventions Must Be Preregistered. NEJM AI 2024;1(4). doi: https://doi.org/10.1056/AIe2400146.
- Eto T, Heath E. Artificial Intelligence Human Subjects Research (AI HSR) IRB Reviewer Checklist. 2022. https://www.academia.edu/66425895/Artificial_Intelligence_Human_Subject….
- European Commission, ERA Forum Stakeholders’ Document. Living Guidelines on the Responsible Use of Generative AI in Research. 2024. https://research-and-innovation.ec.europa.eu/document/download/2b6cf7e5-36ac-41cb-aab5-0d32050143dc_en?filename=ec_rtd_ai-guidelines.pdf.
- Ferretti A, Ienca M, Sheehan M, et al. Ethics Review of Big Data Research: What Should Stay and What Should Be Reformed? BMC Medical Ethics 2021;22(51):1-13. doi: https://doi.org/10.1186/s12910-021-00616-4.
- Flanagin A, Bibbins-Domingo K, Berkwits M, et al. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA Network 2023;329(8):637-639. doi: https://doi.org/10.1001/jama.2023.1344.
- Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA 2023;330(8):702-703. doi: https://doi.org/10.1001/jama.2023.12500.
- Flanagin A, Pirracchio R, Khera R. Reporting Use of AI in Research and Scholarly Publication – JAMA Network Guidance. JAMA 2024;331(13):1096-1098. doi: https://doi.org/10.1001/jama.2024.3471.
- Friesen P, Douglas-Jones R, Marks M, et al. Governing AI-Driven Health Research: Are IRBs Up to the Task? Ethics in Human Research 2021;43(2):35-42. doi: https://doi.org/10.1002/eahr.500085.
- Gallifant J, Bitterman DS, Celi LA, Gichoya JW, Matos J, McCoy LG, Pierce RL. Ethical Debates Amidst Flawed Healthcare Artificial Intelligence Metrics. npj Digital Medicine 2024;7(243):1-3; doi: https://doi.org/10.1038/s41746-024-01242-1.
- Gallifant J, Nakayama LF, Gichoya JW, et al. Equity Should Be Fundamental to the Emergence of Innovation. PLoS Digital Health 2023;2(4):e0000224. doi: https://doi.org/10.1371/journal.pdig.0000224.
- Gichoya JW, Banerjee I, Bhimireddy AR, et al. AI Recognition of Patient Race in Medical Imaging: A Modelling Study. Lancet Digital Health 2022;4(6):e406-e414. doi: https://doi.org/10.1016/S2589-7500(22)00063-2.
- Gichoya JW, Thomas K, Celi LA, et al. AI Pitfalls and What Not to Do: Mitigating Bias in AI. British Journal of Radiology 2023;96(1150):1-8; doi: https://doi.org/10.1259/bjr.20230023.
- Gil Y. Will AI Write Scientific Papers in the Future? Presidential Address. AI Magazine 2021;42:3-15. doi: https://doi.org/10.1609/aaai.12027.
- Godwin RC, Bryant AS, Wagener BM, et al. IRB-Draft Generator: A Generative AI Tool to Streamline the Creation of Institutional Review Board Applications. SoftwareX 2024;25:101601;1-5. doi: https://doi.org/10.1016/j.softx.2023.101601.
- Gray ML. A Human Rights Framework for AI Research Worthy of Public Trust. Issues in Science and Technology; May 21, 2024. doi: https://doi.org/10.58875/ERUU8159.
- Harvard Library, Research Guides. Artificial Intelligence for Research and Scholarship. Last updated Aug. 19, 2024. https://guides.library.harvard.edu/c.php?g=1330621&p=9798082.
- Hendricks-Sturrup R, Simmons M, Anders S, et al. Developing Ethics and Equity Principles, Terms, and Engagement Tools to Advance Health Equity and Researcher Diversity in AI and Machine Learning: Modified Delphi Approach. Journal of Medical Internet Research AI 2023;2(e52888). doi: https://doi.org/10.2196/52888.
- Hosseini M, Rasmussen LM, Resnik DB. Using AI To Write Scholarly Publications. Accountability in Research 2024;31(7);715-723. doi: https://doi.org/10.1080/08989621.2023.2168535.
- Hosseini M, Resnik DB. Guidance Needed for Using Artificial Intelligence to Screen Journal Submissions for Misconduct. Research Ethics 2024. doi: https://doi.org/10.1177/17470161241254052.
- Hosseini M, Resnik DB, Holmes K. The Ethics of Disclosing the Use of Artificial Intelligence Tools in Writing Scholarly Manuscripts. Research Ethics 2023;19(4):449-465. doi: https://doi.org/10.1177/17470161231180449.
- ICMJE. Defining the Role of Authors and Contributors. Accessed March 3, 2025. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html.
- ICMJE. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated May 2023. https://www.icmje.org/news-and-editorials/icmje-recommendations_annotated_may23.pdf.
- ICMJE. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated January 2024. https://www.icmje.org/icmje-recommendations.pdf.
- Ienca M, Vayena E. Ethical Requirements for Responsible Research with Hacked Data. Nature Machine Intelligence 2021;3:744-748. doi: https://doi.org/10.1038/s42256-021-00389-w.
- Lancet. Information for Authors. Accessed March 3, 2025. https://www.thelancet.com/pb-assets/Lancet/authors/tl-info-for-authors-1690986041530.pdf.
- Jordan SR. Designing an Artificial Intelligence Research Review Committee. Future of Privacy Forum 2019. https://fpf.org/wp-content/uploads/2019/10/DesigningAIResearchReviewCommittee.pdf.
- Kaebnick GE, Magnus DC, Kao A, et al. Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Medicine, Health Care and Philosophy 2023;26:499-503. doi: https://doi.org/10.1007/s11019-023-10176-6.
- Kaushik D, Lipton ZC, London AJ. Resolving the Human-Subjects Status of Machine Learning’s Crowdworkers: What Ethical Framework Should Govern the Interaction of ML Researchers and Crowdworkers? Queue 2024;21(6):101-127. doi: https://doi.org/10.1145/3639452.
- Koller D, Beam A, Manrai A, Ashley E, Liu X, Gichoya J, Holmes C, Zou J, Dagan N, Wong TY, Blumenthal D, Kohane I. Why We Support and Encourage the Use of Large Language Models in NEJM AI Submissions. NEJM AI 2023;1(1):1-3; doi: https://doi.org/10.1056/AIe2300128.
- Li H, Moon JT, Purkayastha, et al. Ethics of Large Language Models in Medicine and Medical Research. Lancet Digital Health 2023;5(6):e333-e335. doi: https://doi.org/10.1016/S2589-7500(23)00083-3.
- Liebrenz M, Schleifer R, Buadze A, et al. Generating Scholarly Content with ChatGPT: Ethical Challenges for Medical Publishing. Lancet Digital Health 2023;5(3):e105-e106. doi: https://doi.org/10.1016/S2589-7500(23)00019-5.
- London AJ. Artificial Intelligence in Medicine: Overcoming or Recapitulating Structural Challenges to Improving Patient Care? Cell Reports Medicine 2022;3(5);1-8. doi: https://doi.org/10.1016/j.xcrm.2022.100622.
- Makridis CA, Boese A, Fricks R, et al. Informing the Ethical Review of Human Subjects Research Utilizing Artificial Intelligence. Frontiers in Computer Science 2023;5:1235226;1-8. doi: https://doi.org/10.3389/fcomp.2023.1235226.
- McCradden MD, Joshi S, Anderson JA, London AJ. A Normative Framework for Artificial Intelligence as a Sociotechnical System in Healthcare. Patterns 2023;4(11):1-9. doi: https://doi.org/10.1016/j.patter.2023.100864.
- Microsoft Research. Project Resolve. Accessed Dec. 9. https://www.microsoft.com/en-us/research/project/project-resolve/.
- Naddaf M. How are Researchers Using AI? Survey Reveals Pros and Cons for Science. Nature 2025. Accessed February 7, 2025. https://www.nature.com/articles/d41586-025-00343-5.
- Nasr M, Carlini N, Hayase J, Jagielski M, Cooper AF, Ippolito D, Choquette-Choo CA, Wallace E, Tramer F, Lee K. Scalable Extraction of Training Data from (Production) Language Models. arXiv 2023;1-64. doi: https://doi.org/10.48550/arXiv.2311.17035.
- National Academy of Medicine. Health Care Artificial Intelligence Code of Conduct. https://nam.edu/programs/value-science-driven-health-care/health-care-artificial-intelligence-code-of-conduct/.
- National Academy of Sciences. US-UK Scientific Forum on Science in the Age of AI. June 11-12, 2024. https://www.nasonline.org/event/us-uk-scientific-forum-on-science-in-the-age-of-ai/.
- National Academies of Sciences, Engineering, and Medicine. AI for Scientific Discovery: Proceedings of a Workshop. Washington, DC: National Academies Press 2024. doi: https://doi.org/10.17226/27457.
- National Academies of Sciences, Engineering, and Medicine. Fostering Responsible Computing Research: Foundations and Practices. National Academies Press 2022. doi: https://doi.org/10.17226/26507.
- National Health and Medical Research Council. Policy on Use of Generative Artificial Intelligence in Grant Applications and Peer Review. 2023. Accessed March 3, 2025. https://www.nhmrc.gov.au/about-us/resources/policy-use-generative-artificial-intelligence.
- National Institutes of Health (NIH). The Use of Generative Artificial Intelligence Technologies Is Prohibited for the NIH Peer Review Process. NOT-OD-23-149. June 23, 2023. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html.
- National Institutes of Health (NIH). Use of Generative AI in Peer Review. Frequently Asked Questions (FAQs). Last updated Aug. 2, 2024. https://grants.nih.gov/faqs#/use-of-generative-ai-in-peer-review.htm.
- National Science Foundation (NSF). Notice to Research Community: Use of Generative Artificial Intelligence Technology in the NSF Merit Review Process. Dec. 14, 2023. https://new.nsf.gov/news/notice-to-the-research-community-on-ai.
- Nature Portfolio. Artificial Intelligence. Accessed March 3, 2025. https://www.nature.com/nature-portfolio/editorial-policies/ai.
- Nature. Editorial. Tools Such as ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for their Use. Nature 2023;613:612. doi: https://doi.org/10.1038/d41586-023-00191-1.
- NEJM. Editorial Policies. Accessed March 3, 2025. https://www.nejm.org/about-nejm/editorial-policies.
- Palmer K. AI Threatens to Cement Racial Bias in Clinical Algorithms. Could it Also Chart a Path Forward? STAT News, Sept. 11, 2024. https://www.statnews.com/2024/09/11/embedded-bias-series-artificial-intelligence-risks-of-bias-in-medical-data/.
- Patton DU, Landau AY, Mathiyazhagan S. ChatGPT for Social Work Science: Ethical Challenges and Opportunities. Journal of the Society for Social Work and Research 2023;14(3);553-562. doi: https://doi.org/10.1086/726042.
- Penn State, AI Hub. AI Guidelines. 2023. https://ai.psu.edu/guidelines/.
- Perni S, Lehmann LS, Bitterman DS. Patients Should Be Informed When AI Systems Are Used in Clinical Trials. Nature Medicine 2023;29;1890-1891. doi: https://doi.org/10.1038/s41591-023-02367-8.
- PLOS. Ethical Publishing Practice. Accessed March 3, 2025. https://journals.plos.org/plosone/s/ethical-publishing-practice.
- PNAS Nexus. Information for Authors. Accessed Oct. 23, 2024. https://academic.oup.com/pnasnexus/pages/general-instructions?login=false.
- PNAS. PNAS Author Center: Editorial and Journal Policies. Accessed Oct. 23, 2024. https://www.pnas.org/author-center/editorial-and-journal-policies#authorship-and-contributions.
- Porsdam Mann S, Vazirani AA, Aboy M, et al. Guidelines for Ethical Use and Acknowledgement of Large Language Models in Academic Writing. Nature Machine Intelligence 2024. doi: https://doi.org/10.1038/s42256-024-00922-7.
- Resnik DB, Hosseini M. The Ethics of Using Artificial Intelligence in Scientific Research: New Guidance Needed for a New Tool. AI and Ethics 2024;1-19. doi: https://doi.org/10.1007/s43681-024-00493-8.
- Science. Science Journals: Editorial Policies. Accessed Oct. 23, 2024. https://www.science.org/content/page/science-journals-editorial-policies.
- Shaw J, Ali J, Atuire CA, et al. Research Ethics and Artificial Intelligence for Global Health: Perspectives from the Global Forum on Bioethics in Research. BMC Medical Ethics 2024;25(46);1-9. doi: https://doi.org/10.1186/s12910-024-01044-w.
- Sleigh J, Hubbs S, Blasimme A, Vayena E. Can Digital Tools Foster Ethical Deliberation? Humanities and Social Science Communications 2024;11(117):1-10. doi: https://doi.org/10.1057/s41599-024-02629-x.
- Thorp HH, Vinson V. ChatGPT Is Fun, But Not an Author. Science 2023;379(6630):13. doi: https://doi.org/10.1126/science.adg7879.
- Thorp HH, Vinson V. Change to Policy on the Use of Generative AI and Large Language Models. Science:Editor's Blog 2023. Accessed March 3, 2025. https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models.
- Thorp HH. Genuine Images in 2024. Science 2024;383(6678):7. doi: https://doi.org/10.1126/science.adn7530.
- U.S. Department of Health and Human Services, Office for Human Research Protections (OHRP), SACHRP Recommendations. Considerations for IRB Review of Research Involving Artificial Intelligence. Approved July 1, 2022. https://www.hhs.gov/ohrp/sachrp-committee/recommendations/attachment-e-july-25-2022-letter/index.html.
- University of California Berkeley, Office of the Chancellor, Office of Ethics, Risk and Compliance Services. Appropriate Use of Generative AI Tools. 2024. https://oercs.berkeley.edu/privacy/privacy-resources/appropriate-use-generative-ai-tools.
- University of Michigan. Generative Artificial Intelligence. Accessed Nov. 21, 2024. https://genai.umich.edu/.
- University of Michigan. U-M Guidance for Faculty/Instructors. Accessed Nov. 21, 2024. https://genai.umich.edu/resources/faculty.
- University of Minnesota, Technology Help. Artificial Intelligence: Appropriate Use of Generative AI Tools. 2024. https://it.umn.edu/services-technologies/resources/artificial-intellige….
- University of Minnesota, Technology Help. Navigating AI @ UMN. 2024. https://it.umn.edu/navigating-ai-umn.
- University of Minnesota Libraries. ChatGPT, Copilot, and Other AI Tools. Updated Sept. 17, 2024. https://libguides.umn.edu/chatgpt.
- University of Minnesota Duluth, Information Technology Systems and Services. Artificial Intelligence at UMD. 2024. https://itss.d.umn.edu/service-catalog/academic-technology/ai.
- University of Utah, Office of the Vice President for Research. Guidance on the Use of AI in Research. July 13, 2023. https://attheu.utah.edu/facultystaff/vpr-statement-on-the-use-of-ai-in-research/.
- Warraich HJ, Tazbaz T, Califf RM. FDA Perspective on the Regulation of Artificial Intelligence in Health Care. JAMA Network 2024;333(3):241-247. doi: https://doi.org/10.1001/jama.2024.21451.
- Wellcome. Use of Generative Artificial Intelligence (AI) When Applying for Wellcome Grant Funding. Accessed March 3, 2025. https://wellcome.org/grant-funding/guidance/policies-grant-conditions/use-of-generative-ai.
- Wing JM. Trustworthy AI. Communications of the ACM 2021;64(10):64-71. doi: https://doi.org/10.1145/3448248.
- Wing JM, Wooldridge M. Findings and Recommendations of the May 2022 US-UK AI Workshop. 2022;1-37. https://par.nsf.gov/servlets/purl/10390650.
- Yang Y, Zhang H, Gichoya JW, Katabi D, Ghassemi M. The Limits of Fair Medical Imaging AI in Real-World Generalization. Nature Medicine 2024;30:2838-2848. doi: https://doi.org/10.1038/s41591-024-03113-4.
- Yu KH, Healey E, Leong TY, Kohane IS, Manrai AK. Medical Artificial Intelligence and Human Values. The New England Journal of Medicine 2024;390(20):1895-1904. doi: https://doi.org/10.1056/NEJMra2214183.
Books
- "The Oxford Handbook of Ethics of AI" edited by Markus Dubber et al. (2020) includes a chapter on The Ethics of AI in Biomedical Research, Patient Care, and Public Health written by Alessandro Blasimme and Effy Vayena. They highlight the role of artificial intelligence in precision medicine and use of public health data for surveillance. They point out that even without new ethical frameworks to govern the use of AI in research, "rapid uptake of AI-driven solutions [will] necessarily have to rely on existing ethical safeguards."
- "AI Ethics: MIT Press Essential Knowledge Series" by Mark Coeckelberh. He explains AI technology and offers an overview of critical ethical issues at all stages of data science processes.
- "The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities" by Prof Luciano Floridi. He covers challenges around implementing ethical AI practices and governance and the potential misuse of AI.