IRB Compliance Coordinator, University of New England
Background: The growing use of AI in research raises ethical concerns about preserving privacy and confidentiality. This project explores how AI challenges traditional protections, asking whether data can ever truly be de-identified and how the answer affects ethical guidance in human subjects research.
Methods: A narrative literature review was conducted using peer-reviewed, English-language sources published between 2005 and 2025 that focused on AI applications in both human and nonhuman subjects research.
Conclusion: AI challenges foundational assumptions of research ethics by undermining the stability of de-identification as a protective measure. The capacity for re-identification, and the use of participant data without explicit consent, demands a reevaluation of what qualifies as human subjects research. These concerns can be organized under the concept of “algorithmic opacity”: the lack of transparency about where AI algorithms retrieve information and how they use it leaves gaps of vulnerability that could harm both researchers and participants. Algorithmic opacity extends beyond privacy and confidentiality, affecting other ethical principles such as respect for persons and justice. All of these concerns are exacerbated because AI risks are no longer limited to individuals directly involved in a study.
Limitations: This review does not account for all AI applications or delve into the technical details of algorithm development, which limits the depth of its ethical analysis. The rapidly evolving nature of AI also means findings may quickly become outdated.
Discussion: Future work should incorporate technical experts to inform ethical frameworks and ensure ongoing adaptability as AI evolves. Interdisciplinary collaboration is essential to protect participants and maintain trust in research practices.