IRB Compliance Coordinator, University of New England
Background: Deception in research aims to improve objectivity but often draws ethical criticism. This analysis explores whether and when deception is justifiable, particularly as new technologies introduce covert forms of deception.
Methods: A comparative literature review analyzed peer-reviewed studies on deception in human subjects research, focusing on two topics: (1) current federal regulations governing deception in human subjects research and (2) how emerging technologies such as AI and metadata collection affect the use of deception. Sources were drawn from academic databases and assessed for methodological approach, ethical design, and participant impact. Federal regulations and technological influences on deception were also reviewed.
Conclusion: Comparing studies that used deception ethically with those that did not revealed three distinguishing patterns: the purpose of the deception, the study design, and briefing and debriefing practices. Studies that employed deception ethically began protecting participants at the earliest stages of a protocol by designing the study to minimize potential harm. These studies also indicated to participants before enrollment that they might be deceived in some manner, and debriefed them comprehensively afterward. Finally, the purpose of the deception complemented the other two patterns: it ensured that deception was the best methodological choice and that its type and severity would pose no harm during the study or afterward, once debriefing occurred.
A complication arises in classifying all of the aforementioned studies as involving a type of "overt deception," since the deception is directly part of the study. Covert deception occurs when the deception is not directly related to the study but nonetheless affects participants' data, privacy, and confidentiality. Examples include survey platforms such as Qualtrics collecting metadata like IP addresses and geolocation coordinates, and AI algorithms that are not transparent about the data used for their training or their decision-making processes. Technology makes it much easier for researchers to commit covert deception unknowingly, which in turn makes ethical risks difficult to mitigate.
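The metadata example above can be made concrete with a minimal, hypothetical sketch (the function and field names are illustrative, not Qualtrics' actual implementation): a participant submits only their survey answers, yet the platform's response handler also records network-level identifiers the participant never typed or consented to in the instrument itself.

```python
# Hypothetical sketch of covert metadata capture by a survey backend.
# All names here are illustrative; this is not any real platform's code.

def record_response(answers, request_headers):
    """Store survey answers alongside request metadata."""
    return {
        "answers": answers,
        # Covertly captured: the participant supplied only the answers,
        # but the platform also logs network-level identifiers taken
        # from the HTTP request itself.
        "ip_address": request_headers.get("X-Forwarded-For", "unknown"),
        "user_agent": request_headers.get("User-Agent", "unknown"),
    }

# A participant submits a single survey answer...
stored = record_response(
    {"q1": "agree"},
    {"X-Forwarded-For": "203.0.113.7", "User-Agent": "Mozilla/5.0"},
)
# ...yet the stored record also contains identifying metadata.
```

The point of the sketch is that nothing in the survey instrument signals this collection to the participant; it happens at the transport layer, which is what makes it covert rather than overt.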
Limitations: This review focuses solely on deception in human subjects research and does not address incomplete disclosures or emerging gray areas. Such instances call for further discussion. For example, how much of a complex AI algorithm must be explained to non-experts in a consent form before it threatens overall comprehension, creating further potential ethical risks? This analysis also does not evaluate participant perceptions of deception, which may further complicate ethical assessments.
Discussion: Future guidance must address covert deception introduced by technological tools, emphasizing transparency, comprehension, and harm mitigation. Ethics training should encourage researchers to critically assess the necessity of deception and to develop alternatives when possible.