Evaluation methods applied to digital health interventions: What is being used beyond randomised controlled trials? A scoping review
 
Affiliations:
1. Institute of Health and Nursing Science, Charité–Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
2. Leibniz ScienceCampus Digital Public Health Bremen, Bremen, Germany
3. Unit for Health Promotion Research, University of Southern Denmark, Esbjerg, Denmark
4. Human and Health Sciences, University of Bremen, Bremen, Germany
 
Publication date: 2023-04-27
 
 
Popul. Med. 2023;5(Supplement):A555
 
ABSTRACT
Background and Objective: Digital health interventions (DHIs) can deliver health interventions to a wide audience with lower barriers and support behaviour change. Despite this potential, evaluating the effectiveness of DHIs is challenging. DHIs are often designed as complex interventions, and established evaluation methods, e.g., randomised controlled trials (RCTs), have limited applicability for evaluating complex interventions. In this context, alternative evaluation methods are often discussed. Therefore, a scoping review was conducted to provide an overview of existing evaluation methods for DHIs beyond RCTs.

Methods: The Cochrane Central Register of Controlled Trials, MEDLINE, Web of Science, and EMBASE were screened in May 2021 to identify relevant publications. Studies were included that (1) applied alternative evaluation methods, (2) tested and reported effects of interventions, and (3) dealt with DHIs. Inclusion was not restricted to any specific population or to specific contexts in which studies were conducted.

Results: Eight studies were identified, covering four alternative evaluation designs. The most frequently used evaluation design for DHIs was the factorial design (n=5), followed by stepped-wedge designs (n=1), sequential multiple assignment randomised trials (SMARTs) (n=1), and micro-randomised trials (MRTs) (n=1). Some of these methods enable the adaptation of interventions (e.g., SMARTs or MRTs) and the evaluation of specific components of interventions (e.g., factorial designs).

Conclusions: Alternative study designs are appropriate for addressing some specific needs in the evaluation of DHIs and might help overcome current evaluation challenges. However, it remains unclear how to establish these alternative evaluation designs in research practice and how to deal with the limitations of these designs.
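To illustrate the component-evaluation idea behind factorial designs, the minimal Python sketch below shows how a 2x2x2 full factorial allocation randomises each DHI component independently, so that the effect of each component can later be estimated from the same trial. The component names ("reminders", "coaching", "gamification") are hypothetical and not drawn from the included studies.

    # Illustrative sketch (hypothetical components, not from the reviewed studies):
    # independent on/off randomisation of three DHI components in a 2^3 factorial design.
    import itertools
    import random

    components = ["reminders", "coaching", "gamification"]

    # The 8 experimental conditions of a 2^3 factorial design (on/off per component).
    conditions = list(itertools.product([0, 1], repeat=len(components)))

    def assign(participant_id: str) -> dict:
        """Randomly allocate a participant to one of the eight conditions."""
        condition = random.choice(conditions)
        return {"id": participant_id,
                **{name: bool(flag) for name, flag in zip(components, condition)}}

    if __name__ == "__main__":
        random.seed(42)  # reproducible example allocation
        for pid in ["P001", "P002", "P003"]:
            print(assign(pid))

Because every component is switched on for half of the participants regardless of the other components, component-specific (main) effects can be estimated without running a separate trial per component.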
ISSN: 2654-1459