Predictors of Health Care Practitioners’ Intention to Use AI-Enabled Clinical Decision Support Systems: Meta-Analysis Based on the Unified Theory of Acceptance and Use of Technology

Background: Artificial intelligence–enabled clinical decision support systems (AI-CDSSs) offer potential for improving health care outcomes, but their adoption among health care practitioners remains limited.

Objective: This meta-analysis identified predictors influencing health care practitioners' intention to use AI-CDSSs based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Additional predictors were examined based on existing empirical evidence.

Methods: The literature search using electronic databases, forward searches, conference programs, and personal correspondence yielded 7731 results, of which 17 (0.22%) studies met the inclusion criteria. Random-effects meta-analysis, relative weight analyses, and meta-analytic moderation and mediation analyses were used to examine the relationships between the relevant predictor variables and the intention to use AI-CDSSs.

Results: The meta-analysis results supported the application of the UTAUT to the context of the intention to use AI-CDSSs. The results showed that performance expectancy (r=0.66), effort expectancy (r=0.55), social influence (r=0.66), and facilitating conditions (r=0.66) were positively associated with the intention to use AI-CDSSs, in line with the predictions of the UTAUT. The meta-analysis further identified positive attitude (r=0.63), trust (r=0.73), anxiety (r=–0.41), perceived risk (r=–0.21), and innovativeness (r=0.54) as additional relevant predictors. Trust emerged as the most influential predictor overall. The results of the moderation analyses showed that the relationship between social influence and use intention became weaker with increasing age. In addition, the relationship between effort expectancy and use intention was stronger for diagnostic AI-CDSSs than for devices that combined diagnostic and treatment recommendations. Finally, the relationship between facilitating conditions and use intention was mediated through performance and effort expectancy.

Conclusions: This meta-analysis contributes to the understanding of the predictors of intention to use AI-CDSSs based on an extended UTAUT model. More research is needed to substantiate the identified relationships and to explain the observed variations in effect sizes by identifying relevant moderating factors. The research findings bear important implications for the design and implementation of training programs for health care practitioners to ease the adoption of AI-CDSSs into their practice.
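The Methods describe pooling correlations with a sample size-weighted (and, per the table note, reliability-corrected) approach in the Hunter–Schmidt tradition. The following is a minimal illustrative sketch of those two pooling steps; the study data, reliabilities, and correlations below are hypothetical placeholders, not values from the included studies.

```python
# Illustrative sketch of sample size-weighted pooling of correlations (r)
# and reliability correction (rc), in the Hunter-Schmidt psychometric
# meta-analysis tradition. All input values are hypothetical.
import math

# Each tuple: (sample size n, observed correlation r,
#              predictor reliability rxx, criterion reliability ryy)
studies = [
    (120, 0.60, 0.85, 0.90),
    (250, 0.70, 0.80, 0.88),
    (90,  0.55, 0.90, 0.92),
]

def pooled_r(studies):
    """Sample size-weighted mean of the observed correlations (r)."""
    total_n = sum(n for n, r, rxx, ryy in studies)
    return sum(n * r for n, r, rxx, ryy in studies) / total_n

def pooled_rc(studies):
    """Sample size-weighted mean of reliability-corrected correlations (rc).

    Each observed r is disattenuated by sqrt(rxx * ryy) before weighting,
    correcting for measurement error in predictor and criterion.
    """
    total_n = sum(n for n, r, rxx, ryy in studies)
    corrected = [(n, r / math.sqrt(rxx * ryy)) for n, r, rxx, ryy in studies]
    return sum(n * rc for n, rc in corrected) / total_n

print(f"r  = {pooled_r(studies):.3f}")
print(f"rc = {pooled_rc(studies):.3f}")
```

As expected, the corrected estimate rc exceeds the uncorrected r, because disattenuation removes the downward bias that unreliable measures impose on observed correlations.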


Effort expectancy
The perceived ease associated with the use of the AI-CDSS (Venkatesh et al., 2003)
- Perceived ease of use: The perception that using the AI-CDSS would be free of effort (Davis, 1989)
- Ease of mastery: The perception that doctors could quickly master the use of the AI-CDSS in medicine (Tamori et al., 2022)
- User-friendliness: The perception that doctors could easily operate the AI-CDSS in medical settings (Tamori et al., 2022)

Social influence
The perception that important others believe that the AI-CDSS should be used (Venkatesh et al., 2003)
- Expectations of others: Perceived optimism of people around the health care practitioner regarding the potential of the AI-CDSS (Tamori et al., 2022)
- Expectations among patients: Perceived optimism of patients regarding the potential of the AI-CDSS (Tamori et al., 2022)

Facilitating conditions
The perception that an organizational and technical infrastructure exists to support the use of the AI-CDSS (Venkatesh et al., 2003)

Positive attitude
An individual's overall positive affective reaction to using the AI-CDSS (Venkatesh et al., 2003)

Optimism
The perceived extent to which the AI-CDSS provides more control, flexibility, and efficiency (Hsieh, 2023)

Trust
The belief that the AI-CDSS will act cooperatively to fulfill expectations without exploiting vulnerabilities (Venkatesh et al., 2011)
- Initial trust: Trust in the performance and efficacy of an unfamiliar AI-CDSS that has not been used before (McKnight, 2005)
- Trust in system ability, integrity, benevolence: The belief that the AI-CDSS has the ability, integrity, and benevolence needed in providing services (Cornelissen et al., 2022)
- Trust in system competence, benevolence, willingness, reciprocity (Gulati et al., 2018)

Risk
Perceived potential negative consequences associated with the use of the AI-CDSS, including performance failure, data insecurity, and additional workload (Zhai et al., 2021)
- Privacy concerns: The concern about the potential disclosure or sharing of personal information by the AI-CDSS with third parties without explicit consent or authorization (Brady et al., 2021)
- Medico-legal risk: The concern surrounding potential legal liability arising from the use of the AI-CDSS, including inadequate protection, ambiguity in assigning responsibility for damages, and manufacturers' attempts to shield themselves from legal liability (Prakash & Das, 2021)
- Performance risk: The concern regarding the performance of the AI-CDSS, including doubts about its reliability, level of benefits, potential diagnostic errors, and perceived technical immaturity (Prakash & Das, 2021)
- Concern about data leakage: The level of concern regarding the potential leakage of personal data resulting from the use of the AI-CDSS (Tamori et al., 2022)

Concern about accountability and liability
The level of concern regarding who would be accountable or liable in case of accidents or errors resulting from the use of the AI-CDSS (Tamori et al., 2022)
- Perceived unregulated standard: The belief that regulatory standards and guidelines to assess the algorithmic safety of the AI-CDSS are yet to be formalized (Esmaeilzadeh, 2020)

AI anxiety
The fear and intimidation experienced by an individual during their interaction with an AI-CDSS, including fear of losing information and making irreversible mistakes (Venkatesh et al., 2003)

Innovativeness
The willingness of an individual to try out a new innovation (Agarwal & Prasad, 1998)

Note. The superordinate constructs were used as constructs in the meta-analysis. The subconstructs were matched to the superordinate constructs.

Note (Table S5). k = number of independent samples; N = cumulative sample size; r = sample size-weighted correlation; rc = sample size-weighted and reliability-corrected correlation; SDc = standard deviation of rc; CI = confidence interval for rc; CR = credibility interval.

Table S2. Inclusion criteria per included study

Table S3. Search terms per database

Table S4. Construct and subconstruct definitions

Table S5. Pooled meta-analytic correlations and number of samples per correlation