Understanding Health Care Students’ Perceptions, Beliefs, and Attitudes Toward AI-Powered Language Models: Cross-Sectional Study

Background: ChatGPT was not intended for use in health care, but it has potential benefits that depend on end-user understanding and acceptability, which is where health care students become crucial. Research in this area remains limited. Objective: The primary aim of our study was to assess the frequency of ChatGPT use, the perceived level of knowledge, the perceived risks associated with its use, and the ethical issues, as well as attitudes toward the use of ChatGPT in the context of health care education. In addition, we aimed to examine whether there were differences across groups based on demographic variables. The second part of the study aimed to assess the association between the frequency of use, the level of perceived knowledge, the level of risk perception, and the level of perception of ethics as predictive factors for participants’ attitudes toward the use of ChatGPT.


Background
Artificial intelligence (AI) and machine learning technologies have transformed various sectors of contemporary society, including health care [1]. Among these developments, AI-powered large language models (LLMs) such as OpenAI's ChatGPT have shown significant promise in revolutionizing numerous aspects of health care services [2]. ChatGPT is a variation of OpenAI's language model that generates humanlike writing in a conversational setting [3].
As of January 2023, ChatGPT's user base exceeded 100 million [4]. While ChatGPT was not originally intended for application in health care settings, it is possible that some of these users are students or health care practitioners [5]. Consequently, the insights derived from their interactions with ChatGPT may offer valuable information on patient communication, information management, electronic health records, diagnostics, decision-making assistance, and, potentially, therapeutic interventions [6].
LLMs have been shown to be beneficial to health care provision [7]. ChatGPT has demonstrated strong, human-level performance supporting decision-making, data management, and patient education in many specialties, such as internal medicine, surgery, and oncology [8,9]. The upcoming generations of health professionals comprise students who are trained in conditions with plentiful, easily accessible technology resources [10]. Some students may assume roles as directors of health institutes, whereas others may engage in research or work as health care professionals. Nevertheless, it is crucial to recognize that the quality of education received will directly impact the caliber of future professionals. Consequently, it is imperative to understand how students think about the use of tools such as LLMs. This comprehension is essential in determining how these tools can either enhance or fail to enhance their academic and educational competencies as well as their subsequent professional practice [11].

Objectives
In light of this, the primary aim of our study was to assess the frequency of ChatGPT use, the perceived level of knowledge, the perceived risks associated with its use, and the ethical issues, as well as attitudes toward the use of ChatGPT in the context of health care education. The second part of the study aimed to assess the association between the frequency of use, the level of perceived knowledge, the level of risk perception, and the level of perception of ethics as predictive factors for participants' attitudes toward the use of ChatGPT.

Sample Size Calculation
The sample size for this study was calculated using the following formula: n = (Estimated Design Effect Factor × N × p × [1 − p]) / ([d²/Z²(1−α/2)] × [N − 1] + p × [1 − p]). Accounting for a population size of 1 million, a hypothetical frequency of 50% with a 5% margin of error, and a confidence level of 99.99%, the calculated sample size was 1512.
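As an arithmetic check, the formula can be evaluated directly. The sketch below (function and argument names are ours, not the study's) reproduces the reported figure of 1512:

```python
import math
from statistics import NormalDist

def sample_size(N, p=0.5, d=0.05, confidence=0.9999, deff=1.0):
    """Finite-population sample size:
    n = DEFF * N * p * (1 - p) / ((d^2 / Z^2) * (N - 1) + p * (1 - p)),
    where Z is the two-sided critical value Z_(1 - alpha/2)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n = deff * N * p * (1 - p) / ((d**2 / z**2) * (N - 1) + p * (1 - p))
    return math.ceil(n)

# Population of 1 million, 50% hypothesized frequency, 5% margin of error,
# 99.99% confidence level
print(sample_size(N=1_000_000))  # 1512
```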

Recruitment
Our study focused on individuals aged >18 years enrolled in diverse health care-related college programs such as medicine, nursing, dentistry, nutrition and dietetics, and medical laboratory science. Through a convenience sampling method, we gathered responses from 2661 participants. We adopted a multifaceted recruitment approach to ensure a varied sample of health care students. We reached out to potential participants through email, student networks, social media, on-campus events, academic institutions, and student associations.
We expanded our sample by including universities across the Americas, specifically in Argentina, Mexico, Colombia, Chile, and Ecuador.By disseminating study links to these institutions, we achieved a diverse representation of health care students from different countries and fields.

Bias
To minimize potential biases, we adopted a comprehensive recruitment strategy targeting a wide range of universities across the Americas, hence reducing selection bias. Response bias was mitigated by conducting anonymous surveys, encouraging honest responses from the participants. In addition, to limit information bias, the survey questions were designed to be straightforward and used standardized Likert-scale responses.

Questionnaire
The questionnaire was developed following the recommendations by Passmore et al [12] and Eysenbach [13]. A steering committee composed of 4 experts and heads from 4 specialized centers worldwide reviewed the literature and developed the survey items, which integrated all constructs to be assessed. The first section of the survey gathered the demographics and medical education of the participants. The second section of the survey aimed to assess the students' perceptions, attitudes, patterns of use, and further learning regarding ChatGPT.
The perception domain was further categorized into self-perceived knowledge, ethics, and beliefs of perceived risk subdomains. The subdomain of self-perceived knowledge was assessed on a 5-point Likert scale ranging from 1 (no knowledge) to 5 (superior knowledge). The scale of self-perception of knowledge about ChatGPT was recategorized as follows: (1) "No knowledge"-this category included participants who either answered "No" to the question "Have you heard of ChatGPT before?" or selected "No Knowledge" in response to the question "How would you rate your knowledge of ChatGPT and its applications in health care?"; (2) "Minimal knowledge"-participants falling into this category included those who answered with options such as "Minimal" or "Basic knowledge" on the Likert scale; and (3) "Adequate knowledge"-this category encompassed participants who selected options such as "Adequate" or "Superior" knowledge on the Likert scale.
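The recategorization rule can be expressed as a small helper function. This is an illustrative sketch; the response labels below are paraphrased from the questionnaire options rather than quoted verbatim:

```python
def recode_knowledge(heard_of_chatgpt, self_rating=None):
    """Collapse the screening question plus the 5-point self-rated
    knowledge scale into the three analysis categories."""
    if not heard_of_chatgpt or self_rating == "No knowledge":
        return "No knowledge"
    if self_rating in ("Minimal", "Basic"):
        return "Minimal knowledge"
    if self_rating in ("Adequate", "Superior"):
        return "Adequate knowledge"
    raise ValueError(f"Unexpected rating: {self_rating!r}")

print(recode_knowledge(False))             # No knowledge
print(recode_knowledge(True, "Basic"))     # Minimal knowledge
print(recode_knowledge(True, "Superior"))  # Adequate knowledge
```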
The ethical perception subdomain featured 3 items, which respondents were asked to score on a 5-point Likert scale ranging from 1 (totally unethical) to 5 (totally ethical). The beliefs of perceived risk subdomain had 3 items, which respondents were asked to score on a 5-point Likert scale (1 [strongly disagree] to 5 [strongly agree]). The attitude domain included 5 statements reflecting evaluations and opinions on ChatGPT. On a 5-point Likert scale, respondents were asked to score these statements (1 [strongly disagree] to 5 [strongly agree]). The domain of further learning consisted of 4 questions inquiring as to whether respondents wanted to learn more about ChatGPT. Respondents were asked to choose the resources or educational materials that they believed would be the most beneficial in learning about ChatGPT and its potential applications in health care. Those who did not want to learn more about ChatGPT were requested to explain their reasons.
In total, 2 questions assessed the "Pattern of Use" domain: one assessing the frequency of use using a 5-point Likert scale ranging from 1 (less than once a month) to 5 (more than once a day) and one assessing the applications of ChatGPT in health care settings with a choice of 8 alternatives.
The questionnaire is shown in Multimedia Appendix 1. A pilot study was performed by the steering committee with colleagues and a sample of 20 students. After the survey was drafted, it was distributed to the study population in May and June 2023. The survey was available in English and Spanish.

Ethical Considerations
Ethics approval was obtained from the Human Research Ethics Committee from Ecuador with approval HCK-CEISH-2022-006. All participants provided informed consent to take part in the study. They were informed about the purpose of the research, their rights as participants, and the voluntary nature of their participation. We ensured the privacy and confidentiality of participant data throughout the study. The survey responses were anonymized, and no personally identifiable information was collected. No compensation was provided to participants for their involvement in the study. It is important to note that the approval obtained from the Human Research Ethics Committee in Ecuador was deemed sufficient to expand recruitment to all Latin American countries included in the study. This decision was made based on the similarity of ethical standards and regulations across these countries, as well as the collaborative nature of the research conducted within the region.

Demographic Variables
The demographic variables selected for this study are pivotal for examining the diversity of health care students' attitudes toward using ChatGPT. They are used in both the descriptive (for sample composition purposes) and regression (as control variables) tables. Each variable is coded to capture the nuanced differences among the survey participants, facilitating a detailed analysis of their responses.
Age was recorded as a continuous variable. This allowed for precise analysis of trends across different age groups, helping identify whether younger students are more adept with and receptive to AI technologies such as ChatGPT compared to their older counterparts [14].
Gender was categorized into several groups: male, female, nonbinary or third gender, prefer not to say, and other. This categorization ensured that the study could address and respect the diversity of gender identities. It allowed for an analysis of whether perceptions of ChatGPT vary significantly across different gender groups, which could indicate targeted approaches for technology integration based on gender-specific preferences or concerns [15].
The type of university was divided into public and private. This classification helped investigate whether the institutional context influences students' familiarity with and attitudes toward ChatGPT. Differences in resources, exposure to technology, and educational priorities between public and private universities might contribute to distinct attitudes observed among the students from these institutions [16].
Region was split into Central America and South America. By distinguishing between these 2 regions, the study could explore regional differences that might affect students' acceptance and use of AI technologies. Such differences could stem from varying levels of technology integration in health care education, regional cultural attitudes toward technology, and economic factors [17].
The field of study was specified as medicine, nursing, nutrition, dentistry, therapy, psychology, pharmacology, and other. This detailed categorization allowed the study to determine whether students in certain fields are more likely to perceive ChatGPT as a beneficial tool [18]. For instance, fields requiring up-to-date information and quick data retrieval might show higher appreciation for AI assistance compared to fields that are more focused on personal patient interactions [19].

Outcome Variables
The outcomes of this study focused on health care students' attitudes toward using ChatGPT, quantified through a series of statements. These statements were designed to capture various dimensions of the perceived utility and reliability of ChatGPT in health care contexts. Each outcome variable was measured using Likert scales ranging from "strongly disagree" to "strongly agree" in order to have a granular view of respondents' attitudes and, through detailed statistical analysis, assess trends and influences on these perceptions. Specifically, the outcomes assessed were (1) "I think that ChatGPT makes my job easier."-this statement evaluated the perceived practical utility of ChatGPT in simplifying tasks within health care settings; (2) "ChatGPT can be beneficial in health care settings."-this statement assessed broader benefits, looking at whether students believe ChatGPT can positively impact health care environments; (3) "ChatGPT provides trustworthy health care information or guidance."-this statement measured trust in the accuracy and reliability of the information provided by ChatGPT; (4) "ChatGPT is a useful tool when I need to search for information on specific medical questions."-this statement evaluated the usefulness of ChatGPT as a resource for specific, actionable medical inquiries; and (5) "ChatGPT is a useful tool when I need to search for medical literature."-this outcome explored the utility of ChatGPT in supporting academic and professional research within medical fields.
Focusing on these specific attitudes toward using ChatGPT helps us understand how health care students perceive the integration of AI into their practices. The statements target various dimensions of AI's role, from enhancing efficiency and providing reliable information to supporting academic research, highlighting areas where ChatGPT could be particularly impactful or face resistance. This nuanced approach not only sheds light on current acceptance levels but also pinpoints areas where further education or system improvements might increase trust in and the utility of AI applications within health care environments.

Overview
In this study, several key predictor variables were used to explore the factors influencing health care students' attitudes toward using ChatGPT. These predictors included knowledge of ChatGPT, perceptions of risk, ethical considerations, and the frequency of use of ChatGPT. A detailed overview of each predictor is presented in the following sections.

Knowledge About ChatGPT
For the regression model, this predictor measured the participants' self-reported knowledge about ChatGPT, assessing their understanding of its functionalities and potential applications in health care. It was quantified using a 5-point Likert scale ranging from 1 (no knowledge) to 5 (superior knowledge). The understanding of ChatGPT's functionalities and potential applications is crucial as it directly influences how students perceive its utility and limitations [20]. Higher levels of knowledge might correlate with more positive attitudes as students are better able to appreciate the benefits and manage the limitations of AI in health care [21].

Beliefs of Perceived Risk
This variable is a composite score derived from the median of the agreement on a 5-point scale with three specific statements assessing perceived risks associated with AI: (1) "I think my job could be replaced in the future because of AI," (2) "In the future, ChatGPT (or some similar technology) will play an even more important role in my job," and (3) "Using AI like ChatGPT in clinical practice raises ethical concerns." Perceptions of risk are vital to consider because they shape how students weigh the advantages against the potential drawbacks of using AI technologies [22]. Concerns about job security, the increasing role of AI in health care, and ethical implications could negatively influence their attitudes toward ChatGPT, making it essential to analyze how these perceptions impact their overall acceptance [23].
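For illustration, the composite for a single respondent can be computed as the median of the three item scores (the function and argument names are ours, not the study's):

```python
from statistics import median

def risk_belief_score(job_replaced, ai_bigger_role, ethical_concerns):
    """Composite beliefs-of-perceived-risk score: the median of the three
    5-point Likert items (1 = strongly disagree ... 5 = strongly agree)."""
    return median([job_replaced, ai_bigger_role, ethical_concerns])

# A respondent who somewhat disagrees, somewhat agrees, and strongly agrees
print(risk_belief_score(2, 4, 5))  # 4
```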

Ethics
The ethical factors were assessed by calculating the median score of respondents' level of agreement, on a 5-point scale ranging from 1 (totally unethical) to 5 (totally ethical), with the following three statements addressing ethical concerns about using AI in health care: (1) "Revising the language of a scientific manuscript?" (2) "Writing text in a scientific manuscript?" and (3) "The sole source of information for the clinical practice?"

Ethical considerations are paramount in the adoption of any new technology, especially in sensitive fields such as health care. Evaluating how students perceive the ethical dimensions of using ChatGPT for tasks such as manuscript writing or as a clinical information source can provide insights into the ethical acceptability of AI tools in professional health care practices [24].

Frequency of Use
The frequency of use was directly measured by asking participants how often they used ChatGPT, with options on a 5-point Likert scale ranging from 1 (less than once a month) to 5 (more than once a day). The frequency of use is indicative of both familiarity and dependency on the technology. Regular use of ChatGPT might suggest greater comfort and perceived utility, possibly leading to more favorable attitudes [25]. Conversely, infrequent use might indicate skepticism or perceived inadequacies in the technology's ability to meet professional needs [26].

Descriptive Analysis
In the descriptive analysis, we examined the demographic information and survey responses of the participants. This part of the analysis comprised 2 main components. First, the demographic characteristics of the participants were assessed and stratified according to the participants' self-rated knowledge of AI. These categories of knowledge were "No knowledge," "Minimal Knowledge," and "Adequate Knowledge." Demographic variables such as age, gender, type of university (public vs private), region, and major were analyzed across these knowledge strata. Statistical significance for differences across the knowledge categories was tested using a chi-square test for categorical variables and an ANOVA for continuous variables, with a P value of <.05 indicating statistical significance.
In the second part of the descriptive analysis, given the ordinal nature of the variables, we assessed the range, median, and IQR of scores for each item in the survey. The survey items were grouped into 3 primary domains: perception, ethics, and attitudes, with the perception domain further divided into 2 subdomains: knowledge and beliefs of perceived risk. In addition, the frequency of use of ChatGPT for various tasks was analyzed. Each item was assessed on a Likert scale ranging from 1 to 5 except for the use tasks, which were reported as percentages. The total median scores for each domain and subdomain were calculated and included in the report. This analysis helped provide a clear picture of the participants' perceptions, ethical considerations, attitudes, and use habits related to ChatGPT.
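For illustration, both significance tests described above can be run with SciPy. The counts and ages below are invented stand-ins for the sketch, not the study's data:

```python
import numpy as np
from scipy import stats

# Illustrative contingency table: gender (rows) x knowledge category (columns)
table = np.array([[277, 300, 298],   # male: No / Minimal / Adequate knowledge
                  [858, 560, 347]])  # female
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square={chi2:.1f}, df={dof}, P={p_chi:.2g}")

# Age (continuous) compared across the three knowledge strata with one-way ANOVA
rng = np.random.default_rng(0)
ages = [rng.normal(loc, 3, 200) for loc in (21, 22, 23)]
f_stat, p_anova = stats.f_oneway(*ages)
print(f"F={f_stat:.1f}, P={p_anova:.2g}")
```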

Regression Analysis
Our analysis of the impact of perception scores on attitude variables involved the use of multiple ordinal logistic regression models. Each model evaluated the attitudes of health care students toward the use of ChatGPT, with individual attitude statements serving as dependent variables. These statements included perceptions of ChatGPT in terms of its ease of use, its utility in health care settings, the trustworthiness of its health information, its usefulness in finding answers to specific medical questions, and its helpfulness in searching for medical literature.
For each attitude statement, three perception subdomains were considered as independent variables: knowledge, beliefs of risk, and ethical considerations. The coefficient, SE, 1-tailed t test, and P value were all calculated for each perception subdomain under each attitude statement. All models were adjusted for control variables, including gender, whether the institution attended was private or public, the field of study, and the country of the student. All analyses were carried out using Stata (version 18.0; StataCorp).

Missing Data
Although our web-based survey, which required complete responses, effectively eliminated the need to handle missing data, the self-selecting nature of web-based surveys could introduce some bias. Participants more comfortable with or having better access to technology might be overrepresented. However, the completeness of the data set ensured the accuracy of our analysis and the robustness of the findings.

Sensitivity Analyses
In the analytical procedure, we used a set of 20 ordinal logistic regression models. Importantly, SEs were clustered by country to account for potential intracountry correlations. The proportional odds assumption, pivotal for the conventional interpretation of ordinal logistic regression, was violated in half (10/20, 50%) of these models. This breach was primarily attributed to the coefficient of the main predictor in the affected models.
To address this violation and offer a more fitting statistical representation, we used the partial proportional odds model for instances in which the main predictor was unconstrained. Even after this adjustment, our results suggested that the interpretation did not differ significantly from models in which every coefficient was constrained, even when faced with assumption violations. Given this negligible difference in interpretation, and in the interest of consistency, we chose to present the outcomes of all models using ordinal logit with all coefficients constrained.
For further refinement of our analysis, and to account for potential clustering effects, we introduced random-intercept and random-slope models. In this setup, schools were treated as nested entities within countries. This multilevel modeling approach produced results that differed only minimally from those of our initial models, underscoring the reliability of our findings.
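statsmodels offers no ordinal multilevel estimator, so the following sketch illustrates the random-intercept idea with a linear mixed model on simulated data, grouping students by school only (the study's specification additionally nested schools within countries and used ordinal outcomes):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 20 schools, 30 students each, with a
# school-level random intercept added to each student's attitude
rng = np.random.default_rng(7)
rows = []
for school, u in enumerate(rng.normal(0, 0.5, 20)):
    for _ in range(30):
        k = rng.integers(1, 6)
        rows.append({"school": school, "knowledge": k,
                     "attitude": 0.3 * k + u + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Random intercept per school; fixed effect of knowledge on attitude
res = smf.mixedlm("attitude ~ knowledge", df, groups="school").fit()
print(res.params["knowledge"])  # should recover roughly the true slope of 0.3
print(res.cov_re)               # estimated school-level intercept variance
```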

Perception of Knowledge, Beliefs of Perceived Risks, and Ethics
Among all participants, 42.92% (1142/2661) did not know about ChatGPT. A higher proportion of male students than female students knew about ChatGPT (598/875, 68.3% vs 907/1765, 51.39%; P<.001). Most of the participants who had adequate knowledge of ChatGPT were from South America. With the exception of medicine and therapy students, most health care students were unaware of ChatGPT (Table 1).
Table 2 presents findings from our survey assessing participants across multiple domains related to their perception, attitudes, and use of AI, with a particular focus on ChatGPT. In the "Perception" domain, participants were queried about their knowledge, with scores ranging from 1 to 5. They reported a median score of 2.00, which implies minimal knowledge of ChatGPT. Delving into beliefs about the perceived risk linked to AI, respondents "somewhat agreed" that using ChatGPT raises potential ethical concerns and that AI will play a more important role in their jobs in the future.
Moving to the "Ethics" domain, participants considered the use of ChatGPT for writing text within a scientific manuscript and using ChatGPT as the sole information source for clinical practice "neither ethical nor unethical." In terms of "Attitudes" toward ChatGPT, the median score was 4.00 among all statements, showing that most participants "somewhat agreed" with the advantages and utility of ChatGPT in health care contexts.
The "Use" domain had respondents spotlight the frequency with which they engaged with ChatGPT, reporting a median score of 2.00 (once a month) on a scale of 1 to 5, with an IQR of 1.00-3.00.Regarding distinct tasks, most participants used ChatGPT for homework support (1078/1519, 70.97%), research paper writing (637/1519, 41.94%), and medical and health care education (349/1519, 22.98%); for more information, see Multimedia Appendix 3.

Further Learning Regarding ChatGPT
Of the participants willing to learn more about ChatGPT, 67.98% (1809/2661) wanted to learn about the applications of ChatGPT in particular cases of medical practice, followed by homework support and understanding the benefits and limits of ChatGPT (Table 3). Less than 30% (745/2661, 27.99%) were interested in learning about "data privacy and security measures" and "ethical considerations." Participants found that the most interesting educational materials for learning more about this topic were research articles and case studies (426/2661, 69.16%), internet-based demonstrations or hands-on experience (1301/2661, 48.91%), workshops or conferences (1211/2661, 45.52%), and webinars or web-based courses (968/2661, 36.37%).

Association Between Perception (Knowledge, Belief, and Ethics) and Frequency of Use and Attitude
The ordinal logistic regression analysis (Tables 5 and 6) illustrates the relationship between predictors such as knowledge, beliefs about risks, ethics, frequency of use, age, gender, institution type, and professional background and their impact on health care students' perceptions of ChatGPT's utility.
An enhanced understanding of ChatGPT consistently showed a positive correlation with more favorable views across all outcomes. For instance, as knowledge increased, the odds of believing that ChatGPT makes one's job easier went up, with odds ratios (ORs) ranging from 1.259 (95% CI 1.047-1.513) to 1.468 (95% CI 1.289-1.672). This trend persisted across other perceptions, such as ChatGPT's potential benefits in health care settings and its trustworthiness in providing health care information.
Beliefs about risk followed a distinctive pattern. Those with heightened risk beliefs felt that ChatGPT made their job easier and could play a beneficial role in health care settings, including obtaining information on medical questions and as a tool for searching medical literature, as evidenced by ORs of 2.040 (95% CI 1.765-2.358), 1.106 (95% CI 1.031-1.186), 1.179 (95% CI 1.110-1.255), and 1.138 (95% CI 1.076-1.203), respectively. This finding suggests that recognizing potential risks does not negate belief in the tool's utility. Ethical considerations played a significant role. Students with higher ethical concerns perceived ChatGPT's potential in health care more favorably. The ORs for these associations were notable, especially in the context of trustworthiness and specific medical queries (OR 1.620, 95% CI 1.498-1.752).
The frequency of ChatGPT use was a significant determinant. Regular users were more optimistic about its utility, which was evident across all outcomes, such as its benefits in health care (OR 1.540, 95% CI 1.420-1.670) and its efficacy in searching for medical information (OR 1.438, 95% CI 1.311-1.577).
Age influenced perceptions. Older individuals generally had a higher OR across the outcome variables, suggesting a more positive perception of ChatGPT's utility in their profession. Gender-based analysis revealed that female individuals, compared to male individuals, were generally more likely to believe that ChatGPT can help in their job. However, perceptions varied when it came to broader benefits in health care and other outcomes. Those identifying as nonbinary or third gender or those who preferred not to specify their gender showed diverse perceptions, sometimes differing from those of both male and female individuals.
Institutional type and major played a role. Individuals from private institutions, compared to their public institution counterparts, had varied perceptions. Students from nursing and nutrition exhibited unique outlooks on ChatGPT, highlighting the influence of professional background on shaping perceptions.
a Observations: predictor (knowledge): "I think that ChatGPT makes my job easier" n=863, "ChatGPT can be beneficial in health care settings" n=1513, "ChatGPT provides trustworthy health care information or guidance" n=1507, "ChatGPT is a useful tool when I need to search for information on specific medical questions" n=1501, and "ChatGPT is a useful tool when I need to search for medical literature" n=1490. Predictor (beliefs of risk): "I think that ChatGPT makes my job easier" n=861, "ChatGPT can be beneficial in health care settings" n=860, "ChatGPT provides trustworthy health care information or guidance" n=856, "ChatGPT is a useful tool when I need to search for information on specific medical questions" n=854, and "ChatGPT is a useful tool when I need to search for medical literature" n=849.
b OR: odds ratio.
a Observations: predictor (ethics): "I think that ChatGPT makes my job easier" n=863, "ChatGPT can be beneficial in health care settings" n=1513, "ChatGPT provides trustworthy health care information or guidance" n=1507, "ChatGPT is a useful tool when I need to search for information on specific medical questions" n=1501, and "ChatGPT is a useful tool when I need to search for medical literature" n=1490. Predictor (frequency of use): "I think that ChatGPT makes my job easier" n=863, "ChatGPT can be beneficial in health care settings" n=861, "ChatGPT provides trustworthy health care information or guidance" n=860, "ChatGPT is a useful tool when I need to search for information on specific medical questions" n=858, and "ChatGPT is a useful tool when I need to search for medical literature" n=853.
b OR: odds ratio.

Principal Findings
The aim of this study was to determine the perception, attitudes, and uses of ChatGPT among health care students, as well as their willingness to learn more about it. Given that chatbots powered by AI are widely accepted by students [27], our findings provide critical insights into the possibilities of integrating them into undergraduate health care teaching programs. More than half (1419/2661, 53.32%) of the participants knew about ChatGPT according to our data, with male students being more knowledgeable than female students. In May 2023, the Pew Research Center released the findings of a web-based study showing that 33% of young people had never heard of ChatGPT, compared to our result of 42.92% (1142/2661), and that most participants felt that they knew little to nothing about ChatGPT [28]. According to the study by Buabbas et al [29], 84% of Kuwaiti medical students did not have any training on the use of AI. It is worth noting that >80% of our participants (2160/2661, 81.17%) indicated an interest in learning more about ChatGPT's health care applications, with time restrictions being the primary barrier to learning more for 39.98% (1064/2661) of them.
Despite the widespread use of AI chatbots such as ChatGPT for self-diagnosing illnesses (up to 78%) [30] and the recognition of the value and user-friendliness of the information they provide, health care students in the Americas maintained a neutral stance on whether ChatGPT will replace their jobs.

They neither agreed nor disagreed with the notion. This aligns with the findings of the studies by Buabbas et al [29] and Moldt et al [31], where 78.7% and 83% of participants, respectively, expressed skepticism about AI eventually replacing the roles of physicians in the future. Only 22.98% (349/1519) of our students reported using AI for medical and health care education and training, but >70% (1101/1519, 72.48%) said that they used it for homework support. Although some colleges prohibit the use of ChatGPT and consider it plagiarism [32], teachers are investigating its utility during learning. For example, the students of Mullen [33] used ChatGPT to improve the quality of an essay in English (their nonnative language), and the participants felt that the experience left them better equipped to produce future academic output without the use of these tools.
Our study revealed that health care students displayed positive attitudes and acceptance toward ChatGPT and that most were willing to learn more about it, similar to the studies by Buabbas et al [29] and Moldt et al [31]. Although we did not inquire about the specific version of ChatGPT used by participants, and although ChatGPT's primary function is not to serve as a web search engine, it is evident that, within the context of higher education, particularly in the field of health, there has been a significant increase in the adoption of disruptive technologies [34], including ChatGPT, as both formal and informal tools for enhancing skills and achieving educational objectives [35].
Respondents perceived ChatGPT as a valuable tool in health care settings, highlighting its usefulness in answering specific medical questions and facilitating access to relevant literature. Interestingly, attitudes toward ChatGPT appeared to be influenced by participants' self-perceived knowledge about the chatbot: those with a better understanding of ChatGPT tended to perceive it as providing trustworthy health care information or guidance. Notably, participants' willingness to use ChatGPT in the health care setting is heavily influenced by the level of trust they place in the system [6]. Interestingly, we found a significant association between increased perceived risk scores and the attitude statement "ChatGPT provides trustworthy health care information or guidance." Establishing trust is crucial to ensuring the responsible and effective use of ChatGPT, thereby maximizing its benefits while mitigating the associated risks.
Indeed, this study revealed that users' attitudes toward ChatGPT are positively influenced by the frequency of use. Individuals who used ChatGPT more frequently were more likely to believe that ChatGPT makes their job easier and to find it beneficial in health care settings, as well as to consider it a useful tool for searching specific medical questions and the medical literature. Although students were somewhat concerned about the perceived risks and ethical implications of using ChatGPT, they still used it about once a month, especially for homework support, research paper writing support, medical or health care education and training, and mental health support. Our findings differ from previous research: Firaina and Sulisworo [36] found that most respondents preferred frequent use of ChatGPT.
Despite the many changes that have occurred in medicine over the last few decades, medical education is still largely based on traditional teaching methods [37,38]. The release of ChatGPT caused concerns and debates in health care due to ethical issues, misinformation, misuse, and challenges in practice and academic writing. Concerns include the quality and dependability of medical information, the transparency of the chatbot model, the ethics of handling user information, and potential biases in ChatGPT's algorithms [35]. While several studies have demonstrated ChatGPT's ability to answer medical questions [39-42], many correct answers have been deemed inadequate [39,40].

Limitations
Our study has several limitations that must be considered when interpreting the results. First, our sampling strategy did not capture all health care students from the Americas. Despite our efforts to include universities across the Americas, we encountered a limited recruitment response from Central America. This low number may limit the representativeness of our findings for this region; the findings from Central America should therefore be considered preliminary and require validation through larger-scale research conducted there. Second, this study was cross-sectional in nature, and we therefore cannot establish causality among perceptions, beliefs, ethics, and attitudes; longitudinal studies are needed to determine the temporal relationships among these variables. Third, although 2 versions of ChatGPT (3.5 and 4.0) were available during the course of this study, the participants were not specifically queried on which version they used. However, given their status as students, it can be reasonably deduced that they predominantly used the free version rather than the premium one. The disparities between the 2 versions lie mostly in the payment requirement associated with version 4.0, which reportedly offers enhanced safety measures, more helpful responses, and a better comprehension of the contextual nuances of the posed queries. On the basis of these findings, some concerns emerge regarding students' potential informal use of ChatGPT within their educational institutions despite the absence of its official integration as a disruptive technological tool in their educational systems. It is also possible that academic institutions are already incorporating this technology into their instructional settings. At present, questions on this subject remain unanswered. However, these findings indicate potential gaps in knowledge, warranting an assessment of whether the acquired information satisfies minimum quality criteria in the health field and possesses genuine value for training competent professionals in the near future.

Conclusions
The current debate revolves around the potential advantages and disadvantages of incorporating ChatGPT and other LLMs into the teaching and learning process. The age of AI has arrived, and it is important to be aware of how it may be used and misused. The future of research in health care education looks bright thanks to the integrity that drives the vast majority of researchers. A medical educator must remain current with the rapid advancements in technology and consider how they affect

Table 2. Range, median, and IQR of the scores of the survey domains a.

Table 3. Further learning domain showing aspects of ChatGPT and its applications in health care that students are more interested in learning about (N=2661).

Table 4. Reasons for lack of interest in learning more about ChatGPT and its potential applications in health care (N=2661).
a AI: artificial intelligence.

Table 5. Estimates from ordinal logistic regression models for the effect of perception scores on attitude variables a.

Table 6. Estimates from ordinal logistic regression models for the effect of perception scores on attitude variables (continuation) a.