How does the degree of anthropomorphism of health chatbots affect the public's willingness to seek help from them? Empirical research using HBM
DOI: https://doi.org/10.62787/mhm.v3i1.130

Keywords: chatbot anthropomorphism, health beliefs, depression, health chatbot

Abstract
Health chatbots are now widely applied in the medical field, yet most related research centers on self-diagnosis of users' physical conditions, while the psychological dimension is rarely discussed. Unlike real doctors, health chatbots are more accessible, more convenient, and less stigmatizing. This study therefore develops a structural equation model linking the degree of chatbot anthropomorphism to users' willingness to seek help from chatbots, drawing on the health belief model, the technology acceptance model, and privacy calculus theory, and thereby helps fill the gap in research on human-machine communication within health communication. The results show that a higher degree of chatbot anthropomorphism significantly increases users' perceived benefits and reduces their privacy concerns, thereby increasing their willingness to seek help. Interestingly, greater anthropomorphism also heightens users' perceived severity and perceived susceptibility, which may reflect the particular nature of mental health problems: the more the chatbot resembles a real person, the more likely it is to aggravate users' tension and anxiety. The study also finds that Chinese users generally show low acceptance of psychological counseling chatbots, and that chatbots are not their first choice for help when experiencing psychological problems. In the future, the professional quality of health chatbots' psychological counseling functions should be further improved and more widely publicized, so that more people troubled by psychological problems can benefit.
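For readers unfamiliar with how such a structural equation model is typically specified, the sketch below illustrates one possible specification of the paths implied by the abstract (anthropomorphism predicting perceived benefits, privacy concerns, perceived severity, and perceived susceptibility, which in turn predict help-seeking intention) using the Python semopy package. The indicator names, the number of items per construct, and the simulated responses are hypothetical placeholders, not the paper's actual instrument or dataset.

```python
# A minimal, hypothetical sketch of the hypothesized SEM, using the semopy package.
# Indicator names (anth1..int3) and the simulated Likert data are placeholders;
# in the actual study they would be the survey items and respondents' answers.
import numpy as np
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model: each latent construct measured by three hypothetical items
Anthropomorphism        =~ anth1 + anth2 + anth3
PerceivedBenefits       =~ ben1 + ben2 + ben3
PrivacyConcerns         =~ priv1 + priv2 + priv3
PerceivedSeverity       =~ sev1 + sev2 + sev3
PerceivedSusceptibility =~ sus1 + sus2 + sus3
HelpSeekingIntention    =~ int1 + int2 + int3

# Structural model: paths implied by the abstract
PerceivedBenefits       ~ Anthropomorphism
PrivacyConcerns         ~ Anthropomorphism
PerceivedSeverity       ~ Anthropomorphism
PerceivedSusceptibility ~ Anthropomorphism
HelpSeekingIntention    ~ PerceivedBenefits + PrivacyConcerns + PerceivedSeverity + PerceivedSusceptibility
"""

# Placeholder data: random 5-point Likert responses, only so the script runs end to end.
rng = np.random.default_rng(0)
items = [f"{p}{i}" for p in ("anth", "ben", "priv", "sev", "sus", "int") for i in (1, 2, 3)]
data = pd.DataFrame(rng.integers(1, 6, size=(300, len(items))), columns=items).astype(float)

model = Model(MODEL_DESC)
model.fit(data)         # estimates factor loadings and structural path coefficients
print(model.inspect())  # parameter table: estimates, standard errors, p-values
```

In an actual analysis, the random data frame would be replaced by the cleaned survey responses, and the sign and significance of each structural path would be read from the parameter table to test the corresponding hypothesis.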