A Study on Patients’ Privacy in AI Medical Applications from the Perspective of Privacy Calculus
DOI:
https://doi.org/10.62787/jmhm.v4i2.292

Keywords:
AI Medical Applications, Privacy Calculus Theory, Technology Acceptance Model (TAM), Privacy Disclosure Intention

Abstract
[Purpose] This study investigates the mechanism through which users’ perceptions of intelligent medical platforms in AI medical applications influence their privacy concerns and disclosure intentions, mediated by perceived benefits and perceived risks. The findings aim to provide strategic recommendations for privacy regulation and user profiling in the context of smart healthcare. [Method] Drawing on Privacy Calculus Theory, this research constructs the theoretical path “Usage Perception (Perceived Ease of Use, Perceived Usefulness, Perceived Reliability, Perceived Transparency) → Perceived Benefits/Perceived Risks → Privacy Concern.” From a sample of users with experience of AI medical applications, 568 valid survey responses were collected. Reliability and validity tests, along with Structural Equation Modeling (SEM), were conducted in SPSS 26.0 and AMOS 24.0 to test the hypotheses. [Results/Conclusion] The results indicate that Perceived Ease of Use and Perceived Usefulness significantly and positively affect Perceived Benefits while negatively affecting Perceived Risks. Perceived Reliability and Perceived Transparency significantly reduce Perceived Risks but have no significant positive effect on Perceived Benefits. Furthermore, Perceived Benefits negatively influence Privacy Concern, whereas Perceived Risks exert a positive influence. Notably, Privacy Concern positively affects Privacy Disclosure Intention, illustrating the “privacy paradox” in medical contexts. Theoretically, this study extends the boundaries of the Privacy Calculus model in healthcare and enriches the dimensions of the Technology Acceptance Model (TAM) in privacy research, revealing the internal logic of the privacy paradox in intelligent medical scenarios.
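The hypothesized path structure can be illustrated with a minimal simulation. This is not the authors’ SEM analysis (which was conducted in AMOS 24.0 on the real survey data); it approximates each structural path with an ordinary least-squares regression on synthetic standardized data, and every effect size below is invented purely to reproduce the reported signs of the paths:

```python
# Sketch of the path structure "Usage Perception -> Perceived
# Benefits / Perceived Risks -> Privacy Concern" on synthetic data.
# NOT the study's AMOS/SEM analysis; all coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 568  # matches the reported number of valid responses

# Exogenous usage-perception variables (standardized).
peou = rng.standard_normal(n)   # Perceived Ease of Use
pu   = rng.standard_normal(n)   # Perceived Usefulness
prel = rng.standard_normal(n)   # Perceived Reliability
ptra = rng.standard_normal(n)   # Perceived Transparency

# Mediators, generated with signs matching the reported findings:
# PEOU/PU raise benefits and lower risks; reliability and
# transparency lower risks but do not raise benefits.
benefits = 0.4 * peou + 0.4 * pu + 0.5 * rng.standard_normal(n)
risks = (-0.3 * peou - 0.3 * pu - 0.3 * prel - 0.3 * ptra
         + 0.5 * rng.standard_normal(n))

# Outcome: benefits lower privacy concern, risks raise it.
concern = -0.4 * benefits + 0.4 * risks + 0.5 * rng.standard_normal(n)

def ols(y, xs):
    """Least-squares path coefficients of y on the predictors in xs."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

b_benefits = ols(benefits, [peou, pu, prel, ptra])
b_risks = ols(risks, [peou, pu, prel, ptra])
b_concern = ols(concern, [benefits, risks])

print("benefits <-", np.round(b_benefits, 2))
print("risks    <-", np.round(b_risks, 2))
print("concern  <-", np.round(b_concern, 2))
```

Recovering the planted signs from the regressions mirrors, in toy form, what the SEM hypothesis tests check against the survey data: positive paths from ease of use and usefulness to benefits, negative paths from all four perceptions to risks, and opposite-signed paths from benefits and risks to privacy concern.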