Effects of Perception of Potential Risk in Generative AI on Attitudes and Intention to Use
H. S. Sætra, “Generative AI: Here to stay, but for good?,” Technology in Society, vol. 75, p. 102372, Nov. 2023, doi: 10.1016/j.techsoc.2023.102372.
W. Holmes et al., “Ethics of AI in Education: Towards a Community-Wide Framework,” International Journal of Artificial Intelligence in Education, vol. 32, no. 3, pp. 504–526, Apr. 2021, doi: 10.1007/s40593-021-00239-1.
S. Hwang and M.-K. Kim, “An Analysis of Artificial Intelligence (A.I.)-related Studies’ Trends in Korea Focused on Topic Modeling and Semantic Network Analysis,” Journal of Digital Contents Society, vol. 20, no. 8, pp. 1847–1855, Sep. 2019, doi: 10.9728/dcs.2019.20.9.1847.
S. I. Hwang and Y. J. Nam, “The Role of Confidence in Government in Acceptance Intention Towards Artificial Intelligence,” Journal of Digital Convergence, vol. 18, no. 8, pp. 217–224, Aug. 2020.
E. J. Noh, “Current Status and Analysis of Domestic and Foreign Regulations Related to Deepfake,” KISDI Perspectives, no. 4, pp. 1–19, Jun. 2024.
D. Kim, “Foundational Discussion on Research Directions for Image-Generating Artificial Intelligence, with a Focus on Realistic Images,” The Korean Society of Human and Nature, vol. 4, no. 2, pp. 255–276, Dec. 2023, doi: 10.54913/hn.2023.4.2.255.
S. Choi, “Use of Generative Artificial Intelligence for Business College Assignments: A Quantitative and Qualitative Investigation on the Students’ Perceptions of Ethical Justification,” Korean Business Education Review, vol. 39, no. 1, pp. 139–159, Feb. 2024, doi: 10.23839/kabe.2024.39.1.139.
S. Ivanov, M. Soliman, A. Tuomi, N. A. Alkathiri, and A. N. Al-Alawi, “Drivers of generative AI adoption in higher education through the lens of the Theory of Planned Behaviour,” Technology in Society, vol. 77, p. 102521, Jun. 2024, doi: 10.1016/j.techsoc.2024.102521.
H. C. Pham, C. D. Duong, and G. K. H. Nguyen, “What drives tourists’ continuance intention to use ChatGPT for travel services? A stimulus-organism-response perspective,” Journal of Retailing and Consumer Services, vol. 78, p. 103758, May 2024, doi: 10.1016/j.jretconser.2024.103758.
C. Griffy-Brown, B. D. Earp, and O. Rosas, “Technology and the Good Society,” Technology in Society, pp. 1–7, Feb. 2018.
D. R. Cotton, P. A. Cotton, and J. R. Shipway, “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT,” Innovations in Education and Teaching International, pp. 1–56, Jan. 2023.
T. Zhou and C. Zhang, “Examining generative AI user addiction from a C-A-C perspective,” Technology in Society, vol. 78, p. 102653, Sep. 2024, doi: 10.1016/j.techsoc.2024.102653.
H. Yu and Y. Min, “A Study on Intentions to Use Generative AI Chatbot ChatGPT: Adding Affordances to the Technology Acceptance Model,” Journal of Broadcasting and Telecommunications Research, vol. 124, pp. 141–169, Oct. 2023, doi: 10.22876/kjbtr.2023..124.005.
S. Noble, Algorithms of Oppression: How Search Engines Reinforce Racism, New York: New York University Press, 2018.
E. M. Bender, T. Gebru, A. McMillan-Major, and S. Shmitchell, “On the Dangers of Stochastic Parrots,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623, Mar. 2021, doi: 10.1145/3442188.3445922.
V. R. Lee, D. Pope, S. Miles, and R. C. Zárate, “Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT,” Computers and Education: Artificial Intelligence, vol. 7, p. 100253, Dec. 2024, doi: 10.1016/j.caeai.2024.100253.
M. Zhang and J.-M. Lee, “The Effect of Consumers’ Perceived Value and Trust on Adoption Intention of Artificial Intelligent Assistant,” Consumer Policy and Education Review, vol. 17, no. 1, pp. 17–40, Mar. 2021, doi: 10.15790/cope.2021.17.1.017.
Y. Heo, “Influence of News Literacy on the Perceived Impact and Regulatory Attitude of Fake News: Definition of Fake News as Moderator,” Korean Journal of Communication & Information, vol. 101, pp. 506–534, Jun. 2020, doi: 10.46407/kjci.2020.06.101.506.
J. Danaher and H. S. Sætra, “Technology and moral change: the transformation of truth and trust,” Ethics and Information Technology, vol. 24, no. 3, Aug. 2022, doi: 10.1007/s10676-022-09661-y.
D. Fallis, “The Epistemic Threat of Deepfakes,” Philosophy & Technology, vol. 34, no. 4, pp. 623–643, Aug. 2020, doi: 10.1007/s13347-020-00419-2.
R. Rini, "Deepfakes and the Epistemic Backdrop," The Philosophers' Imprint, vol.20, no.24, pp.1-16, Aug. 2020.
S. Y. Jin and J. E. Lee, "The Effect of the Fake News Related to the Electronic Voting System each News Service on News Users' Attitude of Using System, Intention to Participate Through System and Reliability of News Services," The Journal of the Korea Contents Association, vol.21, no.1, pp.105-118, Jan. 2021.
J. W. Jun, “Effects of AI Technology Perception on Purchase Intention of AI Speakers: A Focus on Mediating Roles of Dual Trust,” Journal of Cybercommunication Academic Society, vol. 40, no. 3, pp. 101–126, Sep. 2023, doi: 10.36494/jcas.2023.09.40.3.101.
C. Moorman, G. Zaltman, and R. Deshpande, “Relationships between Providers and Users of Market Research: The Dynamics of Trust within and between Organizations,” Journal of Marketing Research, vol. 29, no. 3, p. 314, Aug. 1992, doi: 10.2307/3172742.
A. Choudhury and H. Shamszare, “Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis,” Journal of Medical Internet Research, vol. 25, p. e47184, Jun. 2023, doi: 10.2196/47184.
D. L. Kasilingam, “Understanding the attitude and intention to use smartphone chatbots for shopping,” Technology in Society, vol. 62, p. 101280, Aug. 2020, doi: 10.1016/j.techsoc.2020.101280.
H. Kim, “Fairness Criteria and Mitigation of AI Bias,” The Korean Journal of Psychology: General, vol. 40, no. 4, pp. 459–485, Dec. 2021, doi: 10.22257/kjp.2021.12.40.4.459.
X. Huang and S. Y. Lee, “A Study of ‘With Corona’ News Message Characteristics (Direction and Target Country)’s Effect on Perceptual Bias, ‘With Corona’ Attitude, and ‘With Corona’ Supportive Behavioral Intention,” Journal of Communication Science, vol. 22, no. 4, pp. 5–35, Dec. 2022, doi: 10.14696/jcs.2022.12.22.4.5.
M. Kim, “The Effect of Perception of the Usefulness of Youtube Algorithm Recommendation on Media Trust on Youtube: Mediated Effects of Perceived Harm, Confirmation Bias, and Privacy Concerns,” Journal of Speech, Media & Communication Research, vol. 21, no. 4, pp. 7–42, Nov. 2022, doi: 10.51652/ksmca.2022.21.4.1.
E.-G. Jang and J.-M. Lee, “Types of Consumer Ambivalence for Intelligent Personal Assistant and Their Impact on Consumer Acceptance,” Journal of Consumer Studies, vol. 31, no. 2, pp. 1–22, Apr. 2020, doi: 10.35736/jcs.31.2.1.
D. Y. Lee, A Study on Continuance Usage Intention of ChatGPT: Focusing on the Moderating Effect of AI Literacy, Ph.D. Dissertation, Dong-Eui University, 2024.
D. Menon and K. Shilpa, “‘Chatting with ChatGPT’: Analyzing the factors influencing users’ intention to Use the Open AI’s ChatGPT using the UTAUT model,” Heliyon, vol. 9, no. 11, p. e20962, Nov. 2023, doi: 10.1016/j.heliyon.2023.e20962.
M. B. Holbrook and R. Batra, “Assessing the Role of Emotions as Mediators of Consumer Responses to Advertising,” Journal of Consumer Research, vol. 14, no. 3, p. 404, Dec. 1987, doi: 10.1086/209123.
I. Ajzen, “The theory of planned behavior,” Organizational Behavior and Human Decision Processes, vol. 50, no. 2, pp. 179–211, Dec. 1991, doi: 10.1016/0749-5978(91)90020-t.
Y. J. Park and K. K. Kim, “Research on Acceptance and Diffusion of Political Fake News: Focused on Biased Information Processing and Third-Person Perception,” The Journal of Social Science, vol. 29, no. 2, pp. 119–141, Jun. 2022, doi: 10.46415/jss.2022.06.29.2.119.
M. Kim, “The Effect of User’s Attitude on Perception of Algorithm Recommendation Customized Service: Mediating Effects of False Consensus, Perceived Risk and Perceived Bias,” Journal of Communication Science, vol. 22, no. 2, pp. 196–231, Jun. 2022, doi: 10.14696/jcs.2022.06.22.2.196.
J.-C. Park and S.-R. Park, “The Effect of Personal Characteristics on ChatGPT Attitude and Intention to Use,” Journal of the Korea Management Engineers Society, vol. 28, no. 3, pp. 33–46, Sep. 2023, doi: 10.35373/kmes.28.3.3.
D. Spohr, “Fake news and ideological polarization,” Business Information Review, vol. 34, no. 3, pp. 150–160, Aug. 2017, doi: 10.1177/0266382117722446.
D. H. Kim, Study of Dis-information Generated by Generative AI: Focusing on New-media Literacy and Digital Literacy, Ph.D. Dissertation, Hankuk University of Foreign Studies, 2024.
H. K. Kim, "Legal Perspectives on Trustworthy AI-based policy and Practice: Reviewing the 2016 & 2021 Reports of the AI 100," Fourth Industrial Revolution Law & Policy, vol.4, pp.179-218, Dec. 2021.
E. G. Kim, The Effect of Artificial Intelligence Ethics Education Using Moral Machine on Elementary School Students' Attitudes and Images toward Artificial Intelligence, Master's Thesis, Korea National University of Education, 2022.
M. E. A. A. Tharwat, D. W. Jacob, M. F. Md Fudzee, S. Kasim, A. A. Ramli, and M. Lubis, “The Role of Trust to Enhance the Recommendation System Based on Social Network,” International Journal on Advanced Science, Engineering and Information Technology, vol. 10, no. 4, pp. 1387–1395, Aug. 2020, doi: 10.18517/ijaseit.10.4.10883.
S. Handajani, S. Suryanto, and S. Keman, “The Development of Training Model Based on Theory of Planned Behavior and Willingness to Behave Higienic Practices for The Food Handler at Foodcourt Baseball in Unesa Surabaya,” International Journal on Advanced Science, Engineering and Information Technology, vol. 5, no. 5, p. 313, 2015, doi: 10.18517/ijaseit.5.5.564.
This work is licensed under a Creative Commons Attribution 4.0 International License.