LLM Can Be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models
Published at the Conference on Language Modeling (COLM), 2025
Recommended citation: Minqian Liu, Zhiyang Xu, Xinyi Zhang, Heajun An, Sarvech Qadir, Qi Zhang, Pamela J. Wisniewski, Jin-Hee Cho, Sang Won Lee, Ruoxi Jia, and Lifu Huang (2025). "LLM Can Be a Dangerous Persuader: Empirical Study of Persuasion Safety in Large Language Models." In Conference on Language Modeling (COLM 2025).
Download Paper
