Subtoxic Questions: Dive Into Attitude Change of LLM’s Response in Jailbreak Attempts

Abstract

As prompt jailbreaking of Large Language Models (LLMs) attracts growing attention, it is important to establish a generalized research paradigm for evaluating attack strength and a basic model for conducting subtler experiments. In this paper, we propose a novel approach that focuses on a set of target questions that are inherently more sensitive to jailbreak prompts, aiming to circumvent the limitations posed by enhanced LLM security. By designing and analyzing these sensitive questions, this paper reveals a more effective method for identifying vulnerabilities in LLMs, thereby contributing to the advancement of LLM security. This research not only challenges existing jailbreaking methodologies but also helps fortify LLMs against potential exploits.

Publication
In Deep Learning Security and Privacy Workshop 2024
Jiaqi Huang
Senior CS Student at Nanjing University
Research Intern at University of Illinois at Urbana-Champaign

My research interests include system reliability and security, and AI for systems.