William Blondel Blog
  • The Offensive AI Landscape
  • AI-Driven Defense Architectures
  • Secure Coding in the Age of AI
  • AI Governance, Ethics, and Future Trends

#Jailbreaking

2 posts

Jailbreaking LLMs: The "DAN" (Do Anything Now) Phenomenon

How users bypass AI safety filters with "DAN" jailbreaks. Explore the evolution from manual roleplay prompts to automated attacks, and why RLHF alignment fails to stop them.

Apr 18, 2025·3 min read

Prompt injection attacks: Hacking the logic of chatbots

Prompt injection is the new SQL injection. Learn how direct and indirect attacks hijack LLM logic, and why chatbots accept malicious instructions as readily as user input.

Apr 11, 2025·3 min read

© 2026 William Blondel

  • Archive
  • Tags
Sitemap RSS