AI Security
LLM Jailbreak Research Hits 97% Success Rate Against Frontier Models
New research reports jailbreak success rates of 97-99% against frontier LLMs, and finds that large reasoning models are now capable of autonomously planning attacks against other AI systems.