HS094: How Risky Is Your Organization’s AI Strategy?
Feb 11, 2025 • 25 min
Episode description
AI Large Language Models (LLMs) can be used to generate output their creators and users never intended: harassment, instructions for building a bomb, or assistance with cybercrime. Researchers have created the HarmBench framework to measure how easily an AI can be weaponized. Recently, these researchers trumpeted the finding...