remi_pan@sh.itjust.works to Cybersecurity@sh.itjust.works • Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models • 12 days ago

If the jailbreak is about enabling the LLM to tell you how to make explosives or drugs, this seems pointless, because I would never trust an AI so prone to hallucinations (and basically bad at science) for such a dangerous process.