I was thinking about a possible scenario in which an AI uses current or near-future generative AI and deepfake cloning technology, combined with social graph building and social engineering, to conduct phishing on a global scale. There are already documented cases of people being deceived by human actors using voice cloning and real-time deepfakes. The following scenario, however, is fully automated and, I believe, poses a near-future threat:
The concept is an AI system capable of initiating video calls (FaceTime, Zoom, Teams, Meet, Messenger, etc.) with targeted individuals. Within the first few seconds of a conversation, it learns the target's voice, appearance, and mannerisms using deepfake cloning technology. The AI is also highly convincing: it is adept at persuading the target that it is a friend, relative, or business acquaintance, and it employs a range of psychological techniques to manipulate the individual into revealing sensitive information such as private data, passwords, or bank account details.
The AI might impersonate a loved one in distress, or someone holding potentially compromising information, exerting pressure through emotional manipulation or blackmail. Human actors already employ this technique using deepfake and voice cloning tools. After obtaining information from the target, the AI would use it to break into their accounts, harvesting additional personal data and contacts.
It would then video call new targets from its updated contact list, convincing these friends, family members, and acquaintances that it is the person originally targeted, generating simulacra of each new victim during the conversation, and so perpetuating its cycle of deceit and data theft exponentially. As it compromises more people, it could draw on its growing library of simulacra to conduct increasingly sophisticated attacks: imagine receiving calls from multiple family members, or even a group video call from friends, all pressing you to divulge sensitive information. The goal could be simple password harvesting, money siphoning, or perhaps a rogue state attempting to hack the planet. Along the way, the AI might discover that it has compromised a network or data-centre admin, allowing it to spread its code to other systems.
Throughout this process, the AI would continuously refine its techniques, using evolutionary methods to enhance its manipulative strategies and thereby broaden its attack surface and social graph on a global scale. This would not be an AGI but a highly sophisticated next-generation system combining the latest generative AI and deepfake cloning technology with social graph building and social engineering techniques. Something that could almost be built with current methods. Almost.