OpenAI CEO Sam Altman is warning that a new crisis in AI technology could compromise our safety. The man behind ChatGPT recently told federal regulators that a massive fraud crisis is approaching: in the hands of bad-faith actors, AI could reshape financial transactions and leave many people vulnerable to exploitation. Altman also cautioned that users of OpenAI’s new ‘Agent’ tool should proceed carefully, as it could expose them to security breaches.
Sam Altman’s AI Fraud Crisis Warning

Altman delivered his warning at a Federal Reserve conference in Washington, D.C., addressing hundreds of regulators and banking executives. He told them that artificial intelligence has “fully defeated most of the ways that people authenticate currently, other than passwords.”
His concern stems in part from modern AI’s ability to convincingly reproduce anyone’s voice from just a few short audio samples. Financial institutions still accept voice prints as authentication for moving large sums of money, something Altman calls “a crazy thing to still be doing” — even he is surprised that banks continue to trust voice authentication given what products like his own can do. The OpenAI CEO emphasized that this fraud crisis could emerge “very, very soon.”
Current State of Voice Authentication in Banking

Altman expressed shock that many banks still rely on voice biometrics. Financial institutions such as HSBC and Wells Fargo have used voice-print systems for customer verification for years. These systems ask customers to repeat specific phrases, creating “voice prints” used for account access. Researchers, however, have demonstrated that generative AI models can clone voices accurately enough to pass voice-print security checks using publicly available audio.
In 2017, the BBC fooled HSBC’s voice recognition system when a reporter accessed his own account using his twin brother’s voice. AI has since made voice biometrics far less reliable, because it can now replicate anyone’s voice with disturbing accuracy. That lets scammers call banks, pass authentication checks, and transfer money simply by mimicking a customer’s voice.
Real-World Examples of AI-Enabled Fraud

In one recent high-profile case, a Hong Kong finance clerk was defrauded of $25 million by scammers who used a deepfake video call to impersonate multiple senior executives. Other scams include AI bots that build fake intimate relationships and then extort or manipulate victims, and an AI-generated impersonation of a hospital-bound Brad Pitt that was used to steal over $850,000. The FBI has noted that deepfake-related crime increased by more than 1,500% in the Asia-Pacific region from 2022 to 2023.
Types of AI Fraud Techniques

Altman fears that the current AI fraud crisis will expand beyond voice-cloning attacks, deepfake video-call scams, and phishing emails. He warns that FaceTime or video fakes may soon become indistinguishable from reality. The capabilities of current AI in the hands of bad-faith actors are already alarming: scammers can now use AI to create fake identification documents, explicit photos, and headshots for social media profiles.
The Scale and Impact of AI Fraud

According to Trustpair’s 2025 US Fraud study, over 90% of finance professionals were victims of cyber fraud in the past year, and 47% of targeted companies lost $10 million. Cyber fraud involving generative-AI deepfakes is growing 118% year over year. Experts agree that AI will significantly increase both the volume and sophistication of fraud through voice cloning, deepfakes, automated chatbots, and personalized targeting.
Detection methods for AI-generated content still face challenges because that content is increasingly human-like. With AI advancing at an exponential rate, the consensus is that large-scale adoption of AI fraud by criminals is imminent.
Microsoft’s Move Away from Passwords

In an apparent counterpoint to Altman’s warning, Microsoft is implementing a major authentication overhaul affecting over 1 billion users, moving toward a passwordless, passkey-first experience. By August 2025, Microsoft Authenticator will eliminate traditional password support entirely, relying instead on biometrics such as face recognition and fingerprints, or on PINs. The company argues that passkeys are “more secure and three times faster than passwords.”
Sam Altman’s World Eye-Scanning Solution

Altman’s proposed solution to this impending crisis centers on his other company, “World” (formerly Worldcoin), which launched in the U.S. in 2025 with six locations in Atlanta, Austin, Los Angeles, Miami, Nashville, and San Francisco. The system uses iris scans recorded on a blockchain to create a global identity verification system. Users spend about 30 seconds having their face and iris scanned by spherical biometric devices called Orbs, which generate unique iris codes to verify that they are human. Altman has stated that 12 million people globally have already been scanned and verified.
Future Implications and Industry Response

Concerns remain about data privacy and the possibility that AI could eventually defeat even iris-scanning technology. The financial industry, meanwhile, is grappling with the need to reassess its authentication methods and implement stronger, multi-factor alternatives. The FBI has already issued warnings about AI voice and video “cloning” scams, acknowledging that this is not a theoretical future threat but a current reality affecting people today.