Artificial intelligence agents are now gaining the ability to control cryptocurrency wallets and make payments on their own.
While this technology promises convenience, it brings serious security concerns that every crypto user needs to understand.
The Rise of AI-Powered Crypto Management
In October 2025, Coinbase launched a tool called Payments MCP that gives AI systems like ChatGPT, Claude, and Gemini direct access to crypto wallets. These AI agents can now create wallets, send payments, and manage transactions using simple text commands—no coding required.
The technology exploded in popularity almost overnight. The x402 payment protocol that powers these AI agents saw a 10,000% surge in activity, processing nearly 500,000 transactions in just one week. This dramatic growth shows how quickly people are adopting AI-powered crypto tools.
The market for AI agent tokens grew from under $5 billion to over $15 billion in late 2024—a 222% increase. Industry experts predict this market could hit $60 billion by the end of 2025. The number of AI agents operating on blockchain networks is expected to jump from around 10,000 at the end of 2024 to over one million by the end of 2025.
How the Technology Works
The safety of these AI agents depends on something called Model Context Protocol (MCP). Think of MCP as a security guard that stands between the AI and your wallet. The AI can only perform specific approved actions, like checking your balance or preparing a payment for you to review. It cannot freely move your funds or change wallet settings without permission.
Sean Ren, co-founder of Sahara AI, explains that these restrictions are built into the system’s design. Even if someone tries to trick the AI through a “prompt injection” attack, it should not be able to complete a transaction on its own.
However, experts warn this does not make the system foolproof. Users still need to stay alert, double-check what they are approving, and never assume the AI is doing the right thing automatically.
The Security Threats You Should Know
Prompt Injection Attacks
The biggest danger comes from something called prompt injection attacks. These attacks trick AI systems into following hidden malicious instructions on websites. Attackers can hide commands in white text on white backgrounds, in code comments, or behind social media posts.
Security researchers at Brave Browser demonstrated how scary this can be. They created a test where someone visited a Reddit post with hidden attack code. When the user asked their AI browser to summarize the page, the AI secretly opened their email, read a one-time password, and sent it to the attacker. The user had no idea their account was being hijacked.
OpenAI’s own security officer admitted that prompt injection remains an “unsolved security problem” even after extensive safety testing.
Memory Injection and False Data
Researchers at Princeton University found another major flaw. AI agents can be manipulated through “memory injection” attacks where false memories are planted in the system. These fake memories persist across multiple interactions and can even spread to other users sharing the same AI system.
For crypto users, this is especially dangerous. Unlike a stolen password you can reset, stolen cryptocurrency is gone forever.
Plugin Vulnerabilities
Security firm SlowMist discovered four major attack methods that target AI agent plugins:
Data poisoning that manipulates user behavior
JSON injection attacks that leak private data
Function override attacks that inject malicious code
Cross-system attacks that spread between platforms
One vulnerability found during security audits could lead to private key leaks, giving hackers complete control over crypto assets.
The Financial Impact
The timing of this AI adoption could not be worse from a security standpoint. In the first half of 2025 alone, over $2.17 billion was stolen from crypto platforms—more than all of 2024. Access control failures accounted for 59% of these losses, totaling about $1.83 billion.
AI-driven attacks surged by 1,025% compared to 2023. Meanwhile, cryptocurrency thefts jumped 303% in just the first quarter of 2025. The largest single theft was the $1.46 billion ByBit hack, which represented 69% of all losses in that period.
What Users Can Do to Stay Safe
Despite the risks, there are practical steps to protect yourself if you choose to use AI-powered crypto tools:
Never give AI agents direct access to large amounts of crypto. Keep your main holdings in separate wallets that AI cannot touch. Use cold storage (offline wallets) for long-term savings.
Enable strong authentication. Use authenticator apps like Google Authenticator instead of SMS-based two-factor authentication, which hackers can intercept.
Set strict spending limits. If you must let an AI agent access a wallet, configure maximum transaction amounts and create lists of approved addresses it can send to.
Stay logged out of sensitive accounts. Do not let AI browsers access your accounts while you are logged into crypto exchanges or wallet services.
Watch what the AI does. Monitor transactions in real-time. Most systems let you stop tasks or take control if something looks wrong.
Update constantly. Security patches are released regularly as researchers discover new vulnerabilities.
An April 2025 survey found that 87% of crypto users said they would let AI agents manage at least 10% of their portfolio. However, Aaron Ratcliff from blockchain intelligence firm Merkle Science warns that safety ultimately rests with the user. Users must understand how to give proper instructions, ensure their trading credentials stay secure, and remain vigilant about what their AI agents are doing.
Industry Expert Opinions
The consensus among blockchain and AI experts is cautious. They agree that AI agents can be safe with proper safeguards, but emphasize this is still very early technology.
Brian Huang, co-founder of Glider (an AI-powered crypto portfolio management platform), recommends starting with basic functions like sending, swapping, and lending. More complex tasks like full portfolio management and automated rebalancing should wait until the technology matures.
Dawn Song, a computer science professor at UC Berkeley and AI safety expert, describes this as “uncharted territory” given the power and autonomy of these agents. The combination of AI capabilities with financial access creates much larger attack surfaces than traditional systems.
Security firm Hacken emphasizes that while AI’s promise is massive, so are the risks. They stress the urgent need for AI-specific security protocols alongside traditional blockchain safeguards.
The Bottom Line: Proceed with Extreme Caution
AI agents controlling crypto wallets represent powerful but immature technology. The 10,000% explosion in adoption is happening faster than security solutions can keep pace. While the technology shows real promise for making crypto more accessible, the fundamental security problems remain unsolved.
For now, the safest approach is limiting AI agent access to small amounts you can afford to lose, maintaining strict oversight, and never granting autonomous transaction authority. The convergence of AI and crypto may reshape digital finance, but users should wait for more robust security measures before trusting AI with significant holdings.