TechFlow News: On March 18, as AI Agents rapidly gain traction in cryptocurrency trading, automated trading is evolving from “tool-assisted” to “autonomous execution.” However, a series of security risks are emerging in parallel. Recently, cybersecurity firm SlowMist and all-in-one exchange Bitget jointly released an AI Agent Security Report, systematically outlining the potential threats and protective frameworks associated with Agent-driven automated trading in today’s Web3 environment.
Combining real-world case studies and security research, the report analyzes typical security issues confronting current AI Agents—including behavioral manipulation risks arising from prompt injection, supply-chain vulnerabilities within plugin and skill ecosystems, misuse of API keys and account permissions, and potential hazards such as erroneous operations and privilege escalation triggered by autonomous execution.
The report recommends that users effectively manage permissions when using AI Agents for trading—by isolating access via sub-accounts, configuring API IP allowlists, and implementing continuous transaction monitoring and anomaly alerting mechanisms. Additionally, for high-risk operations, manual confirmation or independent signature requirements should be introduced to prevent model misjudgments from directly compromising asset security. To help users implement these safeguards, the report concludes with a trading security self-audit checklist, enabling rapid identification of security vulnerabilities.
From an industry development perspective, AI Agents are steadily advancing intelligent trading in Web3—but security infrastructure must evolve in tandem. Striking an optimal balance between efficiency and controllability will remain a critical long-term focus for the industry.




