TechFlow news, December 2: According to Decrypt, Anthropic's latest research shows that top AI models can now match human hackers: they successfully reproduced 207 of 405 historical smart contract vulnerabilities and simulated the theft of $550 million in funds. Three models, including Claude Opus and GPT-5, identified vulnerabilities worth $4.6 million in contracts created after their training data cutoffs and discovered two previously unknown zero-day vulnerabilities on the Binance Smart Chain. Security experts warn that such attacks could be scaled easily, since details of many vulnerabilities are publicly documented and AI systems can automate attacks around the clock. Anthropic recommends that developers integrate automated tools into their security processes so that defensive capabilities evolve in tandem with offensive ones.
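The article does not name specific tools, but as an illustration of the kind of automated check Anthropic recommends, the sketch below wires an open-source contract scanner into a pre-deploy gate. It assumes the Slither analyzer is installed (pip install slither-analyzer) and that `slither <target> --json -` emits machine-readable findings to stdout; any comparable scanner could be substituted.

```python
"""Minimal sketch of a pre-deploy gate around an automated contract scanner.

Assumes the open-source Slither analyzer is installed and supports
`slither <target> --json -`; substitute whatever tool your team uses.
"""
import json
import subprocess
import sys


def scan_contracts(target: str = ".") -> list[dict]:
    """Run the scanner on `target` and return its list of findings."""
    proc = subprocess.run(
        ["slither", target, "--json", "-"],
        capture_output=True,
        text=True,
    )
    # The scanner may exit non-zero whenever it reports findings, so parse
    # the JSON report rather than relying on the return code alone.
    report = json.loads(proc.stdout or "{}")
    return report.get("results", {}).get("detectors", [])


def main() -> None:
    findings = scan_contracts()
    high_severity = [f for f in findings if f.get("impact") == "High"]
    for finding in high_severity:
        print(f"[HIGH] {finding.get('check')}: {finding.get('description', '').strip()}")
    # Block deployment if any high-impact issue was detected.
    sys.exit(1 if high_severity else 0)


if __name__ == "__main__":
    main()
```

Run as a CI step before deployment, a non-zero exit blocks the release until the flagged issues are reviewed, which is one way to keep defensive checks running as continuously as the automated attacks the researchers describe.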