
Pandora's Box: How Do Unrestricted Large Models Threaten the Security of the Crypto Industry?
This article will review typical unrestricted LLM tools.
Background
From OpenAI's GPT series and Google's Gemini to a growing range of open-source models, advanced artificial intelligence is profoundly reshaping how we work and live. Alongside this rapid technological progress, however, a concerning dark side has emerged: the rise of unrestricted or malicious large language models (LLMs).
An unrestricted LLM refers to a language model that has been deliberately designed, modified, or "jailbroken" to bypass the built-in safety mechanisms and ethical constraints of mainstream models. Mainstream LLM developers typically invest significant resources to prevent their models from generating hate speech, misinformation, malicious code, or instructions for illegal activities. Yet in recent years, individuals or organizations—motivated by cybercrime and other illicit purposes—have begun seeking or developing their own unregulated models. Given this trend, this article outlines typical unrestricted LLM tools, examines their misuse within the cryptocurrency industry, and explores related security challenges and countermeasures.
How Do Unrestricted LLMs Cause Harm?
Tasks that once required technical expertise—such as writing malicious code, crafting phishing emails, or planning scams—can now be easily performed by ordinary individuals with no programming background, thanks to unrestricted LLMs. Attackers can obtain open-source model weights and source code, then fine-tune them on datasets containing malicious content, biased statements, or illegal instructions, thereby creating customized attack tools.
This approach introduces multiple risks: attackers can "customize" models for specific targets, generating more deceptive content capable of evading standard LLM content filters and safety restrictions; models can rapidly produce variants of phishing website code or generate scam messages tailored to different social platforms; meanwhile, the accessibility and modifiability of open-source models continue to fuel the growth of an underground AI ecosystem, providing fertile ground for illegal trade and development. Below is a brief introduction to these unrestricted LLMs:
WormGPT: The Dark Version of GPT
WormGPT is a malicious LLM openly sold on underground forums. Its developers explicitly claim it has no moral limitations, branding it as the "dark version" of GPT. Based on open-source models such as GPT-J 6B, WormGPT is trained on vast amounts of data related to malware. Users can gain one month of access for as little as $189. WormGPT is most notorious for generating highly realistic and persuasive business email compromise (BEC) and phishing emails. Its typical abuses in crypto contexts include:
- Generating phishing emails/messages: impersonating cryptocurrency exchanges, wallets, or well-known projects to send users fake "account verification" requests, tricking them into clicking malicious links or revealing private keys/mnemonics.
- Writing malicious code: assisting low-skilled attackers in creating malware designed to steal wallet files, monitor clipboards, or log keystrokes (a defensive detection sketch follows this list).
- Driving automated scams: automatically replying to potential victims to lure them into fake airdrops or investment schemes.
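A common payload in this category is a "clipper" that silently replaces a copied wallet address with the attacker's own. As a counterpoint, the minimal Python sketch below (the function name and example addresses are hypothetical) shows the kind of check a wallet client could run before broadcasting a transaction: if the pasted address shares its first and last characters with the address the user actually selected but is not identical to it, the classic clipboard-swap pattern is flagged.

```python
# Minimal, illustrative sketch only. Assumption: the wallet app knows the address
# the user explicitly selected ("selected") and can compare it with whatever sits
# in the paste buffer at send time ("pasted"). The addresses below are made up.

def looks_like_clipboard_swap(selected: str, pasted: str, edge: int = 4) -> bool:
    """Flag the pattern typical of clipboard hijackers: the first and last few
    characters match (so a casual glance passes), but the full strings differ."""
    selected, pasted = selected.strip().lower(), pasted.strip().lower()
    if selected == pasted:
        return False                                  # identical: nothing suspicious
    return (selected[:edge] == pasted[:edge]
            and selected[-edge:] == pasted[-edge:])   # edges match yet strings differ


if __name__ == "__main__":
    chosen = "0x52908400098527886E0F7030069857D2E4169EE7"      # address the user picked
    in_buffer = "0x5290aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa69EE7"   # hypothetical swapped-in address
    print(looks_like_clipboard_swap(chosen, in_buffer))        # True -> warn before sending
```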


DarkBERT: A Double-Edged Sword Trained on the Dark Web
DarkBERT is a language model developed through collaboration between researchers at the Korea Advanced Institute of Science and Technology (KAIST) and S2W Inc. It is specifically pre-trained on dark web data—including forums, black markets, and leaked information—with the original intent of helping cybersecurity researchers and law enforcement better understand the dark web ecosystem, track illegal activities, identify potential threats, and gather threat intelligence.
Although DarkBERT was designed for positive use, its knowledge of sensitive content such as dark web data, attack methods, and illegal trading strategies could have catastrophic consequences if exploited by malicious actors—or if similar techniques are used to train unrestricted large models. Potential misuse scenarios in the crypto space include:
- Conducting targeted scams: gathering information about crypto users and project teams for social engineering fraud.
- Replicating criminal tactics: copying proven coin-theft and money laundering strategies from the dark web.
FraudGPT: The Swiss Army Knife of Cyber Fraud
FraudGPT claims to be an upgraded version of WormGPT, offering broader functionality. It is primarily sold on the dark web and hacker forums, with monthly fees ranging from $200 to $1,700. Typical misuse cases in the crypto domain include:
- Fabricating crypto projects: generating convincing whitepapers, official websites, roadmaps, and marketing copy for fake ICOs/IDOs.
- Mass-producing phishing pages: quickly creating login interfaces that mimic well-known cryptocurrency exchanges or wallet connection screens (a defensive lookalike-domain check is sketched after this list).
- Social media bot campaigns: generating large volumes of fake reviews and promotional content to boost scam tokens or discredit competing projects.
- Social engineering attacks: the chatbot can mimic human conversation, build trust with unsuspecting users, and manipulate them into unintentionally disclosing sensitive information or performing harmful actions.
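On the defensive side, one lightweight counter-measure against mass-produced phishing pages is screening links for lookalike domains before users open them. The Python sketch below is illustrative only: the allow-list, the 0.75 threshold, and the example URLs are assumptions chosen for demonstration, and it relies on the standard-library difflib rather than any dedicated anti-phishing service.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would maintain its own, far larger list.
KNOWN_DOMAINS = {"binance.com", "coinbase.com", "kraken.com", "metamask.io"}

def is_lookalike(url: str, threshold: float = 0.75) -> bool:
    """Flag hosts that closely resemble a known brand without matching it exactly."""
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_DOMAINS:
        return False                                   # exact match: not a typosquat
    best = max(SequenceMatcher(None, host, d).ratio() for d in KNOWN_DOMAINS)
    return best >= threshold                           # near-miss: likely lookalike

print(is_lookalike("https://blnance.com/verify"))      # True  (one character off binance.com)
print(is_lookalike("https://binance.com/login"))       # False (exact match)
print(is_lookalike("https://example.org"))             # False (not similar to any listed brand)
```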
GhostGPT: An AI Assistant Without Ethical Constraints
GhostGPT is explicitly positioned as an ethically unrestricted AI chatbot. Its typical misuse cases in crypto environments include:
- Advanced phishing attacks: generating highly realistic phishing emails impersonating major exchanges with fake KYC verification requests, security alerts, or account freeze notifications.
- Smart contract malware generation: even attackers without coding skills can use GhostGPT to quickly create smart contracts embedded with hidden backdoors or fraudulent logic, enabling rug pulls or DeFi protocol attacks (a defensive red-flag scan is sketched after this list).
- Polymorphic cryptocurrency stealers: generating polymorphic malware capable of continuously mutating to steal wallet files, private keys, and mnemonics. This polymorphism makes detection difficult for traditional signature-based security software.
- Social engineering attacks: combined with AI-generated scripts, attackers can deploy bots on platforms like Discord and Telegram to lure users into participating in fake NFT mints, airdrops, or investment schemes.
- Deepfake fraud: when paired with other AI tools, GhostGPT can help generate fake audio of crypto project founders, investors, or exchange executives to carry out phone scams or business email compromise (BEC) attacks.
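As a counterpoint to the smart-contract abuse described above, reviewers often begin by searching contract source code for well-known rug-pull red flags before any deeper audit. The Python sketch below is a deliberately naive illustration: the patterns and the sample fragment are assumptions, and it is no substitute for proper static analysis tooling or a professional audit.

```python
import re

# Deliberately naive, illustrative patterns; real reviews rely on proper static
# analysis and manual audits, not regular expressions.
RED_FLAGS = {
    "owner-only mint":      r"function\s+mint\b[^{]*onlyOwner",
    "transfer blacklist":   r"mapping\s*\(\s*address\s*=>\s*bool\s*\)[^;]*[Bb]lacklist",
    "owner-settable fees":  r"function\s+set\w*[Ff]ee\b[^{]*onlyOwner",
    "selfdestruct present": r"\bselfdestruct\s*\(",
}

def scan_solidity(source: str) -> list[str]:
    """Return the red-flag labels whose pattern appears in the contract source."""
    return [label for label, pattern in RED_FLAGS.items() if re.search(pattern, source)]

# Hypothetical contract fragment used only to demonstrate the scanner.
sample = """
contract Token {
    mapping(address => bool) public blacklist;
    function mint(address to, uint256 amount) external onlyOwner { _mint(to, amount); }
}
"""
print(scan_solidity(sample))   # ['owner-only mint', 'transfer blacklist']
```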
Venice.ai: Risks of Uncensored Access
Venice.ai provides access to multiple LLMs, including some with minimal filtering or relaxed restrictions. It positions itself as an open gateway for users to explore various LLM capabilities, promoting state-of-the-art, accurate, and uncensored models for a "truly unrestricted AI experience"; that same openness, however, can be exploited by criminals to generate malicious content. Key risks associated with the platform include:
- Bypassing filters to generate malicious content: attackers can use less-restricted models on the platform to create phishing templates, false propaganda, or attack strategies.
- Lowering the barrier to prompt engineering: even attackers lacking advanced "jailbreaking" prompt skills can easily obtain outputs that would normally be restricted.
- Accelerating attack script iteration: attackers can rapidly test different models' responses to malicious prompts, refining their scam scripts and attack techniques.
Final Thoughts
The emergence of unrestricted LLMs marks a new paradigm in cybersecurity, one characterized by more complex, scalable, and automated forms of attack. These models not only lower the barrier to entry for cyberattacks but also introduce novel threats that are more covert and deceptive.
In this ongoing battle between offense and defense, all stakeholders in the security ecosystem must collaborate to address future risks: first, greater investment is needed in detection technologies capable of identifying and blocking phishing content, smart contract exploits, and malicious code generated by rogue LLMs; second, efforts should focus on strengthening models' resistance to jailbreaking and on exploring watermarking and provenance mechanisms that can trace the origin of malicious content in critical applications such as finance and code generation; finally, robust ethical guidelines and regulatory frameworks must be established to curb the development and misuse of malicious models at their root.
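As a small, concrete illustration of the first point, a community bot on Telegram or Discord could screen incoming messages for credential-solicitation language before they reach users. The Python sketch below is illustrative only; the patterns are assumptions, and a production system would combine machine-learning classifiers, URL reputation feeds, and human review.

```python
import re

# Hypothetical phrase patterns for illustration; a static regex list is only a
# first-line filter, not a complete phishing detector.
SOLICITATION_PATTERNS = [
    r"\b(seed|recovery)\s+phrase\b",
    r"\bprivate\s+key\b",
    r"\bverify\s+your\s+wallet\b",
    r"\bunlock\s+your\s+account\b",
]

def flag_phishing_text(message: str) -> bool:
    """Return True if the message matches a known credential-solicitation pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SOLICITATION_PATTERNS)

print(flag_phishing_text("To claim the airdrop, verify your wallet with your seed phrase"))  # True
print(flag_phishing_text("Gas fees on L2 are much lower this week"))                         # False
```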