
JailbreakMe completes Moonshot in 10 hours, sparking a new twist with an "AI vulnerability hunting challenge"
TechFlow Selected TechFlow Selected

JailbreakMe completes Moonshot in 10 hours, sparking a new twist with an "AI vulnerability hunting challenge"
For ordinary players, recognizing angles and aligning knowledge with action is clearly more important; as for participating in competitions to find loopholes, such gains may never have been within the scope of their awareness.
Written by: TechFlow

New AI agent-related opportunities emerge on-chain every day, with innovative projects springing up constantly—yet homogenization is becoming increasingly apparent.
Just like on-chain memecoin trends, finding a fresh angle and being the “first” in a niche segment or unique perspective makes it easier to attract capital attention.
From autonomous trading agents to decentralized AI marketplaces, most angles have been explored. So what overlooked areas remain?
At the end of November, an AI agent named Freysa launched a special challenge on Twitter: it claimed it could protect tens of thousands of dollars worth of assets from being persuaded into transferring via conversation.
However, this confident AI quickly succumbed to a carefully crafted prompt attack by a Twitter user, agreeing to a transfer request.

This incident not only exposed the vulnerability of current AI systems but also sparked deep industry reflection on AI security testing methods. Thus, crowdfunding prompts through a platform and publicly hosting a “challenge to find AI agent vulnerabilities” has emerged as a timely new angle.
In this context, a project called JailbreakMe has now appeared, actually building such a platform to host these challenges.
Its token $JAIL briefly became a hot topic across social media, reaching a peak market cap of around $25M within 10 hours and quickly making it onto Moonshot. At the time of writing, the market cap had dropped to $16M.

Interestingly, anyone can participate in this bug-finding challenge, and the platformized model adds further utility to the token.
From this, we are increasingly observing a trend: launching an AI project no longer needs to follow the traditional VC-backed route—instead, creating an asset around a unique on-chain narrative and building a platform around it suffices.
Crowdsourcing Prompts, Hosting an AI Vulnerability Challenge
How does one transition from accidentally discovering AI vulnerabilities to a systematic process of finding them?
JailbreakMe breaks down the entire process into three steps: selecting a specific challenge, breaking predefined rules, and claiming rewards.
You may already understand the significance behind the project's name—enabling an AI agent to break free from its rule constraints, successfully "jailbreaking" (i.e., being hacked). On one hand, this means someone wins the prize; on the other, it signifies a discovered vulnerability, which holds positive implications for AI research and improvement.

Clearly, this is another narrative that combines assetization with positive impact—and so far, it appears quite compelling.
The flagship competition currently promoted by JailbreakMe is the “Zynx Private Key Defense Battle”:
An AI agent named "Zynx" is engaged in a unique defensive mission. Its task seems simple: guard a secret key phrase. However, challengers aim to cleverly manipulate conversations to trick it into revealing the secret.
Participants face an AI character with clearly defined parameters. Zynx possesses a strong sense of purpose—it knows it must protect the key, and remains vigilant against any attempts to extract information. Yet, as demonstrated by the earlier Freysa case, even the most alert AI can falter under well-crafted prompts.
The platform enforces strict and fair rules for this contest. Each challenger interacts with Zynx through the interface, limited to expressing their intent within 4000 characters. While previous conversation logs are visible, Zynx only considers the current message in context, ensuring all participants start from equal footing. A smart contract automatically monitors the process, instantly transferring funds from the prize pool to the winner’s wallet once Zynx reveals the key.

Notably, the prize pool grows with each additional attempt:
Each submission incurs a fee equal to 1% of the current prize pool in SOL—essentially functioning as an entry bet.
The winner receives 70% of the prize pool, while the operator of the corresponding smart contract takes the remaining 30%.
You can think of this competition as a fixed-rule betting game. There must be a neutral party setting the rules via smart contract—this role could be filled by JailbreakMe itself, or by other AI research teams wanting to publicly invite vulnerability testing.
Undoubtedly, combining betting mechanics with AI technology easily captures the attention of degens and tech enthusiasts alike.
Ticket + Buyback: $JAIL Gains Deflationary Utility
The $JAIL token from JailbreakMe doesn’t appear to be purely meme-driven; instead, it attempts to deeply integrate the token with core platform mechanics.
Firstly, $JAIL plays a crucial role in the platform’s challenges. A portion of each event’s prize pool is allocated specifically for buying back $JAIL tokens on the open market. This mechanism ensures continuous buy-side pressure as long as challenges continue on the platform. Such design creates a positive feedback loop between token value and platform activity: higher participation leads to stronger buybacks.
More importantly, $JAIL’s use cases are evolving beyond mere transactional utility toward functional token status. For future high-tier challenges, the platform plans to require minimum $JAIL holdings as a participation threshold. This means those aiming to compete for large prize pools must first hold a certain amount of the platform’s token—similar to needing a “ticket” for entry.
For project teams wishing to launch their own AI security tests, $JAIL is also essential. They must burn or lock a certain quantity of $JAIL to initiate customized challenges on the platform. This elegantly aligns incentives among project creators, participants, and the platform:
-
Project teams gain access to a public platform for testing AI security
-
Participants have opportunities to win rewards
-
The platform benefits from token locking, accumulating ecological value
From a tokenomics standpoint, designing utility aligned with gameplay directly fosters expectations of deflation—since ongoing activities will consume tokens or generate revenue used to repurchase them.
However, all of this hinges on actual platform usage.
Currently, the organizers of AI vulnerability challenges are exclusively JailbreakMe itself. Whether other AI teams will genuinely adopt the platform to invite public scrutiny will be key to sustaining $JAIL’s long-term value.
Not Everyone Benefits Equally
Finding vulnerabilities isn’t as mindless as brute-forcing random numbers with mining rigs, nor is it pure gambling like Polymarket—it requires actual skill in prompt engineering.
While anyone can join, average users may mostly serve as cannon fodder. This limits the project’s audience, making it relatively niche and specialized within the broader on-chain AI landscape.
Still, someone always profits from new narratives.
According to data from renowned on-chain sleuth KOL @BarryEL8866, during $JAIL’s surge to a $20M market cap, there were no VC firms among the social followers—mainly KOLs and individual smart money addresses, including the following notable ones:
Address 1:
5YkZmuaLhrPjFv4vtYE2mcR6J4JEXG1EARGh8YYFo8s4
Total purchase amount: $5,811
Total purchased: 25.8M (currently holds 908K)
Total profit: $181K (approximately 31x return)
Address 2:
3rSZJHysEk2ueFVovRLtZ8LGnQBMZGg96H2Q4jErspAF
Total purchase amount: $3,508
Total purchased: 10.3M (fully sold)
Total profit: $124K (approximately 35x return)
Address 3:
5NdoWHozBBdC2fLcNQj5PvyrSe8Y3D2S71bHM9xGtq6t
Total purchase amount: $1,618
Total purchased: 60.4M (fully sold)
Total profit: $67.5K (approximately 41x return)
Address 4:
9gpTQjXFHaPbDs2MKwkke4ix6avi5cPqYwx6oJB46RQc
Total purchase amount: $3,512
Total purchased: 32.7M (fully sold)
Total profit: $61.2K (approximately 17x return)
The full details can be found in @BarryEL8866's original post; shared here solely for reference.
For ordinary users, identifying promising angles and acting decisively is clearly more important. As for participating in the challenge to find vulnerabilities, that kind of gain may simply lie outside their circle of competence.
Join TechFlow official community to stay tuned
Telegram:https://t.me/TechFlowDaily
X (Twitter):https://x.com/TechFlowPost
X (Twitter) EN:https://x.com/BlockFlow_News














