
Messari|Privacy Layer: Understanding the Inner Workings of Decentralized Confidential Computing
Decentralized Confidential Computing (DeCC) represents a fundamental shift in how sensitive data is processed in decentralized systems.
Authors: Mohamed Allam, Drexel Bakker
Translation: AI.bluue
Core Insights
- DeCC introduces the ability to protect data privacy on inherently transparent public blockchains, enabling private computation and private state without sacrificing decentralization.
- DeCC secures data while it is in use by enabling encrypted computation without exposing plaintext, addressing a critical vulnerability in both traditional and blockchain systems.
- By combining cryptographic tools (such as ZKP, MPC, GC, and FHE) with Trusted Execution Environments (TEEs), trustless confidentiality is achieved; each technology offers different trade-offs in performance and trust, and they can be combined for stronger guarantees.
- Over $1 billion has been invested in DeCC projects, reflecting growing momentum in the field, with teams focused on practical integration and developer-facing infrastructure.
Introduction: The Evolution of Data Computation and Security
Blockchain technology introduced a new paradigm of decentralization and transparency, but also brought trade-offs. In the first wave of crypto privacy, often called "Privacy 1.0", tools like mixers, tumblers, and private-transaction coins (e.g., Zcash, Monero, and Beam) provided users with a degree of anonymity for financial transfers. These solutions were specialized, mainly limited to hiding sender and receiver identities, and disconnected from broader application infrastructure.
A second wave is emerging. Privacy is no longer just about hiding transactions, but extends to full computation. This shift marks the rise of Decentralized Confidential Computing (DeCC), also known as Privacy 2.0. DeCC introduces private computation as a core feature of decentralized systems, allowing data to be securely processed without revealing underlying inputs to other users or the network.
Unlike typical smart contract environments where all state changes and inputs are publicly visible, DeCC keeps data encrypted throughout computation and only reveals what is necessary for correctness and verification. This enables applications to maintain private states atop public blockchain infrastructure. For example, using Multi-Party Computation (MPC), a group of hospitals can analyze their combined datasets without any institution seeing others’ raw patient data. Where transparency once limited what blockchains could support, privacy now unlocks an entirely new category of use cases requiring confidentiality.
DeCC is enabled by a suite of technologies designed for secure data processing. These include Zero-Knowledge Proofs (ZKP), Multi-Party Computation (MPC), Garbled Circuits (GC), and Fully Homomorphic Encryption (FHE)—all relying on cryptography to enforce privacy and correctness. Trusted Execution Environments (TEE) complement these tools by providing hardware-based isolation for secure off-chain execution. Together, they form the foundation of the DeCC tech stack.
Potential applications are vast: decentralized finance systems with confidential trading strategies, public health platforms extracting insights from private data, or AI models trained on distributed datasets without exposing underlying inputs. All require privacy-preserving computation built into the infrastructure layer of blockchain systems.
This report explores the current state of DeCC and its broader implications. We first contrast how data is handled in traditional systems versus DeCC frameworks, and why transparency alone is insufficient for many decentralized applications. Then we examine the core technologies underpinning DeCC, how they differ, and how they can be combined to balance trade-offs in performance, trust, and flexibility. Finally, we map the ecosystem, highlighting capital flowing into the space, teams building in production, and what this momentum means for the future of decentralized computing.
Traditional Data Processing vs. Decentralized Confidential Computing (DeCC)
To understand the necessity of DeCC, it helps to see how data is processed in traditional computing environments and where the vulnerabilities lie. In classical computing architectures, data typically exists in three states: at rest (stored on disks or in databases), in transit (moving across networks), and in use (being processed in memory or on the CPU). Thanks to decades of security advances, the industry has reliable solutions for the first two of these states.
- Data at rest: Encrypted using disk-level or database-level encryption (e.g., AES). Common in enterprise systems, mobile devices, and cloud storage.
- Data in transit: Protected via secure transport protocols like TLS/SSL, ensuring data remains encrypted while moving between systems or over networks.
- Data in use: Traditionally, encrypted data received from storage or the network is decrypted before processing. This means workloads run on plaintext, leaving data in use unprotected and exposed to potential threats. DeCC aims to address this vulnerability by enabling computation without revealing the underlying data.
While the first two states are well protected, securing data in use remains a challenge. Whether a bank server calculating interest payments or a cloud platform running machine learning models, data usually must be decrypted in memory. At that moment, it is vulnerable: malicious system administrators, malware infections, or compromised operating systems could potentially inspect or even alter sensitive data. Traditional systems mitigate this through access controls and isolated infrastructure, but fundamentally, there's a period where the “crown jewels” exist in plaintext inside machines.
Now consider blockchain-based projects. They take transparency to another level: data isn't just possibly decrypted on one server—it's often replicated in plaintext across thousands of nodes globally. Public blockchains like Ethereum and Bitcoin intentionally broadcast all transaction data to achieve consensus. If your data is intended to be public (or pseudonymous) financial information, this works fine. But if you want to use blockchain for any use case involving sensitive or personal information, this becomes completely unworkable. For instance, in Bitcoin, every transaction amount and address is visible to everyone—great for auditability, bad for privacy. On smart contract platforms, any data you put into a contract (your age, DNA sequence, supply chain info) is public to every network participant. No bank wants all its transactions public; no hospital can place patient records on a public ledger; no gaming company would leak players' secret states to everyone.
The Data Lifecycle and Its Vulnerabilities
In the traditional data processing lifecycle, users typically send data to a server, which decrypts and processes it, stores the result (possibly encrypting it on disk), and sends back a response (encrypted via TLS). The vulnerability point is clear: during use, the server holds the original data. If you trust the server and its security, that’s fine—but history shows servers can be hacked or insiders may abuse access. Enterprises handle this with strict security practices, yet remain cautious about entrusting highly sensitive data to third parties.
In contrast, the DeCC approach aims to ensure that no single entity ever sees sensitive data in plaintext—not even during processing. Data might be split across multiple nodes, processed within encrypted envelopes, or verified cryptographically without being revealed. Thus, the entire lifecycle—from input to output—can remain confidential. For example, instead of sending raw data to a server, users can send encrypted versions or secret shares to a network of nodes. These nodes perform computations in a way that none can learn the underlying data, and users receive an encrypted result that only they (or authorized parties) can decrypt.
Why Transparency Isn’t Enough in Crypto
While public blockchains solve the trust problem (we no longer need to trust central operators; rules are transparent and enforced via consensus), they do so by sacrificing privacy. The mantra is: “Don’t put anything on-chain you don’t want public.” For simple cryptocurrency transfers, this might be acceptable in some cases; for complex applications, it quickly becomes problematic. As the Penumbra team (building a private DeFi chain) noted, in today’s DeFi, “information leakage becomes value leakage” when users interact on-chain, leading to frontrunning and other exploits. If we want decentralized exchanges, lending markets, or auctions to operate fairly, participants’ data (bids, positions, strategies) often needs to be hidden; otherwise, outsiders can exploit that knowledge in real time. Transparency makes every user’s actions public, which differs from how traditional markets function—and for good reason.
Moreover, many valuable blockchain use cases beyond finance involve personal or regulated data that legally cannot be made public. Consider decentralized identity or credit scoring—users may want to prove attributes about themselves (“I am over 18” or “My credit score is 700”) without revealing their full identity or financial history. Under a fully transparent model, this is impossible; anything you put on-chain leaks data. DeCC technologies like zero-knowledge proofs are specifically designed to solve this, allowing selective disclosure (proving X without revealing Y). Another example: a company may want to use blockchain for supply chain tracking but not let competitors see raw inventory logs or sales data. DeCC allows submitting encrypted data on-chain and sharing decryption only with authorized partners, or using ZK proofs to demonstrate compliance with certain standards without revealing trade secrets.
How DeCC Enables Trustless Confidential Computation
Addressing the limitations of transparency in decentralized systems requires infrastructure capable of maintaining confidentiality during active computation. Decentralized Confidential Computing provides such infrastructure by introducing a set of applied cryptographic and hardware-based techniques to protect data across its entire lifecycle. These technologies aim to ensure sensitive inputs aren’t leaked even during processing, eliminating the need to trust any single operator or intermediary.
The DeCC tech stack includes Zero-Knowledge Proofs (ZKP), which allow one party to prove a computation was correctly executed without revealing inputs; Multi-Party Computation (MPC), enabling multiple parties to jointly compute a function without exposing their individual data; Garbled Circuits (GC) and Fully Homomorphic Encryption (FHE), allowing direct computation on encrypted data; and Trusted Execution Environments (TEE), providing hardware-based isolation for secure execution. Each of these technologies has unique operational characteristics, trust models, and performance profiles. In practice, they are often integrated to address varying security, scalability, and deployment constraints in applications. The following sections outline the technical foundations of each and how they enable trustless, privacy-preserving computation in decentralized networks.
1. Zero-Knowledge Proofs (ZKP)
Zero-knowledge proofs are among the most impactful cryptographic innovations applied to blockchain systems. ZKPs allow one party (the prover) to convince another party (the verifier) that a given statement is true, without revealing any information beyond the validity of the statement itself. In other words, it lets someone prove they know something—like a password, private key, or solution to a problem—without disclosing the knowledge itself.
Take the “Where’s Waldo?” puzzle as an example. Suppose someone claims they found Waldo in a crowded image but don’t want to reveal his exact location. Instead of sharing the full picture, they take a close-up photo of Waldo’s face, timestamped, zoomed in so the rest of the image doesn’t appear. The verifier can confirm Waldo was found without knowing where he was in the image. The claim is proven correct, yet no extra information is disclosed.
More formally, zero-knowledge proofs allow a prover to demonstrate that a specific statement is true (e.g., “I know a key whose hash matches this public value” or “This transaction is valid under protocol rules”) without revealing the inputs or internal logic behind the computation. The verifier is convinced of the truth, but gains no additional information. One of the earliest and most widely used examples in blockchain is zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge). Zcash uses zk-SNARKs to let users prove they possess a private key and are sending a valid transaction, without revealing sender addresses, recipients, or amounts. The network only sees a short cryptographic proof confirming the transaction is legitimate.
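To make the prove-without-revealing mechanic concrete, here is a minimal sketch (in Python, with toy-sized parameters) of a Schnorr-style proof of knowledge made non-interactive via the Fiat–Shamir heuristic. It is not a zk-SNARK and is far simpler than the systems Zcash or Aleo use; the group parameters and function names are illustrative assumptions, not production values.

```python
import hashlib
import secrets

# Toy-sized safe-prime group for illustration only (real systems use
# 256-bit elliptic curves or pairing-friendly curves, not 9-bit primes).
p = 467              # safe prime: p = 2*q + 1
q = 233              # prime order of the subgroup generated by g
g = 4                # generator of the order-q subgroup

def fiat_shamir_challenge(*elements) -> int:
    """Derive the verifier's challenge from a hash (non-interactive)."""
    data = b"|".join(str(e).encode() for e in elements)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prover: convince anyone that we know x with y = g^x, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)            # fresh randomness per proof
    t = pow(g, r, p)                    # commitment
    c = fiat_shamir_challenge(g, y, t)  # challenge
    s = (r + c * x) % q                 # response
    return y, (t, s)

def verify(y: int, proof) -> bool:
    t, s = proof
    c = fiat_shamir_challenge(g, y, t)
    # g^s == t * y^c holds exactly when the prover knew x such that y = g^x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_key = 57                          # the prover's secret
public_value, proof = prove(secret_key)
assert verify(public_value, proof)       # verifier learns nothing about 57
```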
How ZKP enables confidential computation: In DeCC environments, ZKPs shine when you want to prove a computation was correctly performed on hidden data. The prover can perform the computation privately and publish a proof, rather than having everyone re-execute it (as in traditional blockchain validation). Others can use this tiny proof to verify the result is correct without seeing the underlying inputs. This protects privacy and significantly improves scalability (since verifying a succinct proof is much faster than re-running the entire computation). Projects like Aleo build entire platforms around this idea: users run programs offline on their private data and generate a proof; the network verifies the proof and accepts the transaction. The network doesn’t know the data or specifics, but knows whatever happened followed the smart contract rules. This effectively creates private smart contracts, impossible on Ethereum’s public VM without ZKP. Another emerging application is privacy-focused zk-rollups: they batch transactions not just for scalability, but also use ZK to hide details of each transaction (unlike regular rollups, whose data is usually still public).
ZK proofs are powerful because their security rests almost entirely on mathematics. The main caveat is that some schemes require a trusted setup "ceremony" (a multi-party cryptographic protocol that generates secret randomness), in which at least one participant must behave honestly. If the underlying cryptographic assumptions hold (e.g., certain problems remain hard to solve), proofs cannot be forged or used to assert invalid statements, and by design they reveal no additional information. This means you don't have to trust the prover at all; either the proof passes or it doesn't.
Limitations: Historically, the trade-off has been performance and complexity. Generating ZK proofs can be computationally intensive (orders of magnitude more expensive than the underlying computation). In early constructions, even proving simple statements could take minutes or longer, and the cryptography is complex and often requires special setups (trusted setup ceremonies), though newer proof systems like STARKs avoid some of these issues. There are also functional limitations: most ZK schemes involve a single prover proving something to many verifiers. They don't solve for private shared state (private data "owned" or composed by multiple users, as in auctions and AMMs). In other words, ZK lets me prove I derived Y correctly from my secret X, but it doesn't inherently allow two people to jointly compute a function of both their secrets. To address private shared state, ZK-based solutions often incorporate other technologies like MPC, GC, and FHE. Additionally, pure ZKPs usually assume the prover actually knows or possesses the data being proved.
There’s also the issue of size: early zk-SNARKs produced very short proofs (only hundreds of bytes), but some newer zero-knowledge proofs (especially those without trusted setup, like bulletproofs or STARKs) can be larger (tens of KB) and slower to verify. However, ongoing innovation (Halo, Plonk, etc.) is rapidly improving efficiency. Ethereum and others are heavily investing in ZK as a scaling and privacy solution.
2. Multi-Party Computation (MPC)
ZK proofs allow one party to prove something about their own private data, while secure multi-party computation (here primarily referring to secret-sharing-based techniques) solves a related but distinct challenge: how to collaboratively compute something without revealing the inputs. In MPC protocols, multiple independent parties (or nodes) jointly compute a function over all their inputs, such that each party learns only the result and nothing about the others' inputs. The foundations of secret-sharing-based MPC were laid by Ivan Damgård, now of the Partisia Blockchain Foundation, and his co-authors in papers from the late 1980s. Since then, a wide range of techniques has been developed.
A simple example: a group of companies wants to calculate the industry-wide average salary for a position, but none want to disclose internal data. Using MPC, each company inputs its data into a joint computation. The protocol ensures no company sees any other participant’s raw data, but all receive the final average. The calculation proceeds via cryptographic protocols across the group, eliminating the need for a central authority. In this setup, the process itself acts as a trusted intermediary.
How does MPC work? Each participant’s input is mathematically split into shares and distributed among all participants. For example, if my secret is 42, I might generate random numbers summing to 42 and give each participant one (appearing random). Any single share reveals nothing, but collectively they hold the information. Participants then perform computations on these shares, passing messages back and forth, until they obtain shares of the output, which can be combined to reveal the result. Throughout, no one sees the original input; they only see encoded or obscured data.
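Here is a minimal sketch of the share-and-combine idea just described, applied to the salary-averaging example above. It assumes honest-but-curious parties and uses simple additive sharing over a prime field; the salary figures, party count, and function names are made up for illustration.

```python
import secrets

P = 2**61 - 1   # a prime modulus; all arithmetic happens in this field

def share(secret: int, n_parties: int):
    """Split `secret` into n additive shares: each looks random on its own,
    but together they sum to the secret mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three companies want the average salary without revealing their own figure.
salaries = [82_000, 95_000, 71_000]
n_parties = 3   # the computation nodes (could be the companies themselves)

# Each company splits its input and hands one share to each node.
all_shares = [share(s, n_parties) for s in salaries]

# Each node only ever sees its own column of shares and adds them locally.
local_sums = [sum(col) % P for col in zip(*all_shares)]

# Combining the nodes' partial results reveals the total (and the average),
# but no node learned any individual salary.
total = reconstruct(local_sums)
print(total / len(salaries))   # ≈ 82666.67
```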
Why is MPC important? Because it is inherently decentralized, it doesn’t rely on a single secure box (like TEE) or a single prover (like ZK). It eliminates the need to trust any single party. A common definition describes it as follows: when computation is distributed among participants, no reliance on any single party is needed to protect privacy or ensure correctness. This makes it a cornerstone of privacy-preserving technology. If you have 10 nodes performing MPC, typically a large fraction must collude or be compromised to leak secrets. This aligns well with blockchain’s distributed trust model.
Challenges with MPC: Privacy comes at a cost. MPC protocols usually incur overhead, primarily in communication. To jointly compute, parties must exchange multiple rounds of encrypted messages. The number of communication rounds (sequential back-and-forth messages) and bandwidth requirements grow with function complexity and the number of participants. Ensuring efficient computation gets tricky as more parties join. There’s also the honest vs. malicious participant issue. Basic MPC protocols assume participants follow the protocol (may be curious but won’t deviate). Stronger protocols can handle malicious actors (who might send false information to disrupt privacy or correctness), but this adds more overhead to detect and mitigate cheating. Interestingly, blockchains can help by providing frameworks to penalize misbehavior. For example, if a node deviates from the protocol, staking and slashing mechanisms can be used, making MPC and blockchain complementary.
Significant progress has been made on performance. Preprocessing techniques perform the heavy cryptographic work before the actual inputs are known, for example by generating correlated randomness (so-called Beaver triples) that later accelerates multiplication. When the computation on real inputs runs (the online phase), it can be much faster. Some modern MPC frameworks can compute relatively complex functions among a few participants in seconds or less. Research continues on scaling MPC to many participants by organizing them into networks or committees.
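The following sketch shows how a pre-generated Beaver triple lets two parties multiply secret-shared values cheaply in the online phase. For brevity it assumes a trusted dealer hands out the triple; real protocols generate triples with MPC or oblivious transfer during preprocessing, and all names here are illustrative.

```python
import secrets

P = 2**61 - 1   # prime field

def share2(v):
    """Additive 2-party sharing of v mod P."""
    s0 = secrets.randbelow(P)
    return s0, (v - s0) % P

# --- Preprocessing (no real inputs needed yet) -----------------------------
# A Beaver triple (a, b, c) with c = a*b, handed out in shared form.
# A dealer generates it here; real protocols produce triples via MPC/OT.
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = (a * b) % P
a_sh, b_sh, c_sh = share2(a), share2(b), share2(c)

# --- Online phase: multiply secret-shared x and y ---------------------------
x, y = 1234, 5678
x_sh, y_sh = share2(x), share2(y)

# Each party opens its masked shares; combining them reveals d = x - a and
# e = y - b, which leak nothing about x or y because a and b are random.
d = ((x_sh[0] - a_sh[0]) + (x_sh[1] - a_sh[1])) % P
e = ((y_sh[0] - b_sh[0]) + (y_sh[1] - b_sh[1])) % P

# Each party computes its share of x*y using only public d, e and its shares.
z_sh = []
for i in range(2):
    zi = (c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P
    if i == 0:               # exactly one party adds the public d*e term
        zi = (zi + d * e) % P
    z_sh.append(zi)

assert sum(z_sh) % P == (x * y) % P   # shares of the product, never the inputs
```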
MPC is particularly important for applications like private multi-user dApps (e.g., auctions with sealed bids executed via MPC), privacy-preserving machine learning (multiple entities jointly train models without sharing data—a vibrant area known as federated learning with MPC), and distributed secret management (like threshold key schemes). A concrete crypto example is Partisia Blockchain, which integrates MPC into its core to enable enterprise-grade privacy on public blockchains. Partisia uses an MPC node network to process private smart contract logic, then publishes commitments or encrypted results on-chain.
3. Garbled Circuits (GC)
Garbled circuits are a fundamental concept in modern cryptography and among the earliest proposed general solutions for computing on encrypted data. Beyond enabling encrypted computation, GC methods are used in various privacy-preserving protocols, including zero-knowledge proofs and anonymous/unlinkable tokens.
What is a circuit? A circuit is a universal computational model that can represent any function, from simple arithmetic to complex neural networks. Though the term is often associated with hardware, circuits are widely used across DeCC technologies, including ZK, MPC, GC, and FHE. Circuits consist of input wires, intermediate gates, and output wires. When values (boolean or arithmetic) are supplied to the input wires, gates process them and produce corresponding outputs, and the layout of the gates defines the function being computed. Functions or programs are converted into circuit representations using hardware description languages like VHDL or domain-specific cryptographic compilers.
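As a concrete example of the circuit model (not tied to any particular DeCC project), here is a 1-bit full adder expressed as a list of gates and evaluated wire by wire in Python. Note that every intermediate value is visible in plaintext, which is exactly what garbling removes.

```python
# A circuit is just a list of gates wired together. This one is a 1-bit
# full adder: inputs (x, y, cin) -> outputs (sum, cout).
GATES = [
    # (output_wire, operation, input_wire_a, input_wire_b)
    ("w3",   "XOR", "x",  "y"),
    ("sum",  "XOR", "w3", "cin"),
    ("w4",   "AND", "x",  "y"),
    ("w5",   "AND", "w3", "cin"),
    ("cout", "OR",  "w4", "w5"),
]

OPS = {"XOR": lambda a, b: a ^ b,
       "AND": lambda a, b: a & b,
       "OR":  lambda a, b: a | b}

def evaluate(circuit, inputs):
    wires = dict(inputs)                      # wire name -> bit value
    for out, op, a, b in circuit:
        wires[out] = OPS[op](wires[a], wires[b])
    return wires

w = evaluate(GATES, {"x": 1, "y": 1, "cin": 0})
print(w["sum"], w["cout"])   # 0 1  (1 + 1 + 0 = binary 10)
```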
What is a garbled circuit? Standard circuits leak all data during execution—values on input and output wires and intermediate gate outputs are in plaintext. In contrast, garbled circuits encrypt all these components. Inputs, outputs, and intermediate values are transformed into encrypted values (garbled text), and gates become garbled gates. The garbled circuit algorithm is designed so that evaluating the circuit reveals no information about the original plaintext values. The process of converting plaintext to garbled text and decoding it back is called encoding and decoding.
How does GC solve encrypted data computation? Garbled circuits were first proposed by Andrew Yao in 1982 as the first general solution for computing on encrypted data. His motivating example, known as the millionaires' problem, asks how parties can learn who among them is wealthiest without revealing their actual wealth. Using garbled circuits, each participant encrypts their input (their wealth) and shares the encrypted version with the others. The group then evaluates, step by step, a circuit designed to compute the maximum value using encrypted gates. The final output (e.g., the identity of the richest person) is decrypted, but no one learns the exact inputs of the others. While this example uses a simple max function, the same method applies to more complex tasks, including statistical analysis and neural network inference.
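Below is a minimal sketch of Yao-style garbling for a single AND gate, using SHA-256 as the row cipher. Real implementations add point-and-permute, free-XOR, half-gates, and oblivious transfer for delivering input labels, all omitted here; the structure only illustrates why the evaluator learns the output label without learning what any label means.

```python
import hashlib
import secrets

def H(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()          # 32-byte "key" per row

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- Garbler ---------------------------------------------------------------
# Two random 16-byte labels per wire: one stands for 0, the other for 1.
labels = {w: [secrets.token_bytes(16), secrets.token_bytes(16)]
          for w in ("A", "B", "OUT")}

# Encrypt the output label for every input combination; pad with zeros so the
# evaluator can recognize the single row it is able to decrypt.
table = []
for a_bit in (0, 1):
    for b_bit in (0, 1):
        row_key = H(labels["A"][a_bit], labels["B"][b_bit])
        plaintext = labels["OUT"][a_bit & b_bit] + bytes(16)
        table.append(xor(row_key, plaintext))
secrets.SystemRandom().shuffle(table)              # hide which row is which

# --- Evaluator -------------------------------------------------------------
# Receives exactly one label per input wire (here, the labels for A=1, B=1;
# in a real protocol these arrive via oblivious transfer). It never learns
# whether a given label means 0 or 1.
a_label, b_label = labels["A"][1], labels["B"][1]
row_key = H(a_label, b_label)
for ct in table:
    candidate = xor(ct, row_key)
    if candidate.endswith(bytes(16)):              # only the right row decrypts cleanly
        out_label = candidate[:16]
        break

# The garbler's decoding information maps the output label back to a bit.
print(labels["OUT"].index(out_label))              # 1  (= 1 AND 1)
```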
Breakthrough advancements making GC applicable to DeCC: Recent research led by Soda Labs has applied garbled circuit technology to decentralized environments. These advances focus on three key areas: decentralization, composability, and public auditability. In decentralized setups, computation is split between two independent groups: garblers (responsible for generating and distributing garbled circuits) and evaluators (responsible for executing the circuits). Garblers provide circuits to a network of evaluators, who run them on demand as instructed by smart contract logic.
This separation enables composability—the ability to build complex computations from smaller atomic operations. Soda Labs achieves this by generating a continuous stream of garbled circuits corresponding to low-level virtual machine instructions (e.g., for EVM). These building blocks can be dynamically assembled at runtime to execute more complex tasks.
For public auditability, Soda Labs proposes a mechanism allowing external parties (whether or not they participated in the computation) to verify that results were computed correctly. This verification can occur without exposing the underlying data, adding extra trust and transparency.
Importance of GC to DeCC: Garbled circuits provide low-latency, high-throughput computation on encrypted inputs. As demonstrated on the COTI Network mainnet, initial implementations support roughly 50 to 80 confidential ERC20 transactions per second (ctps), with future versions expected to achieve higher throughput. GC protocols rely on widely adopted cryptographic standards (like AES) and libraries such as OpenSSL, extensively used in healthcare, finance, and government. AES also offers quantum-resistant variants, supporting future compatibility with post-quantum security requirements.
GC-based systems are compatible with client environments and don’t require specialized hardware or GPUs, unlike certain TEE or FHE deployments. This reduces infrastructure costs and enables deployment on a wider range of devices, including lower-capacity machines.
Challenges with GC: The main limitation of garbled circuits is communication overhead. Current implementations require sending approximately 1MB of data per confidential ERC20 transaction to evaluators. However, this data can be preloaded long before execution, so it doesn’t introduce latency during real-time use. Ongoing improvements in bandwidth availability—including trends described by Nielsen’s Law (predicting bandwidth doubles every 21 months)—and active research into garbled circuit compression help reduce this overhead.
4. Fully Homomorphic Encryption (FHE)
Fully Homomorphic Encryption is often seen as cryptographic magic. It allows arbitrary computations to be performed on encrypted data while it remains encrypted, and upon decryption, yields the correct result—as if computed on plaintext. In other words, with FHE, you can outsource computation on private data to an untrusted server, which operates solely on ciphertext and still produces a ciphertext you can decrypt for the correct answer—all without the server seeing your data or the plaintext result.
For a long time, FHE was purely theoretical. The concept has been known since the 1970s, but a practical scheme wasn’t discovered until 2009. Since then, steady progress has been made in reducing FHE slowness. Even so, it remains computationally intensive. Operations on encrypted data may be thousands or millions of times slower than on plaintext. But operations once astronomically slow are now merely quite slow, and optimizations and dedicated FHE accelerators are rapidly improving the situation.
Why is FHE revolutionary for privacy? With FHE, you can have a single server or blockchain node compute for you, and as long as encryption holds, the node learns nothing. It’s a very pure form of confidential computation—data remains encrypted everywhere, always. For decentralization, you can also have multiple nodes each perform FHE computation for redundancy or consensus, but none gain any secret information. They all just operate on ciphertext.
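Modern FHE schemes (BGV, BFV, CKKS, TFHE) are too involved for a short listing, but the core idea of a server computing on ciphertexts it cannot read can be illustrated with Paillier encryption, which is additively (not fully) homomorphic. The sketch below uses toy-sized primes purely for illustration (Python 3.9+); it is not FHE and not production cryptography.

```python
import math
import secrets

# Toy-sized primes for illustration only; real deployments use >=2048-bit moduli.
p, q = 10007, 10009
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                 # valid because gcd(lam, n) = 1 here

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n)
        if r > 0 and math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    """Homomorphic addition: multiply ciphertexts, never see the plaintexts."""
    return (c1 * c2) % n_sq

# An untrusted server can sum encrypted values it cannot read.
c_total = add_encrypted(encrypt(1500), encrypt(2700))
assert decrypt(c_total) == 4200
```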
In the context of blockchains, FHE opens possibilities for fully encrypted transactions and smart contracts. Imagine an Ethereum-like network where you send encrypted transactions to miners, who execute smart contract logic on encrypted data and include an encrypted result on-chain. You or an authorized party can later decrypt the result. To others, it’s unintelligible gibberish, but they might have a proof the computation was valid. This is where FHE combined with ZK could play a role—proving encrypted transactions follow the rules. This is essentially what the Fhenix project is pursuing: an EVM-compatible Layer-2 where all computation natively supports FHE.
Practical use cases enabled by FHE: Beyond blockchains, FHE is already attractive for cloud computing. For example, sending encrypted database queries to the cloud and receiving encrypted answers—only you can decrypt them. In blockchain contexts, a compelling scenario is privacy-preserving machine learning. FHE allows decentralized networks to run AI model inference on users’ encrypted data, so the network learns neither your input nor result—only you know upon decryption. Another use case is in public sector or health data collaboration. Different hospitals could use a shared or federated key setup to encrypt their patient data, and a network of nodes could compute aggregate statistics over all encrypted hospital data and deliver the result to researchers for decryption. This is similar to what MPC can do, but FHE might achieve it with a simpler architecture—requiring only an untrusted cloud or miner network to crunch numbers—at the cost of heavier per-operation computation.
Challenges with FHE: The biggest challenge is performance. Despite progress, FHE is typically still a thousand to a million times slower than plaintext operations, depending on the computation and scheme. This means it’s currently only viable for limited tasks, such as simple functions or batching many operations at once in certain schemes, but not yet a technology you’d use to run complex virtual machines executing millions of steps—at least not without strong hardware support. There’s also the issue of ciphertext size. Fully homomorphic operations tend to bloat data. Some optimizations, like bootstrapping—which refreshes ciphertexts accumulating noise during operations—are necessary for arbitrarily long computations and add overhead. However, many applications don’t need fully arbitrary depth. They can use leveled HE, which performs a fixed number of multiplications before decryption and can avoid bootstrapping.
Integrating FHE into blockchains is complex. If every node must perform FHE operations for every transaction, with current technology this could be extremely slow. That’s why projects like Fhenix start with L2s or sidechains, where heavy FHE computation might be done by a powerful coordinator or subset of nodes, and the L2 batches results. Over time, as FHE becomes more efficient or with specialized FHE accelerator ASICs or GPUs, broader adoption may follow. Notably, several companies and academics are actively researching hardware to accelerate FHE, recognizing its importance for the future of data privacy in both Web2 and Web3 use cases.
Combining FHE with other technologies: Often, FHE may be combined with MPC or ZK to address its weaknesses. For example, multiple parties could hold shares of an FHE key, so no single party can decrypt alone—effectively creating a threshold FHE scheme. This combines MPC with FHE to avoid single-point decryption failure. Alternatively, zero-knowledge proofs can be used to prove an FHE-encrypted transaction is correctly formatted without decrypting it, so blockchain nodes can verify its validity before processing. This is what some call the ZK-FHE hybrid model. Indeed, a composable DeCC approach might use FHE for the heavy lifting of data processing—since it’s one of the few methods that can compute while staying fully encrypted—and use ZK proofs to ensure the computation didn’t do anything invalid, or to let others verify results without seeing them.
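Threshold FHE itself is beyond a short example, but the key-splitting idea can be illustrated with exponential ElGamal, which is additively homomorphic and supports a simple two-party threshold decryption: the secret key is additively shared, each holder produces a partial decryption, and neither can recover the plaintext alone. Everything below (parameters, party count, the voting scenario) is an illustrative assumption, not a real threshold-FHE construction.

```python
import secrets

# Toy-sized safe-prime group, illustration only.
p, q, g = 467, 233, 4

# The decryption key is never held in one place: it is split additively
# between two key-holders, x = x1 + x2 (mod q).
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
y = pow(g, (x1 + x2) % q, p)             # joint public key

def encrypt(m: int):
    """Exponential ElGamal: additively homomorphic for small messages."""
    r = secrets.randbelow(q)
    return pow(g, r, p), (pow(g, m, p) * pow(y, r, p)) % p

def add_encrypted(ct1, ct2):
    return (ct1[0] * ct2[0]) % p, (ct1[1] * ct2[1]) % p

def partial_decrypt(ct, key_share: int) -> int:
    return pow(ct[0], key_share, p)       # reveals nothing on its own

def combine(ct, partials) -> int:
    d = 1
    for s in partials:
        d = (d * s) % p
    gm = (ct[1] * pow(d, -1, p)) % p      # = g^m
    # Recover small m by brute force (fine for toy tallies like vote counts).
    for m in range(q):
        if pow(g, m, p) == gm:
            return m

# Example: two encrypted votes are tallied; neither key-holder can decrypt alone.
tally = add_encrypted(encrypt(1), encrypt(1))
partials = [partial_decrypt(tally, x1), partial_decrypt(tally, x2)]
assert combine(tally, partials) == 2
```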
5. Trusted Execution Environments (TEE)
Trusted Execution Environments are foundational components of decentralized confidential computing. TEEs are secure regions within a processor that isolate code and data from the rest of the system, ensuring their contents remain protected even if the operating system is compromised. TEEs provide confidentiality and integrity during computation with minimal performance overhead. This makes them one of the most practical technologies available for secure general-purpose computation.
Think of it this way: a TEE is like reading a confidential document in a locked room that no one else can enter or peek into. You’re free to consult and process the document, but when you leave, you take the results and lock everything else behind. Outsiders never directly see the document—only the final outcome you choose to reveal.
Modern TEEs have advanced significantly. Intel’s TDX and AMD SEV support secure execution of entire virtual machines, and NVIDIA’s high-performance GPUs (including H100 and H200) now feature TEE capabilities. These upgrades make it possible to run arbitrary applications in confidential environments, including machine learning models, backend services, and user-facing software. For example, Intel TDX combined with NVIDIA H100 can run inference on models with over 70 billion parameters with minimal performance loss. Unlike cryptographic methods requiring custom tools or restricted environments, modern TEEs can run containerized applications unmodified. This allows developers to write applications in Python, Node.js, or other standard languages while maintaining data confidentiality.
A classic example is Secret Network, the first blockchain to achieve general-purpose smart contracts with private state by leveraging TEEs (specifically Intel SGX). Each Secret node runs the smart contract execution runtime within an enclave (secure zone). Transactions sent to smart contracts are encrypted, so only the enclave can decrypt them, execute the contract, and produce encrypted output. The network uses remote attestation to ensure nodes are running genuine SGX and approved enclave code. This way, smart contracts on Secret Network can process private data, such as encrypted inputs, that even node operators cannot read. Only the enclave can, and it only releases what it should—typically just a hash or encrypted result. Phala Network and Marlin take similar, though distinct, approaches. Their architectures center on TEE-driven worker nodes that perform secure off-chain computations and return verified results to the blockchain. This setup allows Phala to protect data confidentiality and execution integrity without revealing raw data to any external party. The network aims for scalability and interoperability, supporting privacy-preserving workloads across decentralized apps, cross-chain systems, and AI-related services. Like Secret Network, Phala demonstrates how TEEs can extend confidential computing to decentralized environments by isolating sensitive logic within verifiable hardware enclaves.
Modern deployments of TEE in DeCC incorporate several best practices:
- Remote attestation and open-source runtimes: Projects release the code that runs inside enclaves (often modified WASM interpreters or specialized runtimes) and provide programs to prove it. For example, each Secret Network node generates an attestation report proving it runs genuine SGX with the Secret enclave code. Other nodes and users can verify this proof before trusting the node to process encrypted queries. By using open-source runtime code, the community can audit what the enclave should do, though they must still trust that the hardware executes only that code.
- Redundancy and consensus: Some systems have multiple nodes or enclaves perform the same task and compare results, rather than relying on a single enclave. This resembles MPC methods but at a higher level. If one enclave deviates or is compromised and produces a different result, majority voting can detect it, provided not all enclaves are compromised. This was the approach of the early Enigma project (which evolved into Secret): multiple SGX enclaves were planned to compute and cross-check. In practice, some networks currently trust a single enclave per contract for performance, but designs can scale to multi-enclave consensus for higher security.
- Temporary keys and frequent resets: To reduce key-leakage risk, TEEs can generate new encryption keys for each session or task and avoid storing long-term secrets. For example, a DeCC service handling confidential data might use temporary session keys that are frequently discarded, so even if a breach occurs later, past data may not be exposed. Key rotation and forward secrecy are recommended so that even if an enclave is breached at time T, data prior to T remains secure.
- For privacy, not consensus integrity: As noted, TEEs are best used for protecting privacy, not core consensus integrity. Blockchains might use TEEs to keep data confidential, but not to validate blocks or protect ledger state transitions, which are better left to consensus protocols. In this setup, a compromised enclave might leak some private information, but could not, for example, forge token transfers on the ledger. This design relies on cryptographic consensus for integrity and enclaves for confidentiality, a separation of concerns that limits the impact of a TEE failure.
- Confidential Virtual Machine deployment: Some networks have begun deploying full Confidential Virtual Machines (CVMs) using modern TEE infrastructure, including Phala's cloud platform, Marlin Oyster Cloud, and SecretVM. These CVMs can run containerized workloads in secure environments, enabling general-purpose privacy-preserving applications in decentralized systems.
TEEs can also be combined with other technologies. A promising idea is running MPC inside a TEE. For example, multiple enclaves on different nodes, each holding part of secret data, could jointly compute via MPC while each enclave keeps its share secure. This hybrid provides defense-in-depth: attackers must compromise both the enclave and corrupt enough participants to access all secret shares. Another combination uses ZK proofs to prove what the enclave did. For instance, an enclave could output a short zk-SNARK proving it correctly followed the protocol on certain encrypted inputs. This reduces trust in the enclave. Even if the enclave is malicious, unless it also breaks ZK cryptography, it can’t produce a valid proof if it deviates from prescribed computation. These ideas are still in research stages but actively explored.
In current practice, projects like TEN (The Encrypted Network, an Ethereum Layer-2) use secure enclaves to achieve confidential rollups. TEN's approach uses enclaves to encrypt transaction data and privately execute smart contracts, while still producing an optimistically verified rollup. The team emphasizes that secure enclaves provide high confidence in the code being run, meaning users can be sure how their data is processed (because the code is known and attested) even if they cannot see the data itself. This highlights a key advantage of TEEs: deterministic, verifiable execution. Everyone can agree on the hash of the code to run, and the enclave ensures only that code executes, while keeping inputs hidden.
Composable DeCC Tech Stack (Hybrid Approaches)
An exciting aspect of Privacy 2.0 is that these technologies aren’t isolated (though they can and do get used independently); they can be combined. Just as traditional cloud security uses layered defenses like firewalls, encryption, and access control, DeCC confidential computing can layer technologies to leverage their strengths.
Several combinations are being explored: MPC with TEE, ZK with TEE, GC with ZK, FHE with ZK, and so on. The motivation is clear: no single technology is perfect, and combining them can compensate for individual limitations.
Here are some emerging patterns:
- MPC with TEE (MPC inside enclaves): In this approach, an MPC network runs where each node's computation occurs within a TEE. For example, consider a network of ten nodes using MPC to jointly analyze encrypted data. If an attacker compromises one node, they only access an enclave holding a single secret share, which alone reveals nothing. Even if the SGX on that node is breached, only a small portion of the data is exposed; to compromise the entire computation, a certain number of enclaves must be breached. This greatly enhances security, assuming enclave integrity holds. Trade-offs include higher MPC overhead and dependence on TEE, but for high-assurance scenarios, this hybrid is justified. The model effectively layers cryptographic and hardware trust guarantees.
- ZK proofs with MPC or FHE: ZK proofs can act as an audit layer. For example, an MPC network computes a result, then jointly generates a zk-SNARK proving the computation followed the defined protocol without exposing inputs. This adds verification confidence for external consumers (e.g., a blockchain receiving the result). Similarly, in an FHE environment, since data stays encrypted, ZK proofs can verify computation was correctly performed on ciphertext inputs. Projects like Aleo adopt this strategy: computation is done privately, but verifiable proofs attest to correctness. The complexity is non-trivial, but the potential for composability is huge.
- ZK proofs with GC: Zero-knowledge proofs are often used with garbled circuits to guard against potentially malicious garblers. In more complex GC-based systems involving multiple garblers and evaluators, ZK proofs also help verify that individual garbled circuits are correctly composed into larger computational tasks.
- TEE with ZK (Provable Shielded Execution): TEEs can produce proofs of correct execution. For example, in a sealed-bid auction, an enclave can compute the winner and output a ZK proof confirming the computation was correctly executed on encrypted bids, without revealing any bid details. This allows anyone to verify results with limited trust in the enclave. Though largely experimental, early research prototypes are exploring these confidential knowledge proofs to combine TEE performance with ZK verifiability.
- FHE with MPC (Threshold FHE): A known challenge with FHE is that the decryption step exposes the result to the entity holding the key. To decentralize this, MPC or secret sharing can split the FHE private key among multiple parties. After computation, a collective decryption protocol is executed, ensuring no single party can independently decrypt the result. This structure eliminates centralized key custody, making FHE suitable for threshold use cases like private voting, encrypted mempools, or collaborative analysis. Threshold FHE is an active research area closely tied to blockchains.
- Secure Hardware with Cryptography for Performance Isolation: Future architectures might assign different workloads to different privacy-preserving technologies. For example, compute-heavy AI tasks could run in secure enclaves, while more security-critical logic (like key management) is handled by cryptographic protocols such as MPC, GC, or FHE. Conversely, enclaves could handle lightweight tasks that are critical for performance but whose exposure would have limited consequences. By decomposing an application's privacy needs, developers can assign each component to the most appropriate trust layer, similar to how traditional systems layer encryption, access control, and HSMs.
The composable DeCC tech stack model emphasizes that applications don’t need to pick one privacy method. They can integrate multiple DeCC technologies, customized to specific component needs. For example, many emerging privacy networks are being built modularly, supporting both ZK and MPC, or offering configurable confidentiality layers based on use case.
Admittedly, combining technologies increases engineering and computational complexity, and performance costs may be too high in some cases. Yet for high-value workflows—especially in finance, AI, or governance—this layered security model is justified. Early examples are already operational. Oasis Labs has prototyped a TEE-MPC hybrid for private data markets. Academic projects have demonstrated MPC and GC computations verified by zk-SNARKs, highlighting growing interest in cross-model validation.
Future dApps might run encrypted storage via AES or FHE, use a mix of MPC, GC, and TEE for computation, and publish verifiable ZK proofs on-chain. Though invisible to users, this privacy tech stack will enforce strong protections against data leaks and unauthorized inference. The ultimate goal is to make this level of privacy infrastructure default and seamless, delivering applications that feel familiar but operate under fundamentally different trust assumptions.
VC Investment and Developer Momentum
Privacy-preserving computation has become a notable area of capital allocation in crypto, with investment activity steadily increasing. Investors and builders increasingly believe decentralized confidential computing (DeCC) will unlock new market opportunities by enabling private applications previously infeasible on public blockchain infrastructure.
Aggregate venture funding for leading DeCC projects has exceeded several hundred million dollars. Notable examples include:
- Aleo, a Layer-1 network using zero-knowledge proofs to build private applications, has raised approximately $228 million. This includes a $28 million Series A in 2021 led by a16z and a $200 million Series B in 2022 at a $1.45 billion valuation. Aleo is investing in developer tools and its broader ecosystem of privacy-preserving applications.
- Partisia Blockchain, combining secure multi-party computation (MPC) with blockchain infrastructure, raised $50 million in 2021 to expand its privacy-focused Layer-1 and Layer-2 platforms. Funding came from strategic and institutional supporters focused on confidential data use cases.
- Fhenix, an Ethereum Layer-2 implementing fully homomorphic encryption (FHE), raised $15 million in a Series A round in June 2024, bringing total funding to $22 million. Early investors include a16z and Hack VC, reflecting confidence in the viability of encrypted smart contract execution.
- Mind Network, focused on building an FHE-based privacy layer for data processing, raised $10 million in a Pre-A round in September 2024, bringing total funding to $12.5 million. The project targets applications in secure voting, private data sharing, and confidential AI execution.
- Arcium, a confidential computing network on Solana, raised $5.5 million from Greenfield Capital in early 2025, bringing total funding to $9 million. Arcium positions itself as an encrypted computation layer for high-throughput chains.
- COTI, in partnership with Soda Labs, committed $25 million from its ecosystem fund to develop an MPC-based privacy Layer-2. The collaboration focuses on advancing garbled circuit technology for privacy-preserving payments.
- TEN, an Ethereum-based Layer-2 using trusted execution environments (TEE), raised $9 million in a late 2023 funding round led by the R3 Consortium, bringing total funding to $16 million. The team includes engineers with experience building permissioned blockchain infrastructure.
- Penumbra, a Cosmos-based privacy zone for DeFi, raised $4.75 million in a seed round in 2021 led by Dragonfly Capital. The project aims to support private swaps and MEV-resistant trading.
- Aleph Zero, a privacy-enabled Layer-1 using DAG consensus and zero-knowledge tech, raised about $21 million through public and private token sales. It positions itself as a base layer with built-in confidentiality features.
- Established projects continue contributing to this momentum. Secret Network, which evolved from the Enigma project, raised $45 million via a 2017 token sale to launch the first TEE-based smart contract platform, with total investments reaching up to $400 million. iExec, a TEE-supported decentralized cloud platform, raised about $12 million and has since received grants to advance confidential data tools.
If including token allocations, ecosystem funds, and public sale proceeds, total investment in the DeCC space may approach $1 billion. This is comparable to funding levels seen in Layer-2 scaling or modular infrastructure sectors.
The DeCC ecosystem is also expanding through partnerships and open-source collaboration. Organizations like the Confidential Computing Consortium have onboarded blockchain-based members such as iExec and Secret Network to explore standards across private computing. Academic initiatives, developer hackathons, and privacy-focused conferences are cultivating technical talent and community engagement.
Projects are also improving accessibility via SDKs, languages, and APIs that abstract away cryptographic complexity. For example, tooling frameworks like Circom, ZoKrates, and Noir simplify zero-knowledge development, while platforms like Arcium’s Arcis lower the barrier to building with MPC. Developers can now integrate privacy into decentralized applications without advanced cryptography expertise.
Collaborations with enterprises and government agencies further validate the field. Partisia partnered with Okinawa Institute of Science and Technology (OIST) on joint cryptography research, while Secret Network collaborated with Eliza Labs to develop confidential AI solutions.
With funding and ecosystem activity continuing to grow, DeCC is becoming one of the most well-funded and fastest-moving areas in crypto infrastructure, drawing high interest from builders and institutional stakeholders alike. That said, as with any emerging tech cycle, many projects in the space may fail to realize their vision or gain meaningful adoption. However, a select few may endure, setting technological and economic standards for privacy-preserving computation across decentralized systems.
DeCC Ecosystem
The Decentralized Confidential Computing (DeCC) ecosystem consists of technologies and frameworks enabling secure computation across distributed systems. These tools allow sensitive data to be processed, stored, and transmitted without exposure to any single party. By combining cryptographic techniques, hardware-enforced isolation, and decentralized network infrastructure, DeCC addresses critical privacy limitations in public blockchain environments. This includes challenges related to transparent smart contract execution, unprotected off-chain data usage, and difficulty safeguarding confidentiality in systems designed for openness.
DeCC infrastructure can be broadly categorized into six technological pillars:
- Zero-Knowledge Proofs (ZKP) for verifiable private computation.
- Multi-Party Computation (MPC) for collaborative computation without data sharing.
- Garbled Circuits (GC) for computation on data encrypted with standard symmetric ciphers.
- Fully Homomorphic Encryption (FHE) for direct computation on encrypted inputs.
- Trusted Execution Environments (TEE) for hardware-based confidential processing.
- Decentralized Privacy Networks for routing and infrastructure-level metadata protection.
These components are not mutually exclusive and are often combined to meet specific performance, security, and trust requirements. The following sections highlight projects implementing these technologies and how they fit into the broader DeCC landscape.
Projects Based on Fully Homomorphic Encryption (FHE)
Many DeCC projects use Fully Homomorphic Encryption (FHE) as their primary mechanism for encrypted computation. These teams focus on applying FHE to private smart contracts, secure data processing, and confidential infrastructure. While FHE is more computationally intensive than other approaches, its ability to compute on encrypted data without decryption offers strong security guarantees. Key projects in this category include Octra, Mind Network, and Fhenix, each experimenting with different architectures and use cases to bring FHE closer to practical deployment.
Fhenix
Fhenix is an FHE R&D company building scalable, real-world fully homomorphic encryption applications. Fhenix’s FHE Co-Processor (CoFHE) is an off-chain computation layer designed to securely process encrypted data. It offloads heavy cryptographic operations from the main blockchain (e.g., Ethereum or L2 solutions) to enhance efficiency, scalability, and privacy without compromising decentralization, while providing easy integration. This architecture enables computation on encrypted data without decryption, ensuring end-to-end privacy for decentralized applications. Fhenix is fully EVM-compatible, allowing developers to easily and quickly build FHE-based applications using Solidity and existing Ethereum tools like Hardhat and Remix. Its modular design includes components like FheOS and Fhenix’s Threshold Network, managing FHE operations and decryption requests respectively, offering a flexible and adaptable platform for privacy-preserving applications.
Key Innovations & Features
- Seamless Integration with EVM Chains: Fhenix enables any EVM chain to access encryption capabilities with minimal modifications. Developers can integrate FHE-based encryption into their smart contracts with a single line of Solidity code, simplifying adoption of confidential computing across various blockchain networks.
- Fhenix Co-Processor: Fhenix offers a co-processor solution that can natively connect to any EVM chain, providing FHE-based encryption services. This approach allows existing blockchain platforms to enhance their privacy features without overhauling their infrastructure.
- Institutional Adoption & Partnerships: FHE-based confidentiality is crucial for institutional adoption of Web3 technologies. A proof-of-concept project with JPMorgan Chase demonstrates the platform's potential to meet strict privacy requirements in financial services.
- Enhanced Decryption Solutions: The Fhenix team developed a high-performance threshold network combining MPC and FHE for decrypting FHE operations. Advances in the Fhenix TSN reduce decryption latency and computational overhead, improving user experience for privacy-preserving applications.
- Diverse Use Cases in Development: Fhenix focuses on bringing encrypted computing to developers within existing ecosystems (like EVM chains). Current developments include confidential lending platforms, dark pools for private trading, and confidential stablecoins—all benefiting from FHE's ability to maintain data privacy during computation.
Mind Network
Mind Network is a decentralized platform pioneering the integration of FHE to establish a fully encrypted Web3 ecosystem. Mind Network acts as an infrastructure layer, enhancing security across data, consensus mechanisms, and transactions by enabling computation on encrypted data without decryption.
Key Innovations & Features
- First Project to Bring FHE to Mainnet: Mind Network achieved a significant milestone by bringing its FHE to mainnet, enabling quantum-resistant, fully encrypted data computation. This advancement ensures data remains secure throughout its lifecycle (storage, transmission, and processing), addressing inherent vulnerabilities in traditional encryption methods.
- Introduction of HTTPZ: Building on the standard HTTPS protocol, Mind Network is working toward realizing Zama's vision of HTTPZ, a next-generation framework that keeps data persistently encrypted for a fully encrypted web. This ensures data stays encrypted during storage, transmission, and computation, eliminating reliance on centralized entities and enhancing security across applications including AI, DeFi, DePIN, RWA, and gaming.
- Agentic World: Mind Network's FHE computing network powers AI agents in Agentic World, built on three pillars:
  - Consensus Security: AI agents in distributed systems must reach reliable decisions without manipulation or conflict.
  - Data Privacy: AI agents can process encrypted data without exposing it.
  - Value Alignment: Ethical constraints are embedded into AI agents to ensure their decisions align with human values.
- FHE Bridge for Cross-Chain Interoperability: Mind Network provides an FHE bridge to facilitate seamless decentralized ecosystems. The bridge enables secure interoperability between different blockchain networks, allowing encrypted data to be processed and transferred across chains without exposing sensitive information, supporting the development of complex multi-chain applications. It is currently being integrated with Chainlink's CCIP.
- DeepSeek Integration: Mind Network became the first FHE project integrated with DeepSeek, a platform known for its advanced AI models. The integration leverages Mind Network's FHE Rust SDK to protect encrypted AI consensus.
Octra
Octra is a general-purpose, chain-agnostic network founded by former VK (Telegram) and NSO team members with a decade of experience in cryptography. Since 2021, Octra has been developing a proprietary Hypergraph-based Fully Homomorphic Encryption (FHE) scheme (HFHE) enabling near-instant computation on encrypted data. Unlike other FHE projects, Octra is entirely independent, not relying on third-party technologies or licenses.
Key Innovations & Features
- Proprietary HFHE Scheme: Octra's HFHE uses hypergraphs to enable efficient binary operations, supporting parallel computation in which different nodes and hyperedges are processed independently.
- Isolated Execution Environment: The network supports isolated execution environments, enhancing security and privacy for decentralized applications.
- Diverse Codebase: Octra is developed primarily in OCaml and C++, with Rust support for contracts and interoperability solutions.