What is confidential computing?
Key takeaways
- Confidential computing is a way to keep data private even while it’s being used, not just when it’s stored or sent somewhere.
- It does this using trusted execution environments (TEEs), which act like secure, locked spaces inside a computer’s processor.
- These spaces let organisations analyse data or train models without seeing the raw information.
- This makes it easier to work with sensitive data in the cloud while keeping it safe from other systems or even the people running them.
More and more of the world’s most valuable data is processed in the cloud, but keeping that data private while it’s in use remains a major challenge. Encrypting data at rest or in transit is now standard practice, but once data needs to be actively used, analysed, queried, or processed, it typically becomes exposed.
That’s where confidential computing comes in.
This article explains what confidential computing is, how it works, and why it’s becoming essential for working with sensitive and regulated data. Whether you’re in healthcare, finance, machine learning, or privacy engineering, we’ll break down the key concepts, technologies, and use cases in a way that’s easy to follow.
Confidential computing definition
Confidential computing is a privacy-enhancing technology (PET) that protects sensitive data while it’s being processed, not just at rest or in transit. It does this by using secure, isolated environments known as trusted execution environments (TEEs). Within a TEE, data can be processed without being exposed to the host system or cloud provider: it is decrypted only inside the TEE and stays encrypted everywhere else.
How confidential computing works
Confidential computing protects data while it’s being used. This is achieved through a hardware-based innovation known as a TEE.
What is a trusted execution environment (TEE)?
A TEE is a secure, isolated area within a processor. It runs specific code and handles sensitive data in a way that’s protected from the rest of the system, including the operating system, hypervisors, and cloud administrators. This means that even if a machine is compromised, data and workloads inside the TEE remain private and untampered.
At the heart of confidential computing are two core security principles: isolation and attestation. Together, they allow sensitive data to be processed securely in untrusted environments, including the public cloud.
Isolation: keeping data secure at runtime
When a workload runs inside a TEE, it’s sealed off from all other processes on the same machine, including other virtual machines, the host OS, and users with root access.
Here’s a simplified view of how a TEE sits in isolation from all other processes:

This isolation is enforced at the hardware level. The TEE restricts memory access and ensures that no unauthorised code can read or modify the data being processed. Even common attack vectors — such as side-channel attacks or compromised system software — are significantly mitigated.
This allows sensitive computations to be securely offloaded to the public cloud or run in multi-tenant environments, without exposing the underlying data.
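As a conceptual illustration of that boundary, the Python sketch below simulates the data flow: the host only ever handles ciphertext, and plaintext exists only inside the enclave function. This is a simplified model, not real hardware isolation; the run_in_enclave function and the key handling are purely illustrative, and it assumes the cryptography package is installed.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative: a key that only the enclave holds. In practice it is
# provisioned to the TEE after attestation succeeds; the host never sees it.
enclave_key = Fernet.generate_key()
enclave_cipher = Fernet(enclave_key)

def run_in_enclave(encrypted_record: bytes) -> bytes:
    """Stand-in for code running inside the TEE: decrypt, compute,
    re-encrypt. Plaintext exists only within this function."""
    plaintext = enclave_cipher.decrypt(encrypted_record)
    patient_count = len(plaintext.split(b","))        # some computation
    return enclave_cipher.encrypt(str(patient_count).encode())

# The data owner encrypts before upload; the untrusted host and any
# co-tenants only ever see ciphertext passing through.
record = enclave_cipher.encrypt(b"patient_1,patient_2,patient_3")
encrypted_result = run_in_enclave(record)
print(enclave_cipher.decrypt(encrypted_result))  # b'3', readable only with the enclave key
```

In a real deployment, the decryption key is released to the enclave only after attestation has confirmed what code is running, which is covered next.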
Attestation: verifying trust remotely
Attestation allows the TEE to prove to a third party that:
- It is genuine, manufacturer-certified hardware
- It is running the expected code, and nothing else
The diagram below shows how attestation provides cryptographic assurance that a TEE is authentic and running trusted code:

Attestation is done cryptographically. The TEE generates a signed “measurement” of the code it’s running, which can be validated by a remote system before any data is sent. This builds confidence that the environment is secure and that only authorised software is processing the data.
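Here is a minimal Python sketch of that verification flow. It is a simplification: a real TEE signs its measurement with a hardware-rooted asymmetric key and a vendor certificate chain, whereas this toy stands in an HMAC with a shared key, and the function names are illustrative.

```python
import hashlib
import hmac

# Stand-in for the key fused into the processor. Real attestation uses an
# asymmetric hardware-rooted key plus a vendor certificate chain.
HARDWARE_KEY = b"stand-in-for-hardware-rooted-key"

def measure(enclave_code: bytes) -> str:
    """The TEE hashes the code loaded into the enclave (its 'measurement')."""
    return hashlib.sha256(enclave_code).hexdigest()

def generate_quote(enclave_code: bytes) -> dict:
    """The TEE signs the measurement, producing a 'quote' for remote verifiers."""
    measurement = measure(enclave_code)
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict, expected_measurement: str) -> bool:
    """The data owner checks the signature and that the expected code is
    running before releasing any sensitive data to the enclave."""
    expected_sig = hmac.new(HARDWARE_KEY, quote["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected_sig, quote["signature"])
    return signature_ok and quote["measurement"] == expected_measurement

trusted_code = b"def analyse(data): ..."
golden_measurement = measure(trusted_code)        # published 'known good' value

assert verify_quote(generate_quote(trusted_code), golden_measurement)
assert not verify_quote(generate_quote(b"def exfiltrate(data): ..."), golden_measurement)
```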
Together, isolation and attestation allow organisations to run sensitive workloads in untrusted environments while maintaining strict control over data privacy and security.
To summarise: confidential computing enables secure processing of encrypted data inside an isolated, verifiable enclave, as shown below:

Who uses confidential computing, and why?
Confidential computing is a strategic enabler for teams working with highly sensitive or regulated data.
Here’s how it serves different stakeholders:
Data scientists and AI teams
Run machine learning models on encrypted datasets without exposing raw inputs. Confidential computing enables:
- Secure training on sensitive medical or financial data
- Protection of proprietary algorithms
Privacy and compliance leads
Ensure cloud workloads meet strict regulations like GDPR, HIPAA, or PCI DSS.
Confidential computing helps prove:
- Data was never exposed during processing
- Only authorised code ran inside a verifiable environment (via attestation)
Cloud architects and platform teams
Secure high-risk workloads in multi-tenant or public cloud environments.
TEEs help:
- Protect against root user compromises or side-channel attacks
- Migrate sensitive analytics pipelines to cloud providers confidently
SaaS product teams
Offer privacy-preserving features to enterprise customers. Confidential computing supports:
- End-user encryption guarantees
- Secure multi-party data collaboration
- Competitive differentiation for data-first platforms
Security and risk teams
Reduce exposure to insider threats, root exploits, or runtime-level breaches. Confidential computing adds an extra layer of protection beyond network and endpoint security.
This enables:
- Zero-trust infrastructure by default
- Hardware-backed runtime security
- Stronger defences against emerging attack vectors
Policy and legal teams
Support data residency, sovereignty, and contractual control. TEEs provide verifiable proof that sensitive data never left a defined jurisdiction, even when hosted in global cloud regions.
This supports:
- Cross-border compliance assurance
- Legal defensibility in audits and investigations
- More flexible, risk-managed vendor agreements
Decentriq’s data clean rooms are built on confidential computing, enabling partners in advertising, healthcare, finance, and media to collaborate on data without sacrificing privacy. Learn more about how our platform enables secure analytics and AI on sensitive data here.
Confidential computing use cases
With the average data breach costing $4.9 million in 2024, according to IBM, the need for confidential computing is clear.
Here are some of the ways it can be put to use:
Healthcare
Hospitals, pharmaceutical companies, and research institutions can collaborate on sensitive patient data without ever sharing the raw information. For example, a pharma company could use confidential computing to analyse clinical trial data from multiple hospitals while preserving patient privacy and complying with HIPAA or GDPR.
Use cases include:
- Privacy-safe clinical data sharing
- Secure cross-border medical research
- Supporting federated learning for diagnostics and treatment models
Case study: Decentriq partnered with Datavant to enable privacy-preserving data collaboration between health researchers and European hospitals. Using confidential computing, they deployed a secure data clean room where patient-level data could be analysed without ever being exposed, supporting GDPR compliance and advancing clinical research across borders.
Read the full case study
Finance
Banks and fintech firms can run high-risk workloads such as credit scoring, fraud detection, or Know Your Customer (KYC) processes in the cloud, without exposing customer data. This allows regulated data to be processed with confidence, even in multi-tenant environments.
Use cases include:
- Anti-fraud analytics across institutions
- Secure credit risk modelling
- Cloud-based transaction monitoring
Case study: Swiss Re, one of the world’s largest reinsurers, partnered with Decentriq to run sensitive analytics workflows securely in the cloud. By leveraging trusted execution environments (TEEs) through confidential computing, they ensured that proprietary machine learning models and risk assessment data remained fully protected, even from cloud infrastructure operators. This enabled secure model deployment at scale while meeting strict internal compliance and data protection standards.
Read the full case study
Advertising and marketing
Adtech platforms and marketers can generate audience insights or attribution data without gaining access to raw user information. Confidential computing enables analysis while preserving consumer privacy and regulatory compliance.
Use cases include:
- Privacy-compliant customer segmentation
- Cross-party data activation
- Cookieless attribution modelling
Case study: Laboratoires Pierre Fabre, a major European pharmaceutical and cosmetics brand, collaborated with multiple publishers using Decentriq’s data clean rooms to gain deep insights into customer personas, without ever sharing raw data. Powered by confidential computing, the platform allowed ID-level data matching across partners while maintaining full privacy and GDPR compliance. The outcome: stronger audience segmentation without compromising on trust or transparency.
AI and Machine Learning
AI teams can train or fine-tune models using encrypted datasets, ensuring inputs remain confidential throughout the pipeline. This is particularly useful when working across organisational boundaries or handling IP-sensitive models.
Use cases include:
- Encrypted model training on regulated data
- Secure multi-party model development
- Zero-trust inference pipelines
Confidential computing vs other PETs
Confidential computing is part of a broader category of privacy-enhancing technologies (PETs), all designed to minimise data exposure during processing. To understand when and why to use confidential computing, it helps to compare it with other common approaches.
Homomorphic encryption (HE)
HE allows computations to be performed directly on encrypted data, without decrypting it first. It’s highly secure, but remains computationally intensive and often impractical for complex or large-scale workloads.
- Pros: Maximum data confidentiality; data is never decrypted
- Cons: Slow, limited scalability for real-time or high-volume tasks
- Best for: Simple computations where absolute privacy is essential
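To make the idea concrete, here is a toy Python sketch using textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This is an insecure classroom example (tiny key, no padding), not a production HE scheme such as BFV or CKKS; it only illustrates computing on encrypted values.

```python
# Toy, insecure textbook RSA (tiny key, no padding), shown only because it is
# multiplicatively homomorphic: E(a) * E(b) mod n decrypts to a * b.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
ciphertext_product = (encrypt(a) * encrypt(b)) % n   # computed on ciphertexts only
assert decrypt(ciphertext_product) == a * b          # 84, obtained without decrypting a or b
```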
Secure multiparty computation (SMPC)
SMPC enables multiple parties to jointly compute a function over their inputs while keeping those inputs private. It’s useful for collaborative analysis without revealing raw data.
- Pros: Strong privacy guarantees across organisations
- Cons: Complex implementation; high communication overhead
- Best for: Joint analytics between parties with strict data separation
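As a concrete illustration, the sketch below shows one of the simplest SMPC building blocks, additive secret sharing: each party splits its private value into random shares that sum to the value modulo a prime, so a joint total can be reconstructed without any single party learning another's input. The three-party setup and the names are illustrative.

```python
import secrets

PRIME = 2**61 - 1  # all share arithmetic is done modulo this prime

def share(value: int, n_parties: int) -> list:
    """Split a private value into random shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three hospitals each hold a private patient count they will not reveal.
private_inputs = {"hospital_a": 1200, "hospital_b": 845, "hospital_c": 2310}

# Each hospital shares its value; party i receives the i-th share of every input.
all_shares = {name: share(value, 3) for name, value in private_inputs.items()}
partial_sums = [
    sum(all_shares[name][i] for name in private_inputs) % PRIME
    for i in range(3)
]

# Only the partial sums are exchanged; combining them reveals just the total.
joint_total = sum(partial_sums) % PRIME
assert joint_total == sum(private_inputs.values())   # 4355, no individual count revealed
```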
Differential privacy (DP)
Differential privacy adds statistical noise to datasets or query results to obscure individual-level data. It balances privacy with usability, especially in aggregate analytics.
- Pros: Lightweight; suitable for public data release
- Cons: Degrades accuracy; not suitable for sensitive computation
- Best for: Large-scale data analysis with public reporting
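Below is a minimal sketch of the standard Laplace mechanism, assuming a simple counting query: because adding or removing one record changes a count by at most 1 (sensitivity 1), noise scaled by 1/epsilon is enough to make the released answer differentially private.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, sampled as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a differentially private count (sensitivity 1) under epsilon."""
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_noise(scale=1.0 / epsilon)

ages = [34, 29, 41, 57, 38, 62, 45, 33]
noisy_answer = dp_count(ages, lambda age: age > 40, epsilon=0.5)
print(round(noisy_answer, 1))  # true count is 4; the released value is randomised around it
```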
Confidential computing
Instead of encrypting data through the full computation pipeline, confidential computing decrypts data only within a secure enclave, isolated from the rest of the system and protected by hardware.
- Pros: Strong runtime protection; performant; compatible with existing code
- Cons: Requires trusted hardware and attestation support
- Best for: Complex cloud workloads, cross-party collaboration, regulated industries
Confidential computing strikes a balance between security and performance. Unlike purely cryptographic approaches, it allows organisations to use their data in near real-time while maintaining strict privacy controls, making it ideal for AI, analytics, and compliance-driven environments.
Comparison checklist:
What to look for in a confidential computing solution
As confidential computing technology adoption grows, so does the number of vendors claiming to offer “confidential” capabilities. But not all solutions offer the same guarantees. If you're evaluating platforms — whether for secure data collaboration, AI workloads, or regulated cloud deployments — here’s what to look for:
Trusted hardware support
Ensure the solution uses proven confidential computing hardware, such as Intel SGX, AMD SEV, or Arm Confidential Compute Architecture (CCA). These technologies create isolated, encrypted enclaves that protect data during processing.
Tip: Look for hardware that supports memory encryption, secure boot, and integrity checking by default.
Attestation capabilities
A robust platform should offer remote attestation, proving to you (and your partners) that the environment is secure and untampered before any processing begins. This is essential for compliance and cross-party trust.
Make sure to ask: Does the solution allow for attestation by third parties or partners?
Integration with cloud providers
Check whether the platform integrates seamlessly with your existing cloud stack (e.g. AWS Nitro Enclaves, Microsoft Azure, Google Cloud). Compatibility affects deployment speed, scalability, and cost.
Compliance and regulatory readiness
For industries like finance and healthcare, the platform should support compliance with standards like GDPR, HIPAA, or ISO/IEC 27001. Built-in audit logs and attestation reports can support regulatory submissions.
Collaboration and access controls
If you’re working with external data partners, the solution should support role-based access, secure policy enforcement, and data minimisation by design. This is critical for use cases like data clean rooms.
Comparison checklist:
The future of confidential computing
The confidential computing market is set to grow to $350 billion by 2032.
As more organisations adopt AI models and shift sensitive workloads to the cloud, runtime protection is becoming a foundational requirement, not a nice-to-have.
Industry groups like the Confidential Computing Consortium (CCC) are helping drive adoption and interoperability across hardware vendors such as Intel, AMD, and Arm. Meanwhile, new cloud-native innovations — from Kubernetes support for enclave workloads to early confidential GPU offerings — are making confidential computing more scalable and accessible.
“We’re seeing growing adoption from healthcare, finance, and media partners who need to unlock value from their data without compromising privacy or compliance. The ability to prove that data stays protected throughout its entire lifecycle is a game-changer.” - Nikolaos Molyndris, Senior Product Manager at Decentriq
As confidential computing matures, new use cases are emerging across high-sensitivity domains. These include:
- Confidential AI – Running or fine-tuning machine learning models, including large language models (LLMs), within secure enclaves. While full model training remains limited by enclave performance, confidential inference and small-scale training are increasingly viable in production.
- Cross-cloud data clean rooms – Enabling secure data collaboration between regulated entities across different cloud environments. Confidential computing helps ensure that raw data is never exposed, even during joint analysis or model building.
- Attested analytics pipelines – Building end-to-end data workflows where each stage, from ingestion to output, is verified through remote attestation and protected within isolated environments. This supports compliance with data privacy regulations while enabling near real-time insights.
Explore secure data collaboration with Decentriq
Confidential computing is a production-ready solution enabling secure, privacy-preserving data collaboration across sectors like healthcare, advertising, and technology.
At Decentriq, we make this accessible through data clean rooms built directly on confidential computing. Our platform combines the runtime protection of trusted execution environments with a user-friendly interface that enables organisations to collaborate on sensitive data — without ever exposing the raw inputs.
As a founding member of the Confidential Computing Consortium, we’re committed to advancing this space through transparency, interoperability, and real-world impact.
Ready to unlock insights without sacrificing privacy?
Explore how Decentriq supports secure data collaboration.
See how Decentriq uses confidential computing to enable secure collaboration in action — all in under a minute: