Nebula AI Services FAQ | KLDiscovery

Written by Admin | March 2, 2026

Infrastructure & Platform

Q: Which cloud platform supports KLDiscovery's Nebula AI Services and what AI models power the Platform?

A: Microsoft Azure provides the cloud infrastructure for all KLDiscovery AI Services. Nebula uses multiple Large Language Models (LLMs) delivered through Microsoft Azure AI Foundry. The LLM used is task-dependent, with enterprise-tier LLMs deployed across all AI-enabled services; the specific LLM families are documented per client engagement based on suitability. We may also leverage additional or alternative LLMs as they become available, in line with our security, data-protection, and client obligations.

Q: What AI features does Nebula offer, and how are they enabled?

A: As of today, Nebula offers AI document summarization, AI Insights (ECAi), and Personally Identifiable Information (PII) detection. All features are disabled by default; administrators enable them through user permission controls. As we continue to innovate on behalf of Clients, we will release new AI features and will announce them as they launch.
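The disabled-by-default, administrator-enabled model described above can be sketched as follows. This is an illustrative example only, not KLDiscovery's actual implementation; the feature names and `UserPermissions` class are assumptions for demonstration.

```python
# Illustrative sketch of per-user AI feature gating (hypothetical, not the
# actual Nebula implementation). All AI features start disabled; an
# administrator must grant each feature to a user explicitly.
from dataclasses import dataclass, field

AI_FEATURES = {"ai_summarization", "ai_insights_ecai", "pii_detection"}

@dataclass
class UserPermissions:
    # No AI features are granted unless an administrator adds them.
    enabled_features: set = field(default_factory=set)

    def can_use(self, feature: str) -> bool:
        return feature in AI_FEATURES and feature in self.enabled_features

user = UserPermissions()
assert not user.can_use("ai_summarization")    # disabled by default

user.enabled_features.add("ai_summarization")  # administrator grants the feature
assert user.can_use("ai_summarization")
```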

Q: Where is AI Inference processing performed?

A: AI Inference processing follows regional data-residency requirements. Inference requests are processed within infrastructure located in the contractually designated region(s). US client data is processed within US infrastructure. Where processing is contractually restricted to the European Economic Area (EEA) and the United Kingdom (UK), inference is performed within infrastructure located in the EEA. Microsoft may distribute requests within the designated geographic region, but processing does not extend beyond it.
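The region-bounded routing described above can be sketched as a simple lookup that fails closed. The region names, endpoint URLs, and `endpoint_for` function are hypothetical illustrations, not actual KLDiscovery infrastructure details.

```python
# Hypothetical sketch of regional inference routing: the endpoint is chosen
# from the contractually designated region and never falls back to another
# geography. Region names and endpoints are illustrative only.
REGION_ENDPOINTS = {
    "US":  "https://inference.us.example.com",
    "EEA": "https://inference.eea.example.com",
}

def endpoint_for(contract_region: str) -> str:
    # Fail closed: an unknown region raises an error rather than routing
    # the request outside the designated geography.
    try:
        return REGION_ENDPOINTS[contract_region]
    except KeyError:
        raise ValueError(f"No infrastructure designated for region {contract_region!r}")

assert endpoint_for("US") == "https://inference.us.example.com"
```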

Security & Compliance

Q: What security principles govern KLDiscovery's AI Services?

A: KLDiscovery’s AI Services are designed and operated in accordance with established information-security and data-protection principles. Protection of Client Data is embedded across the full lifecycle of the AI Services, including design, development, and operation.
KLDiscovery maintains a comprehensive security program that includes ongoing risk assessment, continuous security monitoring, and periodic security testing. Security controls are subject to internal review and independent third-party audits to verify their continued effectiveness and alignment with applicable standards.
Access to systems and data is restricted through role-based access controls and least-privilege principles. Security measures are regularly reviewed and updated to address evolving threats and regulatory requirements.

Q: What compliance certifications does KLDiscovery hold?

A: Current certification and compliance information is available at: https://trust.kldiscovery.com/

Q: How is data protected in transit and at rest?

A: Client Data stored within KLDiscovery’s systems is encrypted at rest using industry-standard encryption mechanisms. Client Data transmitted between systems and users is protected through secure communication protocols, including Transport Layer Security (TLS). Nebula operates on secure cloud infrastructure with AI processing performed within controlled service environments and subject to standard security controls.

Data Handling & Privacy

Q: Is Client Data used to train or improve AI models?

A: No. KLDiscovery does not use Client Data to train, fine-tune, or modify the underlying LLMs. KLDiscovery may use anonymized Client Data to improve Platform functionality and enhance the Services; any such use is subject to KLDiscovery’s contractual commitments and applicable Client instructions.

Q: How are documents and instructions handled during AI processing?

A: Users select documents and submit instructions through Nebula. KLDiscovery transmits these inputs to the LLMs along with task-specific instructions and output formatting requirements.
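The flow above (user-selected documents and instructions, combined with task-specific instructions and formatting requirements before transmission to the LLM) can be sketched as follows. The `build_llm_request` function, message structure, and prompt wording are illustrative assumptions, not Nebula's actual request format.

```python
# Hypothetical sketch of how a summarization request might be assembled
# before transmission to an LLM. Field names and prompt text are
# illustrative only, not the actual Nebula implementation.
def build_llm_request(documents, user_instructions, task="summarization"):
    # Task-specific instructions plus an output-formatting requirement.
    system_prompt = (
        f"Task: {task}. Follow the user's instructions. "
        "Return the result as plain text with a short heading."
    )
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_instructions},
            # Each selected document is attached as its own message.
            *[{"role": "user", "content": doc} for doc in documents],
        ]
    }

req = build_llm_request(["Doc 1 text..."], "Summarize key dates.")
assert req["messages"][0]["role"] == "system"
assert len(req["messages"]) == 3
```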

Q: What is the data retention policy for AI Services?

A: Data at rest is only stored within the Nebula environment (Microsoft Azure data zone or KLDiscovery data centers) and retained only for as long as necessary to provide the Services. KLDiscovery has opted out of Microsoft's abuse monitoring program for all Azure AI Foundry services used within Nebula, meaning Microsoft does not retain Client Data outside the Platform for content moderation or model improvement purposes.

Policies & Service Stability

Q: How are Platform or security changes communicated?

A: KLDiscovery notifies Clients of Platform and security changes through regular release emails and Platform alerts that detail important changes, such as new features and updates.

Q: Who owns submitted content and AI-generated output?

A: The Client retains full ownership of all intellectual property rights in all inputs and Client-facing outputs.

Use of AI in Litigation

Q: Is it defensible to use AI in e-discovery?

A: The use of AI in e-discovery is undertaken at the discretion and risk of the party choosing to rely on it. While courts have accepted Technology Assisted Review (TAR) methods, parties using AI remain responsible for validating performance metrics, accurately documenting their methodology, and defending their approach to opposing parties and the court if challenged.

Broader AI tools, including generative AI and LLMs, currently have limited and evolving judicial treatment in the e-discovery context. These systems operate non-deterministically and may produce inaccurate or unintended outputs, increasing security and validation requirements.