Launch prototype. Scores, references, and figures shown are illustrative until first publication.
Methodology · v1.0 · Updated 25 April 2026

How VendorAudit scores

Eleven capability criteria. Eight company-health signals. Right of reply. Founder employment disclosure. Published in full because methodology that cannot be inspected cannot be trusted.

Summary

VendorAudit produces a single, published, opinionated assessment of every vendor in the data security category. Each vendor receives a capability score (0-100), a company health score (0-100), and a trajectory indicator (improving, stable, declining). Scores are derived from eleven capability criteria and eight company-health signals, weighted according to the published bands below, applied consistently across every vendor.

We score vendors using only public sources. We do not accept payment from vendors for coverage, sponsorship of profiles, or favourable scoring. Every vendor is offered five business days of right-of-reply before publication and the response is published. Coverage of vendors where the founding editor has a current or recent employment relationship is held to a higher evidence and process bar, including independent advisor review.

What we cover

VendorAudit covers the data security category broadly, including: Data Security Posture Management (DSPM), Data Loss Prevention (DLP), Information Protection and Rights Management, Insider Risk Management, Data Access Governance, Data Detection and Response (DDR), AI Security Posture Management (AI-SPM), and the integrated platforms that span several of these functions.

We do not cover network security, endpoint security, identity and access management, cloud security posture management, application security, or any other adjacent category — though we will note where vendors in those categories meaningfully extend into data security.

Coverage is global. The first published edition includes twenty vendors. Coverage will expand quarterly.

Capability scoring — eleven criteria

Each vendor is scored against eleven functional criteria on a 0-4 scale. The scale is calibrated to reflect what an experienced senior buyer would conclude after a serious evaluation, not what the vendor's marketing materials claim. The eleven criteria are:

  1. Data discovery — breadth and depth of supported data sources, including cloud, SaaS, structured databases, on-premises file shares, mainframe, and major enterprise systems.
  2. Data classification — accuracy, breadth of pre-built sensitive types, ability to classify business-specific document types, and the technology used (see classification technology, below).
  3. Data access governance — visibility into who can access what data, ability to identify excessive permissions, support for least-privilege enforcement.
  4. Data loss prevention — enforcement-point coverage, native integration depth, AI-aware controls, prompt-level protection for AI applications.
  5. Threat detection and response — insider threat detection, anomalous access pattern detection, AI-driven triage, integration with broader security operations.
  6. Automated remediation — direct remediation primitives (revoke access, remove sharing, mask data) versus alert-only or ticket-based workflows.
  7. Labelling and rights management — sensitivity labels, persistent encryption, rights management that travels with data outside the organisation.
  8. Activity audit — granular event capture, retention period, correlation with classification and identity context.
  9. Compliance reporting — regulatory framework coverage, audit-ready evidence generation, mapping of controls to regulator requirements.
  10. AI / LLM data security — discovery of AI applications and agents, prompt-level controls, agent governance, AI-specific risk assessments.
  11. Operational TCO — realistic operational cost beyond direct subscription, including implementation model, hidden consumption charges, skilled-headcount requirements, and time-to-value.

Each criterion is scored on a 0-4 scale: 0 (not supported), 1 (limited), 2 (adequate), 3 (strong), 4 (best in class). Scores are accompanied by a one-paragraph justification and a one-line evidence note in every vendor profile.
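
To make the structure concrete, the sketch below shows one way a single criterion assessment might be recorded. The type and field names are illustrative, not VendorAudit's internal schema; the 0-4 labels follow the published scale.

```python
from dataclasses import dataclass
from enum import IntEnum


class CriterionScore(IntEnum):
    """The published 0-4 scale for each capability criterion."""
    NOT_SUPPORTED = 0
    LIMITED = 1
    ADEQUATE = 2
    STRONG = 3
    BEST_IN_CLASS = 4


@dataclass
class CriterionAssessment:
    """One of the eleven capability criteria as it appears in a vendor profile."""
    criterion: str          # e.g. "Data classification"
    score: CriterionScore   # 0-4 per the published scale
    justification: str      # the one-paragraph justification
    evidence_note: str      # the one-line evidence note citing a public source
```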

Classification technology — sub-criterion within data classification

The technology used to perform classification matters as much as the breadth of classifiers offered. VendorAudit assesses classification technology along three dimensions:

Pattern matching (regex, dictionary, checksum)

Mature, deterministic, low compute cost, but high false-positive rate where contextual judgement is required. Most legacy DLP and DSPM vendors rely heavily on pattern matching for built-in sensitive information types.
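
As an illustration of the technique class (not any vendor's implementation), a minimal pattern-matching detector for payment card numbers pairs a regular expression with a Luhn checksum. It is deterministic and cheap, but it knows nothing about the context surrounding a match:

```python
import re

# Candidate 16-digit card numbers, allowing optional space or dash separators.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")


def luhn_valid(candidate: str) -> bool:
    """Luhn checksum: the standard validity test paired with the regex."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def find_card_numbers(text: str) -> list[str]:
    """Deterministic and cheap, but blind to the context around the match."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]


# A test card number in API documentation matches just as readily as a real one
# in a customer record -- the classic pattern-matching false positive.
print(find_card_numbers("Use test card 4111 1111 1111 1111 in the sandbox."))
```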

Statistical and ML models trained by the vendor

Better suited to contextual judgement (for example Person Name and Address, categories that depend on surrounding text), but limited by the training data the vendor has assembled and by the file types and data states the model can analyse.

Generative AI / large language model classification

Highest contextual accuracy, lowest false-positive rate when correctly applied, but introduces new architectural concerns including data egress (does customer data leave the network for inference?), GPU footprint, latency, and cost-to-classify per document.

Vendors are scored on the combination of technologies used and on the operational realism of their approach for an enterprise-scale deployment. A vendor that uses LLM classification but requires customer data to be sent to a public cloud inference endpoint scores differently from one that uses LLM classification entirely within the customer tenant.

Operational TCO — implementation model matters

The path to a productive deployment varies materially across vendors and is a primary driver of both time-to-value and total cost. VendorAudit assesses the implementation model along three axes:

Who performs the implementation

Three patterns exist: vendor-included implementation, where the vendor performs deployment as part of the subscription; vendor professional services, where the vendor delivers deployment through separately charged engagements; and partner-led implementation via systems integrators or consultancies, where the buyer engages and pays a partner directly. Many vendors offer two or three of these paths simultaneously. The relevant assessment is the default path most customers take, and what that path costs.

Realistic implementation cost as a percentage of subscription

Some vendors deliver productive deployments for under 10% of first-year subscription value in services costs. Others routinely require services costs equal to 50-100% of first-year subscription value. This ratio is one of the most decision-relevant pieces of pricing intelligence in the entire category.
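
A back-of-the-envelope illustration, using hypothetical figures, shows how much the ratio moves first-year spend:

```python
def first_year_cost(subscription: float, services_ratio: float) -> float:
    """First-year spend: subscription plus implementation services, with
    services expressed as a fraction of first-year subscription value."""
    return subscription * (1 + services_ratio)


subscription = 200_000  # hypothetical first-year subscription (USD)

# Vendor A: services under 10% of subscription; Vendor B: services at 75%.
for label, ratio in [("Vendor A", 0.10), ("Vendor B", 0.75)]:
    print(f"{label}: {first_year_cost(subscription, ratio):,.0f}")
# Vendor A: 220,000 / Vendor B: 350,000 -- roughly 59% more first-year spend
# for what can look like the same subscription price on the quote.
```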

Skilled-headcount requirements from the buyer's organisation

Some implementations require a small project team and standard administrative skills. Others require dedicated headcount with specialist platform expertise that the buyer must hire or retain throughout the deployment and beyond.

Company health — eight signals

Capability is not enough. A vendor with strong technology but weak company health is a worse buy than one with equally strong technology and a healthy organisation, because the latter will keep investing in the product. The eight components of company health are:

  1. Revenue scale and growth — disclosed where public, estimated from credible sources where private.
  2. Profitability or path to profitability — operating margin or burn rate, runway visibility for private companies.
  3. Headcount and engineering investment — total headcount, percentage of headcount in engineering and product, growth trajectory.
  4. Leadership stability and tenure — executive continuity, founder involvement, succession depth.
  5. Customer evidence — public references, named customer testimonials, industry breadth, geographic coverage.
  6. Employee sentiment — Glassdoor and similar, treated as one signal among many.
  7. Analyst recognition — Forrester, Gartner, IDC, GigaOm placements; treated as confirmatory signal not authoritative judgement.
  8. Innovation pace and trajectory — frequency of meaningful product releases, depth of investment in next-category capabilities, ability to ship.

Special handling for divisional products of large parent companies

Several vendors in the data security category are product lines within much larger parent companies — Microsoft Purview within Microsoft, AWS Macie within AWS, Google Cloud Sensitive Data Protection within Google. The standard methodology, applied naively, would give these vendors near-perfect scores on every measure because the parents are exceptionally healthy. That outcome would mislead buyers whose actual question is whether the product line is well-resourced and well-led.

For divisional products, VendorAudit applies the eight company-health components to two distinct subjects and reports both: parent company health and product line health. The aggregate company health score for divisional products is a weighted blend, with the product line weighted more heavily because it is the more decision-relevant input for a senior buyer.
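
A minimal sketch of the blend, with an illustrative product-line weighting (the exact blend is not published):

```python
def divisional_health(product_line: float, parent: float,
                      product_weight: float = 0.7) -> float:
    """Blend the two 0-100 company-health scores for a divisional product.
    The product line is weighted more heavily; 0.7 is an illustrative value,
    not a published VendorAudit weight."""
    return product_weight * product_line + (1 - product_weight) * parent


# A very healthy parent does not rescue an under-resourced product line.
print(round(divisional_health(product_line=55, parent=95), 1))  # 67.0
```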

Weighting

The eleven capability criteria do not contribute equally to the aggregate score. We publish weight bands rather than precise numerical weights — this is deliberate. Numerical weights would imply false precision, and would also make the methodology gameable.

  • Heaviest weight (40-50% combined): Data discovery, data classification, data access governance, threat detection.
  • Middle weight (25-35% combined): DLP, activity audit, automated remediation, AI / LLM data security, operational TCO.
  • Lighter weight (10-20% combined): Labelling and rights management, compliance reporting.

Where a buyer's specific situation favours different weights — for example, a regulated financial services firm where compliance reporting matters more than for a typical enterprise — VendorAudit produces customised weighted scores for paid subscribers as part of the buyer-side advisory service.
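
For readers who want to see the mechanics, the sketch below rolls eleven 0-4 criterion scores up into a 0-100 capability score using illustrative weights chosen inside the published bands. The specific numbers are hypothetical; only the bands are part of the methodology.

```python
# Illustrative per-criterion weights chosen inside the published bands;
# VendorAudit publishes the bands, not these exact numbers.
WEIGHTS = {
    # Heaviest band (40-50% combined)
    "data_discovery": 0.12, "data_classification": 0.12,
    "data_access_governance": 0.11, "threat_detection": 0.11,
    # Middle band (25-35% combined)
    "dlp": 0.07, "activity_audit": 0.07, "automated_remediation": 0.07,
    "ai_llm_data_security": 0.07, "operational_tco": 0.06,
    # Lighter band (10-20% combined; only two criteria, so each still carries ~0.10)
    "labelling_rights_management": 0.10, "compliance_reporting": 0.10,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9


def capability_score(criterion_scores: dict[str, int]) -> float:
    """Roll eleven 0-4 criterion scores up into a 0-100 capability score."""
    return 100 * sum(WEIGHTS[c] * s / 4 for c, s in criterion_scores.items())


# A vendor scoring "strong" (3) everywhere except "limited" (1) on compliance:
scores = {c: 3 for c in WEIGHTS} | {"compliance_reporting": 1}
print(round(capability_score(scores), 1))  # 70.0
```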

Scanning methodology — discovery score criteria

The discovery score is one of the eleven capability criteria VendorAudit assesses. It reflects not just whether a platform can discover data, but how completely it scans — a distinction that matters significantly for compliance, security posture, and AI governance programmes.

VendorAudit applies four tiers to the discovery criterion:

●●●● Score 4 — Complete scanning

Every data object is individually inspected at the content level. The resulting inventory can be relied upon as exhaustive for the scope covered. Suitable for compliance mandates that require complete data inventories (GDPR Article 30, HIPAA, ASD Essential Eight). Vendors: Varonis.

●●●○ Score 3 — Hybrid scanning

Complete content inspection for structured data sources; intelligent clustering or sampling for large unstructured repositories. High confidence for structured environments; directional for unstructured at petabyte scale. Vendors: Varonis, BigID, Securiti, Concentric AI, Rubrik, Microsoft Purview, IBM Guardium.

●●○○ Score 2 — Intelligent sampling

Statistically representative sampling — analysing representative portions of data stores and clustering similar objects. Fast and scalable. Not exhaustive by design. The platform cannot confirm the absence of sensitive data in unsampled portions. Vendors: Cyera, Sentra, Wiz DSPM, Palo Alto Dig, AWS Macie, Normalyze, Google Cloud DLP, Proofpoint.

●○○○ Score 1 — Metadata / policy only

No content inspection. Access governance and policy enforcement based on metadata, labels, and schema-level information. Effective for access control use cases; not a data discovery platform. Vendors: Immuta, Privacera.

Why this distinction matters

Compliance risk. GDPR Article 30 requires a complete record of processing activities. A DSAR (Data Subject Access Request) requires you to find all records relating to an individual. If your discovery platform sampled 20% of your data lake and the remaining 80% contains records belonging to the data subject, you are non-compliant — and you will not know it. Regulatory frameworks that require complete data inventories cannot be satisfied by sampling approaches without supplementary controls.

Security blind spots. The data stores not covered by sampling are precisely the data stores a threat actor can target without triggering discovery-based controls. More practically: the one S3 bucket in the unsampled portion that contains 50,000 customer records is your breach waiting to happen. A complete-scan platform would have found it. A sampling platform may not.
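
A simplified model makes the arithmetic explicit. Assuming whole data stores are sampled uniformly at random with no prioritisation heuristics (an assumption; real platforms apply heuristics), the probability that sampling never touches the sensitive stores is:

```python
def miss_probability(sensitive_stores: int, sample_fraction: float) -> float:
    """Approximate probability that uniform random sampling of data stores
    inspects none of the stores actually holding the sensitive data
    (stores treated as sampled independently, no prioritisation heuristics)."""
    return (1 - sample_fraction) ** sensitive_stores


# One sensitive S3 bucket, platform samples 20% of stores:
print(miss_probability(1, 0.20))            # 0.8 -- missed four times out of five
# Even with five such buckets, the chance of missing every one is ~33%:
print(round(miss_probability(5, 0.20), 2))  # 0.33
```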

AI governance risk. The entire premise of AI security governance is knowing what data your models trained on, what data your RAG pipeline has access to, and what data your agents can retrieve. If your discovery platform sampled 20% of your data lake, your AI governance programme has an 80% blind spot. This is not a theoretical risk — it is the operational reality for any organisation using sampling-based DSPM as the foundation of AI security.

False confidence. A dashboard reporting "3.2 million sensitive records discovered" from a sampling approach may represent 20% of the actual sensitive data estate. Boards and regulators see a number. The asterisk is buried in the technical documentation.

What sampling vendors say in their defence

Vendors using sampling approaches make two arguments: (1) statistical sampling at sufficient density provides high-confidence coverage without the performance and cost of exhaustive scanning; and (2) for risk prioritisation purposes, knowing approximately where sensitive data clusters is more actionable than an exhaustive inventory that takes weeks to complete. Both arguments have merit for operational risk management. Neither argument is sufficient for compliance mandates that require completeness, or for AI governance programmes where the question is not "approximately where is sensitive data" but "exactly what data did this model see."

VendorAudit's position: sampling and complete scanning serve different use cases. The discovery score reflects this distinction explicitly. Buyers with compliance completeness requirements or AI governance mandates should factor this criterion heavily in vendor selection.

Independence and conflicts of interest

No vendor payment for coverage or scores

VendorAudit does not accept payment from vendors for inclusion in coverage, for favourable scoring, for sponsorship of profiles, or for any related editorial influence. This is the single most important commitment we make and the basis on which every other claim about our research depends.

Founder employment history and excluded vendors

VendorAudit's founding editor has held senior commercial roles at multiple cyber security vendors over the last twenty-five years, including current employment at Varonis Systems, Inc. (NASDAQ: VRNS), and prior employment at Dell SecureWorks (now SecureWorks, NASDAQ: SCWX) and Dell EMC.

Coverage of any vendor where the founding editor has a current or recent employment relationship is held to a higher evidence and process bar than other coverage:

  • The vendor is excluded from active scoring during current employment, and for a defined cooling-off period thereafter.
  • When coverage commences, it draws exclusively on fully public sources. No information acquired through prior employment — including but not limited to internal financials, customer data, competitive intelligence, product roadmap, or pricing — is used in the analysis at any stage, including drafting.
  • The first published profile of any such vendor is reviewed by an independent advisor with no current vendor employment in the data security category before publication.
  • The full employment history of the founding editor is disclosed permanently at the link below.

The current list of excluded vendors and their planned coverage commencement dates is published here.

Right of reply

Every vendor profiled is offered five business days of right-of-reply before publication. The vendor is shown the draft in full, including capability scores, company health scores, strengths, weaknesses, and pricing intelligence.

The vendor's response is published verbatim in a standing "Vendor response" section at the bottom of the profile. Where the vendor disputes a factual claim, VendorAudit corrects any error identified. Where the vendor disputes a conclusion or score, VendorAudit notes the disagreement and retains final editorial judgement. The standing right-of-reply section appears on every profile, even when no response is received, as a permanent signal of editorial process.

Update cadence

Capability and company-health scores are reviewed quarterly. Trajectory indicators and recent material news are updated weekly. Pricing intelligence is updated whenever the buyer survey produces fresh data. Vendor profiles carry a "last updated" date prominently on every page.

Strengths and weaknesses

Each profile lists three to five strengths and three to five weaknesses, prioritised by decision relevance to a senior buyer rather than by exhaustive coverage. The count varies by vendor: a sharply differentiated vendor with a clear capability shape may warrant three of each, while a vendor with multiple distinct strengths and gaps may warrant four or five. The standard is editorial — what does the buyer most need to know — not procedural.

Data sources

VendorAudit draws on: vendor public documentation and product collateral; SEC filings and earnings call transcripts for public companies; analyst reports from Forrester, Gartner, IDC and GigaOm where publicly available; press coverage from credible business and technology media; LinkedIn for headcount and presence data; Glassdoor for employee sentiment signals; the VendorAudit quarterly buyer survey; and primary interviews with senior buyers.

Every claim in every vendor profile is footnoted to a source. Where claims rest on primary buyer interviews, the interviewees are described by role and industry but not named, consistent with standard analyst practice. Where evidence is incomplete, the profile says so explicitly rather than papering over the gap with assumption.

VendorAudit does not use information from non-public sources — including but not limited to confidential vendor briefings under non-disclosure, leaked documents, or anonymous tips. Every published claim must be independently verifiable from the cited public source.