
When Vanta or Drata Still Does Not Satisfy Reviewers

Many teams run Vanta or Drata and still face follow-up questions in a security review because scope, ownership, and policy reality are unclear. This guide explains the missing baseline layer and how to package evidence so reviewers can follow the story end to end.

Posted 15 Feb 2026 · 15 min read
Tags: Vanta/Drata · Security reviews · Evidence-first · Governance · Reviewer-ready

If reviewers still push back after Vanta or Drata, what's missing is the baseline narrative: clear scope, named owners, policies that match reality, and a shareable evidence index. Use the platform for monitoring and collection, not for the story.

Executive summary

  • What it is: Vanta and Drata help manage compliance workflows and evidence, but they do not automatically produce a coherent narrative that a reviewer can trust end to end.
  • Who should care: Chief Information Officers (CIOs), Chief Technology Officers (CTOs), Heads of information technology (IT), and security leads using Vanta or Drata who still face procurement follow-up, SOC 2 (System and Organization Controls 2) questions, or ISO/IEC 27001 scrutiny.
  • First month: define scope (service, data, boundaries), assign owners and approvers, align policies to reality, map controls to evidence, and run one real review end to end to expose gaps early.
  • Evidence to retain: a scope note, an owner matrix, approved policies (with review cadence), a control mapping workbook, and an evidence index with sharing rules plus recent proof for frequently asked controls.
  • Common pitfalls: assuming “passing tests” equals reviewer confidence, leaving scope ambiguous, reusing template wording that is not true, and oversharing raw exports without redaction rules.
  • One-sentence conclusion: an organisation is reviewer-ready when scope, policy, controls, and evidence connect cleanly, with named owners.

An organisation is reviewer-ready when a reviewer can follow scope → policy → control → evidence with named owners, without needing clarification calls for basics.

Vanta and Drata in plain English

What these platforms automate

Vanta and Drata are compliance automation platforms. They typically connect to the organisation’s systems (for example identity, endpoints, cloud, and ticketing) to monitor control signals, collect evidence, and track tasks against a chosen framework or review scope.

They can also structure operational work around documentation and acceptance. For example, Vanta ties each policy to tests for policy approval and employee acceptance, and some controls are not considered met until those tests pass. The tooling helps teams keep approvals and attestations visible and repeatable.

What they cannot decide for the organisation

What the platform cannot decide is what “in scope” means for the organisation, who owns each control in practice, and how to explain the control story in language a reviewer can follow. That baseline narrative is a management decision, not a platform setting.

Platforms can also lag reality if review packaging is not understood. Drata explains that a downloadable Control Evidence package is generated as a snapshot of mapped evidence at the time the auditor sets samples. If new evidence is uploaded after sampling, it does not appear in the existing package unless the auditor sets new samples. That detail matters when reviewers say evidence is “missing” even though the team recently fixed it.

In Drata Audit Hub, request statuses can include Prepared, New, Completed, and Deleted. Drata notes that the “Prepared” status typically means the organisation prepared items for review, but it does not guarantee that all required evidence is attached. That distinction matters when internal teams treat status as completeness.

Why reviewers still push back

What reviewers are trying to verify: customer security reviewers, procurement teams, and auditors are not trying to score a dashboard. They are trying to understand whether controls exist, whether they operate as described, and where accountability sits when something goes wrong.

In SOC 2, the American Institute of Certified Public Accountants (AICPA) describes a SOC 2 examination as a report on controls at a service organisation relevant to security, availability, processing integrity, confidentiality, or privacy. Those areas are often referred to as the Trust Services Criteria categories.

In ISO/IEC 27001, the International Organization for Standardization (ISO) describes the standard as defining requirements for an information security management system (ISMS). An ISMS is the management system that keeps security work owned, consistent, and continuously improved. Reviewers assessing ISO/IEC 27001 readiness will still ask for ownership, scope, and operating evidence even when technical signals look good.

The typical gap: most follow-up questions are not about whether a tool is installed. They are about gaps between artefacts. A policy exists, but it is unclear who approved it. A control is marked “OK”, but it is unclear which services it covers. Evidence is exported, but there is no context, owner, timeframe, or redaction rule.

Supply chain pressure makes this worse. NIST (National Institute of Standards and Technology) Special Publication 800-161 Rev. 1 Update 1 describes cybersecurity supply chain risk management (C-SCRM) as an integrated set of risk management activities and includes guidance on policies, plans, and risk assessments for products and services. Many “extra” reviewer questions are really supply chain questions, even when the framework name on the questionnaire is different.

The baseline layer that tools do not auto-generate

Scope and shared responsibility

A reviewer cannot evaluate controls without boundaries. A practical scope statement usually covers the following points, in plain language:

  • What is being reviewed: the service(s) and environments in scope (for example production, staging, and corporate IT).
  • What data is in scope: customer data types and where they are processed and stored at a high level.
  • Where the boundaries sit: what is explicitly out of scope and why (for example a separate product line or a legacy environment).
  • Key dependencies: material third parties (cloud, hosting, identity, payment) and what is relied upon.
  • Shared responsibility: what the organisation provides, what customers must configure, and what is inherited from suppliers.

A “shared responsibility model” is simply the statement of those boundaries and responsibilities. Reviewers use it to decide what evidence to request from the supplier and what belongs with the customer’s own controls.
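
Some teams find it useful to keep the scope statement as a small structured record so questionnaire answers stay consistent across deals. The sketch below is illustrative only: every field name is an assumption made for this article, not a Vanta or Drata schema.

```python
# Illustrative scope record. Every field name is an assumption for this
# article, not a Vanta or Drata schema; adapt the vocabulary as needed.
SCOPE_STATEMENT = {
    "service": "Example SaaS product",
    "environments_in_scope": ["production", "staging", "corporate IT"],
    "out_of_scope": {
        "legacy-batch platform": "separate product line, no shared customer data",
    },
    "data_in_scope": ["customer account data", "usage telemetry"],
    "key_dependencies": ["cloud hosting", "identity provider", "payment processor"],
    "shared_responsibility": {
        "provider": "platform security, patching, backups",
        "customer": "user access configuration, SSO enforcement",
        "inherited": "physical security from the cloud provider",
    },
    "approved_by": "Head of Engineering",
    "effective_date": "2026-02-01",
}
```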

Ownership, approvals, and exceptions

Many organisations can show evidence, but cannot show ownership. Reviewers notice quickly when a policy is unsigned, when a control owner is “IT”, or when exceptions are handled ad hoc.

Ownership needs to be specific. Each major control area should have a named owner (accountable for operation) and an approver (accountable for oversight). This is not paperwork theatre. Vanta’s policy workflow illustrates the point: it checks whether a policy is approved and whether relevant employees accept it, and some controls do not pass until both tests are in an OK status.

Exceptions also need structure. When a control is partially implemented, the most reviewer-friendly approach is to record the exception with: the scope of impact, the risk, the compensating measure (if any), the remediation plan, and the decision owner who accepted the residual risk. This turns a “gap” into a managed item.
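
As a sketch of what that structure can look like, the record below captures the five elements as a small data type. The field names are assumptions that mirror the list above, not a Vanta or Drata object.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlException:
    """A managed exception for a partially implemented control.

    Field names mirror the elements listed above; they are
    illustrative, not a platform object.
    """
    control_id: str                    # which control the exception touches
    scope_of_impact: str               # services or teams affected
    risk_summary: str                  # what could go wrong while the gap exists
    compensating_measure: str | None   # interim mitigation, if any
    remediation_plan: str              # work items and target date
    decision_owner: str                # who accepted the residual risk
    accepted_on: date = field(default_factory=date.today)

# Hypothetical example: a partially implemented access review control.
exc = ControlException(
    control_id="quarterly access reviews",
    scope_of_impact="legacy admin console only",
    risk_summary="stale accounts may retain access for up to a quarter",
    compensating_measure="admin logins alert the security channel",
    remediation_plan="migrate the console to SSO by end of Q2",
    decision_owner="CTO",
)
```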

Policy reality and control narrative: policies are only useful when they describe how work is actually done. Template drift is a common reason reviewers keep pushing. If a policy says a control happens on a schedule but the activity is informal, the reviewer will ask for evidence, then ask who owns it, then ask what systems it covers.

A “control narrative” is the short, repeatable explanation of how a control operates in practice. It links policy to workflow to evidence. A workable narrative includes a trigger, the process steps, the tooling used, the evidence retained, and the cadence and check for completion. When that narrative exists, the platform becomes the evidence engine for the story, not the story itself.
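
One way to keep narratives uniform is to draft each one against the same skeleton. The example below is a hypothetical shape for an access review control; the keys map to the elements named above and are a suggested format, not a standard.

```python
# Illustrative narrative skeleton for one control. The keys follow the
# elements named above; this is a suggested shape, not a standard.
ACCESS_REVIEW_NARRATIVE = {
    "control": "Quarterly user access review",
    "trigger": "Calendar task opens on the first day of each quarter",
    "process_steps": [
        "Export the user list for each in-scope system",
        "Each system owner marks accounts keep, change, or remove",
        "IT applies the changes and records ticket references",
    ],
    "tooling": ["identity provider", "ticketing system"],
    "evidence_retained": ["signed-off review sheet", "change tickets"],
    "cadence_and_check": "Quarterly; the security lead confirms completion",
}
```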

Turn platform outputs into a reviewer pack

Evidence index and safe sharing rules

A reviewer pack is a curated set of documents and evidence pointers that can be safely shared. It is different from the internal evidence store. The internal store can be broad and messy. The reviewer pack must be deliberate.

Teams often debate whether to provide Vanta auditor access, rely on Drata Audit Hub exports, or send a curated folder. The delivery method varies, but reviewers still need the same baseline narrative and the same evidence index.

The centre of the pack is an evidence index: a simple table that maps reviewer questions or control areas to evidence items. It prevents the “screenshot ping-pong” that happens when evidence is emailed in fragments.

A practical evidence index usually includes:

  • Control or topic: access control, incident response, vulnerability handling, supplier risk.
  • Evidence item name: the artefact the reviewer will read.
  • Owner: who can explain it.
  • Location: where it lives (folder path or platform section).
  • Timeframe: what period the evidence covers.
  • Share class: shareable as-is, shareable with redaction, or internal only.
  • Notes: what the reviewer should look for and what is intentionally excluded.

Sharing rules are a practical control to prevent oversharing. An organisation can define a small set of share classes and apply them consistently, for example (a short sketch of how the classes work with the index follows the list):

  • Shareable: policies, high-level diagrams, procedures, training summaries, and redacted screenshots.
  • Shareable with controls: audit exports, access review outputs, and vulnerability lists with sensitive fields removed.
  • Internal only: secrets, full logs, sensitive configuration exports, and customer-identifying data.
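
A minimal sketch of how the index and the share classes can work together: each entry carries a share class, and only entries not marked internal are copied into the reviewer pack. All field names and values below are hypothetical.

```python
# Sketch of an evidence index with share classes. Field names follow
# the lists above; nothing here is a platform API.
SHARE_CLASSES = {"shareable", "shareable_with_redaction", "internal_only"}

evidence_index = [
    {"topic": "access control", "item": "Q1 access review output",
     "owner": "IT lead", "location": "evidence/access/2026-Q1/",
     "timeframe": "2026-01 to 2026-03",
     "share_class": "shareable_with_redaction",
     "notes": "usernames redacted; check the sign-off column"},
    {"topic": "incident response", "item": "IR procedure v3",
     "owner": "security lead", "location": "policies/ir-procedure.pdf",
     "timeframe": "current", "share_class": "shareable",
     "notes": "exercise notes are referenced in section 5"},
    {"topic": "access control", "item": "raw IdP audit log export",
     "owner": "IT lead", "location": "exports/idp-audit.json",
     "timeframe": "rolling", "share_class": "internal_only",
     "notes": "contains customer-identifying data"},
]

def reviewer_pack(index: list[dict]) -> list[dict]:
    """Return only the entries that may leave the organisation."""
    for entry in index:
        # Fail loudly on unknown share classes instead of leaking data.
        assert entry["share_class"] in SHARE_CLASSES, entry["item"]
    return [e for e in index if e["share_class"] != "internal_only"]

print([e["item"] for e in reviewer_pack(evidence_index)])
# ['Q1 access review output', 'IR procedure v3']
```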

The evidence index also becomes the source of consistent security questionnaire evidence links. Organisations that want a deeper operational pattern for responding to questionnaires with evidence links can align this pack with Security questionnaires and customer reviews: answer fast with evidence links.

How to handle partial controls honestly

Most reviewer friction comes from over-claiming. A platform can show some tests passing while the end-to-end control is only partially implemented. The reviewer then discovers missing steps, and trust drops.

A clean response to a partial control covers the following:

  1. State the current position: what is implemented, and what is not, in plain language.
  2. Show what is operating now: provide evidence for the parts that do exist.
  3. Show the plan: the remediation work items, the owner, and how completion will be verified.
  4. Show governance: the exception record or decision log entry that shows the risk is understood and owned.

This approach avoids surprises. It also allows reviewers to evaluate residual risk with clarity instead of making assumptions.
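
As an illustration, the sketch below renders those four parts into one consistent response. The helper name and every example value are hypothetical.

```python
# Sketch of the four-part "managed position" wording. The structure
# mirrors the numbered steps above; every value is an example only.
def managed_position(implemented: str, gap: str, evidence: str,
                     plan: str, owner: str, exception_ref: str) -> str:
    return "\n".join([
        f"Current position: {implemented} is in place; {gap} is not yet implemented.",
        f"Operating now: see {evidence}.",
        f"Plan: {plan} (owner: {owner}).",
        f"Governance: tracked as exception {exception_ref} in the decision log.",
    ])

print(managed_position(
    implemented="laptop disk encryption via MDM",
    gap="encryption verification for contractor devices",
    evidence="the MDM encryption report for January 2026",
    plan="extend MDM enrolment to contractors by Q2",
    owner="IT lead",
    exception_ref="EXC-014",
))
```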

The first 30 days

Week-by-week rollout

This plan assumes the organisation already uses Vanta or Drata. It does not assume tool replacement and it does not assume privileged access to customer systems. The goal is to build the baseline narrative layer and a reviewer pack that can be reused.

  1. Week 1: scope and ownership. The organisation should write a short scope statement, agree shared responsibility boundaries, create an ownership matrix for major control areas, and decide where reviewer-shareable documents will live and who can approve sharing.
  2. Week 2: policy reality and approvals. The organisation should identify which policies are relied on in reviews, rewrite any template language that does not match reality, route policies for approval and acceptance in the platform or an equivalent process, and align policy titles and identifiers so they match what appears in exports.
  3. Week 3: control narrative and mapping. The organisation should write short control narratives for the controls that generate the most reviewer questions, map them to evidence, build the evidence index, and test exports so they include what reviewers expect. Drata notes that evidence packages can be snapshots tied to auditor sampling, so the “as of” date and sampling behaviour should be understood before a review starts.
  4. Week 4: run one full review end to end. The organisation should answer a real customer questionnaire or a standard enterprise security review format using evidence links from the index, track follow-up questions, and close gaps. For a structured list of the questions reviewers repeatedly ask, see Enterprise deal blockers: the reviewer questions you can pre-answer in your docs.

Definition of Done checklist: a reviewer-ready baseline looks boring in the best possible way. The following checks reduce follow-up questions because they remove ambiguity:

  • Scope is written, approved, and matches what is described in questionnaires and reports.
  • Shared responsibility is stated so reviewers know what evidence is relevant.
  • Each control area has a named owner and a named approver.
  • Policies match reality and have approval and review metadata.
  • Control narratives exist for the most questioned areas and point to evidence.
  • Evidence index exists, with share classes and redaction notes.
  • A reviewer pack exists as a curated folder or export set, separate from internal working evidence.

When documentation quality needs a repeatable QA routine, Accel Comply’s Process and QA checklist excerpt shows an example of the metadata and cross-reference checks that prevent orphan controls and orphan evidence.

For organisations working towards ISO/IEC 27001, it often helps to stabilise the baseline artefacts in a sensible order. ISO 27001 under a deadline: the first 10 artefacts to stabilise provides a triage sequence that complements this plan.

Evidence to retain

Minimum evidence categories reviewers ask for

Evidence requests differ by reviewer and framework, but the pattern is stable. Reviewers ask for items that prove governance, control operation, and traceability to systems and services.

Common categories include:

  • Scope and system description: what services are covered and what is excluded.
  • Policy set: policies that set expectations, plus approval and acceptance records.
  • Risk management: a risk register and records of risk treatment decisions for key risks.
  • Access control: joiner-mover-leaver processes (how access is granted, changed, and removed), privileged access approvals, and periodic access review outputs.
  • Vulnerability handling: prioritisation decisions, remediation records, and verification notes.
  • Change management: change approvals and evidence that changes are reviewed and tested.
  • Incident response readiness: incident procedure, escalation paths, and exercise notes.
  • Business continuity: backup and recovery evidence for critical services.
  • Supplier and sub-processor oversight: supplier inventory and due diligence records for key suppliers.

SOC 2 and ISO/IEC 27001 both point reviewers back to controls and management system requirements. NIST SP 800-161 reinforces why supply chain governance, policies, and risk assessment artefacts are part of modern assurance. The evidence list should therefore favour items that show ownership and operating rhythm, not one-off screenshots.

Evidence metadata that prevents follow-up: evidence often “exists”, but reviewers still ask follow-up questions because the evidence lacks context. A small set of metadata fields reduces that friction:

  • Owner and approver: who is accountable for content and oversight.
  • Version and effective date: what is current and when it took effect.
  • Review cadence: when it is revisited and what triggers a change.
  • Scope tags: which services, environments, or teams it applies to.
  • Timeframe covered: what period an export or report represents.
  • Redaction note: what was removed and why, so the reviewer does not assume missing controls.

This is why reviewer-ready documentation looks consistent. For example, Accel Comply’s QA excerpt includes a basic check that every document has an owner, approver, version, effective date, and review cadence. The same metadata pattern can be applied across the organisation’s internal documents and the reviewer pack.
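
That kind of check is straightforward to automate. The sketch below flags documents missing any baseline metadata field; the required field names follow the pattern described above, and the script itself is illustrative, not an Accel Comply tool.

```python
# Illustrative metadata completeness check. The required fields follow
# the pattern described above; this is not an Accel Comply tool.
REQUIRED_FIELDS = ("owner", "approver", "version", "effective_date", "review_cadence")

def missing_metadata(doc: dict) -> list[str]:
    """Return the baseline metadata fields a document lacks."""
    return [f for f in REQUIRED_FIELDS if not doc.get(f)]

docs = [
    {"title": "Access Control Policy", "owner": "IT lead", "approver": "CTO",
     "version": "2.1", "effective_date": "2026-01-15", "review_cadence": "annual"},
    {"title": "Incident Response Procedure", "owner": "security lead",
     "version": "3.0", "effective_date": "2025-11-01"},
]

for doc in docs:
    if gaps := missing_metadata(doc):
        print(f'{doc["title"]}: missing {", ".join(gaps)}')
# Incident Response Procedure: missing approver, review_cadence
```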

Common pitfalls when you have Vanta or Drata

Passing tests, failing the narrative: the platform can show healthy signals while reviewers still lack enough context to trust the story. The most common causes are predictable:

  • Treating platform status as an audit opinion. A green status does not explain scope, responsibility boundaries, or exceptions.
  • Scope ambiguity. Reviewers ask more questions when “in scope” is unstated or changes between documents.
  • Policy drift. Policies written from templates that do not match reality create avoidable follow-up.
  • Evidence without context. Exports sent without owner, timeframe, and interpretation notes create uncertainty.
  • Oversharing. Sending raw logs or full configuration exports without redaction rules creates security risk and slows the review.
  • No pre-answered baseline. Teams respond to every questionnaire from scratch instead of maintaining a reviewer pack and evidence index.

When the organisation wants to standardise reviewer answers across deals, the evidence index approach pairs well with an agreed set of pre-answered review questions. The reviewer question patterns are covered in Enterprise deal blockers: the reviewer questions you can pre-answer in your docs.

Conclusion

Compliance automation platforms are useful. They reduce manual effort and keep evidence collection visible. They do not, by themselves, make an organisation reviewer-ready.

Reviewer readiness comes from a coherent baseline narrative: scope that is clear, ownership that is named, policies that match reality, and evidence that is mapped and shareable under safe rules. When those pieces exist, Vanta or Drata becomes stronger because platform outputs can be interpreted quickly and consistently.

A practical next step is to build a reviewer pack and evidence index, then run one end-to-end review using real questions. That exposes gaps early and turns follow-up questions into a manageable improvement list instead of a recurring scramble.

Frequently Asked Questions (FAQs)

Why do reviewers still ask for policies when Vanta or Drata shows passing controls?

Reviewers use policies to understand intent, scope, and governance. A passing control signal can show that a technical setting exists, but it does not always explain who approved the approach, which services it covers, what exceptions exist, or what process ensures the control keeps operating. Policies and control narratives provide that context, and the platform evidence then supports it.

What is the minimum scope statement reviewers expect for a security review?

There is no single universal template, but reviewers generally need enough information to understand boundaries. A practical minimum scope statement describes what service is being reviewed, what environments and data types are in scope, what is explicitly out of scope, and what key third parties the service relies on. It should also state shared responsibility boundaries so reviewers know which controls sit with the supplier and which sit with the customer.

How should teams respond when a platform test fails or a control is only partially implemented?

The clean approach is to avoid over-claiming and provide a managed position. State what is implemented now, provide evidence for what exists, and record an exception for what does not. Then provide the remediation plan, the owner, and how completion will be verified. Reviewers usually accept an honest, governed gap more readily than an optimistic statement that does not match evidence.

What evidence is safe to share with a customer reviewer, and what should remain internal only?

Safe sharing is controlled by an evidence index and share classes. Policies, high-level diagrams, and procedures are typically safe to share. Exports, logs, and configuration evidence often need redaction or summarisation. Secrets, customer-identifying data, and full configuration dumps should usually remain internal. The goal is to provide sufficient proof without exposing material security risk.

How do SOC 2 and ISO/IEC 27001 reviewers interpret what a compliance platform shows?

Both reviews focus on controls and governance, not on which tool is used. The AICPA’s SOC 2 framing is an examination report on controls relevant to the Trust Services Criteria categories. ISO/IEC 27001 focuses on the requirements of an ISMS. A platform can support both by tracking evidence and tasks, but reviewers still evaluate scope, ownership, and whether controls operate as described.

Do we need to replace Vanta/Drata or add a GRC tool to satisfy reviewers?

Not necessarily. Many organisations satisfy reviewers using existing tools when they add the missing baseline layer: scope, ownership, policy reality, control narratives, and an evidence index with sharing rules. A GRC (governance, risk, and compliance) tool can help at scale, but it does not replace the decisions and narratives reviewers rely on.
