What RADV is and why it bites

RADV is CMS's audit mechanism for Medicare Advantage risk scores. The methodology, finalized in the 2023 RADV final rule and now active for payment years 2018 forward, has three properties that change the risk math for plans:

  1. Extrapolation. Findings from a sampled set of members are projected across the entire contract. A 12% error rate on a 200-member sample becomes a 12% clawback on a 50,000-member contract.
  2. No fee-for-service adjuster. CMS removed the FFS adjuster that previously dampened RADV penalties. Errors now translate into payment recoupment dollar-for-dollar.
  3. Six-year lookback. Each contract is auditable for six payment years. Errors compound, and a single audit can reach back to coding decisions made before the current leadership team was in place.
  - 2018: first payment year subject to extrapolation under the 2023 RADV final rule
  - 6 years: lookback window per contract under current RADV methodology
  - $0: remaining FFS adjuster offset against RADV findings
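The extrapolation arithmetic is worth internalizing, because it is what turns a small sample finding into contract-level exposure. A minimal sketch (the member counts and per-member payment figure are illustrative, not drawn from any actual audit):

```python
def extrapolated_clawback(sample_errors: int, sample_size: int,
                          contract_members: int, avg_annual_payment: float) -> float:
    """Project a sampled error rate across a full contract's payments.

    Under the 2023 RADV final rule there is no FFS adjuster offset,
    so the sample error rate applies dollar-for-dollar.
    """
    error_rate = sample_errors / sample_size
    return error_rate * contract_members * avg_annual_payment

# Illustrative figures: 24 errors in a 200-member sample (12%),
# a 50,000-member contract, $11,000 average annual risk-adjusted payment.
exposure = extrapolated_clawback(24, 200, 50_000, 11_000.0)
print(f"${exposure:,.0f}")  # $66,000,000
```

The point of the exercise is the leverage: the sample is small, but the multiplier is the whole contract.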

The financial exposure for a single audit finding can exceed annual quality bonus payments. Plans that captured aggressively under V24 with thin documentation are now sitting on retroactive risk that was invisible at the time.

What "audit-defensible" actually means

A diagnosis is audit-defensible when, six years from now, an external coder pulled to review the chart will independently arrive at the same HCC. That requires all seven of the following elements to be present and retrievable:

The seven-element audit-defensibility checklist

  1. A face-to-face encounter with a qualified provider in the calendar year being audited.
  2. A signed clinical note from that encounter, documenting the condition with sufficient specificity.
  3. MEAT criteria. The condition must be Monitored, Evaluated, Assessed, or Treated during the encounter. A passing mention via problem-list copy-forward does not qualify.
  4. Supporting clinical evidence where the condition requires it: labs (e.g., HbA1c for diabetes), imaging (e.g., echo for heart failure), or pharmacy fills consistent with treatment.
  5. An ICD-10 code that maps cleanly to the claimed HCC under the applicable model version (V24, V28, or HHS-HCC).
  6. Coder review and attestation confirming the documentation supports the code.
  7. A retrievable evidence packet linking all of the above, available within the CMS-mandated response window.

A program that produces all seven on every captured HCC will pass RADV. A program that produces five out of seven will not. Most programs that fail do so on element 4 (linked supporting evidence) or element 7 (retrievability), even when the underlying clinical work was solid.
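The checklist lends itself to mechanical enforcement: score every captured HCC against all seven elements and surface whatever is missing. A sketch, with illustrative field names (the element keys here are my own shorthand, not a CMS vocabulary):

```python
from dataclasses import dataclass

# Shorthand keys for the seven audit-defensibility elements.
CHECKLIST = [
    "face_to_face_encounter", "signed_note", "meat_documented",
    "supporting_evidence_linked", "icd10_maps_to_hcc",
    "coder_attestation", "evidence_packet_retrievable",
]

@dataclass
class CapturedHCC:
    hcc: str
    elements: dict  # element key -> bool

    def missing_elements(self) -> list:
        return [e for e in CHECKLIST if not self.elements.get(e, False)]

    def audit_defensible(self) -> bool:
        # Five out of seven is a failing score; all seven must hold.
        return not self.missing_elements()

record = CapturedHCC("HCC 18", {e: True for e in CHECKLIST})
record.elements["supporting_evidence_linked"] = False
print(record.audit_defensible(), record.missing_elements())
# False ['supporting_evidence_linked']
```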

The chain of custody concept

Most documentation failures are not failures of clinical work. They are failures of evidence linkage.

A patient presents in March, the physician documents diabetes with neuropathy, the coder reviews and attaches the diagnosis to the encounter, the diagnosis flows into the claim, the claim flows into the MAO-004 acceptance file, the diagnosis posts to MMR, and three years later the auditor pulls the chart and finds only the encounter note. The lab supporting neuropathy is in a different system. The coder review record was never persisted. The trumping logic that elevated this from "diabetes uncomplicated" to "diabetes with neuropathy" lives in a vendor's logs, not in the plan's audit trail.

Each of these handoffs is a chain-of-custody link. A program is audit-defensible only when the chain is unbroken and retrievable.
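One way to make the chain explicit is to model each handoff as a named link that must carry a retrievable artifact, then report any link that comes up empty. A sketch (link names follow the narrative above; the artifact identifiers are hypothetical):

```python
# Chain-of-custody links, in handoff order, per the narrative above.
CHAIN_LINKS = [
    "encounter_note",      # signed clinical note from the visit
    "coder_review",        # coder review record, persisted
    "claim",               # diagnosis attached to the submitted claim
    "mao_004_acceptance",  # entry in the CMS acceptance file
    "mmr_posting",         # diagnosis posted to the MMR
]

def broken_links(artifacts: dict) -> list:
    """Return chain-of-custody links with no retrievable artifact."""
    return [link for link in CHAIN_LINKS if not artifacts.get(link)]

# The failure pattern from the narrative: clinical work was done,
# but the coder review record was never persisted.
artifacts = {
    "encounter_note": "note_2021_03_14.pdf",
    "claim": "clm_88412",
    "mao_004_acceptance": "mao004_2021q2.txt",
    "mmr_posting": "mmr_2021_07",
}
print(broken_links(artifacts))  # ['coder_review']
```

A chain check like this is cheap to run at capture time and prohibitively expensive to reconstruct three years later.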

Building the program: seven design choices

1. Document the condition, not the visit type

A wellness visit can document chronic conditions if the provider Monitors, Evaluates, Assesses, or Treats them during the encounter. The visit code does not determine HCC eligibility; the documented clinical activity does. Train coders to read for MEAT, not for visit type.

2. Attach evidence at the point of capture, not at audit time

Every HCC captured should have its supporting evidence (note, lab, imaging) bundled at the moment of coding. Plans that try to assemble evidence packets reactively, after a RADV notice, lose roughly 20% of supporting documents to system migrations, retired physicians, and deleted EHR instances.
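Structurally, this can be as simple as refusing to finalize a capture without a bundled packet. A sketch of that gate (the packet structure and document IDs are assumptions for illustration, not a CMS format):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EvidencePacket:
    hcc: str
    encounter_note_id: str
    supporting_docs: tuple  # lab / imaging / pharmacy document IDs
    captured_on: date

def finalize_capture(hcc: str, note_id: str, docs: list) -> EvidencePacket:
    # Refuse to finalize without a note and at least one supporting
    # document; reactive assembly after a RADV notice loses documents
    # to migrations, retirements, and deleted EHR instances.
    if not note_id or not docs:
        raise ValueError(f"{hcc}: evidence must be bundled at capture time")
    return EvidencePacket(hcc, note_id, tuple(docs), date.today())

packet = finalize_capture("HCC 18", "note_4471", ["lab_hba1c_0312"])
```

The design choice is to make the incomplete state unrepresentable: a capture without evidence simply cannot exist in the system.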

3. Apply trumping logic in real time

V28 makes constrained groups more common. Capturing both HCC X and HCC Y when X trumps Y is not a benefit; it is a coding error that flags in audit. Trumping must apply pre-claim, in the coder workspace, not as a post-claim correction.
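Pre-claim trumping is a pure function over the captured set: given a hierarchy map, suppress any HCC dominated by a more severe one before anything reaches the claim. A sketch with an illustrative trump map (the real hierarchies come from the CMS model software, not this table):

```python
# Illustrative only: maps a trumping HCC to the HCCs it suppresses.
TRUMP_MAP = {
    "HCC 17": {"HCC 18", "HCC 19"},  # e.g. a severity hierarchy
}

def apply_trumping(captured: set) -> set:
    """Drop any HCC suppressed by a more severe HCC in the same set."""
    suppressed = set()
    for hcc in captured:
        suppressed |= TRUMP_MAP.get(hcc, set()) & captured
    return captured - suppressed

print(apply_trumping({"HCC 17", "HCC 18"}))  # {'HCC 17'}
```

Running this in the coder workspace means the constrained pair never leaves the building, rather than being corrected post-claim where it is already an audit flag.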

4. Track every diagnosis through the submission lifecycle

A diagnosis is not "captured" until it appears in the MMR. Track every submitted diagnosis through 277CA acknowledgment, 002 acceptance, MAO-004 reconciliation, MMR posting, and MOR validation. Rejections at any step mean the HCC will not pay, and worse, the rejection reason often signals a documentation weakness that will reappear in audit.
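The lifecycle is an ordered pipeline, so tracking reduces to finding the furthest stage each diagnosis has actually reached and treating any gap as a stall. A minimal tracker sketch (stage names follow the text; the event-list shape is illustrative):

```python
# Submission lifecycle stages, in pipeline order, per the text.
STAGES = ["submitted", "277ca_ack", "002_accepted",
          "mao_004_reconciled", "mmr_posted", "mor_validated"]

def current_stage(events: list) -> str:
    """Return the furthest stage reached, respecting pipeline order."""
    reached = "submitted"
    for stage in STAGES:
        if stage in events:
            reached = stage
        else:
            break  # a gap means the diagnosis stalled here
    return reached

def is_captured(events: list) -> bool:
    # Per the text, "captured" means the diagnosis appears in the MMR.
    return STAGES.index(current_stage(events)) >= STAGES.index("mmr_posted")

events = ["submitted", "277ca_ack", "002_accepted"]
print(current_stage(events), is_captured(events))  # 002_accepted False
```

The stalled stage is itself a signal: where a diagnosis dies in the pipeline usually points at the documentation weakness an auditor will later find.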

5. Sample your own population before CMS does

Run an internal RADV simulation quarterly on a stratified sample. Pull 200 members, audit every HCC, score against the seven-element checklist. Plans that find 12% error rates internally have time to remediate. Plans that learn the rate from CMS do not.
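The quarterly simulation amounts to drawing a stratified sample and scoring every reviewed HCC for defensibility. A sketch of both pieces (strata, sizes, and the audit-record shape are illustrative):

```python
import random

def stratified_sample(members_by_stratum: dict, total: int = 200) -> list:
    """Draw a sample proportional to stratum size (risk-score bands, say)."""
    population = sum(len(m) for m in members_by_stratum.values())
    sample = []
    for stratum, members in members_by_stratum.items():
        # Rounding may drift the total by a member or two; fine for a simulation.
        k = round(total * len(members) / population)
        sample.extend(random.sample(members, min(k, len(members))))
    return sample

def error_rate(audited: list) -> float:
    """audited: list of (member_id, defensible: bool), one per reviewed HCC."""
    failures = sum(1 for _, ok in audited if not ok)
    return failures / len(audited)
```

An internal rate that would be survivable if remediated this quarter becomes an extrapolated clawback if CMS finds it first.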

6. Treat copy-forward as a known failure mode

The most common single source of indefensible HCCs is problem-list copy-forward without MEAT documentation in the current encounter. EHR vendors will not flag this. The capture program must.
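A crude but useful guard is to flag any diagnosis whose current-encounter text is lifted verbatim from a prior note with no fresh MEAT language. A heuristic sketch (the keyword list is illustrative; real programs back this with coder review or NLP, not string matching alone):

```python
# Illustrative MEAT-language markers; not an exhaustive or official list.
MEAT_TERMS = ("monitored", "evaluated", "assessed", "treated",
              "stable on", "titrated", "reviewed labs", "follow up")

def likely_copy_forward(current_text: str, prior_texts: list) -> bool:
    """Flag a diagnosis entry copied verbatim with no current MEAT language."""
    text = current_text.strip().lower()
    copied = any(text == prior.strip().lower() for prior in prior_texts)
    has_meat = any(term in text for term in MEAT_TERMS)
    return copied and not has_meat

prior = ["Type 2 diabetes with neuropathy."]
print(likely_copy_forward("Type 2 diabetes with neuropathy.", prior))  # True
```

A flag like this produces false positives, which is the correct trade: a flagged entry costs a coder a minute, while an unflagged copy-forward costs an HCC in audit.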

7. Persist coder reasoning, not just coder decisions

If a coder elevates a diagnosis from "diabetes uncomplicated" to "diabetes with chronic kidney disease," the reasoning ("eGFR 42 documented in March 15 lab results") must be retrievable. A coder decision without coder reasoning is a coder decision that cannot be defended.
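Persisting reasoning is a schema choice, not an NLP problem: make the reasoning and evidence-reference fields mandatory on the decision record itself. A sketch using the example from the text (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CoderDecision:
    from_code: str
    to_code: str
    reasoning: str     # e.g. "eGFR 42 documented in March 15 lab results"
    evidence_ref: str  # pointer to the retrievable source document

    def __post_init__(self):
        # A decision without reasoning cannot be persisted at all.
        if not self.reasoning.strip() or not self.evidence_ref.strip():
            raise ValueError("decision requires reasoning and a "
                             "retrievable evidence reference")

decision = CoderDecision(
    from_code="E11.9",   # type 2 diabetes, uncomplicated
    to_code="E11.22",    # type 2 diabetes with chronic kidney disease
    reasoning="eGFR 42 documented in March 15 lab results",
    evidence_ref="lab_egfr_0315",
)
```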

Common failure modes

"Most RADV findings are not surprises. They are outcomes of the program ignoring its own quarterly internal audit results for two years before CMS arrived."

Sources and further reading