
RADV Audit Checklist: The Complete Medical Record Guide for Medicare Advantage Plans in 2026


A Risk Adjustment Data Validation (RADV) audit checklist is the set of criteria used to determine whether medical records actually support the diagnosis codes that Medicare Advantage organizations (MAOs) submitted for risk adjustment. Each Hierarchical Condition Category (HCC) in an enrollee’s risk score has to be backed by compliant, face-to-face provider documentation. If it isn’t, the Centers for Medicare and Medicaid Services can collect overpayments [1].

In 2026, the stakes around this checklist look nothing like they did two years ago. The RADV program has expanded to cover all eligible MA contracts on an annual basis, with sample sizes ranging from 35 to 200 records per contract [2]. New audits now launch approximately every three months, and Payment Year 2020 audits began in February 2026 [2]. On top of that, the CMS 2027 Advance Notice proposes excluding diagnoses from unlinked chart review records entirely from risk score calculations starting in CY 2027 [3], which reinforces how aggressively regulators are tightening what counts as acceptable documentation. Incomplete medical records remain the most common reason diagnosis codes get rejected during RADV audits.

This guide walks through the official RADV medical record checklist, adds the condition-specific requirements that actually trip up audit teams, and lays out what your organization should be doing right now.

Note on extrapolation: A September 2025 federal court ruling vacated the extrapolation methodology from the 2023 RADV Final Rule. Extrapolation enforcement is currently paused. Audits themselves continue, and the Department of Health and Human Services appealed the decision in November 2025. The agency will comply with the court’s order while continuing to pursue outstanding and future payment year audits [2].

Key Takeaways

  • The RADV program now covers all eligible Medicare Advantage contracts annually, with samples of 35 to 200 records per contract.
  • Every diagnosis submitted for risk adjustment must be supported by a face-to-face encounter with MEAT criteria (Monitor, Evaluate, Assess, Treat) documented in the medical record.
  • Records that are illegible, unsigned, or missing clinical specificity will fail.
  • Medicare Advantage plans have a five-month submission window per audit [2].
  • A maximum of two medical records per audited HCC is allowed, but only one valid record is needed to support payment [2].
  • Mock RADV audits on a quarterly cadence are a baseline expectation, not a nice-to-have.

What Is a RADV Audit and Why Does It Matter Now?

The MA Risk Adjustment Data Validation program is the federal government’s primary mechanism to address overpayments to Medicare Advantage organizations (MAOs) [1]. During a RADV audit, regulators verify that diagnoses submitted for risk adjustment are actually supported in enrollees’ medical records. Unsupported diagnoses can trigger overpayment collection.

Here’s how it works: a sample of enrollees is selected from each audited contract, and the health plan must provide medical records to validate every HCC in those enrollees’ risk scores for the payment year. Trained coders then review each record against coding guidelines. Any diagnosis that doesn’t hold up becomes a finding. These contract-level audits happen after the final risk adjustment data submission deadline for the MA contract year [1].

What changed is the scale and cadence. The RADV program now targets all eligible contracts annually, with variable sample sizes based on contract characteristics [2]. Regulators are scaling their coder workforce and integrating AI-enabled tools to support efficiency, though all final determinations are still made by certified human coders [2]. A January 27, 2026, memo from the Audits and Vulnerabilities Group confirmed PY 2020 audits began in February 2026, with future audits launching on a roughly quarterly schedule [2].

The memo also acknowledged that, despite federal estimates suggesting MA organizations may be submitting unsupported diagnosis data resulting in approximately $17 billion in overpayments annually, the last significant recovery under the RADV program was for PY 2007 [2]. Completed audits for PYs 2011 through 2013 found overpayment rates between 5% and 8%, with recoveries on those amounts beginning soon [2]. The backlog is real, and regulators are moving to clear it.

As Wynda Clayton, former auditor and RAAPID’s Director of Risk Adjustment Coding and Compliance, puts it: “Always be ready because anyway this is going to happen.”

The Official RADV Medical Record Checklist

The published RADV medical record checklist helps Medicare Advantage contracts evaluate whether a record is suitable for submission [4]. Any item answered “No” may indicate the record won’t support the audited HCC. Here’s the checklist with practical context.

Each item must be answered "Yes." A "No" on any item means the record fails:

  • Is the record for the correct enrollee? (Name + DOB or Member ID)
  • Is the record from the correct calendar year for the payment year being audited?
  • Is the date of service present for the face-to-face visit?
  • Is the record legible to non-clinical staff?
  • Is the record from a valid provider type? (Hospital inpatient, hospital outpatient, or physician)
  • Are there valid credentials and/or a valid physician specialty documented on the record?
  • Does the record contain a signature from an acceptable provider?
  • If the signature or credentials are missing, is there a completed CMS-generated attestation for this date of service?
  • Is there a diagnosis on the record that supports an HCC?
  • Does the diagnosis support the specific requested HCC?
  • Does the documentation satisfy MEAT criteria for the diagnosis?

Source: RADV Medical Record Checklist and Guidance [4]

A maximum of two medical records per audited HCC is allowed, though only one valid record is needed [2]. Submit your strongest. A weak second record alongside a strong one just introduces risk.

For inpatient records specifically, the checklist requires admission and discharge dates alongside a signed discharge summary [4]. All necessary pages must be included, and legibility needs to be confirmed for non-clinical staff who may review the file.
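The checklist above can be run as a mechanical pre-submission gate. Below is a minimal sketch of that idea; the field names (enrollee_match, signed_or_attested, and so on) are illustrative assumptions, not a CMS schema or any EHR's data model.

```python
# Sketch of a pre-submission gate based on the published RADV checklist.
# Field names are hypothetical; map them to your own record-review workflow.

CHECKLIST_ITEMS = [
    "enrollee_match",        # correct name + DOB or member ID
    "correct_service_year",  # dates of service fall in the audited payment year
    "dos_present",           # date of service documented for the face-to-face visit
    "legible",               # readable by non-clinical staff
    "valid_provider_type",   # hospital inpatient, hospital outpatient, or physician
    "credentials_present",   # valid credentials and/or physician specialty
    "signed_or_attested",    # acceptable signature, or CMS attestation on file
    "hcc_supported",         # a diagnosis on the record maps to the requested HCC
    "meat_satisfied",        # Monitor/Evaluate/Assess/Treat documented
]

def screen_record(record: dict) -> list[str]:
    """Return the checklist items a record fails; an empty list means it passed."""
    return [item for item in CHECKLIST_ITEMS if not record.get(item, False)]

record = {item: True for item in CHECKLIST_ITEMS}
record["signed_or_attested"] = False  # e.g., unsigned note with no attestation

print(screen_record(record))  # ['signed_or_attested'] -> do not submit this record
```

Because only one valid record is needed per audited HCC, a gate like this also supports ranking candidate records and submitting only the strongest one.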

What Are MEAT Criteria and Why Do They Determine Audit Outcomes?

MEAT criteria (Monitor, Evaluate, Assess, Treat) are the documentation standard regulators use to validate HCC diagnoses during RADV audits. Listing a diagnosis on a problem list or in an assessment isn’t enough. The medical record must show active management of the condition during that encounter [4].

Monitor

Labs, vitals, imaging results, or other objective data tied to the condition. For a patient with chronic kidney disease, this means documented eGFR or creatinine values. “CKD noted” with no lab data won’t survive an audit.

Evaluate

A clinical assessment was performed during this visit. The physician reviewed the condition, examined the patient, or ordered relevant tests. The record needs to show the clinician engaged with the diagnosis, not just carried it forward from a prior encounter.

Assess

The record documents the current status: stable, worsening, improving, or a specific stage or severity. Vague language like “stable, continue meds” without any clinical context is one of the most common failure points medical coders encounter.

Treat

Active treatment is documented: medications prescribed or adjusted, therapy ordered, referrals made, or a care plan updated. The treatment must connect directly to the specific diagnoses being validated.

If any one of these four elements is missing, the diagnosis won’t pass. Wynda Clayton’s test: “Before you validate any condition, ask yourself: If I had to go before a court of law, can I defend this?”
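The all-four-or-fail logic above lends itself to a simple per-diagnosis check. This is a minimal sketch assuming abstracted evidence has already been extracted from the note; the evidence keys and the sample CKD values are illustrative, not an EHR schema.

```python
# Minimal sketch: verify an encounter note carries all four MEAT elements
# for a given diagnosis. Keys and sample values are hypothetical.

MEAT = ("monitor", "evaluate", "assess", "treat")

def meat_gaps(evidence: dict) -> list[str]:
    """Return the MEAT elements with no documented evidence for a diagnosis."""
    return [element for element in MEAT if not evidence.get(element)]

ckd_note = {
    "monitor": "eGFR 48, creatinine 1.4",             # objective data tied to CKD
    "evaluate": "renal function reviewed this visit",
    "assess": "CKD stage 3a, stable",                  # explicit stage and status
    "treat": "",                                       # no plan documented
}

print(meat_gaps(ckd_note))  # ['treat'] -> diagnosis will not validate
```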

Condition-Specific Documentation Requirements

Certain high-risk diagnoses fail at higher rates because their documentation requirements demand more precision. Regulators look for clinical detail, not general references.

The OIG published a toolkit specifically designed to help MA organizations identify diagnosis codes at high risk for being miscoded [5]. Their audits found that roughly 90% of targeted high-risk diagnosis codes were unsupported by the associated medical records [5]. The conditions below overlap heavily with OIG’s high-risk categories.

CKD (Chronic Kidney Disease)

What must be documented: Stage (1 through 5) explicitly stated, eGFR/creatinine labs, and a treatment plan or referral. The proposed 2027 model cuts the CKD Stage 3A coefficient by 50.4% [3], making accurate staging even more consequential for risk score accuracy.

Common failure pattern: “CKD noted” without a stage or supporting lab values.

Wynda Clayton: “Don’t say the patient has CKD. We need to know those stages. Without that, you’re probably not going to pass.”

Diabetes with Complications

What must be documented: Type (1 or 2) specified, the complication explicitly linked to the diabetes, and active treatment or monitoring for that complication. The diabetes coefficient drops 6.6% in the proposed 2027 model [3].

Common failure pattern: “DM 2, stable” with no complication management documented.

Records submitted to validate HCCs involving additional manifestations (such as diabetes with renal manifestations) must include language from a physician that establishes a causal link between the disease and the complication [4].

CHF (Congestive Heart Failure)

What must be documented: Type (systolic, diastolic, or combined), current status assessed during this visit, and active treatment. The heart failure coefficient drops 10.8% in the proposed 2027 model [3].

Common failure pattern: CHF on the problem list but not addressed in the encounter note. The condition exists in the chart, but nobody touched it during the visit.

Acute Stroke

What must be documented: Inpatient or ED admission record, imaging confirmation (CT/MRI), and residual deficits or active treatment.

Common failure pattern: “History of CVA” documented without residuals or supporting hospital inpatient records. OIG’s toolkit found a 96% error rate for acute stroke diagnoses that appeared only on physician claims without a corresponding inpatient record [5].

Cancer Diagnoses

Guidance specifically flags cancer as a high-risk area. A notation of “history of cancer” without indication of current treatment may not validate the HCC [4]. When possible, obtain records from the treating oncologist. OIG audits found error rates of 88% to 96% for lung, breast, and colon cancer diagnoses that lacked supporting treatment evidence [5]. These are also OIG target audit areas, making them prime RADV targets.

How Should Medicare Advantage Plans Prepare?

Audit readiness can’t be something you scramble to build when a Notice of Audit shows up. With regulators now auditing all eligible contracts on a rolling quarterly basis, MA organizations need year-round preparation. Here’s what that looks like.

  • Maintain a standing RADV response team. Don’t assemble a group per audit notice. Assign key stakeholders from compliance, coding, HIM, legal, and provider relations as a permanent working group. Internal teams need clear roles and accountability before the audit starts.
  • Run mock RADV audits quarterly. Use the latest guidance and OIG targeting criteria. Pull sample records, run them through the full checklist, and score your pass rate. If your MEAT failure rate exceeds 5%, that needs immediate attention.
  • Identify and prioritize high-risk HCC cohorts. Use data analytics to flag the specific diagnoses OIG targets most often: acute stroke, cancer diagnoses, acute heart attack, and embolism [5]. Review these before regulators do.
  • Test your medical record retrieval process. The five-month submission window [2] sounds generous until you’re chasing records from hundreds of providers simultaneously. Provider fatigue and retrieval delays can blow your timeline. If every MAO is pulling records at the same time, chart retrieval vendors may not have capacity.
  • Deploy audit-specific technology. Regulators are using AI-enabled tools to support coder efficiency [2]. Health plans need tools that can validate documentation against MEAT criteria, flag gaps, and rank the strongest records for each audited HCC. Generic workflow software won’t cut it for a process this structured.
  • Educate providers on documentation practices. Physicians aren’t just documentation sources. They’re compliance partners. Active education programs that teach MEAT criteria, specificity requirements, and common failure patterns reduce audit exposure upstream. Clinical staff who understand what auditors look for produce documentation that holds up.
  • Model RADV exposure for your board. This is a financial risk item, not a compliance footnote. Organizations that fail to maintain compliance may face corrective action plans requiring expensive remediation. RADV exposure should be reported as a standing risk item.
  • Ensure delegation agreements include quality standards. If you delegate coding or chart review, your agreements must include self-audit clauses and documentation accuracy benchmarks. The OIG’s February 2026 MA ICPG makes clear that MAOs bear responsibility for oversight of all first-tier, downstream, and related entities [6].
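Scoring a mock audit can be as simple as the sketch below, which computes the MEAT failure rate against the 5% threshold suggested above. The result structure and sample numbers are illustrative assumptions, not a prescribed format.

```python
# Sketch: score a quarterly mock RADV audit. Each result records whether the
# chart passed the full checklist and, if not, the reason. Data is made up.

def meat_failure_rate(results: list[dict]) -> float:
    """Fraction of reviewed charts that failed specifically on MEAT criteria."""
    if not results:
        return 0.0
    meat_fails = sum(1 for r in results if not r["passed"] and r["failed_on"] == "meat")
    return meat_fails / len(results)

mock_results = (
    [{"passed": True, "failed_on": None}] * 92
    + [{"passed": False, "failed_on": "meat"}] * 6
    + [{"passed": False, "failed_on": "signature"}] * 2
)

rate = meat_failure_rate(mock_results)
print(f"MEAT failure rate: {rate:.1%}")  # 6.0% -> above the 5% threshold
if rate > 0.05:
    print("Escalate: provider education and targeted re-review")
```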

What Red Flags Do Auditors Look For?

Not every audit failure comes from obviously bad documentation. Some patterns are subtle, but they raise flags that trigger deeper review. Former auditor Wynda Clayton identifies these as the patterns that told her something was off.

Documentation Red Flags

  • “History of” used for acute conditions (this describes resolved conditions, not active ones)
  • Diagnosis on the problem list, but not addressed in the encounter note
  • Vague language: “stable,” “continue meds” without clinical context
  • Template or cloned documentation patterns (identical language across multiple visits)
  • Missing specificity: no stage, no type, no severity
  • No lab values to support chronic conditions

Coding Pattern Red Flags

  • Acute diagnosis codes submitted without hospital inpatient records
  • High-value HCCs documented without specialist notes
  • Cancer codes without oncology involvement
  • A single provider with an unusually high HCC capture rate
  • A diagnosis submitted once and never documented again
  • Prescription Drug Event data inconsistent with the submitted diagnosis

Wynda: “When I saw these patterns, I knew to dig deeper.” These don’t automatically mean fraud. But they attract scrutiny. And scrutiny is expensive.
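One of the coding-pattern red flags above, a single provider with an unusually high HCC capture rate, is straightforward to surveil internally. The sketch below uses a plain z-score over per-provider capture rates; the provider IDs, rates, and the 2.0 cutoff are illustrative assumptions, not a regulatory method.

```python
# Sketch: flag providers whose HCC capture rate (e.g., HCCs per 100 encounters)
# is an outlier relative to peers. All data and thresholds are hypothetical.

from statistics import mean, stdev

def outlier_providers(capture_rates: dict, z_cutoff: float = 2.0) -> list:
    """Return provider IDs whose rate is more than z_cutoff SDs above the mean."""
    rates = list(capture_rates.values())
    if len(rates) < 2:
        return []
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [p for p, r in capture_rates.items() if (r - mu) / sigma > z_cutoff]

rates = {
    "prov_a": 18.0, "prov_b": 21.0, "prov_c": 19.5, "prov_d": 20.0,
    "prov_e": 19.0, "prov_f": 20.5, "prov_g": 18.5, "prov_h": 58.0,
}
print(outlier_providers(rates))  # ['prov_h'] -> queue for internal chart review
```

A flagged provider isn't a finding; it's a prompt to pull charts and review them before regulators do.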

Why Retrospective Reviews Must Be Two-Way

One of the biggest risk factors regulators now watch for is add-only retrospective chart review programs. The OIG’s February 2026 MA ICPG directly addresses this: federal investigations have revealed conduct including using chart reviews to increase risk scores inappropriately and failing to remove unsupported diagnosis codes previously submitted [6]. OIG concluded that some MA organizations may be leveraging chart reviews and in-home Health Risk Assessments to maximize risk adjustment payments without corresponding processes to ensure data accuracy [6].

The logic is straightforward: a review process that only adds diagnoses and never removes unsupported ones looks like a revenue engine, not a compliance program. DOJ enforcement actions (including the Kaiser Permanente $556M settlement and Cigna’s $172M settlement) reinforce that systems designed primarily to increase risk scores can be interpreted as an intent to inflate government payments.

Compliant retrospective programs work in both directions. They identify uncaptured diagnoses that are legitimately supported (adds) and flag or remove diagnoses that lack documentation support (deletes). This is the baseline for defensible risk adjustment now, not a best practice.

How Does AI Technology Support Audit Readiness?

Regulators have committed to using AI-enabled tools to support RADV coding teams, with all final determinations made by certified human coders [2]. If the federal government is using technology to find your errors, you need technology to find them first.

Audit-specific AI solutions can automate routine compliance checks, provide visibility into documentation status across provider networks, and validate records against MEAT criteria before submission. The right tools surface the strongest documentation for each audited HCC and flag gaps before records reach regulators. Organizations using purpose-built audit technology report meaningful productivity gains while keeping coding accuracy rates high.*

The distinction that matters: AI that operates as decision support (transparent, with human oversight) versus AI that runs as ungoverned automation. Regulators keep human coders making final calls. Health plans should demand the same from their vendors.

Neuro-Symbolic AI, which pairs large language models with clinical knowledge graphs, can generate transparent evidence trails for every suggested HCC by connecting each recommendation directly to documented clinical evidence in the note. This kind of explainability holds up under both RADV scrutiny and DOJ investigation. It also strengthens appeal outcomes when legitimate documentation exists but wasn’t originally surfaced during the audit response.

*RAAPID internal benchmarks

Build Audit Readiness Before the Notice Arrives

RADV audits are no longer episodic events. They’re a routine, high-stakes compliance reality for every Medicare Advantage organization. The plans that survive this environment aren’t the ones with zero errors. They’re the ones with documented, repeatable processes that catch and correct errors before regulators do.

If your current workflow can’t handle concurrent audits, validate documentation against MEAT criteria at scale, or produce defensible evidence trails on demand, it’s time to reassess.


The RADV Defensibility Pocket Checklist

Built by Wynda Clayton, a former CMS RADV auditor. Includes the “Court of Law” test, chart-level defensibility checklist, condition-specific documentation requirements, and red flag patterns.

Schedule a Demo of RAAPID’s RADV AI Audit Solution: Purpose-built audit management with AI-powered HCC validation and real-time analytics across concurrent audits.

Frequently Asked Questions

How are contracts selected for RADV audits?

Sophisticated targeting models analyze enrollment patterns, coding trends, and risk score variations to select eligible contracts [1]. With the program now covering all eligible contracts, virtually every MA plan faces annual scrutiny. Variable sample sizes of 35 to 200 enrollees are based on contract size, with smaller contracts much less likely to receive the maximum sample [2].

How long do plans have to submit medical records?

A five-month submission window has been restored, up from the three-month window initially announced in May 2025 [2]. Plans should have retrieval processes tested and operational well before audit notices arrive.

What happens if a diagnosis isn't supported?

Regulators may collect overpayments [1]. Unsupported diagnoses don't just affect the sample; the financial impact can extend across the entire contract. While the methodology for calculating repayment across the contract population remains in flux following the September 2025 court ruling, audits continue regardless [2]. Inaccurate or incomplete provider documentation of patient diagnoses can lead to disallowed codes and retroactive recoupments.

Can plans appeal RADV audit findings?

Yes. MA organizations can appeal medical record review determinations through three levels: reconsideration, Hearing Officer review, and Administrator review. MAOs must exhaust all three levels of medical record appeals before appealing a payment error calculation [7].

How do RADV audits differ from OIG investigations?

RADV audits focus on data validation and payment accuracy. OIG investigations can expand into fraud allegations with criminal penalties. The financial consequences differ dramatically: RADV results in repayment obligations, while OIG and DOJ actions can result in multi-hundred-million-dollar settlements.

How many medical records can be submitted per audited HCC?

A maximum of two, but only one valid record is needed to support payment [2]. Submit your strongest record.

Source

[1] Centers for Medicare and Medicaid Services, “Medicare Advantage Risk Adjustment Data Validation Program.” cms.gov

[2] Centers for Medicare and Medicaid Services, HPMS Memorandum from Steven Ferraina, Acting Director, Audits and Vulnerabilities Group, January 27, 2026. “Update on the Status of Medicare Advantage Risk Adjustment Data Validation Audits.” crowell.com

[3] Centers for Medicare and Medicaid Services, “2027 Advance Notice for Medicare Advantage Capitation Rates and Part C and Part D Payment Policies,” January 2026. cms.gov (PDF). Coefficient data per Norwood Solutions analysis.

[4] Centers for Medicare and Medicaid Services, “RADV Medical Record Checklist and Guidance.” cms.gov (PDF)

[5] U.S. Department of Health and Human Services, Office of Inspector General, “Toolkit To Help Decrease Improper Payments in Medicare Advantage Through the Identification of High-Risk Diagnosis Codes,” Report A-07-23-01213, December 2023. oig.hhs.gov

[6] U.S. Department of Health and Human Services, Office of Inspector General, “Medicare Advantage Industry Segment-Specific Compliance Program Guidance (MA ICPG),” February 3, 2026. oig.hhs.gov/compliance/ma-icpg/

[7] Centers for Medicare and Medicaid Services, “Contract Year 2025 Policy and Technical Changes to the Medicare Advantage Program,” Final Rule, April 2024. federalregister.gov

About the author


Wynda Clayton

Director of Risk Adjustment Coding and Compliance

This guide was developed with input from Wynda Clayton, MS, RHIT, Director of Risk Adjustment Coding and Compliance at RAAPID. Wynda is a former CMS RADV auditor with 20+ years of experience in healthcare coding and compliance. Her firsthand audit experience shapes RAAPID's approach to defensible coding and audit readiness. Questions? Contact Wynda at wynda.c@raapidinc.com.


Disclaimer: All the information, views, and opinions expressed in this blog are inspired by Healthcare IT industry trends, guidelines, and their respective web sources and are aligned with the technology innovation, products, and solutions that RAAPID offers to the Risk adjustment market space in the US.