TL;DR: Key Takeaways
For Busy Executives:
- MEAT criteria (Monitored, Evaluated, Assessed, Treated) are the foundation of defensible HCC coding that survives audits
- Only one MEAT element is required per diagnosis, but more complete documentation provides a stronger audit defense
- The CMS-HCC Model V28 transition increases complexity with 115 HCC categories (up from 86) while removing 2,294 diagnosis codes from risk adjustment
- Manual MEAT validation at scale is unsustainable—leading plans are deploying AI solutions that provide complete evidence trails
- Explainability matters more than speed: your AI must show exactly where each MEAT element appears in clinical notes
Critical Action Items:
- Audit current documentation for MEAT compliance across a representative sample
- Implement validation workflows that link every HCC to specific MEAT evidence
- Evaluate AI solutions based on transparency, not just accuracy percentages
- Prepare for intensified scrutiny as the regulatory pause on extrapolation ends
Are your coding teams still drowning in manual chart reviews? Is incomplete documentation putting your Medicare Advantage plan at risk during audits?
The answer lies in understanding and implementing MEAT criteria (Monitored, Evaluated, Assessed, Treated), the foundation of defensible risk adjustment coding. This framework ensures every diagnosis code you submit is backed by solid clinical evidence that can survive CMS scrutiny.
But here’s what changed the game: it’s not just about having MEAT evidence—it’s about proving you can show auditors exactly where that evidence lives in the clinical note.
Table of Contents:
- What Are MEAT Criteria in Medical Coding?
- Why MEAT Criteria Matter for Risk Adjustment in Medicare Advantage
- The Explainability Problem Most Organizations Face
- How MEAT Criteria Support HCC Coding and Risk Scores
- The Four MEAT Elements Explained with Clinical Examples
- The Explainability Breakthrough: What Changed
- Common Challenges with MEAT Criteria Documentation
- How to Implement MEAT-Enabled HCC Coding Solutions
- MEAT Criteria and RADV Audit Readiness
- What This Means for Your Planning
- Common Questions
- RAAPID's Approach to MEAT-Enabled Risk Adjustment
- Next Steps for Medicare Advantage Plans
- Frequently Asked Questions
What Are MEAT Criteria in Medical Coding?
MEAT criteria represent the four essential documentation elements that establish the presence of a chronic condition during a face-to-face patient encounter. The MEAT acronym stands for:
- Monitor: Signs, symptoms, disease progression, or disease regression documented in clinical notes[1]
- Evaluate: Test results, medication effectiveness, response to treatment, or physical exam findings reviewed[1]
- Assess/Address: Discussion, records review, counseling, or acknowledgment of the condition’s status documented[1]
- Treat: Medications, therapies, surgery, specialist referrals, or ongoing management plans specified[1]
According to ICD-10-CM official coding guidelines, all documented conditions coexisting at the time of an encounter that require or affect patient care, treatment, or management must be coded as a diagnosis[2]. The MEAT criteria ensure this documentation meets CMS standards for hierarchical condition categories.
The critical insight most organizations miss: A diagnosis can’t simply appear in a problem list. It must have at least one MEAT element documented during the face-to-face encounter, showing the provider actively managed that condition during the visit.
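To make this concrete, here is a minimal sketch, in Python, of how a coded diagnosis and its MEAT evidence trail might be represented. The class and field names are hypothetical illustrations, not a standard schema or any vendor's data model:

```python
# Hypothetical sketch of a MEAT evidence trail tied to a coded diagnosis.
# Names and structure are illustrative only -- not a standard or vendor schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class MeatElement(Enum):
    MONITOR = "monitor"
    EVALUATE = "evaluate"
    ASSESS = "assess"      # assess/address
    TREAT = "treat"


@dataclass
class MeatEvidence:
    element: MeatElement   # which MEAT element this excerpt supports
    excerpt: str           # verbatim text from the encounter note
    note_section: str      # e.g., "Assessment/Plan"


@dataclass
class CodedDiagnosis:
    icd10_code: str
    encounter_date: str    # must come from a face-to-face encounter
    evidence: List[MeatEvidence] = field(default_factory=list)

    def is_meat_supported(self) -> bool:
        # At least one documented MEAT element is required to support the code.
        return len(self.evidence) > 0


dx = CodedDiagnosis(
    icd10_code="E11.9",
    encounter_date="2025-03-14",
    evidence=[MeatEvidence(MeatElement.EVALUATE,
                           "A1C reviewed, remains elevated at 8.2%",
                           "Assessment/Plan")],
)
print(dx.is_meat_supported())  # True
```

The point of the structure is the link itself: every code carries the excerpt and location that a reviewer, or an auditor, can check without any reconstruction.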
Why MEAT Criteria Matter for Risk Adjustment in Medicare Advantage
Medicare Advantage plans face unprecedented pressure in 2025. Historical CMS data showed that more than two-thirds of Medicare beneficiaries lived with two or more chronic conditions, accounting for 94% of overall Medicare spending[3]. Risk-adjusted payments depend entirely on accurate HCC coding backed by MEAT-compliant documentation.
Here’s the reality: A simple diagnosis list in the medical record does not support a reported HCC code[4]. CMS focuses on diagnoses to demonstrate the need for higher reimbursement rates for patients with complex health conditions. Without proper MEAT documentation, health plans risk:
- Revenue clawbacks: Missing MEAT evidence leads to rejected codes during risk adjustment data validation audits
- Coding accuracy challenges: Teams struggle to validate thousands of charts before audit deadlines
- Compliance vulnerabilities: Incomplete documentation creates compounding audit risk across payment years
- Provider abrasion: Constant back-and-forth requests for documentation clarification strain provider relationships
The stakes are clear. Recent audit findings revealed that organizations faced clawbacks exceeding $5 million due to non-compliant risk adjustment documentation lacking proper MEAT support[5]. The regulatory pause on extrapolation methodology doesn't eliminate the underlying documentation challenge—it simply buys time for organizations to strengthen their compliance infrastructure.
The Explainability Problem Most Organizations Face
Two years ago, most risk adjustment leaders wouldn’t have trusted autonomous AI coding for a simple reason: the technology couldn’t explain its decisions in ways that would survive an audit.
Here’s what typically happened:
You’d input a clinical note. The AI would output HCC codes. When asked why, you’d get technical explanations about machine learning models and confidence scores. But there was no clear trail showing how the system arrived at each HCC—no way to see the documented evidence that supported the code.
That doesn’t work in an audit.
Auditors want to see the documentation. They want MEAT (Monitored, Evaluated, Assessed, Treated) spelled out clearly. They want to understand how each code is supported by the provider's diagnostic statement in the medical record.
If your system can’t show the documented evidence trail, you’re not reducing risk—you’re multiplying it.
Industry experts who were skeptical of autonomous coding identified three things that had to change before they could trust AI for risk adjustment:
The Three Requirements for Defensible AI Coding
First: The AI had to show its clinical reasoning, not just output codes with confidence scores.
Evidence trails matter. Compliance teams need to see the documented evidence supporting each code, not just technical explanations about how the algorithm works.
Second: Every code needed a complete evidence trail.
Not something a human would need to reconstruct. The full path from clinical note to MEAT documentation to final HCC assignment—all documented and auditor-ready.
Third: Compliance teams had to be able to validate the evidence at scale.
If you’re manually reviewing every AI decision to verify the documentation, you haven’t solved the capacity problem. You’ve just added another bottleneck.
Most tools evaluated by major health plans couldn’t clear all three bars. That’s why skepticism persisted even as AI coding accuracy improved.
How MEAT Criteria Support HCC Coding and Risk Scores
Coding professionals must review the entire medical record documentation to assign appropriate ICD-10-CM diagnosis codes[6]. Most chronic conditions match one of the hierarchical condition categories in the CMS-HCC model. According to CMS guidance, approximately 9,500 of the roughly 70,000 ICD-10-CM codes map to HCC categories used in risk adjustment calculations[7].
The connection between MEAT criteria and risk adjustment factor scores is direct:
- Complete documentation using MEAT elements → Accurate ICD-10-CM code assignment
- Accurate diagnosis coding → Correct HCC mapping
- Correct HCC assignment → Appropriate RAF score calculation
- Appropriate risk scores → Fair risk-adjusted payments from CMS
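For illustration only, the snippet below walks that chain in Python with placeholder mappings and coefficients. Real calculations must use the published CMS-HCC model tables and apply the hierarchies, interaction terms, and normalization that are omitted here:

```python
# Simplified documentation-to-payment chain. The ICD-to-HCC mapping and the
# coefficient values are placeholders, not actual CMS-HCC figures, and
# hierarchies / interaction terms / normalization are intentionally omitted.
ICD_TO_HCC = {
    "E11.9": "HCC-A",   # placeholder category label
    "I50.22": "HCC-B",  # placeholder category label
}

HCC_COEFFICIENTS = {
    "HCC-A": 0.166,     # hypothetical relative factor
    "HCC-B": 0.331,     # hypothetical relative factor
}

def raf_score(demographic_factor: float, meat_supported_codes: list[str]) -> float:
    """Sum the demographic factor and the coefficient of each mapped HCC."""
    hccs = {ICD_TO_HCC[c] for c in meat_supported_codes if c in ICD_TO_HCC}
    return demographic_factor + sum(HCC_COEFFICIENTS[h] for h in hccs)

# Only diagnoses with documented MEAT support should reach this step.
print(round(raf_score(0.45, ["E11.9", "I50.22"]), 3))  # 0.947
```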
The CMS-HCC Model V28, currently being phased in through 2026, increases HCC categories from 86 to 115 while assigning risk scores to 2,294 fewer codes[8]. This transition makes precise MEAT-based documentation even more critical for value-based payment models.
Payment Year 2025 uses a blended approach: 33% of risk scores calculated with V24 (2020 model) and 67% calculated with V28 (2024 model). This means your coding teams must understand MEAT requirements across multiple model versions simultaneously—adding significant complexity to validation workflows.
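As a quick arithmetic sketch with made-up input scores, the PY2025 blend works out like this:

```python
# PY2025 blended risk score: 33% of the V24 score plus 67% of the V28 score.
# The input scores below are made-up examples.
def blended_raf(raf_v24: float, raf_v28: float) -> float:
    return 0.33 * raf_v24 + 0.67 * raf_v28

print(round(blended_raf(1.12, 0.98), 4))  # 1.0262
```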
The Four MEAT Elements Explained with Clinical Examples
- Monitor for Chronic Conditions: Documentation must show ongoing surveillance of the patient’s health status. For diabetes, this includes tracking blood glucose levels, monitoring signs of neuropathy, noting disease progression or regression after lifestyle changes, or documenting symptoms the patient reports.
- Evaluate Test Results and Medication Effectiveness: Providers must document their review of diagnostic tests, lab values, imaging results, or other findings that inform treatment decisions. A progress note stating “A1C reviewed, remains elevated at 8.2%” meets this criterion. Similarly, noting “patient reports improved glucose control on current insulin regimen” demonstrates evaluation.
- Assess or Address the Present Illness: The medical decision-making process must reflect active consideration of the chronic conditions addressed during the encounter. This includes discussions with the patient, care plan adjustments, counseling about ongoing management strategies, or acknowledging the current status of the condition. Documentation like “discussed medication adherence barriers” or “reviewed home glucose monitoring logs with patient” meets this requirement.
- Treat with Medications and Other Modalities: Documentation must specify treatment interventions: prescriptions written, dosage adjustments, referrals to specialists, physical therapy orders, plans for surgical intervention, or documented ongoing management strategies. This can include continuing current medications, adjusting doses, adding new therapies, or planning follow-up interventions.
Critical insight: Only one MEAT element is needed to support a diagnosis code, but the more elements included in the documentation, the stronger the audit defense[9]. The key difference between compliant and non-compliant documentation isn’t the presence of the diagnosis—it’s the evidence that the provider actively managed that condition during the encounter.
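To show what “looking for MEAT evidence” can mean in practice, here is a deliberately simplified, keyword-based sketch in Python. The patterns and note text are illustrative assumptions; production systems, including the neuro-symbolic approach discussed in the next section, apply far richer clinical logic:

```python
# Toy example: surface candidate MEAT evidence with keyword patterns.
# Patterns are illustrative and incomplete -- not a clinical rule set.
import re

MEAT_PATTERNS = {
    "Evaluate": [r"\bA1C reviewed\b", r"\bresponse to\b", r"\breviewed\b.*\bresults\b"],
    "Monitor":  [r"\bmonitor(ing)?\b", r"\bprogression\b", r"\bregression\b"],
    "Assess":   [r"\bdiscussed\b", r"\bassess(ed|ment)?\b"],
    "Treat":    [r"\bcontinue\b.*\b(insulin|metformin)\b", r"\brefer(ral)?\b", r"\bincrease dose\b"],
}

def candidate_meat_evidence(note_text: str) -> dict:
    """Return MEAT elements with the sentences that triggered each match."""
    hits = {}
    for sentence in re.split(r"(?<=[.!?])\s+", note_text):
        for element, patterns in MEAT_PATTERNS.items():
            if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
                hits.setdefault(element, []).append(sentence.strip())
    return hits

note = ("Type 2 diabetes: A1C reviewed, remains elevated at 8.2%. "
        "Discussed medication adherence barriers. "
        "Continue metformin, increase dose at next visit if no improvement.")
print(candidate_meat_evidence(note))
# {'Evaluate': [...], 'Assess': [...], 'Treat': [...]}
```

Even this toy version makes the key point: each flagged element points back to the exact sentence that supports it, which is the property auditors care about.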
The Explainability Breakthrough: What Changed
Here’s the critical breakthrough that’s changing risk adjustment operations: Neuro-Symbolic AI approaches that combine machine learning with explicit clinical reasoning rules.
These systems don’t just identify codes. They apply structured logic to read notes the way trained coders do, identifying the provider’s diagnostic statement and looking for specific evidence of monitoring, evaluation, and treatment.
More importantly, they document what they found and where they found it.
Every HCC comes with a complete evidence trail showing exactly where in the note each MEAT component was identified. The system links the documented evidence directly to the provider’s diagnostic statement that supports the code assignment.
Real-World Validation
Consider a mid-sized health plan with 40,000 retrospective charts sitting in the queue and 15 coders already maxed out. They’d get through maybe 25,000 charts before the deadline using their existing process.
They implemented an AI system with complete evidence trail capabilities. The system processed all 40,000 charts. When compliance spot-checked 500 random charts, the accuracy rate was 98%+.
But here’s what the compliance director said was the biggest relief: It wasn’t the productivity gain. It was opening a chart and immediately seeing the complete evidence trail. No detective work. No guessing. Just clear documentation of what the provider stated and where the supporting MEAT existed.
That’s when it clicked: This isn’t about replacing expertise. It’s about making expertise scalable—but only when the technology can prove its work.
Common Challenges with MEAT Criteria Documentation
Medicare Advantage plans continue to face three major obstacles:
Documentation Gaps Across Multiple Systems
Fragmented data lives in different EHR systems, making it difficult to compile complete documentation for risk adjustment coding. Charts may lack critical MEAT elements buried in specialist notes, external provider records, or test results documented separately from clinical encounters.
The challenge intensifies when validating historical charts for RADV audits—you may need to reconstruct documentation from six years ago across multiple source systems that have since been upgraded or replaced.
Inconsistent Provider Documentation
Not all providers document with risk adjustment in mind. A cardiologist might note “CHF stable” without specifying which MEAT elements apply:
- Did they evaluate an echocardiogram?
- Did they assess medication response?
- Did they adjust diuretics or plan follow-up testing?
This ambiguity creates coding uncertainty. Without clear MEAT documentation, coders must make judgment calls that may not withstand audit scrutiny.
Manual Validation at Scale
Health plans must validate tens of thousands of member charts annually. Reviewing each medical record manually for MEAT evidence remains unsustainable. Teams burn out reviewing charts, especially when preparing for RADV audits that may require validating payment year 2018 charts using CMS-HCC V22—a model many current coders have never used.
The capacity problem won’t solve itself. Hiring more coders is expensive and slow. Your members’ health needs don’t pause while you’re recruiting.
How to Implement MEAT-Enabled HCC Coding Solutions
Forward-thinking Medicare Advantage plans are moving beyond spreadsheet chaos to systematic approaches:
Step 1: Establish MEAT Validation Workflows
Create standardized processes for coding professionals to review documentation against MEAT criteria. Every diagnosis code should have at least one MEAT element clearly identified and linked to specific clinical notes.
The key question: Can someone who’s never seen this chart before immediately identify where the MEAT evidence exists? If the answer requires explanation or reconstruction, your validation process needs strengthening.
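One way to operationalize that question is a simple completeness check. The chart structure below is a hypothetical example, not a prescribed format:

```python
# Flag diagnosis codes submitted without any linked MEAT evidence.
# The dictionary layout is a hypothetical example of a validated chart.
def flag_unsupported(coded_diagnoses: list[dict]) -> list[str]:
    """Return ICD-10 codes that have no MEAT evidence linked to the encounter."""
    return [dx["icd10_code"] for dx in coded_diagnoses if not dx.get("evidence")]

chart = [
    {"icd10_code": "E11.9", "evidence": ["A1C reviewed, remains elevated at 8.2%"]},
    {"icd10_code": "I10", "evidence": []},  # appears only on the problem list
]
print(flag_unsupported(chart))  # ['I10']
```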
Step 2: Train Teams on CMS-HCC Models
With the V28 transition ongoing through 2026, coding teams need proficiency in both current and historical risk adjustment models. RADV audits may require validating payment year 2018 charts using CMS-HCC V22, payment year 2020 charts using V24, and current submissions using the V28 blend.
This isn’t just about knowing which codes map to which HCCs—it’s understanding that MEAT requirements stay consistent across every model version even as the financial weight of each code changes from version to version.
Step 3: Deploy AI-Powered Documentation Tools with Explainability
The question isn’t whether to use AI—it’s whether the specific AI system you’re evaluating delivers both accuracy and the transparency needed to defend results.
Before implementing autonomous coding solutions, ask these critical questions:
How does your AI explain its coding decisions?
Look for systems that provide complete evidence trails showing exactly where in the clinical note each MEAT element was identified.
Can you trace every code back to MEAT?
Every MEAT component should be highlighted in the source documentation and linked directly to the code assignment.
What type of AI architecture do you use?
Understanding whether a system uses approaches like neuro-symbolic AI can help you evaluate its explainability capabilities.
How does your compliance team validate the documented evidence?
If the answer is “we manually review everything,” that’s not really autonomous—you’ve just added another step to your existing bottleneck.
What happens when the AI is uncertain?
Good systems flag edge cases for human review rather than forcing decisions on ambiguous documentation.
Can this documentation survive an audit?
Ask to see rejected charts and how they were handled. Ask for examples of evidence trails that successfully defended codes during audits.
Step 4: Conduct Regular Mock Audits
Internal audits replicate the CMS process, allowing you to detect documentation gaps before real financial exposure arises. Test your MEAT validation processes against historical data to identify high-risk HCC codes requiring immediate attention.
Audit risk compounds over time. Every undocumented HCC you submit is a future liability. The longer you wait to fix documentation quality, the bigger the exposure becomes.
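A mock audit can start with something as simple as the sampling check sketched below; the chart structure, sample size, and compliance metric are illustrative assumptions:

```python
# Draw a random sample of charts and measure what share of submitted HCCs
# carry linked MEAT evidence. Structure and sample size are illustrative.
import random

def mock_audit(charts: list[dict], sample_size: int = 200, seed: int = 42) -> float:
    sample = random.Random(seed).sample(charts, min(sample_size, len(charts)))
    hccs = [hcc for chart in sample for hcc in chart["hccs"]]
    supported = sum(1 for hcc in hccs if hcc.get("meat_evidence"))
    return supported / len(hccs) if hccs else 1.0

charts = [
    {"chart_id": 1, "hccs": [{"code": "HCC-A", "meat_evidence": ["A1C reviewed"]}]},
    {"chart_id": 2, "hccs": [{"code": "HCC-B", "meat_evidence": []}]},
]
print(f"MEAT compliance rate: {mock_audit(charts, sample_size=2):.0%}")  # 50%
```

Tracking that rate over time, and by provider group or condition, shows where remediation effort will reduce the most audit exposure.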
MEAT Criteria and RADV Audit Readiness
The September 2025 court ruling paused contract-level extrapolation methodology, but core RADV audits continue[10]. This regulatory pause represents a strategic preparation window, not a time to delay compliance improvements.
The pause won’t last forever. And when the rules shift again—and they will—the plans that survive will be the ones with defensible documentation.
That’s the context for any technology decision you make right now. Speed matters. Scale matters. But defensibility matters most.
Leading health plans recognize that inaction is a risk. Organizations building audit-ready infrastructure now will face fewer challenges when the regulatory environment shifts again. Proper MEAT documentation remains the foundation of RADV defense, regardless of extrapolation status.
Key preparation areas:
- Validate historical charts: Ensure 2018-2025 payment year documentation meets MEAT standards across all CMS-HCC model versions in use during those years
- Track diagnosis codes: Identify codes lacking MEAT evidence and remediate before audit selection
- Build evidence trails: Link every HCC to specific clinical documentation that demonstrates monitoring, evaluation, assessment, or treatment
- Maintain 3-level reviews: Implement quality assurance processes that verify MEAT compliance before submission to CMS
What This Means for Your Planning
The regulatory environment isn’t getting simpler. Documentation requirements aren’t getting easier. And your coding teams are already stretched.
The question isn’t whether AI in general is ready. It’s whether the specific system you’re evaluating delivers both the accuracy and the transparency you need to defend results.
Not every plan needs autonomous coding tomorrow. And not all autonomous AI has reached the same level of maturity.
But AI systems with true explainability—those that can show their clinical reasoning, not just output codes—have reached a point where they’re defensible.
For organizations planning operations, make explainability your non-negotiable requirement. Ask vendors to prove their AI can show its work. The difference isn’t just technical—it’s the difference between adding risk and reducing it.
Common Questions
How many MEAT elements must be documented to support a diagnosis?
Only one MEAT element is required to support an ICD-10-CM code for risk adjustment purposes. However, more complete documentation with multiple elements provides stronger audit defense and reduces the likelihood of code rejection during RADV validation.
What does MEAT stand for in HCC coding?
MEAT stands for Monitored, Evaluated, Assessed/Addressed, and Treated. These criteria validate that a diagnosis reflects active patient care during a face-to-face visit rather than a passive entry in a problem list.
What are the MEAT criteria for coding chronic conditions?
MEAT criteria require documentation showing that providers are actively managing chronic conditions through at least one of four elements: monitoring signs and symptoms or disease progression/regression, evaluating test results or medication effectiveness, assessing the condition’s status or addressing it through discussion or care planning, or treating with medications and other therapeutic modalities.
Can diagnosis codes be reported from problem lists?
No. Do not code from problem lists unless the condition is specifically addressed in the encounter note with MEAT elements documented[9]. Problem lists may contain outdated information that doesn’t reflect current patient care and will not withstand audit scrutiny.
What documentation elements support HCC diagnosis coding?
Essential components include history of present illness, physical exam findings, medical decision-making process, assessment of conditions evaluated, and documented care plans showing ongoing management or treatment[6]. Each chronic condition must have at least one MEAT element connecting it to the current encounter.
How does AI help with MEAT criteria validation?
Advanced AI systems using neuro-symbolic approaches can automatically identify and extract MEAT-based evidence from unstructured clinical notes, providing transparent audit trails that link every HCC code to specific documentation. This enables coding professionals to validate thousands of charts efficiently while maintaining defensible accuracy—but only if the AI can show exactly where each MEAT element appears in the clinical note.
RAAPID’s Approach to MEAT-Enabled Risk Adjustment
RAAPID’s Retrospective Risk Adjustment Solution addresses MEAT validation challenges through Neuro-Symbolic AI technology that provides:
- Automated MEAT evidence extraction from unstructured clinical notes across all CMS-HCC model versions, with complete transparency into where each element was identified
- Transparent audit trails linking every HCC code suggestion to specific MEAT-based clinical documentation, showing the exact phrases and sections that support each code
- 60-80% productivity improvement for coding teams through intelligent chart prioritization while maintaining full documentation visibility
- 98%+ coding accuracy validated through 3-level review processes before customer submission to CMS
- Single streamlined workflow that identifies potential adds (unclaimed codes) and deletes (overclaimed codes) using the MEAT framework with complete evidence trails
Unlike approaches that operate as unverifiable systems providing only confidence scores, RAAPID’s explainable AI shows exactly which clinical phrases support each diagnosis code. The system applies structured logic to read notes the way trained coders do, identifying the provider’s diagnostic statement and linking it to specific MEAT evidence.
This defensible accuracy is what separates audit-ready documentation from risky guesswork—and it’s what changed the minds of risk adjustment leaders who previously wouldn’t trust autonomous coding.
Next Steps for Medicare Advantage Plans
The path forward requires both strategic vision and tactical execution:
- Audit your current MEAT documentation quality across a representative sample of member charts—don’t assume your documentation is adequate without validation
- Identify high-risk diagnosis codes that frequently lack clear MEAT evidence in provider notes and target those areas for improvement
- Invest in provider education about documentation practices that support risk adjustment coding—focus on practical examples of MEAT elements for common chronic conditions
- Evaluate technology solutions based on explainability, not just accuracy percentages—ask to see the evidence trails and validate them against your compliance standards
- Build continuous improvement processes that track coding accuracy and compliance trends over time while maintaining audit readiness
Risk adjustment coding doesn’t have to overwhelm your teams. With proper MEAT criteria implementation and technology that can prove its work, you can achieve both operational efficiency and audit readiness.
Ready to transform your risk adjustment documentation from manual chaos to systematic confidence?
Discover how RAAPID’s AI-powered solutions simplify MEAT validation while ensuring every code can survive CMS scrutiny.
Frequently Asked Questions
What does MEAT stand for in risk adjustment coding?
MEAT stands for Monitored, Evaluated, Assessed/Addressed, and Treated. These four criteria validate that documented diagnoses reflect active clinical management during face-to-face patient encounters rather than passive problem list entries. CMS requires at least one MEAT element to support each diagnosis code used in risk-adjusted payments.
How does RAAPID support MEAT criteria validation?
RAAPID’s Neuro-Symbolic AI automatically extracts MEAT-based evidence from clinical notes while providing transparent audit trails that show exactly where each element was identified in the source documentation. This enables coding professionals to validate thousands of charts efficiently while maintaining defensible accuracy that can withstand RADV audit scrutiny.
What documentation does CMS require to support risk-adjusted diagnosis codes?
CMS requires that all diagnosis codes used for risk-adjusted payments be supported by medical record documentation from face-to-face visits showing at least one MEAT element. The documentation must demonstrate that the provider actively managed the condition during the encounter—simply listing diagnoses without clinical context does not meet official coding guidelines[11].
How can providers improve their MEAT documentation?
Providers should ensure progress notes include specific monitoring observations, evaluation of test results or treatment responses, assessments of condition status, and treatment plans for all chronic conditions addressed during the encounter. Focus on documenting active management rather than simply listing diagnoses. Regular training on risk adjustment documentation practices helps reinforce these habits.
How do RADV audits use MEAT criteria?
RADV audits specifically verify that diagnosis codes submitted for payment are supported by medical record documentation meeting MEAT criteria. Codes lacking proper evidence get rejected, resulting in payment clawbacks. CMS may also identify patterns of documentation deficiencies that increase future audit risk and potential extrapolation exposure.
Can historical charts be validated for MEAT compliance before a RADV audit?
Yes, though it remains challenging. Organizations conducting RADV audit preparation must validate charts that may be several years old using the CMS-HCC model version applicable to that payment year (V22 for 2018, V24 for 2020-2023, V28 blend for 2024-2026). AI-powered solutions with explainability capabilities can accelerate this historical validation process by quickly identifying where MEAT evidence exists—or doesn’t exist—in legacy documentation.
How does MEAT compare with the TAMPER framework?
MEAT (Monitored, Evaluated, Assessed, Treated) and TAMPER (Treatment, Assessment, Monitor, Plan, Evaluate, Referral) are similar frameworks with overlapping elements. Both ensure diagnosis codes are supported by active clinical management rather than passive documentation. MEAT is more commonly referenced in CMS guidance and industry best practices for HCC coding.
Does MEAT documentation affect RAF scores?
MEAT documentation doesn’t directly change risk adjustment factor scores, but it validates whether the HCC codes used to calculate RAF scores are legitimate and defensible. Without MEAT evidence, codes get rejected during audits, reducing the organization’s risk scores and reimbursement. The financial impact extends beyond individual chart corrections to potential patterns that affect future audit exposure.
What happens to diagnosis codes that lack MEAT support?
Diagnosis codes lacking MEAT support will not withstand audit scrutiny. CMS may reject those codes, require payment returns, and potentially identify patterns that increase future audit risk. Organizations with systemic documentation issues face a higher likelihood of being selected for RADV audits and greater exposure to extrapolation penalties when that methodology resumes.
What is the difference between AI accuracy and AI explainability?
Accuracy measures whether the AI suggested the right codes. Explainability measures whether the AI can show you exactly why it suggested those codes by pointing to specific MEAT evidence in the clinical documentation. A system can have high accuracy but low explainability—meaning you can’t defend its decisions in an audit. For risk adjustment, explainability is as critical as accuracy because compliance teams must be able to validate that every code is supported by documented evidence.
Sources
[3] For The Record Magazine. “Documentation Dilemmas: Does Your Documentation Meet the MEAT Criteria?” For The Record, Fall 2022.
[5] Office of Inspector General, U.S. Department of Health and Human Services. Medicare Advantage audit findings and enforcement actions, 2022-2025.
[9] Association of Clinical Documentation Integrity Specialists (ACDIS). “Q&A: Acceptable documentation for HCCs.” ACDIS Resources.
[10] Centers for Medicare & Medicaid Services. “Risk Adjustment Data Validation (RADV).” CMS.gov, 2025.
[11] Centers for Medicare & Medicaid Services. “Medicare Managed Care Manual Chapter 7: Risk Adjustment.” CMS.gov, 2024.