Accuracy has always been critical for risk adjustment coding, and recent policy changes will make precision more important than ever, especially for Medicare Advantage (MA) plans. For these organizations, artificial intelligence (AI) can be a helpful tool to improve the accuracy of data for coding and risk adjustment programs, provided leaders establish some guardrails.
A New Reason to Pursue Precision: Expanded RADV Audits
In May, the Centers for Medicare & Medicaid Services (CMS) launched a more aggressive risk adjustment data validation (RADV) auditing strategy under which the agency will audit all MA contracts. While CMS has traditionally focused on about 60 plans a year, the agency is expanding its workforce and technology to audit all MA plans, roughly 550 in total.
CMS will also expand the number of medical records audited annually from 35 per plan to as many as 200, depending on the size of the plan. These new RADV policies add to the impact of final rule changes, including extrapolation and the elimination of the fee-for-service (FFS) adjuster, which had already increased pressure on plans for greater accuracy.
Given these changes, plan leaders who have taken a wait-and-see approach to adopting AI in their risk adjustment strategies have a new incentive to deploy the technology. By using AI-enabled tools combined with human expertise, plans can thrive under this increased scrutiny.
How AI Can Improve Coding Accuracy and Efficiency
Whether plan leaders are focused on regulatory audits or overall operational efficiencies, coding tools that use AI, including natural language processing (NLP), can help them reach their goals with the resources they have.
With AI and NLP, coding teams can prioritize the charts that pose the greatest risk of unsupported hierarchical condition categories (HCCs) under CMS' risk adjustment model. These tools can also help coding teams navigate complex medical records by surfacing the most relevant sections of documentation first, so coders spend their time on the areas that matter most. For example, the technology can reduce the risk that a coder will miss important provider notes needed to help ensure accurate coding and appropriate reimbursement.
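As a rough illustration of how that prioritization might work, the sketch below ranks charts by a hypothetical model-assigned confidence that each flagged HCC is supported by the documentation; the field names, scores, and threshold are assumptions for illustration only, not a description of any particular vendor's tool.

```python
from dataclasses import dataclass

@dataclass
class ChartFlag:
    """One AI-flagged HCC on a chart (illustrative fields only)."""
    chart_id: str
    hcc_code: str
    support_confidence: float  # model's confidence the HCC is documented (0-1)

def prioritize_charts(flags: list[ChartFlag], threshold: float = 0.6) -> list[ChartFlag]:
    """Return the charts whose HCCs are least likely to be supported,
    lowest confidence first, so coders review the riskiest records first."""
    at_risk = [f for f in flags if f.support_confidence < threshold]
    return sorted(at_risk, key=lambda f: f.support_confidence)

# Example work queue: the 0.35 chart would be reviewed first.
for flag in prioritize_charts([
    ChartFlag("chart-001", "HCC 18", 0.92),
    ChartFlag("chart-002", "HCC 85", 0.35),
    ChartFlag("chart-003", "HCC 19", 0.55),
]):
    print(flag.chart_id, flag.hcc_code, flag.support_confidence)
```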
Coders can also use this technology to catch accuracy issues that are easy to miss manually. For example, medical records for different patients are sometimes scanned together inadvertently, and a coder working without AI could assume that the vital signs, test results, diagnoses, and progress notes all belong to the same patient. With a tool that uses AI or NLP, coders can quickly identify such mismatches.
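As a minimal sketch of that kind of check, the example below compares patient identifiers extracted from each page of a scanned record and flags any page that disagrees with the first page; the upstream extraction step and the field names are assumptions for illustration.

```python
def find_mismatched_pages(pages: list[dict]) -> list[int]:
    """Flag pages whose patient identifiers differ from the first page.

    Each page dict is assumed to carry identifiers already extracted
    upstream (for example, by an OCR/NLP step), such as name and date of birth.
    """
    if not pages:
        return []
    reference = (pages[0].get("patient_name"), pages[0].get("dob"))
    return [
        i for i, page in enumerate(pages)
        if (page.get("patient_name"), page.get("dob")) != reference
    ]

# Example: the third page belongs to a different patient and gets flagged.
record = [
    {"patient_name": "DOE, JANE", "dob": "1947-03-12"},
    {"patient_name": "DOE, JANE", "dob": "1947-03-12"},
    {"patient_name": "SMITH, JOHN", "dob": "1951-08-02"},
]
print(find_mismatched_pages(record))  # [2]
```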
The Limitations of AI and the Importance of Human Oversight
Despite these benefits, AI still has limitations when used to support risk adjustment functions such as coding. NLP tools can struggle with handwritten notes, varied documentation styles, and nuanced language in medical records. For example, if a provider's note states that “the patient denies a history of diabetes,” the tool may miss the negation and treat diabetes as a confirmed diagnosis. Until NLP models are trained and refined to better capture these nuances, humans should provide the critical oversight needed to promote greater accuracy.
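The negation problem can be made concrete with a deliberately simple rule-based check. Production NLP pipelines use far more sophisticated methods; the short cue list below is an assumption chosen for illustration, and it shows why matching a condition term alone is not enough.

```python
import re

# Common negation cues that should keep a mention from being coded as a diagnosis.
NEGATION_CUES = [r"\bdenies\b", r"\bno history of\b", r"\bnegative for\b", r"\bruled out\b"]

def is_negated(sentence: str, condition: str) -> bool:
    """Return True if the condition appears alongside a negation cue.

    A tool that matches only the condition term would wrongly treat
    "the patient denies a history of diabetes" as a positive finding.
    """
    text = sentence.lower()
    if condition.lower() not in text:
        return False
    return any(re.search(cue, text) for cue in NEGATION_CUES)

print(is_negated("The patient denies a history of diabetes.", "diabetes"))      # True
print(is_negated("Type 2 diabetes, well controlled on metformin.", "diabetes"))  # False
```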
As AI-enabled tools continue to transform business practices, industry advocacy groups such as the Responsible AI Institute can help ensure the ethical use of such technology in healthcare and other industries. At the organizational level, plans can implement AI governance committees to ensure that the technology is used to assist, rather than replace, human coders. Stakeholders on these committees can review potential use cases for AI and ensure that selected tools comply with HIPAA and other privacy regulations.
How Plans Can Prepare for AI and NLP in Coding
To help realize the benefits of using AI-enabled tools in coding, plans should consider these steps:
Get educated on AI. Plan leaders should leverage the technical experts on their teams, sign up for technology demos, and read about the benefits and limitations of using these tools for coding. A wide range of government resources on AI are available online.
Assemble a team of experts. Clinical, coding, legal, privacy, and IT leaders in plans should work together to establish appropriate use cases for AI before rolling out these tools. A collaborative team can also proactively identify potential privacy risks that could lead to financial penalties.
Ensure adequate testing and training time. Prior to implementation, plans should ensure that coders can try out the tool in a test environment. Such trial periods not only allow coding teams to build comfort with the tools but also help identify potential issues in advance.
Set realistic performance goals and time frames. While coding accuracy can often improve in just weeks, it may be a few months before plans see efficiency gains from using AI-enabled tools.
Create a continuous feedback loop for end users. Post-implementation, plans should review performance metrics and end-user feedback to refine workflows and continue to improve accuracy and efficiency. Coding directors and managers should encourage their teams to share ideas for improving AI tools. For example, excessive false positives may slow down coders and create a negative user experience.
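As a simple illustration of how that feedback could be quantified, the sketch below computes the share of AI-suggested codes that coders rejected on review; the outcome labels and figures are hypothetical.

```python
from collections import Counter

def rejection_rate(review_outcomes: list[str]) -> float:
    """Share of AI-suggested codes that coders rejected during chart review.

    Each outcome is assumed to be recorded by the coder as either
    "accepted" or "rejected" when acting on a suggestion.
    """
    counts = Counter(review_outcomes)
    total = counts["accepted"] + counts["rejected"]
    return counts["rejected"] / total if total else 0.0

# Example: 3 of 10 suggestions rejected, a 30% rate worth raising with the
# vendor or the internal tuning team.
outcomes = ["accepted"] * 7 + ["rejected"] * 3
print(f"{rejection_rate(outcomes):.0%}")
```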
The Future of Risk Adjustment Coding
As more plans realize the benefits of using AI technology, these tools will continue to evolve and present further opportunities to make coding teams nimbler and more precise. With better tools to promote greater accuracy in risk adjustment, combined with human expertise, plans can better prepare for new regulatory challenges and continue to provide vital care for their members.

Katie Sender
Katie Sender, MSN, RN, PHN, CRC, is vice president of clinical and coding services for Cotiviti.