By Mandy Fogle, Healthcare Value Engineering at Shift Technology
Prior to COVID, only 15% of healthcare providers used telehealth services. When the pandemic hit, telehealth grew to account for roughly 50% of medical appointments. As telehealth rose in popularity and gave consumers access to necessary services, the federal government relaxed reimbursement requirements at the peak of the pandemic to broaden that access further.
However, the increased need for telehealth has also ushered in billions of dollars in fraud schemes, encompassing everything from simple fraudulent behavior, such as providers overbilling a visit by a couple of minutes, to sophisticated schemes involving telemarketers and durable medical equipment (DME) suppliers.
With sky-high satisfaction rates among users and return appointments expected, telehealth is here to stay. Providers must work in tandem with insurers to better protect themselves for the future of healthcare access.
Uncover the complexities
Fraud schemes are becoming more and more complex. As healthcare organizations try to meet advanced care requirements, fraudsters continue to find ways to take advantage of the system. New treatments are becoming costly even as overall healthcare spending has dropped since COVID, leaving payors walking a balance beam between providing the best possible care and keeping it affordable.
With fraudsters enlisting everyone from providers to telemarketers to DME suppliers, the web of complexity keeps growing. Without AI/ML, those complexities are harder to uncover: when investigations rely solely on tips or manual processes, allegations are slow to confirm, and investigative resources get tied up that could otherwise be directed at bigger exposures.
For example, when investigators work the manual way, they pull every provider that bills for telehealth into a spreadsheet and see which one bubbles up at the top. But a provider may bubble up because the way they bill for services merely looks fraudulent when no fraud is being committed; it's simply a matter of educating the provider or medical coder on the correct way to bill. The investigator has wasted their time and never got to see the nuances.
AI makes the decision-making process easier for investigators and analysts by surfacing only the alerts worth spending energy on, helping them see past the complexities.
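To make the contrast concrete, here is a minimal sketch in Python, with entirely hypothetical provider names and billing numbers, of the difference between ranking providers by raw volume and screening for statistical outliers. It is an illustration of the idea, not how any particular product works.

```python
from statistics import mean, stdev

# Hypothetical data: average minutes billed per telehealth visit, by provider.
billed_minutes = {
    "provider_a": 21, "provider_b": 23, "provider_c": 22, "provider_d": 24,
    "provider_e": 20, "provider_f": 25, "provider_g": 22, "provider_h": 23,
    "provider_i": 21, "provider_j": 48,
}

mu = mean(billed_minutes.values())
sigma = stdev(billed_minutes.values())

# A raw spreadsheet ranking stops at "who bills the most"; a statistical
# screen at least asks "who bills unlike their peers". An ML model would go
# further and weigh context (specialty, patient mix, coding habits) before
# raising an alert, since a high biller may just need coding education.
flagged = [p for p, m in billed_minutes.items() if (m - mu) / sigma > 2]
print(flagged)  # ['provider_j']
```

Even this toy screen separates "bills a lot" from "bills unlike everyone else," which is the nuance a manual sort loses.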
Navigate a changing landscape
Prior to COVID, the health and insurance industries weren't planning for disruption at mass scale. The increased need for healthcare services strained an already inundated industry, and the influx of claims made it challenging for investigators to wade through all the data.
This changing landscape created the perfect storm for new types of fraud to emerge. For example, certain drugs could not previously be prescribed online, but with COVID requiring people to distance and isolate, the federal government relaxed regulations to the point where opioids could be prescribed via telehealth consultations. That made it easier for fraudsters, including providers willingly committing fraud, to prescribe opiates to patients.
When an industry like healthcare is disrupted, some area will always get exposed as things innovate and change, and old scams will always continue. The best way to address this going forward is with AI/ML solutions that can aggregate all the data and examine its subtleties and nuances. For example, when analyzing prescriptions: has this patient been prescribed opiates before? If so, why? And if not, why are they getting them now?
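The prescription questions above are essentially simple rules applied over aggregated history. A minimal sketch, with hypothetical patient records and an illustrative drug list rather than any clinical rule set:

```python
# Hypothetical claim history keyed by patient ID; drug names are illustrative.
OPIOIDS = {"oxycodone", "hydrocodone", "fentanyl"}

history = {
    "patient_1": ["amoxicillin", "lisinopril"],
    "patient_2": ["oxycodone", "oxycodone"],
}

def opioid_review_reason(patient_id, drug):
    """Return a reason for review when an opioid prescription warrants a look."""
    if drug not in OPIOIDS:
        return None  # nothing opioid-related to check
    prior = [d for d in history.get(patient_id, []) if d in OPIOIDS]
    if not prior:
        return "first opioid on record: verify why the patient is getting it now"
    return "repeat opioid: verify the original reason still applies"

print(opioid_review_reason("patient_1", "oxycodone"))
```

The value is in having the full history aggregated in one place; the rule itself is trivial once the data is there.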
Manage the fraud and abuse aperture
The fraud, waste, and abuse aperture has widened with COVID. As telehealth usage grew, connected services gained access to patients they may not have had before. Telemarketers have a large reach, with access to telephone numbers and information collected through shopping habits; their business is to reach as many people as they can.
If a telemarketing company gets the right information, it can use that to obtain a patient's insurance details, healthcare provider, and more. Armed with this knowledge, a fraudulent telemarketer can call a DME company, use the patient data to claim the person has been injured, and collect a kickback from the DME supplier. DME supplies do not have to be sold by doctors, which widens the aperture for fraud because anyone can open a DME business and collaborate with fraudsters.
This widening also extends to providers: a telemarketer gets a patient on the phone and asks if they've ever been depressed or upset. If the patient says "yes," the telemarketer can call providers who are in on the scam, who then claim, "Oh yes, I saw so-and-so, treated them for depression, and prescribed medication."
With the use of AI/ML, the aperture can be narrowed by aggregating all the data, from social media to review sites and more. If a provider claimed to have delivered mental health services to a patient, but Facebook showed the provider at a medical conference, how were they on a telehealth call? By pulling from watch lists, forums, and even malpractice sites, AI/ML provides a wider breadth of data to help shrink the scale of fraud.
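The Facebook-versus-billing example boils down to checking whether independently sourced events overlap in time. A minimal sketch with invented timestamps:

```python
from datetime import datetime

# Hypothetical events for one provider, pulled from two different sources.
claimed_visit = (datetime(2021, 3, 4, 14, 0), datetime(2021, 3, 4, 14, 30))
conference_post = (datetime(2021, 3, 4, 9, 0), datetime(2021, 3, 4, 17, 0))

def overlaps(a, b):
    """True when two (start, end) intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

# An overlap between a billed telehealth visit and an independently sourced
# event does not prove fraud on its own, but it flags the claim for review.
needs_review = overlaps(claimed_visit, conference_post)
print(needs_review)  # True
```

The hard part in practice is the aggregation, pulling billing records and public posts into comparable timelines; the consistency check itself is this simple.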
The healthcare industry is changing, and those changes will impact the insurance industry. As insurers and investigators grapple with a rapidly evolving landscape, AI/ML can handle the easier parts of decision-making so investigators can focus on the most egregious offenders and emerging schemes.