During the COVID-19 pandemic, misinformation spread almost as fast as the virus itself. The World Health Organization's 2020 infodemic report states that this flood of untrue information hampered global public health initiatives. Between 2020 and 2022, millions of pieces of deceptive content were taken down from social media sites like Facebook and Instagram. Yet the problem persisted: a single false video linking 5G towers to COVID-19 could gain thousands of views within hours.
From mob violence incited by viral rumors to the health risks of fake cures sold online, the damage kept outpacing the cleanup. The problem was not only the content itself, but the speed at which it reached millions.
The Scale of the Misinformation Crisis
The World Health Organization warned that false information rivaled COVID-19's own impact, undermining public health efforts worldwide. In India, WhatsApp rumors triggered mob violence; in the US, fake cures sold online deepened the crisis; in Brazil, false vaccine claims reached millions before platforms could act. Studies show that harmful content often remains on social networks for hours or days before it is taken down, giving false information a long head start. The core issue isn't just bad content; it's how quickly that content reaches people, altering behavior before anyone can intervene.
Binita Shah’s Mission to Protect Platforms
Binita Shah, a senior strategist on the Trust & Safety team of a major tech platform's advertising division, saw the flaw in reactive moderation. With deep expertise in platform safety, she knew that removing harmful content after it went viral was like mopping the floor during a flood. She wasn't just a tech expert; she was driven to make digital spaces safer for everyone. Her mission was to stop misinformation at its source, before it could gain traction, and her leadership would change how the division tackled harmful content, influencing practices across the industry.
Her drive grew stronger during the global health crisis, when fake health ads flooded the platform. Ads promoting unproven vaccine claims slipped past the initial filters and accumulated millions of views within hours, endangering public safety and trust. "When I saw ads promoting fake COVID cures go viral in hours, I knew we had to stop misinformation before it could spread," she said. "It wasn't just about protecting the platform, it was about saving lives." As a strategist in the advertising division, she understood that the problem was not only the content itself but the potential for ads to amplify harm at scale. This motivated her to lead the creation of an AI-powered system that identifies and flags violating and hateful ad material at upload, holding it back until human moderators have had a chance to review it. Beyond protecting consumers, her work improved the platform's standing with regulators and advertisers around the globe.
A Smart System to Catch Harm Early
The tool she developed scans multiple elements of each ad: the copy, the visuals, the landing-page content, and the advertiser's behavior, across more than 70 languages. Suspicious content is flagged immediately and its delivery paused pending human or automated review, so flagged ads never appear in user feeds or search results until a second check clears them. Crucially, the system pauses flagged advertising rather than rejecting it outright, preventing harmful ads from being seen while still ensuring fair treatment for legitimate advertisers. For instance, it can detect an ad for counterfeit masks in Hindi or a fraudulent "COVID cure" supplement in Portuguese within minutes. "Designing a system to catch scams in 70 languages was a complex challenge," Shah said. "Trust is everything in advertising. We worked hard to make sure small businesses weren't unfairly penalized."

The tool's strength lies in its fairness and cultural sensitivity. It is trained to distinguish legitimate promotions from scams, accounting for regional nuances such as traditional health ads in India that make bold but compliant claims, and regular audits protect small businesses from erroneous rejections while still curbing harmful content, balancing speed and accuracy to safeguard the platform's advertising ecosystem.
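The article does not disclose how the system is actually implemented, but the workflow it describes, scoring several signals per ad and pausing delivery above a risk threshold rather than rejecting outright, can be sketched in a few lines. The following Python is a minimal, hypothetical illustration: every name (AdSubmission, score_ad, screen_on_upload, REVIEW_THRESHOLD), weight, and keyword is invented for this sketch, with trivial stubs standing in for the trained text, image, landing-page, and advertiser-history models.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # assumed: combined risk above this pauses delivery


@dataclass
class AdSubmission:
    ad_id: str
    language: str            # one of the 70+ supported languages
    copy_text: str
    image_labels: list[str]  # e.g. output of an image classifier
    landing_page_text: str
    advertiser_strikes: int  # prior policy violations on record


def score_ad(ad: AdSubmission) -> float:
    """Blend per-signal risk scores into a single number in [0, 1].

    Each scorer below stands in for a trained model (text classifier,
    image classifier, landing-page crawler, advertiser-history model);
    here they are keyword stubs so the sketch runs end to end.
    """
    risky_terms = ("miracle cure", "covid cure", "guaranteed immunity")
    text_risk = 1.0 if any(t in ad.copy_text.lower() for t in risky_terms) else 0.1
    image_risk = 0.8 if "counterfeit_mask" in ad.image_labels else 0.1
    page_risk = 1.0 if any(t in ad.landing_page_text.lower() for t in risky_terms) else 0.1
    history_risk = min(ad.advertiser_strikes * 0.2, 1.0)
    # Illustrative weights only; a real system would learn these.
    return 0.4 * text_risk + 0.2 * image_risk + 0.3 * page_risk + 0.1 * history_risk


def screen_on_upload(ad: AdSubmission) -> str:
    """Pause suspicious ads for review instead of rejecting them outright."""
    if score_ad(ad) >= REVIEW_THRESHOLD:
        # Delivery is held: the ad never reaches feeds or search results
        # until a human or automated re-check clears it.
        return "PAUSED_PENDING_REVIEW"
    return "APPROVED"


if __name__ == "__main__":
    ad = AdSubmission(
        ad_id="ad-123",
        language="pt",
        copy_text="Suplemento milagroso: a verdadeira COVID cure!",
        image_labels=["supplement_bottle"],
        landing_page_text="Guaranteed immunity in 7 days",
        advertiser_strikes=1,
    )
    print(screen_on_upload(ad))  # -> PAUSED_PENDING_REVIEW
```

Running the example pauses the Portuguese "COVID cure" supplement ad mentioned above. The design point worth noting is the pause-don't-reject outcome: a borderline ad is held out of feeds during review rather than being rejected, which is how a system like this can stay aggressive on scams without unfairly penalizing legitimate small advertisers.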
The system's rollout strengthened ad safety, reducing the spread of scams such as fake mask promotions and overpriced sanitizers, particularly in regions hit hardest by misinformation. In India, a major source of COVID-19 misinformation, the tool curtailed deceptive health ads and helped slow the spread of false information, as reported by the Times of India and the Journal of Media Ethics in 2021. In Brazil, where ads funded 46% of disinformation sites spreading false vaccine claims, the system helped limit such content, as reported by ProPublica in 2022. Small advertisers benefited from fairer moderation, and the system's proactive approach aligned with regulatory expectations such as those set by the European Union's Digital Services Act. By addressing these challenges, the tool fostered greater trust in the advertising platform, set a new standard for ad moderation, and earned Shah recognition as a leader in crisis response.
Global Impact and Recognition
Her work has influenced broader industry practices. She has shared her learnings with tech coalitions, advocating for open-sourcing parts of the system to help smaller platforms adopt similar safety tools. The stakes remain high: the World Health Organization warns that "infodemics" can derail public health, as seen during COVID-19 and recent vaccine campaigns.
Many platforms still rely on user reports, which only surface content after the harm has spread; a 2024 report found that 60% of social media platforms take over 24 hours to remove flagged misinformation. Her approach of catching content at upload offers a model others are adopting: Meta is testing similar systems for Instagram, and TikTok is exploring delays for suspicious uploads. Governments are also taking note, with some proposing national rules for proactive screening. Her system is part of a broader shift toward treating digital platforms like public spaces with built-in safeguards.
At the organization, she is now adapting the system for live streams and short-form videos, formats where misinformation spreads even faster. Her contributions have earned wide recognition: she received the platform's prestigious GOAT Award for AI innovation, an honor reserved for the top 0.1% of contributors globally, along with multiple peer bonuses from senior leadership, including Vice Presidents and Managing Directors, in recognition of her leadership, innovation, and cross-functional impact.
Binita continues to lead the fight against misinformation in a fast-changing online world, building AI tools that spot and stop fake videos and other harmful content before they take hold. Her work shows how technology can keep people safe by acting early, setting an example for platforms everywhere.

She sees digital platforms not just as places to chat or shop, but as public spaces that need built-in safety. As platforms shape more of how we live, her focus on stopping harm before it spreads protects users, sets new standards for safety, and keeps trust and truth strong online.
Anthony Jones is a freelance writer with over 15 years of experience writing about health supplements for various health and fitness magazines. He also owns a health supplements store in Topeka, Kansas. Anthony earned his health and science degree at Duke University, where he studied the effects of exercise and nutrition on human physiology.