Is artificial intelligence (AI) about to take over our jobs? It may seem that way.
A recent, concerning study found that ChatGPT could diagnose medical conditions more accurately than physicians, even when those physicians were themselves assisted by ChatGPT. That suggests that doctors should just get out of the AI's way.
A closer look, however, reveals a more complicated story. When ChatGPT suggested a diagnosis that differed from a doctor's view of the case, the doctor tended to mistrust the system and thus forgo its benefits. The physicians using ChatGPT treated it as a kind of oracle that was supposed to hand them answers, so when they didn't like those answers, they simply ignored it.
But imagine if, instead, they had treated the system as a junior but knowledgeable assistant, discussing the facts of the case and the evidence for and against different possible diagnoses. The alternative views offered by the AI system would then broaden the doctor's overall picture of the case and could improve results.
This new vision of AI is assistive intelligence: technology designed to support decision-making interactively by augmenting human intelligence. It contrasts with the dominant idea that AI is meant to be autonomous intelligence, which decides and acts on its own and will eventually take over all our jobs and render us useless.
Real-world experience shows the benefits of the collaborative approach. At Yongin Severance Hospital in Korea, collaborating with AI enabled radiologists to catch more chest X-ray abnormalities while significantly improving efficiency. Toyota’s factories see 20% higher productivity when robots assist, rather than replace, workers. A recent meta-analysis found that human-AI teams outperform either humans or AI working separately in 85% of cases.
So why are we so fixated on the autonomous version of AI? It’s probably because collaboration is hard – and expensive. Tech companies can sell autonomous AI systems as turnkey solutions: Install the software, reduce headcount, and watch profits rise. It’s a seductive pitch. It can be marketed as revolutionary and disruptive, and promises instant efficiency, scalability, and reduced labor costs. Engineers like the approach because they can focus on tool building without having to think much about users and their human needs.
Collaborating with assistive intelligence is more difficult and messier. Developing it means engineers working closely with social scientists and design experts, as well as ongoing adaptation to ever-changing human needs. Using it is also more complicated: it requires ongoing investment in training, careful integration into existing workflows, and continuous refinement based on user feedback. There is no set-it-and-forget-it option here.
The advantages of assistive intelligence, though, can be significant. When Mount Sinai Hospital implemented assistive AI for radiology, they didn't just see better diagnoses; they also saw happier doctors, who felt the tools enhanced rather than threatened their work. Those doctors now have more time to spend with patients, using AI to improve screening efficiency while retaining oversight and final judgment.
The key is that assistive AI more easily establishes trust. Keeping humans at the center and making decisions collaboratively means that people can develop trust in the system the same way they learn to trust each other: by getting to know it through working with it over time.
Focusing on collaboration reduces key risks of AI to its users. Working actively and interactively with AI keeps users from becoming entirely dependent on the technology and letting their own skills atrophy.
Assistive systems also reduce automation bias—the tendency to trust AI outputs uncritically—by ensuring that humans remain actively engaged in decision-making. They do this by highlighting uncertainty, explaining the evidence, proposing alternative viewpoints, and soliciting user input.
How do we get there? There are already scattered efforts to develop and deploy assistive intelligence, but more is needed. Companies need to recognize that the long-term benefits outweigh the short- to medium-term costs. Workers need to insist on collaborative solutions. Companies, in turn, need to demonstrate the value of assistive AI to workers, explaining that the AI is not there to steal their jobs but to help them achieve better results overall.
Governmental policy is vital to promoting the development and deployment of collaborative AI. AI regulations should require that companies be clear and accountable about how their AI systems affect human thinking and behavior. Government funding should focus on interdisciplinary research and development of assistive AI. The government should also encourage the creation of formal certifications that clarify which AI tools truly empower users and foster collaboration. These certifications might be administered through governmental agencies such as the FDA or OSHA, though private organizations on the model of Underwriters Laboratories (UL) or the National Association of Safety Professionals (NASP) may give better results.
The choice before us is clear: Do we use these systems in ways that help us to be smarter, more ethical, and more capable, or do we pursue the illusion of autonomous AI at the cost of our humanity?
Assistive AI could help not only doctors diagnose patients more accurately and car manufacturers build vehicles. It could also help tackle the climate crisis, inform public policy, teach children, manage financial services, and improve supply chains.
No matter what AI is used for, the truth is apparent: We must approach it as a collaborator. Only in this way can we ensure that the technology serves humanity – and not the other way around.
Shlomo Argamon is Associate Provost for Artificial Intelligence at Touro University.
