[un]hyping AI In Healthcare

Updated on July 25, 2018

By Alison Lowery

Artificial intelligence has been one of the most important, and perhaps most jarring, technological advances of the 21st century. Lines of code have been trained to drive cars, detect faces, and decode complex radiological images, and the recent explosion of AI applications has created a jagged divide in public perception. On the one hand, there are those who fear the robot apocalypse, with entrepreneur Elon Musk himself claiming that “AI is a fundamental risk to the existence of civilization.” On the other, there are those who overzealously point to AI as a panacea, a cure-all for everything from weight loss to higher education.

Yet despite these prodigious claims, the marketing hype around today’s AI seems, in many ways, to be outpacing reality. A brief interaction with Apple’s Siri, for example, reveals how easily the clever programming can be fooled, or how often it simply gets things wrong. And IBM’s supercomputer Watson, while impressive, is less magic than manual labor. In a recent STAT article, one reporter noted that although Watson can digest massive amounts of data, “its treatment recommendations are not based on its own insights…. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information.”

Marketing hype is dangerous, both for AI technology itself and for the humans who could benefit from it. Beyond breeding misunderstanding, overhyping AI imperils its very progress, much as it did during the “AI winter” of the 1970s and 1980s: the technology had been so sensationalized that, when it could not fulfill every unrealistic expectation, it fell into a period of deep disenchantment marked by a loss of popularity, funding, and technological advancement.



Once again, AI finds itself nearing the end of the second stage of the Gartner Hype Cycle, the peak of inflated expectations, and if we’re not careful, it may slide into what Gartner calls the “trough of disillusionment,” prompting yet another collapse in public interest and capital investment.

To avoid that pit, it’s important to properly understand AI and its limitations so that we can apply it where it particularly excels. Artificial intelligence essentially allows machines to “learn” from experience and adjust to new inputs, accomplishing tasks by synthesizing enormous amounts of data and analyzing those data for patterns. While it’s true that AI can accomplish many tasks more efficiently than we can, thanks to a machine’s ability to store and compute far more data than a single human mind (or even a team of human minds), it has real limitations.
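As a concrete, entirely toy illustration of “learning from experience and adjusting to new inputs,” the Python sketch below fits a simple model on a handful of made-up examples and then updates it as new observations arrive. The library (scikit-learn) and every number in it are my own choices for illustration, not anything drawn from the systems mentioned in this article.

```python
# A toy illustration of "learning from experience and adjusting to new inputs".
# The feature values and labels are invented purely for demonstration.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Initial "experience": feature vectors paired with known labels (0 or 1).
X_initial = np.array([[0.1, 1.2], [0.4, 0.9], [2.1, 0.2], [1.8, 0.4]])
y_initial = np.array([0, 0, 1, 1])

model = SGDClassifier(random_state=0)
# The first call to partial_fit must declare every class the model will see.
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# New inputs arrive later; the model adjusts without retraining from scratch.
X_new = np.array([[0.2, 1.0], [2.0, 0.3]])
y_new = np.array([0, 1])
model.partial_fit(X_new, y_new)

# The learned pattern generalizes to an unseen example.
print(model.predict(np.array([[1.9, 0.1]])))
```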

For instance, deep learning, one of the more complex branches of AI, loosely mimics the human mind by using an artificial neural network, in which layers of mathematically simulated neurons are trained to respond to certain inputs. This method of training, known as supervised learning, is not automatic: it requires sustained attention from engineers, who feed labeled examples into a machine (such as IBM’s Watson) until it can draw the most likely conclusion from those data.
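To make “layers of mathematically simulated neurons” and hand-fed labeled examples more tangible, here is a minimal, self-contained sketch in Python and NumPy. It is a toy two-layer network trained on four labeled examples; it is not how Watson or any production system is built, and the data are invented.

```python
# A toy neural network trained by supervised learning: humans supply labeled
# examples (inputs paired with correct answers), and the simulated "neurons"
# adjust their weights until predictions match the labels. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hand-labeled training data: 2 input features -> 1 correct answer (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of simulated neurons, one output neuron.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer responds to the output of the layer before it.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to reduce the error against the labels.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ err_out
    b2 -= 1.0 * err_out.sum(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ err_h
    b1 -= 1.0 * err_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # outputs move toward the human-supplied labels 0, 1, 1, 0
```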

Additionally, AI is unable, as of yet, to handle abstract reasoning. Manually trained programs comb through historical data to construct patterns on which they can base predictions. Although these programs are exceedingly useful for categorization problems, they possess no true understanding in the way that a human does.
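A small, hypothetical example of “patterns without understanding”: the classifier below is trained on a few fabricated historical records, yet it will still return a confident-looking prediction for an input that is physically impossible, because it is only extending a statistical pattern, not reasoning about the world. The scenario, features, and numbers are all invented for illustration.

```python
# A toy classifier that predicts whether a patient visit is "urgent" from two
# numeric features. The data are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical examples: [temperature in Celsius, heart rate] -> urgent (1) or not (0).
X_history = np.array([
    [36.8, 70], [37.0, 72], [36.9, 68], [37.1, 75],    # routine visits
    [39.5, 110], [40.1, 120], [39.8, 115], [40.4, 125] # urgent visits
], dtype=float)
y_history = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X_history, y_history)

# A physically impossible input: the model has no notion that a body
# temperature of 80 Celsius cannot occur. It simply extends the learned
# pattern and reports a confident probability anyway.
nonsense = np.array([[80.0, 300.0]])
print(model.predict(nonsense), model.predict_proba(nonsense))
```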

Acknowledging AI’s limitations has not impeded its progress; in fact, quite the opposite. Companies from a broad range of fields have been using AI effectively not in spite of its limitations, but because they have found ways to work within them. FICO, for example, uses neural networks to predict fraudulent banking transactions, a task that involves high volumes of data that are exceedingly arduous to sift through manually. The agricultural industry is also benefiting from AI, with companies using autonomous robotics, machine vision, and predictive analytics to reduce labor costs and maintain soil and crop health. In education, AI is already assisting teachers with menial tasks such as grading and has even proven useful in tutoring students. And in healthcare, WatsonPaths, a research collaboration between IBM and the Cleveland Clinic designed to assist medical professionals, enables “a more natural interaction between physicians, data and electronic medical records.” Designed to mimic the decision-making process physicians use when diagnosing, the software doesn’t need to be manually programmed to find the correct answer, and it has been successful enough that it’s now being used to teach medical students.
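To make the fraud-detection example concrete: a model of this kind is, at heart, a classifier trained on past transactions labeled fraudulent or legitimate, which then ranks new transactions so that human analysts review only the most suspicious few. The sketch below is a hypothetical toy with fabricated data and hand-picked features; it is not FICO’s actual system.

```python
# A toy fraud-scoring sketch: train on labeled historical transactions, then
# rank new ones by predicted fraud probability so humans review only a few.
# Not FICO's system; every transaction and feature here is fabricated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Historical transactions: [amount in dollars, hour of day] -> fraud (1) or not (0).
X_past = np.array([
    [12.50, 14], [40.00, 9], [8.75, 18], [60.00, 12], [25.00, 16],
    [950.00, 3], [1200.00, 2], [880.00, 4], [1500.00, 3], [1020.00, 1],
])
y_past = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

model = make_pipeline(
    StandardScaler(),  # neural networks train far better on standardized inputs
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X_past, y_past)

# Score a new batch and surface the most suspicious transactions for review.
X_new = np.array([[15.00, 13], [1100.00, 2], [45.00, 10], [990.00, 4]])
fraud_prob = model.predict_proba(X_new)[:, 1]
for features, p in sorted(zip(X_new.tolist(), fraud_prob), key=lambda t: -t[1]):
    print(features, round(float(p), 3))
```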

Viewing AI either as a catalyst for the apocalypse or as a panacea for all of mankind’s difficulties is a hindrance to further advancement in the field. Only by understanding both the capabilities and the limitations of these technologies can we unlock their true potential.

Alison Lowery is COO of Aspire Ventures, where she leads operations and develops the methods, teams, and technology to build portfolio companies from the ground up. Before joining Aspire, she served as CTO and VP of Engineering at several tech companies, including Simulmedia, Tacoda, and Real Media, where she led the team responsible for building a groundbreaking behavioral targeting platform. Alison has a passion for empowering great ideas, strengthening teams, and realizing a vision through cutting-edge technology.

References:

  1. http://www.nextgov.com/emerging-tech/2018/02/nvidia-makes-facial-recognition-ai-surveilance/146064/
  2. https://www.thesun.co.uk/tech/5644341/robot-apocalypse-killer-ai/
  3. https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html
  4. https://www.fastcompany.com/3066768/popular-data-driven-weight-loss-app-mixes-ai-and-a-human-touch-to-boost-suc
  5. https://medium.com/@blackbelthelp/artificial-intelligence-panacea-for-your-higher-education-woes-8f38055a3724
  6. https://www.statnews.com/2017/09/05/watson-ibm-cancer/
  7. https://www.gartner.com/technology/research/methodologies/hype-cycle.jsp
  8. http://www.fico.com/en/blogs/uncategorized/artificial-intelligence-find-it-right-in-your-own-backyard/
  9. https://www.techemergence.com/ai-agriculture-present-applications-impact/
  10. http://www.apa.org/pubs/highlights/spotlight/issue-37.aspx
  11. http://www.research.ibm.com/cognitive-computing/watson/watsonpaths.shtml#fbid=0QRPo95mx4B

