{"id":382042,"date":"2026-05-13T03:01:47","date_gmt":"2026-05-13T01:01:47","guid":{"rendered":"https:\/\/prostartup.it\/study-reveals-dangerous-flaw-in-ai-symptom-checkers\/"},"modified":"2026-05-13T03:01:47","modified_gmt":"2026-05-13T01:01:47","slug":"study-reveals-dangerous-flaw-in-ai-symptom-checkers","status":"publish","type":"post","link":"https:\/\/prostartup.it\/ru\/study-reveals-dangerous-flaw-in-ai-symptom-checkers\/","title":{"rendered":"Study Reveals Dangerous Flaw in AI Symptom Checkers"},"content":{"rendered":"<div>\n<figure id=\"attachment_519612\" aria-describedby=\"caption-attachment-519612\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i2.wp.com\/scitechdaily.com\/images\/Artificial-Intelligence-AI-Robot-Medical-Healthcare-777x518.jpg?ssl=1\" class=\"size-large wp-image-519612\" alt=\"Artificial Intelligence AI Robot Medical Healthcare\" width=\"777\" height=\"518\"><figcaption id=\"caption-attachment-519612\" class=\"wp-caption-text\">AI symptom checkers are becoming a larger part of modern healthcare, but human psychology may limit their effectiveness in unexpected ways. Credit: Shutterstock<\/figcaption><\/figure>\n<p><strong>Researchers found that distrust in AI causes people to give less detailed symptom reports, potentially reducing the accuracy of digital healthcare assessments.<\/strong><\/p>\n<p>Before seeing a doctor in the future, patients may first find themselves answering questions from an AI. Based on those responses, the system could decide whether the condition is urgent, whether treatment can wait, and even when an appointment should be scheduled.<\/p>\n<p>That future may sound distant, but healthcare is already moving in that direction. 
AI chatbots and digital symptom checkers are rapidly becoming the first point of contact for patients performing \u201cself-triage,\u201d offering early guidance before a medical professional is ever involved.<\/p>\n<p>Now, researchers are investigating a critical new question: do people communicate differently with machines than they do with doctors? The answer could have major implications, because even the most advanced AI systems can only make reliable assessments when patients provide detailed and accurate information.<\/p>\n<h4>Study Reveals Communication Gap With Medical AI<\/h4>\n<p>That issue is highlighted in a new study published in <em>Nature Health<\/em>. The research was led by Professor Wilfried Kunde of the University of W\u00fcrzburg and research associate Moritz Reis. Scientists from Charit\u00e9 \u2013 Universit\u00e4tsmedizin Berlin, the University of Cambridge, Helios Klinikum Emil von Behring, and Vivantes Klinikum Neuk\u00f6lln also contributed to the work.<\/p>\n<p>\u201cThe 500 study participants were tasked with writing simulated symptom reports for two common conditions \u2013 unusual headaches and flu-like symptoms,\u201d Moritz Reis explained. Participants were told their reports would be reviewed either by an AI chatbot or by a human doctor. Researchers then evaluated how useful the reports were for determining medical urgency.<\/p>\n<p>The results showed a clear pattern. When participants believed they were communicating with AI, their symptom descriptions became less useful for medical assessment compared with reports intended for healthcare professionals. The same trend appeared even among participants who were actually experiencing the symptoms described in the survey.<\/p>\n<h4>Shorter Symptom Reports Hurt AI Accuracy<\/h4>\n<p>The difference was reflected in the amount of detail people provided. 
Reports written for medical professionals averaged 255.6 characters, while those written for chatbots averaged 228.7 characters.<\/p>\n<p>Although a gap of roughly 27 characters may appear minor, the researchers said it can still have real consequences. Even advanced AI systems can deliver inaccurate medical advice if patients leave out key details. According to the team, the effectiveness of digital health assessments depends not only on computing power but also on whether users provide thorough descriptions of their symptoms.<\/p>\n<p>Researchers believe one reason for this hesitation is something known as \u201cuniqueness neglect.\u201d \u201cMany people assume that AI cannot grasp the individual nuances of their personal situation and instead merely matches standardized patterns,\u201d explains Wilfried Kunde.<\/p>\n<h4>Trust, Privacy, and \u201cUniqueness Neglect\u201d<\/h4>\n<p>Concerns about privacy and skepticism toward algorithm-based diagnoses may also cause users to provide incomplete or vague information. Moritz Reis described the problem this way: \u201cIf we don\u2019t trust a machine to understand our uniqueness, we may unconsciously withhold the information it would need to provide precise assistance.\u201d As a result, important medical details may never reach the system, reducing the quality of the diagnosis.<\/p>\n<p>The researchers say the findings demonstrate that improving AI technology alone will not solve the problem. They believe better user interface design could help encourage stronger communication between patients and digital systems.<\/p>\n<p>To improve symptom reporting, the team recommends that developers give users clear examples of detailed, high-quality descriptions and design AI systems that actively ask follow-up questions when information is missing. 
Encouraging patients to share more complete details could reduce misdiagnoses and help ease pressure on healthcare systems.<\/p>\n<p>Reference: \u201cReduced symptom reporting quality during human\u2013chatbot versus human\u2013physician interactions\u201d by Moritz Reis, Florian Reis, Yeun Joon Kim, Aylin Demir, Jess Lim, Matthias I. Gr\u00f6schel, Sebastian D. Boie and Wilfried Kunde, 1 May 2026, <i>Nature Health<\/i>.<br \/>DOI: 10.1038\/s44360-026-00116-y<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>AI symptom checkers are becoming a larger part of modern healthcare, but human psychology may limit their effectiveness in unexpected ways. Credit: Shutterstock Researchers found<\/p>","protected":false},"author":1,"featured_media":382043,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"fifu_image_url":"https:\/\/scitechdaily.com\/images\/Artificial-Intelligence-AI-Robot-Medical-Healthcare-777x518.jpg","fifu_image_alt":"Study Reveals Dangerous Flaw in AI Symptom 
Checkers","footnotes":""},"categories":[9],"tags":[18,2245,37],"class_list":["post-382042","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-innovations","tag-ai","tag-reveals","tag-study"],"_links":{"self":[{"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/posts\/382042","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/comments?post=382042"}],"version-history":[{"count":0,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/posts\/382042\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/media\/382043"}],"wp:attachment":[{"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/media?parent=382042"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/categories?post=382042"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/prostartup.it\/ru\/wp-json\/wp\/v2\/tags?post=382042"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}