More than 40% of healthcare execs and clinicians surveyed say they know of a colleague who has used unapproved AI, and 17% have done it themselves.

A top concern of healthcare executives is that AI adoption among clinicians is moving faster than governance. A new survey underscores that point. Some 41% of clinicians and administrators taking part in a Wolters Kluwer survey say they are aware of colleagues who are using AI tools that haven’t been approved by the health system or hospital. And 17% say they’ve done it themselves.

Experts call this “shadow AI,” and warn that it could be dangerous.

“Doctors and administrators are choosing AI tools for speed and workflow optimization, and when approved options aren’t available, they may be taking risks,” Yaw Fellin, SVP and General Manager of Clinical Decision Support and Provider Solutions for Wolters Kluwer Health, said in a press release. “Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns. Leaders must act now to close the policy gap around AI use, develop clear compliance guidelines, and ensure that only validated, secure, enterprise-ready AI tools are used in clinical care.”

To address the use of shadow AI, a white paper accompanying the survey offers six steps:

- Develop clear policies on AI use
- Foster collaboration between policy decision-makers and users
- Identify purpose-built AI tools that support enterprise-wide security and goals
- Clearly communicate AI policies and provider training sessions
- Provide broader training on…