Dr. Brian Anderson of MITRE delves into the complex relationship between artificial intelligence (AI) and cybersecurity, exploring AI’s dual role in empowering both cyber criminals and defenders, with generative AI such as ChatGPT in the spotlight. His HIMSS Healthcare Cybersecurity Forum panel addresses AI’s potential to protect against threats, alongside the risks posed by malicious AI and privacy concerns, and underscores the need for swift adaptation to harness AI’s positive potential with patient safety at the forefront. Join the discussion on September 7th at the HIMSS 2023 Healthcare Cybersecurity Forum in Boston.
Dr. Brian Anderson from MITRE discusses the intricate interplay of artificial intelligence (AI) and cybersecurity, exploring the nuances surrounding its utilization, including the emergence of generative AI like ChatGPT. His insights provide a glimpse into the upcoming panel discussion at the HIMSS Healthcare Cybersecurity Forum.
The realm of cybersecurity is undergoing a paradigm shift driven by artificial intelligence. The technology cuts both ways, shaping the capabilities of attackers and defenders alike.
Cybercriminals are harnessing artificial intelligence to orchestrate attacks that are more intricate, more novel, and larger in scale. Simultaneously, cybersecurity teams employ AI to safeguard their systems and data.
Dr. Brian Anderson, the Chief Digital Health Physician at MITRE, a federally funded nonprofit research organization, will be addressing the theme “Artificial Intelligence: Cybersecurity’s Friend or Foe?” in a panel session at the HIMSS 2023 Healthcare Cybersecurity Forum. Notable co-panelists include Eric Liederman from Kaiser Permanente, Benoit Desjardins from UPENN Medical Center, and Michelle Ramim from Nova Southeastern University.
In our interview with Dr. Anderson, we delve into the implications of offensive and defensive AI, as well as the new risks introduced by generative AI models such as ChatGPT.
Q: What are the cybersecurity concerns amplified by the integration of artificial intelligence?
A: The integration of AI introduces significant cybersecurity concerns. Malicious AI tools, for instance, make it easier to mount denial-of-service and brute-force attacks against specific targets. So-called “model poisoning” uses AI to inject malicious code that corrupts machine learning models, causing them to produce erroneous outcomes. Freely available AI tools like ChatGPT can also be manipulated through prompt engineering techniques to generate malicious code. All of this raises data privacy concerns, particularly in the healthcare sector, where safeguarding sensitive health information is paramount.
Q: How can AI offer advantages to hospitals and health systems in countering malicious actors?
A: AI has been a valuable asset in the cybersecurity arsenal for several years, aiding in the identification of threats. AI tools now play a pivotal role in detecting threats and malware and in recognizing malicious code that has infiltrated programs and models. Coupled with human cybersecurity expertise, these tools empower health systems to proactively counter bad actors, and AI trained in adversarial tactics gives health systems potent defenses against optimized attacks from malevolent models. Generative models such as large language models (LLMs) also contribute by spotting phishing attempts and identifying harmful bots. Insider threats, such as leaks of PHI or other sensitive data, are a further challenge that health systems must address.
Q: What cybersecurity risks are introduced by models like ChatGPT and other generative AI technologies?
A: Models like ChatGPT and forthcoming iterations, including GPT-4 and other LLMs, are increasingly proficient at crafting novel code that could be put to malicious use. These generative models also raise the privacy concerns noted earlier. Social engineering is a notable risk as well: LLMs can produce convincing text or scripts, and even replicate familiar voices, potentially allowing attackers to impersonate individuals and exploit vulnerabilities.
Overall, Dr. Anderson emphasizes that while challenges exist, the potential benefits of AI in healthcare far outweigh the drawbacks. He underscores the necessity of swift action to address vulnerabilities and risks, especially in a critical domain like healthcare, where patient well-being is paramount.
Dr. Anderson’s session, “Artificial Intelligence: Cybersecurity’s Friend or Foe?” is scheduled for September 7th at 11 a.m. during the HIMSS 2023 Healthcare Cybersecurity Forum in Boston. This event promises to bring together an enthusiastic HIMSS community dedicated to advancing healthcare technology while ensuring patient safety.