“Whilst the law imposes a high bar when it comes to processing personal data in the UK and EU, that doesn’t make it impossible for a consumer-facing, AI-powered health chatbot to operate on this side of the Atlantic. ChatGPT Health certainly isn’t the first. And it won’t be the last.
In recognition of the particular sensitivity of health data, and perhaps anticipating the concerns of regulators and data subjects the world over, OpenAI’s marketing emphasises from the get-go that ChatGPT Health has been designed with privacy and security in mind. It takes great pains to highlight many of the controls in place that one would expect – including not using conversations to train its AI. The devil, however, is in the detail; and what’s viable in the US at launch may require material changes for the EU and UK markets, as well as impact assessments.
But it’s not just data protection law that can be a blocker. AI tools used for medical purposes may qualify as medical devices under UK law, in which case there would be other hoops to jump through before those tools could be placed on the UK market. This includes registering with the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and, depending on the risk classification, undertaking further assessments. There will be similar considerations from an EU perspective. Additionally, if the chatbot is considered a medical device, the EU’s AI Act will treat it as a high-risk AI system, triggering further obligations.
No surprise, then, that the marketing is plastered with disclaimers about the tool being designed to support, not replace, medical care, and that it’s not intended for diagnosis or treatment. As well as trying to influence the extent to which it is treated as a regulated medical device (though regulators will look at what it does in practice, not what it says on the tin), these disclaimers are also vital given accuracy issues and the potential for AI to hallucinate – when it comes to health, errors in AI outputs can quite literally be a matter of life and death.
Whatever the reason, phased roll-outs are common in tech, and this may well be what OpenAI had planned from the start – noting that there already appears to be a waiting list in the US.
More sharing usually means more risk. By expanding the attack surface, it creates more opportunities for data to be compromised. It can also mean less control, with unforeseen secondary uses of data becoming more likely. But none of the rich insights, improved performance and personalisation that we can all benefit from – and use to help improve our health – are possible without that sharing.
Sharing requires trust, especially when it comes to the most deeply personal aspects of our lives. So if OpenAI gets it wrong, that breach of trust will likely have the opposite effect and slow adoption of tools that have the potential to transform our lives for the better. That’s why, when it comes to AI, it’s so important to get out of the ‘zero-sum’ mindset: innovation and safety go hand in hand, and it’s not a matter of one or the other.”
This article was first published by Vogue Business and sits behind a paywall.
