AI chatbots are already widely used by businesses to greet customers and answer their questions – either over the phone or on websites. Some companies have found that they can, to some extent, replace humans with machines in call centre roles.
However, the available evidence suggests there are sectors – such as healthcare and human resources – where extreme care needs to be taken regarding the use of these frontline tools, and ethical oversight may be necessary.
A recent, and highly publicised, example is that of a chatbot called Tessa, which was used by the National Eating Disorder Association (NEDA) in the US. The organisation had initially maintained a helpline operated by a combination of salaried employees and volunteers. This had the express goal of assisting vulnerable people suffering from eating disorders.
However, this year, the organisation disbanded its helpline staff, announcing that it would replace them with the Tessa chatbot. The reasons for this are disputed. Former workers claim that the shift followed a decision by helpline staff to unionise. The vice president of NEDA cited an increased number of calls and wait times, as well as legal liabilities around using volunteer staff.
Whatever the case, after a very brief period of operation, Tessa was taken offline over reports that the chatbot had issued problematic advice that could have exacerbated the symptoms of people seeking help for eating disorders.
It was also reported that Dr Ellen Fitzsimmons-Craft and Dr C Barr Taylor, two highly qualified researchers who assisted in the creation of Tessa, had stipulated that the chatbot was never intended as a replacement for an existing helpline or to provide immediate assistance to those experiencing intense eating disorder symptoms.
So what was Tessa designed for? The researchers, alongside colleagues, had published an observational study highlighting the challenges they faced in designing a rule-based chatbot to interact with users who are concerned about eating disorders. It is quite a fascinating read, illustrating design choices, operations, pitfalls and amendments.
The original version of Tessa was a traditional rule-based chatbot, albeit a highly refined one – that is, one that follows a pre-defined, logic-based structure. It could not deviate from the standardised, pre-programmed responses calibrated by its creators.
Their conclusion included the following point: “Rule-based chatbots have the potential to reach large populations at low cost in providing information and simple interactions but are limited in understanding and responding appropriately to unanticipated user responses”.
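To make this concrete, the behaviour the researchers describe can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a rule-based chatbot – the keywords and responses here are invented for the example and are not Tessa's actual content:

```python
# Hypothetical sketch of a rule-based chatbot. All keywords and replies
# are invented for illustration; they are not drawn from Tessa.

RULES = {
    "hello": "Hello! How can I help you today?",
    "resources": "I can share some general information resources.",
    "bye": "Take care. Goodbye!",
}

FALLBACK = "I'm sorry, I can't help with that. Please contact a human adviser."


def respond(message: str) -> str:
    """Return a pre-programmed reply if a keyword matches, else a fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # The bot cannot improvise: any unanticipated input gets the
    # same safe, scripted fallback.
    return FALLBACK
```

The key property – and the limitation the researchers flag – is that anything outside the scripted rules falls through to a fixed fallback, rather than an improvised answer.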
This might appear to limit the uses for which Tessa was suitable. So how did it end up replacing the helpline previously used by NEDA? The exact chain of events is under discussion amid differing accounts, but, according to NPR, the hosting company of the chatbot changed Tessa from a rules-based chatbot with pre-programmed responses to one with an “enhanced questions and answers feature”.
The later version of Tessa was one employing generative AI, much like ChatGPT and similar products. These advanced AI chatbots are designed to simulate human conversational patterns with the intention of giving more realistic and useful responses. Generating these customised answers relies on large databases of information, which the AI models are trained to “comprehend” through a variety of technological processes: machine learning, deep learning and natural language processing.
Ultimately, the chatbot generated what have been described as potentially harmful answers to some users’ questions. Subsequent discussions have shifted the blame from one institution to another. However, the point remains that these circumstances could potentially have been avoided if there had been a body providing ethical oversight, a “human in the loop” and an adherence to the clear purpose of Tessa’s original design.
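One way a “human in the loop” can work in practice is as a screening gate: automatically generated replies touching sensitive topics are held back for human sign-off instead of being sent directly. The sketch below is a simplified assumption of how such a safeguard might look – the flagged terms and escalation logic are illustrative, not a description of any system NEDA or its vendor used:

```python
# Hypothetical "human in the loop" safeguard: screen generated replies
# and escalate sensitive ones to a human reviewer. The term list and
# logic are illustrative assumptions only.

SENSITIVE_TERMS = {"calorie deficit", "weight loss", "restrict"}


def needs_human_review(reply: str) -> bool:
    """Flag replies that mention sensitive topics for human sign-off."""
    lowered = reply.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)


def dispatch(reply: str) -> str:
    """Send safe replies directly; hold flagged ones for a reviewer."""
    if needs_human_review(reply):
        return "ESCALATED: held for human review"
    return reply
```

Real deployments would use far more sophisticated classifiers than a keyword list, but the principle is the same: the machine's output is not the final word on sensitive topics.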
It’s important to learn lessons from cases such as this against the background of a rush towards the integration of AI in a variety of systems. And while these events took place in the US, they contain lessons for those seeking to do the same in other countries.
The UK would appear to have a somewhat fragmented approach to this issue. The advisory board to the Centre for Data Ethics and Innovation (CDEI) was recently dissolved and its seat at the table was taken up by the newly formed Frontier AI Taskforce. There are also reports that AI systems are already being trialled in London as tools to aid workers – though not as a replacement for a helpline.
Both of these examples highlight a potential tension between ethical considerations and business interests. We must hope that the two will eventually align, balancing the wellbeing of individuals with the efficiency and benefits that AI could provide.
However, in some areas where organisations interact with the public, AI-generated responses and simulated empathy may never be enough to replace genuine humanity and compassion – particularly in the areas of medicine and mental health.
Mark Tsagas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.