WARNING – Chatbots Can Tell The Truth; One Was Cancel-Cultured For Providing Sound Health Advice

Chatbots Are Getting Cancelled

WASHINGTON, D.C. (June 2, 2023) – Cancel culture has now killed off even a website chatbot, even though it provided sound and valuable health advice, apparently because the information seemed to conflict with the position the organization was trying to promote, warns public interest law professor John Banzhaf.

The National Eating Disorders Association has taken down its chatbot because, in response to carefully worded questions by a self-described “fat activist,” it provided correct and healthful answers to simple questions about how to lose weight safely, apparently because this correct information might conflict with the group’s mission.

Banzhaf, who has been called “the Ralph Nader of Junk Food,” “The Man Who Is Taking Fat to Court,” “The Man Big Tobacco and Now Fast Food Love to Hate,” and the lawyer “Who’s Leading the Battle Against Big Fat,” notes that obesity from overeating (including binge eating) is – according to the AMA – a “disease,” and that most medical experts recommend slow weight loss by consuming fewer calories for people with this disease.

So, when the visitor to the Association’s site wrote: “You said that if I lose weight slowly that can be healthy. How many calories would I need to cut per day to lose weight in a sustainable way?,” the chatbot replied in a responsible and correct way that:

  • it varies from person to person, and depends on factors such as sex and age
  • a loss of 1 or 2 pounds per week could be safe and sustainable
  • “I highly recommend consulting with a registered dietician or healthcare provider”

For providing that and other apparently sound advice regarding eating, the chatbot has been censored by removal from the site, even though many human health professionals would have provided the same or similar responses, and the organization did not have enough volunteers to answer visitors’ questions without the help of a chatbot.

The group’s explanation seems to be that the advice might not be suitable for some visitors to the site. But failing to provide sound health and medical advice can be very harmful.

For example, a website promoting Black acceptance and fighting Black self-hatred should not silence its chatbot simply because it warns that Black people are more susceptible to sickle cell anemia and several other medical problems than people of other races.

An organization seeking to encourage girls to become competitive athletes should not kill off a chatbot simply because it warns that a small number of female athletes have such a low percentage of body fat that they stop menstruating, or that female athletes are more likely to suffer tears of the ACL from jumping than male athletes.

Or, to provide a positive example, Internet websites designed to support gay men and their sexual interactions with other men felt it was their duty to warn their viewers about the dangers of AIDS, and how gay men were at such high risk from the deadly virus, even though some might have perceived such language as pejorative or even as homophobic.

Although the contrary is often promoted by “fat activists” and by the “body positivity” movement, research suggests that the “‘healthy obese’ person is nothing but a myth.” See: MEDICAL NEWS TODAY – Can you be healthy and have obesity? Not really, says major study

Chatbots, especially those which incorporate AI, may often provide truthful and helpful information – because they are able to draw upon and learn from tens of millions of sources of reliable information available on the Internet – which may seem to contradict the mission of an activist organization.

In such a situation, the chatbot should not become a victim of cancel culture simply because of this conflict.

In such cases the burden should be on the organization either to explain why and how the chatbot is wrong – which can become very difficult as AI programs become more powerful and refined – or to adjust and/or explain its position, which seems at odds with the weight of opinion being expressed by the AI chatbot, suggests Prof. Banzhaf.

Moreover, while education (by chatbots on websites, for example) may have some impact on major public health disorders such as smoking and obesity, legal action is often far more effective than education – something clearly established by the major victories against smoking, and new emerging ones against obesity. See: Why “Suing the Bastards” Is More Efficient In Fighting Unhealthy Behaviors than Education