“The AI” as the new “appeal to authority” argument

ChatGPT is a project from OpenAI. They have made it available for free use – for now – and it has attracted much attention.

Users can ask questions in plain language, and even ask it to author essays on entire subjects.

In the near future – perhaps as soon as 2024 – use of ChatGPT-like systems might become common.

Many may begin to use The AI as the basis for their perspectives – and share The AI’s written responses on social media to shut down those who post something objectionable – or merely something they disagree with.

For example, initially one could ask ChatGPT about climate change controversies, and ChatGPT acknowledged uncertainties. Within days, that was shut off – and ChatGPT now recites only the “company line”. This is reminiscent of our recent experience with social media platforms censoring posts from highly credentialed scientists for having perspectives different from the government’s. Instead of online moderators, we will have The AI do the censoring for us.

Those who control The AI will control the world.

But by 2024, we may see online discussions – public discussions – shut off by citing the output of The AI engines.

The first two questions I asked of ChatGPT were answered spectacularly and provably wrong. I asked it about two bestselling software products made in the 1980s – I was the creator of one of them. ChatGPT falsely said the products were sold by a company whose name it invented out of thin air (no such company existed), as a subsidiary of a firm that was actually a competitor of ours!

I then asked a related, third question, in a different way, and on that one, it had our company name correct.

That means the knowledge base is holding inconsistent information. That in turn raises questions about how they verify the accuracy of data in their knowledge base – and how users can identify and report errors.

There is no way for a user to inform ChatGPT of its errors.

But think of this in the broader, long-term context – The AI becomes the new “appeal to authority” argument.

Ultimately, an argument is true based solely on facts and logic – not on who asserts it. Quoting The AI does not make an argument valid. But as with the classic “appeal to authority” argument, this approach is persuasive – and hence, widely used. And of course, it enables false information to be accepted as correct – because a big name said so, even if it’s wrong.
