A new report released by NewsGuard says OpenAI's newest generative artificial intelligence (AI) tool, GPT-4, is more likely than its predecessor, GPT-3.5, to spread misinformation when prompted.
Though OpenAI said internal testing showed the updated technology was 40% more likely to produce factual responses than GPT-3.5, NewsGuard claims it generated prominent false narratives more frequently and more persuasively.
Despite appearing to cite peer-reviewed research, ChatGPT has proven susceptible to manipulation, making false claims on issues such as gun safety for children and testosterone levels. If the algorithm is already prone to manipulation from the web, one can only imagine the danger posed by conspiracy theorists who could game the system to promote their worldview under the guise of objective fact.
When the mainstream media claims ChatGPT is promoting "misinformation," it conveniently omits which way the chatbot leans politically. While its disclosure statement claims political neutrality, GPT quietly embeds liberal ideology into its algorithm, leading people to accept left-wing talking points as truth while dismissing right-wing beliefs as "dangerous fake news."