As it begins marketing its own chatbot, Bard, around the world, Google parent company Alphabet Inc. is warning its employees about how to use artificial intelligence (AI) chatbots safely, including Bard itself, according to a Reuters report citing four anonymous sources.
The company reportedly urged its software engineers to avoid directly using computer code generated by chatbots, since AI models can reproduce data they absorb during training, creating the risk of a leak. Something similar recently happened at Samsung, where an engineer uploaded sensitive code to ChatGPT.
Not only has Google been transparent about the risks of this emerging technology, but it has also been at the forefront of implementing safeguards against threats such as data theft, data poisoning, and malicious prompt injection by bad actors. The company certainly aims to profit from AI, but it is also spending heavily to protect public safety and privacy.
Tech executives leading the AI discussion cast themselves as prophets, empowered to describe the end times while also offering the path to salvation. Even as they warn of the "existential threat" AI poses, they continue to push for the universal adoption of the technology, claiming it will deliver us to a technocratic Garden of Eden. For now, we shouldn't trust their chatbots or their self-proclaimed wisdom.