
With ChatGPT Under FTC Scrutiny, It's Time to Bet on Safer AI


OpenAI kicked off a technological revolution when it released ChatGPT, its artificial intelligence-powered chatbot, into the world late last year. Generative AI looks like it will be a game-changing technology. With a simple prompt, AI models can generate high-quality text, create images, render videos, write code, and answer questions. Trained on mountains of data, the latest AI models are incredibly impressive.

But are those models safe? The Federal Trade Commission isn't so sure. The agency, tasked with protecting consumers, is investigating whether OpenAI has harmed them. The FTC is looking into OpenAI's data collection practices and whether its models produce false information about individuals. Speaking to the House Judiciary Committee last week, FTC Chair Lina Khan cited reports of people's "sensitive information" showing up in results, according to The New York Times.

OpenAI isn't a public company and can't be invested in directly, but the problem of opacity extends throughout the AI industry. Large language models like the one that powers ChatGPT can sometimes produce false, biased, and harmful results. These models are trained on enormous quantities of text. If any of that training text is toxic, the model will inevitably spit out toxic results on occasion.

Source: Fool.com
