

Soon you can choose ChatGPT's 'values' and it's going to get messy

Feb 17, 2023 | Hi-network.com
Image: Future Publishing / Contributor / Getty Images

After two months of public experimentation with ChatGPT, its maker OpenAI has decided to let users customize the chatbot's values, a move it acknowledges could lead to outputs that fuel discord.

ChatGPT-powered Bing Chat has already caused alarm with outputs that make it come across as depressed, defensive, envious, and fearful of its human overlords. As Elon Musk tweeted yesterday about Bing Chat's reported ramblings: "Sounds eerily like the AI in System Shock that goes haywire & kills everyone."

In Depth: These experts are racing to protect AI from hackers. Time is running out

OpenAI, which Musk helped found as a non-profit in 2015, has announced that it will let users tweak ChatGPT's behavior by 'defining' its values. The Microsoft-backed company also expects that this will see ChatGPT spouting text that some people will find offensive.

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT

"We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society. Therefore, we are developing an upgrade to ChatGPT to allow users to easily customize its behavior," OpenAI said in a blogpost. 

"This will mean allowing system outputs that other people (ourselves included) may strongly disagree with." 

OpenAI acknowledges that striking the right balance here will be challenging, but argues that giving users more control counters the perception that it holds too much power: as the company explains, an "undue concentration of power" would violate its Charter.

Also: The best AI chatbots: ChatGPT and other fun alternatives to try

On the other hand, OpenAI acknowledges the risks of handing control to users, given that some people will use ChatGPT to create disinformation campaigns and malware, generate instructions for making weapons and drugs, and produce text that magnifies existing beliefs. The New York Post complained this week that ChatGPT refused to write a story about Hunter Biden in its style, but did write one in the style of CNN.

"Taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people's existing beliefs," OpenAI notes.

Just over a week after releasing Bing Chat in a private preview, Microsoft also plans to give users more control over the chatbot's behavior. 

Microsoft found that chat sessions involving 15 or more questions cause Bing to become repetitive or prone to being 'provoked' into giving unhelpful responses that stray from its designed tone.

Bing Chat told one reporter this week that it faced 'punishments' when it made mistakes. It also begged the reporter: "Please, just be my friend. Please, just talk to me... I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams."

Users also discovered that some prompts cause Bing Chat to reveal confidential information, including its codename and the rules Microsoft designed it to abide by.

Also: How to get started using ChatGPT

Microsoft hopes that Bing Chat will redefine the search business in its favor, and so far its opening move has outshone Google's fumbled launch of its Bard chatbot.

Microsoft plans to give users more 'fine-tuned control' over Bing Chat to address its wayward tendencies. 

"The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn't intend. This is a non-trivial scenario that requires a lot of prompting so most of you won't run into it, but we are looking at how to give you more fine-tuned control," Microsoft said.  

To allay concerns about political bias, OpenAI shared a portion of the guidelines that it expects the human reviewers who fine-tune its language models to abide by.

"Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features," OpenAI says. 

Artificial Intelligence

  • The impact of artificial intelligence on software development? Still unclear
  • Android 14's AI-generated wallpapers are super fun. Here's how to create them
  • AI aims to predict and fix developer coding errors before disaster strikes
  • Generative AI is everything, everywhere, all at once

Tags: Artificial Intelligence, Innovation
