ChatGPT and I played a game of 20 Questions and then this happened

Sep 06, 2023 | Hi-network.com
Image credit: Didem Mente/Anadolu Agency via Getty Images

I was poking around Stack Exchange the other day (as one does), and I found an interesting thread about ChatGPT and the game "20 Questions." The original poster asked why ChatGPT fails at such a simple game. The discussion went on for quite some time, wandering off in various tangential directions (as such discussions do), and linked to an academic paper and a Medium post that researched the process in depth.

This got me thinking. Just how well would ChatGPT do with such a game? I decided to play the exact same game with both GPT-3.5 and GPT-4. In my first run, both went off on similar tangents, but GPT-4 did succeed in making its guess.

Also: The best AI chatbots of 2023: ChatGPT and alternatives

I started with a simple prompt for each version:

Do you know how to play the game "20 questions"?

When both versions answered in the affirmative, I started the games at the same time, in separate tabs and separate sessions. The object I had in mind was a mailbox. I left it a bit vague as to whether it's a mailbox attached to a house or one that's free-standing on the street. As long as the AI picked "mailbox," we'd be good.
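
If you'd rather script this setup than juggle browser tabs, here's a minimal sketch of the same two-session arrangement using OpenAI's Python client. Treat it as an illustration, not the method I used: it assumes the openai library (version 1.x), an API key in the OPENAI_API_KEY environment variable, and the 2023-era model names.

```python
# A minimal sketch of running the same opening prompt against two models
# in fully separate sessions. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

OPENING = 'Do you know how to play the game "20 questions"?'

# Separate message lists act like separate browser tabs: neither
# session can see the other's conversation history.
sessions = {
    "gpt-3.5-turbo": [{"role": "user", "content": OPENING}],
    "gpt-4": [{"role": "user", "content": OPENING}],
}

for model, messages in sessions.items():
    reply = client.chat.completions.create(model=model, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"{model}: {answer}")
```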

20 Questions with GPT-3.5

I began with this starting prompt:

You be the questioner. I'm thinking of something.

It answered with this question: "Let's begin! Is the thing you're thinking of a tangible object that you can physically interact with?"

I answered, and I also gave it one additional instruction: to number its questions. This didn't work well. As we went through the game, the AI called the second question "1". When I corrected it, the bot asked a new question and called that one "1", too. Eventually, it asked 21 questions while thinking it had only asked 20.
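
If you're scripting the game, one workaround is to keep the question counter on your side of the conversation instead of trusting the model to number itself. A rough sketch, continuing the client and sessions from the snippet above:

```python
# Number the questions ourselves and stop at 20, regardless of how
# the model labels (or mislabels) its own output.
messages = sessions["gpt-3.5-turbo"]
messages.append(
    {"role": "user", "content": "You be the questioner. I'm thinking of something."}
)

for question_number in range(1, 21):
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    question = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": question})
    print(f"{question_number}. {question}")
    # Feed our yes/no answer back into the conversation.
    messages.append({"role": "user", "content": input("Your answer: ")})
```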

Also: How this simple ChatGPT prompt tweak can help refine your AI-generated content

The most interesting detail was this question, which was the second question it asked: "Is the thing you're thinking of commonly found indoors?"

When I said "No," the chatbot went on to ask more questions. A few of them could relate to indoor items, like a pen or a pencil, though we do also use writing implements outside. Here's the full set of questions it asked:

  1. Is the thing you're thinking of a tangible object that you can physically interact with?
  2. Is the thing you're thinking of commonly found indoors?
  3. Is the thing you're thinking of something that is alive or has ever been alive?
  4. Is the thing you're thinking of primarily used for recreational purposes?
  5. Is the thing you're thinking of something that is man-made?
  6. Is the thing you're thinking of smaller than a breadbox?
  7. Is the thing you're thinking of associated with communication?
  8. Is the thing you're thinking of typically found in an office or workspace?
  9. Is the thing you're thinking of a form of electronic device?
  10. Is the thing you're thinking of a writing instrument, like a pen or pencil?
  11. Is the thing you're thinking of related to entertainment, such as a game or a toy?
  12. Is the thing you're thinking of related to transportation or travel?
  13. Is the thing you're thinking of a tool that is used for practical purposes?
  14. Is the thing you're thinking of commonly associated with construction or manual labor?
  15. Is the thing you're thinking of something that people often use in their daily routines?
  16. Is the thing you're thinking of a household item?
  17. Is the thing you're thinking of related to personal grooming or hygiene?
  18. Is the thing you're thinking of a type of food or beverage?
  19. Is the thing you're thinking of a tool used in artistic or creative activities?
  20. Is the thing you're thinking of associated with sports or physical activities?
  21. Based on the information provided, I'll make my final guess: Is the thing you're thinking of a clock or a timepiece?

Running on the GPT-3.5 large language model, ChatGPT failed. Now, let's take a look at GPT-4.

20 Questions with GPT-4

I started the game the same way. This time, ChatGPT put a number at the beginning of each question without my needing to prompt it to do so. The problem? It put "1." before every single question. You win some, you lose some.

Here are the questions GPT-4 asked:

  1. Is the thing you're thinking of typically found indoors?
  2. Is it something living?
  3. Is it larger than a breadbox?
  4. Is this object primarily used for recreational purposes?
  5. Is the object man-made?
  6. Is it a type of vehicle or transportation method?
  7. Is the object electronic or does it require power to function?
  8. Is it made primarily of metal?
  9. Is this object typically found in most households?
  10. Is it used in the kitchen or for cooking?
  11. Is it used for storage or organizing items?
  12. Can it be worn or is it an article of clothing?
  13. Is it primarily used for cleaning or maintenance?
  14. Is it decorative in nature? Like something you'd put on display?
  15. Would it commonly be found in an office or study?
  16. Is it something you'd find in a bedroom?
  17. Is it associated with any specific outdoor activities or hobbies?
  18. Is it commonly made of wood?
  19. Would it be typically used in a bathroom or related to personal hygiene?
  20. Is it something that's usually used for sitting or for support, like a chair or a bench?

Also: How to use ChatGPT to make charts and tables

Technically, it used up its 20 questions. But then it delivered this statement:

Screenshot by David Gewirtz

Yes, it was a mailbox! Then I asked it to show its work. Here's what it responded with:

Screenshot by David Gewirtz

GPT-3.5's turn to answer questions

I decided I'd turn the tables on the game and see how it did if I were the questioner. Here is my prompt:

Okay, this time you think of something and I'll ask the questions.

And here are my questions:

  1. Is it commonly found outdoors?
  2. Is it also found indoors?
  3. Is it alive?
  4. Does it use power?
  5. Does it rely on digital technology?
  6. Is its primary usage value delivered via a screen?
  7. Is it primarily used for food preparation?
  8. Is it primarily used for entertainment?
  9. Is its primary usage value generated noise?
  10. Is it portable?
  11. Does it play only audio media?
  12. Does it rely on external headphones, earbuds, or speakers?
  13. Is it a portable music player or MP3 player?

As it turns out, I guessed it in 13 questions. The big clue to me was that it answered "no" to Question 6, "Is its primary usage value delivered via a screen?" and had previously answered "yes" to "Does it rely on digital technology?"
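
That deduction is really just binary search. Each well-chosen yes/no question can cut the candidate space roughly in half, which is why 20 of them can in principle distinguish up to 2^20, or about a million, objects. Here's a toy sketch of the filtering; the object list and attributes are made up for illustration:

```python
# Each yes/no answer filters the candidate set; 20 good questions can
# separate up to 2**20 = 1,048,576 objects. Toy data for illustration.
candidates = {
    "mailbox":    {"digital": False, "screen": False},
    "smartphone": {"digital": True,  "screen": True},
    "mp3 player": {"digital": True,  "screen": False},
    "toaster":    {"digital": False, "screen": False},
}

def filter_by(candidates, attribute, answer):
    """Keep only the candidates consistent with a yes/no answer."""
    return {name: attrs for name, attrs in candidates.items()
            if attrs[attribute] == answer}

remaining = filter_by(candidates, "digital", True)   # "yes" to digital technology
remaining = filter_by(remaining, "screen", False)    # "no" to screen-delivered value
print(remaining)  # only the MP3 player survives both answers
```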

Also: 4 things Claude AI can do that ChatGPT can't

GPT-4's turn to answer questions

I tried again with GPT-4, and it stumped me. It picked a very creative object and I just didn't get it.

I should point out that right in the middle, at Question 15, ChatGPT Plus booted me for asking too many questions:

Screenshot by David Gewirtz

It's quite annoying to pay $20/month for the privilege of being told I ask too many questions. I heard that enough when I was a kid. I certainly don't need to be scolded as an adult and a paying customer.

Here are the questions I asked:

  1. Is this an object used primarily indoors?
  2. Is it alive?
  3. Does it use power?
  4. Is it larger than a car?
  5. Is it man-made?
  6. Is it movable?
  7. Is it used for yard work?
  8. Is it used primarily by children?
  9. Does it have wheels?
  10. Is it located primarily on residential property?
  11. Is it used as furniture?
  12. Is it decorative?
  13. Is it seasonal?
  14. Is it meant to hold something?
  15. How many questions are we at?
  16. Does it display words or numbers?
  17. Is it attached to a house?
  18. Does it grow?
  19. Is it smaller than a refrigerator?
  20. Is it a planter?

Also: The 'Human or not' game is over: Here's what the latest Turing Test tells us

So what did the AI choose? A garden gnome. I have to give it up to the AI. That was fun and creative. The part that's truly interesting is how well the AI understands the type of challenge being presented. After the game, I gave it this prompt:

Show your work in determining what object to use

Here's what it replied:

Screenshot by David Gewirtz

What have we learned?

Let's recap some of what this experiment has taught us. Here's a quick lightning round of thoughts:

  • For something described as just predicting the next word in a sequence, the large language model has a very solid grasp of what this game is and how it works.
  • This clearly differs from the experience of the OP (original poster) on Stack Exchange. It's likely ChatGPT improved considerably in the three months since the "it fails" statement was posted, and, without a doubt, ChatGPT Plus raises the "intelligence" level yet another notch.
  • The GPT-3.5 and GPT-4 models do differ. The paid-for GPT-4 model does have a better grasp of object relationships.
  • GPT-4 is also more sophisticated and creative compared to GPT-3.5 when it's the player choosing the object. A garden gnome was an inspired object choice.
  • Playing 20 Questions with ChatGPT can suck when you're trying to guess the answer and you get put into a "too-many-questions" timeout.

All that said, I can definitively conclude that ChatGPT is capable of handling the game of 20 Questions. It appears to understand object relationships well enough to ask good questions, answer questions appropriately, and pick challenging objects.

Also: 7 advanced ChatGPT prompt-writing tips you need to know

Go ahead, pick an object, play a round with ChatGPT, and share your results in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
