Wednesday, January 17, 2024

ChatGPT Limitation or Flaw?

I found an interesting limitation, or perhaps a flaw, in ChatGPT when I present it with multiple questions in a single prompt.

First, I present ChatGPT with two questions in one prompt, one after the other. The second question in the same prompt depends on the answer to the first.

Diagram 1: ChatGPT returns a wrong value.

Diagram 1 shows the result ChatGPT generated for a prompt that contains two questions. If I ask ChatGPT the same questions separately, the result is the one shown in Diagram 2, which is not the same as the result in Diagram 1.

Diagram 2: ChatGPT returns a different result when the questions are asked separately.

If I ask ChatGPT non-mathematical questions, as shown in Diagram 3, the result tends to be consistent with the answers given when the questions are asked separately and answered individually.

Diagram 3: Non-mathematical question

As a matter of fact, blue + yellow = green, and green + red = brown.

Another Example:
Diagram 4: ChatGPT-4 seems to interpret the two similar questions in different ways.


I have not yet had the time to investigate further to what extent ChatGPT becomes less reliable depending on how I structure my prompts. Just be mindful that ChatGPT is not bulletproof, and it is best that we verify any work for which we rely on ChatGPT.
