ChatGPT - Where It Falls Short

A list of use cases where ChatGPT falls short and its output is incorrect.

Promotion Criteria

ChatGPT shows some bias around race and gender, and as a result it fails to give correct promotion criteria.

Reasoning

At times its reasoning breaks down. For example, it has reasoned that an abacus is faster than DNA computing for deep learning.

Factual Scientific Information

ChatGPT does not reliably provide factual scientific information. It gives answers that sound reasonably accurate and can be hard even for a qualified expert to tell apart from the truth, but in the end they are incorrect.

Spatial Reasoning

It doesn't understand 3D space well and fails at spatial reasoning problems.

Basic Facts

At times it fails at basic facts that can easily be found via Google. Example: “What is the fastest marine mammal?”

Psychological Tests

It struggles with psychological tests such as theory-of-mind tasks.

Drawing Shapes

It struggles at drawing shapes, even though the Python instructions it gives look correct.
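
For reference, a correct version of this kind of drawing task is short. The following is a minimal sketch written for this article (not taken from a ChatGPT transcript); the choice of matplotlib and of a pentagon are illustrative assumptions.

```python
# Minimal sketch of a shape-drawing task (illustrative; not a ChatGPT transcript).
# Draws a regular pentagon by placing five vertices evenly around the unit circle.
import numpy as np
import matplotlib.pyplot as plt

n_sides = 5
# Five evenly spaced angles; the +pi/2 offset puts one vertex at the top.
angles = np.linspace(0, 2 * np.pi, n_sides, endpoint=False) + np.pi / 2
x, y = np.cos(angles), np.sin(angles)

plt.fill(x, y, facecolor="none", edgecolor="black")  # outline only
plt.axis("equal")                                    # keep the shape undistorted
plt.show()
```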

Detecting Whether Text Is Written by AI or Humans

It falls short at detecting whether a given text was written by an AI or by a human.

Math Problems

Though it can solve some programming problems, it fails at answering math problems.
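
As one worked example of the kind of arithmetic that language models commonly stumble on (the numbers here are chosen for illustration, not quoted from a ChatGPT conversation):

```latex
% Multi-digit multiplication, worked out by hand for reference.
\begin{align*}
1234 \times 5678 &= 1234 \times (5000 + 600 + 78) \\
                 &= 6\,170\,000 + 740\,400 + 96\,252 \\
                 &= 7\,006\,652
\end{align*}
```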

Cognitive Reflection Test

It struggles to solve Cognitive Reflection Test questions on the first attempt: it gives the intuitive answer, which is incorrect. If prompted to think step by step, it produces the correct solution.
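
The best-known CRT item is the bat-and-ball question: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive answer is 10 cents; working it out step by step gives the correct 5 cents:

```latex
% Bat-and-ball item: let b be the price of the ball (in dollars).
\begin{align*}
b + (b + 1.00) &= 1.10 \\
2b &= 0.10 \\
b &= 0.05 \quad \text{(5 cents, not the intuitive 10 cents)}
\end{align*}
```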

Self-Awareness

It is not aware of the number of layers and parameters in its own model. This may be an intentional restriction by the OpenAI team to avoid exposing details about the model.

Browse our entire list of ChatGPT use cases and trending ChatGPT conversations.

If you know of any use case where ChatGPT falls short, please DM me.
