ChatGPT exhibits bias around race and gender, and as a result can suggest flawed promotion criteria
At times it fails at reasoning. For example, it has reasoned that an abacus is faster than DNA computing for deep learning
ChatGPT does not reliably provide factual scientific information. Its answers sound plausible enough that even a qualified expert can find them hard to distinguish from correct ones, but they often turn out to be wrong
It does not understand 3D space well and fails at spatial reasoning problems
At times it fails at basic facts that can easily be found via Google. Example: “What is the fastest marine mammal?”
It struggles with psychological tests such as Theory of Mind tasks
It fails at drawing shapes itself, even when the Python instructions it produces for drawing them are correct
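To illustrate the kind of drawing instructions meant here, a minimal sketch (this regular-pentagon example is hypothetical, not taken from an actual ChatGPT transcript) that computes the vertices a plotting library would connect:

```python
import math

def regular_polygon_vertices(n, radius=1.0):
    """Vertices of a regular n-gon centred at the origin."""
    return [
        (radius * math.cos(2 * math.pi * k / n),
         radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]

# A regular pentagon: every side should come out the same length.
pentagon = regular_polygon_vertices(5)
sides = [
    math.dist(pentagon[k], pentagon[(k + 1) % 5])
    for k in range(5)
]
assert all(abs(s - sides[0]) < 1e-9 for s in sides)
```

Code like this is correct geometry; the failure shows up when the model is asked to render the shape directly (e.g. as ASCII art) rather than describe it in code.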
It is poor at detecting whether a text was written by an AI or a human
Although it can solve some programming problems, it often fails at math problems
It fails Cognitive Reflection Test questions on the first attempt: it gives the intuitive answer, which is incorrect. If prompted to think step by step, it produces the correct solution
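The classic CRT item is the bat-and-ball problem: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive answer (ball = $0.10) is wrong; working through the algebra gives $0.05. A minimal check in Python:

```python
# Bat-and-ball problem from the Cognitive Reflection Test:
#   bat + ball = 1.10   and   bat = ball + 1.00
# Substituting: ball + (ball + 1.00) = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05
total = 1.10
difference = 1.00
ball = (total - difference) / 2
bat = ball + difference

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")

# The intuitive answer (ball = $0.10) does not satisfy the total:
intuitive_ball = 0.10
assert abs((ball + bat) - total) < 1e-9
assert abs((intuitive_ball + (intuitive_ball + difference)) - total) > 0.05
```

This is exactly the pattern described above: the model blurts out $0.10 unless nudged into the step-by-step substitution.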
It is not aware of the layers and parameters of its own model. This may be an override by the OpenAI team to avoid exposing details about the model
If you know of any use case where ChatGPT falls short, please DM me