While AI has made tremendous progress and has become a valuable tool in many domains, it is not a replacement for humans’ unique qualities and capabilities. In many cases, the most effective approach involves humans working alongside AI, each leveraging the other’s strengths to achieve the best outcomes. There are fundamental differences between human and artificial intelligence, and there are tasks and domains where human intelligence remains superior.
Humans can think creatively, imagine new concepts, and innovate. AI systems are limited by the data and patterns they have been trained on and often struggle with truly novel and creative tasks. The question, however, is whether an average human can outperform an AI model.
Researchers compared the creativity of humans (n = 256) with that of three current AI chatbots, ChatGPT-3.5, ChatGPT-4, and Copy.ai, using the Alternate Uses Task (AUT), a divergent thinking task. The AUT is a cognitive method used in psychology and creativity research to assess an individual’s ability to generate creative and novel ideas in response to a specific stimulus. It measures a person’s capacity for divergent thinking: the ability to think broadly and generate multiple solutions or ideas from a single problem.
Participants were asked to generate uncommon and creative uses for everyday objects. The AUT consisted of four tasks, one for each of four objects: rope, box, pencil, and candle. The human participants were instructed to focus on the quality of their ideas rather than on sheer quantity. Each chatbot was tested in 11 separate sessions, each containing the four object prompts; within a session, each object was presented only once.
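To make that testing structure concrete, here is a minimal Python sketch (not code from the study); the `ask_chatbot` function is a hypothetical placeholder for whatever chatbot interface is actually queried, and the dummy ideas it returns are purely illustrative.

```python
OBJECTS = ["rope", "box", "pencil", "candle"]
N_SESSIONS = 11  # each chatbot was tested in 11 separate sessions


def ask_chatbot(obj: str) -> list[str]:
    """Hypothetical placeholder for a chatbot call: send the AUT prompt for one
    object and return the list of proposed uses. Dummy ideas are returned here."""
    return [f"use a {obj} as a paperweight", f"turn a {obj} into wall art"]


def run_aut_sessions() -> dict[int, dict[str, list[str]]]:
    """Collect chatbot responses for all sessions.

    Within each session, each of the four objects is prompted exactly once.
    """
    results = {}
    for session in range(1, N_SESSIONS + 1):
        results[session] = {obj: ask_chatbot(obj) for obj in OBJECTS}
    return results


if __name__ == "__main__":
    responses = run_aut_sessions()
    print(responses[1]["rope"])  # ideas for "rope" from the first session
```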
To evaluate the results, the researchers collected subjective creativity and originality ratings from six professionally trained human raters. The order in which responses within each object category were presented was randomized separately for each rater. Each rater’s scores were averaged across all the responses that a participant, or a chatbot in a given session, gave for an object, and the final subjective score for each object was formed by averaging the six raters’ scores.
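The following short Python sketch illustrates that two-step aggregation for a single participant (or chatbot session) and a single object. The rating values and the 1–5 scale are invented for illustration; only the averaging procedure reflects the description above.

```python
from statistics import mean

# Hypothetical ratings: each of the six raters scores every response that one
# participant (or one chatbot session) gave for one object, e.g. on a 1-5 scale.
# ratings[rater][i] is that rater's originality score for the i-th response.
ratings = {
    "rater_1": [2, 4, 3, 5],
    "rater_2": [3, 4, 2, 5],
    "rater_3": [2, 3, 3, 4],
    "rater_4": [3, 5, 2, 4],
    "rater_5": [2, 4, 3, 5],
    "rater_6": [3, 4, 3, 4],
}

# Step 1: average each rater's scores across all responses to the object.
per_rater_means = {rater: mean(scores) for rater, scores in ratings.items()}

# Step 2: the final subjective score for the object is the mean of the six rater means.
object_score = mean(per_rater_means.values())

print(per_rater_means)
print(f"Final subjective score for this object: {object_score:.2f}")
```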
On average, the AI chatbots outperformed the human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity, which may be difficult to fully replicate or surpass with AI technology.
However, AI technology is developing rapidly, and the results may look different within half a year. Based on the present study, the clearest weakness in human performance lies in the relatively high proportion of poor-quality ideas, which were absent from the chatbot responses. This weakness may stem from normal variation in human performance, including failures in associative and executive processes as well as motivational factors.
Check out the paper for the full details of the study.
Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn lead to advances in technology. He is passionate about understanding nature at a fundamental level with the help of tools like mathematical models, ML models, and AI.