Humor can enhance human performance and motivation, and it is crucial in relationship building. It is an effective tool for influencing mood and directing attention. A computational sense of humor therefore has the potential to greatly improve human-computer interaction (HCI). Unfortunately, although computational humor is a long-standing area of study, the systems developed so far are far from “funny.” The problem is even considered AI-complete. However, continuous improvements and recent machine learning (ML) breakthroughs open up a wide range of new applications and present new opportunities for natural language processing (NLP).
Transformer-based large language models (LLMs) increasingly capture and reflect implicit knowledge, including morality, humor, and stereotypes. Humor is often subliminal and driven by small nuances, so these new properties of LLMs give reason for optimism about future developments in artificial humor. OpenAI’s ChatGPT recently garnered a lot of attention for its innovative capabilities. Users can hold conversation-like exchanges with the model via the public chat API, and the system can respond to a wide range of queries while taking the preceding dialogue into account. As seen in Fig. 1, it can even tell jokes. Fun to use, ChatGPT interacts on a seemingly human level.
However, users quickly notice the model’s shortcomings as they interact with it. Although ChatGPT produces almost error-free English text, it occasionally makes grammatical and content-related mistakes. In a preliminary investigation, the researchers found that ChatGPT is likely to repeat the same jokes regularly. At the same time, the jokes it offered were well-formed and nuanced. These findings suggested that the model did not create the jokes it produced; instead, they were copied from the training data or even hard-coded in a list. Because the system’s internals are not disclosed, the researchers ran several structured, prompt-based experiments to learn about the system’s behavior and draw inferences about how ChatGPT generates its output.
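The repetition finding can be probed with a simple analysis: collect many joke responses, normalize their wording, and count duplicates. The sketch below is illustrative only; the sample jokes and the normalization scheme are stand-ins, not the study’s actual data or method.

```python
from collections import Counter
import re

def normalize(joke: str) -> str:
    """Lowercase and strip punctuation so near-identical wordings
    of the same joke collapse to a single key."""
    return re.sub(r"[^a-z0-9 ]", "", joke.lower()).strip()

def repetition_stats(jokes: list) -> Counter:
    """Count how often each normalized joke appears in a sample."""
    return Counter(normalize(j) for j in jokes)

# Stand-in responses; a real experiment would gather many model outputs.
sample = [
    "Why did the scarecrow win an award? Because he was outstanding in his field.",
    "Why did the scarecrow win an award? Because he was outstanding in his field!",
    "Why don't scientists trust atoms? Because they make up everything.",
]
counts = repetition_stats(sample)
```

Under this normalization, the first two responses collapse to one entry, so a heavily skewed `counts` distribution over a large sample would indicate memorized rather than freshly generated jokes.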
Researchers from the German Aerospace Center (DLR), the Darmstadt University of Technology, and the Hessian Center for AI want to know, through systematic prompt-based experiments, how well ChatGPT can capture human humor. Their main contribution comprises three experimental conditions: joke generation, joke explanation, and joke detection. The vocabulary of artificial intelligence frequently draws comparisons to human traits, as in “neural networks” or the phrase “artificial intelligence” itself. Furthermore, human-related terms are common when talking about conversational agents, whose goal is to emulate human behavior as closely as possible; for example, ChatGPT “understands” or “explains”.
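The three conditions can be operationalized as prompt templates fed to the model. The template wording and the `build_prompt` helper below are hypothetical assumptions for illustration, not the exact prompts used in the study.

```python
# Hypothetical prompt templates for the three experimental conditions;
# the study's exact wording is not reproduced here.
PROMPTS = {
    "generation": "Tell me a joke, please.",
    "explanation": "Can you explain why this joke is funny? {joke}",
    "detection": "Is the following text a joke? Answer yes or no. {text}",
}

def build_prompt(condition: str, **fields: str) -> str:
    """Fill in the template for one experimental condition."""
    return PROMPTS[condition].format(**fields)
```

Repeating the generation prompt many times, then feeding the outputs back through the explanation and detection prompts, mirrors the structured setup the researchers describe.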
Although these comparisons may capture the behavior and inner workings of the system well, they can be misleading. The researchers want to clarify that the AI models under discussion are not at a human level and are, at best, simulations of the human mind. This study does not attempt to answer the philosophical question of whether AI can consciously think or understand.
Check out the Paper and GitHub link.
Aneesh Tickoo is a consulting intern at MarktechPost. She is currently pursuing her bachelor’s degree in Information Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. She spends most of her time working on projects aimed at harnessing the power of machine learning. Her research interest is image processing, and she is passionate about building solutions around it. She loves connecting with people and collaborating on interesting projects.