Human-like generative agents are commonly used in chatbots and virtual assistants to provide natural, engaging interactions with users. They can understand and respond to user queries, participate in conversations, and perform tasks such as answering questions and making recommendations. These agents are typically built using natural language processing (NLP) techniques and large language models, such as GPT-3, to produce coherent and contextually relevant responses. They can also create interactive stories, dialogue, and characters in video games or virtual worlds, enhancing the gaming experience.
Human-like generative agents can help writers and creatives brainstorm ideas, develop plots, or even compose poetry and music. However, this process differs from how humans actually think: humans constantly adapt their plans in response to changes in their physical environment. Researchers from the University of Washington and the University of Hong Kong propose Humanoid Agents, a platform that guides generative agents to behave more like humans by introducing several new elements.
Inspired by human psychology, the researchers adopt a two-system mechanism: System 1 is responsible for intuitive, effortless thinking, while System 2 is responsible for deliberate, logical reasoning. To influence the behavior of the agents, they introduce aspects such as basic needs, emotions, and the closeness of their social relationships with other agents.
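These aspects can be pictured as a small internal state carried by each agent. The sketch below is purely illustrative; the field names and value scales are assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Hypothetical internal state of a humanoid agent (illustrative only)."""
    # Basic needs on an assumed 0-10 scale (10 = fully satisfied)
    fullness: int = 10
    social: int = 10
    fun: int = 10
    health: int = 10
    energy: int = 10
    # Current emotion, drawn from a small fixed label set
    emotion: str = "neutral"
    # Closeness to each other agent, keyed by that agent's name
    closeness: dict = field(default_factory=dict)

state = AgentState()
state.closeness["Alice"] = 5  # a mid-level relationship on the assumed scale
print(state.emotion, state.closeness)
```

A planner (System 2) could consult this state each step, while fast reactions to need changes play the role of System 1.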
Humanoid agents need to satisfy basic needs, including interaction with others; when these needs go unmet, the agents receive negative feedback such as loneliness, sickness, and tiredness.
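A minimal sketch of this feedback loop might look like the following, where each unmet need maps to a negative state. The need names, decay rate, and threshold are assumptions for illustration, not the paper's actual parameters.

```python
# Assumed mapping from a depleted basic need to the negative state it produces
NEGATIVE_FEEDBACK = {
    "social": "lonely",
    "health": "sick",
    "energy": "tired",
}

def step_needs(needs: dict, decay: int = 1, threshold: int = 3) -> list:
    """Decay each need by one simulation step and return resulting negative states."""
    feelings = []
    for name in needs:
        needs[name] = max(0, needs[name] - decay)  # needs deplete over time
        if name in NEGATIVE_FEEDBACK and needs[name] < threshold:
            feelings.append(NEGATIVE_FEEDBACK[name])
    return feelings

needs = {"social": 3, "health": 8, "energy": 2}
print(step_needs(needs))  # social and energy fall below the threshold
```

An agent could then feed these negative states back into its planning step, e.g. seeking out conversation when it becomes lonely.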
The social brain hypothesis proposes that much of our cognitive capacity evolved to track the quality of social relationships, and people adjust how they interact with others accordingly. To mimic this behavior, the researchers enable humanoid agents to adjust their conversations based on how close they are to one another. Users can view the agents through a Unity WebGL game interface and inspect the simulated agent states over time in an interactive analytics dashboard.
They created a sandbox game environment using the Unity WebGL engine to visualize humanoid agents in their worlds. Users can select one of three worlds and view each agent's status and location at every time step. The game interface ingests structured JSON files produced by the simulated worlds and transforms them into animations. They also built a dashboard with Plotly Dash to visualize the status of the various humanoid agents over time.
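To make the pipeline concrete, one per-timestep record in such a structured JSON file might look like the sketch below. The schema here is a guess at the kind of fields a visualizer would need (step, agent, location, needs, emotion); the authors' actual file format may differ.

```python
import json

# Hypothetical shape of one record the game interface / dashboard could ingest
record = json.loads("""
{
  "step": 12,
  "agent": "John",
  "location": "kitchen",
  "activity": "making breakfast",
  "basic_needs": {"fullness": 4, "social": 6, "fun": 5, "health": 9, "energy": 7},
  "emotion": "neutral"
}
""")

# A Plotly Dash dashboard can then plot fields such as basic_needs per agent
# across steps, while the Unity WebGL front end animates agent and location.
print(record["agent"], record["location"], record["basic_needs"]["energy"])
```

Keeping the simulation output in plain JSON like this is what lets two very different front ends (a game engine and an analytics dashboard) consume the same data.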
Currently, the system supports dialogues between only two agents, with support for multi-party conversations as a goal for future work. Because the agents operate in a simulation that does not perfectly reflect real-world human behavior, users must be informed that they are interacting with a simulation. Despite their capabilities, it is essential to consider ethical and privacy concerns when using human-like generative agents, such as the possibility of spreading misinformation, biases in training data, and the need for responsible use and monitoring.
Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Master's degree in Physics at the Indian Institute of Technology Kharagpur. He is passionate about understanding nature at a fundamental level with the help of tools such as mathematical models, machine learning models, and artificial intelligence, believing that such understanding leads to new discoveries and the advancement of technology.