Introduction
Chatbots have transformed the way we interact with technology, enabling automated and intelligent conversations across multiple domains. Building these chat systems can be challenging, especially when flexibility and scalability are required. AutoGen simplifies this process by leveraging AI agents, which handle complex dialogues and tasks autonomously. In this article, we will explore how to create agent chatbots using AutoGen and its powerful agent-based framework, which makes building intelligent, adaptive chatbots easier than ever.
Overview
- Find out what the AutoGen framework is all about and what it can do.
- See how you can create chatbots that can hold conversations with each other, respond to human queries, perform web searches, and do even more.
- Learn the setup requirements and prerequisites needed to create agent chatbots using AutoGen.
- Learn how to improve chatbots by integrating tools like Tavily for web search.
What is AutoGen?
In AutoGen, all interactions are modeled as conversations between agents. This chat-based agent-to-agent communication streamlines workflow, making it intuitive to start building chatbots. The framework also offers flexibility by supporting various conversation patterns such as sequential chats, group chats, and more.
Let's explore the capabilities of AutoGen's chatbot as we create different types of chatbots:
- Dialectic between agents: Two experts in a field discuss a topic and try to resolve their contradictions.
- Interview preparation chatbot: We will use an agent that prepares you for interviews by asking questions and evaluating your answers.
- Chat with a web search tool: We will build a chatbot that can fetch information from the web through a search tool.
More information: Autogen: Exploring the basics of a multi-agent framework
Prerequisites
Before creating AutoGen agents, make sure you have the API keys required for LLMs. We will also use Tavily to search the web.
Accessing via API
In this article, we use API keys from OpenAI and Groq. Groq offers free access to many open-source LLMs within certain rate limits.
We can use any LLM we prefer. Start by generating API keys for the LLM and for the Tavily search tool.
Create a .env file to securely store these keys, keeping them private and easily accessible within your project.
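As a minimal sketch of what loading the .env file can look like, here is a standard-library-only loader (the popular python-dotenv package does the same via load_dotenv()). The key names below are the environment variables the OpenAI, Groq, and Tavily clients read by default; the .env contents shown are placeholders, not real keys.

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault avoids overwriting keys already set in the shell
            os.environ.setdefault(key.strip(), value.strip())

# Example .env contents (placeholders):
# OPENAI_API_KEY=sk-...
# GROQ_API_KEY=gsk_...
# TAVILY_API_KEY=tvly-...
```

With the keys in the environment, the client libraries pick them up automatically and no key ever needs to appear in the code itself.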
Required Libraries
autogen-agentchat – 0.2.36
tavily-python – 0.5.0
groq – 0.7.0
openai – 1.46.0
Dialectic between agents
Dialectics is a method of argumentation or reasoning that seeks to explore and resolve contradictions or opposing points of view. We will let two LLMs engage in a dialectic using AutoGen agents.
Let's create our first agent:
from autogen import ConversableAgent

agent_1 = ConversableAgent(
    name="expert_1",
    system_message="""You are participating in a Dialectic about concerns of Generative AI with another expert.
    Make your points on the thesis concisely.""",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "temperature": 0.5}]},
    code_execution_config=False,
    human_input_mode="NEVER",
)
Code explanation
- ConversableAgent: This is the base class for creating customizable agents that can talk and interact with other agents, people, and tools to solve tasks.
- system_message: This parameter defines the agent's role and purpose in the conversation. In this case, agent_1 is instructed to engage in a dialectic about Generative AI, making concise points in support of the thesis.
- llm_config: This setting specifies the language model to use, here "gpt-4o-mini". Additional parameters, such as temperature=0.5, control the creativity and variability of the model's responses.
- code_execution_config=False: This indicates that there are no code execution capabilities enabled for the agent.
- human_input_mode="NEVER": This configuration ensures that the agent does not depend on human intervention and works completely autonomously.
Now let's create the second agent:
agent_2 = ConversableAgent(
    "expert_2",
    system_message="""You are participating in a Dialectic about concerns of Generative AI with another expert.
    Make your points on the anti-thesis concisely.""",
    llm_config={"config_list": [{"api_type": "groq", "model": "llama-3.1-70b-versatile", "temperature": 0.3}]},
    code_execution_config=False,
    human_input_mode="NEVER",
)
Here we will use Groq's Llama 3.1 model. To learn how to configure different LLMs, refer to the AutoGen documentation.
Let's start the chat:
result = agent_1.initiate_chat(
    agent_2,
    message="""The nature of data collection for training AI models poses inherent privacy risks""",
    max_turns=3,
    silent=False,
    summary_method="reflection_with_llm",
)
Code explanation
In this code, agent_1 starts a conversation with agent_2 using the provided message.
- max_turns=3: This limits the conversation to three exchanges between agents before it automatically ends.
- silent=False: This will show the conversation in real time.
- summary_method='reflection_with_llm': This employs a large language model (LLM) to summarize all dialogue between agents once the conversation concludes, providing a thoughtful summary of their interaction.
You can go over the entire dialectic using the chat_history attribute.
Here is the result:
len(result.chat_history)
>>> 6
# each agent has 3 replies

# we can also check the cost incurred
print(result.cost)

# get the chat history
print(result.chat_history)

# finally, the summary of the chat
print(result.summary)
Interview preparation chatbot
In addition to having two agents chat with each other, we can also chat with an AI agent ourselves. Let's test this by creating an agent that can be used to prepare for interviews.
interviewer = ConversableAgent(
    "interviewer",
    system_message="""You are interviewing to select for the Generative AI intern position.
    Ask suitable questions and evaluate the candidate.""",
    llm_config={"config_list": [{"api_type": "groq", "model": "llama-3.1-70b-versatile", "temperature": 0.0}]},
    code_execution_config=False,
    human_input_mode="NEVER",
    # max_consecutive_auto_reply=2,
    is_termination_msg=lambda msg: "goodbye" in msg["content"].lower(),
)
Code explanation
Use system_message to define the agent role.
To end the conversation, we can use either of the following two parameters:
- max_consecutive_auto_reply: This parameter limits the number of consecutive responses that an agent can send. Once the agent reaches this limit, the conversation automatically ends, preventing it from continuing indefinitely.
- is_termination_msg: This parameter checks if a message contains a specific predefined keyword. When this keyword is detected, the conversation ends automatically.
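To make the termination check concrete, here is a minimal sketch of how an is_termination_msg predicate evaluates incoming messages. The messages below are hypothetical examples, not real chat output:

```python
# The predicate receives a message dict and returns True when the
# conversation should end (here: when "goodbye" appears in the content).
is_termination_msg = lambda msg: "goodbye" in msg["content"].lower()

print(is_termination_msg({"content": "Thanks for your time. Goodbye!"}))  # True -> chat ends
print(is_termination_msg({"content": "Tell me about transformers."}))     # False -> chat continues
```

Because the check is just a Python callable, you can make it as simple or as elaborate as you need, such as matching several keywords or inspecting the message role.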
candidate = ConversableAgent(
    "candidate",
    system_message="""You are attending an interview for the Generative AI intern position.
    Answer the questions accordingly.""",
    llm_config=False,
    code_execution_config=False,
    human_input_mode="ALWAYS",
)
Since the user will provide the answers, we use human_input_mode="ALWAYS" and llm_config=False.
Now we can initialize the mock interview:
result = candidate.initiate_chat(
    interviewer,
    message="Hi, thanks for calling me.",
    summary_method="reflection_with_llm",
)
# we can get the summary of the conversation too
print(result.summary)
Chat with web search
Now, let's create a chatbot that can search the internet to answer our queries.
To do this, first, define a function that searches the web using Tavily.
from tavily import TavilyClient
from autogen import register_function
def web_search(query: str):
    tavily_client = TavilyClient()
    response = tavily_client.search(query, max_results=3)
    return response["results"]
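To give a sense of what web_search hands back, here is a small sketch of formatting such a result list for display. The sample entries are illustrative only; per Tavily's API documentation, each result carries fields such as "title", "url", and "content":

```python
def format_results(results: list) -> str:
    """Render a list of search-result dicts as a readable bullet list."""
    return "\n".join(f"- {r['title']} ({r['url']})" for r in results)

# Illustrative sample data mimicking the shape of Tavily results
sample = [
    {"title": "Nobel Prize 2024", "url": "https://example.com/a", "content": "..."},
    {"title": "Laureates list", "url": "https://example.com/b", "content": "..."},
]
print(format_results(sample))
```

In the agent setup below, the raw result list is passed back to the LLM, which summarizes it itself, so a formatter like this is only needed if you want to inspect the results directly.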
Next, we create an assistant agent that decides whether to call the tool or terminate the chat:
assistant = ConversableAgent(
    name="Assistant",
    system_message="""You are a helpful AI assistant. You can search the web to get results.
    Return 'TERMINATE' when the task is done.""",
    llm_config={"config_list": [{"model": "gpt-4o-mini"}]},
    silent=True,
)
The user proxy agent interacts with the assistant agent and executes the tool calls.
user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="TERMINATE",
)
When the termination condition is met, the agent will request human intervention. We can then continue the conversation or end the chat.
Register the function for the two agents:
register_function(
    web_search,
    caller=assistant,  # The assistant agent can suggest calls to the web search tool.
    executor=user_proxy,  # The user proxy agent executes the web search calls.
    name="web_search",  # By default, the function name is used as the tool name.
    description="Searches the internet to get results for a given query",  # A description of the tool.
)
Now we can ask our question:
chat_result = user_proxy.initiate_chat(assistant, message="Who won the Nobel prizes in 2024?")

# Depending on the length of the chat history, we can access the necessary content
print(chat_result.chat_history[5]["content"])
In this way, we can build different types of agent chatbots using AutoGen.
Also read: Creating strategic teams with AutoGen ai
Conclusion
In this article, we learned how to create agent chatbots using AutoGen and explored its various capabilities. With its agent-based architecture, developers can create flexible and scalable bots capable of complex interactions, such as dialectics and web searches. AutoGen's easy configuration and tool integration allow users to create custom conversational agents for various applications. As AI-powered communication evolves, AutoGen serves as a valuable framework to simplify and improve chatbot development, enabling engaging interactions with users.
To master AI agents, check out our Pioneering AI Agent Program.
Frequently asked questions
Q. What is AutoGen?
A. AutoGen is a framework that simplifies chatbot development by using an agent-based architecture, enabling flexible and scalable conversational interactions.
Q. Does AutoGen support different conversation patterns?
A. Yes, AutoGen supports various conversation patterns, including sequential and group chats, allowing developers to customize interactions to their needs.
Q. How does AutoGen handle conversations between multiple agents?
A. AutoGen uses agent-to-agent communication, allowing multiple agents to participate in structured dialogues, such as dialectics, making it easier to manage complex conversational scenarios.
Q. How do you end a chat in AutoGen?
A. You can end a chat in AutoGen using parameters like `max_consecutive_auto_reply`, which limits the number of consecutive replies, or `is_termination_msg`, which searches for specific keywords in the messages to trigger automatic termination. We can also use `max_turns` to limit the conversation.
Q. Can AutoGen agents use external tools?
A. Yes. AutoGen allows agents to use external tools, such as Tavily for web searches, by registering functions that agents can call during conversations, enhancing the chatbot's capabilities with real-time data and additional functionality.