Today, I’ll be publishing a series of posts on LLM agents and how they can help you improve your delivery capabilities for various tasks.
Also, we’re providing the demo here –
Isn’t it exciting?
Process Flow:

The application interacts with the AutoGen agents, which use the underlying OpenAI APIs to follow the instructions, generate the steps, and then follow that path to produce the desired code. Finally, it executes the generated scripts if the first outcome of the demo satisfies the users.
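The snippets below assume a shared setup that the post does not show. A minimal sketch, assuming the pyautogen package and a hypothetical OpenAI configuration (the working-directory name, model name, and key placeholder are all assumptions, not from the post):

```python
# Hypothetical setup assumed by the snippets that follow (not shown in the post).
# pip install pyautogen
import autogen

# Directory where the agents write and execute generated scripts (name is illustrative).
WORK_DIR = "codes"

# OpenAI model configuration; model name and key are placeholders.
config_list = [
    {
        "model": "gpt-4",
        "api_key": "sk-...",  # replace with your own API key
    }
]
```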
CODE:
Let us understand some of the key snippets –
Creating the Assistant Agent:
```python
# Create the assistant agent
assistant = autogen.AssistantAgent(
    name="AI_Assistant",
    llm_config={
        "config_list": config_list,
    },
)
```

Purpose: This line creates an AI assistant agent named “AI_Assistant”.
Function: It uses a language model configuration provided in config_list to define how the assistant behaves.
Role: The assistant serves as the primary agent that coordinates with the other agents to solve problems.
Creating the User Proxy Agent:
```python
# Create the user proxy agent
user_proxy = autogen.UserProxyAgent(
    name="Admin",
    system_message=templateVal_1,
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": WORK_DIR,
        "use_docker": False,
    },
)
```

Purpose: This code creates a user proxy agent named “Admin”.
Function:
- System Message: Uses templateVal_1 as its initial message to set the context.
- Human Input Mode: Set to "TERMINATE", meaning the agent asks for human input only when a termination condition is met; until then, it keeps interacting automatically.
- Auto-Reply Limit: Can automatically reply up to 10 times without human intervention.
- Termination Condition: A message is considered a termination message if it ends with the word “TERMINATE”.
- Code Execution: Configured to execute code in the directory specified by WORK_DIR without using Docker.
Role: Acts as an intermediary between the user and the assistant, handling interactions and managing the conversation flow.
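The termination predicate passed to is_termination_msg is a plain function over a message dict, so it can be checked in isolation. A small sketch (the sample messages are made up):

```python
# The same predicate used in the UserProxyAgent above.
is_termination_msg = lambda x: x.get("content", "").rstrip().endswith("TERMINATE")

print(is_termination_msg({"content": "All done.\nTERMINATE"}))  # True
print(is_termination_msg({"content": "Still working on it."}))  # False
print(is_termination_msg({}))  # False: a missing "content" key defaults to ""
```

The .rstrip() call makes the check tolerant of trailing whitespace or newlines after the TERMINATE keyword.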
Creating the Engineer Agent:
```python
# Create the engineer agent
engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config={
        "config_list": config_list,
    },
    system_message=templateVal_2,
)
```

Purpose: Creates an assistant agent named “Engineer”.
Function: Uses templateVal_2 as its system message to define its expertise in engineering matters.
Role: Specializes in technical and engineering aspects of the problem.
Creating the Game Designer Agent:
```python
# Create the game designer agent
game_designer = autogen.AssistantAgent(
    name="GameDesigner",
    llm_config={
        "config_list": config_list,
    },
    system_message=templateVal_3,
)
```

Purpose: Creates an assistant agent named “GameDesigner”.
Function: Uses templateVal_3 to set its focus on game design.
Role: Provides insights and solutions related to game design aspects.
Creating the Planner Agent:
```python
# Create the planner agent
planner = autogen.AssistantAgent(
    name="Planner",
    llm_config={
        "config_list": config_list,
    },
    system_message=templateVal_4,
)
```

Purpose: Creates an assistant agent named “Planner”.
Function: Uses templateVal_4 to define its role in planning.
Role: Responsible for organizing and planning tasks to solve the problem.
Creating the Critic Agent:
```python
# Create the critic agent
critic = autogen.AssistantAgent(
    name="Critic",
    llm_config={
        "config_list": config_list,
    },
    system_message=templateVal_5,
)
```

Purpose: Creates an assistant agent named “Critic”.
Function: Uses templateVal_5 to define its role as a critic.
Role: Provides feedback, critiques solutions, and helps improve the overall response.
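The snippets above create five agents, but the chat shown later only wires user_proxy to assistant. One common way to let all of them converse (an assumption on my part, not shown in the post) is AutoGen’s GroupChat plus a GroupChatManager:

```python
# Sketch: place all agents into one conversation (variable names reuse those above).
groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, game_designer, planner, critic],
    messages=[],
    max_round=20,
)
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list},
)

# The user proxy would then address the manager instead of a single agent:
# user_proxy.initiate_chat(manager, message="Build a small game.")
```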
Setting Up Logging:
```python
# Configure logging
logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)
```

Purpose: Configures the logging system.
Function: Sets the logging level to only capture error messages to avoid cluttering the output.
Role: Helps in debugging by capturing and displaying error messages.
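Because the root level is set to ERROR, anything below that severity is filtered out. A quick self-contained check:

```python
import logging

# Same configuration as above: only ERROR and above are emitted.
logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

logger.info("Suppressed: below the ERROR threshold.")  # not printed
logger.error("Shown: meets the ERROR threshold.")      # printed to stderr

print(logger.isEnabledFor(logging.ERROR))  # True
print(logger.isEnabledFor(logging.INFO))   # False
```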
Defining the buildAndPlay Method:
```python
def buildAndPlay(self, inputPrompt):
    try:
        user_proxy.initiate_chat(
            assistant,
            message=f"We need to solve the following problem: {inputPrompt}. "
                    "Please coordinate with the admin, engineer, game_designer, "
                    "planner and critic to provide a comprehensive solution."
        )
        return 0
    except Exception as e:
        print('Error: <<buildAndPlay>>: ', str(e))
        return 1
```

Purpose: Defines a method to initiate the problem-solving process.
Function:
- Parameters: Takes inputPrompt, which is the problem to be solved.
- Action: Calls user_proxy.initiate_chat() to start a conversation between the user proxy agent and the assistant agent, sending a message that requests coordination among all the agents to provide a comprehensive solution to the problem.
- Error Handling: If an exception occurs, it prints an error message and returns 1; otherwise, it returns 0.
Role: Initiates collaboration among all agents to solve the provided problem.
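The return-code convention (0 on success, 1 on failure) can be exercised with a stand-in for initiate_chat. A minimal sketch, with build_and_play and the lambdas as illustrative names rather than the post’s actual code:

```python
def build_and_play(initiate, input_prompt):
    """Mirror buildAndPlay's control flow: 0 on success, 1 on any exception."""
    try:
        initiate(f"We need to solve the following problem: {input_prompt}.")
        return 0
    except Exception as e:
        print('Error: <<build_and_play>>:', e)
        return 1

print(build_and_play(lambda msg: None, "build a snake game"))   # 0
print(build_and_play(lambda msg: 1 / 0, "build a snake game"))  # 1
```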
Summary of the Workflow:
Agents Setup: Multiple agents with specialized roles are created.
Initiating Conversation: The buildAndPlay method starts a conversation, asking agents to collaborate.
Problem Solving: Agents communicate and coordinate to provide a comprehensive solution to the input problem.
Error Handling: The system captures and logs any errors that occur during execution.
We’ll continue to discuss this topic in the upcoming post.
I’ll bring some more exciting topics in the coming days from the Python verse.
Till then, Happy Avenging! 🙂
Note: All the data & scenarios posted here are representational data & scenarios & available over the internet & for educational purposes only. There is always room for improvement in this kind of model & the solution associated with it. I’ve shown the basic ways to achieve the same for educational purposes only.