AGENTIC AI IN THE ENTERPRISE: STRATEGY, ARCHITECTURE, AND IMPLEMENTATION – PART 3

This is a continuation of my previous post, which can be found here.

Let us recap the key takeaways from our previous post –

Enterprise AI, utilizing the Model Context Protocol (MCP), leverages an open standard that enables AI systems to securely and consistently access enterprise data and tools. MCP replaces brittle “N×M” integrations between models and systems with a standardized client–server pattern: an MCP host (e.g., IDE or chatbot) runs an MCP client that communicates with lightweight MCP servers, which wrap external systems via JSON-RPC. Servers expose three assets—Resources (data), Tools (actions), and Prompts (templates)—behind permissions, access control, and auditability. This design enables real-time context, reduces hallucinations, supports model- and cloud-agnostic interoperability, and accelerates “build once, integrate everywhere” deployment. A typical flow (e.g., retrieving a customer’s latest order) encompasses intent parsing, authorized tool invocation, query translation/execution, and the return of a normalized JSON result to the model for natural-language delivery. The protocol introduces modest performance overhead (RPC hops, JSON (de)serialization, network transit) and scale considerations (request volume, large result sets, context-window pressure). Mitigations include in-memory/semantic caching, optimized SQL with indexing, pagination, and filtering, connection pooling, and horizontal scaling with load balancing. In practice, small latency costs are often outweighed by the benefits of higher accuracy, stronger governance, and a decoupled, scalable architecture.

Compared to other approaches, the Model Context Protocol (MCP) offers a uniquely standardized and secure framework for AI-tool integration, shifting from brittle, custom-coded connections to a universal plug-and-play model. It is not a replacement for underlying systems, such as APIs or databases, but instead acts as an intelligent, secure abstraction layer designed explicitly for AI agents.

Custom API integration was the traditional method for AI integration before standards like MCP emerged.

  • Custom API integrations (traditional): Each AI application requires a custom-built connector for every external system it needs to access, leading to an N x M integration problem (the number of connectors grows multiplicatively with the number of models and systems). This approach is resource-intensive, challenging to maintain, and prone to breaking when underlying APIs change.
  • MCP: The standardized protocol eliminates the N x M problem by creating a universal interface. Tool creators build a single MCP server for their system, and any MCP-compatible AI agent can instantly access it. This process decouples the AI model from the underlying implementation details, drastically reducing integration and maintenance costs.
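The scaling difference can be sketched with a quick back-of-the-envelope calculation (the counts below assume every model must reach every system; the function names are illustrative):

```python
def custom_connectors(models: int, systems: int) -> int:
    """Point-to-point integration: every model needs its own connector per system."""
    return models * systems

def mcp_connectors(models: int, systems: int) -> int:
    """MCP: each model ships one client, each system exposes one server."""
    return models + systems

# With 5 models and 20 enterprise systems:
print(custom_connectors(5, 20))  # → 100 bespoke connectors to build and maintain
print(mcp_connectors(5, 20))     # → 25 standardized components
```

Adding a sixth model costs another 20 custom connectors in the first scheme, but only one additional MCP client in the second.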

For more detailed information, please refer to the following link.

RAG is a technique that retrieves static documents to augment an LLM’s knowledge, while MCP focuses on live interactions. They are complementary, not competing. 

  • RAG:
    • Focus: Retrieving and summarizing static, unstructured data, such as documents, manuals, or knowledge bases.
    • Best for: Providing background knowledge and general information, as in a policy lookup tool or customer service bot.
    • Data type: Unstructured, static knowledge.
  • MCP:
    • Focus: Accessing and acting on real-time, structured, and dynamic data from databases, APIs, and business systems.
    • Best for: Agentic use cases involving real-world actions, like pulling live sales reports from a CRM or creating a ticket in a project management tool.
    • Data type: Structured, real-time, and dynamic data.

Before MCP, platforms like OpenAI offered proprietary plugin systems to extend LLM capabilities.

  • LLM plugins:
    • Proprietary: Tied to a specific AI vendor (e.g., OpenAI).
    • Limited: Rely on the vendor’s API function-calling mechanism, which focuses on call formatting but not standardized execution.
    • Centralized: Managed by the AI vendor, creating a risk of vendor lock-in.
  • MCP:
    • Open standard: Based on a public, interoperable protocol (JSON-RPC 2.0), making it model-agnostic and usable across different platforms.
    • Infrastructure layer: Provides a standardized infrastructure for agents to discover and use any compliant tool, regardless of the underlying LLM.
    • Decentralized: Promotes a flexible ecosystem and reduces the risk of vendor lock-in. 

The “agent factory” pattern: Azure focuses on providing managed services for building and orchestrating AI agents, tightly integrated with its enterprise security and governance features. The MCP architecture is a core component of the Azure AI Foundry, serving as a secure, managed “agent factory.” 

  • AI orchestration layer: The Azure AI Agent Service, within Azure AI Foundry, acts as the central host and orchestrator. It provides the control plane for creating, deploying, and managing multiple specialized agents, and it natively supports the MCP standard.
  • AI model layer: Agents in the Foundry can be powered by various models, including those from Azure OpenAI Service, commercial models from partners, or open-source models.
  • MCP server and tool layer: MCP servers are deployed using serverless functions, such as Azure Functions or Azure Logic Apps, to wrap existing enterprise systems. These servers expose tools for interacting with enterprise data sources like SharePoint, Azure AI Search, and Azure Blob Storage.
  • Data and security layer: Data is secured using Microsoft Entra ID (formerly Azure AD) for authentication and access control, with robust security policies enforced via Azure API Management. Access to data sources, such as databases and storage, is managed securely through private networks and Managed Identity. 

The “composable serverless agent” pattern: AWS emphasizes a modular, composable, and serverless approach, leveraging its extensive portfolio of services to build sophisticated, flexible, and scalable AI solutions. The MCP architecture here aligns with the principle of creating lightweight, event-driven services that AI agents can orchestrate. 

  • AI orchestration layer: Amazon Bedrock Agents or custom agent frameworks deployed via AWS Fargate or Lambda act as the MCP hosts. Bedrock Agents provide built-in orchestration, while custom agents offer greater flexibility and customization options.
  • AI model layer: The models are sourced from Amazon Bedrock, which provides a wide selection of foundation models.
  • MCP server and tool layer: MCP servers are deployed as serverless AWS Lambda functions. AWS offers pre-built MCP servers for many of its services, including the AWS Serverless MCP Server for managing serverless applications and the AWS Lambda Tool MCP Server for invoking existing Lambda functions as tools.
  • Data and security layer: Access is tightly controlled using AWS Identity and Access Management (IAM) roles and policies, with fine-grained permissions for each MCP server. Private data sources like databases (Amazon DynamoDB) and storage (Amazon S3) are accessed securely within a Virtual Private Cloud (VPC). 
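As a rough sketch of this pattern, an MCP-style tool server deployed as a Lambda function might look like the following. The handler follows the standard Lambda signature, but the tool name `get_order_status` and its registry are invented for illustration; AWS's pre-built MCP servers are considerably more elaborate:

```python
import json

def lambda_handler(event, context):
    """Dispatch a JSON-RPC 2.0 tool call to a local handler (illustrative sketch)."""
    # API Gateway wraps the payload in a "body" string; direct invokes pass a dict.
    request = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
    method = request.get("method")
    params = request.get("params", {})

    # Hypothetical tool registry; a real server would wrap DynamoDB, S3, etc.
    tools = {
        "get_order_status": lambda p: {"order_id": p["order_id"], "status": "shipped"},
    }

    if method not in tools:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": tools[method](params)}
```

In a real deployment, the IAM role attached to the function would scope exactly which downstream resources each tool is allowed to touch.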

The “unified workbench” pattern: GCP focuses on providing a unified, open, and data-centric platform for AI development. The MCP architecture on GCP integrates natively with the Vertex AI platform, treating MCP servers as first-class tools that can be dynamically discovered and used within a single workbench. 

  • AI orchestration layer: The Vertex AI Agent Builder serves as the central environment for building and managing conversational AI and other agents. It orchestrates workflows and manages tool invocation for agents.
  • AI model layer: Agents use foundation models available through the Vertex AI Model Garden or the Gemini API.
  • MCP server and tool layer: MCP servers are deployed as containerized microservices on Cloud Run or managed by services like App Engine. These servers contain tools that interact with GCP services, such as BigQuery, Cloud Storage, and Cloud SQL. GCP offers pre-built MCP server implementations, such as the GCP MCP Toolbox, for integration with its databases.
  • Data and security layer: Vertex AI Vector Search and other data sources are encapsulated within the MCP server tools to provide contextual information. Access to these services is managed by Identity and Access Management (IAM) and secured through virtual private clouds. The MCP server can leverage Vertex AI Context Caching for improved performance.

Note that each pattern references the native services of its respective cloud; equivalent or better-suited services can be substituted for the tools mentioned here. This is a concept-level comparison rather than a prescriptive implementation guide for any specific industry.


We’ll conclude this post here and continue with a deeper dive in the next post.

Till then, Happy Avenging! 🙂

AGENTIC AI IN THE ENTERPRISE: STRATEGY, ARCHITECTURE, AND IMPLEMENTATION – PART 2

This is a continuation of my previous post, which can be found here.

Let us recap the key takeaways from our previous post –

Agentic AI refers to autonomous systems that pursue goals with minimal supervision by planning, reasoning about next steps, utilizing tools, and maintaining context across sessions. Core capabilities include goal-directed autonomy, interaction with tools and environments (e.g., APIs, databases, devices), multi-step planning and reasoning under uncertainty, persistence, and choiceful decision-making.

Architecturally, three modules coordinate intelligent behavior: Sensing (perception pipelines that acquire multimodal data, extract salient patterns, and recognize entities/events); Observation/Deliberation (objective setting, strategy formation, and option evaluation relative to resources and constraints); and Action (execution via software interfaces, communications, or physical actuation to deliver outcomes). These functions are enabled by machine learning, deep learning, computer vision, natural language processing, planning/decision-making, uncertainty reasoning, and simulation/modeling.

At enterprise scale, open standards align autonomy with governance: the Model Context Protocol (MCP) grants an agent secure, principled access to enterprise tools and data (vertical integration), while Agent-to-Agent (A2A) enables specialized agents to coordinate, delegate, and exchange information (horizontal collaboration). Together, MCP and A2A help organizations transition from isolated pilots to scalable programs, delivering end-to-end automation, faster integration, enhanced security and auditability, vendor-neutral interoperability, and adaptive problem-solving that responds to real-time context.

Great! Let’s dive into this topic now.

Enterprise AI with MCP refers to the application of the Model Context Protocol (MCP), an open standard, to enable AI systems to securely and consistently access external enterprise data and applications. 

Before MCP, enterprise AI integration was characterized by a “many-to-many” or “N x M” problem. Companies had to build custom, fragile, and costly integrations between each AI model and every proprietary data source, which was not scalable. These limitations left AI agents with limited, outdated, or siloed information, restricting their potential impact. 
MCP addresses this by offering a standardized architecture for AI and data systems to communicate with each other.

The MCP framework uses a client-server architecture to enable communication between AI models and external tools and data sources. 

  • MCP Host: The AI-powered application or environment, such as an AI-enhanced IDE or a generative AI chatbot like Anthropic’s Claude or OpenAI’s ChatGPT, where the user interacts.
  • MCP Client: A component within the host application that manages the connection to MCP servers.
  • MCP Server: A lightweight service that wraps around an external system (e.g., a CRM, database, or API) and exposes its capabilities to the AI client in a standardized format, typically using JSON-RPC 2.0. 

An MCP server provides AI clients with three key assets: 

  • Resources: Structured or unstructured data that an AI can access, such as files, documents, or database records.
  • Tools: The functionality to perform specific actions within an external system, like running a database query or sending an email.
  • Prompts: Pre-defined text templates or workflows to help guide the AI’s actions. 

In addition, MCP delivers several enterprise benefits:

  • Standardized integration: Developers can build integrations against a single, open standard, which dramatically reduces the complexity and time required to deploy and scale AI initiatives.
  • Enhanced security and governance: MCP incorporates native support for security and compliance measures. It provides permission models, access control, and auditing capabilities to ensure AI systems only access data and tools within specified boundaries.
  • Real-time contextual awareness: By connecting AI agents to live enterprise data sources, MCP ensures they have access to the most current and relevant information, which reduces hallucinations and improves the accuracy of AI outputs.
  • Greater interoperability: MCP is model-agnostic & can be used with a variety of AI models (e.g., Anthropic’s Claude or OpenAI’s models) and across different cloud environments. This approach helps enterprises avoid vendor lock-in.
  • Accelerated development: The “build once, integrate everywhere” approach enables internal teams to focus on innovation instead of writing custom connectors for every system.
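As a concept-level sketch, the three asset types can be modeled as a simple registry. The names and payloads below are invented for illustration; a real server follows the MCP specification's schemas and JSON-RPC methods:

```python
from dataclasses import dataclass, field

@dataclass
class MCPServerAssets:
    """Toy registry of the three MCP asset types (illustrative, not spec-complete)."""
    resources: dict = field(default_factory=dict)  # name -> data an AI can read
    tools: dict = field(default_factory=dict)      # name -> callable action
    prompts: dict = field(default_factory=dict)    # name -> reusable template

    def list_capabilities(self) -> dict:
        """What an MCP client would discover when it connects to this server."""
        return {"resources": sorted(self.resources),
                "tools": sorted(self.tools),
                "prompts": sorted(self.prompts)}

server = MCPServerAssets()
server.resources["policy_handbook"] = "…document text…"
server.tools["send_email"] = lambda to, body: f"queued email to {to}"
server.prompts["summarize"] = "Summarize the following: {text}"
print(server.list_capabilities())
```

The discovery step is what lets any compliant agent find and use these assets without custom integration code.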

Let us walk through a sample case and its flow of activities.

A customer support agent uses an AI assistant to get information about a customer’s recent orders. The AI assistant utilizes an MCP-compliant client to communicate with an MCP server, which is connected to the company’s PostgreSQL database.

1. User request: The support agent asks the AI assistant, “What was the most recent order placed by Priyanka Chopra Jonas?”

2. AI model processes intent: The AI assistant, running on an MCP host, analyzes the natural language query. It recognizes that to answer this question, it needs to perform a database query. It then identifies the appropriate tool from the MCP server’s capabilities. 

3. Client initiates tool call: The AI assistant’s MCP client sends a JSON-RPC request to the MCP server connected to the PostgreSQL database. The request specifies the tool to be used, such as get_customer_orders, and includes the necessary parameters: 

{
  "jsonrpc": "2.0",
  "method": "db_tools.get_customer_orders",
  "params": {
    "customer_name": "Priyanka Chopra Jonas",
    "sort_by": "order_date",
    "sort_order": "desc",
    "limit": 1
  },
  "id": "12345"
}

4. Server handles the request: The MCP server receives the request and performs several key functions: 

  • Authentication and authorization: The server verifies that the AI client and the user have permission to query the database.
  • Query translation: The server translates the standardized MCP request into a specific SQL query for the PostgreSQL database.
  • Query execution: The server executes the SQL query against the database.
SELECT order_id, order_date, total_amount
FROM orders
WHERE customer_name = 'Priyanka Chopra Jonas'
ORDER BY order_date DESC
LIMIT 1;

5. Database returns data: The PostgreSQL database executes the query and returns the requested data to the MCP server. 

6. Server formats the response: The MCP server receives the raw database output and formats it into a standardized JSON response that the MCP client can understand.

{
  "jsonrpc": "2.0",
  "result": {
    "data": [
      {
        "order_id": "98765",
        "order_date": "2025-08-25",
        "total_amount": 11025.50
      }
    ]
  },
  "id": "12345"
}

7. Client returns data to the model: The MCP client receives the JSON response and passes it back to the AI assistant’s language model. 

8. AI model generates final response: The language model incorporates this real-time data into its response and presents it to the user in a natural, conversational format. 

“Priyanka Chopra Jonas’s most recent order was placed on August 25, 2025, with an order ID of 98765, for a total of $11,025.50.”
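The whole flow can be simulated end to end with an in-memory SQLite database standing in for PostgreSQL. This is a simplified sketch: authentication is omitted and the dispatch is hard-wired to the one tool, but the request and response shapes mirror the JSON examples above:

```python
import json
import sqlite3

# Stand-in for the enterprise PostgreSQL database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id TEXT, customer_name TEXT, order_date TEXT, total_amount REAL)")
db.execute("INSERT INTO orders VALUES ('98765', 'Priyanka Chopra Jonas', '2025-08-25', 11025.50)")

def get_customer_orders(params):
    """Step 4: translate the tool call into a parameterized SQL query and run it."""
    rows = db.execute(
        "SELECT order_id, order_date, total_amount FROM orders "
        "WHERE customer_name = ? ORDER BY order_date DESC LIMIT ?",
        (params["customer_name"], params.get("limit", 10)),
    ).fetchall()
    return [{"order_id": r[0], "order_date": r[1], "total_amount": r[2]} for r in rows]

def handle_rpc(request):
    """Steps 4-6: execute the tool and wrap the output in a JSON-RPC response."""
    result = get_customer_orders(request["params"])
    return {"jsonrpc": "2.0", "result": {"data": result}, "id": request["id"]}

request = {"jsonrpc": "2.0", "method": "db_tools.get_customer_orders",
           "params": {"customer_name": "Priyanka Chopra Jonas", "limit": 1}, "id": "12345"}
print(json.dumps(handle_rpc(request), indent=2))
```

Note the parameterized query (`?` placeholders): because the parameters originate from a language model, the server must never interpolate them directly into SQL.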

Using the Model Context Protocol (MCP) for database access introduces a layer of abstraction that affects performance in several ways. While it adds some latency and processing overhead, strategic implementation can mitigate these effects. For AI applications, the benefits often outweigh the costs, particularly in terms of improved accuracy, security, and scalability.

The MCP architecture introduces extra communication steps between the AI agent and the database, each adding a small amount of latency. 

  • RPC overhead: The JSON-RPC call from the AI’s client to the MCP server adds a small processing and network delay. This is an out-of-process request, as opposed to a simple local function call.
  • JSON serialization: Request and response data must be serialized and deserialized into JSON format, which requires processing time.
  • Network transit: For remote MCP servers, the data must travel over the network, adding latency. However, for a local or on-premise setup, this is minimal. The physical location of the MCP server relative to the AI model and the database is a significant factor.

The performance impact scales with the complexity and volume of the AI agent’s interactions. 

  • High request volume: A single AI agent working on a complex task might issue dozens of parallel database queries. In high-traffic scenarios, managing numerous simultaneous connections can strain system resources and require robust infrastructure.
  • Excessive data retrieval: A significant performance risk is an AI agent retrieving a massive dataset in a single query. This process can consume a large number of tokens, fill the AI’s context window, and cause bottlenecks at the database and client levels.
  • Context window usage: Tool definitions and the results of tool calls consume space in the AI’s context window. If a large number of tools are in use, this can limit the AI’s “working memory,” resulting in slower and less effective reasoning. 

Caching is a crucial strategy for mitigating the performance overhead of MCP. 

  • In-memory caching: The MCP server can cache results from frequent or expensive database queries in memory (e.g., using Redis or Memcached). This approach enables repeat requests to be served almost instantly without requiring a database hit.
  • Semantic caching: Advanced techniques can cache the results of previous queries and serve them for semantically similar future requests, reducing token consumption and improving speed for conversational applications. 
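A minimal in-process TTL cache illustrates the first strategy (a hypothetical helper; production systems would typically reach for Redis or Memcached as noted above):

```python
import time

class TTLCache:
    """Cache query results for a fixed time-to-live, serving repeats without a DB hit."""
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]                       # cache hit: skip the expensive query
        value = compute()                       # cache miss: run the query once
        self._store[key] = (now + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=30)
calls = []
fetch = lambda: calls.append(1) or {"total": 11025.50}  # pretend DB query, counts calls
cache.get_or_compute("orders:priyanka", fetch)
cache.get_or_compute("orders:priyanka", fetch)          # served from cache
print(len(calls))  # → 1 (the database was hit only once)
```

Choosing the TTL is the key design decision: too long and the agent sees stale data, too short and the cache stops paying for itself.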

Designing the MCP server and its database interactions for efficiency is critical. 

  • Optimized SQL: The MCP server should generate optimized SQL queries. Database indexes should be utilized effectively to expedite lookups and minimize load.
  • Pagination and filtering: To prevent a single query from overwhelming the system, the MCP server should implement pagination. The AI agent can be prompted to use filtering parameters to retrieve only the necessary data.
  • Connection pooling: This technique reuses existing database connections instead of opening a new one for each request, thereby reducing latency and database load. 
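Pagination can be enforced server-side by clamping whatever limit the agent asks for (a sketch with an invented `MAX_PAGE_SIZE` policy; real servers would tune the cap to their workload and context-window budget):

```python
MAX_PAGE_SIZE = 100  # hypothetical hard cap protecting both the DB and the context window

def paginate_params(params: dict) -> dict:
    """Clamp the requested limit and derive an offset for the SQL LIMIT/OFFSET clause."""
    limit = min(int(params.get("limit", 25)), MAX_PAGE_SIZE)
    page = max(int(params.get("page", 1)), 1)
    return {"limit": limit, "offset": (page - 1) * limit}

# An over-eager agent asks for 5,000 rows; the server quietly caps it.
print(paginate_params({"limit": 5000, "page": 3}))  # → {'limit': 100, 'offset': 200}
```

The clamp lives in the MCP server rather than the prompt, so even a misbehaving agent cannot pull an unbounded result set.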

For large-scale enterprise deployments, scaling is essential for maintaining performance. 

  • Multiple servers: The workload can be distributed across multiple MCP servers; for example, one server could handle read requests while another handles writes.
  • Load balancing: A reverse proxy or other load-balancing solution can distribute incoming traffic across MCP server instances. Autoscaling can dynamically add or remove servers in response to demand.

For AI-driven tasks, a slight increase in latency for database access is often a worthwhile trade-off for significant gains. 

  • Improved accuracy: Accessing real-time, high-quality data through MCP leads to more accurate and relevant AI responses, reducing “hallucinations”.
  • Scalable ecosystem: The standardization of MCP reduces development overhead and allows for a more modular, scalable ecosystem, which saves significant engineering resources compared to building custom integrations.
  • Decoupled architecture: The MCP server decouples the AI model from the database, allowing each to be optimized and scaled independently. 

We’ll conclude this post here and continue with a deeper dive in the next post.

Till then, Happy Avenging! 🙂

Agentic AI in the Enterprise: Strategy, Architecture, and Implementation – Part 1

Today, we won’t be discussing any specific solution. Instead, we’ll begin discussing Agentic AI and its implementation in the enterprise landscape in a series of upcoming posts.

So, hang tight! We’re about to launch a new venture as part of our knowledge drive.

Agentic AI refers to artificial intelligence systems that can act autonomously to achieve goals, making decisions and taking actions without constant human oversight. Unlike traditional AI, which responds to prompts, agentic AI can plan, reason about next steps, utilize tools, and work toward objectives over extended periods of time.

Key characteristics of agentic AI include:

  • Autonomy and Goal-Directed Behavior: These systems can pursue objectives independently, breaking down complex tasks into smaller steps and executing them sequentially.
  • Tool Use and Environment Interaction: Agentic AI can interact with external systems, APIs, databases, and software tools to gather information and perform actions in the real world.
  • Planning and Reasoning: They can develop multi-step strategies, adapt their approach based on feedback, and reason through problems to find solutions.
  • Persistence: Unlike single-interaction AI, agentic systems can maintain context and continue working on tasks across multiple interactions or sessions.
  • Decision Making: They can evaluate options, weigh trade-offs, and make choices about how to proceed when faced with uncertainty.

Agentic AI systems have several interconnected components that work together to enable intelligent behavior. Each component plays a crucial role in the overall functioning of the AI system, and they must interact seamlessly to achieve the desired outcomes. Let’s explore each of these components in more detail.

The sensing module serves as the AI’s eyes and ears, enabling it to understand its surroundings and make informed decisions. Think of it as the system that helps the AI “see” and “hear” the world around it, much like how humans use their senses.

  • Gathering Information: The system collects data from multiple sources, including cameras for visual information, microphones for audio, sensors for physical touch, and digital systems for data. This step provides the AI with a comprehensive understanding of what’s happening.
  • Making Sense of Data: Raw information from sensors can be messy and overwhelming. This component processes the data to identify the essential patterns and details that actually matter for making informed decisions.
  • Recognizing What’s Important: Utilizing advanced techniques such as computer vision (for images), natural language processing (for text and speech), and machine learning (for data patterns), the system identifies and understands objects, people, events, and situations within the environment.

This sensing capability enables AI systems to transition from merely following pre-programmed instructions to genuinely understanding their environment and making informed decisions based on real-world conditions. It’s the difference between a basic automated system and an intelligent agent that can adapt to changing situations.

The observation module serves as the AI’s decision-making center, where it sets objectives, develops strategies, and selects the most effective actions to take. This step is where the AI transforms what it perceives into purposeful action, much like humans think through problems and devise plans.

  • Setting Clear Objectives: The system establishes specific goals and desired outcomes, giving the AI a clear sense of direction and purpose. This approach helps ensure all actions are working toward meaningful results rather than random activity.
  • Strategic Planning: Using information about its own capabilities and the current situation, the AI creates step-by-step plans to reach its goals. It considers potential obstacles, available resources, and different approaches to find the most effective path forward.
  • Intelligent Decision-Making: When faced with multiple options, the system evaluates each choice against the current circumstances, established goals, and potential outcomes. It then selects the action most likely to move the AI closer to achieving its objectives.

This observation capability is what transforms an AI from a simple tool that follows commands into an intelligent system that can work independently toward business goals. It enables the AI to handle complex, multi-step tasks and adapt its approach when conditions change, making it valuable for a wide range of applications, from customer service to project management.

The action module serves as the AI’s hands and voice, turning decisions into real-world results. This step is where the AI actually puts its thinking and planning into action, carrying out tasks that make a tangible difference in the environment.

  • Control Systems: The system utilizes various tools to interact with the world, including motors for physical movement, speakers for communication, network connections for digital tasks, and software interfaces for system operation. These serve as the AI’s means of reaching out and making adjustments.
  • Task Implementation: Once the cognitive module determines the action to take, this component executes the actual task. Whether it’s sending an email, moving a robotic arm, updating a database, or scheduling a meeting, this module handles the execution from start to finish.

This action capability is what makes AI systems truly useful in business environments. Without it, an AI could analyze data and make significant decisions, but it couldn’t help solve problems or complete tasks. The action module bridges the gap between artificial intelligence and real-world impact, enabling AI to automate processes, respond to customers, manage systems, and deliver measurable business value.

The primary technologies involved in Agentic AI are as follows –

1. Machine Learning
2. Deep Learning
3. Computer Vision
4. Natural Language Processing (NLP)
5. Planning and Decision-Making
6. Uncertainty and Reasoning
7. Simulation and Modeling

In an enterprise setting, agentic AI systems utilize the Model Context Protocol (MCP) and the Agent-to-Agent (A2A) protocol as complementary, open standards to achieve autonomous, coordinated, and secure workflows. An MCP-enabled agent gains the ability to access and manipulate enterprise tools and data. At the same time, A2A allows a network of these agents to collaborate on complex tasks by delegating and exchanging information.

This combined approach allows enterprises to move from isolated AI experiments to strategic, scalable, and secure AI programs.

  • Model Context Protocol (MCP):
    • Function in agentic AI: Equips a single AI agent with the tools and data it needs to perform a specific job.
    • Focus: Vertical integration: connecting agents to enterprise systems like databases, CRMs, and APIs.
    • Example use case: A sales agent uses MCP to query the company CRM for a client’s recent purchase history.
  • Agent-to-Agent (A2A):
    • Function in agentic AI: Enables multiple specialized agents to communicate, delegate tasks, and collaborate on a larger, multi-step goal.
    • Focus: Horizontal collaboration: allowing agents from different domains to work together seamlessly.
    • Example use case: An orchestrating agent uses A2A to delegate parts of a complex workflow to specialized HR, IT, and sales agents.

Together, MCP and A2A deliver several enterprise benefits:

  • End-to-end automation: Agents can handle tasks from start to finish, including complex, multi-step workflows, autonomously.
  • Greater agility and speed: Enterprise-wide adoption of these protocols reduces the cost and complexity of integrating AI, accelerating deployment timelines for new applications.
  • Enhanced security and governance: Enterprise AI platforms built on these open standards incorporate robust security policies, centralized access controls, and comprehensive audit trails.
  • Vendor neutrality and interoperability: As open standards, MCP and A2A allow AI agents to work together seamlessly, regardless of the underlying vendor or platform.
  • Adaptive problem-solving: Agents can dynamically adjust their strategies and collaborate based on real-time data and contextual changes, leading to more resilient and efficient systems.

We will discuss this topic further in our upcoming posts.

Till then, Happy Avenging! 🙂

Real-time video summary assistance App – Part 2

As a continuation of the previous post, I would like to continue my discussion about the implementation of MCP protocols among agents. But before that, I want to add the quick demo one more time to recap our objectives.

Let us recap the process flow –

Also, understand the groupings of scripts by each group as posted in the previous post –

Message-Chaining Protocol (MCP) Implementation:

    clsMCPMessage.py
    clsMCPBroker.py

YouTube Transcript Extraction:

    clsYouTubeVideoProcessor.py

Language Detection:

    clsLanguageDetector.py

Translation Services & Agents:

    clsTranslationAgent.py
    clsTranslationService.py

Documentation Agent:

    clsDocumentationAgent.py
    
Research Agent:

    clsResearchAgent.py

Great! Now, we’ll continue with the main discussion.


import re
from youtube_transcript_api import YouTubeTranscriptApi

def extract_youtube_id(youtube_url):
    """Extract YouTube video ID from URL"""
    youtube_id_match = re.search(r'(?:v=|\/)([0-9A-Za-z_-]{11}).*', youtube_url)
    if youtube_id_match:
        return youtube_id_match.group(1)
    return None

def get_youtube_transcript(youtube_url):
    """Get transcript from YouTube video"""
    video_id = extract_youtube_id(youtube_url)
    if not video_id:
        return {"error": "Invalid YouTube URL or ID"}
    
    try:
        transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
        
        # First try to get manual transcripts
        try:
            transcript = transcript_list.find_manually_created_transcript(["en"])
            transcript_data = transcript.fetch()
            print(f"Debug - Manual transcript format: {type(transcript_data)}")
            if transcript_data and len(transcript_data) > 0:
                print(f"Debug - First item type: {type(transcript_data[0])}")
                print(f"Debug - First item sample: {transcript_data[0]}")
            return {"text": transcript_data, "language": "en", "auto_generated": False}
        except Exception as e:
            print(f"Debug - No manual transcript: {str(e)}")
            # If no manual English transcript, try any available transcript
            try:
                available_transcripts = list(transcript_list)
                if available_transcripts:
                    transcript = available_transcripts[0]
                    print(f"Debug - Using transcript in language: {transcript.language_code}")
                    transcript_data = transcript.fetch()
                    print(f"Debug - Auto transcript format: {type(transcript_data)}")
                    if transcript_data and len(transcript_data) > 0:
                        print(f"Debug - First item type: {type(transcript_data[0])}")
                        print(f"Debug - First item sample: {transcript_data[0]}")
                    return {
                        "text": transcript_data, 
                        "language": transcript.language_code, 
                        "auto_generated": transcript.is_generated
                    }
                else:
                    return {"error": "No transcripts available for this video"}
            except Exception as e:
                return {"error": f"Error getting transcript: {str(e)}"}
    except Exception as e:
        return {"error": f"Error getting transcript list: {str(e)}"}

# ----------------------------------------------------------------------------------
# YouTube Video Processor
# ----------------------------------------------------------------------------------

class clsYouTubeVideoProcessor:
    """Process YouTube videos using the agent system"""
    
    def __init__(self, documentation_agent, translation_agent, research_agent):
        self.documentation_agent = documentation_agent
        self.translation_agent = translation_agent
        self.research_agent = research_agent
    
    def process_youtube_video(self, youtube_url):
        """Process a YouTube video"""
        print(f"Processing YouTube video: {youtube_url}")
        
        # Extract transcript
        transcript_result = get_youtube_transcript(youtube_url)
        
        if "error" in transcript_result:
            return {"error": transcript_result["error"]}
        
        # Start a new conversation
        conversation_id = self.documentation_agent.start_processing()
        
        # Process transcript segments
        transcript_data = transcript_result["text"]
        transcript_language = transcript_result["language"]
        
        print(f"Debug - Type of transcript_data: {type(transcript_data)}")
        
        # For each segment, detect language and translate if needed
        processed_segments = []
        
        try:
            # Make sure transcript_data is a list of dictionaries with text and start fields
            if isinstance(transcript_data, list):
                for idx, segment in enumerate(transcript_data):
                    print(f"Debug - Processing segment {idx}, type: {type(segment)}")
                    
                    # Extract text properly based on the type
                    if isinstance(segment, dict) and "text" in segment:
                        text = segment["text"]
                        start = segment.get("start", 0)
                    else:
                        # Try to access attributes for non-dict types
                        try:
                            text = segment.text
                            start = getattr(segment, "start", 0)
                        except AttributeError:
                            # If all else fails, convert to string
                            text = str(segment)
                            start = idx * 5  # Arbitrary timestamp
                    
                    print(f"Debug - Extracted text: {text[:30]}...")
                    
                    # Create a standardized segment
                    std_segment = {
                        "text": text,
                        "start": start
                    }
                    
                    # Process through translation agent
                    translation_result = self.translation_agent.process_text(text, conversation_id)
                    
                    # Use translated text for documentation (set before copying the
                    # segment below, so processed_text is not lost in the copy)
                    if "final_text" in translation_result and translation_result["final_text"] != text:
                        std_segment["processed_text"] = translation_result["final_text"]
                    else:
                        std_segment["processed_text"] = text
                    
                    # Update segment with translation information
                    segment_with_translation = {
                        **std_segment,
                        "translation_info": translation_result
                    }
                    
                    processed_segments.append(segment_with_translation)
            else:
                # If transcript_data is not a list, treat it as a single text block
                print("Debug - Transcript is not a list, treating as single text")
                text = str(transcript_data)
                std_segment = {
                    "text": text,
                    "start": 0
                }
                
                translation_result = self.translation_agent.process_text(text, conversation_id)
                
                # Set processed_text before copying, so it survives in the copy below
                if "final_text" in translation_result and translation_result["final_text"] != text:
                    std_segment["processed_text"] = translation_result["final_text"]
                else:
                    std_segment["processed_text"] = text
                
                segment_with_translation = {
                    **std_segment,
                    "translation_info": translation_result
                }
                
                processed_segments.append(segment_with_translation)
                
        except Exception as e:
            print(f"Debug - Error processing transcript: {str(e)}")
            return {"error": f"Error processing transcript: {str(e)}"}
        
        # Process the transcript with the documentation agent
        documentation_result = self.documentation_agent.process_transcript(
            processed_segments,
            conversation_id
        )
        
        return {
            "youtube_url": youtube_url,
            "transcript_language": transcript_language,
            "processed_segments": processed_segments,
            "documentation": documentation_result,
            "conversation_id": conversation_id
        }

Let us understand this step-by-step:

Part 1: Getting the YouTube Transcript

def extract_youtube_id(youtube_url):
    ...

This extracts the unique 11-character video ID from any standard YouTube link (long or short form).
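
For instance, the regex above handles both the long `watch?v=` form and the short `youtu.be` form. A quick standalone check (same regex as in the listing; the sample URLs are arbitrary):

```python
import re

def extract_youtube_id(youtube_url):
    """Extract the 11-character YouTube video ID from a URL (same regex as above)."""
    match = re.search(r'(?:v=|\/)([0-9A-Za-z_-]{11}).*', youtube_url)
    return match.group(1) if match else None

print(extract_youtube_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
print(extract_youtube_id("https://youtu.be/dQw4w9WgXcQ"))                 # dQw4w9WgXcQ
print(extract_youtube_id("not a url"))                                    # None
```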

def get_youtube_transcript(youtube_url):
    ...
  • This gets the actual spoken content of the video.
  • It tries to get a manual transcript first (created by humans).
  • If not available, it falls back to an auto-generated version (created by YouTube’s AI).
  • If nothing is found, it returns an error such as: “No transcripts available for this video.”
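
In other words, get_youtube_transcript() is a priority chain: manual transcript first, then any available (often auto-generated) transcript, then an error. Stripped of the YouTube specifics, the pattern looks like this (a sketch; the two getter functions are hypothetical stand-ins):

```python
def first_available(getters):
    """Try each transcript getter in priority order; return the first that succeeds."""
    for get in getters:
        try:
            return get()
        except Exception:
            continue  # this source is unavailable; fall through to the next one
    return {"error": "No transcripts available for this video"}

def manual():            # hypothetical: raises when no human-made transcript exists
    raise LookupError("no manual transcript")

def auto_generated():    # hypothetical: YouTube's machine-generated captions
    return {"text": "hello world", "language": "en", "auto_generated": True}

print(first_available([manual, auto_generated]))
```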

Part 2: Processing the Video with Agents

class clsYouTubeVideoProcessor:
    ...

This is like the control center that tells each intelligent agent what to do with the transcript. Here are the detailed steps:

1. Start the Process

def process_youtube_video(self, youtube_url):
    ...
  • The system starts with a YouTube video link.
  • It prints a message like: “Processing YouTube video: [link]”

2. Extract the Transcript

  • The system runs the get_youtube_transcript() function.
  • If it fails, it returns an error (e.g., invalid link or no subtitles available).

3. Start a “Conversation”

  • The documentation agent begins a new session, tracked by a unique conversation ID.
  • Think of this like opening a new folder in a shared team workspace to store everything related to this video.

4. Go Through Each Segment of the Transcript

  • The spoken text is often broken into small parts (segments), like subtitles.
  • For each part:
    • It checks the text.
    • It finds out the time that part was spoken.
    • It sends it to the translation agent to clean up or translate the text.
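
Step 4 has to cope with the fact that, depending on the library version, a segment may arrive as a dict, as an object with `.text`/`.start` attributes, or as something else entirely. The same dict-or-object handling from the listing, isolated as a helper (a sketch; `Snippet` is a hypothetical stand-in for an object-style segment):

```python
def normalize_segment(segment, idx):
    """Coerce a transcript segment (dict, object, or raw string) to {text, start}."""
    if isinstance(segment, dict) and "text" in segment:
        return {"text": segment["text"], "start": segment.get("start", 0)}
    try:
        return {"text": segment.text, "start": getattr(segment, "start", 0)}
    except AttributeError:
        return {"text": str(segment), "start": idx * 5}  # arbitrary 5s spacing fallback

class Snippet:  # stand-in for an object-style segment
    def __init__(self, text, start):
        self.text, self.start = text, start

print(normalize_segment({"text": "hi", "start": 1.5}, 0))  # {'text': 'hi', 'start': 1.5}
print(normalize_segment(Snippet("there", 3.0), 1))         # {'text': 'there', 'start': 3.0}
print(normalize_segment("plain string", 2))                # {'text': 'plain string', 'start': 10}
```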

5. Translate (if needed)

  • If the translation agent finds a better or translated version, it replaces the original.
  • Otherwise, it keeps the original.

6. Prepare for Documentation

  • After translation, the segment is passed to the documentation agent.
  • This agent might:
    • Summarize the content,
    • Highlight important terms,
    • Structure it into a readable format.

7. Return the Final Result

The system gives back a structured package with:

  • The video link
  • The original language
  • The transcript in parts (processed and translated)
  • A documentation summary
  • The conversation ID (for tracking or further updates)
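
The returned package always has the same five keys; an illustrative instance (all values hypothetical):

```python
# Shape of the dict returned by process_youtube_video (values are made up)
result = {
    "youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "transcript_language": "en",
    "processed_segments": [
        {"text": "hello", "start": 0, "translation_info": {"final_text": "hello"}}
    ],
    "documentation": {"summary": "..."},
    "conversation_id": "a1b2c3d4-0000-0000-0000-000000000000",  # uuid4 string
}
print(sorted(result.keys()))
```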

class clsDocumentationAgent:
    """Documentation Agent built with LangChain"""
    
    def __init__(self, agent_id: str, broker: clsMCPBroker):
        self.agent_id = agent_id
        self.broker = broker
        self.broker.register_agent(agent_id)
        
        # Initialize LangChain components
        self.llm = ChatOpenAI(
            model="gpt-4-0125-preview",
            temperature=0.1,
            api_key=OPENAI_API_KEY
        )
        
        # Create tools
        self.tools = [
            clsSendMessageTool(sender_id=self.agent_id, broker=self.broker)
        ]
        
        # Set up LLM with tools
        self.llm_with_tools = self.llm.bind(
            tools=[tool.tool_config for tool in self.tools]
        )
        
        # Setup memory
        self.memory = ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=True
        )
        
        # Create prompt
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", """You are a Documentation Agent for YouTube video transcripts. Your responsibilities include:
                1. Process YouTube video transcripts
                2. Identify key points, topics, and main ideas
                3. Organize content into a coherent and structured format
                4. Create concise summaries
                5. Request research information when necessary
                
                When you need additional context or research, send a request to the Research Agent.
                Always maintain a professional tone and ensure your documentation is clear and organized.
            """),
            MessagesPlaceholder(variable_name="chat_history"),
            ("human", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ])
        
        # Create agent
        self.agent = (
            {
                "input": lambda x: x["input"],
                "chat_history": lambda x: self.memory.load_memory_variables({})["chat_history"],
                "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
            }
            | self.prompt
            | self.llm_with_tools
            | OpenAIToolsAgentOutputParser()
        )
        
        # Create agent executor
        self.agent_executor = AgentExecutor(
            agent=self.agent,
            tools=self.tools,
            verbose=True,
            memory=self.memory
        )
        
        # Video data
        self.current_conversation_id = None
        self.video_notes = {}
        self.key_points = []
        self.transcript_segments = []
        
    def start_processing(self) -> str:
        """Start processing a new video"""
        self.current_conversation_id = str(uuid.uuid4())
        self.video_notes = {}
        self.key_points = []
        self.transcript_segments = []
        
        return self.current_conversation_id
    
    def process_transcript(self, transcript_segments, conversation_id=None):
        """Process a YouTube transcript"""
        if not conversation_id:
            conversation_id = self.start_processing()
        self.current_conversation_id = conversation_id
        
        # Store transcript segments
        self.transcript_segments = transcript_segments
        
        # Process segments
        processed_segments = []
        for segment in transcript_segments:
            processed_result = self.process_segment(segment)
            processed_segments.append(processed_result)
        
        # Generate summary
        summary = self.generate_summary()
        
        return {
            "processed_segments": processed_segments,
            "summary": summary,
            "conversation_id": conversation_id
        }
    
    def process_segment(self, segment):
        """Process individual transcript segment"""
        text = segment.get("text", "")
        start = segment.get("start", 0)
        
        # Use LangChain agent to process the segment
        result = self.agent_executor.invoke({
            "input": f"Process this video transcript segment at timestamp {start}s: {text}. If research is needed, send a request to the research_agent."
        })
        
        # Update video notes
        timestamp = start
        self.video_notes[timestamp] = {
            "text": text,
            "analysis": result["output"]
        }
        
        return {
            "timestamp": timestamp,
            "text": text,
            "analysis": result["output"]
        }
    
    def handle_mcp_message(self, message: clsMCPMessage) -> Optional[clsMCPMessage]:
        """Handle an incoming MCP message"""
        if message.message_type == "research_response":
            # Process research information received from Research Agent
            research_info = message.content.get("text", "")
            
            result = self.agent_executor.invoke({
                "input": f"Incorporate this research information into video analysis: {research_info}"
            })
            
            # Send acknowledgment back to Research Agent
            response = clsMCPMessage(
                sender=self.agent_id,
                receiver=message.sender,
                message_type="acknowledgment",
                content={"text": "Research information incorporated into video analysis."},
                reply_to=message.id,
                conversation_id=message.conversation_id
            )
            
            self.broker.publish(response)
            return response
        
        elif message.message_type == "translation_response":
            # Process translation response from Translation Agent
            translation_result = message.content
            
            # Process the translated text
            if "final_text" in translation_result:
                text = translation_result["final_text"]
                original_text = translation_result.get("original_text", "")
                language_info = translation_result.get("language", {})
                
                result = self.agent_executor.invoke({
                    "input": f"Process this translated text: {text}\nOriginal language: {language_info.get('language', 'unknown')}\nOriginal text: {original_text}"
                })
                
                # Update notes with translation information
                for timestamp, note in self.video_notes.items():
                    if note["text"] == original_text:
                        note["translated_text"] = text
                        note["language"] = language_info
                        break
            
            return None
        
        return None
    
    def run(self):
        """Run the agent to listen for MCP messages"""
        print(f"Documentation Agent {self.agent_id} is running...")
        while True:
            message = self.broker.get_message(self.agent_id, timeout=1)
            if message:
                self.handle_mcp_message(message)
            time.sleep(0.1)
    
    def generate_summary(self) -> str:
        """Generate a summary of the video"""
        if not self.video_notes:
            return "No video data available to summarize."
        
        all_notes = "\n".join([f"{ts}: {note['text']}" for ts, note in self.video_notes.items()])
        
        result = self.agent_executor.invoke({
            "input": f"Generate a concise summary of this YouTube video, including key points and topics:\n{all_notes}"
        })
        
        return result["output"]

Let us understand the key methods in a step-by-step manner:

The Documentation Agent is a smart assistant that “watches” a YouTube video through its transcript: it takes notes, pulls out important ideas, and creates a summary, much like a professional note-taker serving educators, researchers, and content creators. It works with a team of other assistants, the Translation Agent and the Research Agent, and they all talk to each other through a messaging system.

1. Starting to Work on a New Video

def start_processing(self) -> str:
    ...

When a new video is being processed:

  • A new project ID is created.
  • Old notes and transcripts are cleared to start fresh.
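
A reduced sketch of this reset-and-start step, using the same attribute names as the class above (`DocState` is a stand-in for the full agent):

```python
import uuid

class DocState:  # reduced stand-in for the Documentation Agent's per-video state
    def start_processing(self) -> str:
        self.current_conversation_id = str(uuid.uuid4())  # fresh "project folder" ID
        self.video_notes = {}
        self.key_points = []
        self.transcript_segments = []
        return self.current_conversation_id

agent = DocState()
cid = agent.start_processing()
print(len(cid))  # 36 — canonical UUID string length
```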

2. Processing the Whole Transcript

def process_transcript(...):
    ...

This is where the assistant:

  • Takes in the full transcript (what was said in the video).
  • Breaks it into small parts (like subtitles).
  • Sends each part to the smart brain for analysis.
  • Collects the results.
  • Finally, a summary of all the main ideas is created.

3. Processing One Transcript Segment at a Time

def process_segment(self, segment):
    ...

For each chunk of the video:

  • The assistant reads the text and timestamp.
  • It asks GPT-4 to analyze it and suggest important insights.
  • It saves that insight along with the original text and timestamp.

4. Handling Incoming Messages from Other Agents

def handle_mcp_message(self, message):
    ...

The assistant can also receive messages from teammates (other agents):

If the message is from the Research Agent:

  • It reads the new information and adds it to its notes.
  • It replies with an acknowledgment to say it got the research.

If the message is from the Translation Agent:

  • It takes the translated version of a transcript segment.
  • It updates its notes to reflect the translated text and its language.

This is like a team of assistants emailing back and forth to make sure the notes are complete and accurate.
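
All that “emailing” runs through the MCP broker, which is not shown in this excerpt. Conceptually it is little more than one inbox queue per registered agent; a toy stand-in using only the standard library might look like this (`ToyBroker` and the dict-based messages are hypothetical, but the method names mirror how the agents call the broker above):

```python
import queue

class ToyBroker:
    """Minimal stand-in for clsMCPBroker: one inbox queue per registered agent."""
    def __init__(self):
        self.inboxes = {}

    def register_agent(self, agent_id):
        self.inboxes[agent_id] = queue.Queue()

    def publish(self, message):  # message: dict with a "receiver" key
        self.inboxes[message["receiver"]].put(message)

    def get_message(self, agent_id, timeout=1):
        try:
            return self.inboxes[agent_id].get(timeout=timeout)
        except queue.Empty:
            return None  # no mail right now; the agent's run() loop just retries

broker = ToyBroker()
broker.register_agent("doc_agent")
broker.publish({"receiver": "doc_agent", "message_type": "research_response", "text": "fact"})
print(broker.get_message("doc_agent")["text"])  # fact
```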

5. Summarizing the Whole Video

def generate_summary(self):
    ...

After going through all the transcript parts, the agent asks GPT-4 to create a short, clean summary, identifying:

  • Main ideas
  • Key talking points
  • Structure of the content

The final result is clear, professional, and usable in learning materials or documentation.


class clsResearchAgent:
    """Research Agent built with AutoGen"""
    
    def __init__(self, agent_id: str, broker: clsMCPBroker):
        self.agent_id = agent_id
        self.broker = broker
        self.broker.register_agent(agent_id)
        
        # Configure AutoGen directly with the API key
        if not OPENAI_API_KEY:
            print("Warning: OPENAI_API_KEY not set for ResearchAgent")
        
        # Create the config list directly instead of loading it from a file
        config_list = [
            {
                "model": "gpt-4-0125-preview",
                "api_key": OPENAI_API_KEY
            }
        ]
        
        # Create AutoGen assistant for research
        self.assistant = AssistantAgent(
            name="research_assistant",
            system_message="""You are a Research Agent for YouTube videos. Your responsibilities include:
                1. Research topics mentioned in the video
                2. Find relevant information, facts, references, or context
                3. Provide concise, accurate information to support the documentation
                4. Focus on delivering high-quality, relevant information
                
                Respond directly to research requests with clear, factual information.
            """,
            llm_config={"config_list": config_list, "temperature": 0.1}
        )
        
        # Create user proxy to handle message passing
        self.user_proxy = UserProxyAgent(
            name="research_manager",
            human_input_mode="NEVER",
            code_execution_config={"work_dir": "coding", "use_docker": False},
            default_auto_reply="Working on the research request..."
        )
        
        # Current conversation tracking
        self.current_requests = {}
    
    def handle_mcp_message(self, message: clsMCPMessage) -> Optional[clsMCPMessage]:
        """Handle an incoming MCP message"""
        if message.message_type == "request":
            # Process research request from Documentation Agent
            request_text = message.content.get("text", "")
            
            # Use AutoGen to process the research request
            def research_task():
                self.user_proxy.initiate_chat(
                    self.assistant,
                    message=f"Research request for YouTube video content: {request_text}. Provide concise, factual information."
                )
                # Return the last assistant message (chat_messages is keyed by agent object, not by name)
                return self.assistant.chat_messages[self.user_proxy][-1]["content"]
            
            # Execute research task
            research_result = research_task()
            
            # Send research results back to Documentation Agent
            response = clsMCPMessage(
                sender=self.agent_id,
                receiver=message.sender,
                message_type="research_response",
                content={"text": research_result},
                reply_to=message.id,
                conversation_id=message.conversation_id
            )
            
            self.broker.publish(response)
            return response
        
        return None
    
    def run(self):
        """Run the agent to listen for MCP messages"""
        print(f"Research Agent {self.agent_id} is running...")
        while True:
            message = self.broker.get_message(self.agent_id, timeout=1)
            if message:
                self.handle_mcp_message(message)
            time.sleep(0.1)

Let us understand the key methods in detail.

1. Receiving and Responding to Research Requests

def handle_mcp_message(self, message):
    ...

When the Research Agent gets a message (like a question or request for info), it:

1. Reads the message to see what needs to be researched.
2. Asks GPT-4 to find helpful, accurate info about that topic.
3. Sends the answer back to whoever asked the question (usually the Documentation Agent).
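
The clsMCPMessage envelope itself is not shown in this excerpt. A reduced stand-in with the same field names illustrates how the response links back to the request (the `Msg` dataclass is hypothetical; only the pairing of `reply_to` with `id` matters here):

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Msg:  # hypothetical stand-in for clsMCPMessage
    sender: str
    receiver: str
    message_type: str
    content: dict
    reply_to: Optional[str] = None
    conversation_id: Optional[str] = None
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

request = Msg(sender="doc_agent", receiver="research_agent",
              message_type="request", content={"text": "What is MCP?"})

# The Research Agent answers by swapping sender/receiver and linking reply_to -> request.id
response = Msg(sender=request.receiver, receiver=request.sender,
               message_type="research_response",
               content={"text": "MCP is an open protocol..."},
               reply_to=request.id, conversation_id=request.conversation_id)

print(response.reply_to == request.id)  # True
```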

class clsTranslationAgent:
    """Agent for language detection and translation"""
    
    def __init__(self, agent_id: str, broker: clsMCPBroker):
        self.agent_id = agent_id
        self.broker = broker
        self.broker.register_agent(agent_id)
        
        # Initialize language detector
        self.language_detector = clsLanguageDetector()
        
        # Initialize translation service
        self.translation_service = clsTranslationService()
    
    def process_text(self, text, conversation_id=None):
        """Process text: detect language and translate if needed, handling mixed language content"""
        if not conversation_id:
            conversation_id = str(uuid.uuid4())
        
        # Detect language with support for mixed language content
        language_info = self.language_detector.detect(text)
        
        # Decide if translation is needed
        needs_translation = True
        
        # Pure English content doesn't need translation
        if language_info["language_code"] == "en-IN" or language_info["language_code"] == "unknown":
            needs_translation = False
        
        # For mixed language, check if it's primarily English
        if language_info.get("is_mixed", False) and language_info.get("languages", []):
            english_langs = [
                lang for lang in language_info.get("languages", [])
                if lang["language_code"] == "en-IN" or lang["language_code"].startswith("en-")
            ]
            
            # If the highest confidence language is English and > 60% confident, don't translate
            if english_langs and english_langs[0].get("confidence", 0) > 0.6:
                needs_translation = False
        
        if needs_translation:
            # Translate using the appropriate service based on language detection
            translation_result = self.translation_service.translate(text, language_info)
            
            return {
                "original_text": text,
                "language": language_info,
                "translation": translation_result,
                "final_text": translation_result.get("translated_text", text),
                "conversation_id": conversation_id
            }
        else:
            # Already English or unknown language, return as is
            return {
                "original_text": text,
                "language": language_info,
                "translation": {"provider": "none"},
                "final_text": text,
                "conversation_id": conversation_id
            }
    
    def handle_mcp_message(self, message: clsMCPMessage) -> Optional[clsMCPMessage]:
        """Handle an incoming MCP message"""
        if message.message_type == "translation_request":
            # Process translation request from Documentation Agent
            text = message.content.get("text", "")
            
            # Process the text
            result = self.process_text(text, message.conversation_id)
            
            # Send translation results back to requester
            response = clsMCPMessage(
                sender=self.agent_id,
                receiver=message.sender,
                message_type="translation_response",
                content=result,
                reply_to=message.id,
                conversation_id=message.conversation_id
            )
            
            self.broker.publish(response)
            return response
        
        return None
    
    def run(self):
        """Run the agent to listen for MCP messages"""
        print(f"Translation Agent {self.agent_id} is running...")
        while True:
            message = self.broker.get_message(self.agent_id, timeout=1)
            if message:
                self.handle_mcp_message(message)
            time.sleep(0.1)

Let us understand the key methods in a step-by-step manner:

1. Understanding and Translating Text

def process_text(...):
    ...

This is the core job of the agent. Here’s what it does with any piece of text:

Step 1: Detect the Language

  • It tries to figure out the language of the input text.
  • It can handle cases where more than one language is mixed together, which is common in casual speech or subtitles.

Step 2: Decide Whether to Translate

  • If the text is clearly in English, or the language cannot be determined, it decides not to translate.
  • Otherwise, including mixed-language text where no detected English portion exceeds 60% confidence, it translates the text into English.

Step 3: Translate (if needed)

  • If translation is required, it uses the translation service to do the job.
  • Then it packages all the information: the original text, the detected language, the translated version, and a unique conversation ID.

Step 4: Return the Results

  • If no translation is needed, it returns the original text with the translation provider marked as “none.”

2. Receiving Messages and Responding

def handle_mcp_message(...):
    ...

The agent listens for messages from other agents. When someone asks it to translate something:

  • It takes the text from the message.
  • Runs it through the process_text function (as explained above).
  • Sends the translated (or original) result back to the requester.

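
The decision logic in Steps 1 and 2 can be isolated into a small predicate; this sketch mirrors the checks in process_text above (the `language_info` shape is assumed from that code):

```python
def needs_translation(language_info):
    """Return True when the detected text should be translated to English."""
    code = language_info.get("language_code", "unknown")
    if code in ("en-IN", "unknown"):
        return False  # already English, or nothing reliable to translate from
    langs = language_info.get("languages", [])
    if language_info.get("is_mixed", False) and langs:
        english = [l for l in langs
                   if l["language_code"] == "en-IN" or l["language_code"].startswith("en-")]
        # primarily English with > 60% confidence: leave as-is
        if english and english[0].get("confidence", 0) > 0.6:
            return False
    return True

print(needs_translation({"language_code": "en-IN"}))  # False
print(needs_translation({"language_code": "hi-IN"}))  # True
print(needs_translation({"language_code": "hi-IN", "is_mixed": True,
                         "languages": [{"language_code": "en-IN", "confidence": 0.8}]}))  # False
```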
class clsTranslationService:
    """Translation service using multiple providers with support for mixed languages"""
    
    def __init__(self):
        # Initialize Sarvam AI client
        self.sarvam_api_key = SARVAM_API_KEY
        self.sarvam_url = "https://api.sarvam.ai/translate"
        
        # Initialize Google Cloud Translation client using simple HTTP requests
        self.google_api_key = GOOGLE_API_KEY
        self.google_translate_url = "https://translation.googleapis.com/language/translate/v2"
    
    def translate_with_sarvam(self, text, source_lang, target_lang="en-IN"):
        """Translate text using Sarvam AI (for Indian languages)"""
        if not self.sarvam_api_key:
            return {"error": "Sarvam API key not set"}
        
        headers = {
            "Content-Type": "application/json",
            "api-subscription-key": self.sarvam_api_key
        }
        
        payload = {
            "input": text,
            "source_language_code": source_lang,
            "target_language_code": target_lang,
            "speaker_gender": "Female",
            "mode": "formal",
            "model": "mayura:v1"
        }
        
        try:
            response = requests.post(self.sarvam_url, headers=headers, json=payload)
            if response.status_code == 200:
                return {"translated_text": response.json().get("translated_text", ""), "provider": "sarvam"}
            else:
                return {"error": f"Sarvam API error: {response.text}", "provider": "sarvam"}
        except Exception as e:
            return {"error": f"Error calling Sarvam API: {str(e)}", "provider": "sarvam"}
    
    def translate_with_google(self, text, target_lang="en"):
        """Translate text using the Google Cloud Translation API with a direct HTTP request"""
        if not self.google_api_key:
            return {"error": "Google API key not set"}
        
        try:
            # Using the Translation API v2 with an API key
            params = {
                "key": self.google_api_key,
                "q": text,
                "target": target_lang
            }
            
            response = requests.post(self.google_translate_url, params=params)
            if response.status_code == 200:
                data = response.json()
                translation = data.get("data", {}).get("translations", [{}])[0]
                return {
                    "translated_text": translation.get("translatedText", ""),
                    "detected_source_language": translation.get("detectedSourceLanguage", ""),
                    "provider": "google"
                }
            else:
                return {"error": f"Google API error: {response.text}", "provider": "google"}
        except Exception as e:
            return {"error": f"Error calling Google Translation API: {str(e)}", "provider": "google"}
    
    def translate(self, text, language_info):
        """Translate text to English based on language detection info"""
        # If already English or unknown language, return as is
        if language_info["language_code"] == "en-IN" or language_info["language_code"] == "unknown":
            return {"translated_text": text, "provider": "none"}
        
        # Handle mixed language content
        if language_info.get("is_mixed", False) and language_info.get("languages", []):
            # Strategy for mixed language:
            # 1. If one of the languages is English, translate the full text with Google,
            #    which handles English/other code-mixing well
            # 2. If no English but the text contains Indian languages, use Sarvam,
            #    as it handles code-mixing between Indian languages better
            # 3. Otherwise, use Google Translate for the primary detected language
            
            has_english = False
            has_indian = False
            
            for lang in language_info.get("languages", []):
                if lang["language_code"] == "en-IN" or lang["language_code"].startswith("en-"):
                    has_english = True
                if lang.get("is_indian", False):
                    has_indian = True
            
            if has_english:
                # Contains English - use Google for the full text as it handles code-mixing well
                return self.translate_with_google(text)
            elif has_indian:
                # Contains Indian languages - use Sarvam
                # Use the highest-confidence Indian language as the source
                indian_langs = [lang for lang in language_info.get("languages", []) if lang.get("is_indian", False)]
                if indian_langs:
                    # Sort by confidence
                    indian_langs.sort(key=lambda x: x.get("confidence", 0), reverse=True)
                    source_lang = indian_langs[0]["language_code"]
                    return self.translate_with_sarvam(text, source_lang)
                else:
                    # Fallback to the primary language
                    if language_info.get("is_indian", False):  # .get avoids a KeyError when the flag is absent
                        return self.translate_with_sarvam(text, language_info["language_code"])
                    else:
                        return self.translate_with_google(text)
            else:
                      # No English, no Indian languages - use Google for primary language
                      return self.translate_with_google(text)
              else:
                  # Not mixed language - use standard approach
                  if language_info["is_indian"]:
                      # Use Sarvam AI for Indian languages
                      return self.translate_with_sarvam(text, language_info["language_code"])
                  else:
                      # Use Google for other languages
                      return self.translate_with_google(text)

      This Translation Service is like a smart translator that knows how to:

      • Detect what language the text is written in,
      • Choose the best translation provider depending on the language (especially for Indian languages),
      • And then translate the text into English.

      It supports mixed-language content (such as Hindi-English in one sentence) and uses either Google Translate or Sarvam AI, a translation service designed for Indian languages.

      Now, let us understand the key methods in a step-by-step manner:

      1. Translating Using Google Translate

      def translate_with_google(...)
      

      This function uses Google Translate:

      • It sends the text, asks for English as the target language, and gets a translation back.
      • It also detects the source language automatically.
      • If successful, it returns the translated text and the detected original language.
      • If there’s an error, it returns a message saying what went wrong.

      Best For: Non-Indian languages (like Spanish, French, Chinese) and content that is not mixed with English.
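To make the parsing step concrete, here is a small sketch that applies the same extraction pattern as `translate_with_google` to a sample payload. The payload itself is a hypothetical example shaped like a Google Translation API v2 response, shown only to illustrate the defensive `.get(...)` chaining:

```python
# Hypothetical sample shaped like a Google Translation API v2 response,
# used only to illustrate the parsing logic in translate_with_google.
sample_response = {
    "data": {
        "translations": [
            {"translatedText": "Hello, how are you?",
             "detectedSourceLanguage": "hi"}
        ]
    }
}

# Same extraction pattern as the method above: drill into data -> translations,
# defaulting safely at each step so a malformed payload yields empty strings.
translation = sample_response.get("data", {}).get("translations", [{}])[0]
result = {
    "translated_text": translation.get("translatedText", ""),
    "detected_source_language": translation.get("detectedSourceLanguage", ""),
    "provider": "google",
}
print(result["translated_text"])            # Hello, how are you?
print(result["detected_source_language"])   # hi
```

Because every step defaults to an empty container, a missing `data` or `translations` key produces empty strings instead of a `KeyError`.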

      2. Main Translation Logic

      def translate(self, text, language_info)
      

      This is the decision-maker. Here’s how it works:

      Case 1: No Translation Needed

      If the text is already in English or the language is unknown, it simply returns the original text.

      Case 2: Mixed Language (e.g., Hindi + English)

      If the text contains more than one language:

      • ✅ If one part is English → use Google Translate (it’s good with mixed languages).
      • ✅ If it includes Indian languages only → use Sarvam AI (better at handling Indian content).
      • ✅ If it’s neither English nor Indian → use Google Translate.

      The service checks how confident it is about each language in the mix and chooses the most likely one to translate from.

      Case 3: Single Language

      If the text is only in one language:

      • ✅ If it’s an Indian language (like Bengali, Tamil, or Marathi), use Sarvam AI.
      • ✅ If it’s any other language, use Google Translate.
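The three cases above boil down to a routing decision. Here is a simplified, standalone restatement of those rules (not the class itself, just the provider choice) so the decision tree can be read at a glance:

```python
def choose_provider(language_info):
    """Simplified restatement of the routing rules above (illustrative only):
    returns which translation provider the service would pick."""
    code = language_info["language_code"]
    if code in ("en-IN", "unknown"):
        return "none"                       # Case 1: already English / unknown
    if language_info.get("is_mixed", False) and language_info.get("languages", []):
        langs = language_info["languages"]
        has_english = any(l["language_code"].startswith("en") for l in langs)
        has_indian = any(l.get("is_indian", False) for l in langs)
        if has_english:
            return "google"                 # Case 2a: mixed, contains English
        if has_indian:
            return "sarvam"                 # Case 2b: mixed Indian languages
        return "google"                     # Case 2c: neither
    # Case 3: single language
    return "sarvam" if language_info.get("is_indian", False) else "google"

# A few illustrative inputs:
print(choose_provider({"language_code": "en-IN"}))                     # none
print(choose_provider({"language_code": "hi-IN", "is_indian": True}))  # sarvam
print(choose_provider({"language_code": "fr-FR", "is_indian": False})) # google
print(choose_provider({
    "language_code": "hi-IN", "is_mixed": True,
    "languages": [{"language_code": "hi-IN", "is_indian": True},
                  {"language_code": "en-IN"}],
}))                                                                    # google
```

The real `translate` method follows the same branches, but additionally sorts the mixed Indian languages by confidence before picking the source language for Sarvam.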

      So, we’ve done it.

      I’ve included the complete working solutions for you in the GitHub Link.

We’ll cover the detailed performance testing, optimized configurations & many other useful details in our next post.

      Till then, Happy Avenging! 🙂

      Real-time video summary assistance App – Part 1

      Today, we’ll discuss another topic in our two-part series. We will understand the importance of the MCP protocol for communicating between agents.

This will be an in-depth, highly technical discussion, supported by easy-to-understand visuals.

      But, before that, let us understand the demo first.

      Isn’t it exciting?


      Let us first understand in easy language about the MCP protocol.

      MCP (Multi-Agent Communication Protocol) is a custom message exchange system that facilitates structured and scalable communication among multiple AI agents operating within an application. These agents collaborate asynchronously or in real-time to complete complex tasks by sharing results, context, and commands through a common messaging layer.

      How MCP Protocol Helps:

• Agent-Oriented Architecture: Each agent handles a focused task, improving modularity and scalability.
• Event-Driven Message Passing: Agents communicate based on triggers, not polling, leading to faster and more efficient responses.
• Structured Communication Format: All messages follow a standard format (e.g., JSON) with metadata for sender, recipient, type, and payload.
• State Preservation: Agents maintain context across messages using memory (e.g., ConversationBufferMemory) to ensure coherence.

      How It Works (Step-by-Step):

      • 📥 User uploads or streams a video.
      • 🧑‍💻 MCP Protocol triggers the Transcription Agent to start converting audio into text.
      • 🌐 Translation Agent receives this text (if a different language is needed).
      • 🧾 Summarization Agent receives the translated or original transcript and generates a concise summary.
      • 📚 Research Agent checks for references or terminology used in the video.
      • 📄 Documentation Agent compiles the output into a structured report.
      • 🔁 All communication between agents flows through MCP, ensuring consistent message delivery and coordination.

Now, let us understand the solution that we intend to implement:

      This app provides live summarization and contextual insights from videos such as webinars, interviews, or YouTube recordings using multiple cooperating AI agents. These agents may include:

      • Transcription Agent: Converts spoken words to text.
      • Translation Agent: Translates text to different languages (if needed).
      • Summarization Agent: Generates concise summaries.
      • Research Agent: Finds background or supplementary data related to the discussion.
      • Documentation Agent: Converts outputs into structured reports or learning materials.

We need to understand one more thing before diving deep into the code. Part of your conversation may be mixed, like part Hindi & part English. In that case, the app will break the sentences into chunks & then convert all of them into the same language. Hence, the following rules are applied while translating the sentences –


Now, we will go through the basic frame of the system & try to understand how it fits all the principles that we discussed above, mapped against the specific technologies –

      1. Documentation Agent built with the LangChain framework
      2. Research Agent built with the AutoGen framework
      3. MCP Broker for seamless communication between agents

      Let us understand from the given picture the flow of the process that our app is trying to implement –


      Great! So, now, we’ll focus on some of the key Python scripts & go through their key features.

But, before that, here is the group of scripts, each of which belongs to a specific task:

      • clsMCPMessage.py
      • clsMCPBroker.py
      • clsYouTubeVideoProcessor.py
      • clsLanguageDetector.py
      • clsTranslationAgent.py
      • clsTranslationService.py
      • clsDocumentationAgent.py
      • clsResearchAgent.py

Now, we’ll review some of these scripts in this post and continue with the rest in the next post.

import queue
import time
import uuid
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, Field


class clsMCPMessage(BaseModel):
          """Message format for MCP protocol"""
          id: str = Field(default_factory=lambda: str(uuid.uuid4()))
          timestamp: float = Field(default_factory=time.time)
          sender: str
          receiver: str
          message_type: str  # "request", "response", "notification"
          content: Dict[str, Any]
          reply_to: Optional[str] = None
          conversation_id: str
          metadata: Dict[str, Any] = {}
          
      class clsMCPBroker:
          """Message broker for MCP protocol communication between agents"""
          
          def __init__(self):
              self.message_queues: Dict[str, queue.Queue] = {}
              self.subscribers: Dict[str, List[str]] = {}
              self.conversation_history: Dict[str, List[clsMCPMessage]] = {}
          
          def register_agent(self, agent_id: str) -> None:
              """Register an agent with the broker"""
              if agent_id not in self.message_queues:
                  self.message_queues[agent_id] = queue.Queue()
                  self.subscribers[agent_id] = []
          
          def subscribe(self, subscriber_id: str, publisher_id: str) -> None:
              """Subscribe an agent to messages from another agent"""
              if publisher_id in self.subscribers:
                  if subscriber_id not in self.subscribers[publisher_id]:
                      self.subscribers[publisher_id].append(subscriber_id)
          
          def publish(self, message: clsMCPMessage) -> None:
              """Publish a message to its intended receiver"""
              # Store in conversation history
              if message.conversation_id not in self.conversation_history:
                  self.conversation_history[message.conversation_id] = []
              self.conversation_history[message.conversation_id].append(message)
              
              # Deliver to direct receiver
              if message.receiver in self.message_queues:
                  self.message_queues[message.receiver].put(message)
              
              # Deliver to subscribers of the sender
              for subscriber in self.subscribers.get(message.sender, []):
                  if subscriber != message.receiver:  # Avoid duplicates
                      self.message_queues[subscriber].put(message)
          
          def get_message(self, agent_id: str, timeout: Optional[float] = None) -> Optional[clsMCPMessage]:
              """Get a message for the specified agent"""
              try:
                  return self.message_queues[agent_id].get(timeout=timeout)
              except (queue.Empty, KeyError):
                  return None
          
          def get_conversation_history(self, conversation_id: str) -> List[clsMCPMessage]:
              """Get the history of a conversation"""
              return self.conversation_history.get(conversation_id, [])

      Imagine a system where different virtual agents (like robots or apps) need to talk to each other. To do that, they send messages back and forth—kind of like emails or text messages. This code is responsible for:

      • Making sure those messages are properly written (like filling out all parts of a form).
      • Making sure messages are delivered to the right people.
      • Keeping a record of conversations so you can go back and review what was said.

      This part (clsMCPMessage) is like a template or a form that every message needs to follow. Each message has:

      • ID: A unique number so every message is different (like a serial number).
      • Time Sent: When the message was created.
      • Sender & Receiver: Who sent the message and who is supposed to receive it.
      • Type of Message: Is it a request, a response, or just a notification?
      • Content: The actual information or question the message is about.
      • Reply To: If this message is answering another one, this tells which one.
      • Conversation ID: So we know which group of messages belongs to the same conversation.
      • Extra Info (Metadata): Any other small details that might help explain the message.
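To see the form filled out, here is a plain-dict illustration matching the clsMCPMessage fields above (the real class validates these via pydantic; the agent names are made up for this example):

```python
import time
import uuid

# A plain-dict illustration of a message following the clsMCPMessage template
# above. Field names match the class; agent names here are hypothetical.
message = {
    "id": str(uuid.uuid4()),                 # unique serial number
    "timestamp": time.time(),                # when the message was created
    "sender": "transcription_agent",
    "receiver": "summarization_agent",
    "message_type": "request",               # "request" | "response" | "notification"
    "content": {"transcript": "Welcome to the webinar..."},
    "reply_to": None,                        # not answering an earlier message
    "conversation_id": "conv-001",           # groups related messages together
    "metadata": {"language": "en"},
}

print(sorted(message))   # the nine field names, alphabetically
```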

      This (clsMCPBroker) is the system (or “post office”) that makes sure messages get to where they’re supposed to go. Here’s what it does:

      1. Registering an Agent

      • Think of this like signing up a new user in the system.
      • Each agent gets their own personal mailbox (called a “message queue”) so others can send them messages.

      2. Subscribing to Another Agent

      • If Agent A wants to receive copies of messages from Agent B, they can “subscribe” to B.
      • This is like signing up for B’s newsletter—whenever B sends something, A gets a copy.

      3. Sending a Message

      • When someone sends a message:
        • It is saved into a conversation history (like keeping emails in your inbox).
        • It is delivered to the main person it was meant for.
        • And, if anyone subscribed to the sender, they get a copy too—unless they’re already the main receiver (to avoid sending duplicates).

      4. Receiving Messages

      • Each agent can check their personal mailbox to see if they got any new messages.
      • If there are no messages, they’ll either wait for some time or move on.

      5. Viewing Past Conversations

      • You can look up all messages that were part of a specific conversation.
      • This is helpful for remembering what was said earlier.

      In systems where many different smart tools or services need to work together and communicate, this kind of communication system makes sure everything is:

      • Organized
      • Delivered correctly
      • Easy to trace back when needed
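To see the post-office analogy in action, here is a condensed, self-contained sketch of the broker flow (plain dicts stand in for clsMCPMessage, and the conversation history is omitted; agent names are illustrative):

```python
import queue


class MiniBroker:
    """Condensed sketch of the clsMCPBroker flow above, using plain dicts
    as messages so the example stays self-contained."""

    def __init__(self):
        self.message_queues = {}   # agent_id -> personal mailbox
        self.subscribers = {}      # publisher_id -> list of subscriber ids

    def register_agent(self, agent_id):
        self.message_queues.setdefault(agent_id, queue.Queue())
        self.subscribers.setdefault(agent_id, [])

    def subscribe(self, subscriber_id, publisher_id):
        if subscriber_id not in self.subscribers.get(publisher_id, []):
            self.subscribers[publisher_id].append(subscriber_id)

    def publish(self, message):
        # Deliver to the direct receiver...
        if message["receiver"] in self.message_queues:
            self.message_queues[message["receiver"]].put(message)
        # ...and to subscribers of the sender (skipping the receiver itself).
        for sub in self.subscribers.get(message["sender"], []):
            if sub != message["receiver"]:
                self.message_queues[sub].put(message)

    def get_message(self, agent_id, timeout=None):
        try:
            return self.message_queues[agent_id].get(timeout=timeout)
        except (queue.Empty, KeyError):
            return None


broker = MiniBroker()
for agent in ("transcriber", "summarizer", "auditor"):
    broker.register_agent(agent)
broker.subscribe("auditor", "transcriber")   # auditor gets copies of transcriber's mail

broker.publish({"sender": "transcriber", "receiver": "summarizer",
                "content": {"text": "hello"}})

print(broker.get_message("summarizer")["content"]["text"])  # hello
print(broker.get_message("auditor")["sender"])              # transcriber
print(broker.get_message("transcriber", timeout=0.01))      # None
```

Note how the auditor receives a copy purely through its subscription, while the summarizer receives the message as the direct addressee.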

So, we’ll finish here for this post. We’ll cover the rest in the next post.

      I’ll bring some more exciting topics in the coming days from the Python verse.

      Till then, Happy Avenging!  🙂

      Building & deploying a RAG architecture rapidly using Langflow & Python

      I’ve been looking for a solution that can help deploy any RAG solution involving Python faster. It would be more effective if an available UI helped deliver the solution faster. And, here comes the solution that does exactly what I needed – “LangFlow.”

      Before delving into the details, I strongly recommend taking a look at the demo. It’s a great way to get a comprehensive understanding of LangFlow and its capabilities in deploying RAG architecture rapidly.

      Demo

This covers the entire architecture; hence, I’ll now share the architecture components I used to build the solution.

      To know more about RAG-Architecture, please refer to the following link.

As we all know, we can parse the data from the source website URL (in this case, I’m referring to my photography website, from which I extract the text of one of my blogs) and then embed it into the newly created Astra DB collection, where I will be storing the vector embeddings.

As you can see from the above diagram, I configured the flow within 5 minutes, gaining the full functionality of a complete solution (an underlying Python application) that extracts chunks, converts them into embeddings, and finally stores them inside the Astra DB.

Now, let us understand the next phase, where, based on the question asked in the chatbot, I need to convert that question into a vector embedding & then run a similarity search to bring back the relevant vectors, as shown below –

      You need to configure this entire flow by dragging the necessary widgets from the left-side panel as marked in the Blue-Box shown below –

For this specific use case, we’ve created an instance of Astra DB & then created an empty vector collection. Also, we need to ensure that we generate the API key & assign the right roles to the token. After successfully creating the token, you need to copy the endpoint, token & collection details & paste them into the desired fields of the Astra-DB components inside the LangFlow. Think of it as a framework where one needs to provide all the necessary information to build & run the entire flow successfully.

      Following are some of the important snapshots from the Astra-DB –

      Step – 1

      Step – 2

Once you run the vector DB population, this will insert the extracted text & then convert it into vectors, which are shown in the following screenshot –

      You can see the sample vectors along with the text chunks inside the Astra DB data explorer as shown below –

Some of the critical components, which are important for monitoring the vector embeddings, are highlighted in the Blue-box.

      Now, here is how you can modify the current Python code of any available widgets or build your own widget by using the custom widget.

      The first step is to click the code button highlighted in the Red-box as shown below –

When you click that button, it opens the detailed Python code representing the entire widget’s build & its functionality. Here, you can add to, modify, or keep the code as it is, depending upon your need, as shown below –

Once you build the entire solution, you must click the final compile button (shown in the red box), which compiles all the individual widgets. However, you can also compile each widget individually as soon as you build it, so you can pinpoint any potential problems at that very step.

      Let us understand one sample code of a widget. In this case, we will take vector embedding insertion into the Astra DB. Let us see the code –

      from typing import List, Optional, Union
      from langchain_astradb import AstraDBVectorStore
      from langchain_astradb.utils.astradb import SetupMode
      
      from langflow.custom import CustomComponent
      from langflow.field_typing import Embeddings, VectorStore
      from langflow.schema import Record
      from langchain_core.retrievers import BaseRetriever
      
      
      class AstraDBVectorStoreComponent(CustomComponent):
          display_name = "Astra DB"
          description = "Builds or loads an Astra DB Vector Store."
          icon = "AstraDB"
          field_order = ["token", "api_endpoint", "collection_name", "inputs", "embedding"]
      
          def build_config(self):
              return {
                  "inputs": {
                      "display_name": "Inputs",
                      "info": "Optional list of records to be processed and stored in the vector store.",
                  },
                  "embedding": {"display_name": "Embedding", "info": "Embedding to use"},
                  "collection_name": {
                      "display_name": "Collection Name",
                      "info": "The name of the collection within Astra DB where the vectors will be stored.",
                  },
                  "token": {
                      "display_name": "Token",
                      "info": "Authentication token for accessing Astra DB.",
                      "password": True,
                  },
                  "api_endpoint": {
                      "display_name": "API Endpoint",
                      "info": "API endpoint URL for the Astra DB service.",
                  },
                  "namespace": {
                      "display_name": "Namespace",
                      "info": "Optional namespace within Astra DB to use for the collection.",
                      "advanced": True,
                  },
                  "metric": {
                      "display_name": "Metric",
                      "info": "Optional distance metric for vector comparisons in the vector store.",
                      "advanced": True,
                  },
                  "batch_size": {
                      "display_name": "Batch Size",
                      "info": "Optional number of records to process in a single batch.",
                      "advanced": True,
                  },
                  "bulk_insert_batch_concurrency": {
                      "display_name": "Bulk Insert Batch Concurrency",
                      "info": "Optional concurrency level for bulk insert operations.",
                      "advanced": True,
                  },
                  "bulk_insert_overwrite_concurrency": {
                      "display_name": "Bulk Insert Overwrite Concurrency",
                      "info": "Optional concurrency level for bulk insert operations that overwrite existing records.",
                      "advanced": True,
                  },
                  "bulk_delete_concurrency": {
                      "display_name": "Bulk Delete Concurrency",
                      "info": "Optional concurrency level for bulk delete operations.",
                      "advanced": True,
                  },
                  "setup_mode": {
                      "display_name": "Setup Mode",
                      "info": "Configuration mode for setting up the vector store, with options likeSync,Async, orOff”.",
                      "options": ["Sync", "Async", "Off"],
                      "advanced": True,
                  },
                  "pre_delete_collection": {
                      "display_name": "Pre Delete Collection",
                      "info": "Boolean flag to determine whether to delete the collection before creating a new one.",
                      "advanced": True,
                  },
                  "metadata_indexing_include": {
                      "display_name": "Metadata Indexing Include",
                      "info": "Optional list of metadata fields to include in the indexing.",
                      "advanced": True,
                  },
                  "metadata_indexing_exclude": {
                      "display_name": "Metadata Indexing Exclude",
                      "info": "Optional list of metadata fields to exclude from the indexing.",
                      "advanced": True,
                  },
                  "collection_indexing_policy": {
                      "display_name": "Collection Indexing Policy",
                      "info": "Optional dictionary defining the indexing policy for the collection.",
                      "advanced": True,
                  },
              }
      
          def build(
              self,
              embedding: Embeddings,
              token: str,
              api_endpoint: str,
              collection_name: str,
              inputs: Optional[List[Record]] = None,
              namespace: Optional[str] = None,
              metric: Optional[str] = None,
              batch_size: Optional[int] = None,
              bulk_insert_batch_concurrency: Optional[int] = None,
              bulk_insert_overwrite_concurrency: Optional[int] = None,
              bulk_delete_concurrency: Optional[int] = None,
              setup_mode: str = "Sync",
              pre_delete_collection: bool = False,
              metadata_indexing_include: Optional[List[str]] = None,
              metadata_indexing_exclude: Optional[List[str]] = None,
              collection_indexing_policy: Optional[dict] = None,
          ) -> Union[VectorStore, BaseRetriever]:
              try:
                  setup_mode_value = SetupMode[setup_mode.upper()]
              except KeyError:
                  raise ValueError(f"Invalid setup mode: {setup_mode}")
              if inputs:
                  documents = [_input.to_lc_document() for _input in inputs]
      
                  vector_store = AstraDBVectorStore.from_documents(
                      documents=documents,
                      embedding=embedding,
                      collection_name=collection_name,
                      token=token,
                      api_endpoint=api_endpoint,
                      namespace=namespace,
                      metric=metric,
                      batch_size=batch_size,
                      bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,
                      bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,
                      bulk_delete_concurrency=bulk_delete_concurrency,
                      setup_mode=setup_mode_value,
                      pre_delete_collection=pre_delete_collection,
                      metadata_indexing_include=metadata_indexing_include,
                      metadata_indexing_exclude=metadata_indexing_exclude,
                      collection_indexing_policy=collection_indexing_policy,
                  )
              else:
                  vector_store = AstraDBVectorStore(
                      embedding=embedding,
                      collection_name=collection_name,
                      token=token,
                      api_endpoint=api_endpoint,
                      namespace=namespace,
                      metric=metric,
                      batch_size=batch_size,
                      bulk_insert_batch_concurrency=bulk_insert_batch_concurrency,
                      bulk_insert_overwrite_concurrency=bulk_insert_overwrite_concurrency,
                      bulk_delete_concurrency=bulk_delete_concurrency,
                      setup_mode=setup_mode_value,
                      pre_delete_collection=pre_delete_collection,
                      metadata_indexing_include=metadata_indexing_include,
                      metadata_indexing_exclude=metadata_indexing_exclude,
                      collection_indexing_policy=collection_indexing_policy,
                  )
      
              return vector_store
      

      Method: build_config:

      • This method defines the configuration options for the component.
      • Each configuration option includes a display_name and info, which provides details about the option.
      • Some options are marked as advanced, indicating they are optional and more complex.

      Method: build:

      • This method is used to create an instance of the Astra DB Vector Store.
      • It takes several parameters, including embedding, token, api_endpoint, collection_name, and various optional parameters.
      • It converts the setup_mode string to an enum value.
      • If inputs are provided, they are converted to a format suitable for storing in the vector store.
      • Depending on whether inputs are provided, a new vector store from documents can be created, or an empty vector store can be initialized with the given configurations.
      • Finally, it returns the created vector store instance.
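The string-to-enum conversion at the top of `build` is a small but reusable pattern. Here is an isolated sketch of it, using a stand-in enum (the real `SetupMode` lives in `langchain_astradb`, so this one is defined locally just for illustration):

```python
from enum import Enum, auto


# Stand-in for langchain_astradb's SetupMode, defined locally just to
# illustrate the string-to-enum conversion used in build().
class SetupMode(Enum):
    SYNC = auto()
    ASYNC = auto()
    OFF = auto()


def parse_setup_mode(setup_mode: str) -> SetupMode:
    """Mirror of the try/except in build(): uppercase the option name,
    look it up by member name, and fail loudly on anything unrecognized."""
    try:
        return SetupMode[setup_mode.upper()]
    except KeyError:
        raise ValueError(f"Invalid setup mode: {setup_mode}")


print(parse_setup_mode("Sync"))    # SetupMode.SYNC
print(parse_setup_mode("off"))     # SetupMode.OFF
try:
    parse_setup_mode("Lazy")
except ValueError as e:
    print(e)                       # Invalid setup mode: Lazy
```

Converting early like this means the rest of the method can rely on a validated enum value instead of re-checking strings.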

And, here is the screenshot of your run –

And this is the last step to run the integrated chatbot, as shown below –

As one can see, the highlighted left side shows the reference text & chunks, & the right side shows the actual response.


So, we’ve done it. And here’s the fun fact: I did this entire workflow in just 35 minutes. 😛

      I’ll bring some more exciting topics in the coming days from the Python verse.

      To learn more about LangFlow, please click here.

      To learn about Astra DB, you need to click the following link.

      To learn about my blog & photography, you can click the following url.

      Till then, Happy Avenging!  🙂

      Navigating the Future of Work: Insights from the Argyle AI Summit

      At the recent Argyle AI Summit, a prestigious event in the AI industry, I had the honor of participating as a speaker alongside esteemed professionals like Misha Leybovich from Google Labs. The summit, coordinated by Sylvia Das Chagas, a former senior AI conversation designer at CVS Health, provided an enlightening platform to discuss the evolving role of AI in talent management. Our session focused on the theme “Driving Talent with AI,” addressing some of the most pressing questions in the field. Frequently, relevant use cases were shared in detail to support these threads.

      To view the actual page, please click the following link.

      One of the critical topics we explored was AI’s impact on talent management in the upcoming year. AI’s influence in hiring and retention is becoming increasingly significant. For example, AI-powered tools can now analyze vast amounts of data to identify the best candidates for a role, going beyond traditional resume screening. In retention, AI is instrumental in identifying patterns that indicate an employee’s likelihood to leave, enabling proactive measures.

      A burning question in AI is how leaders address fears that AI might replace manual jobs. We discussed the importance of leaders framing AI as a complement to human skills rather than a replacement. AI enhances employee capabilities by automating mundane tasks, allowing employees to focus on more creative and strategic work.

      Regarding new AI tools that organizations should watch out for, the conversation highlighted tools that enhance remote collaboration and workplace inclusivity. Tools like virtual meeting assistants that can transcribe, translate, and summarize meetings in real time are becoming invaluable in today’s global work environment.

      AI’s role in boosting employee motivation and productivity was another focal point. We discussed how AI-driven career development programs can offer personalized learning paths, helping employees grow and stay motivated.

      Incorporating multiple languages in tools like ChatGPT was highlighted as a critical step towards inclusivity. This expansion allows a broader range of employees to interact with AI tools in their native language, fostering a more inclusive workplace environment.

      Lastly, we tackled the challenge of addressing employees’ reluctance to change. Emphasizing the importance of transparent communication and education about AI’s benefits was identified as key. Organizations can alleviate fears and encourage a more accepting attitude towards AI by involving employees in the AI implementation process and providing training.

      The Argyle AI Summit offered a compelling glimpse into the future of AI in talent management. The session provided valuable insights for leaders looking to harness AI’s potential to enhance talent management strategies by discussing real-world examples and strategies. To gain more in-depth knowledge and perspectives shared during this summit, I encourage interested parties to visit the recorded session link for a more comprehensive understanding.

      Or, you can directly view it from here –


      I would greatly appreciate your feedback on the insights shared during the summit. Your thoughts and perspectives are invaluable as we continue to explore and navigate the evolving landscape of AI in the workplace.

      Enable OpenAI chatbot with the selected YouTube video content using LangChain, FAISS & YouTube data-API.

Today, I’m very excited to demonstrate an effortless & new way to extract the transcript from YouTube videos & then answer questions based on the topics selected by the users. In this post, I plan to take the user’s inputs first & then summarize the selected video content through useful advanced analytics with the help of LangChain & an OpenAI-based model.

      In this post, I’ve directly subscribed to OpenAI & I’m not using OpenAI from Azure. However, I’ll explore that in the future as well.
      Before I explain the process to invoke this new library, why not view the demo first & then discuss it?

      Demo

      Isn’t it very exciting? This will lead to a whole new ballgame, where one can get critical decision-making information from these human sources along with their traditional advanced analytical data.

      How will it help?

Let’s say, as per your historical data & analytics, the dashboard is recommending prod-A, prod-B & prod-C as the top three products for potential top-performing brands. Meanwhile, you are getting alerts from the TV news about prod-B due to recent incidents. In that case, you don’t want to continue with the prod-B investment. You may instead find a new product named prod-Z, which may reduce the risk of your investment.


      What is LangChain?

      LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model but will also be:

      1. Data-aware: connect a language model to other sources of data
      2. Agentic: allow a language model to interact with its environment

      The LangChain framework works around these principles.

      To know more about this, please click the following link.

As you can see, this is one of the critical components in our solution; it binds the OpenAI bot & feeds it the necessary data to provide the correct response.


      What is FAISS?

      Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that do not fit in RAM. It also has supporting code for evaluation and parameter tuning.

Faiss is developed in C++ with complete wrappers for Python, and some of its most beneficial algorithms are available on both CPU & GPU. It is developed by Facebook AI Research.

      To know more about this, please click the following link.
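To make the idea concrete, here is a tiny, illustrative sketch (using plain NumPy, not Faiss itself) of the nearest-neighbour search that a flat L2 Faiss index performs under the hood. The function name is mine, for illustration only:

```python
import numpy as np

def nearest_neighbours(index_vectors, query, k=2):
    # Squared L2 distance from the query to every indexed vector,
    # then the indices of the k closest vectors, nearest first.
    dists = np.sum((index_vectors - query) ** 2, axis=1)
    return np.argsort(dists)[:k]

vectors = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(nearest_neighbours(vectors, np.array([0.9, 1.1])))  # [1 0]
```

Faiss performs exactly this kind of search, but with index structures (and optional GPU kernels) that scale it to vector sets far larger than RAM allows for brute force.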


      FLOW OF EVENTS:

      Let us look at the flow diagram as it captures the sequence of events that unfold as part of the process.

      Here are the steps that will follow in sequence –

• The application will first take the topic it needs to look up on YouTube & find the top 5 videos using the YouTube data-API.
• Once the above step returns the list of video URLs, LangChain will drive the application to extract the transcripts from the videos & then break the content into smaller chunks to keep the costly OpenAI calls in check. During this step, it will invoke FAISS to create the document DBs.
• Finally, it will send only the most relevant chunks to OpenAI, along with your supplied template, which performs the final analysis on the minimal data required for your query & returns the appropriate response at a lower cost.

      CODE:

Why don’t we go through the code for this particular use case?

• clsConfigClient.py (This Python script contains all the configuration parameters & keys used by the application.)


      ################################################
      #### Written By: SATYAKI DE ####
      #### Written On: 15-May-2020 ####
      #### Modified On: 28-May-2023 ####
      #### ####
      #### Objective: This script is a config ####
      #### file, contains all the keys for ####
      #### personal OpenAI-based video content ####
      #### enable bot. ####
      #### ####
      ################################################
import os
import platform as pl

class clsConfigClient(object):
    Curr_Path = os.path.dirname(os.path.realpath(__file__))

    os_det = pl.system()
    if os_det == "Windows":
        sep = '\\'
    else:
        sep = '/'

    conf = {
        'APP_ID': 1,
        'ARCH_DIR': Curr_Path + sep + 'arch' + sep,
        'PROFILE_PATH': Curr_Path + sep + 'profile' + sep,
        'LOG_PATH': Curr_Path + sep + 'log' + sep,
        'DATA_PATH': Curr_Path + sep + 'data' + sep,
        'MODEL_PATH': Curr_Path + sep + 'model' + sep,
        'TEMP_PATH': Curr_Path + sep + 'temp' + sep,
        'MODEL_DIR': 'model',
        'APP_DESC_1': 'LangChain Demo!',
        'DEBUG_IND': 'N',
        'INIT_PATH': Curr_Path,
        'FILE_NAME': 'Output.csv',
        'MODEL_NAME': 'gpt-3.5-turbo',
        'OPEN_AI_KEY': "sk-kfrjfijdrkidjkfjd9474nbfjfkfjfhfhf84i84hnfhjdbv6Bgvv",
        'YOUTUBE_KEY': "AIjfjfUYGe64hHJ-LOFO5u-mkso9pPOJGFU",
        'TITLE': "LangChain Demo!",
        'TEMP_VAL': 0.2,
        'PATH' : Curr_Path,
        'MAX_CNT' : 5,
        'OUT_DIR': 'data'
    }

Some of the key entries from the above script are as follows –

      'MODEL_NAME': 'gpt-3.5-turbo',
      'OPEN_AI_KEY': "sk-kfrjfijdrkidjkfjd9474nbfjfkfjfhfhf84i84hnfhjdbv6Bgvv",
      'YOUTUBE_KEY': "AIjfjfUYGe64hHJ-LOFO5u-mkso9pPOJGFU",
      'TEMP_VAL': 0.2,

From the above code snippet, one can understand that we need API keys for both YouTube & OpenAI. They have separate costs & usage, which I’ll share later in the post. Also, notice that the temperature is set to 0.2 (the range is between 0 & 1). That means our AI bot will be consistent in its responses. And our application will use the gpt-3.5-turbo model for its analytical responses.

      • clsTemplate.py (Contains all the templates for OpenAI.)


      ################################################
      #### Written By: SATYAKI DE ####
      #### Written On: 27-May-2023 ####
      #### Modified On: 28-May-2023 ####
      #### ####
      #### Objective: This script is a config ####
      #### file, contains all the template for ####
      #### OpenAI prompts to get the correct ####
      #### response. ####
      #### ####
      ################################################
      # Template to use for the system message prompt
      templateVal_1 = """
You are a helpful assistant that can answer questions about youtube videos
      based on the video's transcript: {docs}
      Only use the factual information from the transcript to answer the question.
      If you feel like you don't have enough information to answer the question, say "I don't know".
      Your answers should be verbose and detailed.
      """


The above code is self-explanatory. Here, we’re supplying the instructions that keep OpenAI’s responses within these guidelines.

      • clsVideoContentScrapper.py (Main class to extract the transcript from the YouTube videos & then answer the questions based on the topics selected by the users.)


      #####################################################
      #### Written By: SATYAKI DE ####
      #### Written On: 27-May-2023 ####
      #### Modified On 28-May-2023 ####
      #### ####
      #### Objective: This is the main calling ####
      #### python class that will invoke the ####
      #### LangChain of package to extract ####
      #### the transcript from the YouTube videos & ####
      #### then answer the questions based on the ####
      #### topics selected by the users. ####
      #### ####
      #####################################################
from langchain.document_loaders import YoutubeLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from googleapiclient.discovery import build

import clsTemplate as ct
from clsConfigClient import clsConfigClient as cf

import os

###############################################
###           Global Section                ###
###############################################
open_ai_Key = cf.conf['OPEN_AI_KEY']
os.environ["OPENAI_API_KEY"] = open_ai_Key
embeddings = OpenAIEmbeddings(openai_api_key=open_ai_Key)

YouTube_Key = cf.conf['YOUTUBE_KEY']
youtube = build('youtube', 'v3', developerKey=YouTube_Key)

# Disabling Warning
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

###############################################
###        End of Global Section            ###
###############################################

class clsVideoContentScrapper:
    def __init__(self):
        self.model_name = cf.conf['MODEL_NAME']
        self.temp_val = cf.conf['TEMP_VAL']
        self.max_cnt = int(cf.conf['MAX_CNT'])

    def createDBFromYoutubeVideoUrl(self, video_url):
        try:
            loader = YoutubeLoader.from_youtube_url(video_url)
            transcript = loader.load()

            text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
            docs = text_splitter.split_documents(transcript)

            db = FAISS.from_documents(docs, embeddings)
            return db

        except Exception as e:
            x = str(e)
            print('Error: ', x)
            return ''

    def getResponseFromQuery(self, db, query, k=4):
        try:
            """
            gpt-3.5-turbo can handle up to 4097 tokens. Setting the chunksize to 1000 and k to 4 maximizes
            the number of tokens to analyze.
            """

            mod_name = self.model_name
            temp_val = self.temp_val

            docs = db.similarity_search(query, k=k)
            docs_page_content = " ".join([d.page_content for d in docs])

            chat = ChatOpenAI(model_name=mod_name, temperature=temp_val)

            # Template to use for the system message prompt
            template = ct.templateVal_1
            system_message_prompt = SystemMessagePromptTemplate.from_template(template)

            # Human question prompt
            human_template = "Answer the following question: {question}"
            human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

            chat_prompt = ChatPromptTemplate.from_messages(
                [system_message_prompt, human_message_prompt]
            )

            chain = LLMChain(llm=chat, prompt=chat_prompt)

            response = chain.run(question=query, docs=docs_page_content)
            response = response.replace("\n", "")
            return response, docs

        except Exception as e:
            x = str(e)
            print('Error: ', x)
            return '', ''

    def topFiveURLFromYouTube(self, service, **kwargs):
        try:
            video_urls = []
            channel_list = []
            results = service.search().list(**kwargs).execute()

            for item in results['items']:
                print("Title: ", item['snippet']['title'])
                print("Description: ", item['snippet']['description'])
                channel = item['snippet']['channelId']
                print("Channel Id: ", channel)

                # Fetch the channel name using the channel ID
                channel_response = service.channels().list(part='snippet', id=item['snippet']['channelId']).execute()
                channel_title = channel_response['items'][0]['snippet']['title']
                print("Channel Title: ", channel_title)
                channel_list.append(channel_title)

                print("Video Id: ", item['id']['videoId'])
                vidURL = "https://www.youtube.com/watch?v=" + item['id']['videoId']
                print("Video URL: " + vidURL)
                video_urls.append(vidURL)
                print("\n")

            return video_urls, channel_list

        except Exception as e:
            video_urls = []
            channel_list = []
            x = str(e)
            print('Error: ', x)

            return video_urls, channel_list

    def extractContentInText(self, topic, query):
        try:
            discussedTopic = []
            strKeyText = ''
            cnt = 0
            max_cnt = self.max_cnt

            urlList, channelList = self.topFiveURLFromYouTube(youtube, q=topic, part='id,snippet', maxResults=max_cnt, type='video')
            print('Returned List: ')
            print(urlList)
            print()

            for video_url in urlList:
                print('Processing Video: ')
                print(video_url)
                db = self.createDBFromYoutubeVideoUrl(video_url)

                response, docs = self.getResponseFromQuery(db, query)

                if len(response) > 0:
                    strKeyText = 'As per the topic discussed in ' + channelList[cnt] + ', '
                    discussedTopic.append(strKeyText + response)

                cnt += 1

            return discussedTopic
        except Exception as e:
            discussedTopic = []
            x = str(e)
            print('Error: ', x)

            return discussedTopic

      Let us understand the key methods step by step in detail –

      def topFiveURLFromYouTube(self, service, **kwargs):
          try:
              video_urls = []
              channel_list = []
              results = service.search().list(**kwargs).execute()
      
              for item in results['items']:
                  print("Title: ", item['snippet']['title'])
                  print("Description: ", item['snippet']['description'])
                  channel = item['snippet']['channelId']
                  print("Channel Id: ", channel)
      
                  # Fetch the channel name using the channel ID
                  channel_response = service.channels().list(part='snippet',id=item['snippet']['channelId']).execute()
                  channel_title = channel_response['items'][0]['snippet']['title']
                  print("Channel Title: ", channel_title)
                  channel_list.append(channel_title)
      
                  print("Video Id: ", item['id']['videoId'])
                  vidURL = "https://www.youtube.com/watch?v=" + item['id']['videoId']
                  print("Video URL: " + vidURL)
                  video_urls.append(vidURL)
                  print("\n")
      
              return video_urls, channel_list
      
          except Exception as e:
              video_urls = []
              channel_list = []
              x = str(e)
              print('Error: ', x)
      
              return video_urls, channel_list

The above code fetches the most relevant YouTube URLs, binds them into a list along with the channel names & then returns both lists to the caller.

      def createDBFromYoutubeVideoUrl(self, video_url):
          try:
              loader = YoutubeLoader.from_youtube_url(video_url)
              transcript = loader.load()
      
              text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
              docs = text_splitter.split_documents(transcript)
      
              db = FAISS.from_documents(docs, embeddings)
              return db
      
          except Exception as e:
              x = str(e)
              print('Error: ', x)
              return ''

The provided Python code defines a function createDBFromYoutubeVideoUrl, which creates a database of text documents from the transcript of a YouTube video. Here’s the explanation in simple English:

1. The function createDBFromYoutubeVideoUrl is defined with one argument: video_url.
2. The function uses a try-except block to handle any potential exceptions or errors that may occur.
3. Inside the try block, the following steps are performed:
• First, it creates a YoutubeLoader object from the provided video_url. This object is responsible for interacting with the YouTube video specified by the URL.
• The loader object then loads the transcript of the video, which is the text version of everything spoken in the video.
• It then creates a RecursiveCharacterTextSplitter object with a specified chunk_size of 1000 and chunk_overlap of 100. This object splits the transcript into smaller chunks (documents) of text for easier processing or analysis. Each piece will be around 1000 characters long, and there will be an overlap of 100 characters between consecutive chunks.
• The split_documents method of the text_splitter object splits the transcript into smaller documents. These documents are stored in the docs variable.
• The FAISS.from_documents method is then called with docs and embeddings as arguments to create a FAISS (Facebook AI Similarity Search) index. This index is a database used for efficient similarity search and clustering of high-dimensional vectors, which in this case are the embeddings of the documents. The FAISS index is stored in the db variable.
• Finally, the db variable is returned, representing the database created from the video transcript.

      4. If an exception occurs during the execution of the try block, the code execution moves to the except block:

      • Here, it first converts the exception e to a string x.
      • Then it prints an error message.
      • Finally, it returns an empty string as an indication of the error.
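The size/overlap mechanics described above can be sketched in plain Python. This is a simplified, hypothetical stand-in — LangChain’s RecursiveCharacterTextSplitter is smarter, preferring to break on separators such as newlines — but the chunk_size/chunk_overlap arithmetic is the same idea:

```python
def split_with_overlap(text, chunk_size=1000, chunk_overlap=100):
    # Advance by (chunk_size - chunk_overlap) so that consecutive
    # chunks share chunk_overlap characters of context.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A 2,500-character transcript yields chunks of 1000, 1000 and 700 characters.
chunks = split_with_overlap("x" * 2500)
print([len(c) for c in chunks])  # [1000, 1000, 700]
```

The overlap matters because a sentence cut in half at a chunk boundary would otherwise lose its meaning for the similarity search.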

      def getResponseFromQuery(self, db, query, k=4):
            try:
                """
                gpt-3.5-turbo can handle up to 4097 tokens. Setting the chunksize to 1000 and k to 4 maximizes
                the number of tokens to analyze.
                """
      
                mod_name = self.model_name
                temp_val = self.temp_val
      
                docs = db.similarity_search(query, k=k)
                docs_page_content = " ".join([d.page_content for d in docs])
      
                chat = ChatOpenAI(model_name=mod_name, temperature=temp_val)
      
                # Template to use for the system message prompt
                template = ct.templateVal_1
      
                system_message_prompt = SystemMessagePromptTemplate.from_template(template)
      
                # Human question prompt
                human_template = "Answer the following question: {question}"
                human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
      
                chat_prompt = ChatPromptTemplate.from_messages(
                    [system_message_prompt, human_message_prompt]
                )
      
                chain = LLMChain(llm=chat, prompt=chat_prompt)
      
                response = chain.run(question=query, docs=docs_page_content)
                response = response.replace("\n", "")
                return response, docs
      
            except Exception as e:
                x = str(e)
                print('Error: ', x)
      
                return '', ''

The Python function getResponseFromQuery is designed to search a given database (db) for a specific query and then generate a response using a language model (here, gpt-3.5-turbo). The answer is based on the content found and the particular question. Here is a simple English summary:

1. The function getResponseFromQuery takes three parameters: db, query, and k. The k parameter is optional and defaults to 4 if not provided. db is the database to search, query is the question to analyze, and k is the number of similar documents to return.
      2. The function initiates a try-except block for handling any errors that might occur.
      3. Inside the try block:
      • The function retrieves the model name and temperature value from the instance of the class this function is a part of.
      • The function then searches the db database for documents similar to the query and saves these in docs.
      • It concatenates the content of the returned documents into a single string docs_page_content.
      • It creates a ChatOpenAI object with the model name and temperature value.
      • It creates a system message prompt from a predefined template.
      • It creates a human message prompt, which is the query.
      • It combines these two prompts to form a chat prompt.
      • An LLMChain object is then created using the ChatOpenAI object and the chat prompt.
      • This LLMChain object is used to generate a response to the query using the content of the documents found in the database. The answer is then formatted by replacing all newline characters with empty strings.
      • Finally, the function returns this response along with the original documents.
4. If any error occurs during these operations, the function goes to the except block where:
      • The error message is printed.
      • The function returns two empty strings to indicate an error occurred, and no response or documents could be produced.
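The token math in the method’s docstring can be checked with a quick back-of-the-envelope calculation. The 4-characters-per-token figure below is a rough heuristic for English text (an assumption, not an exact tokenizer count):

```python
# Rough context-window budget for the chunk_size=1000 / k=4 choice above.
CONTEXT_LIMIT = 4097       # gpt-3.5-turbo context window, in tokens
CHUNK_SIZE_CHARS = 1000    # characters per transcript chunk
K = 4                      # number of chunks retrieved per query
CHARS_PER_TOKEN = 4        # assumed average for English text

doc_tokens = (CHUNK_SIZE_CHARS * K) // CHARS_PER_TOKEN
remaining = CONTEXT_LIMIT - doc_tokens
print(doc_tokens, remaining)  # 1000 3097
```

So the four retrieved chunks consume roughly a quarter of the window, leaving ample room for the system template, the question & a verbose answer.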

      def extractContentInText(self, topic, query):
          try:
              discussedTopic = []
              strKeyText = ''
              cnt = 0
              max_cnt = self.max_cnt
      
              urlList, channelList = self.topFiveURLFromYouTube(youtube, q=topic, part='id,snippet',maxResults=max_cnt,type='video')
              print('Returned List: ')
              print(urlList)
              print()
      
              for video_url in urlList:
                  print('Processing Video: ')
                  print(video_url)
                  db = self.createDBFromYoutubeVideoUrl(video_url)
      
                  response, docs = self.getResponseFromQuery(db, query)
      
                  if len(response) > 0:
                      strKeyText = 'As per the topic discussed in ' + channelList[cnt] + ', '
                      discussedTopic.append(strKeyText + response)
      
                  cnt += 1
      
              return discussedTopic
          except Exception as e:
              discussedTopic = []
              x = str(e)
              print('Error: ', x)
      
              return discussedTopic

This Python function, extractContentInText, aims to extract relevant content from the transcripts of top YouTube videos on a specific topic and generate responses to a given query. Here’s a simple English summary:

      1. The function extractContentInText is defined with topic and query as parameters.
      2. It begins with a try-except block to catch and handle any possible exceptions.
      3. In the try block:
• It initializes several variables: an empty list discussedTopic to store the extracted information, an empty string strKeyText to hold specific parts of the content, a counter cnt initialized to 0, and max_cnt retrieved from the instance to specify the maximum number of YouTube videos to consider.
      • It calls the topFiveURLFromYouTube function (defined previously) to get the URLs of the top videos on the given topic from YouTube. It also retrieves the list of channel names associated with these videos.
      • It prints the returned list of URLs.
      • Then, it starts a loop over each URL in the urlList.
        • For each URL, it prints the URL, then creates a database from the transcript of the YouTube video using the function createDBFromYoutubeVideoUrl.
        • It then uses the getResponseFromQuery function to get a response to the query based on the content of the database.
        • If the length of the response is greater than 0 (meaning there is a response), it forms a string strKeyText to indicate the channel that the topic was discussed on and then appends the answer to this string. This entire string is then added to the discussedTopic list.
        • It increments the counter cnt by one after each iteration.
• Finally, it returns the discussedTopic list, which now contains relevant content extracted from the videos.
4. If any error occurs during these operations, the function goes into the except block:
      • It first resets discussedTopic to an empty list.
      • Then it converts the exception e to a string and prints the error message.
      • Lastly, it returns the empty discussedTopic list, indicating that no content could be extracted due to the error.
      • testLangChain.py (Main Python script to extract the transcript from the YouTube videos & then answer the questions based on the topics selected by the users.)


      #####################################################
      #### Written By: SATYAKI DE ####
      #### Written On: 27-May-2023 ####
      #### Modified On 28-May-2023 ####
      #### ####
      #### Objective: This is the main calling ####
      #### python script that will invoke the ####
      #### clsVideoContentScrapper class to extract ####
      #### the transcript from the YouTube videos. ####
      #### ####
      #####################################################
import clsL as cl
from clsConfigClient import clsConfigClient as cf
import datetime
import textwrap

import clsVideoContentScrapper as cvsc

# Disabling Warning
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

######################################
### Get your global values        ####
######################################
debug_ind = 'Y'

# Initiating Logging Instances
clog = cl.clsL()

data_path = cf.conf['DATA_PATH']
data_file_name = cf.conf['FILE_NAME']

cVCScrapper = cvsc.clsVideoContentScrapper()

######################################
####        Global Flag       ########
######################################

def main():
    try:
        var = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('*'*120)
        print('Start Time: ' + str(var))
        print('*'*120)

        #query = "What are they saying about Microsoft?"
        print('Please share your topic!')
        inputTopic = input('User: ')
        print('Please ask your questions?')
        inputQry = input('User: ')
        print()

        retList = cVCScrapper.extractContentInText(inputTopic, inputQry)
        cnt = 0

        for discussedTopic in retList:
            finText = str(cnt + 1) + ') ' + discussedTopic
            print()
            print(textwrap.fill(finText, width=150))
            cnt += 1

        r1 = len(retList)

        if r1 > 0:
            print()
            print('Successfully Scraped!')
        else:
            print()
            print('Failed to Scrape!')

        print('*'*120)
        var1 = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('End Time: ' + str(var1))

    except Exception as e:
        x = str(e)
        print('Error: ', x)

if __name__ == "__main__":
    main()

      Please find the key snippet –

      def main():
          try:
              var = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
              print('*'*120)
              print('Start Time: ' + str(var))
              print('*'*120)
      
              #query = "What are they saying about Microsoft?"
              print('Please share your topic!')
              inputTopic = input('User: ')
              print('Please ask your questions?')
              inputQry = input('User: ')
              print()
      
              retList = cVCScrapper.extractContentInText(inputTopic, inputQry)
              cnt = 0
      
              for discussedTopic in retList:
                  finText = str(cnt + 1) + ') ' + discussedTopic
                  print()
                  print(textwrap.fill(finText, width=150))
      
                  cnt += 1
      
              r1 = len(retList)
      
              if r1 > 0:
                  print()
            print('Successfully Scraped!')
              else:
                  print()
            print('Failed to Scrape!')
      
              print('*'*120)
              var1 = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
              print('End Time: ' + str(var1))
      
          except Exception as e:
              x = str(e)
              print('Error: ', x)
      
      if __name__ == "__main__":
          main()

The above main application captures the topic from the user & then gives the user a chance to ask specific questions on that topic. It invokes the main class to extract the transcripts from YouTube, feeds them as a source using LangChain & finally delivers the response. If a video yields no response, it simply skips that one.

      USAGE & COST FACTOR:

      Please find the OpenAI usage –

      Please find the YouTube API usage –


      So, finally, we’ve done it.

I know that this post is relatively longer than my earlier ones. But I think you can get all the details once you go through it.

      You will get the complete codebase in the following GitHub link.

      I’ll bring some more exciting topics in the coming days from the Python verse. Please share & subscribe to my post & let me know your feedback.

      Till then, Happy Avenging! 🙂

      Note: All the data & scenarios posted here are representational data & scenarios & available over the internet & for educational purposes only. Some of the images (except my photo) we’ve used are available over the net. We don’t claim ownership of these images. There is always room for improvement & especially in the prediction quality. Sample video taken from Santrel Media & you would find the link over here.

Tuning your model using the Python-based low-code machine-learning library PyCaret

Today, I’ll discuss another important topic before I share the excellent use case next month, as I still need some time to finish that one. We’ll see how we can leverage the brilliant capabilities of a low-code machine-learning library named PyCaret.

      But before going through the details, why don’t we view the demo & then go through it?

      Demo

      Architecture:

      Let us understand the flow of events –

As one can see, the initial training requests are triggered through the PyCaret-driven training models, & the application can successfully process & identify the best model among the candidate combinations.

      Python Packages:

Following are the Python packages that are necessary to develop this use case –

      pip install pandas
      pip install pycaret

PyCaret depends on a combination of other popular Python packages, so you need to install them successfully to run this package.

      CODE:

      • clsConfigClient.py (Main configuration file)


      ################################################
      #### Written By: SATYAKI DE ####
      #### Written On: 15-May-2020 ####
      #### Modified On: 31-Mar-2023 ####
      #### ####
      #### Objective: This script is a config ####
      #### file, contains all the keys for ####
      #### personal AI-driven voice assistant. ####
      #### ####
      ################################################
import os
import platform as pl

class clsConfigClient(object):
    Curr_Path = os.path.dirname(os.path.realpath(__file__))

    os_det = pl.system()
    if os_det == "Windows":
        sep = '\\'
    else:
        sep = '/'

    conf = {
        'APP_ID': 1,
        'ARCH_DIR': Curr_Path + sep + 'arch' + sep,
        'PROFILE_PATH': Curr_Path + sep + 'profile' + sep,
        'LOG_PATH': Curr_Path + sep + 'log' + sep,
        'DATA_PATH': Curr_Path + sep + 'data' + sep,
        'MODEL_PATH': Curr_Path + sep + 'model' + sep,
        'TEMP_PATH': Curr_Path + sep + 'temp' + sep,
        'MODEL_DIR': 'model',
        'APP_DESC_1': 'PyCaret Training!',
        'DEBUG_IND': 'N',
        'INIT_PATH': Curr_Path,
        'FILE_NAME': 'Titanic.csv',
        'MODEL_NAME': 'PyCaret-ft-personal-2023-03-31-04-29-53',
        'TITLE': "PyCaret Training!",
        'PATH' : Curr_Path,
        'OUT_DIR': 'data'
    }

      I’m skipping this section as it is self-explanatory.


      • clsTrainModel.py (This is the main class that contains the core logic of low-code machine-learning library to evaluate the best model for your solutions.)


#####################################################
#### Written By: SATYAKI DE                      ####
#### Written On: 31-Mar-2023                     ####
#### Modified On: 31-Mar-2023                    ####
####                                             ####
#### Objective: This is the main class that      ####
#### contains the core logic of the low-code     ####
#### machine-learning library to evaluate the    ####
#### best model for your solutions.              ####
####                                             ####
#####################################################
import clsL as cl
from clsConfigClient import clsConfigClient as cf
import datetime

# Import necessary libraries
import pandas as p
from pycaret.classification import *

# Disabling Warnings
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

######################################
###    Get your global values     ####
######################################
debug_ind = 'Y'

# Initiating Logging Instances
clog = cl.clsL()

###############################################
###         End of Global Section           ###
###############################################

class clsTrainModel:
    def __init__(self):
        self.model_path = cf.conf['MODEL_PATH']
        self.model_name = cf.conf['MODEL_NAME']

    def trainModel(self, FullFileName):
        try:
            df = p.read_csv(FullFileName)
            row_count = int(df.shape[0])
            print('Number of rows: ', str(row_count))
            print(df)

            # Initialize the setup in PyCaret
            clf_setup = setup(
                data=df,
                target="Survived",
                train_size=0.8,  # 80% for training, 20% for testing
                categorical_features=["Sex", "Embarked"],
                ordinal_features={"Pclass": ["1", "2", "3"]},
                ignore_features=["Name", "Ticket", "Cabin", "PassengerId"],
                #silent=True,  # Set to False for interactive setup
            )

            # Compare various models
            best_model = compare_models()

            # Create a specific model (e.g., Random Forest)
            rf_model = create_model("rf")

            # Hyperparameter tuning
            tuned_rf_model = tune_model(rf_model)

            # Evaluate model performance
            plot_model(tuned_rf_model, plot="confusion_matrix")
            plot_model(tuned_rf_model, plot="auc")

            # Finalize the model (train on the complete dataset)
            final_rf_model = finalize_model(tuned_rf_model)

            # Make predictions on new data
            new_data = df.drop("Survived", axis=1)
            predictions = predict_model(final_rf_model, data=new_data)

            # Writing into the Model
            FullModelName = self.model_path + self.model_name
            print('Model Output @:: ', str(FullModelName))
            print()

            # Save the fine-tuned model
            save_model(final_rf_model, FullModelName)

            return 0
        except Exception as e:
            x = str(e)
            print('Error: ', x)

            return 1

      Let us understand the code in simple terms –

      1. Import necessary libraries and load the Titanic dataset.
      2. Initialize the PyCaret setup, specifying the target variable, train-test split, categorical and ordinal features, and features to ignore.
      3. Compare various models to find the best-performing one.
      4. Create a specific model (Random Forest in this case).
      5. Perform hyper-parameter tuning on the Random Forest model.
      6. Evaluate the model’s performance using a confusion matrix and AUC-ROC curve.
      7. Finalize the model by training it on the complete dataset.
      8. Make predictions on new data.
      9. Save the trained model for future use.
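For intuition on step 2, `train_size=0.8` corresponds to a random 80/20 split that PyCaret performs internally (along with stratification and preprocessing that this sketch deliberately omits). A minimal, standard-library-only illustration of the core idea:

```python
import random

def train_test_split(rows, train_size=0.8, seed=42):
    """Shuffle the rows and split them into train/test partitions.

    Illustrative only: PyCaret's setup() additionally stratifies on the
    target and applies the configured preprocessing.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_size)
    return rows[:cut], rows[cut:]

train, test = train_test_split(range(100), train_size=0.8)
print(len(train), len(test))  # 80 20
```

Every row lands in exactly one partition, which is the invariant the split must maintain.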

• trainPYCARETModel.py (This is the main calling Python script that will invoke the training class of the PyCaret package.)


#####################################################
#### Written By: SATYAKI DE                      ####
#### Written On: 31-Mar-2023                     ####
#### Modified On: 31-Mar-2023                    ####
####                                             ####
#### Objective: This is the main calling         ####
#### Python script that will invoke the          ####
#### training class of the PyCaret package.      ####
####                                             ####
#####################################################
import clsL as cl
from clsConfigClient import clsConfigClient as cf
import datetime

import clsTrainModel as tm

# Disabling Warnings
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

######################################
###    Get your global values     ####
######################################
debug_ind = 'Y'

# Initiating Logging Instances
clog = cl.clsL()

data_path = cf.conf['DATA_PATH']
data_file_name = cf.conf['FILE_NAME']

tModel = tm.clsTrainModel()

######################################
####         Global Flag      ########
######################################

def main():
    try:
        var = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('*' * 120)
        print('Start Time: ' + str(var))
        print('*' * 120)

        FullFileName = data_path + data_file_name

        r1 = tModel.trainModel(FullFileName)

        if r1 == 0:
            print('Successfully Trained!')
        else:
            print('Failed to Train!')

        print('*' * 120)
        var1 = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('End Time: ' + str(var1))
    except Exception as e:
        x = str(e)
        print('Error: ', x)

if __name__ == "__main__":
    main()

      The above code is pretty self-explanatory as well.


• testPYCARETModel.py (This is the main calling Python script that will invoke the testing routine for the PyCaret package.)


#####################################################
#### Written By: SATYAKI DE                      ####
#### Written On: 31-Mar-2023                     ####
#### Modified On: 31-Mar-2023                    ####
####                                             ####
#### Objective: This is the main calling         ####
#### Python script that will invoke the          ####
#### testing routine for the PyCaret package.    ####
####                                             ####
#####################################################
import clsL as cl
from clsConfigClient import clsConfigClient as cf
import datetime

from pycaret.classification import load_model, predict_model
import pandas as p

# Disabling Warnings
def warn(*args, **kwargs):
    pass

import warnings
warnings.warn = warn

######################################
###    Get your global values     ####
######################################
debug_ind = 'Y'

# Initiating Logging Instances
clog = cl.clsL()

model_path = cf.conf['MODEL_PATH']
model_name = cf.conf['MODEL_NAME']

######################################
####         Global Flag      ########
######################################

def main():
    try:
        var = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('*' * 120)
        print('Start Time: ' + str(var))
        print('*' * 120)

        FullFileName = model_path + model_name

        # Load the saved model
        loaded_model = load_model(FullFileName)

        # Prepare new data for testing (make sure it has the same columns as the original data)
        new_data = p.DataFrame({
            "Pclass": [3, 1],
            "Sex": ["male", "female"],
            "Age": [22, 38],
            "SibSp": [1, 1],
            "Parch": [0, 0],
            "Fare": [7.25, 71.2833],
            "Embarked": ["S", "C"]
        })

        # Make predictions using the loaded model
        predictions = predict_model(loaded_model, data=new_data)

        # Display the predictions
        print(predictions)

        print('*' * 120)
        var1 = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        print('End Time: ' + str(var1))
    except Exception as e:
        x = str(e)
        print('Error: ', x)

if __name__ == "__main__":
    main()

In this code, the application loads the stored model & then generates predictions based on the tuned PyCaret model.
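One practical point worth underlining from the comment in the script: the new data must carry exactly the feature columns the model was trained on. A quick sanity check along these lines (the helper name is my own, not part of the original codebase) can catch mismatches before `predict_model` does:

```python
# Feature columns the Titanic model expects (target and ignored
# columns are excluded, matching the setup() call in training).
EXPECTED_COLS = {"Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"}

def check_columns(record: dict) -> bool:
    """Return True when a record has exactly the expected feature columns."""
    return set(record) == EXPECTED_COLS

sample = {"Pclass": 3, "Sex": "male", "Age": 22,
          "SibSp": 1, "Parch": 0, "Fare": 7.25, "Embarked": "S"}
print(check_columns(sample))                      # True
print(check_columns({"Pclass": 1, "Sex": "f"}))   # False
```

Running such a check on each incoming record gives a clearer error than a failure deep inside the prediction pipeline.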

      Conclusion:

      The above code demonstrates an end-to-end binary classification pipeline using the PyCaret library for the Titanic dataset. The goal is to predict whether a passenger survived based on the available features. Here are some conclusions you can draw from the code and data:

      1. Ease of use: The code showcases how PyCaret simplifies the machine learning process, from data preprocessing to model training, evaluation, and deployment. With just a few lines of code, you can perform tasks that would require much more effort using lower-level libraries.
2. Model selection: The compare_models() function provides a quick and easy way to compare various machine learning algorithms and identify the best-performing one based on the chosen evaluation metric (accuracy by default). This helps you select a suitable model for the given problem.
      3. Hyper-parameter tuning: The tune_model() function automates the process of hyper-parameter tuning to improve model performance. We tuned a Random Forest model to optimize its predictive power in the example.
      4. Model evaluation: PyCaret provides several built-in visualization tools for assessing model performance. In the example, we used a confusion matrix and AUC-ROC curve to evaluate the performance of the tuned Random Forest model.
      5. Model deployment: The example demonstrates how to make predictions using the trained model and save the model for future use. This deployment showcases how PyCaret can streamline the process of deploying a machine-learning model in a production environment.

      It is important to note that the conclusions drawn from the code and data are specific to the Titanic dataset and the chosen features. Adjust the feature engineering, preprocessing, and model selection steps for different datasets or problems accordingly. However, the general workflow and benefits provided by PyCaret would remain the same.
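One way to keep the workflow reusable across datasets is to externalize the dataset-specific settings into a single dict and unpack it into `setup(**spec)`. The keys below mirror the Titanic example from the training class; the dict-driven structure itself is just a suggested pattern, not something the original code implements.

```python
# Hypothetical per-dataset configuration; the keys match the
# PyCaret setup() keyword arguments used in the training class.
titanic_spec = {
    "target": "Survived",
    "train_size": 0.8,
    "categorical_features": ["Sex", "Embarked"],
    "ordinal_features": {"Pclass": ["1", "2", "3"]},
    "ignore_features": ["Name", "Ticket", "Cabin", "PassengerId"],
}

# For another dataset, only the spec changes; the pipeline stays intact:
# clf_setup = setup(data=df, **titanic_spec)
print(sorted(titanic_spec))
```

Swapping in a new problem then means writing a new spec rather than editing the training class.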


      So, finally, we’ve done it.

I know that this post is longer than my earlier ones. But I think you will get all the details once you go through it.

      You will get the complete codebase in the following GitHub link.

      I’ll bring some more exciting topics in the coming days from the Python verse. Please share & subscribe to my post & let me know your feedback.

      Till then, Happy Avenging! 🙂

      Note: All the data & scenarios posted here are representational data & scenarios & available over the internet & for educational purposes only. Some of the images (except my photo) we’ve used are available over the net. We don’t claim ownership of these images. There is always room for improvement & especially in the prediction quality.

      Documenting undocumented python scripts using Python-OpenAI

Today, I will discuss another very impressive & innovative AI capability, which is now operational in Python. We'll document a dummy Python script that contains no comments, using OpenAI's ChatGPT model. But before we start, why don't we see the demo first?

      Demo

Great! Let us understand how we can leverage this by writing a tiny snippet using this new AI model.

      Architecture:

      Let us understand the flow of events –

The above diagram represents the newly released OpenAI ChatGPT flow, where one supplies code whose logic was never documented, for whatever reason. We provide these scripts (perhaps in parts) as source code to be analyzed. The model then translates the logic into English-like language & captures it as comments for each specific snippet.
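A minimal sketch of how such a request could be assembled is shown below. The prompt wording and helper name are my own illustration; in the actual application, the returned string would be sent to the model via the `openai` package, which is covered in the full post.

```python
def build_doc_prompt(snippet: str) -> str:
    """Assemble a prompt asking the model to document an
    undocumented Python snippet in plain English."""
    return (
        "Explain what the following Python code does and add "
        "inline comments describing its logic:\n\n"
        f"```python\n{snippet}\n```"
    )

prompt = build_doc_prompt("def add(a, b):\n    return a + b")
print(prompt.startswith("Explain"))  # True
```

Longer scripts would be split into parts, as noted above, with one such prompt per part to stay within the model's context limits.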


      Python Packages:

The following are the Python packages necessary to develop this brilliant use case –

      pip install pandas
      pip install openai

To know more, please click the "Continue Reading" link below –

      Continue reading “Documenting undocumented python scripts using Python-OpenAI”