Background

Agentic AI represents a significant advancement in artificial intelligence, characterised by systems capable of functioning autonomously. These systems make context-driven decisions and pursue specific goals with minimal human intervention. A prominent example is the Multi-Agent System (MAS), which consists of several autonomous agents driven by large language models (LLMs). Each agent has a unique persona and context, defined by tailored prompts, and uses a variety of tools to support both its independent and collaborative work towards specific objectives.

By harnessing context and previous user interactions, these systems make informed decisions that enhance efficiency and enable real-time adaptation to changing user needs. Agentic AI also excels at providing personalised experiences, learning from user feedback to continuously evolve and deliver solutions tailored to individual preferences.

This combination of autonomous decision-making and adaptability makes Multi-Agent Systems powerful tools for improving efficiency and effectiveness across diverse fields and applications.

Objective

The objective of this experiment is to leverage a Multi-Agent System to automate secondary research tasks, significantly reducing the time and effort needed to gather and analyse large volumes of data and enabling research analysts to work more efficiently.

This experiment aims to develop various components to facilitate:

1. Data collection from diverse sources: Gather data simultaneously from PDFs, webpages and online articles, enhancing research efficiency and ensuring thorough information collection.

2. Diverse perspectives: Develop multiple agents representing different viewpoints and areas of expertise to enrich the analysis of the research topic.

3. Automated insights: Utilise agents to efficiently extract key information, trends and patterns from the collected data.

Business use cases and applications

Agentic AI offers diverse business applications across multiple domains. Some of the key use cases include:

1. Retail: AI agents can collaborate to provide personalised recommendations based on user preferences, historical purchases and current trends, thereby enhancing the customer experience.

2. Finance: AI Agents can monitor financial news and reports, synthesising information to deliver insightful investment recommendations.

3. Customer service: A Multi-Agent System can handle customer inquiries across various platforms (chat, email, social media), providing consistent and personalised support.

4. Insurance: Various AI agents can work in collaboration to automate the claims review process, analysing documentation and customer interactions to expedite approvals and reduce fraud.

These use cases demonstrate the various ways Agentic AI systems can be integrated into business operations, emphasising their efficiency and adaptability across industries.

Environment setup

Python: For data collection, data preparation and development of AI agents

Computing infrastructure: AWS EC2 instance (instance type: t2.xlarge)

Azure OpenAI: For information retrieval and content generation (LLM: gpt-4o-mini, Embedding: text-embedding-ada-002)
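
For illustration, the snippet below shows one way these components can be wired together using Langchain's Azure OpenAI integration. It is a minimal sketch, assuming the endpoint and key are supplied via environment variables; the deployment names and API version are assumptions and will differ per Azure subscription.

```python
# Minimal sketch of the Azure OpenAI setup, assuming the endpoint and API key are
# provided as environment variables. Deployment names and the API version are
# assumptions and will differ per Azure subscription.
import os
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com/")
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<your-api-key>")

# LLM used for information retrieval and content generation
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",  # assumed deployment name
    api_version="2024-06-01",        # assumed API version
    temperature=0,
)

# Embedding model used to build the vector database
embeddings = AzureOpenAIEmbeddings(
    azure_deployment="text-embedding-ada-002",  # assumed deployment name
    api_version="2024-06-01",
)
```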

Experiment approach

The components of the Multi-Agent System designed to generate insights automatically are shown below:

[Figure: Experiment approach flow]

In this experiment, we followed a systematic approach to automating secondary research insights using a Multi-Agent System, implementing the following steps (a condensed code sketch of the core pipeline follows the list):

1. Data collection: Gathered data from various sources, including webpages, PDFs and text files.

2. Data preprocessing: Removed irrelevant characters and HTML tags from the data to obtain a clean dataset.

3. Document chunking: Divided the dataset into smaller text chunks for easier processing.

4. Vector database setup: Generated an embedding for each text chunk using the embedding model and stored the embeddings in a vector database for quick retrieval.

5. Document retrieval and grading: Leveraged vector similarity search to obtain the top-K documents relevant to the required insights and used the LLM to grade their relevance.

6. Multiple agent deployment: Generated automated insights through various AI agents, each focusing on key tasks such as providing definitions, identifying practical use cases, analysing value propositions and assessing market dynamics relevant to the research topic.

7. Research analyst feedback: Established a process for gathering analyst feedback to enhance the personalisation of insights and reports.
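
To make the pipeline more concrete, here is a condensed sketch of steps 1 to 5 using Langchain and the Azure OpenAI models from the environment setup. The source URL, file paths, chunk sizes, the choice of FAISS as a local vector store and the grading prompt are all assumptions for illustration, not the exact configuration used in the experiment.

```python
# Condensed, illustrative sketch of steps 1-5 (collection, preprocessing, chunking,
# vector database setup, retrieval and grading). The URL, file paths, chunk sizes
# and grading prompt are assumptions, not the experiment's exact configuration.
import re
from langchain_community.document_loaders import WebBaseLoader, PyPDFLoader, TextLoader
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Azure OpenAI clients (assumed deployment names and API version, as in the setup section)
llm = AzureChatOpenAI(azure_deployment="gpt-4o-mini", api_version="2024-06-01", temperature=0)
embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-ada-002", api_version="2024-06-01")

# 1. Data collection from webpages, PDFs and text files (hypothetical sources)
docs = []
docs += WebBaseLoader("https://example.com/agentic-ai-article").load()
docs += PyPDFLoader("data/industry_report.pdf").load()
docs += TextLoader("data/analyst_notes.txt").load()

# 2. Data preprocessing: strip leftover HTML tags and collapse whitespace
for doc in docs:
    doc.page_content = re.sub(r"<[^>]+>", " ", doc.page_content)
    doc.page_content = re.sub(r"\s+", " ", doc.page_content).strip()

# 3. Document chunking
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 4. Vector database setup (FAISS used here as a simple local stand-in)
vectorstore = FAISS.from_documents(chunks, embeddings)

# 5. Document retrieval and grading
query = "What are the key value propositions of Agentic AI for insurance?"
top_k_docs = vectorstore.similarity_search(query, k=5)

grading_prompt = ChatPromptTemplate.from_template(
    "You are grading a retrieved document for relevance to a research question.\n"
    "Question: {question}\nDocument: {document}\n"
    "Answer strictly 'yes' or 'no': is the document relevant?"
)
grader = grading_prompt | llm
relevant_docs = [
    d for d in top_k_docs
    if "yes" in grader.invoke({"question": query, "document": d.page_content}).content.lower()
]
```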

Choice of algorithms

1. Langchain: Langchain is an open-source framework designed to streamline the development of applications that leverage large language models (LLMs) and intelligent agents. It provides tools for building complex workflows, integrating various data sources and deploying autonomous agents that can perform tasks like information retrieval and processing.

2. Generative AI: Utilised the Azure OpenAI large language model (LLM) for information retrieval and content generation, streamlining the process of extracting valuable insights from large datasets.
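
Bringing the two together, the sketch below illustrates how the persona-driven agents described in step 6 can be composed with Langchain and the Azure OpenAI LLM. The agent personas, prompt wording and the helper function `generate_insights` are hypothetical, included only to show the pattern rather than the experiment's actual agents and tools.

```python
# Illustrative sketch of step 6: several "agents", each defined by a persona prompt,
# generate insights over the graded documents. Personas, prompt wording and the
# helper function are assumptions, not the experiment's actual configuration.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

# Assumed deployment name and API version, as in the environment setup sketch
llm = AzureChatOpenAI(azure_deployment="gpt-4o-mini", api_version="2024-06-01", temperature=0)

agent_personas = {
    "definition_agent": "You define the research topic precisely for a business audience.",
    "use_case_agent": "You identify practical, real-world use cases of the research topic.",
    "value_proposition_agent": "You analyse the value proposition and business benefits.",
    "market_dynamics_agent": "You assess market dynamics, key players and emerging trends.",
}

def generate_insights(topic: str, context: str) -> dict:
    """Run each persona prompt over the retrieved context and collect its insight."""
    insights = {}
    for name, persona in agent_personas.items():
        prompt = ChatPromptTemplate.from_messages([
            ("system", persona),
            ("human", "Research topic: {topic}\n\nContext:\n{context}\n\nProvide your insights."),
        ])
        insights[name] = (prompt | llm).invoke({"topic": topic, "context": context}).content
    return insights

# Usage: `relevant_docs` would come from the retrieval-and-grading sketch above
# insights = generate_insights("Agentic AI", "\n\n".join(d.page_content for d in relevant_docs))
```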

Experiment outcomes

Through our experiment, we were able to automate secondary research tasks, leading to substantial improvements in efficiency and the quality of insights generated.

Impact on the productivity of secondary research tasks

The table below shows the impact of automation on various secondary research tasks, highlighting the significant boosts in productivity achieved for each task.

[Table: Impact of automation on secondary research tasks]

Sample output

[Figure: Sample output]

What's next

  • Accuracy amplification: Further elevate the quality of insights by fine-tuning prompts used in the automation process to ensure more contextually relevant outputs.

  • Supporting more data sources: Expand the integration of additional data sources to enrich the information landscape and enhance the depth of research.

  • Elevate user experience: Focus on optimising the user interface and interaction processes to make it easier for users to access and leverage insights.

  • Scalability: Enhance the system's scalability to accommodate increasing volumes of data and user demands without compromising performance.