Financial Analysis Reimagined: Context-Aware AI Agents for Real-Time Insights

David Bressler
June 9, 2023

Financial analysts face a constant challenge: quickly accessing and synthesizing financial news to make informed decisions. Eventum built a custom large language model (LLM)-powered Slackbot for a large financial services company seeking to empower analysts and decision-makers with instant, contextually relevant insights from a diverse range of financial news sources.

Challenge

Financial analysts needed rapid, tailored insights from varied news websites throughout their day-to-day workflows. Traditional search and manual research methods were slow, cumbersome, and often disrupted productivity. Analysts required a streamlined solution that could instantly extract relevant information and contextually respond to their queries within a familiar communication platform.

Solution

Eventum engineered a custom LLM-powered Slackbot capable of delivering context-aware responses directly within Slack. The solution involved the following components (illustrative sketches follow the list):

  • Prompt Interpretation: User queries entered in Slack were first processed by a lightweight LLM (OpenAI's GPT-3.5 Turbo) that parsed the message, extracting the target financial news website URL and isolating the precise question or informational need.
  • Dynamic Web Scraping: The Slackbot then scraped the targeted financial news site on demand using Requests and Beautiful Soup, retrieving text content in real time from diverse, dynamically structured pages so the extracted data stayed fresh and contextually accurate.
  • Tailored LLM Responses: The scraped text, along with the user's refined query, was fed into a more powerful LLM (GPT-4) that synthesized an accurate, context-sensitive, and actionable response, giving analysts deep insights without manually parsing lengthy, complex financial news content.
  • Conversational Interaction: Integrated directly within Slack, the solution provided an intuitive conversational interface supporting continuous, context-dependent dialogue. Analysts could ask follow-up questions and receive instant clarifications and expansions, significantly boosting workflow efficiency and user satisfaction.

[Diagram: How Eventum delivered instant financial insights via an AI Slackbot]
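
Below are minimal Python sketches of how such a pipeline can be assembled; they are illustrative under stated assumptions, not the production code delivered to the client. The first step asks a lightweight model to split a raw Slack message into the target URL and the underlying question. This sketch assumes the OpenAI Python SDK; the prompt wording and the parse_query helper are hypothetical.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PARSE_PROMPT = (
    "Extract the financial news URL and the user's question from the message. "
    'Reply only with JSON: {"url": "...", "question": "..."}'
)

def parse_query(message: str) -> dict:
    """Split a Slack message into a target URL and a refined question (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PARSE_PROMPT},
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    # Production code would validate the JSON before trusting it.
    return json.loads(response.choices[0].message.content)
```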
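
The extracted URL can then be fetched on demand and reduced to plain text with Requests and Beautiful Soup, the libraries named above. The tag filtering and truncation limit here are assumptions; real news sites usually need per-site tuning.

```python
import requests
from bs4 import BeautifulSoup

def scrape_article(url: str, max_chars: int = 12_000) -> str:
    """Fetch a news page and return its visible text, truncated to fit a prompt."""
    resp = requests.get(url, timeout=10, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()  # drop non-content elements before extracting text
    text = soup.get_text(separator="\n", strip=True)
    return text[:max_chars]
```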
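
The scraped text and the refined question are then handed to the stronger model for the final answer. The system prompt below is illustrative, not the one used in the project.

```python
from openai import OpenAI

client = OpenAI()

def answer_question(question: str, article_text: str) -> str:
    """Synthesize an answer grounded in the scraped article (illustrative prompt)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a financial research assistant. Answer using only the "
                    "provided article text, and say so if the answer is not in it."
                ),
            },
            {
                "role": "user",
                "content": f"Article:\n{article_text}\n\nQuestion: {question}",
            },
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```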
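
Finally, the steps can be wired into a Slack listener. The case study does not name the Slack SDK used; this sketch assumes Bolt for Python (slack_bolt) running in Socket Mode and reuses the hypothetical helpers from the sketches above.

```python
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    """Parse the mention, scrape the referenced site, and reply in the thread."""
    parsed = parse_query(event["text"])       # helper from the first sketch (hypothetical)
    article = scrape_article(parsed["url"])   # helper from the second sketch (hypothetical)
    answer = answer_question(parsed["question"], article)
    say(text=answer, thread_ts=event.get("thread_ts", event["ts"]))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

Follow-up questions arrive as new mentions in the same thread; a fuller implementation would carry prior turns into the GPT-4 call to preserve conversational context.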

Results

  • 90% reduction in the time required for analysts to obtain actionable insights from financial news sources.
  • Significantly improved user experience through an intuitive, seamless conversational interface within Slack.
  • Enhanced analyst productivity by enabling rapid iterative queries and reducing context-switching.

Conclusion

This project underscores Eventum's ability to deliver innovative, practical, and productivity-enhancing AI solutions tailored specifically to the nuanced requirements of financial services institutions.
