
Eduardo Toledo

Machine Learning Scientist | Agentic & GenAI Engineer | FinTech & RegTech AI Innovator

I love turning complex challenges into smart, working AI systems. With experience in FinTech and Gaming Technology, I’ve built AI solutions for anomaly detection, fraud prevention, and Anti-Money Laundering (AML), focusing on models that not only predict—but protect. My current passion is agentic AI and autonomous workflow automation, using LLM-powered agents, tool-calling, planning, and LangGraph orchestration to move AI from insights to action and execution.

What I Do

“I believe in AI that works with people, not against them — technology that serves fairness, transparency, and purpose.”

Education

Jun 2024 – Master's degree in Analytics Engineering
Universidad de Los Andes
Jun 2014 – Project Management Specialization
Escuela Colombiana de Ingeniería
May 2004 – Master’s in Econometric Sciences and Mathematical Economics
Pontificia Universidad Javeriana
May 1995 – Software Engineering
Universidad Nacional de Colombia

Certifications

Aug 2025 – Agentic AI with LangGraph and LangChain (LangChain)
Jan 2025 – GenAI Nanodegree Program (Udacity)
Oct 2024 – GenAI for Software Development (Deeplearning.ai)
Oct 2023 – AI for Good (Deeplearning.ai)
Sep 2023 – Databricks Generative AI Fundamentals (Databricks)
Aug 2023 – Databricks Lakehouse Fundamentals (Databricks)
May 2023 – Microsoft Azure AI Fundamentals (Microsoft)
Oct 2022 – Microsoft Azure Data Fundamentals (Microsoft)
Jun 2021 – Applied Statistics and Probability (Universidad de Los Andes)
May 2021 – Python Programming (Universidad de Los Andes)
Feb 2021 – Deep Learning Specialization (Deeplearning.ai)
Mar 2020 – Software Design and Architecture (University of Alberta)
Jul 2019 – Machine Learning Specialization (Stanford Online)

Projects / Contributions

Corporate & Enterprise Projects

Current Focus:
Developing an Anti-Money Laundering (AML) pipeline that detects suspicious transaction patterns using feature engineering, anomaly detection, and hybrid ML–LLM methods. The system is built with MLflow and DVC and follows MLOps principles for continuous improvement and scalability.
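
For illustration, here is a minimal sketch of the anomaly-scoring step under stated assumptions: the feature names, contamination rate, and MLflow run layout are placeholders rather than the production values, and the LLM review stage of the hybrid pipeline is only referenced in a comment.

```python
# Illustrative sketch: score transactions with an Isolation Forest and track
# the run in MLflow. Feature names and thresholds are placeholder assumptions.
import mlflow
import pandas as pd
from sklearn.ensemble import IsolationForest

def score_transactions(df: pd.DataFrame) -> pd.DataFrame:
    # Assumed engineered features: amount, transaction count in the last 24h,
    # and a country-risk score produced upstream by the feature pipeline.
    features = df[["amount", "tx_count_24h", "country_risk"]]

    with mlflow.start_run(run_name="aml-isolation-forest"):
        model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
        model.fit(features)

        # Higher score = more anomalous (score_samples returns the opposite sign).
        df["anomaly_score"] = -model.score_samples(features)
        df["flagged"] = model.predict(features) == -1  # -1 marks outliers

        mlflow.log_param("contamination", 0.01)
        mlflow.log_metric("flagged_rate", float(df["flagged"].mean()))
        mlflow.sklearn.log_model(model, "aml_detector")

    # Flagged rows would then go to the LLM review step of the hybrid
    # ML–LLM pipeline (not shown here).
    return df
```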

Dynamic Pricing for ATM Transactions
Designed a system using customer segmentation (KMeans) and demand modeling (XGBoost) to optimize transaction fees, increasing revenue by 10–12% while maintaining transaction volume.
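
The sketch below illustrates the two-stage idea: KMeans segments customers, XGBoost models demand as a function of the fee, and the fee is chosen to maximize expected revenue. Column names, cluster count, and hyperparameters are assumptions for illustration only.

```python
# Sketch of the two-stage pricing approach: segment customers, model demand,
# then pick the fee that maximizes expected revenue. Column names are assumed.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from xgboost import XGBRegressor

def fit_pricing_models(df: pd.DataFrame):
    # 1) Customer segmentation on behavioural features.
    segments = KMeans(n_clusters=5, n_init=10, random_state=0)
    df["segment"] = segments.fit_predict(df[["avg_withdrawal", "monthly_tx", "balance"]])

    # 2) Demand model: expected transaction volume given fee, segment, and context.
    demand = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
    demand.fit(df[["fee", "segment", "hour", "atm_location_id"]], df["tx_volume"])
    return segments, demand

def best_fee(demand, candidate_fees, context: dict) -> float:
    # Evaluate revenue = fee * predicted volume for one ATM/segment context.
    candidate_fees = np.asarray(candidate_fees, dtype=float)
    rows = pd.DataFrame([{**context, "fee": f} for f in candidate_fees])
    volume = demand.predict(rows[["fee", "segment", "hour", "atm_location_id"]])
    return float(candidate_fees[np.argmax(candidate_fees * volume)])
```
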
ATM Cash Demand Forecasting & Safety Stock Optimization
Developed a forecasting system using XGBoost and N-BEATS to predict daily cash demand for each ATM cassette ($1, $5, $20, $100). The model generates accurate per-denomination forecasts and calculates safety stock levels to prevent depletion events, significantly reducing cash-out incidents and improving ATM uptime.
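
As a concrete example of the safety-stock calculation described above (the service level, lead time, and residual standard deviation are illustrative numbers, not the deployed configuration):

```python
# Sketch of the per-cassette safety-stock step: forecast demand over the
# replenishment lead time and add a buffer sized from the forecast error.
import numpy as np
from scipy.stats import norm

def safety_stock(residual_std: float, lead_time_days: int, service_level: float = 0.98) -> float:
    # Classic formula: z * sigma * sqrt(lead time), where z is the service-level quantile.
    return norm.ppf(service_level) * residual_std * np.sqrt(lead_time_days)

def reorder_point(daily_forecast: np.ndarray, residual_std: float, lead_time_days: int) -> float:
    # Cash needed to cover forecast demand during the lead time plus the buffer.
    return daily_forecast[:lead_time_days].sum() + safety_stock(residual_std, lead_time_days)

# Example for a $20 cassette with a 3-day replenishment lead time (made-up numbers).
print(reorder_point(np.array([4200.0, 3900.0, 5100.0, 4700.0]), residual_std=600.0, lead_time_days=3))
```
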
Defect Detection from ATM Telemetry Logs
Created a BERT-based anomaly detection model that identifies software and hardware defects early, and integrated it with LangGraph orchestration for LLM-based root-cause analysis, raising the defect detection rate from 30% to 80%.
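
A minimal sketch of the log-scoring step follows; the checkpoint name and label scheme are placeholders (the fine-tuned model is internal), and the hand-off to the LangGraph root-cause agent is only indicated in a comment.

```python
# Sketch: a fine-tuned BERT classifier scores telemetry log lines; high-risk
# lines are handed to the LangGraph root-cause agent. The checkpoint name and
# label scheme ("DEFECT") are placeholders, not a published model.
from transformers import pipeline

log_classifier = pipeline("text-classification", model="my-org/atm-log-bert")  # placeholder checkpoint

def flag_defects(log_lines: list[str], threshold: float = 0.8) -> list[dict]:
    flagged = []
    for line, pred in zip(log_lines, log_classifier(log_lines)):
        if pred["label"] == "DEFECT" and pred["score"] >= threshold:
            flagged.append({"line": line, "score": pred["score"]})
    # Flagged windows are then sent to the LangGraph orchestration layer for
    # LLM-based root-cause analysis (not shown here).
    return flagged
```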

Consulting & Applied AI Projects

Speech Analytics for Compliance Monitoring in Call Center
Designed an AI-driven speech analytics system using an orchestrator–worker pattern built on LangGraph to analyze interactions between human agents and customers. The agent evaluates compliance adherence, sentiment neutrality, and conversational quality, generating real-time recommendations for call improvement. Integrated with MLflow for full observability and prompt tracking, enabling model auditability and continuous optimization.
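
The sketch below shows the orchestrator–worker topology in LangGraph with stub workers; in the deployed system each worker calls an LLM with its own rubric and results are logged to MLflow. State keys and node names are illustrative.

```python
# Sketch of the orchestrator–worker layout in LangGraph: the orchestrator fans
# out to three parallel workers, and an aggregator merges their findings.
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END

class CallState(TypedDict):
    transcript: str
    findings: Annotated[list[str], operator.add]  # workers append concurrently

def orchestrator(state: CallState) -> dict:
    # In production this node segments the call and selects which checks to run.
    return {"findings": []}

def compliance_worker(state: CallState) -> dict:
    return {"findings": ["compliance adherence check (stub)"]}

def sentiment_worker(state: CallState) -> dict:
    return {"findings": ["sentiment neutrality check (stub)"]}

def quality_worker(state: CallState) -> dict:
    return {"findings": ["conversational quality check (stub)"]}

def aggregator(state: CallState) -> dict:
    return {"findings": ["recommendations: " + "; ".join(state["findings"])]}

builder = StateGraph(CallState)
builder.add_node("orchestrator", orchestrator)
builder.add_node("compliance", compliance_worker)
builder.add_node("sentiment", sentiment_worker)
builder.add_node("quality", quality_worker)
builder.add_node("aggregator", aggregator)

builder.add_edge(START, "orchestrator")
for worker in ("compliance", "sentiment", "quality"):
    builder.add_edge("orchestrator", worker)  # fan out: workers run in parallel
    builder.add_edge(worker, "aggregator")    # fan in: aggregator runs after all three
builder.add_edge("aggregator", END)

graph = builder.compile()
print(graph.invoke({"transcript": "agent: thanks for calling ...", "findings": []}))
```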

Agent-Driven Financial Coaching & Expense Intelligence Platform

Built a serverless, agent-based system using LangGraph to extract and understand expenses from invoices.
Uses Vision-Language Models (VLMs), Atlas Vector Search, and event-driven AWS services to deliver personalized financial insights at scale.
Public demo: https://budgy.com


Key Agents
  • Document Ingestion Agent – Receives invoices and routes them for processing.
  • Extraction Agent – Uses VLMs and Docling to extract data from images and PDFs.
  • Normalization Agent – Cleans, validates, and standardizes item-level data.
  • Classification Agent – Categorizes expenses using LLM reasoning and vector search.
  • Financial Coaching Agent – Finds spending patterns and produces insights.
Architecture Diagram
```mermaid
flowchart TD
    User[User / Web UI]
    API[FastAPI / API Gateway]

    User --> API
    API -->|Upload invoice| S3[("AWS S3<br/>Raw Documents")]
    API -->|Trigger| LambdaIngest["AWS Lambda<br/>Invoice Intake"]
    LambdaIngest --> VLM["VLM + Docling<br/>Data Extraction"]
    VLM --> LangGraph["LangGraph Orchestrator<br/>Router + React Agent"]
    LangGraph -->|Async tasks| SQS[AWS SQS Queue]
    SQS --> LambdaCoach["AWS Lambda<br/>Expense Classification & Coaching"]
    LambdaCoach --> Mongo[("MongoDB Atlas<br/>Structured Records")]
    Mongo --> Vector["Atlas Vector Search<br/>Embeddings"]
    Vector --> LangGraph
    LangGraph --> API
    API --> User
```
Description

Designed and built an agent-based financial coaching and expense intelligence system using LangGraph. The system uses a Router Pattern and a React Agent to decide which agent works on each task.

The platform processes invoices using Vision-Language Models (VLMs) and Docling to extract financial data from images and PDFs. Multiple agents handle parsing, validation, data cleaning, and reasoning about spending behavior.

The system runs fully serverless on AWS. AWS Lambda handles processing, SQS manages background jobs, and S3 stores raw files. Data is saved in MongoDB Atlas, where Atlas Vector Search enables semantic search and personalized financial coaching insights.
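
For illustration, a minimal sketch of the routing layer under stated assumptions: the task labels, node names, and stubbed agent bodies are placeholders, and the production agents call VLMs, Docling, and Atlas Vector Search as shown in the diagram above.

```python
# Sketch of the router pattern: a conditional edge inspects the task type and
# dispatches it to the matching agent node. Agent bodies are stubs.
from typing import Literal, TypedDict

from langgraph.graph import StateGraph, START, END

class TaskState(TypedDict):
    task_type: str  # "extract", "classify", or "coach" (assumed task labels)
    result: str

def route_task(state: TaskState) -> Literal["extraction", "classification", "coaching"]:
    return {"extract": "extraction", "classify": "classification", "coach": "coaching"}[state["task_type"]]

def extraction_agent(state: TaskState) -> dict:
    return {"result": "parsed invoice fields"}         # VLM + Docling in production

def classification_agent(state: TaskState) -> dict:
    return {"result": "expense category"}              # LLM reasoning + vector search in production

def coaching_agent(state: TaskState) -> dict:
    return {"result": "personalized spending insight"}

builder = StateGraph(TaskState)
builder.add_node("extraction", extraction_agent)
builder.add_node("classification", classification_agent)
builder.add_node("coaching", coaching_agent)

# Conditional entry point: the router function's return value names the next node.
builder.add_conditional_edges(START, route_task)
for node in ("extraction", "classification", "coaching"):
    builder.add_edge(node, END)

graph = builder.compile()
print(graph.invoke({"task_type": "classify", "result": ""}))
```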

Tech stack: LangGraph, LangChain, Vision-Language Models, Docling, AWS Lambda, SQS, S3, MongoDB Atlas, Atlas Vector Search

AI4Good: Detecting Anomalies in Energy Consumption using Machine Learning
Developed an ML-based system for a Latin American energy distribution company to detect abnormal consumption patterns and identify inefficiencies, potential technical issues, and non-technical losses. Used ETL, feature engineering, clustering, and unsupervised models (K-Means, Isolation Forest, LOF, One-Class SVM) to produce actionable insights. Delivered a scoring datamart, interactive dashboards, and a scalable analytics workflow.
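
A compact sketch of the unsupervised scoring step: several detectors are fitted on scaled consumption features and their normalized scores are averaged into a single anomaly score per meter, which feeds the scoring datamart. Feature names and detector parameters are illustrative assumptions.

```python
# Sketch: combine Isolation Forest, LOF, and One-Class SVM scores into one
# anomaly score per meter. Feature names are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler, minmax_scale
from sklearn.svm import OneClassSVM

def score_meters(df: pd.DataFrame) -> pd.Series:
    X = StandardScaler().fit_transform(df[["monthly_kwh", "kwh_variation", "night_ratio"]])

    scores = [
        # For every detector, the sign is flipped so higher = more anomalous.
        -IsolationForest(random_state=0).fit(X).score_samples(X),
        -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_,
        -OneClassSVM(nu=0.05).fit(X).score_samples(X),
    ]
    # Normalize each detector to [0, 1] and average into the datamart score.
    combined = np.mean([minmax_scale(s) for s in scores], axis=0)
    return pd.Series(combined, index=df.index, name="anomaly_score")
```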


Contact

📩 Email: eduardo.toledo@bixaistudio.com
🔗 LinkedIn: linkedin.com/in/etechoptimist
💻 GitHub: github.com/etechoptimist