Python Developer

I build data pipelines,
API integrations &
automation systems.

I help businesses collect, process, and act on data in real time. From async Python services to production dashboards, I ship reliable systems that run 24/7.

About

I'm a Python engineer who specializes in building data-intensive backend systems. My focus areas are real-time data pipelines, API integrations, and monitoring/automation platforms.

I've built production systems that ingest 20M+ records from multiple data sources, with automated signal detection, risk management, and live dashboards. I care about clean architecture, comprehensive testing, and code that's easy to maintain.

What I Do

Data Pipelines

ETL systems, API data collection, async ingestion pipelines. I build reliable data flows that handle millions of records with deduplication, validation, and error recovery.
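As an illustration of the deduplication step, here is a minimal content-hash sketch — the function names are my own for this example, not from any specific client project:

```python
import hashlib
import json

def snapshot_key(record: dict) -> str:
    """Stable content hash for a record, used to skip data already ingested."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

def dedupe(records, seen=None):
    """Yield only records whose content hash has not been seen before."""
    seen = set() if seen is None else seen
    for rec in records:
        key = snapshot_key(rec)
        if key not in seen:
            seen.add(key)
            yield rec
```

In production the `seen` set typically lives in the database (an indexed hash column) rather than in memory, so dedup survives restarts.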

API Development

REST APIs, webhook handlers, third-party integrations. I connect systems together with proper rate limiting, retry logic, and authentication.
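The retry logic I mean here follows a standard exponential-backoff-with-jitter pattern. A minimal stdlib sketch (names and defaults are illustrative):

```python
import random
import time

def with_retries(fn, *, attempts=4, base_delay=0.5, max_delay=8.0):
    """Call fn(), retrying on exception with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

The jitter term spreads retries out so many clients recovering from the same outage don't hammer the API in lockstep.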

Automation

Scheduled tasks, alerting systems, monitoring dashboards. I automate repetitive processes and build tools that keep you informed when things need attention.

Data Visualization

Streamlit dashboards, Jupyter analysis, reporting tools. I turn raw data into actionable insights with interactive visualizations.

Tech Stack

Languages & Frameworks

  • Python
  • FastAPI
  • asyncio / httpx
  • Pydantic

Data & Storage

  • SQLite / PostgreSQL
  • Pandas / NumPy
  • Jupyter
  • Streamlit

Infrastructure

  • Linux / systemd
  • DigitalOcean
  • Git / GitHub Actions
  • Docker

Practices

  • pytest (293+ tests)
  • mypy (strict)
  • ruff / linting
  • CI/CD

Case Studies

Case Study

Real-Time Market Data Pipeline & Monitoring Platform

Built a production data aggregation platform that polls 5+ financial data APIs in real time, processes 20M+ data points, and surfaces actionable signals through automated detection and a live monitoring dashboard.

The Problem

Needed to continuously collect and cross-reference data from multiple market data providers, detect time-sensitive patterns, and present findings through a unified interface — all running autonomously 24/7.

What I Built

  • Async data pipeline — concurrent polling of 5+ APIs every 15-90 seconds using httpx with exponential backoff, rate limiting, and retry logic
  • Storage layer — SQLite with 5 normalized tables, snapshot deduplication, decimal-precision numerics
  • Signal detection engine — configurable threshold-based opportunity detection across multiple data dimensions
  • Risk management system — circuit breakers, position limits, drawdown halts, daily loss caps
  • Live dashboard — Streamlit app with real-time metrics, historical charts, and system health monitoring
  • Production deployment — DigitalOcean droplet, systemd services, SCP-based deployment
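The concurrency shape of the polling layer, stripped to a stdlib sketch: each source runs on its own cadence and all pollers share one sink. The real pipeline uses httpx with backoff and runs indefinitely; the names here (and the fixed cycle count) are simplifications for illustration:

```python
import asyncio

async def poll_source(name, interval, fetch, sink, cycles=3):
    """Poll one source on its own cadence, appending results to a shared sink."""
    for _ in range(cycles):
        sink.append((name, await fetch(name)))
        await asyncio.sleep(interval)

async def run_pipeline(sources, fetch):
    """Run all source pollers concurrently; sources is [(name, interval), ...]."""
    sink = []
    await asyncio.gather(
        *(poll_source(name, iv, fetch, sink) for name, iv in sources)
    )
    return sink
```

Because each poller is an independent task, a slow API can't stall the others — one event loop handles a 15-second source and a 90-second source side by side.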

Results & Quality

  • 20M+ records ingested and processed
  • 293 automated tests (pytest), strict mypy, ruff linting
  • System ran autonomously in production for weeks
  • Sub-second signal detection latency

Live Dashboard

Pipeline monitor — real-time metrics and cumulative P&L chart with drawdown annotation
Pipeline monitor — API health monitoring, signal volume analysis, and recent signals table

Built With

Python asyncio httpx SQLite Pydantic Streamlit APScheduler pytest mypy DigitalOcean systemd

Case Study

API Health Monitor

An async health monitoring system that polls multiple API endpoints concurrently, tracks uptime and latency percentiles, detects outages with smart alerting, and displays everything through a real-time dashboard.

The Problem

Teams relying on third-party APIs need visibility into availability and performance. Manual checks don't scale, and existing tools are expensive or bloated for small-to-mid-scale use cases.

What I Built

  • Async health checker — concurrent polling via httpx with configurable intervals, timeouts, and expected status codes
  • Uptime tracking — per-endpoint success rate, avg/p95 latency, consecutive failure detection
  • Smart alerting — status transition detection (healthy → degraded → down), cooldown to prevent alert fatigue, Slack/Discord webhook dispatch
  • SQLite storage — WAL mode for concurrent reads, indexed for fast dashboard queries
  • Live dashboard — Streamlit + Plotly with uptime bars, latency time series, success rate heatmaps, and alert history
  • YAML config — add or remove endpoints without code changes
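The transition-plus-cooldown alerting above can be sketched in a few lines — class and method names here are illustrative, not the monitor's actual API:

```python
import time

class Alerter:
    """Fire an alert only on status transitions, with a per-endpoint cooldown."""

    def __init__(self, cooldown=300.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock
        self._state = {}       # endpoint -> last known status
        self._last_alert = {}  # endpoint -> timestamp of last alert sent

    def observe(self, endpoint, status):
        prev = self._state.get(endpoint)
        self._state[endpoint] = status
        if status == prev:
            return None  # no transition, nothing to report
        now = self.clock()
        if now - self._last_alert.get(endpoint, float("-inf")) < self.cooldown:
            return None  # transition happened, but we're still in cooldown
        self._last_alert[endpoint] = now
        return f"{endpoint}: {prev or 'unknown'} -> {status}"
```

Alerting on transitions rather than raw failures is what prevents alert fatigue: a flapping endpoint produces one message per state change (rate-limited by the cooldown), not one per failed check. In this sketch, transitions suppressed by the cooldown are simply dropped.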

Results & Quality

  • 35 automated tests (models, storage, checker, alerter)
  • Strict mypy, ruff linting
  • Clean architecture — checker, storage, alerter as independent modules
  • Demo seeder generates 7 days of realistic data with simulated outages

Live Dashboard

API Health Monitor — uptime tracking, response time charts, and success rate heatmap
API Health Monitor — alert history, latency distribution, and endpoint detail table

Built With

Python asyncio httpx Pydantic SQLite Streamlit Plotly pytest respx
View on GitHub

Let's Work Together

I'm available for freelance projects. If you need help with data pipelines, API integrations, automation, or Python backend development — let's talk.