Senior Software Engineer
Java and Spring Boot for most of 15 years. AWS for the last five. Building Gen AI agents and MCP tooling in production now. Senior experience + agentic coding tools = a combination that compounds.
Full-stack developer on an OLAP analytics platform for AC Nielsen. Java backend, front-end UI, data analysis support. Where I learned that the boring infrastructure work is actually the important work.
Enterprise Java across regulated industries — State Street (regulatory reporting, REST services, in-memory cache for large datasets), NY Life (ACORD XML transformation via SOAP/Apache CXF), MetLife (batch-to-realtime migration).
Started building core policy and agency management systems in Java/Spring Boot on Kubernetes. For the last couple of years, shifted to Gen AI: LangChain agents, Bedrock pipelines, MCP tooling — all in production.
LangChain ReAct agent connected to Elasticsearch via Elastic MCP, a Playwright browser tool, a Qdrant RAG store, and short- and long-term semantic memory. Running on Amazon Bedrock AgentCore, orchestrated via n8n. Designed to automate repeated prod support tasks, starting with minimal autonomy and expanding as trust is established.
The interesting design constraint: it has to be trustworthy enough that on-call engineers actually hand it tasks at 3am.
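The "minimal autonomy first" idea boils down to one mechanism: every tool declares whether it mutates anything, and only read-only tools run without a human in the loop. A minimal sketch of that gate — tool names and return strings here are hypothetical, not the production agent:

```python
# Illustrative sketch: tools declare read_only; mutating calls are queued
# for an on-call engineer to approve instead of executing directly.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    read_only: bool  # read-only tools may run without approval

class GatedExecutor:
    def __init__(self, tools: list[Tool]):
        self.tools = {t.name: t for t in tools}
        self.pending: list[tuple[str, str]] = []  # (tool, args) awaiting approval

    def execute(self, name: str, args: str) -> str:
        tool = self.tools[name]
        if tool.read_only:
            return tool.run(args)          # safe: execute immediately
        self.pending.append((name, args))  # unsafe: hold for a human
        return f"queued '{name}' for human approval"

search = Tool("search_logs", lambda q: f"hits for {q!r}", read_only=True)
restart = Tool("restart_service", lambda s: f"restarted {s}", read_only=False)
ex = GatedExecutor([search, restart])
print(ex.execute("search_logs", "timeout errors"))   # runs immediately
print(ex.execute("restart_service", "billing-api"))  # held for approval
```

Expanding autonomy then means promoting individual tools out of the approval queue one at a time, with an audit trail behind each promotion.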
Multimodal LLM pipeline to analyze claim documents + notes and suggest next actions. Step Functions orchestrates Lambdas — Textract ingests docs, Bedrock does the analysis. The hard part: giving adjusters enough auditability to trust the output.
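The rough shape of that orchestration, as a Step Functions state-machine sketch — state names, Lambda ARNs, and result paths are illustrative placeholders, not the production definition:

```json
{
  "Comment": "Illustrative claim-document pipeline (all names hypothetical)",
  "StartAt": "ExtractText",
  "States": {
    "ExtractText": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:textract-ingest",
      "ResultPath": "$.extracted",
      "Next": "AnalyzeClaim"
    },
    "AnalyzeClaim": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:bedrock-analyze",
      "ResultPath": "$.analysis",
      "Next": "SuggestActions"
    },
    "SuggestActions": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:suggest-actions",
      "End": true
    }
  }
}
```

Keeping each stage's output under its own `ResultPath` is what makes the auditability story work: the full execution history shows exactly what Textract extracted and what the model saw before it suggested anything.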
Customer-facing agentic chat querying policy, claims, and billing GraphQL APIs plus a RAG store with policy docs. Per-user memory across sessions. Tricky balance: grounded enough to be safe, flexible enough to be useful.
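One concrete way to hold that balance is to answer only when retrieval is confident and defer otherwise. A toy sketch of the idea — embeddings are faked with bag-of-words vectors, and the threshold, doc IDs, and documents are illustrative, not production values:

```python
# Grounding sketch: answer from the best-matching doc only when its
# similarity clears a threshold; otherwise return None (defer to a human
# or a "I don't know" response) rather than risk an ungrounded answer.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], threshold: float = 0.3):
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(text.lower().split())), doc_id)
              for doc_id, text in docs.items()]
    best_score, best_id = max(scored)
    return best_id if best_score >= threshold else None  # defer when unsure

docs = {"policy-7": "deductible applies to collision claims",
        "billing-2": "premium payments are due monthly"}
print(retrieve("when is my premium payment due", docs))   # billing-2
print(retrieve("quantum entanglement basics", docs))      # None -> deferral
```

Raising the threshold makes the bot safer and less useful; lowering it does the opposite. In production that dial gets set per API (policy vs. billing vs. claims), not globally.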
Core insurance platform — policy lifecycle and agency management. Java/Spring Boot backend on Kubernetes with DocumentDB and Lambda + API Gateway for the service layer. The majority of my time at Plymouth Rock before the AI shift.
Relative Rotation Graph (RRG) methodology built from scratch — RS-Ratio and RS-Momentum calculations, live market data, interactive Plotly charts. Started as a "let me understand this properly" project and became something I actually use for my own portfolio decisions.
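The official JdK RS-Ratio and RS-Momentum formulas are proprietary, so "from scratch" here means the widely used open approximation: relative strength versus a benchmark, centered around 100 against its own moving average, with momentum as the rate of change of that ratio. A minimal sketch:

```python
# Open approximation of RRG inputs (not the proprietary JdK formulas).
def sma(xs, n):
    # simple moving average; each output aligns to the end of its window
    return [sum(xs[i - n + 1:i + 1]) / n for i in range(n - 1, len(xs))]

def rs_ratio(prices, benchmark, window=10):
    # relative strength vs. benchmark, normalized around 100 by its own SMA
    rs = [100.0 * p / b for p, b in zip(prices, benchmark)]
    return [100.0 * r / a for r, a in zip(rs[window - 1:], sma(rs, window))]

def rs_momentum(ratio, window=10):
    # rate of change of RS-Ratio, also centered around 100
    return [100.0 * ratio[i] / ratio[i - window] for i in range(window, len(ratio))]

# Sanity check: a security that tracks its benchmark exactly sits at (100, 100),
# the center of the RRG chart.
flat = [float(x) for x in range(1, 40)]
ratio = rs_ratio(flat, flat)
assert all(abs(r - 100.0) < 1e-9 for r in ratio)
assert all(abs(m - 100.0) < 1e-9 for m in rs_momentum(ratio))
```

Plotting RS-Ratio on the x-axis against RS-Momentum on the y-axis, with a trailing tail per ticker, gives the familiar four-quadrant rotation chart.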
REST service for high-volume regulatory datasets with an in-memory cache layer for throughput. Custom validation framework automating data quality checks. Also: ACORD XML transformation (NY Life) and batch-to-realtime migration (MetLife).
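The cache layer was the classic read-through pattern: serve fresh entries from memory, fall back to the slow dataset lookup on a miss or expiry. A minimal sketch of the idea — the loader, key names, and TTL are hypothetical (the original was Java; Python here for brevity):

```python
# Read-through TTL cache in front of a slow backend lookup.
import time

class TTLCache:
    def __init__(self, loader, ttl_seconds=60.0):
        self.loader = loader     # called on cache misses
        self.ttl = ttl_seconds
        self.store = {}          # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]      # fresh hit: no backend call
        value = self.loader(key) # miss or expired: reload and re-stamp
        self.store[key] = (time.monotonic() + self.ttl, value)
        return value

calls = []
cache = TTLCache(loader=lambda k: calls.append(k) or f"row:{k}")
assert cache.get("r1") == "row:r1" and cache.get("r1") == "row:r1"
assert calls == ["r1"]  # backend hit exactly once within the TTL
```

For large regulatory datasets the interesting work is in what this sketch omits: sizing the cache to the working set, invalidating on upstream corrections, and deciding which queries are allowed to serve stale data.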
Most production AI problems are data quality, latency, and trust — not model capability. I've spent more time building output validators and audit trails than tuning prompts.
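The validator-plus-audit-trail pattern is simple to state: treat model output as untrusted input, check it against a schema before acting on it, fail closed, and record every decision. A sketch, with the action names, fields, and payloads all hypothetical:

```python
# Validate-then-audit: no model output reaches downstream systems unchecked,
# and every output (valid or not) leaves an audit record.
import json, datetime

ALLOWED_ACTIONS = {"approve", "escalate", "request_docs"}

def validate(raw: str) -> dict:
    out = json.loads(raw)  # JSONDecodeError subclasses ValueError
    if out.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {out.get('action')}")
    if not isinstance(out.get("confidence"), (int, float)):
        raise ValueError("confidence must be numeric")
    return out

audit_log = []

def decide(raw: str) -> dict:
    record = {"ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
              "raw": raw}
    try:
        record["result"] = validate(raw)
    except ValueError as e:
        # fail closed: anything malformed becomes a human escalation
        record["result"] = {"action": "escalate", "reason": str(e)}
    audit_log.append(record)
    return record["result"]

assert decide('{"action": "approve", "confidence": 0.9}')["action"] == "approve"
assert decide('not json at all')["action"] == "escalate"
assert len(audit_log) == 2
```

The escalate-on-failure default is what makes this viable in a regulated shop: a malformed output costs a human a review, never an unaudited action.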
The prod support agent starts with workflow tools and human-in-the-loop steps. Agents that do too much too fast are hard to debug and harder to get sign-off on in regulated environments.
15 years in regulated environments means auditability, failure modes, and compliance are first-class concerns — not afterthoughts you bolt on when something goes wrong in prod.
Agents are genuinely useful. They're also fragile in ways demos don't show. I've shipped both the wins and the failure cases — the second teaches you more.
I use AI coding assistants heavily and the velocity gain is real. What makes it work is 15 years of knowing the right architecture, the edge cases, the abstractions that will bite you in six months. Without that, you're just generating code faster. With it, you're compounding.
Open to conversations about enterprise AI tooling, agentic systems, or anything backend. Not actively looking but happy to talk.