Services

I take on a small amount of independent work each week.

If your team is drowning in manual work, unsure how to standardize AI coding tools, or shipping inconsistent output because nobody's written the convention files yet, this page is for you.

AI Readiness Assessment

First 5 at $99 (normally $250)

A 1-hour working call plus a 3-artifact deliverable bundle — written action plan, adapted playbook, and custom AGENTS.md committed to your repo.

What you get:

  • Written action plan — your team's diagnosed readiness level and the 3-4 highest-ROI next steps, with effort and expected outcome per action.
  • Custom AI Adoption Playbook — my public playbook adapted to your stack, scale, and readiness level. Examples use your domain, not generic SaaS.
  • Custom AGENTS.md — ready to commit to your main repo. Tailored to your codebase structure, conventions, and stack.

What I work on

AI coding tool adoption

Your team has Claude Code, Cursor, or Copilot licenses, but output quality varies wildly from developer to developer, senior engineers are skeptical, and nobody can prove ROI. For teams of up to 20 engineers, I help you standardize tool selection, build a rollout plan, and run a practitioner workshop against your actual codebase.

Convention files (CLAUDE.md, AGENTS.md)

The reason your AI tools produce inconsistent output usually isn't the model — it's the context. Convention files are how you turn AI coding tools from novelty into compounding productivity. I audit your codebase and ship CLAUDE.md / AGENTS.md / tool-specific rules committed to your repo. Not generic templates.
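
To make that concrete, here's the shape of a trimmed AGENTS.md excerpt. The commands, paths, and rules below are illustrative stand-ins, not your project's; the real file is built from an audit of your codebase.

    # AGENTS.md
    ## Setup & checks
    - Install: pip install -e ".[dev]"
    - Before committing: ruff check . && pytest -q
    ## Conventions
    - Python 3.11+, type hints on all public functions
    - Use the shared logger in src/app/logging.py; never print()
    ## Boundaries
    - Don't hand-edit anything under migrations/
    - Secrets come from environment variables, never from checked-in config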

Automation architecture

Python, APIs, data pipelines, AWS Lambda, n8n. The same patterns that cut $5M in annual operational costs during my years at Ford Motor Company. Production-grade systems with error handling, logging, and monitoring — not prototypes, not 'we'll fix that later.'
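
As a sketch of what "production-grade" means in practice, here's the skeleton of an SQS-triggered Lambda handler I'd consider a baseline. The event shape and the process() step are placeholders, and it assumes the trigger has partial-batch responses (ReportBatchItemFailures) enabled.

    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        """Process a batch of SQS records, reporting per-record failures."""
        failures = []
        for record in event.get("Records", []):
            try:
                payload = json.loads(record["body"])
                process(payload)
            except Exception:
                # Log the full traceback; only this record gets retried.
                logger.exception("record %s failed", record.get("messageId"))
                failures.append({"itemIdentifier": record.get("messageId")})
        return {"batchItemFailures": failures}

    def process(payload: dict) -> None:
        # Placeholder for the actual business step (API call, DB write, etc.).
        logger.info("processing item %s", payload.get("id"))

The point isn't this specific handler; it's that failure handling, logging, and retries are designed in from the first commit instead of bolted on later.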

How engagements start

Every engagement begins with a paid intake: $500, about 45 minutes of live time. Here's what that looks like:

  1. You send a short async brief (about ten minutes of your time) describing the problem, your stack, and what you've already tried.
  2. I read the brief ahead of the call. We spend ~45 minutes together on Zoom — direct, practitioner-to-practitioner, no discovery theater.
  3. Within 48 hours, you get a written action plan: what I'd do, in what order, with rough scope and pricing. Yours to keep whether or not we work together.
  4. If we scope a project, the $500 intake rolls into the first invoice.

Guarantee: if the written action plan doesn't identify meaningful automation savings or a clear path forward, I refund the $500. I don't reshape the problem to fit a billable engagement.

If we work together

Scoping is a short async exchange — deliverables, timeline, price. No multi-week discovery phase that costs more than the actual work.

Build is async-first with weekly check-ins. Production-grade code with error handling, logging, and monitoring — the same bar I hold for my day job.

Handoff includes documentation, runbooks, and 30 days of support for bugs or adjustments. After that, you can hire me for maintenance or take it in-house — I prefer when teams can own what I build.

Not a fit

I can't take work in investment management, wealth advisory, retirement solutions, ESG data, credit ratings, or compliance software for financial services — conflict with my current employer.

I also don't work well with teams that want a six-month discovery phase before anyone touches code, need business-hours coverage (my weekdays belong to Morningstar), or expect me to compete in a formal RFP with a scoring rubric. None of those are bad things — they're just not how I can help.

If any of this sounds like your team, send a short note describing the problem. I read every submission and respond within 48 hours with either a scoping proposal or an honest "I'm not the right fit for this."

Start the conversation →