Reflection · EIF Cohort 9

What I learned

The two biggest things I came out of EIF with.

01 · What I learned

Prompt engineering is way harder than I thought. Coming in, I figured “good prompt, good output.” That's not how it works in production. It's an iterative loop of constraints, structured outputs, fallbacks, and guardrails. Every output you don't want is a tendency you have to push back against, sometimes one prompt revision at a time. The Slide Deck Generator's anti-generic logic and the Thought Leader Drafter's full-text style injection both came from the same realization.

You don't tell AI what you want. You build a system that makes it hard for AI to give you what you don't want.

The other one is that production AI is a different beast from school projects. School AI projects can be brilliant in a demo and fall apart in week two. The EIF projects had real users, real stakeholders (the CEO, the LXDs), and real expectations. Stuff had to keep working. I had to think about edge cases, error handling, failure modes, and what happens when an LLM call times out at 2am. None of that is novel as a concept, but actually doing it for the first time is what makes it click.
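The 2am-timeout problem usually gets handled with something like a fallback chain: try the primary model, fall through to a backup, and if everything fails, return a safe default instead of a stack trace. This is a generic sketch under my own assumptions, not the projects' actual error handling.

```python
from typing import Callable

def with_fallbacks(calls: list[Callable[[str], str]], prompt: str, default: str) -> str:
    """Try each provider in order; never let a transient failure reach the user."""
    for call in calls:
        try:
            return call(prompt)
        except (TimeoutError, ConnectionError):
            continue  # in a real system: log the failure, then try the next provider
    return default

def primary(prompt: str) -> str:
    raise TimeoutError("model took too long")  # simulate the 2am timeout

def backup(prompt: str) -> str:
    return f"[backup model] {prompt}"

result = with_fallbacks([primary, backup], "draft intro", default="Service busy, try again.")
# result == "[backup model] draft intro"
```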

02 · The honest part
What was hard

The hardest part was inheriting other people's codebases. Both projects came with a Phase 1 baseline that I had to read, understand, and then push forward. There's a specific kind of discomfort to being three days into a project and still not fully sure why the original architect made some of the choices they did. I got better at it. Reading code became less about "what does this do" and more about "what was this trying to solve, and is that still the right problem."

The other hard parts: the gap between "AI demo works" and "AI works reliably for users" is wider than I expected. And juggling EIF with school, orgs, freelance work, and other internships meant I was always one schedule conflict away from something slipping.

What I'm proud of

Three things, all genuine wins.

The prompt engineering depth I picked up. I went from writing prompts to thinking in terms of constraints, structured outputs, and fallback chains. That's a real skill and I didn't have it four months ago.

The full-stack range. Backend Python, frontend React, AI integration, all in one sprint, and shipping. Most of my prior projects have been one stack at a time. EIF forced me to be useful across the whole thing.

The A/B comparison hack on Thought Leader Drafter. Single LLM call, two delimited variants, parsed on the backend. Saved cost, saved latency, made the value of the writing samples visible.
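The backend half of that trick is just a careful split. Something like the sketch below, where one response carries both variants behind a delimiter; the delimiter string and function name are assumptions for illustration, not the Drafter's actual code.

```python
# One LLM call returns both drafts, separated by an agreed-upon delimiter
# (the prompt instructs the model to emit it between variants).
DELIM = "<<<VARIANT_B>>>"

def parse_variants(raw: str) -> tuple[str, str]:
    """Split a single response into variant A (with writing samples) and B (without)."""
    if DELIM not in raw:
        return raw.strip(), ""  # degrade gracefully if the model ignored the format
    a, b = raw.split(DELIM, 1)
    return a.strip(), b.strip()

response = "Draft in your voice...\n<<<VARIANT_B>>>\nGeneric draft..."
variant_a, variant_b = parse_variants(response)
```

One call instead of two halves the token cost and latency, and showing both drafts side by side makes the effect of the writing samples obvious.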

What I'd do differently

Push back more on architecture decisions instead of just inheriting. When I joined Phase 2 of both projects, I read the existing code, accepted the structure, and built on top of it. There were a few choices I should have questioned earlier instead of going along with them.

Get closer to the users sooner. I worked with stakeholders fine, but I didn't sit with actual LXDs while they used the deck generator, and I didn't watch the CEO actually try to draft an article in the Thought Leader Drafter. That kind of feedback is irreplaceable and I should have prioritized it.

Manage time better. Finals plus EIF plus orgs plus other work was rough. Some of that was unavoidable but some of it was on me for not setting tighter boundaries earlier.

03 · What's next

I'm looking for software engineering internships. Open to full-stack roles with AI components, AI/ML focused product roles, or anything where the AI work is real instead of decorative. I'd rather build a tool that ten people use every day than ship a flashy demo that gets forgotten.

A few directions I'm interested in:

01 · AI products that actually work: production-grade, reliable, not gimmicky.

02 · The intersection of AI and UX: most AI tools are technically impressive and miserable to use. There's a lot of room there.

03 · Going deeper on AI/ML systems beyond prompt engineering: I want to understand the layer below the API.

But fundamentally I just want to keep shipping things. AI or otherwise. The best version of the next year for me looks like more projects with real users, more time spent on the parts of building that I haven't done yet, and less time on stuff I've already proven I can do.

Want to talk?

The easiest ways to reach me are on the contact page.

Get in touch