What I Learned Without AI


I started learning how to code in 2014, when I was a freshman at TJHSST. Every freshman had to take an intro to software engineering class. I didn't start with some long-term plan or career in mind. I just knew I loved the idea of building something from nothing.

It started with Java and Python. From there, I moved into robotics, web development, low-level programming, operating systems, and all kinds of projects that made programming feel bigger every year. Coding did not feel like learning one narrow skill. It felt like opening a door into a world where you could make things, test ideas, and slowly see how much could be built with software.

What I remember most about learning to code back then is that it never really "clicked." There was no single moment where everything suddenly made sense. It was mostly trial and error. I would try everything I could think of until something kind of worked, then spend hours figuring out why it worked, or why it broke the second I changed something else. A lot of learning was just reading the error closely, changing one thing, running it again, and repeating that process until I got a little closer. Progress was rarely clean. Most of the time, it meant one small step forward and ten frustrating steps back.

Before AI tools became part of the workflow, getting unstuck was its own skill. It meant digging through Stack Overflow, YouTube, old forum posts, random blogs, and page after page of Google results, hoping to find something that looked even remotely like the problem in front of me. Usually there wasn't a perfect answer waiting there. You had to stitch together half-relevant examples, test your assumptions, and keep moving even when you had no clear sign you were getting warmer.

Looking back, that process taught me a lot more than syntax. It taught me how to keep moving when I didn't know the answer yet, and how to work my way toward understanding instead of expecting it to appear all at once. Learning without AI forced me to build the habit of figuring things out, and that habit has stayed useful in every stage of engineering since.

By the time I started my first job as a Software Engineer at Bloomberg in 2021, I realized coding itself was only one part of the job. Now I had to read other people's code, review changes, debug issues in larger systems, and understand codebases I didn't write. The work became less about building one program from scratch and more about learning how to work inside a system that already existed.

That shift is part of why my view on AI is complicated. I learned to code in a world where progress was slow and understanding was earned a little at a time. That process was frustrating, but it built habits I still depend on now: patience, attention to detail, and the ability to find my way through a system I don't fully understand yet.


That background is also why I get frustrated with the way AI is often used in engineering now. My issue is not that AI writes code. My issue is that it can make it too easy to produce code without fully understanding the change, the system around it, or the cost of what is being added.

One of my biggest problems with AI-generated code is that it reduces visibility into change. It collapses the distance between request and result so much that engineers can lose sight of what actually changed, why it changed, what assumptions were made along the way, and what might break somewhere else because of it. The code shows up quickly, often in large chunks, and the speed itself can make it harder to slow down and inspect what is really happening.

That matters because software engineering is not just about getting to a working result. It is also about understanding the path you took to get there. If you cannot explain why a change was made, what tradeoffs it introduced, or how it fits into the rest of the codebase, then "it works" is not really enough.

I have also seen AI create a specific kind of hidden tech debt when it is used carelessly. It can produce code that works locally, passes the immediate task, and looks plausible in a PR, while quietly making the system worse. It duplicates logic that already exists, ignores patterns the codebase already uses, introduces abstractions that do not fit, and leaves behind code that technically ships the feature but makes future iterations harder.

In that sense, AI often optimizes for completion, not coherence. It is very good at helping you get something over the finish line. It is much worse at asking whether the solution should look this way in the first place, or what this change will cost the next person who has to work in the code later.

A lot of that frustration comes from the human side, not just the tool itself. People get excited that the feature works and that the PR is out the door, and once that happens it becomes easy to ignore everything else: the inefficiency, the redundancy, the awkward patterns, the assumptions no one checked carefully enough. The immediate win is visible. The long-term cost usually is not.

I also think it takes some of the magic away from coding. Fourteen-year-old me would have been amazed by an unpolished, scrappy website with a tiny mini game, minimal CSS, and a plain white background that I somehow got working after hours of grinding. It was not impressive by any real standard, but I was proud of it because it was mine.

A lot of AI-assisted coding feels different, especially when AI starts doing too much of the building for you. It feels less like building and more like unboxing. You prompt, you wait, and then you see what shows up. That can be useful, but it is a different experience. Some of that ownership disappears too. There is a certain pride that comes from knowing you wrote every line, fought through every bug, and brought the thing to life yourself. With AI, that feeling is no longer there in quite the same way.

What worries me most, though, is what this does to how engineers learn. Over-reliance on AI can let people make the immediate change without going through the slower, more frustrating process of understanding the system they are changing. That means they may be able to ship code, but they have not built the intuition that helps them anticipate failure, reason through tradeoffs, or debug issues when something eventually breaks.

And something always eventually breaks. Systems change, assumptions fail, edge cases show up, traffic patterns shift, and code that looked fine in one context starts failing in another. When that happens, the real question is not whether an engineer was able to generate the change. It is whether they understand the system well enough to diagnose what went wrong and fix it on a random Wednesday at 2AM.

That is the part I do not think we should lose. If an engineer can ship changes but cannot explain how the system works, that is not leverage. That is dependency.


Even with all of that, I still think it is important for engineers to learn how to use AI well. The point is not to reject it. The point is to understand what it is actually good at, where it adds leverage, and where it should stop.

For me, one of the clearest benefits is that AI is very good at boilerplate and mundane work. It handles the kind of tasks where most of the structure is the same and only a few details need to change. If AI can generate the repetitive parts or write a solid first pass, that is real value.

It is also genuinely useful for navigating large codebases more quickly. You still need to verify what it tells you, but it can help you form a high-level picture without having to understand every implementation detail up front. That matters when you are working in an unfamiliar stack, trying to trace how pieces fit together, or just trying to get oriented faster than you could on your own.

That is part of why I think AI lowers the barrier to entry in a meaningful way. Beginners can spend less time completely stuck. Engineers can move across unfamiliar languages, frameworks, and systems more quickly. A lot of the painful trial and error that used to be part of getting started can now be shortened. That is a real advantage. At the same time, lowering the barrier to entry is not the same thing as building deep understanding. It helps people get moving faster, but they still have to learn how to reason about what they are building.

I also think AI works well as a thought partner. You can bounce ideas off of it, ask it what you might be missing, have it review your approach, compare options, and use it to structure your thinking before you write the final version yourself. Sometimes that means sanity-checking a design. Sometimes it means using it like a rubber duck while debugging.

So despite my frustrations with it, I do not think the answer is to ignore AI or pretend it is not changing engineering. It is useful. It lowers friction and increases speed. The real challenge is learning how to use that leverage without letting it replace judgment.


Two principles shape how I use AI in engineering:

1. Use AI to accelerate your work, not replace your understanding.
2. Trust it enough to use it. Verify it enough to own it.

AI is useful because it can make engineers faster. It can help plan, write code, review code, and handle repetitive work that does not need deep attention every time. But AI is not a replacement for engineering judgment. The goal is to use AI in a way that makes that judgment more effective, not less necessary.

That matters because AI does not actually change what makes someone a good engineer. The same skills that mattered in 2021 still matter now: understanding systems, debugging carefully, reading code closely, reasoning about tradeoffs, and maintaining quality over time. AI does not replace any of that. It just adds another tool that engineers now need to learn how to use well.

That is where the second principle matters: trust it enough to use it. Verify it enough to own it. AI can give you a strong first pass and handle mundane work well, but none of that removes the engineer's responsibility to check the output carefully. Verification means reading the code closely, testing it against actual behavior, and making sure it fits the system it is entering.

That becomes even more important in high-stakes systems. In areas like infrastructure or security-sensitive code, verification has to be much stricter. In those cases, "mostly right" is not enough.

One rule follows from that: never merge code you cannot explain. If you cannot clearly explain what a change does, why it was written that way, what assumptions it makes, and how it fits into the rest of the system, it should not be shipped. "The AI wrote it" does not change that. Ownership still belongs to the engineer.

It is also important to anchor AI output to the actual codebase, not just to generic best practices. A generated solution might look clean in isolation and still be wrong for the system it is being added to. Part of using AI well is asking more specific questions: Does this match our architecture? Do we already have a utility or pattern for this? Is this the style we want in this codebase? Who is going to maintain this six months from now?

AI can be a real advantage. It can accelerate work and improve quality when it is used with care. But it only works when the engineer stays responsible for understanding, judgment, and ownership. AI changes the workflow. It does not change the standard.


I don't think the answer is to reject AI. I think the answer is to use it without giving up the parts of engineering that matter most.

When I think back to learning how to code as a freshman in high school, what I remember most is the feeling of making something work after hours of trial and error. A small game, a silly website, something that slowly came to life one piece at a time. I was proud of those things not because they were perfect, but because they were mine. More than that, they taught me how to persevere, how to adapt when things broke, and how to learn by working through struggle instead of around it. That is what mattered when I was first learning how to code, and I do not think that should change now.