
My P(oof) is 100%

How my thinking on/about LLMs has changed over 3 years

by Jake Simonds | October 24, 2025

Moment 1: holy shit, I'm talking to a computer

We were somewhere around Barstow, on the edge of the desert, when the AI began to take hold.

- tweet I remember reading, probably early 2023

GPT3.5 was immediately useful to me. I was a year into a bootcamp-like grad program, learning to code but still getting stuck all the time. 3.5 wouldn't necessarily solve everything for me, but it could consistently get me unstuck, which was a godsend.

It was a crazy time. Then there was the anticipation of GPT4, the Kevin Roose Sydney/Bing chat (the first and last time I've ever cared to hear about someone else's conversation with an LLM), and I just kinda tried to keep up with it all as best I could.

I felt certain the world was changing. The questions were how much better are these models gonna get, and how fast will that happen? And everybody was talking about scale, scale, scale.

Why I'm grateful I started learning to code 1 year before chatGPT

Sub-pages are coooooooooool! Thank you Leaflet

It's so humbling trying to learn to code. By the time chatGPT dropped I had already invested a full year of full-time study (& a full year of not working, living off savings), so I couldn't chicken out.

If I were starting today, I honestly think I would quit or never seriously start: 1) because of all the articles saying "there are no junior engineer jobs," and 2) because of how hard it would be to work-work-work for months and still not be able to do things that chatGPT can do trivially.

Moment 2: Getting real familiar with llama.cpp internals...and feeling like I understand even less

Two years pass between moment 1 and moment 2, when I vibe-coded a small project, word synth. I basically just exposed all the exposable parameters I dared to touch in llama.cpp (running the smallest Llama model at the time, 2b or something around there) & then had an interface to mess around with them.

The idea behind this was that I'd heard lots of people talk about the value of messing with temperature, top_k, repeat penalty, and other sampling parameters, and I wanted to get an understanding of that stuff.
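Roughly what those knobs do, sketched against a made-up logits vector. This is just an illustration of the idea, not llama.cpp's actual code or API; the function, the 10-token "vocabulary," and the numbers are all invented for the example.

```python
# Toy sketch of the sampling knobs word synth exposed: temperature, top_k,
# and a repeat penalty, applied to a made-up logits vector (not a real model).
import numpy as np

def sample(logits, prev_tokens, temperature=0.8, top_k=40, repeat_penalty=1.1):
    logits = logits.astype(np.float64).copy()

    # Repeat penalty: make tokens we've already emitted less likely.
    for t in set(prev_tokens):
        logits[t] = logits[t] / repeat_penalty if logits[t] > 0 else logits[t] * repeat_penalty

    # Temperature: below 1 sharpens the distribution, above 1 flattens it.
    logits /= max(temperature, 1e-8)

    # Top-k: throw away everything outside the k most likely tokens.
    k = min(top_k, len(logits))
    cutoff = np.sort(logits)[-k]
    logits[logits < cutoff] = -np.inf

    # Softmax, then draw one token id.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

# A 10-token "vocabulary" where token 3 is heavily favored.
logits = np.array([0.1, 0.2, 0.3, 5.0, 0.1, 0.4, 0.2, 0.1, 0.3, 0.2])
print(sample(logits, prev_tokens=[]))                        # almost always token 3
print(sample(logits, prev_tokens=[3], repeat_penalty=3.0))   # token 3 noticeably less dominant
```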

I expected this to be a useful exercise in a similar vein to how you might build a half-adder or, more modestly, just use resistors and other basic components to hook up an LED to a potentiometer.

You do a basic breadboard exercise, and it's not like you're under any illusions that you're on your way to rebuilding a modern processor from scratch, but my experience with that sort of exercise is that you come away understanding digital logic better. Digital logic is still amazing and awe-inducing, but these sorts of exercises demystify it.
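If you've never done it: a half-adder really is just two gates, XOR for the sum bit and AND for the carry bit. A throwaway sketch, not tied to any real hardware:

```python
# A half-adder in two boolean ops: XOR gives the sum bit, AND gives the carry.
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b  # (sum, carry)

# Truth table: 0+0 -> (0,0), 0+1 -> (1,0), 1+0 -> (1,0), 1+1 -> (0,1).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```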

With my llama.cpp exploration...I did not find it demystifying. In fact, all the messing around I've done with small LLMs, while quite informative and helpful, has not made me feel like "Oh, I get why this works when scaled up." Quite the opposite, in fact.

Moment 3 (today-ish): Holy Shit, I don't really know SQL

I use SQL nearly every day for work. But I habitually use LLMs both to understand existing SQL and to create the SQL I need.

I never properly learned SQL because it was kinda the next thing up on my list when chatGPT dropped. Plus, LLMs are really good at SQL, so I've been able to get by without really learning it.

This is where I get my P(oof) = 100%: LLMs have sneakily let me keep choosing the "easier" path every day, which in the long run is the way harder, way worse path.

Moment 4?

Most people overestimate what they can do in one year and underestimate what they can do in ten years.

- Bill freaking Gates

tbd.

