Weeknote #76 (20260308-20260314)
meta
Time changes. UGH
did
- Monday: Rowed; long day at work
- Tuesday: Lifted; another long day at work
- Wednesday: Rest day
- Thursday: Lifted; not a long day at work, for a change, but mainly because I made it so
- Friday: Rowed; zero-meeting day at work, so I was able to get caught up on a few things
read
-
“Introducing Our Lord and Savior, the College’s New Strategic Initiative” (via Screenshot):
A lot of you are probably wondering what the new strategic initiative is. Well, it’s complicated and hard to explain. It moves in mysterious ways. We’re building this plane as we fly it, as they say in the new strategic initiative biz. But fear not, because the strategic initiative will reveal itself in all its glorious details at a time when you are ready to comprehend it.
Did I immediately forward this to TheWife, who works in higher ed? You bet I did. Does it actually apply to corporate America pretty broadly? Hells to the yeah.
-
Here’s what I think is happening: AI-assisted coding is exposing a divide among developers that was always there but maybe less visible.
I continue to think “outcome-driven versus process-driven” is the most useful way to think about this, at least for my brain, but — as an admittedly very process-driven person — I’m not sure how you get the outcome you want without some sort of process…
-
“AI Didn’t Break the Senior Engineer Pipeline. It Showed That One Never Existed.”:
For decades, the software industry created capable engineers almost by accident. The work itself provided natural friction. You couldn’t Google your way past a broken build in 2005. There was no AI to debug your segfault. You had to sit with the problem, build a mental model, try things that didn’t work, and eventually find your way through. The junior-to-senior pipeline was a label for what the environment was already doing.
The level system (junior, mid, senior, staff) was never a development model. It was a compensation and expectations framework. It told you what someone should be able to do at each level. It said nothing about how they got there. But because the environment was producing growth on its own, the gap was easy to ignore. The labels tracked what was already happening, and everyone assumed the structure was the mechanism.
AI didn’t break this system. AI revealed that the system was never there.
-
“An open letter to Grammarly and other plagiarists, thieves and slop merchants”:
I am sick to death of this — of the tech industry deciding it can do what it wants without consequences. Everything is a subscription, everything is harder to use and enshittified. Everything is built on something that was stolen or degraded. So-called AI – which is not intelligent at all – is being shoved into everything, and makes nothing better (quite the reverse).
-
“Your LLM Doesn’t Write Correct Code. It Writes Plausible Code.”:
THIS is the failure mode. Not broken syntax or missing semicolons. The code is syntactically and semantically correct. It does what was asked for. It just does not do what the situation requires.
…
The obvious counterargument is “skill issue, a better engineer would have caught the full table scan.” And that’s true. That’s exactly the point! LLMs are dangerous to people least equipped to verify their output.
(Emphasis mine.)
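A toy sketch of what the article means by plausible-versus-correct, using the full-table-scan example it mentions. This is my own illustration, not the article's code; the `users` table and `idx_users_email` index are hypothetical:

```python
import sqlite3

# Two queries that return the same rows against a hypothetical users table.
# Only one can use the index; the other looks just as reasonable but forces
# a full table scan -- "plausible" code that isn't what the situation requires.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Plausible: wrapping the indexed column in a function defeats the index.
scan_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE lower(email) = ?",
    ("a@example.com",),
).fetchall()

# What the situation requires: a predicate the index can actually serve.
seek_plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()

# The plan's detail column says "SCAN" for the first query and
# "SEARCH ... idx_users_email" for the second.
print(scan_plan)
print(seek_plan)
```

Both versions compile, both run, both return the right rows on a small test dataset; only the query plan reveals that one of them falls over at production scale, which is exactly the "least equipped to verify" trap.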
-
“LLM exploration (clanker cosplay)”:
It’s making me feel pretty strongly that there is an undeniable ethical problem with making computation sound and act human.
watched
- Finished off Murderbot and kicked off a Ted Lasso rewatch
cooked
- Monday: weeknight tomato soup, grilled cheese
- Tuesday: cronchy tacos
- Wednesday: roast chicken, sautéed spinach
- Thursday: leftover tomato soup, more grilled cheese
looking forward to
I’m taking a little time off, but not until the week after next — gotta get through this upcoming week!