Tags: models

Thursday, April 23rd, 2026

It’s Not AI. It’s FOMOnetization.

FOMO is a feeling. But it’s also a business model—and increasingly, one of the more successful ones. Fear, in general, makes people much easier to separate from their money. It’s perfectly suited to this moment of ubiquitous grift, where everything feels like a lottery ticket or a multi-level marketing scheme.

It’s even more perfectly suited for “the age of AI,” which squeezes economic FOMO from both sides. AI could make you wildly rich (the first person to start a billion-dollar company with zero employees!) or leave you hopelessly destitute (part of the looming “permanent underclass”). Which one do you want to be? Smash that like button, sign up for my online course, and use my new AI-powered business platform!

Summary punishment

In the latest issue of Matthias’s excellent Own Your Web series, he describes the recent betrayal by Google:

The search engine no longer says “here, go read what this person wrote.” It now says “here, I’ve already read it for you.” The contract is broken.

He’s absolutely right.

But…

Have you ever clicked on a result from a search engine? Unless you’re lucky enough to land on a nice personal website, you’re more than likely to be confronted with pop-ups asking to allow tracking, or a desperate plea to subscribe to a newsletter, or just rubbish ads, all accompanied by a slow-loading page somewhere in the mix.

Don’t get me wrong. I’m not saying that what Google is doing is okay. But let’s not pretend that everything indexed by Google is just fine and dandy for people to visit.

And of course the main reason why websites are so terrible is because they’ve tied their business model to heaps of behavioral advertising driven by invasive tracking courtesy of …Google.

This reminds me of AMP. Remember Google AMP? It was a terrible solution to a real problem. Web pages were (and still are) bloated and slow. The correct solution would be to encourage people to fix that, but instead Google mandated a proprietary format for your content that had to be hosted on their servers.

AMP was a disaster, both in practical terms and in the reputational damage it did to Google’s developer relations.

Now they’re doing it again, powerwashing away any goodwill they ever had with site owners. Google doesn’t even send search traffic to the websites that host the very ads Google encouraged people to put on every page.

It’s almost as if Google is a company so large and with so many competing interests that it now suffers from an incurable split personality disorder.

Personally I think they’re missing a trick. They should be using “AI” summaries as a stick.

If your site is slow, or filled with user-hostile annoyances, then it should be cockblocked by a hallucinated summary. But a nice fast respectful website? Send the traffic their way! Everyone wins—users, site owners, Google, the World Wide Web.

Could you imagine how quickly this would revolutionise the world of search engine optimisation? They’ve always told us that we should make websites for humans in order to get good Google juice. This would be a way of making it come true, without any of the over-engineered woefulness of AMP.

It’ll never happen of course. But I can dream.

Tuesday, April 21st, 2026

Expansion artifacts | Matt Ström-Awn, designer-leader

Compression made the information age possible by stripping things down to fit the pipes. Expansion made the AI age possible by blowing data back up again. Both operations leave marks; we’ve learned to spot compression artifacts, but we’ve only just begun to reckon with expansion artifacts. Until we do, there’s a lot of risk to manage.

Thursday, April 16th, 2026

Threat models

People talk about the effectiveness (or lack thereof) of large language models as though all tasks are comparable. But it strikes me that there are three broad categories of work that large language models are applied to:

  1. Compression.
  2. Transformation.
  3. Expansion.

Compression is when you feed a large language model something big that you want to make small. Summarise this book. Give me the gist of this meeting. Large language models are generally pretty good at this, which makes sense given that they themselves are kind of like compressed artifacts.

Transformation is when large language models convert from one format into another. Turn this audio into text. Turn this jumble of data into structured JSON. A large language model can handle these tasks pretty well. There’ll probably be a few errors so make sure that’s not a deal-breaker.
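That caveat about errors is worth building into any pipeline. Here’s a minimal sketch of the idea—`llm_to_json` is a hypothetical stand-in for whatever model call you’d actually make—showing the point: validate the transformed output before trusting it.

```python
import json

def llm_to_json(text):
    """Hypothetical stand-in for a model call that transforms
    free text into JSON. Wraps the input so the sketch is runnable."""
    return json.dumps({"raw": text})

def transform(text):
    """Run the transformation, but verify the output parses before use."""
    output = llm_to_json(text)
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        # A transformation error: fall back, retry, or flag for a human.
        return None

result = transform("name: Ada, born: 1815")
```

The point of the `try`/`except` is the “make sure that’s not a deal-breaker” part: a transformation task gives you something checkable, so check it.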

Expansion is when you give a large language model a prompt to generate something from scratch. An image. A presentation. An email. A poem. This is where slop lives. The output inevitably betrays its origins, glistening with a sheen of mediocrity.

Laurie spotted this three-way split a while back:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

I hope that when the bubble finally bursts, we’ll see the surviving large language models put to work on the first two categories. The boring stuff. The work that’s tedious for humans.

But tedious is as tedious does. Something I consider drudgery might be the very thing that gives you life. Like Giles says:

I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession.

The big exception seems to be programming. Apparently there are plenty of coders who never before expressed an interest in being managers who are now happily hanging up their coding spurs in favour of being the overseer of non-human workers.

It’s a reasonable outlook. It could even be considered a user-centred approach. Users don’t care about the elegance of your code; they care about accomplishing their tasks.

Programming is something of an exception to the efficacy of large language models in general. Instead of relying on the subjectivity of painting, poetry, or prose, programming can be objectively tested. Throw enough money at the worst people in the world and they’ll give you tokens you can use to get the machines to test their own output. So you can get a large language model to create something reasonably good from scratch as long as that something is code.
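That objective testability can be made concrete. A sketch under stated assumptions—`generate_code` here is a stubbed stand-in for a model, not a real API—of the generate-and-verify loop: candidate code is executed against tests, and only a passing candidate is accepted.

```python
def generate_code(prompt, attempt):
    """Stubbed stand-in for a model call; returns canned candidates,
    with the first one deliberately buggy."""
    candidates = [
        "def add(a, b): return a - b",  # buggy first attempt
        "def add(a, b): return a + b",  # correct second attempt
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def passes_tests(source):
    """The objective check: run the candidate and test its behaviour."""
    namespace = {}
    exec(source, namespace)
    add = namespace["add"]
    return add(2, 3) == 5 and add(-1, 1) == 0

def generate_until_passing(prompt, max_attempts=5):
    """Generate, test, retry. Unlike prose, code can be machine-checked."""
    for attempt in range(max_attempts):
        candidate = generate_code(prompt, attempt)
        if passes_tests(candidate):
            return candidate
    return None

best = generate_until_passing("write an add function")
```

No equivalent loop exists for a poem: there’s no `passes_tests` for prose, which is exactly why code is the exception.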

If you had asked me about the threat model of large language models two years ago, I probably would’ve been worried for artists, writers, and musicians. I thought that software had enough inherent complexity to be relatively safe.

Now my opinion has completely reversed. Software is almost certainly the killer app for large language models.

I think the artists, writers, and musicians will be okay, or at least as okay as they ever were. It turns out that humans like things made by other humans.

And y’know what? If I had to choose which endeavour I’d rather see automated away—programming or art—it’s no competition.

Don’t get me wrong—it would be nice if everyone got paid for doing what they enjoy. It’s just that I’m okay with software engineers not being at the front of that line.

I remember when I first started getting paid money to make websites. “Really?” I thought, “Someone is willing to pay me to do something I’d do anyway?” I kept waiting for the jig to be up. Instead I saw my profession grow and expand.

Perhaps there’s a long-overdue compression happening.

Or maybe it’s more like a transformation.

Thursday, April 9th, 2026

The AI Great Leap Forward

In 1958, Mao ordered every village in China to produce steel. Farmers melted down their cooking pots in backyard furnaces and reported spectacular numbers. The steel was useless. The crops rotted. Thirty million people starved.

In 2026, every other company is issuing top-down mandates on AI transformation.

Same energy.

Tuesday, April 7th, 2026

AI Might Be Our Best Shot At Taking Back The Open Web | Techdirt

Not sure I buy the argument here, though I do very much look forward to local language models getting better so we can ditch the predatory peddlers of today’s slop. But this trip down memory lane to the early web of the 1990s could’ve been describing my own experience:

But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without Javascript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.

Fray was what made me want to make websites:

I distinctly remember sites like prehensile tales, 0sil8 and the inimitable Fray triggering something in my brain that made me realise what it was I wanted to do with my life.

Sunday, April 5th, 2026

I used AI. It worked. I hated it.: Taggart Tech

There’s a fundamental problem with these tools beyond the capacity of any deployment strategy to solve: the tool requires expertise to validate, but its use diminishes expertise and stunts its growth. How does one become an expert? There are no shortcuts; there is only continuous hard work and dedication. I was once told of writing, great writers learn how to break the rules in new and ingenious ways by first learning the rules.

But how is a new developer meant to learn the rules if their day-to-day work is nothing but the babysitting of models? How will they gain the hard-won experience that allows a human in the loop to be a useful safeguard?

These models alter cognition in ways deleterious to human prosperity. In other words, for as much output as they provide, they take something important from us.

Thursday, March 26th, 2026

The End : Focal Curve

I can’t remember the last time a blog post resonated with me this much.

Craig’s criteria on his job search:

  • One: fuck offices
  • Two: fuck AI
  • Three: fuck React

And his conclusion:

Fuck work

Saturday, March 21st, 2026

Flood fill vs. the magic circle

Eleven years ago, I wrote:

Sometimes I consider the explosive growth of computation and think that strong AI is a near-term inevitability.

Then I remember printers.

That was just a brainfart, but Robin tackles it seriously in his thoughtful essay.

A pleasing image: if indeed AI automation does not flood fill the physical world, it will be because the humble paper jam stood in its way.

Software cannot, in fact, eat this world. Software can reflect it; encroach upon it; more than anything, distract us from it. But the real physical world is indigestible.

Wednesday, March 18th, 2026

Working with agents doesn’t feel like flow — Bill de hÓra

Related to Matt’s thoughts:

…working with agents feels much less like classic deep work, and much more like playing a game. Not to say the work is frivolous—it’s just because it feels like I’m in a game loop.

Flow, at least in the usual sense for me, feels smooth and continuous. The work and your attention starts to line up so cleanly that the experience becomes frictionless. You disappear into the work and meld with it. One notable aspect of flow has been I lose track of time. Working with agents on the other hand, is not like that at all. It’s highly engaging, but in a more jagged, reactive way. I’m focused, but not settled. I’m absorbed, but not merged with the task. I’m paying close attention the whole time, but the attention is dynamic and tactical rather than continuous. I don’t lose track of time at all.

Tuesday, March 17th, 2026

Gas Town and Bullet Hell – Petafloptimism

Matt has some smart reckons on the relationship between time and technology:

The factory bell, the railway timetable, the telegraph wire, the always-on smartphone — each imposed a new temporal discipline, each produced its own characteristic form of exhaustion, and each was eventually (partially, imperfectly) domesticated through a combination of regulation, design, and collective action.

Monday, March 16th, 2026

Stop Sloppypasta: Don’t paste raw LLM output at people

slop·py·pas·ta n. Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

Thursday, March 12th, 2026

Generative AI vegetarianism | Sean Boots

Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life.

Wednesday, March 11th, 2026

I work, I think? - Annotated

This is about something that’s already happening, that doesn’t show up in employment figures: the quiet destruction of the feedback loop that turns inexperienced people into competent ones. The process by which you get something wrong, feel it, understand why, and become slightly less wrong next time. It’s unglamorous and it’s slow and it’s the only way it’s ever worked.

AI short-circuits that learning completely. Not maliciously. Just structurally. When you can generate something that looks right without doing the thinking, you will (most people, most people being me, will, most of the time, under pressure, with a deadline) and the muscle that thinking would have built never develops.

your ai slop bores me

Mutually assured Mechanical Turk.

This is genuinely much more interesting and wholesome than a chat interface powered by a large language model.

Tuesday, March 10th, 2026

I am in an abusive relationship with the technology industry

The cognitive overload of AI trying to Make You More Productive™️ whilst you’re actually trying to be productive is so shockingly absurd. And yet, we are being made to feel like we are stagnating, being left behind, not good enough, that we are luddites should we not adopt this imposing technology. We are being told we’re missing out, even though we’re probably doing just fine. The technology is gaslighting us.

Sunday, March 8th, 2026

Thursday, March 5th, 2026

LLMs Are Antithetical to Writing and Humanity

If you’re dyslexic and just trying to communicate more clearly in writing, or you’ve got a bullshit job and you just want to get your bullshit job’s bullshit tasks out of the way so you can move on to more meaningful endeavors, or at least move past the day-to-day slog that permeates your workday and serves no real purpose other than to pay the bills, then I cede; I cannot fault you.

But if, say, you’re a “writer” and you’re using an LLM to “help you” “write” or “think” because it’s easier and takes less time and thought, then I stand my ground; I can and do fault you.

Wednesday, March 4th, 2026

Feedback

If you wanted to make a really crude approximation of project management, you could say there are two main styles: waterfall and agile.

It’s not as simple as that by any means. And the two aren’t really separate things; agile came about as a response to the failures of waterfall. But if we’re going to stick with crude approximations, here we go:

  • In a waterfall process, you define everything up front and then execute.
  • In an agile process, you start executing and then adjust based on what you learn.

So crude! Much approximation!

It only recently struck me that the agile approach is basically a cybernetic system.

Cybernetics is pretty much anything that involves feedback. If it’s got inputs and outputs that are connected in some way, it’s probably cybernetic. Politics. Finance. Your YouTube recommendations. Every video game you’ve ever played. You. Every living thing on the planet. That’s cybernetics.

Fun fact: early on in the history of cybernetics, a bunch of folks wanted to get together at an event to geek out about this stuff. But they knew that if they used the word “cybernetics” to describe the event, Norbert Wiener would show up and completely dominate proceedings. So they invented a new alias for the same thing. They coined the term “artificial intelligence”, or AI for short.

Yes, ironically the term “AI” was invented in order to repel a Reply Guy. Now it’s Reply Guy catnip. In today’s AI world, everyone’s a Norbert Wiener.

The thing that has the Wieners really excited right now in the world of programming is the idea of agentic AI. In this set-up, you don’t do any of the actual coding. Instead you specify everything up front and then have a team of artificial agents execute your plan.

That’s right; it’s a return to waterfall. But that’s not as crazy as it sounds. Waterfall was wasteful because execution was expensive and time-consuming. Now that execution is relatively cheap (you pay a bit of money to line the pockets of the worst people in exchange for literal tokens), you can afford to throw some spaghetti at the wall and see if it sticks.

But you lose the learning. The idea of a cybernetic system like, say, agile development, is that you try something, learn from it, and adjust accordingly. You remember what worked. You remember what didn’t. That’s learning.

Outsourcing execution to machines makes a lot of sense.

I’m not so sure it makes sense to outsource learning.

Monday, March 2nd, 2026

The nature of the job

Large language models help you build the thing faster, which is the primary end goal for your company but only sometimes for you. My primary goal might be to build the thing faster, but it also might be to learn something durably, to enjoy the work, to look forward to Monday.

I don’t like the mental fragility of not fully understanding how my own code works, where AI-generated code is “mine” in that it’s attributed to me in the git blame and I’m its maintainer going forward.