Thinking About an "AI Statement"

I’ve noticed that many artists, designers, and game-space contributors have posted “AI Use Statements” and “AI Pledges” on their profiles, bios, sites, and distribution channels.

Games Workshop has come out strictly against the use of generative AI in its Warhammer products. Wizards of the Coast, on the other hand, has been increasingly relying upon AI for content in Dungeons & Dragons.

Cutting straight to the point, here’s my position on the matter:

All works to which I claim authorship will be my own creations, either alone or in conjunction with credited persons.
It is my sincere intention not to collaborate on any creative or social projects that are not similarly aligned.

I’m not interested in policing other people’s use of AI tools or in shaming them for doing so. This, like ever so many highly-polarized subjects, is a game that can only be won by playing in precisely the way that best works for individuals making informed choices in support of their own best interests. Or, you know, not playing at all. That works, too.

There has been a lot of sound and fury over the last three years about AI tools and how they’re going to, somehow, run us all out of work while also making infinite piles of gold for the dragons who hoard them and the lickspittle wizards most in service to the aforementioned dragons. That’s facially nonsensical, but I have yet in my life to hold a conversation with anyone who can clearly articulate what money is and where it comes from. DM me to take a crack at it - we might have a candidate for a new site badge!

Personally, I don’t believe it is possible for art to exist without an intention and an intelligence creating it. Humans can make art; so can animals. Machines have yet to exhibit any such qualities, regardless of how clever they might seem putting words together. As a species, humans have only ever succeeded at building one particular type of computer: a “Turing-complete, universal von Neumann” general-purpose machine capable of running any program ever written. Yep, all “incompatible” software claims are due to other issues entirely, and John Deere tractors can run Doom, because every computer ever built is operationally congruent. As Cory Doctorow has quipped, the idea that “if we keep breeding horses to run faster and faster, one of them will give birth to a locomotive” is a preposterous prospect.

Machines are capable, though, of following instructions, of producing other machines, and of producing unexpected systems. The fundamental kernel at the bottom of that capacity is “follow instructions” - though those instructions aren’t necessarily transparent or apparent to the most-proximate operator.

Professionally, I work in tech; I work in code. Generative AI has some promise in that space - not as much as the people buying and selling it think it has, but some. Every scrap of code that I’ve ever written in my career has been fed into a machine and either compiled or interpreted into instructions for a machine to follow. My inputs are tightly controlled, with narrowly-defined uses, exacting grammar, and extraordinary sensitivity to precision. I chose to make a living in this space, and let me tell you one thing unequivocally: it is not fun, it is not delightful, and it is not relaxing.

I’m good at it. I know how to transform ideas into processes, how to chain processes together into systems, and how to get systems to produce what I want. It is work that energizes me when things go well, but the lows are just as low as the highs are high. Call it the Law of Conservation of Motivators: enthusiasm and dejection are neither created nor destroyed, all conditions are temporary, and the ultimate emotional state is equilibrium. Humans would do best to remember the wins and forget the losses.

Code is a natural choice for Gen-AI: throw all the body-hits to the machine, ride the crest of the Big Brain design waves. It doesn’t actually work that way, not at all, but we like to employ our selective memories and entertain the notion that it does. The main problem is that people get good at making things (and fixing them) by doing it. A novelist’s first book is never as good as their third; a junior developer becomes a senior one by solving problems, not by flipping pages on calendars.

“How do you get to Carnegie Hall?” - “PRACTICE!” We don’t get to practice much when we turn our craft over to a word-assembler. And that’s the root of my AI skepticism: without the resistance of the medium, we never become proficient with the medium.

So much for the skepticism; the root of my antagonism is this: technologists, business “leaders”, politicians, and people generally seem to have thrown up their hands in despair over the intractability of developing human intelligence. Actual human knowledge is hard. It takes just as long in 2026 to teach a human being arithmetic as it did in 1926, 1826, and 1226. We can’t reliably measure human intelligence, not the way we can tally up the number of words known by a large language model or the density of its statistical inference engine. Humans are hard, dammit, and they have some peculiar reactions to being expected to follow instructions.

So… yeah. That’s what I wanted to say. Generative AI isn’t a product for me in my personal life, but I think I have a grasp of what these things mean. Barbers don’t exist to make hair shorter, social groups shouldn’t exist to convince us to buy things, and writers don’t exist to sort words in funny ways.