canyonwalker: Roll to hit! (d&d)
I've been writing about a D&D adventure I created and DMed recently, The Collector's Menagerie. I shared in my last blog that a player noted the names/types of rooms in the mansion setting— The Hall, The Library, The Conservatory— and remarked, "This is like a game of Clue!" I also shared how I quipped back, "With monsters lurking in the rooms waiting to kill you, it's like Cursed Clue!"

The Collector's Menagerie, a D&D adventure I created (Feb 2026)

You might wonder, given the setting in a dead guy's mansion and the (twisted) murder mystery element of the story, whether I conceived this adventure as, "It's like Clue, but things in every room are trying to kill you." Frankly, it would be awesome if that's how I came up with it. Alas, I did not. Not this time.

I have, in the past, created memorable adventures that started with the simple idea, "What if X, but also Y?" Or to be more specific, "What if something culturally familiar to us players in the modern day were the setting of a swords-and-sorcery fantasy story?"— then filled it with in-jokes to see how soon the players figure it out. My greatest hits in that vein have been "The heroes traverse a magical Gate to a Renaissance Faire circa 1995 (pre-cell phones) but think it's actual early Renaissance" and "All the traps in the lich's lair form the lyrics to The Eagles' Hotel California." 😆🤣🤘

Yeah, it could have been epic if I started with "Cursed Clue". But I think it is kind of epic even though I only kind of backed into the story being Cursed Clue.

My kernel of an idea for this adventure was simply, "Monsters are in a city mansion". I used AI to flesh out the idea. That got me to the point of it being a variety of exotic monsters (read: magical beasts and aberrations) that had escaped their cages after the owner of the house, a reclusive collector, died recently.

For the mansion itself I already had a map of an actual English city mansion I'd used as a setting in a previous game. I grabbed that to use again here. The names of the rooms on the map reminded me of a mansion map I know well from my childhood....

The board game Clue, 1972 edition

Yes, Clue! And it was because of the maps that I made the connection. The real-life mansion floor plan had rooms marked Hall, Ballroom, Conservatory, Drawing Room, etc. Those reminded me of the rooms in Clue. BTW, the Drawing Room is the Lounge. The terms are basically interchangeable in historic wealthy Western homes, indicating a room full of lavish but comfortable furniture for withdrawing to after a meal to impress guests.

Once I made the connection myself I thought about how to lean into the idea of "This is Cursed Clue". I tried to think of a way to stash treasure items, some analogue of the candlestick, rope, knife, etc., in various rooms that the heroes would need to recover to complete the challenge. Ultimately I punted that because it seemed too complex. Simplicity was one of the things I was after with this adventure idea. But I did put in some ridiculous secret doors connecting rooms on opposite sides of the map. Shh, the players haven't found those yet!


canyonwalker: Roll to hit! (d&d)
I've written recently that I'm getting a D&D adventure started. Sometimes, though, getting started is hard. Like, I have an idea of the theme or setting upon which I want to base the game, but I'm not sure what the story should actually be. Other times I've got the kernel of an idea, and it's elaborating it into a storyline with plot points and multiple encounters that's difficult. I figured generative AI could give me a hand at these challenges.

I used Google Gemini to assist with fleshing out two adventures. In one I described the basic setting and prompted "It should include undead among the monsters" and asked the AI to elaborate the major plot points and encounters of the adventure, and to detail the villain. In the other I described an initial encounter I imagined and asked what it might lead to.

In both cases AI was very helpful. It came up with creative ideas for encounters and summarized them as key points in a storyline. The AI even prompted me to ask it followup questions, like "What might be the villain's motivations?", "What help could a key NPC provide?", and "What are some unique magic items involved in the story?"

While the AI was helpful, it also made mistakes. When I described this to a few friends recently, one jumped in with, "It's important to proofread what AI gives you!" That's true, but it's not the problem I had. We've probably all seen fails reposted online where a student copy-pasted an AI answer, prompts included, revealing they were so lazy in using AI they didn't even read what they copied. But there are failure modes in AI that go well beyond what basic proofreading can catch. These projects demonstrated that using AI requires significant domain knowledge to check its output.

The errors I caught were ones where the AI cited D&D rules and had them wrong. For example, it listed the wrong Challenge Ratings (CRs) for about half the monsters it put in the adventures. CRs are simple data lookups from monster stat blocks. It shouldn't be hard for AI to get them right. But they were wrong— and deadly wrong in at least one case. If I didn't know so many CRs by heart I might have run a recommended encounter that was way too tough for the party.
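Since CRs really are just table lookups, sanity-checking the AI's suggestions is trivial to script. Here's a minimal sketch; the monster names and CR values are illustrative 5e SRD examples, not the adventures' actual rosters:

```python
# Cross-check AI-suggested Challenge Ratings against a hand-entered
# table of stat-block values. Entries below are illustrative 5e SRD
# examples, not the actual monsters from my adventures.
SRD_CR = {
    "owlbear": 3.0,
    "displacer beast": 3.0,
    "mimic": 2.0,
    "rust monster": 0.5,
}

def check_crs(ai_suggestions):
    """Return (monster, ai_cr, correct_cr) for every mismatch."""
    mismatches = []
    for monster, ai_cr in ai_suggestions.items():
        correct = SRD_CR.get(monster.lower())
        if correct is not None and correct != ai_cr:
            mismatches.append((monster, ai_cr, correct))
    return mismatches

# If the AI claims an owlbear is CR 1, that's an encounter-breaking error.
print(check_crs({"Owlbear": 1.0, "Mimic": 2.0}))
```

A DM could grow that dictionary over time from the stat blocks they actually use, which is exactly the "domain knowledge" the AI can't be trusted to supply on its own.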

In another instance, the AI assured me that a party of 4th level characters (a detail I specified) would have key spells like Fireball and Cure Disease to overcome specific challenges. Both of those spells are too high level for 4th level characters to get. When I challenged the AI on how 4th level characters would get such spells, it initially offered a spirited— and completely bullshit— defense of its creation. When I challenged it a second time, it admitted that it made a mistake.
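The spell-level math the AI whiffed on is simple, too. For full casters in 5e, slots of spell level n first appear at class level 2n − 1, so Fireball (a 3rd-level spell) needs at least a 5th-level caster. A minimal sketch of that check, assuming full-caster progression (half-casters and multiclassing are out of scope here):

```python
def min_caster_level(spell_level):
    """Lowest 5e full-caster class level with slots of this spell level.
    Slots of level n first appear at class level 2n - 1:
    level 1 at 1, level 2 at 3, level 3 at 5, and so on."""
    return 2 * spell_level - 1

def can_cast(character_level, spell_level):
    """True if a full caster of this class level has slots for the spell."""
    return character_level >= min_caster_level(spell_level)

# Fireball is a 3rd-level spell: a 4th-level full caster can't cast it.
print(can_cast(4, 3))
print(can_cast(5, 3))
```

Two lines of arithmetic would have caught what took me two rounds of arguing with the chatbot.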

"Okay, now go back and revise the encounters to correct this mistake," I prompted it. And, to its credit, it did! But the problem remains that I had to have significant domain expertise to fact-check what the AI was giving me.

canyonwalker: wiseguy (Default)
2025 was the Year of AI. Content generated by AI started popping up everywhere. I even used it a bit myself. But just a bit, because one thing that was blindingly obvious in the Year of AI— obvious to anyone really paying attention, anyway— is that AI can produce some laughably silly results.

For example, some of my colleagues went big into using AI image generation to illustrate slide presentations a few months ago. I couldn't help but point out, publicly, when a person (in the picture) who was the subject of the story had, say, 3 arms, or when they had a laptop showing our product on its screen while the screen was bent around the wrong way. Maybe that makes me a bad person. But I've always been the one dumbass who, when the emperor strides onto the stage stark-fucking-naked, nudges people next to me and says out loud, "Look, the emperor's wearing no clothes!" And more to the point here, if we don't object to AI slop right now, it's going to become normalized and we're going to be completely inundated with it in 2026.

Anyway, this journal entry isn't supposed to be a screed against AI. I'm writing to share some in-the-know humor about some of the funny results AI image gen gives us.

A few months ago I used Google's Gemini image gen to illustrate panels for a story I wrote on my blog. It's the one I finally finished just yesterday: The Mystery of the Church Up the Hill. One of the images I created was of my father painting the inside of the church he attended decades ago. I wrote a prompt like

A man is painting the walls in a Catholic church. He is in his 30s and is dressed in clothes fashionable in the early 1970s.


And the first result was....

Disco Jesus paints his church. Funny AI rendering fail from Google Gemini. (Oct 2025)

Disco Jesus! 🤣

I literally gave this prompt next:

The man is not Jesus.


To its credit, the image generator came back with a new image that did not have the son of god painting his own church after rising from the dead inside a vintage clothing shop. 🤣

Ultimately there were more things wrong with the pic than just "My dad doesn't look like Jesus", so I prompted the AI to start over. On my second try I used a few more terms to describe the aspects of the scene I thought were most salient. I got the image I used in the story I shared yesterday.

AI rendering of a man painting a church (Google Gemini, Oct 2025)

Was the final image I went with at all like the church my dad painted? No. But it conveyed the parts of the story I thought were appropriate. Including a few key elements of my dad's appearance: age, body shape, hair color, and glasses. One thing I couldn't get right in a handful of prompts was managing to dress my dad like a dork from the early 70s. Gemini kept taking the "early 1970s" prompt as making my dad look like a dork who dressed up to go disco dancing. Though I can see now that Dad would've looked pretty sharp— for a dork— in Chelsea boots and a leather vest!

canyonwalker: Sullivan, a male golden eagle at UC Davis Raptor Center (Golden Eagle)
I began a journey down memory lane yesterday when I wrote a journal entry about how my parents never liked attending the church of their faith that was right in our neighborhood. Instead of a short walk up the hill behind our house to the local church where we might see our own neighbors, we piled in the car and drove 20 minutes each way to a church in the next town over.

As I wrote in that blog, my parents were evasive about why they preferred the one far away. My parents, especially my father, gave only vague non-answers whenever I wondered. After a while I stopped asking.

The truth about the church up the hill came out decades later, not long before my father passed away. He knew he was in his last few months of life. He told me one of his goals then was to square things with relatives who were estranged from him. I wasn't estranged by any stretch of the imagination. I was traveling coast-to-coast every few weeks to visit and support him. During one of our bedside chats he told me the story. Well, not the whole story. He gave me just the one or two missing pieces that allowed me to connect up the puzzle from other things he'd told me over the years and from things I remember from as far back as my own early childhood.

The story goes back to the mid 1970s when my dad lost his job as a store manager in a retail chain.

AI rendering of when a chain of stores closed and everyone lost their jobs (Google Gemini, Oct 2025)

The mid 1970s were a tough time in the US. The country was just coming out of a deep economic recession spurred by the first oil embargo. The recession was probably why his employer folded. And even though the recession was over by the way economists define it, it wasn't over by the way ordinary people might define it. Companies were failing. Those that weren't failing still weren't hiring. The unemployment rate was above 7%. So when my dad's employer shut down and sent everyone to the unemployment line, finding new work wasn't easy. It took my dad months... maybe even a year or more.

By the way, yes, I'm using AI image generation to help illustrate this story. No, I don't have real photos to share from that time. I was too young even to hold a camera then. I mean, I was still filling diapers when this shit went down. And my parents never snapped many photos during my childhood. That always struck me as weird when I was older, because my dad had been a semi-pro photographer when he was in high school and college.

I saw some of his 1960s era work decades later. It was in a box from his mother, who'd just passed away at age 101. It looked good. He could have made it a career. Why did he put his cameras down and then not pick up another one for, like, 40 years? And also, his mom kept copies of his vintage work as mementos; he never did. I might've asked him "why?" about either of those facts, but as I already explained earlier in this story, my dad was famously loath to answer such questions. In that respect he was like a perpetual pouty teenager giving guttural one-word answers.

Anyway, AI image generation. I'm using it here because I think telling the story with some pictures improves it, even if the pics are not authentic. For one, having pictures beats walls of text. Two, I've iterated on the prompts for these pictures to have them reflect, accurately, particular elements of the story. Of course it's impossible to have them accurately reflect everything, even the spotty parts I remember in snapshot memories from my early childhood. I've got a funny story to share about some of the prompting I had to do while creating an image I'll use later in the story. I'll share that anecdote when we get there.

To be continued....

canyonwalker: wiseguy (Default)
It was only a matter of time. For a few years we had the scourge of robo-spam calls. You know, spam campaigns where a recorded voice tries to trick you into something, like your relatives in China are in trouble with the central government, and asks you to stay on the line for more instructions. (Yeah, that one was easy to detect— and ignore— because it was in Chinese and I have no Chinese relatives.) But now, because it's 2025 and AI is popping up everywhere, we have AI robo-spam. Spammers aim to increase their hit rate— the chance that you interact with their pitch— by making it seem almost like a real person is talking to you.

I answered my first call like this last night.

Note, I don't answer many spam calls anymore. They've become easy to spot, as the Caller ID comes up "Potential Spam" on my smartphone. I chose to answer this one on the theory that once you answer, they stop trying. (I base this on knowledge of how auto-dialers work, from my experience as a telemarketer 😨 many years ago. The system will keep trying your number periodically until it logs a live-person connection.) Plus, sometimes these calls are not from spammers, per se, but from organizations I have a relationship with.
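The retry behavior I'm describing can be sketched in a few lines. This is a toy model of auto-dialer logic as I remember it from those telemarketing days, not any real dialer's code:

```python
import random

def dial_until_live(number, max_attempts=10, p_answer=0.3):
    """Toy model of an auto-dialer: retry a number on a schedule
    until a live-person connection is logged, then retire it."""
    for attempt in range(1, max_attempts + 1):
        # Stand-in for actually placing the call and reaching a person.
        answered = random.random() < p_answer
        if answered:
            return attempt  # live connection logged; number comes off the list
    return None  # never answered; the dialer keeps the number in rotation

random.seed(1)
print(dial_until_live("555-0100"))
```

The key point is the return behavior: answering once logs the connection and (in theory) stops the retries, which is exactly why I picked up.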

The call began innocuously enough, with seemingly a live person on the other end.

"HI, IS THIS <YOUR NAME>?"

"Yeah, this is <first name>."

"<long pause> THIS IS ERICA FROM CREDIT-SOMETHING. HOW ARE YOU?"

The voice seemed a bit off. It was natural sounding but boomy. And it was too perfect. That triggered my suspicions. Most spam callers nowadays frankly struggle with English— because they're low-skill workers in foreign countries where the cost of labor is lower than in countries where English is the primary language.

"What's this about?" I asked, aiming to short-circuit the obvious cold-call.

"<long pause> GREAT! I'M CALLING ABOUT—"

The second long pause and the fact that the caller responded as if I'd answered her question ("How are you?") instead of countering with my own question told me I was likely speaking to a robot. And by "robot" I mean an AI powered system. Though obviously not a great one.

"Are you a robot?" I asked, interrupting.

"<long pause> I AM AN INTERACTIVE VOICE ASSISTANT!"

"Ergo, you're a robot. <click>"

canyonwalker: wiseguy (Default)
There's a water leak in our condo complex. These happen frequently with the landscape irrigation system; squirrels and other critters chew the half-buried plastic pipes. This leak seemed a bit more persistent than a landscaping pipe, though. Water was leaking steadily, not just for the 15 or 30 minutes a day that the irrigation system runs. Concern about the problem led to a robust discussion in our neighborhood email forum.

"Supergirl has looked at the water leak", the HOA president assured us.

That was certainly an autocorrect mistake. 🤣 Our landscaper's name is somewhat similar, at least in terms of how autocorrect works, to "Supergirl". But I couldn't resist picturing this...

Supergirl the plumber - generated by Gemini AI (Jun 2025)

...with the help of Google's Gemini AI.

That's right, AI. The thing that's going to take all of our jobs in a few years. We'll be sitting at home, surviving off our unemployment checks— at least for the 13 weeks those last— but we'll be able to entertain ourselves by prompting AI to draw pictures making light of our woes!

I made that first picture with a simple prompt like, "Draw a comic book style picture of Supergirl as a plumber." I then refined it a bit to include cues about where the leak is in our neighborhood and got this:

Supergirl the plumber - generated by Gemini AI (Jun 2025)


canyonwalker: wiseguy (Default)
At my sales training seminar the past few days I had a number of conversations with colleagues about AI. These convos spanned topics from "What are we doing [in our product] to align with industry demand for AI powered features?" to "How can we use AI in our jobs in sales to sell more effectively?" to "Is our job [in sales] even going to exist in 5 years?" There's so much I could write about AI even within these topics, let alone the broader discussions about AI. For this, my first journal entry about AI, I'll start with the latter question— which, to state it in more dire terms, is, Is AI coming for my job?

I use this alarmist language to make a point: This is what people are worrying about more and more. And this is the type of language that's becoming increasingly common as people express their thoughts/concerns.

I don't think the future is as bad as all that. I think we're at a point in the technology hype curve where there's a lot of uncertainty. And I want to be careful to say that I really can't predict the future of AI, even 5 years out.

Why 5 years? Consider how far AI has come in 5 years. 5 years ago AI was more science fiction than science fact.

Three years ago AI was full of hype but still short of reality. While many people in software development, my field, were buzzing about how AI would give us "10x" improvements and pouring money into it, a few of us were pointing out that there was currently no there there and such investment was like the proverbial lemmings chasing each other over the cliff.

Two years ago in software development we started to see the actual value of AI appear. AI could write code— but generally simple code, and it needed more testing and definitely review by a skilled person. The new wisdom became, "AI makes programmers 30% more efficient." That's a far cry from the 1000% gain people were still frothing about 12 months earlier!

Today, in software, we're seeing that 30% level of gain take hold more broadly. Some people react to that figure by asking "Does that mean layoffs of 30% among software developers?" I think that viewpoint fails to appreciate what's happened across the history of technological progress.

Yes, new technology has always reduced the number of old jobs that were doing things the old way. In the industrial revolution factory automation reduced the number of jobs for everything from sawing wood to stitching clothes to digging for coal. A simplistic view of it is, "Machines replaced people." But while machines replaced jobs where people were doing rote, manual work, the economy was not a zero-sum game. Overall the economy grew because of efficiency, and new, higher value jobs were created elsewhere.

The same lesson applies with the AI transformation. AI will replace people who are doing lower level, more rote jobs. But economic gains will mean more higher level jobs can be created elsewhere. For those who are looking at it as zero-sum, though, and wondering, "Will AI take my job?" the answer is really, "People who know how to use AI to be more productive will replace those who don't."


Page generated Feb. 23rd, 2026 11:32 am
Powered by Dreamwidth Studios