Talk Your Game Into Existence

By Eric Florenzano • October 10, 2025

I’m dictating this into a Samsung S23 Ultra—last year’s flagship, still a pocket powerhouse—just to see how the Android build of my favorite tool feels against the polished macOS version I’ve lived in for months. The experiment is minor, but it hints at something bigger: the hardware gap is shrinking, and the real revolution is happening in how we talk to our machines.

A couple of years back I had an idea I kept to myself because it sounded like sci-fi. Today it feels inevitable. Here it is in one breath:

What if the English paragraph you jot down becomes the only file you ever commit?
What if every C# script, 3-D mesh, or UI texture is just a deterministic echo of those plain words?

From Prompt to Permanence

Right now the AI loop is familiar: type a prompt, an LLM spits out C#, you eyeball the code and check it into Git. The source of truth is the generated code.

I think we’re about to flip that chain. Instead of saving the C#, we’ll save the English—tight, living descriptions bolted right onto Unity GameObjects. At build time (or even at runtime, using a fixed seed) we let the model expand those sentences into identical, reproducible C#. Change the seed, get a fresh gameplay mutation. Layer on extra paragraphs, ship a mod. The natural-language instructions become the canonical artifact; everything downstream is disposable.
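
Sketched in Unity terms, the shape of it might look something like this. The NaturalLanguageBehaviour component, the DescriptionCompiler menu item, and the LlmCodegen client are all names I’m inventing for illustration—there’s no real package or API here, only the outline of one:

```csharp
using UnityEngine;

// The English paragraph is the asset you commit. This component just holds
// the prose plus the knobs that make the expansion reproducible.
public class NaturalLanguageBehaviour : MonoBehaviour
{
    [TextArea(4, 12)]
    public string description;                      // the canonical artifact

    public int seed = 42;                           // change it, get a fresh mutation
    public string modelVersion = "pinned-2025-10";  // hypothetical model pin
}

// Stand-in for whatever model client you actually wire up; a stub so the
// sketch compiles. The real version would be a deterministic call to a
// pinned model at temperature zero.
public static class LlmCodegen
{
    public static string Expand(string description, int seed, float temperature)
    {
        return $"// generated from: {description} (seed {seed}, temp {temperature})";
    }
}

#if UNITY_EDITOR
// Build-time expansion, sketched. The generated file is disposable output,
// never the thing you edit.
public static class DescriptionCompiler
{
    [UnityEditor.MenuItem("Build/Expand Descriptions")]
    public static void ExpandAll()
    {
        foreach (var nlb in Object.FindObjectsOfType<NaturalLanguageBehaviour>())
        {
            // Same prose + same seed + temperature 0 => same C# on every rebuild.
            string source = LlmCodegen.Expand(nlb.description, nlb.seed, 0f);
            System.IO.File.WriteAllText($"Assets/Generated/{nlb.name}Behavior.cs", source);
        }
        UnityEditor.AssetDatabase.Refresh();
    }
}
#endif
```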

Why Games Are the Perfect Sandbox

Unity’s beauty is its chaos. Drop an enemy into an empty scene and it’s just a mesh—no brain, no bones. You grow it: arms, legs, torso, each wired with homemade behaviors. An ArmBehavior script can, if you’re feeling reckless, reach over and twist the torso’s scale. It’s wild, flexible, and after a year of development you’re drowning in dozens of interlocking scripts.
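
If you’ve written Unity code, you know the kind of script I mean. Here’s a toy version of that reckless cross-reaching—the "Torso" sibling and the wobble math are made up, but the pattern is real:

```csharp
using UnityEngine;

// The hand-rolled status quo: one of dozens of interlocking scripts.
public class ArmBehavior : MonoBehaviour
{
    private Transform torso;

    void Start()
    {
        // Reach across the hierarchy for a sibling object. Nothing stops you.
        torso = transform.parent.Find("Torso");
    }

    void Update()
    {
        // Recklessly twist the torso's scale from over here in the arm.
        if (torso != null)
        {
            torso.localScale = Vector3.one * (1f + 0.05f * Mathf.Sin(Time.time));
        }
    }
}
```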

Now imagine never writing ArmBehavior.cs again. You’d write:

“Left arm swings with a relaxed 45-degree arc while idle; when the player approaches, it lunges forward, triggering a damage event on contact.”

That paragraph lives on the arm’s GameObject. At build time the LLM turns it into the exact C# you need—every frame, every condition, every callback—byte-for-byte identical across rebuilds, as long as you pin the model and the seed and keep temperature at zero.
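
For concreteness, here is one plausible expansion of that paragraph—not the expansion. The Player tag, the lunge speed, the trigger collider, and the UnityEvent wiring are details the model would have to choose and then pin:

```csharp
using UnityEngine;
using UnityEngine.Events;

// A plausible build-time expansion of the arm's description.
public class LeftArmBehavior : MonoBehaviour
{
    public float idleArc = 45f;         // "relaxed 45-degree arc while idle"
    public float idleSpeed = 1.5f;
    public float approachRange = 3f;    // "when the player approaches"
    public float lungeSpeed = 8f;
    public UnityEvent onDamage;         // "triggering a damage event on contact"

    private Transform player;

    void Start()
    {
        // Assumes the player object is tagged "Player".
        var playerObject = GameObject.FindWithTag("Player");
        if (playerObject != null) player = playerObject.transform;
    }

    void Update()
    {
        bool playerClose = player != null &&
            Vector3.Distance(transform.position, player.position) < approachRange;

        if (playerClose)
        {
            // Lunge forward toward the player.
            Vector3 toward = (player.position - transform.position).normalized;
            transform.position += toward * lungeSpeed * Time.deltaTime;
        }
        else
        {
            // Relaxed idle swing.
            float angle = Mathf.Sin(Time.time * idleSpeed) * idleArc;
            transform.localRotation = Quaternion.Euler(0f, 0f, angle);
        }
    }

    void OnTriggerEnter(Collider other)
    {
        // Requires a trigger collider on the arm.
        if (other.CompareTag("Player")) onDamage.Invoke();
    }
}
```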

The Cost Collapse

Traditional scope: a team of programmers, artists, designers, version-control wranglers, multi-year burn. Proposed scope: one human with a clear design voice and a rack of GPUs. The model writes code, diffuses textures, maybe even extrudes 3-D models. You iterate in minutes, not sprints. Quality bars skyrocket; more people can enter the industry; weird, wonderful ideas surface because the price of failure drops to near zero.

Beyond Games

Picture a phone that boots to a blank home screen. You say, “I need a tiny app that tracks how many sparkling-water cans I crack open today, with a silly animation when I hit ten.” The agent drops the micro-app onto your handset. You tweak it aloud; it rebuilds. When tomorrow’s smarter model lands, it silently upgrades your applet to match your intent more precisely.

Carry the same concept to AR glasses: the world stays real while software—personal, ephemeral, conversational—augments only what you want, when you want it.

I’m impatient for that future. Not because coding is evil, but because the world deserves more voices building weirder, warmer things. Once description equals creation, technology finally becomes a two-way conversation instead of a product we unwrap.

So here I am, dictating the last sentence of this piece into last year’s phone, watching the cursor blink like it’s waiting for the next idea. I can’t wait to see what we all say next.