
John August

Geek Alert

New Florida, an Alien RPG scenario

April 8, 2025 Games, Geek Alert, Resources

book cover for Alien RPG

Over the weekend, friends and I played our first game of Free League’s Alien The Roleplaying Game. We had a blast. The game mechanics are fun, built around a Stress mechanic that basically ensures characters will freak out at some point.
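
For the curious, the dice work roughly like this (simplified from the published rules): you roll a pool of six-sided dice for your attribute and skill, plus one extra die per point of Stress. Sixes are successes, but a 1 on any Stress die interrupts the action and forces a panic roll. Here’s a quick sketch in Swift, with illustrative names of my own:

```swift
import Foundation

// Simplified sketch of the Alien RPG stress mechanic; names are mine,
// and the details are pared down from the published rules.
func rollAction(baseDice: Int, stress: Int) {
    let base = (0..<baseDice).map { _ in Int.random(in: 1...6) }
    let stressDice = (0..<stress).map { _ in Int.random(in: 1...6) }

    // Every 6 in the combined pool is a success.
    let successes = (base + stressDice).filter { $0 == 6 }.count
    print("Successes: \(successes)")

    // A 1 on any stress die interrupts the action and forces a panic
    // roll (d6 + current stress); higher totals mean worse freak-outs.
    if stressDice.contains(1) {
        let panicRoll = Int.random(in: 1...6) + stress
        print("Panic roll: \(panicRoll)")
    }
}

rollAction(baseDice: 5, stress: 3)
```

The elegant part: more Stress means more dice (and better odds of success), but also more chances to panic.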

Alien RPG can be played in campaign mode, where characters progress over many sessions like classic D&D, but it’s probably best suited to “cinematic mode,” which captures the experience of the movies. Most characters will die. That’s how we played it.

The rulebook comes with a scenario — a one-shot called Hadley’s Hope — but I was eager to design my own. The result is New Florida, which finds a crew on a routine, short-haul flight from a research station to Torin Prime.

Spoiler: things go very wrong.

The scenario is designed for 3-5 player characters, and should take about three hours to play. It’s heavy on role-playing, with appropriately terrifying bursts of combat.

The characters are pre-generated, and include:

Vivian Rook — Company executive. Sharp, polished, and dangerous. She rose through the Weyland-Yutani ranks with a mix of ruthlessness and charm.

Sgt. Elias Kane — Marine, Rook’s bodyguard. Professional to a fault — except with Vivian. Loyal, physically imposing, emotionally compartmentalized.

Jace Calder — Pilot. Roguish but competent. Thinks quick, acts quicker.

Milo “Gramps” Vech — Engineer. A long-timer nearing retirement. Grizzled and grease-stained, he knows this ship better than anyone.

Corporal Dex Marrow — Marine. Sardonic, weary, smokes when he shouldn’t. Doesn’t trust many, but once he gives his trust, it’s for life. Good friend of Kane’s.

While it’s not a murder mystery, players know details about their characters (and crucial story points) that the others don’t. I gave each player a backstory sheet before we started playing.

I’ve packaged up all the maps, tokens, and other resources I used in a .zip file you can download here: New Florida – Alien RPG Scenario

If you play New Florida at your own table, please let me know!

More on AI environmental costs

January 23, 2025 Follow Up, Geek Alert

In my earlier explainer about recent AI developments, I linked out to a Wikipedia entry that ran through some of the environmental impacts of AI models, mostly in terms of energy and water usage.

Since that post, Andy Masley came out with a much more useful comparison of AI costs. It’s worth reading the whole thing, but the short version is these costs seem much less massive when you compare them to everyday things like hamburgers and leaking pipes.

Chart comparing AI water usage, showing that a ChatGPT query is incredibly small compared to a hamburger, and that all daily ChatGPT usage is tiny compared to daily leaking pipes in the US.

Low costs are not zero costs. But if you’re making choices as an individual, there are many more effective steps you can take to reduce your carbon footprint.

Bar chart showing that 50,000 ChatGPT requests is minute compared with the impact of switching to LEDs or flying less.

Masley’s article is also a good reminder that big numbers (20,000 households!) and tangible metaphors (a bottle of water) can often lead to framing effects and vividness bias.

Something’s Coming

January 6, 2025 Film Industry, Geek Alert, General, Psych 101, Tools

Last week, Dwarkesh Patel put words to an uneasy feeling that resonated with me:

I think we’re at what late February 2020 was for Covid, but for AI.

If you can remember back to February 2020, both the media and the general public were still in normal-times mode, discussing Trump’s impeachment, the Democratic primaries and Harvey Weinstein. Epidemiologists recognized that something big and potentially unprecedented was coming, but the news hadn’t yet broken through.

One of the first front-page articles I can find in the NY Times about Covid is from February 22nd, 2020.

image of NY Times front page, with covid story on left edge

Just three weeks later, markets had crashed and schools were closing. The world was upended. Covid had become the context for everything.

Patel foresees a similar pattern with AI:

Every single world leader, every single CEO, every single institution, members of the general public are going to realize pretty soon that the main thing we as a world are dealing with is Covid, or in this case, AI.

By “pretty soon,” I don’t think Patel believes we’re three weeks away from global upheaval. But the timeframes are much shorter than commonly believed — and getting shorter month by month.

Wait, what? And why?

This post is meant to be an explainer for friends and readers who haven’t been paying close attention to what’s been happening in AI. Which is okay! Technology is full of hype and bullshit, which most people should happily ignore.

We’ve seen countless examples of Next Big Things ultimately revealed to be nothing burgers. Many of the promises and perils of AI could meet a similar fate. Patel himself is putting together a media venture focused on AI, so of course he’s going to frame the issue as existential. Wherever there’s billions of dollars being spent, there’s hype and hyperbole, predictions and polemics.

Still — much like with epidemiologists and Covid in February 2020, the folks who deal with AI for a living are pretty sure something big is coming, and sooner than expected.

Something big doesn’t necessarily mean catastrophic; the Covid analogy only goes so far. Indeed, some researchers see AI ushering in a golden age of scientific enlightenment and economic bounty. Others are more pessimistic — realistic, I’d say — warning that we’re in for a bumpy and unpredictable ride, one that’s going to be playing out in a lot of upcoming headlines.

The sky isn’t falling — but it’s worth directing your gaze upwards.

The world of tomorrow, today

Science fiction is becoming science fact much faster than almost anyone anticipated. One way to track this is to ask interested parties how many years it will be before we have artificial general intelligence (AGI) capable of doing most human tasks. In 2020, the average estimate was around 50 years. By the end of 2023, it was seven.

chart showing decline from 30 years to 8 years, with dashed lines indicating further declines

Over the past few months, a common prediction has become three years. That’s the end of 2027. Exactly how much AI progress we’ll see by then has become the subject of a recent bet. Of the ten evaluation criteria for the bet, one hits particularly close to home for me:

8) With little or no human involvement, [AI will be able to] write Oscar-caliber screenplays.

As a professional screenwriter and Academy voter, I can’t give you precise delimiters for “Oscar-caliber” versus “pretty good” screenplays. But the larger point is that AI should be able to generate text that feels original, compelling and emotionally honest, both beat-by-beat and over the course of 120 satisfying pages. Very few humans can do that, so will an AI be able to?

A lot of researchers say yes, and by the end of 2027.

I’m skeptical — but that may be a combination of ego preservation and goalpost-moving. It’s not art without struggle, et cetera.

The fact that we’ve moved from the theoretical (“Could AI generate a plausible screenplay?”) to the practical (“Should an AI-generated screenplay be eligible for an Oscar?”) in two years is indicative of just how fast things are moving.

So what happened? Basically, AI got smarter much faster than expected.

Warp speed

Some of the acceleration is easy to notice. When large language models (LLMs) like ChatGPT debuted at the end of 2022, they felt like a novelty. They generated text and images, but nothing particularly useful, and they frequently “hallucinated,” a polite way of saying made shit up.

If you shrugged and moved on, I get it.

The quality of LLMs’ output has improved a lot over the past two years, to the point that real professionals are using them daily. Even in their current state — even if they never get any better — LLMs can disrupt a lot of work, for better and for worse.

An example: Over the holidays, I built two little iOS apps using Cursor, which generates code from plain text using an LLM.

Here’s what I told it as I was starting one app:

I’ll be attaching screen shots to show you what I’m describing.

  1. Main screen is the starting screen upon launching the app. There will be a background image, but you can ignore that for now. There are three buttons: New Game, How to Play, and Credits.

  2. How to Play is reached through the How to Play button on the main screen. The text for that scrolling view is the file in the project how-to-play.txt.

  3. New Game screen is reached through the New Game button. It has two pop-up lists: the first chooses from 3 to 20, the second from 1 to 10. Clicking Start takes you into the game. (In the game view, the top-right field should show players times rounds, so if you had 3 players and five rounds, it would start with 1/15, then 2/15.)

  4. The Setup screen is linked to from the game screen, if they need to make adjustments or restart/quit the game.

Within seconds, it had generated an app I could build and run in Xcode. It’s now installed on my phone. It’s not a commercial app anyone will ever buy, but if it were, this would be a decent prototype.
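
To give a flavor of what comes back, here’s a rough sketch of the kind of SwiftUI code a prompt like that produces — my own reconstruction for illustration, not the actual generated code, with all names (NewGameScreen, GameView, and so on) invented:

```swift
import SwiftUI

// Hypothetical reconstruction of the kind of screens Cursor generates;
// not the actual code from the project. All names are illustrative.
struct NewGameScreen: View {
    @State private var players = 3   // first pop-up list, 3 to 20
    @State private var rounds = 1    // second pop-up list, 1 to 10

    var body: some View {
        Form {
            Picker("Players", selection: $players) {
                ForEach(3...20, id: \.self) { Text("\($0)").tag($0) }
            }
            Picker("Rounds", selection: $rounds) {
                ForEach(1...10, id: \.self) { Text("\($0)").tag($0) }
            }
            // The game view's top-right counter runs up to players × rounds:
            // 3 players and 5 rounds gives 1/15, 2/15, and so on.
            NavigationLink("Start") {
                GameView(totalTurns: players * rounds)
            }
        }
        .navigationTitle("New Game")
    }
}

struct GameView: View {
    let totalTurns: Int
    @State private var turn = 1

    var body: some View {
        VStack {
            // Top-right field showing the current turn out of the total.
            Text("\(turn)/\(totalTurns)")
                .frame(maxWidth: .infinity, alignment: .trailing)
            Spacer()
            Button("Next Turn") { if turn < totalTurns { turn += 1 } }
        }
        .padding()
    }
}
```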

Using Cursor feels like magic. I’m barely a programmer, but in the hands of someone who knows what they’re doing, it’s easy to imagine technology like this tripling their productivity. ((Google’s CEO says that more than 25% of their code is already being generated by AI.)) That’s great for the software engineer — unless the company paying them decides they don’t need triple the productivity and will instead just hire one-third the engineers.

The same calculation can be applied to nearly any industry involving knowledge work. If your job can be made more productive by AI, your position is potentially in jeopardy.

That LLMs are getting better at doing actually useful things is notable, but that’s not the main reason timelines are shortening.

Let’s see how clever you really are

To measure how powerful a given AI system is, you need to establish some benchmarks. Existing LLMs easily pass the SAT, the GRE, and most professional certification exams. So researchers must come up with harder and harder questions, ones that won’t be in the model’s training set.

No matter how high you set the bar, the newest systems keep jumping over it. Month after month, each new model does a little better. Then, right before the holidays, OpenAI announced that its o3 system made a huge and unexpected leap:

chart showing o3 performance and cost, both vastly higher

With LLMs like ChatGPT or Claude, we’re used to getting fast and cheap answers. They spit out text or images in seconds. In contrast, o3 spends considerably more time (and computing power) planning and assessing. It’s a significant paradigm shift. The o3 approach is slower and more expensive — potentially thousands of dollars per query versus mere pennies — but the results for certain types of questions are dramatically better. For billion-dollar companies, it’s worth it.

Systems like these are particularly good at solving difficult math and computer science problems. And since AI systems themselves are based on math and computer science, today’s model will help build the next generation. This virtuous cycle is a significant reason the timelines keep getting shorter. AI is getting more powerful because AI is getting more powerful.

When and why this will become the major story

In 2020, Covid wasn’t on the front page of the NY Times until its economic and societal impacts were unmistakable. The stock market tanked; hospitals were filling up. Covid became impossible to ignore. Patel’s prediction is the same thing will happen with AI. I agree.

I can imagine many scenarios bringing AI to the front page, none of which involve a robot uprising.

Here are a few topics I expect we’ll see in the headlines over the next three years.

  • Global tensions. As with nuclear technology during the Cold War, big nations worry about falling behind. China faces caps on the number of high-performance AI chips it can import. Those chips it needs? They’re made in Taiwan. Gulp.

  • Espionage. Corporations spend billions training their models. ((DeepSeek, a Chinese firm, apparently trained their latest LLM for just $6 million, an impressive feat if true.)) Those model weights are incredibly valuable, both to competitors and bad actors.

  • Alignment. This is a term of art for “making sure the AI doesn’t kill us,” and is a major source of concern for professionals working in the field. How do you teach AI to act responsibly, and how do you know it’s not just faking it? AI safety is currently the responsibility of corporations racing to be the first to market. Not ideal!

  • Nationalizing AI. For all three of the reasons above, a nation (say, the U.S.) might decide that it’s a security risk to allow such powerful technology to be controlled by anyone but the government.

  • Spectacular bankruptcy. Several of these companies have massive valuations and questionable governance. It seems likely one or more will fail, which will lead to questions about the worth of the whole AI industry.

  • The economy. The stock market could skyrocket — or tank. Many economists believe AI will lead to productivity gains that will increase GDP, but also, people work jobs to earn money and buy things? That seems important.

  • Labor unrest. Unemployment is one thing, but what happens when entire professions are no longer viable? What’s the point in retraining for a different job if AI could do that one too?

  • Breakthroughs in science and medicine. Once you have one AI as smart as a Nobel prize winner, you can spin up one million of them to work in parallel. New drugs? Miracle cures? Revolutionary technology, like fusion power and quantum computing? Everything seems possible.

  • Environmental impact (bad). When you see articles about the carbon footprint of LLMs, they’re talking about the initial training stage. That’s the energy-intensive step, but it’s also way smaller than you may be expecting? After that, the carbon impact of each individual query is negligible, on the order of watching a YouTube video. That said, the techniques powering systems like o3 involve using more power to deliver answers, which is why you see Microsoft and others talking about recommissioning nuclear plants. Also, e-waste! All those outdated chips need to be recycled.

  • Environmental impact (good). AI systems excel at science, engineering, and anything involving patterns. Last month, Google’s DeepMind pushed weather forecasting from 10 days to 15 days. Work like this could help us deal with the effects of climate change, by improving crop yields and the energy grid, for example.

So how freaked out should you be?

What is an ordinary person supposed to do with the knowledge that the world could suddenly change?

My best advice is to hold onto your assumptions about the future loosely. Make plans. Live your life. Pay attention to what’s happening, but don’t let it dominate your decision-making. Don’t let uncertainty paralyze you.

A healthy dose of skepticism is warranted. But denial isn’t. I still hear smart colleagues dismissing AI as fancy autocomplete. Sure, fine — but if it can autocomplete a diagnosis more accurately than a trained doctor, we should pay attention.

It’s reasonable to assume that 2027 will look a lot like 2024. We’ll still have politics and memes and misbehaving celebrities. It’ll be different from today in ways we can’t fully predict. The future, as always, will remain confusing, confounding and unevenly distributed.

Just like the actual pandemic wasn’t quite Contagion or Outbreak, the arrival of stronger AI won’t closely resemble Her or The Terminator or Leave the World Behind. Rather, it’ll be its own movie of some unspecified genre.

Which hopefully won’t be written by an AI. We’ll see.

Thanks to Drew, Nima and other friends for reading an early draft of this post.

A few thoughts on Sora

February 16, 2024 Film Industry, Geek Alert, WGA

Yesterday, OpenAI announced [Sora](https://openai.com/sora), a new product that generates realistic video from text prompts. ((Sora is a great name, btw. It doesn’t mean anything, and doesn’t have any specific connotation, yet feels like something that should exist.)) The examples are remarkable.

A TV writer friend texted me to ask “is it time to be petrified?”

I wrote back:

> I don’t think you need to be petrified. It’s very impressive at creating video in a way that’s like how Dall-E does images. A huge achievement. For pre-viz? Mood reels? Incredible. We’ll see stuff coming out of it used in commercials first.

> For longer, narrative stuff, there’s a real challenge moving from text generation (gpt-4 putting together something that looks like a script) to “filming” that script with these tools to resemble anything like our movies and television.

> Writers, directors, actors and crew have a sense of why they’re doing what they’re doing, and what makes sense in this fictitious reality they’re creating. I don’t think you can do that without consciousness, without self-awareness, and if/when AI gets there, stuff like Sora will be the least of our concerns.

With a night to sleep on it, I think there are a few larger, more immediate concerns. Writers (and humans in general) should be aware of but not petrified by some of the implications of this technology beyond the obvious ones like deepfakes and disinformation.

1. **Video as input.** Like image generators, this technology can work off of a text prompt. But you can also feed it video and have it change things. Do you want *A Few Good Men*, but with Muppets? Done. Need to [replace Kevin Spacey](https://www.theguardian.com/film/2018/jan/05/removing-kevin-spacey-from-movie-was-a-business-decision-says-ridley-scott-all-the-money-in-the-world) in a movie? No need to reshoot anything. Just let Sora do it.

2. **Remake vs. refresh.** Similarly, any existing film or television episode could be “redone” with this technology. In some cases, that could mean a restoration or visual effects refresh, like George Lucas did with Star Wars. Or it could be what we’d consider a remake, where the original writer gets paid. What’s the difference between a refresh and a remake, and who decides?

3. **Animation vs. live action.** How do we define the video material that comes out of Sora? It can look like live action, but wasn’t filmed with cameras. It can look like animation, but it didn’t come out of an animation process. This matters because while the WGA represents writers of both live action and animation, studios are not currently required to use WGA writers in animation. **We can’t let this technology be used as an end-run around WGA (and other guild) jurisdiction.**

4. **Reality engines.** In a [second paper](https://openai.com/research/video-generation-models-as-world-simulators), OpenAI notes that Sora could point to “general purpose simulators of the physical world.” The implications go far beyond any disruptive effects on Hollywood, and are worth a closer look.

It seems like a long way to go from videos of cute paper craft turtles to The Matrix, but it’s worth taking the progress they’ve made here seriously. In generating video, Sora does a few things that are really difficult, and resemble human developmental milestones.

Like all models, Sora is predictive, making guesses about what just happened and what happens next. But it feels different because it’s doing this in a 3D space that largely tracks with our lived experience. It remembers objects, even if they’re not on screen at the moment, and recognizes interactions between objects, such as paintbrushes leaving marks on the canvas. ((Not to dive too deeply into theories of human consciousness, but the ability to internally model reality and predict things feels like table stakes.))

Sora makes mistakes, but the results are surprisingly good for a system that wasn’t explicitly trained to do anything other than generate video. Those capabilities could be used to do other things. In a jargon-heavy paragraph, OpenAI notes:

> Sora is also able to simulate artificial processes — one example is video games. Sora can simultaneously control the player in Minecraft with a basic policy while also rendering the world and its dynamics in high fidelity. These capabilities can be elicited zero-shot by prompting Sora with captions mentioning “Minecraft.”

Sora “gets” Minecraft because it’s ingested countless hours of Minecraft videos. If it’s able to create a simulation of the game that is indistinguishable from the original, is there really a difference? If it’s able to create a convincing simulation of reality based on the endless video it scrapes, what are the implications for “our” reality?

These are questions for philosophers, sure, but we’re all going to be faced with them sooner than we’d like. Sora and its descendants are going to have an impact beyond the cool video they generate.

