
John August


Something’s Coming

January 6, 2025 Film Industry, Geek Alert, General, Psych 101, Tools

Last week, Dwarkesh Patel put words to an uneasy feeling that resonated with me:

I think we’re at what late February 2020 was for Covid, but for AI.

If you can remember back to February 2020, both the media and the general public were still in normal-times mode, discussing Trump’s impeachment, the Democratic primaries and Harvey Weinstein. Epidemiologists recognized that something big and potentially unprecedented was coming, but the news hadn’t yet broken through.

One of the first front-page articles I can find in the NY Times about Covid is from February 22nd, 2020.

[Image: NY Times front page from February 22, 2020, with the Covid story on the left edge]

Just three weeks later, markets had crashed and schools were closing. The world was upended. Covid had become the context for everything.

Patel foresees a similar pattern with AI:

Every single world leader, every single CEO, every single institution, members of the general public are going to realize pretty soon that the main thing we as a world are dealing with is Covid, or in this case, AI.

By “pretty soon,” I don’t think Patel believes we’re three weeks away from global upheaval. But the timeframes are much shorter than commonly believed — and getting shorter month by month.

Wait, what? And why?

This post is meant to be an explainer for friends and readers who haven’t been paying close attention to what’s been happening in AI. Which is okay! Technology is full of hype and bullshit, which most people should happily ignore.

We’ve seen countless examples of Next Big Things ultimately revealed to be nothing burgers. Many of the promises and perils of AI could meet a similar fate. Patel himself is putting together a media venture focused on AI, so of course he’s going to frame the issue as existential. Wherever there’s billions of dollars being spent, there’s hype and hyperbole, predictions and polemics.

Still — much like with epidemiologists and Covid in February 2020, the folks who deal with AI for a living are pretty sure something big is coming, and sooner than expected.

Something big doesn’t necessarily mean catastrophic; the Covid analogy only goes so far. Indeed, some researchers see AI ushering in a golden age of scientific enlightenment and economic bounty. Others are more pessimistic — realistic, I’d say — warning that we’re in for a bumpy and unpredictable ride, one that’s going to be playing out in a lot of upcoming headlines.

The sky isn’t falling — but it’s worth directing your gaze upwards.

The world of tomorrow, today

Science fiction is becoming science fact much faster than almost anyone anticipated. One way to track this is to ask interested parties how many years it will be before we have artificial general intelligence (AGI) capable of doing most human tasks. In 2020, the average estimate was around 50 years. By the end of 2023, it was seven.

[Chart: estimates of years until AGI, declining from roughly 30 years to 8, with dashed lines indicating further declines]

Over the past few months, a common prediction has become three years. That’s the end of 2027. Exactly how much AI progress we’ll see by then has become the subject of a recent bet. Of the ten evaluation criteria for the bet, one hits particularly close to home for me:

8) With little or no human involvement, [AI will be able to] write Oscar-caliber screenplays.

As a professional screenwriter and Academy voter, I can’t give you a precise dividing line between “Oscar-caliber” and “pretty good” screenplays. But the larger point is that AI should be able to generate text that feels original, compelling and emotionally honest, both beat-by-beat and over the course of 120 satisfying pages. Very few humans can do that, so will an AI be able to?

A lot of researchers say yes, and by the end of 2027.

I’m skeptical — but that may be a combination of ego preservation and goalpost-moving. It’s not art without struggle, et cetera.

The fact that we’ve moved from the theoretical (“Could AI generate a plausible screenplay?”) to practical (“Should an AI-generated screenplay be eligible for an Oscar?”) in two years is indicative of just how fast things are moving.

So what happened? Basically, AI got smarter much faster than expected.

Warp speed

Some of the acceleration is easy to notice. When large language models (LLMs) like ChatGPT debuted at the end of 2022, they felt like a novelty. They generated text and images, but nothing particularly useful, and they frequently “hallucinated,” a polite way of saying made shit up.

If you shrugged and moved on, I get it.

The quality of LLMs’ output has improved a lot over the past two years, to the point that real professionals are using them daily. Even in their current state — even if they never get any better — LLMs can disrupt a lot of work, for better and for worse.

An example: Over the holidays, I built two little iOS apps using Cursor, which generates code from plain text using an LLM.

Here’s what I told it as I was starting one app:

I’ll be attaching screen shots to show you what I’m describing.

  1. Main screen is the starting screen upon launching the app. There will be a background image, but you can ignore that for now. There are three buttons. New Game, How to Play, and Credits.
  2. How to Play is reached through the How to Play button on the main screen. The text for that scrolling view is the file in the project how-to-play.txt.

  3. New Game screen is reached through the new game button. It has two pop-up lists. the first chooses from 3 to 20. the second from 1 to 10. Clicking Start takes you into the game. (In the game view, the top-right field should show the players times round, so if you had 3 players and five rounds, it would start with 1/15, then 2/15.

  4. the Setup screen is linked to from the game screen, if they need to make adjustments or restart/quit the game.

Within seconds, it had generated an app I could build and run in Xcode. It’s now installed on my phone. It’s not a commercial app anyone will ever buy, but if it were, this would be a decent prototype.
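The round-counter logic from step 3 of that prompt is exactly the kind of small detail Cursor had to get right. Here it is sketched in Python rather than Swift, just to show the arithmetic — the function name is mine, not from the app:

```python
def turn_label(turn: int, players: int, rounds: int) -> str:
    """Top-right field text: current turn out of players x rounds total turns."""
    total = players * rounds
    if not 1 <= turn <= total:
        raise ValueError("turn out of range")
    return f"{turn}/{total}"

# With 3 players and 5 rounds, the counter runs 1/15, 2/15, ... 15/15.
print(turn_label(1, 3, 5))  # 1/15
print(turn_label(2, 3, 5))  # 2/15
```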

Using Cursor feels like magic. I’m barely a programmer, but in the hands of someone who knew what they were doing, it’s easy to imagine technology like this tripling their productivity. ((Google’s CEO says that more than 25% of their code is already being generated by AI.)) That’s great for the software engineer — unless the company paying them decides they don’t need triple the productivity and will instead just hire one-third the engineers.

The same calculation can be applied to nearly any industry involving knowledge work. If your job can be made more productive by AI, your position is potentially in jeopardy.

That LLMs are getting better at doing actually useful things is notable, but that’s not the main reason timelines are shortening.

Let’s see how clever you really are

To measure how powerful a given AI system is, you need to establish some benchmarks. Existing LLMs easily pass the SAT, the GRE, and most professional certification exams. So researchers must come up with harder and harder questions, ones that won’t be in the model’s training set.

No matter how high you set the bar, the newest systems keep jumping over it. Month after month, each new model does a little better. Then, right before the holidays, OpenAI announced that its o3 system made a huge and unexpected leap:

[Chart: o3 benchmark performance and per-task cost, both vastly higher than previous models]

With LLMs like ChatGPT or Claude, we’re used to getting fast and cheap answers. They spit out text or an image in seconds. In contrast, o3 spends considerably more time (and computing power) planning and assessing. It’s a significant change in paradigm. The o3 approach is slower and more expensive — potentially thousands of dollars per query versus mere pennies — but the results for certain types of questions are dramatically better. For billion-dollar companies, it’s worth it.
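One simplified way to picture that tradeoff — a toy sketch of my own, not how o3 actually works — is best-of-n sampling: instead of returning its first answer, the system generates many candidates, scores them, and keeps the best, spending more compute for a better result.

```python
import random

def candidate_score(rng: random.Random) -> float:
    """Stand-in for generating one answer and scoring its quality (0 to 1)."""
    return rng.random()

def best_of_n(n: int, seed: int = 0) -> float:
    """Spend n times the compute: sample n candidate answers, keep the best."""
    rng = random.Random(seed)
    return max(candidate_score(rng) for _ in range(n))

# Spending more compute can only improve the best answer found.
assert best_of_n(100) >= best_of_n(10) >= best_of_n(1)
```

Real systems do something far richer (extended chains of reasoning, self-checking), but the economics rhyme: answer quality scales with how much compute you’re willing to burn per question.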

Systems like these are particularly good at solving difficult math and computer science problems. And since AI systems themselves are based on math and computer science, today’s model will help build the next generation. This virtuous cycle is a significant reason the timelines keep getting shorter. AI is getting more powerful because AI is getting more powerful.

When and why this will become the major story

In 2020, Covid wasn’t on the front page of the NY Times until its economic and societal impacts were unmistakable. The stock market tanked; hospitals were filling up. Covid became impossible to ignore. Patel’s prediction is that the same thing will happen with AI. I agree.

I can imagine many scenarios bringing AI to the front page, none of which involve a robot uprising.

Here are a few topics I expect we’ll see in the headlines over the next three years.

• Global tensions. As with nuclear technology during the Cold War, big nations worry about falling behind. The U.S. caps the number of high-performance AI chips China is allowed to import. And the chips China needs? They’re made in Taiwan. Gulp.
  • Espionage. Corporations spend billions training their models. ((DeepSeek, a Chinese firm, apparently trained their latest LLM for just $6 million, an impressive feat if true.)) Those model weights are incredibly valuable, both to competitors and bad actors.

  • Alignment. This is a term of art for “making sure the AI doesn’t kill us,” and is a major source of concern for professionals working in the field. How do you teach AI to act responsibly, and how do you know it’s not just faking it? AI safety is currently the responsibility of corporations racing to be the first to market. Not ideal!

  • Nationalizing AI. For all three of the reasons above, a nation (say, the U.S.) might decide that it’s a security risk to allow such powerful technology to be controlled by anyone but the government.

  • Spectacular bankruptcy. Several of these companies have massive valuations and questionable governance. It seems likely one or more will fail, which will lead to questions about the worth of the whole AI industry.

  • The economy. The stock market could skyrocket — or tank. Many economists believe AI will lead to productivity gains that will increase GDP, but also, people work jobs to earn money and buy things? That seems important.

  • Labor unrest. Unemployment is one thing, but what happens when entire professions are no longer viable? What’s the point in retraining for a different job if AI could do that one too?

• Breakthroughs in science and medicine. Once you have one AI as smart as a Nobel Prize winner, you can spin up one million of them to work in parallel. New drugs? Miracle cures? Revolutionary technology, like fusion power and quantum computing? Everything seems possible.

• Environmental impact (bad). When you see articles about the carbon footprint of LLMs, they’re talking about the initial training stage. That’s the energy-intensive step, but also way smaller than you may be expecting? After that, the carbon impact of each individual query is negligible, on the order of watching a YouTube video. That said, the techniques powering systems like o3 involve using more power to deliver answers, which is why you see Microsoft and others talking about recommissioning nuclear plants. Also, e-waste! All those outdated chips need to be recycled.

• Environmental impact (good). AI systems excel at science, engineering, and anything involving patterns. Last month, Google’s DeepMind pushed weather forecasting from 10 days to 15 days. Work like this could help us deal with the effects of climate change, by improving crop yields and the energy grid, for example.

So how freaked out should you be?

    What is an ordinary person supposed to do with the knowledge that the world could suddenly change?

    My best advice is to hold onto your assumptions about the future loosely. Make plans. Live your life. Pay attention to what’s happening, but don’t let it dominate your decision-making. Don’t let uncertainty paralyze you.

    A healthy dose of skepticism is warranted. But denial isn’t. I still hear smart colleagues dismissing AI as fancy autocomplete. Sure, fine — but if it can autocomplete a diagnosis more accurately than a trained doctor, we should pay attention.

    It’s reasonable to assume that 2027 will look a lot like 2024. We’ll still have politics and memes and misbehaving celebrities. It’ll be different from today in ways we can’t fully predict. The future, as always, will remain confusing, confounding and unevenly distributed.

    Just like the actual pandemic wasn’t quite Contagion or Outbreak, the arrival of stronger AI won’t closely resemble Her or The Terminator or Leave the World Behind. Rather, it’ll be its own movie of some unspecified genre.

    Which hopefully won’t be written by an AI. We’ll see.

    Thanks to Drew, Nima and other friends for reading an early draft of this post.

    At the table with Kamala Harris

    November 4, 2024 Citizenship, First Person

    I first met Kamala Harris at a small lunch in 2010. Just four or five of us around a table. Harris was running to become California’s next attorney general, so a friend suggested we meet her. I found Harris to be incredibly bright and charismatic. I donated to her campaign on the spot.

    Afterwards, I described her as a superstar. My friend suggested Kamala Harris could be president one day. I agreed.

    In the years after that initial meeting, I crossed paths with Harris several times. My husband and I were seated with her at a fundraiser in 2012. She was there to introduce President Obama. Everything was running late, so we ended up talking with her for more than an hour. She’s incredibly easy to talk to, and funny. She asks questions. She’s curious.

    This video in which Harris explains dry-brining a turkey captures a bit of that vibe.

When Harris ran for the U.S. Senate in 2016, I met her again at a backyard fundraiser. (I keep saying “met” because while I’ve probably spoken with Harris for two hours over 14 years, she almost certainly doesn’t know who I am. And that’s totally how it should be! When it comes to people, “knowing” isn’t really reciprocal.) Harris was still the same warm/funny/smart candidate I’d met at that lunch in 2010. I happily made my donation, excited to see her become our senator.

    In 2019, Harris dropped out of the crowded Democratic primary fairly early, but I was delighted to see her become Biden’s running mate. It’s easy to forget Harris has already made history as our nation’s first-ever female vice president. But then again, it’s easy to forget vice presidents if things are going well.

    My most recent encounter with Harris was, again, in a back yard. In June 2024, just three days after Biden’s disastrous debate performance, Harris had to convince a group of terrified donors that the campaign could still win. She largely succeeded. She acknowledged reality — that Biden had lost the debate — but then laid out in clear terms the issues and the dangers presented by another Trump presidency. She wasn’t afraid to swear and smile and laugh. She was very much the woman I’d met at a lunch in 2010. I walked away thinking, “Man, I wish she were the candidate instead of Biden.”

    And then she was.

    In the 106 days she’s been at the top of the ticket, it’s been remarkable to see this singular talent translate the energy she’s always brought to one-on-one encounters to giant arenas. Beyoncé now opens for her.

    This ad, the closer of the campaign, accurately captures Harris’s unique blend of compassion, curiosity and conviction.

    The election is tomorrow. It could go either way. I’ve donated and phone banked and done all the things. I fervently hope Kamala Harris will be our next president.

    Obviously, one votes based on which candidate best reflects their world view and priorities. But you’re also electing a person. Trust matters. So does authenticity. That’s why I’m writing up these observations.

    Most people will never have the chance to meet Kamala Harris in person. But as someone who’s been lucky to interact with her over the years, I can tell you that the Kamala Harris you get face-to-face is just as impressive as the candidate on the stage.

    If you’re undecided about who to vote for, or worry that Kamala Harris is some manufactured political entity, I can assure you she was this cool 14 years ago.

    Play AlphaBirds with us

    November 1, 2024 Los Angeles, Talk

    AlphaBirds — the fun, fast word game we make — is having its first-ever live event this month.

    We’re holding it at Village Well in Culver City, November 22nd at 6:30pm.

    Come have a drink and learn to play our game. Space is limited, so do get a ticket in advance.

    Not in the LA area? AlphaBirds is also available through our store and on Amazon.

    How to sell Big Fish

    October 9, 2024 Big Fish, Projects

    This afternoon, I came across the letter I wrote in 1998 trying to convince Columbia Pictures to option the rights to Daniel Wallace’s novel Big Fish for me to adapt.

    It’s strange seeing this letter now. In it, I describe the very broad shape of the movie, but at the time I didn’t know so many of the details. Crucial elements like the circus, the war, Josephine, Norther Winslow — none of these existed in the book, and I had at most a vague sense of what I wanted to do.

    At the time, there were no producers involved, and no director. It was just me and the studio.

    The truth is, this letter probably didn’t convince anyone. Columbia wanted me under contract so they could have me work on other more-commercial movies. But it served an important role in convincing myself that there really was a movie to make out of Wallace’s weird and delightful little book.


    To: Readers of Daniel Wallace’s BIG FISH

    From: John August

    Date: 9/14/98

    RE: This book

    I come to you with an unfair advantage: I read BIG FISH a few weeks ago, whereas many of you probably only read it last night or this morning. Trust me — it’s the kind of book that sticks with you and gets better as you think back through it. But since you probably don’t have the luxury of weeks to mull it over, I wanted to tell you why I liked this book so much when I first read it, and like it even more as I look back.

    If you’re reading coverage of this book, the logline probably includes the words dying father and humorous anecdotes, which sounds suspiciously like the TV Guide listing for a Hallmark Hall of Fame movie that would be nominated for an Emmy, even though nobody you know actually saw it. The problem with that logline is that while it’s technically correct, it’s absolutely wrong.

BIG FISH is the story of Edward Bloom, a charming pain in the ass, as told by his immensely frustrated son William, who, in the absence of any concrete history, can only tell us the wild exaggerations his father has been shoving upon him his entire life.

Edward Bloom feeds his son the kinds of stories you tell a wide-eyed five-year-old — how you used to walk to school five miles, uphill each way. But now his son is in his 30s, and Bloom never stopped telling these stories. Rather, he kept embellishing them, until they became a second life of sorts — perhaps the one he secretly wished he had lived. We pick up the tale as the elder Bloom lies on his deathbed, but the question of the story is not “will he die?” but “will he finally drop the facade?”

    At this point, I have to digress and tell an anecdote from my life. (This is the kind of book that inevitably makes you want to talk about your own life; it stirs up strange recollections.)

    On a dark rainy night in production on GO, I was sent off to set up a second-unit shot with a talented young actor who is, moment for moment, one of the funniest people I’ve ever met. ((Jay Mohr.)) The problem is, he doesn’t shut up. It’s as if every sensory input is channeled through a part of his brain that seeks humorous output. This life-as-Groundlings-sketch is charming at three in the afternoon, but at three in the morning, when you’re cold and exhausted and first unit has the lens you really really need, you find yourself searching for the switch that turns him off. Would you please just stop being funny so we can do this fucking shot?

    In BIG FISH, William has the same frustration with his father: Would he please, just for once, not make a joke of all this?

    Even as Edward Bloom amuses us, we can understand why William is annoyed. And honestly, if we had to spend an entire movie with this old man, we might get sick of him too. But the special treat of this movie is that you spend most of it with Bloom as a young man, tracking his life from impossible story to impossible story. He’s a modern-day Paul Bunyan, funnier for the inconsistencies in his tales.

    If it sounds like I’m downplaying the dramatic elements, I’m not. Like FORREST GUMP or ORDINARY PEOPLE, there’s honest emotion at its core, and a movie shouldn’t shy away from that. I lost my own father at 21, and can remember sharply the months of walking on eggshells, and the weird power dynamics of a household built on maintaining tranquility at any cost. ((I was 28 when I wrote this. I made Will my age and Edward my father’s age so I could keep track of the timelines.))

    Because even as they’re fading, people can piss you off. Just because you’re dying doesn’t give you an excuse to be an asshole.

    While Edward spends his life trying to convince his son what a great man he is, William just wants to see a glimpse of the real man behind the bravado. In the end, neither wins, but there’s a more fundamental truth to be learned: even if you never really understand a man, that doesn’t keep you from appreciating him. ((This thesis gets restated different ways in the movie, including “My father and I were strangers who knew each other very well.” and “You become what you always were: a very big fish.”))

    Now that I’ve rhapsodized about the book’s many virtues, let me note that it isn’t perfect. The individual anecdotes don’t always thread together especially well, and need to be more consistently (a) funny and (b) relevant. Properly told, we should see the reality behind the wild exaggerations. Even though we see the “myth” of Bloom’s life, there’s truth in the lies.

    I’m not crazy about the ending; magical realism is a tough sell, and almost always feels like a cheat. But I think we can have it both ways. My instinct is to let Bloom die the way actual people die — quiet and peacefully — then show his death the way he would want us to believe: a funny, cataclysmic event that burns down half the town and coincidentally resolves many of the loose threads from his various stories.

    I hope these ramblings give you a forecast of what you might be thinking about this book a week or two from now. Likely you’ll have your own anecdotes, because Wallace has the weird ability to feel universal and highly specific, as if he stumbled across some secret trove of shared histories.
