Something’s Coming

January 6, 2025 · Film Industry, Geek Alert, General, Psych 101, Tools

Last week, Dwarkesh Patel put words to an uneasy feeling that resonated with me:

I think we’re at what late February 2020 was for Covid, but for AI.

If you can remember back to February 2020, both the media and the general public were still in normal-times mode, discussing Trump’s impeachment, the Democratic primaries and Harvey Weinstein. Epidemiologists recognized that something big and potentially unprecedented was coming, but the news hadn’t yet broken through.

One of the first front-page articles I can find in the NY Times about Covid is from February 22nd, 2020.

[Image: NY Times front page, with Covid story on left edge]

Just three weeks later, markets had crashed and schools were closing. The world was upended. Covid had become the context for everything.

Patel foresees a similar pattern with AI:

Every single world leader, every single CEO, every single institution, members of the general public are going to realize pretty soon that the main thing we as a world are dealing with is Covid, or in this case, AI.

By “pretty soon,” I don’t think Patel believes we’re three weeks away from global upheaval. But the timeframes are much shorter than commonly believed — and getting shorter month by month.

Wait, what? And why?

This post is meant to be an explainer for friends and readers who haven’t been paying close attention to what’s been happening in AI. Which is okay! Technology is full of hype and bullshit, which most people should happily ignore.

We’ve seen countless examples of Next Big Things ultimately revealed to be nothingburgers. Many of the promises and perils of AI could meet a similar fate. Patel himself is putting together a media venture focused on AI, so of course he’s going to frame the issue as existential. Wherever billions of dollars are being spent, there’s hype and hyperbole, predictions and polemics.

Still — much like with epidemiologists and Covid in February 2020, the folks who deal with AI for a living are pretty sure something big is coming, and sooner than expected.

Something big doesn’t necessarily mean catastrophic; the Covid analogy only goes so far. Indeed, some researchers see AI ushering in a golden age of scientific enlightenment and economic bounty. Others are more pessimistic — realistic, I’d say — warning that we’re in for a bumpy and unpredictable ride, one that’s going to be playing out in a lot of upcoming headlines.

The sky isn’t falling — but it’s worth directing your gaze upwards.

The world of tomorrow, today

Science fiction is becoming science fact much faster than almost anyone anticipated. One way to track this is to ask interested parties how many years it will be before we have artificial general intelligence (AGI) capable of doing most human tasks. In 2020, the average estimate was around 50 years. By the end of 2023, it was seven.

[Chart: estimated years until AGI, declining from 30 years to 8, with dashed lines indicating further declines]

Over the past few months, a common prediction has become three years. That’s the end of 2027. Exactly how much AI progress we’ll see by then has become the subject of a recent bet. Of the ten evaluation criteria for the bet, one hits particularly close to home for me:

8) With little or no human involvement, [AI will be able to] write Oscar-caliber screenplays.

As a professional screenwriter and Academy voter, I can’t give you a precise dividing line between “Oscar-caliber” and “pretty good” screenplays. But the larger point is that AI should be able to generate text that feels original, compelling and emotionally honest, both beat-by-beat and over the course of 120 satisfying pages. Very few humans can do that, so will an AI be able to?

A lot of researchers say yes, and by the end of 2027.

I’m skeptical — but that may be a combination of ego preservation and goalpost-moving. It’s not art without struggle, et cetera.

The fact that we’ve moved from the theoretical (“Could AI generate a plausible screenplay?”) to the practical (“Should an AI-generated screenplay be eligible for an Oscar?”) in two years is indicative of just how fast things are moving.

So what happened? Basically, AI got smarter much faster than expected.

Warp speed

Some of the acceleration is easy to notice. When large language models (LLMs) like ChatGPT debuted at the end of 2022, they felt like a novelty. They generated text and images, but nothing particularly useful, and they frequently “hallucinated,” a polite way of saying they made shit up.

If you shrugged and moved on, I get it.

The quality of LLMs’ output has improved a lot over the past two years, to the point that real professionals are using them daily. Even in their current state — even if they never get any better — LLMs can disrupt a lot of work, for better and for worse.

An example: Over the holidays, I built two little iOS apps using Cursor, which generates code from plain text using an LLM.

Here’s what I told it as I was starting one app:

I’ll be attaching screen shots to show you what I’m describing.

  1. Main screen is the starting screen upon launching the app. There will be a background image, but you can ignore that for now. There are three buttons: New Game, How to Play, and Credits.
  2. How to Play is reached through the How to Play button on the main screen. The text for that scrolling view is the file in the project how-to-play.txt.
  3. New Game screen is reached through the New Game button. It has two pop-up lists: the first chooses from 3 to 20, the second from 1 to 10. Clicking Start takes you into the game. (In the game view, the top-right field should show the players times rounds, so if you had 3 players and five rounds, it would start with 1/15, then 2/15.)
  4. The Setup screen is linked to from the game screen, if they need to make adjustments or restart/quit the game.

Within seconds, it had generated an app I could build and run in Xcode. It’s now installed on my phone. It’s not a commercial app anyone will ever buy, but if it were, this would be a decent prototype.
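
For a sense of what comes back, here’s a minimal sketch of the kind of SwiftUI code a tool like Cursor might produce for step 1. This isn’t the actual generated code; the view names and file handling are my own placeholders, assuming a SwiftUI project.

```swift
// A minimal sketch of what step 1 might come back as, assuming SwiftUI.
// View names here are hypothetical placeholders, not Cursor's actual output.
import SwiftUI

struct MainScreenView: View {
    var body: some View {
        NavigationStack {
            VStack(spacing: 16) {
                // The three buttons from the prompt; background image omitted for now.
                NavigationLink("New Game") { NewGameView() }
                NavigationLink("How to Play") { HowToPlayView() }
                NavigationLink("Credits") { CreditsView() }
            }
            .buttonStyle(.borderedProminent)
            .navigationTitle("Main")
        }
    }
}

struct HowToPlayView: View {
    var body: some View {
        // Scrolling view backed by how-to-play.txt from the app bundle.
        ScrollView {
            Text(howToPlayText).padding()
        }
    }

    private var howToPlayText: String {
        guard let url = Bundle.main.url(forResource: "how-to-play", withExtension: "txt"),
              let text = try? String(contentsOf: url, encoding: .utf8) else {
            return "How to Play text is missing from the bundle."
        }
        return text
    }
}

// Stubs for the remaining screens.
struct NewGameView: View { var body: some View { Text("New Game") } }
struct CreditsView: View { var body: some View { Text("Credits") } }
```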

Using Cursor feels like magic. I’m barely a programmer, but in the hands of someone who knew what they were doing, it’s easy to imagine technology like this tripling their productivity. ((Google’s CEO says that more than 25% of their code is already being generated by AI.)) That’s great for the software engineer — unless the company paying them decides they don’t need triple the productivity and will instead just hire one-third the engineers.

The same calculation can be applied to nearly any industry involving knowledge work. If your job can be made more productive by AI, your position is potentially in jeopardy.

That LLMs are getting better at doing actually useful things is notable, but that’s not the main reason timelines are shortening.

Let’s see how clever you really are

To measure how powerful a given AI system is, you need to establish some benchmarks. Existing LLMs easily pass the SAT, the GRE, and most professional certification exams. So researchers must come up with harder and harder questions, ones that won’t be in the model’s training set.

No matter how high you set the bar, the newest systems keep jumping over it. Month after month, each new model does a little better. Then, right before the holidays, OpenAI announced that its o3 system made a huge and unexpected leap:

[Chart: o3 performance and cost, both vastly higher]

With LLMs like ChatGPT or Claude, we’re used to getting fast and cheap answers. They spit out text or images in seconds. In contrast, o3 spends considerably more time (and computing power) planning and assessing. It’s a significant paradigm shift. The o3 approach is slower and more expensive — potentially thousands of dollars per query versus mere pennies — but the results for certain types of questions are dramatically better. For billion-dollar companies, it’s worth it.

Systems like these are particularly good at solving difficult math and computer science problems. And since AI systems themselves are based on math and computer science, today’s model will help build the next generation. This virtuous cycle is a significant reason the timelines keep getting shorter. AI is getting more powerful because AI is getting more powerful.

When and why this will become the major story

In 2020, Covid wasn’t on the front page of the NY Times until its economic and societal impacts were unmistakable. The stock market tanked; hospitals were filling up. Covid became impossible to ignore. Patel’s prediction is that the same thing will happen with AI. I agree.

I can imagine many scenarios bringing AI to the front page, none of which involve a robot uprising.

Here are a few topics I expect we’ll see in the headlines over the next three years.

  • Global tensions. As with nuclear technology during the Cold War, big nations worry about falling behind. China faces caps on the number of high-performance AI chips it’s allowed to import. The chips it needs? They’re made in Taiwan. Gulp.

  • Espionage. Corporations spend billions training their models. ((DeepSeek, a Chinese firm, apparently trained their latest LLM for just $6 million, an impressive feat if true.)) Those model weights are incredibly valuable, both to competitors and bad actors.

  • Alignment. This is a term of art for “making sure the AI doesn’t kill us,” and is a major source of concern for professionals working in the field. How do you teach AI to act responsibly, and how do you know it’s not just faking it? AI safety is currently the responsibility of corporations racing to be the first to market. Not ideal!

  • Nationalizing AI. For all three of the reasons above, a nation (say, the U.S.) might decide that it’s a security risk to allow such powerful technology to be controlled by anyone but the government.

  • Spectacular bankruptcy. Several of these companies have massive valuations and questionable governance. It seems likely one or more will fail, which will lead to questions about the worth of the whole AI industry.

  • The economy. The stock market could skyrocket — or tank. Many economists believe AI will lead to productivity gains that will increase GDP, but also, people work jobs to earn money and buy things? That seems important.

  • Labor unrest. Unemployment is one thing, but what happens when entire professions are no longer viable? What’s the point in retraining for a different job if AI could do that one too?

  • Breakthroughs in science and medicine. Once you have one AI as smart as a Nobel prize winner, you can spin up one million of them to work in parallel. New drugs? Miracle cures? Revolutionary technology, like fusion power and quantum computing? Everything seems possible.

  • Environmental impact (bad). When you see articles about the carbon footprint of LLMs, they’re talking about the initial training stage. That’s the energy-intensive step, but it’s also way smaller than you might expect. After that, the carbon impact of each individual query is negligible, on the order of watching a YouTube video. That said, the techniques powering systems like o3 use more power to deliver answers, which is why you see Microsoft and others talking about recommissioning nuclear plants. Also, e-waste! All those outdated chips need to be recycled.

  • Environmental impact (good). AI systems excel at science, engineering, and anything involving patterns. Last month, Google’s DeepMind pushed weather forecasting from 10 days to 15 days. Work like this could help us deal with effects of climate change, by improving crop yields and the energy grid, for example.

So how freaked out should you be?

What is an ordinary person supposed to do with the knowledge that the world could suddenly change?

My best advice is to hold onto your assumptions about the future loosely. Make plans. Live your life. Pay attention to what’s happening, but don’t let it dominate your decision-making. Don’t let uncertainty paralyze you.

A healthy dose of skepticism is warranted. But denial isn’t. I still hear smart colleagues dismissing AI as fancy autocomplete. Sure, fine — but if it can autocomplete a diagnosis more accurately than a trained doctor, we should pay attention.

It’s reasonable to assume that 2027 will look a lot like 2024. We’ll still have politics and memes and misbehaving celebrities. It’ll be different from today in ways we can’t fully predict. The future, as always, will remain confusing, confounding and unevenly distributed.

Just like the actual pandemic wasn’t quite Contagion or Outbreak, the arrival of stronger AI won’t closely resemble Her or The Terminator or Leave the World Behind. Rather, it’ll be its own movie of some unspecified genre.

Which hopefully won’t be written by an AI. We’ll see.

Thanks to Drew, Nima and other friends for reading an early draft of this post.

What is a #writesprint?

March 19, 2020 · General, How-To, Psych 101

A #writesprint is a timed writing session. For a set period — often 60 minutes but sometimes shorter — you sit down and focus all your attention on writing.

No checking Twitter. No Googling lyrics. No running to the kitchen for a snack.

Just write.

It doesn’t have to be screenwriting; you can #writesprint a term paper, a novel or a blog post. The important thing is that you’re writing *something you want or need to write.*

**A #writesprint is about showing up.** It’s designed to get your butt in the seat, fingers on the keyboard.

When the timer ends, stand up and walk away. You can come back to do more writing later, even another sprint, but definitely reward yourself for having done the work.

You can do a #writesprint by yourself, but it often helps to have the social pressure and accountability of others. I’ll occasionally announce on Twitter that I’m about to start a #writesprint:

https://twitter.com/johnaugust/status/1240315695331590144?s=20

If you want to write along with me, reply or favorite or just start. You never need permission. If you want to brag about how much you got done during your sprint, go for it!

### Frequently Asked Questions

**Do I need any special equipment or software?**
Not really. You can set a timer on your phone. If you’re using [Highland 2](https://quoteunquoteapps.com/highland-2/), the built-in Sprint function will keep track of your words, which is handy.

**Do I need to start at the top of the hour?**
No. It’s convenient but not necessary. When I was [writing the Arlo Finch books](https://johnaugust.com/2018/how-and-why-to-write-a-novel-in-highland-2), I found it useful to schedule two sprints a day, generally at 10am and 2pm.

**Can I use a #writesprint to do non-writing work?**
Of course! If it’s something you’re kind of dreading, and a timer plus some social pressure would help, go for it.

**Where did this idea come from?**
I *might* have created the #writesprint hashtag, ((I’ve deleted my old tweets, but the earliest appearance of #writesprint is in 2011, which is when I started doing them.)) but I definitely got the idea from [Jane Espenson](https://twitter.com/JaneEspenson), who’s been doing these for years. (She calls them writing sprints, which sounds better but doesn’t hashtag as neatly.) And of course it shares a tradition with the [Pomodoro Technique](https://en.wikipedia.org/wiki/Pomodoro_Technique) and other productivity hacks.

**Will this really boost my productivity?**
If you’re spending a fixed amount of time at the keyboard concentrating on one thing to write, you’re going to get more accomplished than if you’re jumping between email and YouTube and various news sites. It’s like putting blinders on a horse. It keeps you focused.

**How short can a #writesprint be?**
You can get a lot done in just 10 minutes of focused writing. Don’t be afraid to set short sprints.

**Can I go longer than 60 minutes?**
If you’re in the flow and decide you want to keep working past the bell, that’s your choice. But don’t set out to write for more than 60 minutes. The idea of a sprint is that it’s intense and focused. It’s a different energy than a marathon.

Professionalism in the Age of the Influencer

November 20, 2019 · Film Industry, Follow Up, General, International, Random Advice

*On October 24, 2019, I presented the Hawley Foundation Lecture at Drake University. It was an update and reexamination of a 2006 [speech on professionalism](https://johnaugust.com/2006/professional-writing-and-the-rise-of-the-amateur) I originally gave at Trinity University, and later that year at Drake.*

*What follows is a pretty close approximation of my speech, but hardly a transcript. It’s long, around 14,000 words. My presentation originally had slides. I’ve included many of them, and swapped out others for links or embedded posts.*

*If you’re familiar with the earlier speech and want to jump to the new stuff, you can click here.*

—

Back in 2006, I gave a speech here at Drake entitled “Professional Writing and the Rise of the Amateur.” In it, I presented my observations and arguments about how the emergence of the internet had made the old distinctions between amateurs and professionals largely irrelevant. Tonight I want to revisit that speech and look at what still makes sense in 2019, and more importantly, what I got wrong.

To do that, we need to start with a bit of time travel so we can all remember what 2006 looked like.

Here’s Facebook:

[Image: Facebook in 2006]

Here’s Twitter:

[Image: Twitter in 2006]

Here’s Netflix:

[Image: Netflix home screen in 2006]

Here’s Reddit:

[Image: Reddit in 2006]

Here’s Instagram:

[Image: Instagram debuted in 2010]

Oh, 2006 was a simpler time. The internet existed, but it wasn’t as all-consuming as it is now. We had blogs. We had MySpace. But we didn’t have the internet on our iPhones. Because iPhones wouldn’t come out for another year.

However, even in this innocent age, issues would arise that would feel very familiar today. We had fake news and trolls and pile-ons.

For example, back in 2006, I started my speech with this anecdote:

> On March 21, 2004, at about nine in the morning, I got an email from my friend James, saying, “Hey, congrats on the great review of Charlie and the Chocolate Factory on Ain’t It Cool News!”

Let’s start by answering: What is Ain’t It Cool News? It was a movie website started by a guy named Harry Knowles. It looked like this:

[Image: Ain’t It Cool News in 2006]

Ain’t It Cool News billed itself as a fan site. I’d argue that it was an incredibly significant step towards today’s fan-centered nerd culture, for better and for worse. Online fandom has brought forth the Avengers and fixed Sonic the Hedgehog’s teeth, but it’s also unleashed digital mobs upon actors and journalists, women in particular.

Back in 2006, the nexus of movie fandom was Ain’t It Cool News. It wasn’t just a barometer of what a certain class of movie fan would like; it could set expectations and buzz. Studio publicity departments checked it constantly.

So, back to my email from James. He’d written:

> “Hey, congrats on the great review of Charlie and the Chocolate Factory on Ain’t It Cool News!”

This was troubling for a couple of reasons.

First off, the movie hadn’t been shot yet. We weren’t in production. So the review was actually a review of the script. Studios and filmmakers really, really don’t like it when scripts leak out and get reviewed on the internet, because it starts this cycle of conjecture and fuss about things that may or may not ever be shot. So I knew that no matter what, I was going to get panicked phone calls from Warner Bros.

I click through to Ain’t It Cool and read this “review.” And it’s immediately clear that it’s a complete work of fiction.

[Image: AICN article, 2006]

The author of the article, “Michael Marker,” claims to have read the script, but he definitely hasn’t. He’s just making it up. It is literally fake news.

Fortunately, back in 2004, I knew exactly one person at Ain’t It Cool News. His name was Jeremy, but he went by the handle “Mr. Beaks.” So I email him and say, hey, that review of the Charlie script is bullshit.

Actually, I don’t say that. I say, “That guy is bullshitting you.” It’s not that I’m wronged, no. It’s that that guy, Michael Marker, is besmirching the good name of Ain’t It Cool News by trying to pass off his deluded ramblings as truth. How dare he!

And it works. Mr. Beaks talks to Harry Knowles, and Harry posts a new article saying that the review was bogus.

[Image: AICN article screenshot]

They don’t pull the original article, but oh well. It’s basically resolved.

I can’t help but think — this article was wrong, but it was really, really positive. What if it had been negative? Would Mr. Beaks or Harry Knowles have believed me? Probably not. They would have said, “Oh, sour grapes.” My complaining would have made the readers believe the bogus review even more.

It might have led to the [Streisand effect](https://en.wikipedia.org/wiki/Streisand_effect), where complaining about something just brings more attention to it.

Back in 2006, if you tried to really go after any of these film-related sites, criticizing them for say, running a review of a test screening or just outright making shit up, you’d get one standard response:

> Hey, we’re not professional journalists. We’re just a bunch of guys who really love movies.

Their defense is that they’re amateurs, so they can’t be held to the same standards as the New York Times or NBC.

That became the topic of my speech in 2006: the eroding distinction between professionals and amateurs.

The classic, easy distinction is that the professional gets paid for it, while the amateur doesn’t. For a lot of things, that works. You have a professional boxer versus an amateur. You have a professional astronomer versus an amateur — some guy with a telescope in his back yard.

Craig is running a few minutes behind

September 4, 2015 · General, Los Angeles

Recording the podcast today, Craig apologized for being a few minutes late, “like always,” he said. “No worries,” I said. After all, it’s just Skype.

But Craig’s comment got me looking through my Messages history. What follows is a very slightly redacted version of our entire conversation thread since the start of Scriptnotes.

[Image: message thread]

Craig is still the best co-host in the universe.

Happy Labor Day weekend!
