I’d figured that we were meant to assume that Emma is closer to Charlie’s age than Zendaya really is to Pattinson’s. I know your analysis is mostly metatextual in its consideration of the actors’ ages, but doesn’t the movie itself suggest less of a generational divide between the characters? I’m thinking particularly of the scene where Emma is recording her “manifesto” on a chunky early/mid-2000s desktop computer. It would’ve been outdated by nearly a decade when Zendaya herself was that age, circa 2012-2014.
Also, in my experience being the same “cusper” age as Zendaya, I’ve found that my elder Millennial friends are generally more forgiving of past transgressions than my younger Zoomer friends. Maybe that has something to do with the roles social media and “accountability culture” have played in our lives since our formative years. Not to say there aren’t exceptions to this rule, but I found that this movie seemed generally unconcerned with the real-life generational gap between the couple.
Personally (and I know this isn’t really a hot take), I think Borgli is saying more about how the superficiality of identity politics has seeped into our sense of morality, regardless of generational divide. Rachel’s reaction feels authentic to me not because she’s older than Emma and makes an exception for her own pre-documented transgression, but because she’s immersed in a culture where any opportunity to claim victimhood (in her case, vicariously through her cousin) is rewarded with sympathy and credibility, and an onus is placed on the “aggressor” to be held accountable at all costs.
Minor disagreements aside, this was a really great read about a really fascinating case-study of a movie. Thanks for writing!
Emma also seems younger than Charlie and his friends (and the co-workers) because she's constantly interrogated about a transgression that signifies an irrevocable pathology. Charlie doesn't go look for her manifesto; he fantasizes that her child and adult self are identical.
You put this in the footnote, but your Drama discourse made me think about my own identity as an "elder millennial" (approximately Pattinson's age) and how many of the people I know are more fatalist than judgy. Or, I was judgy (in the '10s) and grew into fatalism just fine. I had family who worked with computers, so I grew up fully interneted; a lot of my cohort got internetted in late high school and college. I like the Zoomers, generally; I just think they don't know what they don't know, because they never had to understand HTML and how the internet used to be made by a bunch of humans making human decisions. It isn't the magic that the platforms want you to think it is. The really mystifying people to me are generally the ones born in the '90s. Zendaya and Pattinson are great actors who can play against the impulses of their generations.
The microgenerational framing is more useful than the standard generation-cohort framing for tech adoption analysis. Platform habits now diverge across 4-5 year bands, not 15-20. The kid who came up on Snap is meaningfully different from the kid who came up on TikTok, and both are different from the kid coming up on Discord-as-feed. For anyone marketing or building consumer products, the old "Gen Z" or "Gen Alpha" aggregate is too coarse to be predictive.
I like the generational take on The Drama. My reading was that where you come down on whether what Zendaya did was unforgivable depends on what you believe about how people change: does real change have to come from some deep inner work, or can random events change your attitude/path for the better, and does that still count? Because the Rachel character seemed to be saying: maybe she wouldn't do a shooting now, but that's only by pure chance rather than by a self-initiated moment of reckoning with her own behavior, and so that doesn't make her a good person. As a millennial, I had never really thought about whether that belief -- that true change can only come from "doing the work" -- is a mostly millennial thing, but I guess it kind of is.
LOL just here to stan The Drama analysis - spot on :D
It seems obvious to me that the right move is to aim to develop students into the best humans they can possibly be, in that the truly human things AI can’t do will retain value and dignity as the cost of using AI to do everything else goes to zero. There is obviously way more factory-made furniture in the world than handmade, and far fewer furniture craftsmen than there used to be, but no one is throwing tens of thousands of dollars at an IKEA dining table. We need people to work the IKEA factory floor (for now), but I suspect the craftsman is more fulfilled.
It seems contradictory to imagine that AI will be seamlessly embedded into every facet of life and become the water we all swim in, yet we will also place great value on specially-trained prompt engineers. I don’t know how exactly the economy of the future will compensate the skills that AI can’t master, but it won’t be kind to those it can, and I don’t see why “talking effectively to AI” would be in the former category.
> But, not to ask a stupid question: Is A.I. actually like a forklift or a semi truck? It’s neither as immediately dangerous nor as patently useful; more importantly, it’s not actually very hard to use.
Strongly disagree. I think this take comes from the perspective of a *consumer* who has used consumer-grade tools and isn't aware of the burgeoning *commercial* applications of AI.
First off, the chat interface is not the most commercially useful one for LLMs. On a podcast you mentioned people asking, "what, are they asking Claude 'where should we bomb next'?" in regard to the Iranian air campaign. Asking that question betrays a fundamental misunderstanding of how these systems work and are being deployed.
The most accessible interface for a regular person is a chat box, but these systems can be deployed *anywhere a computer does something*. And computers *do lots of things*: process orders of real-world goods, manage trillions of dollars in financial transactions, make automated decisions about which goods in factories meet quality assurance standards and which don't, sort resumes & job applications into quality tiers for review by humans, create music, edit video, manage accounting for trillion-dollar businesses, automate technical support, manage call center schedules, etc. etc.
The people who are using Claude for planning Iranian bombing operations may, for example, be typing "where do we bomb next" into a text box, but that would be in the context of giving Claude access to real-time intelligence, real-time flight tracking, munitions data, and operational parameters. Claude may also have the ability to *write out to* external systems; e.g., if given permission by a human operator, it could POST to an API that updates a target list on a server that aircraft are reading from. Whether you think this is a good thing or not (I don't necessarily), it illustrates how these systems aren't just *text boxes*; they can be *agentic systems* that read from and publish to other computer systems.
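The read/write "agentic" pattern I'm describing can be sketched in a few lines. To be clear, this is a hypothetical toy, not any real deployment: `call_model`, `update_target_list`, and the tool names are all invented stand-ins, and a real harness would call an actual LLM API rather than return a canned response.

```python
import json

# Toy sketch of an agentic loop: the model emits a structured tool request
# instead of just chat text, and the harness executes it against an external
# system -- gated by a human approval step. All names here are hypothetical.

def call_model(messages):
    # Stand-in for a real LLM API call. Here it returns one canned tool
    # request so the sketch is runnable without any external service.
    return {"tool": "update_target_list", "args": {"items": ["A", "B"]}}

def update_target_list(items):
    # Stand-in for a POST to an external API the model may write to.
    return {"status": "ok", "count": len(items)}

TOOLS = {"update_target_list": update_target_list}

def agent_step(messages, approve):
    reply = call_model(messages)
    if "tool" in reply:
        # Human-in-the-loop gate: an operator must approve any write action
        # before the harness actually executes it.
        if not approve(reply):
            return {"status": "rejected"}
        return TOOLS[reply["tool"]](**reply["args"])
    return reply

result = agent_step([{"role": "user", "content": "update the list"}],
                    approve=lambda r: True)
print(json.dumps(result))
```

The point is that the chat box is just one thin front end; the same model call can sit behind any system boundary where a computer takes an action.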
In order to understand *how* to apply AI to these kinds of things, you need to develop specialized knowledge, much like a forklift operator. You need to understand, for example, how a web server and an API work, and how to configure an LLM to write API calls that take some kind of action on a computer. You also need to develop specialized knowledge of how these systems operate -- e.g. how to manage context & compaction, how to set up pre-defined "skills" the systems can call on as needed, etc.
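"Context & compaction" is a concrete example of that specialized knowledge. A rough sketch of the idea, with a trivial stand-in where a real system would make another LLM call to do the summarizing:

```python
# Toy sketch of context compaction: when a conversation outgrows the model's
# window, older turns are collapsed into a summary while recent turns are
# kept verbatim. `summarize` is a stand-in for a real LLM summarization call.

def summarize(turns):
    # Stand-in: a real implementation would ask the model to summarize.
    return {"role": "system",
            "content": f"[summary of {len(turns)} earlier turns]"}

def compact(history, max_turns=4):
    if len(history) <= max_turns:
        return history
    # Keep the most recent turns verbatim; compress everything older.
    old, recent = history[:-max_turns], history[-max_turns:]
    return [summarize(old)] + recent

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
compacted = compact(history)
print(len(compacted))  # 5: one summary turn plus the 4 most recent turns
```

Knowing when and how aggressively to compact (and what the summaries silently lose) is exactly the kind of operator knowledge that separates casual chat use from production use.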
I'm convinced, like it or not, that AI is going to transform every aspect of knowledge work over the coming decades--it's just a question of figuring out the tools, user experience, and technical implementation that unlock AI automation for a given type of work or domain. For example, well-configured AI agents can already automate a wide range of human tasks *if set up correctly*, with a reasonably high level of reliability at a cost lower than humans (one example would be technical support triage). Most businesses, however, don't have the right data & IT architecture in place to set up agents correctly. Many developers don't yet understand the key design principles required to develop high-quality agentic systems (e.g. using automated evals to detect errors). Not enough users understand how the LLMs work to develop effective applications for them.
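To make the "automated evals" point concrete, here's a minimal sketch of the idea, using the support-triage example: a fixed set of cases with expected outcomes, scored against the agent so you can gate deployment on the pass rate. `run_agent` and the cases are hypothetical stand-ins for a real agent and a real eval set.

```python
# Toy sketch of an automated eval harness for an agentic system: run the
# agent against known cases and compute a pass rate. In practice the agent
# would be an LLM-backed system and the eval set far larger; this stand-in
# just routes tickets by keyword.

def run_agent(ticket):
    # Hypothetical stand-in for a real triage agent.
    return "escalate" if "outage" in ticket else "auto-reply"

EVAL_CASES = [
    ("password reset request", "auto-reply"),
    ("full service outage in EU region", "escalate"),
    ("billing question", "auto-reply"),
]

def run_evals(agent, cases):
    results = [(ticket, agent(ticket) == expected)
               for ticket, expected in cases]
    passed = sum(ok for _, ok in results)
    return passed / len(results), results

score, details = run_evals(run_agent, EVAL_CASES)
print(f"pass rate: {score:.0%}")
```

The design principle is that errors get caught by the harness on every change to the prompt, tools, or model, rather than discovered by customers.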
Full disclosure: I work at a company whose product was obsolesced by the advent of coding LLMs, and who is now becoming an AI company. Obviously I have a vested interest in the above being true, but I also come from a place of skepticism; it was only in the last 6 months or so that I have become convinced of the above based on what I've seen in the industry.
I work in higher education. I think everything you wrote here is true, but when school admin talk about "teaching students how to AI" they are not thinking about APIs or agents. They are thinking only of prompt engineering.
I'm not sure we disagree, though! I mean, at least on the level that A.I. is not a "forklift." What you're describing is a much more complex and transformative technology, something on the order of the internal combustion engine.
It's interesting to write about the "forklift model" since one of the main analogies used to persuade students that using AI to do their work is bad is that it's like using a forklift to lift weights at the gym.
Someone pointed this out on Bluesky and I wish I’d known! I think it fits, actually—to the extent that AI is a forklift that can do some heavy mental lifting (and possibly atrophy your intellectual muscles), the only necessary “certification” is a general liberal-arts education.
“World being reshaped” No! It! Isn’t!
I came here to scold you for calling someone born in 1997 a Zoomer when I saw the admission at the end of the post. Millennials are born up until 2002, arguably 2005! Generations are long!
There is literally no definition of "millennial" that includes people born the same year as Facebook.
If we go with the "Forklift Model" for AI education, then we need the accompanying OSHA videos that show people getting gored horrifically by unattended chatbots while "Danger Zone" plays on repeat.
I'll never forget my forklift training.