As someone who has been participating in a two-year-long review of Gen AI software for use in a corporate legal setting, I have been astounded at how many things these tools can do that add little or no value. "Wow, it can do that?" followed by "and that's a good thing why?"
Usually the why is underwhelming.
I have many extremely real concerns about literacy (I do a lot of test prep tutoring, and the challenges I’ve seen in college-bound kids with good grades and affluent parents are pretty alarming), but I found the Atlantic article pretty unconvincing for a number of reasons, chief among them that its evidence that American schools as a whole have significantly moved away from assigning whole books is pretty thin. I think this tends to be surprising to people who went to schools that did focus on books, but teaching with basal readers has been pretty common in US schools since the 1860s, to the extent that “teach whole books” was a radical progressive idea in the mid-twentieth century, and my understanding is that at any given moment in the past several decades you could find plenty of teachers still doing it. The author characterizes an EdWeek survey as saying “nearly a quarter of respondents said that books are no longer the center of their curricula,” but as far as I can tell from the article linked, the survey says no such thing - it asks about current practices but didn’t ask respondents whether their practices have changed, so it’s possible that a quarter of teachers have mostly been using basals all along. (This is, to be frank, the exact kind of interpretive error I have to coach eighteen-year-olds out of making on evidence questions.)
It’s honestly wild to me how sloppy the Atlantic is with links… in addition to the survey misinterpretation issue, the author says whole books are disappearing from middle- and high-school programs and links to an article about some NYC districts implementing a program that is K-6 only, and the link for a phrase about NCLB & Common Core emphasizing informational text goes to an NYT piece explicitly about teachers finding ways to integrate and balance informational text with literature (which strikes me as perfectly reasonable). This is pretty standard for the times I’ve tried to fact-check major publications (shout out to the Slate advice columnist who dropped a link to a paper about eating disorders that said literally the exact opposite of what she was claiming). It feels a little too mean to say “it’s almost like the writers and editors at certain national publications are already passable but limited synthesists,” but, well…
To speak anecdotally for a minute, I also found the Atlantic article frustrating because it seems to imply that the only challenge students are having with complex texts in college is their length… but I regularly meet students who struggle to understand 500 words of a nonfiction passage written for educated laypeople and/or absolutely cannot comprehend even one paragraph of any text more than a hundred years old. Like, the kids in NYC private schools are out there struggling through three sentences from a Lincoln speech. They’re taking ten minutes to slowly, with help, make it through the first two paragraphs of an article such as one might find in Scientific American. The author claims that students at Columbia can decode words and sentences, but I’ve definitely met kids who, while not necessarily bound for Columbia, are on track for selective private colleges and still struggle to figure out unfamiliar words (recent example: “uninhibitedly”) and can’t make it unsupported through a sentence of Austen or the Federalist Papers. (I often ask them to identify what a given pronoun refers to, and it’s amazing how hard this is for some of them! They have made it almost to college without having internalized that if you read a word like “its” or “those,” you should know what the author is pointing to, and it doesn’t strike them as unusual when they don’t.) I am sure they struggle with long books, because it’s pretty hard and miserable to spend an extended period of time on something you can’t really do or get much out of, but the piece’s underlying assumption that students are doing okay with short texts is - anecdotally - wildly at odds with what I encounter. (Sorry to be like, “it’s actually so much worse,” but, well…)
Booming comment, thank you!
Yes, the quality of education in public and private schools is horrible, and it’s not only the emphasis on standardized testing, but also that America spends too little money on education, teachers aren’t supported (financially or otherwise), and teaching isn’t respected. I could go on forever, but you are totally right: it is SO much worse. The number of college students who cannot produce a grammatically correct sentence is astounding.
I've always struggled with the dense mythology, experimental prose, and themes within the Curious George oeuvre, so I'm glad that someone is (finally!) making it accessible.
I apologize in advance for this inception-level move, but I couldn't help myself. Below is what ChatGPT had to say when I asked it to summarize this piece. And, well… it's an impressively un-self-aware performance. More broadly, it all seems hell-bent on answering the question: what if we took every spark of originality and creativity out of the endeavour of writing?
---
The author appreciates NotebookLM for its straightforward interface and practical applications, but notes it shares common limitations with other generative AI tools. They identify five key qualities prevalent in generative AI apps today:
- Novelty Over Utility: Users often engage with these apps for entertainment rather than true productivity.
- Fun to Use: The interaction itself is enjoyable, with users experimenting creatively.
- Adequate but Shallow Outputs: While the summaries and podcasts generated are generally readable, they often lack depth and can miss key insights.
- Frequent Errors: The AI can generate inaccurate information, reflecting common pitfalls in large language models.
- Rapid Production of Mediocre Content: The potential for creating low-quality content increases quickly, as seen in the examples of users generating superficial podcasts from bulk content.
The author expresses concern that reliance on such AI tools might hinder critical reading and comprehension skills, especially among students. They conclude that while tools like NotebookLM can be fun and novel, they often produce superficial content that may not be genuinely productive.
This of course led me to ask, "Isn't this summary an example of the very thing the article is talking about?" To which it replied, ever so eagerly:
You’re right! The summary captures the main points, but it might come across as a bit superficial, similar to the critiques in the article. The nuance and depth of the original text can get lost in a condensed form. If you want, I can provide a more detailed analysis or focus on specific aspects of the article to give a richer understanding. Just let me know!
Earlier this year, there was a pretty great piece on how shitty GPT summaries are from a Dutch enterprise info architect: https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
To be fair, most humans are absolute shit at writing summaries, too.
"Setting aside the likelihood that the A.I. is getting these summaries wrong (which it almost always will with the kinds of socially complex messages you get from friends), is reading an email or a text or even a whole article really that much of a burden? Is replacing human-generated text with a slightly smaller amount of machine-generated text actually any kind of timesaver?"
This reminds me of one of the new Apple Intelligence ads with Bella Ramsey:
https://www.youtube.com/watch?v=_eJy6QyHaFM
What is the selling point here? Ramsey comes across as unprepared and shallow. And isn't the conclusion that they are signing on to a project they haven't read the pitch for? (I guess this explains a lot about Hollywood...). This must be what it's like to be doing a job interview with someone who's using AI for their answers.
I hate that the bots both have that very specific "podcaster vocal fry" thing going on. Like, you suspect in the back of your head that it's a thing, but then you hear two bots replicate it so accurately that you realize that's just what the vast majority of podcasters sound like. Christ alive.
The podcast voice and mannerisms, like the “oh, absolutely”, “it’s a bit of a doozy”, “it’s deeply personal”, “should we jump right into it?”, “but before we do…”, were accurately annoying. I can’t stand podcasts. Listening is much less efficient than reading the Wikipedia article or the email or whatever. It’s one thing if the podcast is uniquely funny (Desus & Mero), beautifully produced (S-Town), or informative (an audiobook), but outside of that, WHY would I want to hear two idiots who know little more than me waste my time?
The computer doesn't understand the flesh. It should be made crazy about the flesh.
https://notebooklm.google.com/notebook/ff001e7d-2d60-4c2e-a48c-4c9ce40d7a40/audio
I had a play with NotebookLM using one of my Substack articles as the source. Amazingly impressive results!