2 Comments

This could be very useful for cable news presenters who are required to talk non-stop during a major crisis. It also brings to mind a trend happening over at Audible: it's being flooded with short "books" created in response to keyword searches by individuals who hire ghostwriters, narrators, and graphic artists. I'm not saying there's anything wrong with this - just that it makes it harder to sift through all of the offerings to find something written by someone who actually knows something.


"This is a pretty characteristic GPT-3 text, which is to say it reads like what a smart and contemptuous 10th-grader would produce based on the prompt."

As a whole, yeah, but it reads more like a (poorly done) Times piece about such-and-such being radicalized. I would assume that comes from the text being quasi-plagiarized here.

"The tech industry has done an admirable job of conditioning us to read and write like advanced robots."

The writing advice about polished prose seems, a lot of the time, to result in a lifeless wet mound of pulp: it's readable and conveys the information, but it doesn't do all that much. I think that's me agreeing with you. Hard to say, because reading the examples I'm experiencing near-instant Turing test collapse. (Maybe because I fussed with ELIZA long ago, and it wasn't much worse in quality than this? Or possibly because I have read enough ordinary people chatting over almost four decades that an individual voice underlying 'polished prose' can be picked out, or noticed as missing?)

I have seen two types of web prose that look machine-generated: a) the SEO-jamming link-text page, and b) the very generic 'recommendations' page, and none of this is quite good enough even for that. It's going to be hard to tell whether very badly written pages are generated by a human or a machine, and I assume some of that is going on. This stuff is nowhere near good enough to pass for human at length. (It's very similar in quality to the faces generated by DALL-E: the generator can't manage coherence.)

It's not distinct enough for Twitter either. Reading this post has convinced me of the opposite conclusion: in exactly the same way that AI driving isn't good enough, this thing is not up to snuff.

(I can think of a way it could be used, though: to generate conservative/centrist op-ed boilerplate, along with some liberal boilerplate. That would likely work because those folks are just rewording press releases and talking point releases anyway. So you could maybe populate part of Town Hall with this stuff. You could also, perhaps, use it as a skeleton generator and then essentially work out why it's wrong on paper while rewriting the skeleton. Maybe.)

elm

Articles of this type are basically 'if it bleeds, it leads' stories for the chattering precariat, yeah?
