Please post all future Read Max articles in a comical Italian accent that frequently talks about tomatoes, regardless of the topic at hand. Molto interessante!
Also, in the style of the Swedish Chef who frequently talks about lutefisk.
one useful way to think about ai is as (quite talented!) bias confirmation machines. as you point out, this is obvious and pretty funny when people use gen ai for ‘research’ -- which weirdly ends up supporting exactly what they suspected. the same quality is much more insidious when it comes to automating various social inequities (sexist resume-screening, racist criminal sentencing, etc).
ChatGPT not only lies about case law, it lies about people as well.
Fun example (details changed for anonymity): Aaron was trying to do due diligence on Bryan and asked ChatGPT for details on him. ChatGPT stated that Bryan is a very successful businessman and investor, and gave several totally fake but real-sounding companies Bryan had been involved in. Aaron looks more into these companies and cannot find any information outside of ChatGPT. Aaron decides (of course) to take to Twitter, denouncing Bryan as a fraud and a charlatan. Many replies later, Aaron deletes his Twitter thread, saying he now understands how ChatGPT works.
Bryan wants to sue ChatGPT for defamation but does not have the cash to take on OpenAI for "the principle".
We should be glad that, at the moment, ChatGPT does not appear to know who Max Read is.
Stoked on my new Bill Sienkiewicz "FUCK A.I." shirt! Wearing it at work today ;P
What I find molto interessante is how many of these ChatGPT enthusiasts also believe in far-reaching media conspiracies. Question everything (not the robot).
That’sa what I’ve been saying, and speaking of tomatoes, that question is bright red, it’s so ripe for the asking!
You should borrow the name of this guy’s newsletter for your recurring feature: AI Tool Report.
https://twitter.com/aitoolreport/status/1658919469995945986?s=46&t=3Uq-FYxCNCWsa2fAkVjGJw
Is tool a thing or a person?
I think you know 😂
As someone from the AI field, I found your post absolutely fascinating. AI is just a tool; the person using ChatGPT must know how to ask the right questions and exercise some diligence in drawing their own inferences. But if 20 years of tech has taught us anything, it's that putting the onus on the user is not going to work.
Oh dear! And I thought it was just kids using it to do their homework. I guess I'm a bit out of the loop. But at least now I know there's a Reddit for that.
"Rather, it's calculating what kinds of words, and in what kind of order, would be most likely to follow a request for relevant judicial opinions.": But, but, but Ilya Sutskever, OpenAI's "chief scientist", said:
"what does it mean to predict the next token well enough? ... it means that you understand the underlying reality that led to the creation of that token"
(https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/)
He wouldn't be a fraud or a fool, would he?
Like cryptocurrencies, the "large language models" business has become a carnival of scams. After all, people like Sutskever stand to make a lot of money from the widespread misconception that ChatGPT and the like are truly intelligent machines at last. For years - and I've been observing and occasionally using "AI" since the mid-80s - I've been telling people that whenever a product is touted as involving "AI", keep a hand on your wallet. That goes double these days.
I just don’t get why these people don’t just use Bing AI, which is basically the same but actually provides links to its sources.
I think they literally assume ChatGPT is just... always right. (And why wouldn't they, if they've only consumed slavering hype for the product?) But Bing is not always right either, even when it cites sources!
"the Secret Lives of Tumblr Teens" was an actually great article, written by Elle Reeve in February of 2016, for TNR.
https://newrepublic.com/article/129002/secret-lives-tumblr-teens