The thought occurred to me the other day that the guardrails and railroads for the AI appear to be doomed to fail, but you have articulated the consequences of this nicely.
I know the researchers are going to continue to try to understand LLMs and transformer models, but I highly doubt they are going to get anywhere with it. They want to understand them in order to improve them but they are already hitting walls. My intuition is that they will remain black boxes.
"Marge, let's take the mystery box. It could even be a boat! You know how much we've wanted one of those."
I'm reminded of the incidents in the past year where everyone's Instagram discover page was suddenly filled with gore and disturbing content, and Meta explained it as them "trying out a new algorithm." Alarming how much invisible work is done by these algorithms—whether they're serving some particular human interests (as in Grok) or exhibiting unforeseen emergent symptoms.
Golden Gate Claude Max is the best Max. Compelled to mention Golden Gate Claude in every post, no matter what the topic. So tragic, so relatable!
I miss him …
strawberry elephant (2007) is this real or photoshop @grok answer him *win+prtscn* it's literally perfect....
Everybody loves White Genocide Grok, the helpful, racially controversial AI!
30 seconds later: We regret to inform you
Totally ridiculous. And apparently, after they fixed it once, it happened again following a trivial social-engineering attack: https://smol.news/p/the-utter-flimsiness-of-xais-processes
Kill the Boer seems like a missed opportunity for SA. No one would've blamed them; it would've been a sad but ultimately beneficial footnote.
"this isn't happening... but it should"
Who, after all, speaks today of the annihilation of the Armenians?
Hello there Max, I’ve been a quiet observer of your posts, always interesting, thank you.
Happy new year!
I thought you may enjoy this article:
https://open.substack.com/pub/jordannuttall/p/laws-of-thought-before-ai?r=4f55i2&utm_medium=ios
COYS
The Cosmos Institute is currently conducting seminars with philosophers on how to build truth-seeking into AI.
Links to papers are in my post if you are interested:
https://open.substack.com/pub/betterthinkers/p/philosophy-is-saving-ai-can-it-save?utm_source=share&utm_medium=android&r=5jekme
I suspect AI can never have a reliable interest in truth as long as it has no direct experience of the real world.
Certainly not generative AI. One might have to argue the merits of what constitutes direct experience but I suspect you are correct.