15 Comments
Frank Lantz

Golden Gate Claude Max is the best Max. Compelled to mention Golden Gate Claude in every post, no matter what the topic. So tragic, so relatable!

Max Read

I miss him …

pâTrīck :)

strawberry elephant (2007) is this real or photoshop @grok answer him *win+prtscn* it's literally perfect....

Kit Noussis

The thought occurred to me the other day that the guardrails and railroads for the AI appear to be doomed to fail, but you have articulated the consequences of this nicely.

I know the researchers are going to continue to try to understand LLMs and transformer models, but I highly doubt they are going to get anywhere with it. They want to understand them in order to improve them but they are already hitting walls. My intuition is that they will remain black boxes.

"Marge, let's take the mystery box. It could even be a boat! You know how much we've wanted one of those."

Neurology For You

Everybody loves White Genocide Grok, the helpful, racially controversial AI!

30 seconds later: We regret to inform you

John Encaustum

Totally ridiculous. And apparently, after they fixed it once, they did it again after a trivial social attack: https://smol.news/p/the-utter-flimsiness-of-xais-processes

Wm Perry

Kill the Boer seems like a missed opportunity for SA. No one would've blamed them; it would've been a sad but ultimately beneficial footnote.

Tom J

"this isn't happening... but it should"

grischanotgriska

Who today still speaks of the annihilation of the Armenians?

Jordan Nuttall

Hello there, Max. I've been a quiet observer of your posts; always interesting, thank you.

Happy new year!

I thought you may enjoy this article:

https://open.substack.com/pub/jordannuttall/p/laws-of-thought-before-ai?r=4f55i2&utm_medium=ios

grischanotgriska

I'm reminded of the incidents in the past year where everyone's Instagram discover page was suddenly filled with gore and disturbing content, and Meta explained it as them "trying out a new algorithm." Alarming how much invisible work is done by these algorithms—whether they're serving some particular human interests (as in Grok) or exhibiting unforeseen emergent symptoms.

Jamie House

The Cosmos Institute is currently conducting seminars with philosophers on how to build truth-seeking into AI.

Links to papers are in my post if you are interested.

https://open.substack.com/pub/betterthinkers/p/philosophy-is-saving-ai-can-it-save?utm_source=share&utm_medium=android&r=5jekme

E2

I suspect AI can never have a reliable interest in truth as long as it has no direct experience of the real world.

Jamie House

Certainly not generative AI. One might have to argue the merits of what constitutes direct experience, but I suspect you are correct.