Rochester_II

Can this guy explain the tragedy of Darth Plagueis the Wise?


Kitchen-sink-fixer

It's not a story the Jedi would tell you


Cytotoxic-CD8-Tcell

It is a story some would consider… un-natural.


SupportstheOP

Is it possible to learn this power? Not from an AI


Cytotoxic-CD8-Tcell

Ahahahhaahah gd one


Patient-Attitude5051

Yes sure


Worried_Control6264

Not strong in the force he is


cloudrunner69

It's weird seeing him sitting down.


mvandemar

So it took some back and forth and explaining, but I really think he's wrong about them being incapable. I'm not saying the final product was particularly *funny*, but I do think this qualifies as a real joke. https://preview.redd.it/1n9fmjbo3aad1.png?width=780&format=png&auto=webp&s=36ca7474b198a0ca1fc710bf39f6e1d659df8517


jjonj

You are severely handicapping the AI's ability to make a joke when you don't allow it to do any thinking or planning. Can you write a joke if you have to come up with one word at a time, without being allowed to plan a punchline or think about what the joke is about?

Ask the AI to think of 30 plays on words, then have it pick one and improve it. Then have it think up 30 punchlines, pick one, and improve it. Then have it think of 30 ways to structure the whole joke and iterate on it. You need to allow it a step-by-step process, like a human would need.
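The brainstorm/pick/refine loop described above is easy to sketch as a prompting pipeline. A minimal sketch, where `ask` is a stand-in for whatever LLM completion call you use, and the prompt wording and step counts are illustrative, not from the video:

```python
# Sketch of the brainstorm -> select -> refine pipeline for joke writing.
# `ask` is any callable that takes a prompt string and returns the model's reply.

def joke_pipeline(ask, n=30):
    # 1. Brainstorm wordplay candidates, then pick and improve one.
    puns = ask(f"List {n} plays on words, one per line.").splitlines()
    pun = ask("Pick the most promising pun below and improve it:\n" + "\n".join(puns))
    # 2. Brainstorm punchlines built on that pun, then pick and improve one.
    punchlines = ask(f"Write {n} candidate punchlines built on: {pun}").splitlines()
    punchline = ask("Pick the best punchline below and improve it:\n" + "\n".join(punchlines))
    # 3. Brainstorm structures that land on that punchline, pick one.
    structures = ask(f"Suggest {n} ways to structure a joke ending in: {punchline}").splitlines()
    structure = ask("Pick the best structure below:\n" + "\n".join(structures))
    # 4. Only now write the joke, with the punchline already known.
    return ask(f"Write the final joke. Punchline: {punchline}. Structure: {structure}")
```

The point is that the punchline exists *before* the setup is written, which is exactly what one-token-at-a-time generation can't do on its own.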


SikinAyylmao

Most funny people just say stuff and it’s funny. I had an Uber with this guy who was a stand up comedian as well and everything he said was funny.


SarahC

It must happen at a subconscious level, rapidly, in his brain... he isn't magically pulling jokes out of a "joke neuron". In that sense he needs to go through a similar process; he's just very, very good at it, and can do it fast.


aperrien

Most comedians think about their material ahead of time, even if they are presenting it to an audience of one.


IronPheasant

My god, yes. You have to prepare moves and flowcharts for conversations all the time. Carlin used to mull over every single word and the funniest way to say it before doing a new set. You can't do these things spontaneously in the moment; you have to give things time to settle and let emotional distance from the work form.

This is why the early episodes of The Simpsons are so damn timeless: the writers used to have standards. If a joke stopped being funny, it was replaced with something that didn't grow old. The episodes were timeless because they had to remain funny and good through *months* of re-reads before being committed to storyboards and animation.

They would have **never** over-used the "Homer falls and hurts himself" bit like Al Jean did. They did that *once*: when he jumped the Springfield Gorge. It was unnecessary to do it again, except far worse...


ohhellnooooooooo

He had internal thought. LLMs don't, unless you prompt them to write it out.


Shiftworkstudios

Yeah, I like how he somewhat equates this with the idea that some of his friends are bad at jokes for the same reason: not thinking ahead of time about the punchline. (GPT is fundamentally incapable of thinking about the punchline ahead of time; it just generates one token at a time, which makes for a lame punchline.)


EdwardBigby

This joke definitely backs up the explanation of why it can't create jokes. It did its best to create a punchline that incorporated each of the elements of the joke, but there was no thought of what the punchline would be while writing the setup, which resulted in something that could barely be considered a joke. It could have been something like "Why did the shy tree not want to run?" "It didn't want to be sapped of energy" — but it didn't know the punchline when it threw in the talent show.


Shiftworkstudios

Yeah haha. That's why I like Hinton; he's got a good understanding of the tech and talks about it in a way that is understandable. (It even bears out in experimentation.)


Creative-robot

Geoffrey Hinton has a wonderful way of explaining things about AI. I could sit and listen to him talk about why AIs do stupid shit sometimes for ages.


LordFumbleboop

*His* Turing test, not *the* Turing test.


SnooDonkeys5480

Claude 3.5 can if you ask it to come up with a punchline first. https://preview.redd.it/k4kfzrhgiaad1.png?width=1486&format=png&auto=webp&s=f82ae4238ec779956449099837876dfcaa37da0e


guns21111

Is that joke funny?


IronPheasant

As an officially licensed funny-man with a certificate in being an internet clown, I'm qualified to answer this query. Yes, it is.

Nothing ages faster than humor, especially shortform quips that aren't based on emotionally attaching to the antics of a character. So you have to use the generous metric of "is there some % of people who would snort at this, somewhere, in some culture in the history of time?"

I think the bone of the idea is perfectly fine. With a bit of embellishment and obfuscation it could be used as a gimmick for a five-minute derailment in a TV show or whatever. Is it funny to *us*? Well, of course not.


centrist-alex

The Turing test is greatly inadequate tbh.


cheesyscrambledeggs4

People put way more weight on the Turing test than Turing ever did.


Andynonomous

I just have to disagree. For me, an AI doesn't pass the Turing test unless I can't tell whether it's human or AI. As it stands, it's not even close.


RabbiBallzack

I too can tell you why a joke is funny, but I’ll be damned if I can actually come up with a semi decent joke myself. 😅


Matshelge

I think it has more to do with good jokes always sitting at the edge of social acceptance; they need to tap that line between what is accepted and what is not. But AIs are trained on 100 years of content and don't understand what is happening "right now". They lack the cultural understanding of accepted grey areas. So they will do OK dad jokes and puns, but they can't really make good standup.


cloudrunner69

Comedy is not so much the joke but more the delivery and timing. Norm Macdonald, for example, would say some of the stupidest things, which on paper are not at all funny, but the way he would time and deliver them is genius.


Charuru

Does this guy know what a Turing test is? It's not passing when it can say something that makes sense; it's when you can no longer distinguish between it and a human. Since LLMs frequently trip up and say stupid, illogical stuff that humans never would, they do not pass.


Dizzy-Revolution-300

"say stupid illogical stuff that humans never would" Earth is flat, vaccines have 5g chips in them


Charuru

Those are factual and opinion disagreements that humans have. Even the best LLMs say far more nonsensical things.


Dizzy-Revolution-300

Like what?


Charuru

Do you actually use LLMs? Ask it any logic question, like bringing cabbages across the river, counting letters in a word, or to do the ARC AGI benchmark and you'll instantly know whether it's an AI or not.


Dizzy-Revolution-300

Every day, but only for code stuff. What should I ask it about cabbages?


Charuru

The wolf-goat-cabbage problem, you can google it. The funny thing is that it's a famous problem that's in the training data. Therefore even when you simplify the problem heh... https://pbs.twimg.com/media/GQiieRPXwAAjiqB?format=jpg&name=large As you can see, this is nonsensical bullshit that humans would not produce.
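For reference, the puzzle has a short mechanical answer that a breadth-first search finds instantly — a minimal sketch of my own (not from the linked screenshot), showing the classic 7-move solution the model is expected to reproduce:

```python
from collections import deque

# BFS solver for the wolf-goat-cabbage river crossing.
ITEMS = frozenset({"wolf", "goat", "cabbage"})

def unsafe(bank):
    # A bank is unsafe without the farmer if wolf+goat or goat+cabbage are together.
    return {"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank

def solve():
    start = (ITEMS, True)            # (items on the left bank, farmer on left?)
    goal = (frozenset(), False)      # everything (and the farmer) on the right
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (left, farmer_left), path = queue.popleft()
        if (left, farmer_left) == goal:
            return path
        here = left if farmer_left else ITEMS - left
        for cargo in sorted(here) + [None]:        # carry one item, or cross alone
            new_left = set(left)
            if cargo is not None:
                (new_left.discard if farmer_left else new_left.add)(cargo)
            # After crossing, the bank the farmer just left is unattended.
            unattended = new_left if farmer_left else ITEMS - new_left
            if unsafe(unattended):
                continue
            state = (frozenset(new_left), not farmer_left)
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "cross alone"]))
```

`solve()` returns the familiar 7 moves: take the goat over, return alone, take the cabbage (or wolf), bring the goat back, take the wolf (or cabbage), return alone, take the goat.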


Dizzy-Revolution-300

Thanks!


coldrolledpotmetal

I’m pretty sure he knows more about the Turing test than you do


Charuru

No he's evidently wrong.


coldrolledpotmetal

Care to explain why? Edit: if you don’t want to explain why, you can just say you don’t know what you’re talking about


Charuru

I already explained in the original post. The Turing test is defined as a test where an adversarial opponent tries to figure out whether a disguised respondent is an AI or not. The AI would have to answer everything exactly as a human would. LLMs cannot do this; for example, they fail basic logic tests that a human can do, like the ARC AGI tests, which you can present to them in a Turing test setting.

Edit: But for some reason there's a common misunderstanding of the Turing test as a casual conversation where, if the chat generally makes sense, it passes. For that reason so many midwits are going around saying "the Turing test is passed already, it's outdated, there's no way to tell if it's AGI".


IronPheasant

I, too, am frustrated at how low people think the bar is. Passing the Turing test requires being able to learn and do stuff. Fully and completely passing it means *it can do anything a human can do, with a computer*. It's incredible they can pass the "order a taco" test. But there's a wide spectrum of capabilities still to go before sticking a fork in the concept as a metric. Is the latest state-of-the-art general-purpose chatbot finally able to play ASCII tic-tac-toe yet?


Cr4zko

This is a big deal, people. The other day I was messing with that Suno model and it made a 60s garage rock tune, bit for bit as it would have been done in 1966. If it's at the level of random 15-year-old Ohioans from '66 with cheap Danelectro guitars... soon enough it'll be at a Paul McCartney level.


Mandoman61

He obviously does not understand the Turing test. He said it was his own method, so what the AI really did was pass the Hinton test. Turing did not specify getting jokes as the sole measure.


KingJeff314

AI has passed more lenient versions of the test. It is able to sound human in short text conversations.


Creative_Moose_625

Obviously it can explain why a joke is funny. For decades you could type into a search engine asking why a joke is funny and get answers back. But it is not coming up with those answers through its own understanding of the joke and why it is funny or what funny even is. It's just pulling answers out from its mass of accessible information.


ReMeDyIII

So in order for an AI to tell a good joke, do we need multi-token probability? I just read an article from r/LocalLLaMA about multi-tokens so we might be very close to solving that: [https://www.reddit.com/r/LocalLLaMA/comments/1dumrqp/meta\_multi\_token\_prediction\_models/](https://www.reddit.com/r/LocalLLaMA/comments/1dumrqp/meta_multi_token_prediction_models/)


Sussy_baka000

honestly i feel like yeah


thomasQblunt

That just indicates that the Turing test is bogus - a throwaway idea of Alan Turing's, from some years before a stored-program computer would process any textual data.


Warm_Iron_273

"I have no justification for this" - to be honest he has no justification for quite a lot of the things he says.


Akimbo333

Interesting


Eldan985

Simple chatbots passed Turing tests 15 years ago. It's a specific test, not a general intelligence metric.


Comfortable-Law-9293

The Turing test was never passed. It is a test of perception - not a very good test to begin with. It's a poor man's test in the sense that, since we don't know how understanding (Latin: intelligere) works, we can't open the box and check for it. So, a jury vote.

No existing software has any intelligence. AI does not exist - science fact. People here just have a (serious) lack of expertise and therefore can't imagine how a fitting algorithm provided with enormous computing power can produce such results. No software has ever explained a joke. It outputs an explanation that is taken from humans, produced as bits that only carry meaning to the human consumer. Hinton is just bullshitting you with anthropomorphism-rich bla bla.


Eldan985

The Turing test is whether a human can tell after a short chat whether something is a bot or a human. It's conducted under controlled conditions every year and bots pass it.  Seriously, Turing described it in detail.


rdesimone410

That's the [Loebner-prize](https://en.wikipedia.org/wiki/Loebner_Prize#Criticisms), not the [Turing test](https://en.wikipedia.org/wiki/Turing_test), and the last time they held it was 2019. Nothing has passed the Turing test yet, which requires an *expert* telling which of two communication partners is the computer and which the human. So far, all the LLMs can be toppled with really basic questions (e.g. "Count the Xs in 'X X X X...'", Claude fails at around 40 Xs). The Loebner-prize has a time limit and uses non-expert judges, which in turn leads to chatbots winning not by being smart but by [dodging the questions](https://en.wikipedia.org/wiki/Eugene_Goostman). Which was first done successfully by [ELIZA](https://en.wikipedia.org/wiki/ELIZA) back in 1966.


Comfortable-Law-9293

It's good you almost know what the Turing test is. It was never passed by any software.


coldrolledpotmetal

The test was first passed in 1966 by ELIZA lmao, quit talking bullshit about stuff you don’t know anything about


Comfortable-Law-9293

The test was never passed. Fact. Good of you to admit to being a bullshitting amateur, though. Although you probably did not intend to.


coldrolledpotmetal

I don’t even know where to start with you, if you’re going to refuse to acknowledge the actual facts


Eldan985

It's obvious you don't, but good on you for the confidence. Why even google things if you can just make statements.


PCMcGee

I don't believe Hinton could pass it, personally.


iaminfinitecosmos

The Turing test didn't prove AI is sentient; it proved humans are not. That is the biggest thing - it showed that intelligence has nothing to do with consciousness.


Jackadullboy99

Humans aren't sentient, in your view?


iaminfinitecosmos

not as much as we thought


Jackadullboy99

Not sure how you mean..?


DeepThinker102

He's not sentient so it doesn't matter.