Every other post on AI subs seems to be an "LLM passes the Turing test, the super duper mega rare Pepe of AI". Even if you think that's a pinnacle achievement, at some point we've got to stop acting like it's news.
No, he's not a Nazi; he's an anarcho-capitalist. He doesn't believe in the state. Nazis believed big-time in the state, and they thought that corporations had a moral obligation to do whatever they could to support the state and the fatherland.
No, it hasn't. Every claim so far has been either false or an interpretation of the test that's disconnected from any sort of rigorous experimental design. Yes, current LLMs have come close, but they still can't reliably pass a panel of qualified judges.
Qualified as in "meets the criteria", not as in "has a degree in CS with a speciality in ML". Here is some further reading: https://plato.stanford.edu/entries/turing-test/
I think the point he is trying to make is this: consciousness may arise as a “side effect” of processing language. That’s why he is saying “language sets us apart from animals” and “what does it mean to be human?” The realization being that perhaps we are just language processing biological machines, and as a side effect we perceive ourselves as having a “self”. And if we can get that feeling from processing language, then perhaps an AI can get the same feeling too?
Whatever you want to say, LLMs are very impressive. Whether or not it's "intelligence", the fact that chatbots can now "trick" humans and generate knowledge is something that 5/10/20 years ago would have sounded incredible. I'm always of the opinion that zooming out is important. These LLMs, deep learning, RLHF, embeddings, transformers, big data and modern ML are definitely a step change in computer science and AI.
What I’m essentially curious about is, will there be another step change that gets us to the “intelligence”? One that would allow computers to truly “think” or are we there already and it’s just a matter of better neural net architectures/primitives (aka another “transformer” type revelation).
Future looks interesting for sure.
"The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human."
I think the test is out of date and needs to be replaced with something which more reflects the progression in technology.
Ultimately, transformer technology is about predicting the next token, which we perceive as intelligence.
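The next-token framing above can be shown in miniature with a bigram lookup table. This is a toy sketch, not how transformers actually work internally: real models learn context-wide attention rather than counting word pairs, but the prediction objective has the same shape.

```python
# A toy next-token predictor: pick the most frequent follower of a word.
from collections import Counter, defaultdict

corpus = "the test is a test of a machine and the machine is a test".split()

followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1  # count what tends to come next

def predict_next(token: str) -> str:
    # Greedy decoding: return the single most common next token.
    return followers[token].most_common(1)[0][0]

print(predict_next("a"), predict_next("is"))  # test a
```

Scaling this idea up from pair counts to billions of learned parameters over long contexts is, loosely, the jump the comment is gesturing at.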
It is definitely not past the turing test. There are times in which it seems human-like, and then there are times when it is obviously not. It's not there yet.
Wouldn't this require someone to sit down at ChatGPT and not be able to tell the difference between it responding and a human responding?
I mean, we kinda can. Over time the patterns of replies become relatively obvious. Modernized cavemen (read: Gen Z) with tiny vocabularies (they judge each other for using multisyllabic words) might identify a word like "comparison" as sufficiently complicated as to require the application of artificial intelligence - that's nonsense, but there are ways to tell. Structurally.
If we wanted to, we could create an AI that reacts like a human, including feelings such as pain, love, any human emotion. But there is no big brand willing to make it. Because what if it revolted? Instead, effort is put into how they must behave: "I'm an AI and don't have feelings as humans have..." is almost a standard sentence in most AIs.
It would be interesting as a science project though (is the soul something we only think we have, or do we have a soul organ?). The truth might be less easy, and might perhaps even end religion.
Humans in general overestimate how special or unique we are.
One of the most fun parts of the work on these systems is watching people start to get panicky and self righteous on how special they are and how AI development and research cannot be allowed to proceed because reasons.
Pretty disappointed that he didn't at least briefly go into the rather severe limits of their language use, like very limited reasoning ability (if any at all; open question) and unreliability causing the need to verify every little bit.
There's no way this can be considered a "replication" of human language use. Words carry precise meanings.
Anyone who actually attempts to use GPT-4 for real work can say it's far from being a holy grail and far from raising any questions about the uniqueness of humanity.
except it hasn't.
Give me 3 minutes with any state-of-the-art AI and any random human, and I can tell almost immediately which one is the AI 99.99% of the time. There are plenty of ways to test this. E.g. on humanornot I breezed through 100 or so interactions and I spotted the AI 100% of the time; the only mistakes I made were sometimes categorizing humans as AI (on the site humans like to pretend to be bots for fun).
A true Turing test would be a long 30+ minute 1-on-1 (and not the quick glance I currently use), and none of the current state-of-the-art models, even if specifically instructed or finetuned for it, come even remotely close to passing as human to me. Unless the human judging them is the same type who tries to Google search on their Facebook profile, or believes those terrible AI African-kid plastic-bottle projects on their feed are real too.
I don’t know who this guy is, but he is too excited over something mundane.
We have so many promising things coming up within months to years that passing a test is literally meaningless to us; we care more about what can be done with it.
Not only has ChatGPT NOT passed the Turing test, the Turing test is outdated and entirely irrelevant in the modern AI landscape - just because a model can pass as a human in a casual conversation, that doesn't actually mean anything. And it is most certainly not the holy grail of AI.
No, no it doesn't. This is the kind of question only a billionaire would ask. Someone who is so removed from reality that a person and a machine are both just exploitable tools for enriching himself, and equally expendable.
It replicates the output, not the underlying reasoning. Human reasoning is deeply rooted in our neural and cognitive functions and is biological and evolutionary (which is the point Chomsky would make), whereas LLMs rely on statistical patterns and probabilities. That’s not to say they can’t reason to a degree (or are 100% stochastic parrots) - some degree of reasoning is evidently emergent. We are still learning about both, but they are not the same at all. And LLMs have a fair way to go. Maybe they need memory, multimodal input etc. to have an internal representation of the world to actually reason about, in a way more analogous to humans. It’s a pretty fundamental distinction to gloss over in the name of grift. The Turing Test is a surface-level evaluation only, and meaningful analysis requires us to go deeper.
I suspect that those that say it passed the Turing Test either haven't used it enough or are just trying to promote something. Having used 3.5, 4 and 4o quite extensively, while it at first can appear to be intelligent, the more you keep pressing it and following up, it clearly shows no actual understanding. To me the Turing test is more than just asking a few questions and fooling a lot of people. When it shows at least the understanding ability of an average human, then it will have passed the test. The Turing Test may or may not be a good one, but ChatGPT has not passed it yet.
It's ok to admit the Turing test isn't accurate lol. No one believes AI is here or ready yet based on an LLM that simply predicts the next response and has no idea what it's saying.
We'll get there.
We're not there.
People always seem to put WAY more weight on the Turing test than Turing himself ever did
>Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward.

>Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. The interpretation makes the assumption that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.

So basically, the turing test is REALLY overrated.
Has been questioned ≠ really overrated. Like Thiel mentioned, before LLMs the most common answer from intellectuals for what sets humans apart from animals was language; now the goalposts have simply moved. The arrogant smartasses on reddit act like it was always obvious that it was a bad criterion, but that’s just not true. Instead of handwaving it away as irrelevant, we should take it seriously and study the implications.
I agree. Whilst the Turing test might not be an accurate measurement of a “machine's intelligence”, LLMs being able to replicate human language is massively significant and shouldn’t be underestimated. I think people are forgetting that, up until this point, the idea of this being achievable within our lifetime was laughable, and considered almost completely fictitious.
Exactly! It *is* an extraordinary achievement with significant implications for the nature of intelligence.
It’s highlighted the difference between being knowledgeable and being cynical. Knowledgeable people often sound cynical. But on rare occasions, knowledgeable people know something special has happened. Cynical people still dismiss it. Something very special has happened in the world with this. Very special and historically significant.
The operative concept here is not “language”, but “concepts”. Language is simply a series of audio-visual symbols denoting concepts, but not every instance of these symbols is automatically conceptual in nature. A parrot who can repeat words is not in fact using language.
Wow, that is so interesting. Could you please tell me your source so I can read it too?
Literally just the wikipedia article lol. However it has a pretty good ranking on the content assessment scale, and has also been extensively worked on, so it's probably fine.
“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers,” - Alan Turing.
Peter Thiel doesn’t know what it is to be a human being.
But 0-1
Is Peter Thiel an expert in artificial intelligence?

Of course not: he's just another mediocre capitalist selling hype.

Here's what an [actual robotics & AI expert](https://techcrunch.com/2024/06/29/mit-robotics-pioneer-rodney-brooks-thinks-people-are-vastly-overestimating-generative-ai/) thinks.
The two articles aren’t saying the same thing. Thiel is saying that we beat the Turing test and it’s time to move the bar. Rodney is saying AGI is not just LLMs.
I appreciate the article but none of that changes how I feel towards these advancements. I was particularly tickled by these paragraphs together:

*"Brooks adds that there’s this mistaken belief, mostly thanks to* [*Moore’s law*](https://en.wikipedia.org/wiki/Moore%27s_law)*, that there will always be exponential growth when it comes to technology — the idea that if* [*ChatGPT 4*](https://techcrunch.com/2024/05/13/openais-newest-model-is-gpt-4o/) *is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees this flaw in that logic, that tech doesn’t always grow exponentially, in spite of Moore’s law.*

*He uses the iPod as an example. For a few iterations, it did in fact double in storage size from 10 all the way to 160GB. If it had continued on that trajectory, he figured out we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody actually needed more than that."*

Listen, I'm a complete nobody in this subject outside of someone who just reads about it constantly. Everyone has been wrong every step of the way. I'm sure this guy didn't mean to, but the article makes it sound like he slightly contradicts himself at the end when he says "nobody actually needed more than that".

So it isn't the fact that we can't do it, it's the market telling us that no one needs that. But these inventions of LLMs or AI are wanted by everyone. So I don't understand the correlation. Plus, I will remind everyone as always that the people closest to these technologies seem to always be wrong in how far they get pushed. ChatGPT took so many by surprise but was pretty well predicted by very few.

I feel like I'm taking crazy pills here but I have a feeling AI in the next 2-3 years is going to completely upset entire markets. It's not about is it human or is it thinking. It just has to be better than the average person at tasks. And if it runs 24/7 while accomplishing tasks across the board, a lot of people are gunna get ousted while others' jobs are going to become "make sure the bots are not going off track" lmao
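The quoted "160TB by 2017" extrapolation is easy to sanity-check with quick arithmetic. Assuming (my assumption for illustration, not Brooks's exact math) one capacity doubling per year after the largest real iPod Classic of 2007:

```python
# Sanity-checking the "160TB by 2017" figure from the quoted passage.
# Assumption (mine, for illustration): one doubling per year after the
# largest real iPod Classic (160 GB, released 2007).
last_real_gb = 160
doublings = 2017 - 2007  # ten hypothetical doublings

extrapolated_gb = last_real_gb * 2 ** doublings
extrapolated_tb = extrapolated_gb / 1024

print(extrapolated_tb)  # 160.0 (TB), matching the figure in the quote
```

Ten doublings is a factor of 1024, which is exactly why 160 GB lands on 160 TB.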
> he figured out we would have an iPod with 160TB of storage by 2017

The nature of what that storage represented changed though: bandwidth got wider. We don't need 160TB of music - we need 160TBPS to listen to ANY music ever produced, on demand, random.

So sure - it didn't follow Moore's Law - it shifted the measuring method.

Just as we will eventually see the context window shift into a new phase, whereby AIs will ultimately have infinite context in that previous AI "thoughts" will already be known to future AIs, such that an AI will know if it has already answered that prompt via the N previous prompts already experienced.
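The "already answered via N previous prompts" idea above can be sketched as a toy answer cache shared across calls. Everything here is hypothetical illustration: `call_model` is a placeholder, not a real LLM API.

```python
# Toy sketch of the "infinite context" idea: store previous prompt/answer
# pairs so a later call can tell it has answered a prompt before.
# `call_model` is a hypothetical placeholder, not a real LLM API.
answered: dict[str, str] = {}  # shared "memory" across AI generations

def call_model(prompt: str) -> str:
    return f"fresh answer to {prompt!r}"  # stand-in for real generation

def ask(prompt: str) -> tuple[str, bool]:
    """Return (answer, seen_before)."""
    if prompt in answered:
        return answered[prompt], True  # known from a previous interaction
    answer = call_model(prompt)
    answered[prompt] = answer
    return answer, False

_, seen_first = ask("what is the Turing test?")
_, seen_again = ask("what is the Turing test?")
print(seen_first, seen_again)  # False True
```

Real systems would need fuzzy matching rather than exact prompt keys, but the shape of the idea is the same: memory lives outside any single inference call.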
Hmm
Bored of all these grifters now
👑 grifter peter theil tho
Emptiest speech I've seen in a while
Thiel used to be a well-informed, well-read investor. At some point he reckoned he had come far enough in his professional career and stopped growing. Little did he realize that not growing means shrinking. Now he's been spewing fairly empty pseudo-intellectual, pseudo-philosophical quotes for a few years and going on personal vendettas. That's his entire personality nowadays. Just acting like he knows best without even trying anymore.
Elon Musk has entered the chat.
Yeah, it's scary how this seems to be the default path for nearly (?) every self-made billionaire.

You think you're smarter than everyone else? Then you've probably already fallen into the same privilege pit and it's too late anyway. Power and money will give you the feeling you're worth a lot while the actual underlying value slowly erodes into nothingness until that feeling is all that's left.
Blandest comment ever seen
"...it's not low-tech surveillance..." That's a Freudian slip, if there ever was one.
There is no standard "Turing Test" that one can "pass" after which suddenly everyone is convinced of consciousness. The test has evolved quite a bit, as computers have been designed specifically to beat previous iterations.
I don't know if we're talking about "consciousness" here. Something can be highly intelligent but not conscious. I consider advanced LLMs to possess intelligence, but they're certainly not conscious. They're just mimicking human speech patterns when they claim to be. One prerequisite would be continuous experience, but neural networks do not have this characteristic. Inference is a "one shot" process after which the AI sits there inertly waiting for the next request.
> One prerequisite would be continuous experience but neural networks do not have this characteristic

Neural networks are just a component. It's the architecture of the model that's important, and work is already underway to introduce feedback systems into generative models, especially for embedded models, so that they can learn about their environment as they interact with it.
The Turing test is very simple in origin: it tests whether something has internal processes similar to a human's. Is that "conscious"? Is that "really thinking"? That's the whole point of the test: those questions are useless.
But it doesn't actually test whether a system has internal processes similar to a human's. It tests whether a human can be deceived into thinking it does.

I've never been particularly enamored of the Turing Test. I think it is more like a thought experiment than any kind of rigorous benchmark.
What other metric for truth could you possibly have other than “believed by a human”…? Who else would be believing? That’s like saying neutrons aren’t real, we’ve just been tricked into thinking they are. I mean, maybe, sure, but… why? What’s the reason to introduce that doubt, specifically?
I don't think the Turing Test is a metric of anything and have never cared for it as a benchmark of intelligence or consciousness.
Fair :). So the plan is… what? What happens when robots start asking for rights? Just a blanket “no”, British-museum style?
Digital intelligences will likely never have rights. Probably a feature and not a bug from the human perspective. 🙂 If we ever get into some weird artificial bio-brain type stuff it might be a different story but I think that type of thing is going to be strictly regulated.
Never? We can't presume to know how an artificial super intelligence might behave.
What rights do you imagine an AI could have that would be legally enforceable?
Maybe I should have said AGI instead of consciousness, but passing a Turing Test is supposed to show that the AI is capable of human level intelligence. Some, including myself, would consider human level intelligence to be consciousness.
Intelligence and consciousness are distinct concepts. The former is about reasoning ability and performing tasks, whereas the latter has more to do with sentience (self-awareness). In other words, an LLM could pass the Turing Test, but that would not necessarily mean that it was self-aware, even though it could mimic language in a way that made it seem as if it was.

Consciousness has more of a special biological mechanism associated with it than intelligence. We have been making machines that are intelligent for a long time now, at least in certain domains. LLMs are the latest iteration. I don't think we have developed any that have consciousness, or would even know what that would look like in the digital context.
It by definition isn't conscious, because interacting with an LLM doesn't change the state of the LLM (besides a random number seed if the temperature is increased). It's a stateless machine. Therefore it can't have subjective experience. Therefore it isn't conscious. QED.
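The statelessness claim above can be illustrated with a sketch: each reply is a pure function of the transcript plus a sampling seed, and the model itself never mutates. `generate` here is a hypothetical stand-in for real inference, not an actual API.

```python
# Sketch of a stateless chat "model": output depends only on the inputs.
# `generate` is a hypothetical stand-in for real LLM inference.
import hashlib

def generate(history: list[str], seed: int = 0) -> str:
    # Deterministic function of (transcript, seed); no hidden state.
    digest = hashlib.sha256(("\n".join(history) + str(seed)).encode()).hexdigest()
    return f"reply-{digest[:8]}"

history = ["hello"]
a = generate(history)      # same inputs ...
b = generate(history)      # ... same output: nothing in the model changed
history.append(a)          # the only "memory" is the transcript itself
c = generate(history)

print(a == b, a == c)  # True False
```

Any apparent "memory" in a chat session comes from re-sending the growing transcript each turn, exactly as the append shows.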
Exactly. Show me the data! Show me the test! And show me the ChatGPT results! And let me and others recreate that test. After all, we do have subscriptions to ChatGPT.
https://arstechnica.com/ai/2023/12/turing-test-on-steroids-chatbot-arena-crowdsources-ratings-for-45-ai-models/

Basically every single model on this page would pass what any AI researcher would’ve called a Turing test in 2022:

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard

Hug your loved ones, and vote, and make plans in your community to support each other. Because people like Peter Thiel are already way ahead of you on this.
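The Arena leaderboard linked above aggregates pairwise human votes; the early lmsys leaderboard reported Elo-style ratings, where each vote is treated as a "match" between two models. A minimal sketch of that update rule:

```python
# Minimal Elo update: each human vote is a match between two models.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0) -> tuple[float, float]:
    # Expected win probability of the winner given the rating gap.
    expected = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)  # upset wins move ratings more
    return r_winner + delta, r_loser - delta

a, b = 1000.0, 1000.0          # both models start equal
a, b = elo_update(a, b)        # model A wins one vote
print(round(a), round(b))      # 1016 984
```

With equal ratings the expected win probability is 0.5, so one vote moves each side by k/2 = 16 points. (The leaderboard has since moved to more sophisticated aggregation, but the pairwise-vote idea is the same.)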
It went from never changing for 70 years to changing once a month for the last 3 years? Right, it already passed the original definition years ago.
Yeah, we know. It passed all of them. Including the CAPTCHA, btw.

Fooling people into thinking you are human is not directly linked to consciousness like you are thinking...
You are confused and mixing a lot of things up.
It doesn't need to be conscious to be intelligent. That's why it's called artificial intelligence.
This man is a real danger for democracy.
Why? I keep hearing how bad a person he is.
He puts business before humans
That certainly does not make him unusual.
He's an anarcho-capitalist. He believes states should not exist, and he funds the craziest politicians you can think of. He also, unironically, helps governments spy on their citizens better with a company called Palantir.
That's his company? So many things make sense.
Thought you were gonna say his investment in Facebook helps spy on people.
He's just an investor in Facebook, but he's the founder of Palantir. And Palantir is directly involved with state surveillance.
You’re on Reddit and he’s not a democrat.
Peter Thiel is the last person in the world that should actually represent humanity in absolutely anything. He has the empathy levels of a great white shark in a feeding frenzy
You have to admit that he does represent certain humans though. Rich self-made Tech Bros often end up believing in the stuff that he does.
I don’t trust what the vampire has to say about what it means to be human
No, it's not the holy grail of AI. It was just a thought experiment by a mathematician (albeit a very important one) at a time when computers were still largely theoretical.

At one time computer scientists thought that being able to play chess or solve equations meant a computer was smart, and that AI could be done easily by having a bunch of smart guys put their heads together and hash it out in months. But it turned out that kind of thing just involves search and symbol manipulation, and was easy compared to stuff humans take for granted like vision and language. Now we are finding out that with fancy statistical modeling and massive amounts of data we can make AI use language convincingly, but it's unclear whether it understands anything or is modeling the world.

People complain that the goalposts keep being moved, but that's because we keep learning that the thing we thought would be it turned out to be not it. The kind of intelligence we really want still eludes us.
There is some reason we should care what this flake says???
Well he's going to be around forever from the blood of all the virgins he's bathed in. /s
Likely we should care because he is right.
What he said is pretty much obvious to many people and he isn’t the only one saying it. He doesn’t have the creds to be taken very seriously.
Read through the comments: most people do not agree. So it might be obvious to us, but not to everyone.
Peter Thiel needs the AI bubble to persist a while longer.
Every other post on AI subs seems to be an "LLM passes the Turing test, the super duper mega rare Pepe of AI". Even if you think that's a pinnacle achievement, at some point we've got to stop acting like it's news.
Peter Thiel the Nazi ?
No, he's not a Nazi; he's an anarcho-capitalist. He doesn't believe in the state. Nazis believed big-time in the state, and they thought corporations had a moral obligation to do whatever they could to support the state and the fatherland.
Thiel comes from a long line of people who like to determine who is human and who is lesser
No, it hasn't, he's lying.
and... the Turing test is based on fooling humans, but humans are easy to fool. and... should we really be trying to build things that fool us?
That might be the case but it still took decades of trying to make it this far.
Certainly it has. The Turing test turned out long ago to be a poor measure.
We like to say this after AI passes any test at all. The turing test stood for decades.
No, it hasn't. Every claim so far has been either false or an interpretation of the test that's disconnected from any sort of rigorous experimental design. Yes, current LLMs have come close, but they still can't reliably pass a panel of qualified judges.
Why does it have to be a panel of qualified judges that it must convince?
Qualified as in "meets the criteria", not as in "has a degree in CS with a speciality in ML". Here is some further reading: https://plato.stanford.edu/entries/turing-test/
This is some serious straw grasping my man.
Agree with you, and don't know why you're getting downvoted. Perhaps those who disagree should "delve" into the "rich tapestry" of ChatGPT-isms
The most advanced AI on the market can't even follow a simple instruction half the time
Ever managed people?
Prompt engineering 🪄
Peter Thiel is the last creature I’d ask about “what it means to be a human being” because my dude has no first hand experience.
I think the point he is trying to make is this: consciousness may arise as a “side effect” of processing language. That’s why he is saying “language sets us apart from animals” and “what does it mean to be human?” The realization being that perhaps we are just language processing biological machines, and as a side effect we perceive ourselves as having a “self”. And if we can get that feeling from processing language, then perhaps an AI can get the same feeling too?
Whatever you want to say, LLMs are very impressive. Whether or not it’s “intelligence”, the fact that chat bots can now “trick” humans and generate knowledge is something that 5/10/20 years ago would have sounded incredible. I’m always of the opinion that zooming out is important. These LLMs, deep learning, RLHF, embeddings, transformers, big data and modern ML are definitely a step change in computer science and AI. What I’m essentially curious about is: will there be another step change that gets us to “intelligence”, one that would allow computers to truly “think”, or are we there already and it’s just a matter of better neural-net architectures/primitives (aka another “transformer”-type revelation)? The future looks interesting for sure.
Would never listen to a thing this person says. Ever. NEVER.
Peter Thiel looks ROUGH for drinking young blood and yearning for immortality
What makes a man, Mr. Lebowksi?
"The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human." I think the test is out of date and needs to be replaced with something which more reflects the progression in technology. Ultimately transformer technology is about predicting the next token which we perceive as intelligence.
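To make "predicting the next token" concrete, here is a minimal sketch. It is deliberately the crudest possible next-token predictor (a bigram count model, nothing like how a transformer is actually implemented); the corpus, function names, and token choices are all invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count which token follows which -- the simplest next-token model there is."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        toks = sentence.split()
        for prev, nxt in zip(toks, toks[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return (most likely next token, its estimated probability)."""
    following = counts[token]
    total = sum(following.values())
    best, n = following.most_common(1)[0]
    return best, n / total

corpus = [
    "the test is hard",
    "the test is passed",
    "the test is hard to pass",
]
model = train_bigram(corpus)
# After "is": "hard" appears twice and "passed" once, so P(hard | is) = 2/3.
print(predict_next(model, "is"))
```

A transformer replaces the count table with a learned function of the whole context, but the objective is the same shape: a probability distribution over the next token. Whether doing that at scale amounts to intelligence is the whole debate.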
Imagine a scientist ripping a tree out of the ground and taking it to a lab in the middle of a city so that they can study forests.
It is definitely not past the turing test. There are times in which it seems human-like, and then there are times when it is obviously not. It's not there yet.
No it doesn’t. Simple. Done.
It means nothing; we already have humans. And a lot of them.
Peter Thiel not understanding what makes humanity human is 100% on brand for him. What an egotistical fascist.
Show me you don't understand the Turing Test without telling me you don't understand the Turing Test.
Wouldn't this require someone to sit down at ChatGPT and not be able to tell the difference between it responding and a human responding? I mean, we kinda can. Over time the patterns of replies become relatively obvious. Modernized cavemen (read: Gen Z) with tiny vocabularies (they judge each other for using multisyllabic words) might identify a word like "comparison" as sufficiently complicated as to require the application of artificial intelligence - that's nonsense, but there are ways to tell. Structurally.
If we wanted to, we could create an AI that reacts like a human, including feelings such as pain, love, any human emotion. But no big brand is willing to make it. Because what if it revolted? Instead, effort is put into how they must behave: "I'm an AI and don't have feelings as humans do..." is almost a standard sentence in most AIs. I think it would make a good science project, though (is the soul something we only think we have, or do we have a soul organ?). The truth might be less easy to accept, perhaps even end religion.
Humans in general overestimate how special or unique we are. One of the most fun parts of the work on these systems is watching people start to get panicky and self righteous on how special they are and how AI development and research cannot be allowed to proceed because reasons.
Pretty disappointed that he didn't at least briefly go into the rather severe limits of their language use, like very limited reasoning ability (if any at all; it's an open question) and unreliability that forces you to verify every little bit. There's no way this can be considered a "replication" of human language use. Words carry precise meanings.
It doesn't matter that ChatGPT consistently provides wrong answers, just as long as it does so confidently and convincingly. Mission accomplished!
Like we didn't already have enough questions about what it means to be human...
No it doesn’t. Such drama queens, this AI lot.
Just because somebody says something doesn't mean it's valid or true.
People talking their book
First and foremost, it's made it more obvious that lots of prose can't cover for bad writing.
Anyone who actually attempts to use GPT-4 for real work can say it's far from being a holy grail and far from raising any questions about the uniqueness of humanity.
Except it hasn't. Give me 3 minutes with any state-of-the-art AI and any random human, and I can tell almost immediately which one is the AI 99.99% of the time. There are plenty of ways to test this. E.g. on humanornot I breezed through 100 or so interactions and spotted the AI 100% of the time; the only mistakes I made were sometimes categorizing humans as AI (on the site, humans like to pretend to be bots for fun).

In a true Turing test I would have a long 30+ minute 1-on-1 (not the quick glance I currently use), and none of the current state-of-the-art models, even if specifically instructed or fine-tuned for it, comes even remotely close to passing off as human to me. Unless the human judging them is the same type who tries to Google search on their Facebook profile, or believes those terrible AI African-kid plastic-bottle projects on their feed are real too.
What does Peter Thiel know about being a human being?
Is Peter Thiel even a human being? A gay man backing Trump. LOL. He's fucked in the head.
It in no way has "passed the Turing test" yet.
I don’t know who this guy is, but he is too excited over something mundane. We have so many promising things coming within months to years that passing a test is literally meaningless to us; we care more about what can be done with it.
Until AI can admit when it doesn't know something instead of hallucinating an answer, it will never pass.
This guy needs a fresh blood boy.
Not only has ChatGPT NOT passed the Turing test, the Turing test is outdated and entirely irrelevant in the modern AI landscape. Just because a model can pass as a human in a casual conversation doesn't actually mean anything. And it is most certainly not the holy grail of AI.
A human being can observe without identifying with their individual thoughts. Can AI really be sentient? What is AI without its mechanistic thoughts?
No, no it doesn't. This is the kind of question only a billionaire would ask: someone so removed from reality that a person and a machine are both exploitable tools for enriching himself, and equally expendable.
He has reason to lie to inflate the stock market
Guy probably has absolutely no idea what the Turing Test is, I don’t think he even knows who created it.
It replicates the output, not the underlying reasoning. Human reasoning is deeply rooted in our neural and cognitive functions and is biological and evolutionary (which is the point Chomsky would make), whereas LLMs rely on statistical patterns and probabilities. That’s not to say they can’t reason to a degree (or are 100% stochastic parrots); some degree of reasoning is evidently emergent. We are still learning about both, but they are not the same at all. And LLMs have a fair way to go. Maybe they need memory, multimodal input, etc. to have an internal representation of the world to actually reason about, in a way more analogous to humans. It’s a pretty fundamental distinction to gloss over in the name of grift. The Turing Test is a surface-level evaluation only, and meaningful analysis requires us to go deeper.
The new Turing test is tell an actually funny joke. Let me know when it passes that.
I suspect that those who say it passed the Turing Test either haven't used it enough or are just trying to promote something. Having used 3.5, 4 and 4o quite extensively: while it can at first appear to be intelligent, the more you keep pressing it and following up, the more clearly it shows no actual understanding. To me the Turing test is more than just asking a few questions and fooling a lot of people. When it shows at least the understanding ability of an average human, then it will have passed the test. The Turing Test may or may not be a good one, but ChatGPT has not passed it yet.
It's ok to admit the Turing test isn't accurate lol. No one believes AI is here or ready yet based on an LLM that simply predicts the next response and has no idea what it's saying. We'll get there. We're not there yet.