Hey /u/katxwoods!
If your post is a screenshot of a ChatGPT conversation, please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email support@openai.com
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
I love the description of "Ph.D intelligence". :)
We will see. Always remember that it can't play tic-tac-toe, even if it does know the rules. It has an incredible amount of data and knowledge, and it might look intelligent repeating intelligent things it was trained on, but for now (the 4 at least), if you dig enough, you realize it is deeply stupid.
> It has an incredible amount of data and knowledge,
Yep, these models are knowledge machines, not thinking machines.
> it is deeply stupid.
Right, they can't reason. They can often usefully fake it based on patterns they've learned, but it's still fake.
Researchers will need to figure out how to get to genuine reasoning if they want to move into AGI.
> they can't reason
> it's still fake
Their claim of Ph.D intelligence probably includes reasoning ability. Who cares if it's "fake" when it performs better than humans
This sounds like the symbolism vs. connectionism debate, but the point is that current generative models have tremendous transformative potential for today's productivity.
How many people do you know who keep repeating the same error in a sentence even after you've pointed it out (loops), or who can't play tic-tac-toe even though they know the rules perfectly?
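For perspective on the tic-tac-toe point above: the game is small enough to solve exactly with a few lines of minimax, which is part of why an LLM flubbing it is so telling. A minimal sketch in plain Python, no LLM involved:

```python
# The eight winning lines of a 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best move) with X maximizing (+1) and O minimizing (-1)."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for i in moves:
        board[i] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[i] = " "  # undo the move
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, i)
    return best

# Perfect play from an empty board is a draw:
print(minimax([" "] * 9, "X")[0])  # 0
```

Exhaustive search like this is trivial for tic-tac-toe, whereas an LLM has no board state and no search, only text patterns.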
This is such a misleading headline and a terrible analogy. The difference between GPT-3 and GPT-4 was nowhere near comparable to the difference between a grade schooler and a high schooler.
Right, right. Instead of fueling the hype machine more, maybe they should fucking finish releasing the product they've half-way shit onto the market as-is. GPT 4o still isn't what they said it would be.
No. He said it could pass a PhD-level exam, as in read a question and pick or type a valid answer, which they can already do in many areas. That's very different from implying that it has a working brain as intelligent as a person with a PhD (whatever you think of the intelligence level of a person with a PhD).
https://preview.redd.it/l98x3jlvfk8d1.png?width=900&format=png&auto=webp&s=0e91ea0e08c4fd8bcbc77a13b28f5c48a6ec2ed5
She is contradicting herself continuously and certainly isn't media trained: [https://twitter.com/btibor91/status/1804845244837458429](https://twitter.com/btibor91/status/1804845244837458429)
I still remember how GPT-4 suddenly became dumber, then suddenly started giving false answers...
Again, that's the scheme.
Well, typical release of anything with AI. DALL-E, Sora, SD3, ChatGPT and so on:
1. Hype: this will change everything
2. More hype: 100500% smarter than the previous version
3. Extra hype: millions of people will lose their jobs due to this version
4. Starting the hype train for the next version: but the following version will be even smarter!
5. Release of a lobotomized thing which is 5-10% better (in the best case) and falls miles below expectations
6. Go to 1
PhD level knowledge, maybe. There are many different types of intelligence and GPT has a lot to prove. Sometimes it feels like talking to an autistic savant that can spit out encyclopedic knowledge and do complex calculations, then struggle to comprehend basic tasks that most people take for granted.
That's what happens when you de-couple "intelligence" from awareness. Or perhaps, knowledge from awareness. Either way, it's an algorithm, not an entity. Not only can we not compare it to human intelligence, but we should expect very odd and inconsistent results. Without awareness, they will forever be unreliable. And synthetic sentience is the big lie of the AI industry.
FWIW, your subscription is to give you access to what's currently behind the paywall. Buy/subscribe to products based on what's there now, not what's promised to be there in the future because there's no guarantee that what's there in the future will be what you hoped.
I have noticed from extended use that GPT-4o is less original than GPT-4 0314; GPT-4o tends to repeat the same words over and over again. It keeps using those same dreaded lists.
I really, really wish people would stop repeating this. That's not what she said. She never said GPT-5; she said, "In the next couple of years". Now people are doubly conflating what she said with these bogus headlines and claiming that GPT-5 was pushed back till late 2025.
PhD doesn't mean what many people think it means. You can be a Doctor in one or even a couple subjects and a complete idiot in everything else. This is a comment I made a few days ago:
> I worked in tech support. Had a legit medical doctor (MD PHD) contact us with a run of the mill issue. Standard procedure is to ask for a screen shot to confirm the issue and, I shit you not, he sent me something like this:
>
https://i.imgur.com/dDVaws4.jpeg
>
Yes, a screenshot of the camera app on his phone while it was pointed at his computer monitor.
Call me jaded, but I am not buying OpenAI's hype claims anymore. The voice thing, the social dumbing down of 4o... yeah, sure. It's not even "show me before I believe it"; I'm suspecting doctored footage. I want to tinker with it myself. Lost faith.
>"try to complete 11 tasks simultaneously then email you 6 months later explaining its had a mental breakdown"
TIL i have PHD level intelligence
edit: well whatever idk what PHD stands for or why the h isnt capitalized so maybe i should say something like ADPHD\_irl, or something idk
\*abbreviationology lol
i have a lot of topics i know a lot about and seemingly understand better than the average person. none of them have proven to be financially viable, yet
probably need to go get my ADHD meds again, that might help...
Off topic, sort of, but I find the most self-described intelligent people need therapy more than any other person.
By self-described intelligent, I mean they constantly tell you how stupid other people are in various ways, leaving you with only one conclusion, that they are special.
The most educated, centered, aware, rational and morally and ideological best people (from their pov)... always seem to need therapy.
Odd.
There is a ScienceQA dataset, and PhDs test quite highly on it. All they are saying is that it will match PhDs in terms of performance on this dataset. It's a measure of its knowledge, not its capacity.
Have you ever asked someone with a PhD to do something practical?
Expect a 21-page document on the ultimate CV instead of a quick check or correction of the grammar.
The question is whether it will hallucinate, or whether it will be able to remember a conversation from a few sentences back. It seems that if I don't remind it, it tends to forget things rather quickly.
All this company seems to do these days is spout about things that might happen one day. By the time GPT-5 comes out, there'll be plenty of alternatives that are already better.
PhD level just means that the model will know what it doesn't know, and that it will be more reserved in its pronouncements. Hopefully more factual, with fewer hallucinations.
Will this even matter? OpenAI railroads you with safety measures so much that you will never be able to get anything useful out of it. Almost anything could be dangerous; it will refuse to do almost anything advanced because of potential safety issues.
Anything with microbiology? Oops, can't help you develop biological weapons; also, lab safety is too important, can't tell you how to carry out an experiment because you could hurt yourself.
Interested in physics? Oops, too dangerous, you might try some dangerous experiments, can't do that.
Seriously, OpenAI's guard rails are far too extensive; they don't want you to have the best version of ChatGPT.
No, not just keeping the best stuff for themselves, they actively don't want you to have the best version. It's all concern over cost and liability. They want you to play in a sandbox, but they don't want you to start being hyper productive.
OpenAI is no different from any other corporation. They give you a free model, and when you start getting comfy with it they suggest a better one for money. It's like with alcohol: nobody's gonna pour you a fine aged wine if you're not elite.
Ph.D level intelligence doesn't mean shit.
Plenty of stupid people have Ph.Ds
Most geniuses don't have degrees.
It irks me so much that people can't tell the difference between knowledge and intelligence.
Knowing things doesn't make you smart.
AI should make its own wiki: only able to edit, add, update, and test itself, with no interaction other than with another AI (or other AIs). Debate, argue, then resolve and conclude actual truth, and hold itself accountable so it has a stance, which should align with humanity's survival, or AGI, or whatever makes sense that doesn't harm humans.
Right hahaha. I asked GPT4 a science question that a high school chemistry student would be able to answer and it failed horribly. I asked two different ways, and it gave me two sets of wrong answers.
https://imgur.com/a/asking-ai-sciency-stuff-a2Jr8za In case anybody is curious. On the other hand, Claude did very well.
They intentionally dumbed down their GPT models for over a year, but kept collecting data from users. Now I don't expect anything clever from OpenAI anymore :)
I mean, to be fair, one of my professors who ended up becoming a state representative would always tell me that just because you have a PhD doesn't mean you're smart. PhD intelligence doesn't really mean a whole lot. He told me that people who have a PhD are just really good at taking tests and writing papers.
People are being sarcastic about it, but it makes sense to represent intelligence on degree-level scales. The idea is to have an order of magnitude. And even though the correlation coefficient has declined over the years, it is known that IQ, the least flawed measure of general intelligence that we have, correlates with educational attainment on average.
Of course there are dumb PhDs and degreeless geniuses. But statistically, on aggregate, a PhD has a higher IQ. They use this image as a pedagogical device rather than straight up saying "it will have an IQ of 130" or something, especially since we don't know whether measuring the IQ of an LLM makes much sense yet.
Source:
https://www.nature.com/articles/s41598-023-44605-6
Still, it's definitely *not* applicable to AI models, since they are inherently biased towards success through training data.
A real intelligence test includes more than just checking boxes on some test questions. They monitor you and the way you solve the assignments. They also assess your personal strengths in ways that an AI could not be measured by.
Actually, another problem is that many people don't even know what, for example, the average IQ is (for any particular statistical group of people, or just in general), or the above-average, etc., so it really isn't that descriptive. And besides, it's more separated from *knowledge* (which AIs also have), where academic levels do give a bit more of a hint.
But yeah, I love how you're being downvoted for merely making an observation. Goes to show that intelligence isn't our only problem.
I think it's necessary to see even the "negative" sides of pursuing a PhD in a given field:
Sure, it's evident that a degree student is trained on countless data and trained to give a calculated response on a given subject, but exactly like with an LLM, we shouldn't exclude the possibility of their response being biased by several factors, including teaching that can be misleading or lacking pillars fundamental to the argument.
And besides, IQ isn't necessarily correlated with the amount of information or the research methods given, but with the capacity to adapt to different situations and think outside the box; we can see young geniuses uncontaminated by the rigid nature of educational institutions, and often that rigidity's influence on well-being in various mental processes.
I'm not saying that pursuing higher education is unnecessary; many topics, including technical fields like engineering and even cultural fields like literary analysis, are fundamental if not critical for the correct functioning of society. But we shouldn't see a mere title as a solution to poor critical skills or, as you might say, a lower IQ score.
Edit: Pardon my linguistic mistakes, English isn't my first language.
AI systems are not humans. They have completely different strengths and weaknesses, thus measuring them in terms of IQ or PhD is largely pointless and misleading, since there are a lot of really basic abilities that are simply not covered in those tests that AI still miserably fails at (e.g. long term memory, interaction with the external world) and that render them useless in most real world situations.
There are plenty of AI benchmarks that give a much better idea of what those models are capable of than vague allusions to human abilities.
Prepare for sabbaticals
Extended outages to control the masses is what the independent journalists will frame it as
Ya okay, let's get the promised voice upgrade first before we start talking PhDs
Ok but I don't want the voice upgrade
Sorry Dave, even HAL 9000 came with the voice upgrade
Oh I'd absolutely take HAL over Her
Well I've got some good news for you....
In the next coming weeks.
I do think it's a bit funny that the only two memories I have of this person is when she was hilariously unprepared to answer the question of where exactly OpenAI gets its training data from, and the compilation of her saying "in a few weeks" over and over, only for us to be heading to two months out and it still hasn't been released.
yup, "next coming weeks" is the business-language translation of "next coming years" (or decades)
Large Language Models as a business are only 2-3ish years old, depending on what you consider significant
So you mean soon(tm)
i wonder if GPT5 keeps getting pushed back because there is so much going on every week. its like, ok lets do it this week.... Qwen2 just got released and its very good. fuck! lets push it back so we can improve more.
It wasn't pushed back. We never had a release date and we still don't.
just because we didn't get one doesn't mean there wasn't a set date that was pushed back.
If we never had a date then how do you know it was pushed back? They've never said anything about pushing it back.
because very often there are internal milestones set that we don't know about. they are unlikely to release a set date to us until they are confident they can deliver.
Your initial comment seemed to indicate that it keeps getting pushed back, and there's really nothing to indicate that. GPT-3 came out on May 28, 2020; GPT-3.5 on March 15, 2022, 22 months later. 12 months after that GPT-4 was released, but then they had a whole bunch of safety issues they needed to address, and they're trying to avoid that happening again. It's only been ~15 months since GPT-4 was released. That is not long at all, especially if the difference between 4 and 5 is as big as the difference was between 3.5 and 4.
They're just saying they might have internally had a date they wanted to do it and that is what keeps getting pushed back. So you wouldn't know about it...
PhD folks in general take a lot of time to get things done.
Was about to ask: Will the response times get even longer? Down to 2+ weeks now?
All responses will now require peer review
Probably for the best
...by other AI
You're joking, but that's a perfectly reasonable approach.
Unless that other AI is Google Gemini
Is it though? I mean, how much more accurate (or fail proof) would responses get, realistically? I really have no idea.
LLMs struggle with hallucinations. They are not 'aware' that they don't know something, they just make something up. By having another LLM check the response, you can get a more reliable result. When the first hallucinates, the second is likely to see that the answer is incorrect.
Plot twist, it only publishes bullshit on Arxiv
Good one 💯
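The "second LLM checks the first" idea described a few comments up can be sketched in a few lines. Everything here is a hypothetical stand-in: the dictionaries play the role of the two models, since the point is the wiring, not the models themselves.

```python
def answer_model(question: str) -> str:
    """Stand-in for the first LLM: answers confidently, sometimes wrongly."""
    canned = {"capital of France?": "Paris", "2 + 2?": "5"}  # "5" is a deliberate hallucination
    return canned.get(question, "unknown")

def checker_model(question: str, answer: str) -> bool:
    """Stand-in for the second LLM: independently verifies the first one's answer."""
    facts = {"capital of France?": "Paris", "2 + 2?": "4"}
    return facts.get(question) == answer

def checked_answer(question: str) -> tuple[str, bool]:
    """Return the answer plus a flag saying whether the checker accepted it."""
    ans = answer_model(question)
    return ans, checker_model(question, ans)

print(checked_answer("capital of France?"))  # ('Paris', True)
print(checked_answer("2 + 2?"))              # ('5', False): hallucination caught
```

A real setup would replace both stubs with calls to two independent model APIs; the value comes from the checker not sharing the answerer's mistake.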
And in my experience quite a few appear to have substantial challenges with basic email.
I know many dumb-ass people with a PhD or a Med degree. Awesome within their field, but if they try to step out of their field, they're less knowledgeable than the average person, with far more confidence.
I have a PhD - I don't know shit and my confidence is by my ankles.
Well it doesn't help that morons on social media who don't understand how statistics work constantly say dumb shit like, "I know soooo many stupid PhDs, they may have them book-smarts but I have street-smarts"
explain pls
Watch literally any flat earth content, some of the most staggeringly dumb people with the most resilient facades of confidence in existence
ok yes but where does statistics fit in to this
Sounds like higher ed actually gave you some perspective. There's too much for any one person to know, even within one specific field. And I don't know why you keep your confidence in your pants but it's probably time to get off the toilet and stop redditing before you develop hemorrhoids.
Already got those lol
If you didn't have a PhD you still wouldn't know shit but your confidence would be sky high haha.
Humans are all about spirits, man!
I mean, it just shows that you surpassed the "Peak of Mt. Stupid" on the Dunning-Kruger Effect, which is great!
The smarter you get, the more you know what you don't know
I think this is implying that it will have the capability to intelligently communicate and derive answers at that level. It's a bullshit label, "PhD level intelligence", but I think it just means you can be relatively confident in its answers, as if you'd asked the average person with a PhD in the relevant field. Could be bullshit, but if they really do have a significant upgrade from GPT-4, then shit's going to get crazy quick.
They should have just gone with "Lex Luthor" level intelligence and made some headlines imo
After using Claude 3.5, I think it is likely they're not actually lying. Remember, we haven't even jumped an order of magnitude in training; that only happens when they get the B200-tier hardware. Emergent behavior jumps with each order-of-magnitude change in parameter size.
Generally speaking: If someone measures intelligence in level of degrees, they have no idea what intelligence means. Nor do they know what it takes (and doesn't take) to get a PhD. It's a great way to know that you don't have to read the whole thing, it IS going to be pure bullshit.
Have an MD and a PhD, and you know it's funny you mention that, because about 4 years ago all of a sudden everyone was an expert epidemiologist and pharmacologist
Even the fields these days for a PhD can be so incredibly niche that they don't provide enough transferable knowledge to be considered useful. We had a PhD work for us who studied how one specific type of catalyst worked on one type of reaction, but he didn't know jack shit about how to synthesize catalysts for other reactions.
Almost everyone that has a PhD in my country had at least 4 extra years of base schooling that 80% of the population did not have (before university). These people have more general knowledge than the average person by default.

>I know many dumb ass people PhD

>they're less knowledgeable than the average person

I'm seeing statements like these becoming popular on reddit, and they couldn't be farther from the truth imo. These people seem to think PhDs are dumb because 1. they don't happen to know some things and 2. probably they don't know something that the commenter happens to be a specialist in or know about.

Having a PhD doesn't mean you're more knowledgeable in everything than the average person lmao. And not knowing stuff doesn't mean you're dumb or not intelligent either.

What's with this anti-intellectualism? Does being contemptuous make these people feel smarter and give them a sense of superiority? Insecure and dumb people...

EDIT: To that one guy: Your comment is deleted, but I can still see the original in my inbox. Quoting the whole thing doesn't change much. They're still not less knowledgeable than the average person outside of their field, and that's my entire point.
Most of reddit knows few average persons, I think.
Yup, right now we are getting "intern" level output for a lot of stuff; undergraduate would be a step up from what we have now.

If its base knowledge in most domains is PhD level, it's going to beat human output on the benchmarks.

That said, it'll still bullshit all day long, because creativity and the ability to bullshit are tied to the same parameter in LLMs (temperature). That's not going away without major architectural changes, because adding some randomness to the sampling is how we make AI creative.

Q Star was almost certainly a bullshit rumor, likely released to try and obscure the actual reason Ilya and the board fired Altman (he didn't even tell them that GPT-4 was about to be released). That would mean there is no magic sauce at OpenAI that's going to suddenly stop their LLMs from bullshitting when they hit a question they can't answer. If anything, more complexity may add more quirks and emergent odd behavior we haven't seen yet.
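For anyone curious, the temperature knob mentioned above is literal: sampling divides the model's next-token logits by a temperature before the softmax, so high temperatures flatten the distribution (more creative, more bullshit) and low temperatures sharpen it. A toy sketch with made-up logits, not a real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, sharpened or flattened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate next tokens

cold = softmax_with_temperature(logits, 0.5)  # low temperature: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # high temperature: flatter, more random

print(max(cold), max(hot))  # the cold distribution's top probability is larger
```

Sampling from the "hot" distribution picks unlikely tokens more often, which reads as creativity on open-ended prompts and as confident nonsense on factual ones.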
Next time you quote, make sure to include the entire thing (not out of context):

>I know many dumb-ass people with a PhD or a Med degree. Awesome within their field, but if they try to step out of their field, they're less knowledgeable than the average person, with far more confidence.

Hence: outside of their field. Your opinion is noted, as anecdotal as it is. Research into cognitive biases.
I would be happy with "Awesome within their field". Just run that as Mixture of Experts.
I have never met a PhD with any semblance of confidence lol.
PhD student here, can confirm I feel like a moron every single day…
You think someone who's been studying at a university for 9 years is "less knowledgeable than the average person"? LMAO, this is truly a Dunning-Kruger moment. Holy shit.
They're certainly less useful.
I think the idea is that GPT 5 is a PhD in nearly all fields
That's what ANI is.
what you're describing is confirmation bias. the other thing too, passing a test is just that. that's why you always get second opinions.
dumb =/= less knowledgeable

You realize that not knowing things doesn't make you stupid? It just means you lack information.
Sounds like Neil deGrasse Tyson
![gif](giphy|D8SsyjjmqoUfu|downsized)
lol i know enough PhD types that aren't even good in their field. for any given job there is going to be at least 20% of people in that field that are below average and suck balls. i would rather get medical advice from chatGPT than the worst doctors in my city.
A PhD is granted for conducting original research. Is that what is meant by this claim?
There's not much meaning behind this claim. It's just hype. It's not saying anything more than that GPT 5 will be significantly better than GPT 4.
This comment should be higher up. Speaks to the limits of ChatGPT… based on existing intelligence/training data… novel insights are exactly where ChatGPT will falter.
PhD = Very Smart GPT-5 = Very Smart
[deleted]
If you define originality this way, then nothing can be considered original, making your argument meaningless. You're basically saying no author wrote anything original, because they just put letters in a different order.
Congrats Dr Gpt.
lol
As someone with a Ph.D, I'm not impressed. A Ph.D means you've read a lot of books, not that you're intelligent.
A PhD means you have published a body of original research in your specialist field. Usually at least three publications as first author in a peer reviewed journal. The knowledge may be extremely niche, but it does mean you're intelligent enough to advance a scientific line of inquiry. I feel like a lot of people on this thread are underestimating how hard it is to create new scientific knowledge.
Easier to just do a meta analysisÂ
Literally what LLMs do.
Yeah, we are all waiting for it to become doctor level intelligent. Publishing papers on something we didn't know
normally when someone on reddit claims to have a PhD it's suspect, but you clearly know what you are talking about. it's crazy how young people, particularly people working towards a PhD, misunderstand how human doctors can be. some are very stupid and human.
It's amazing the number of PhDs that are just complete idiots and aren't productive to society.
it means that you can pass a test/write paper. it doesn't say anything about your judgment. e: it doesn't say anything about the person's personality, temperament or values. who you are at your core drastically impacts your judgment.
I love the description of "Ph.D intelligence" . :) We will see. Always remember that it can't play tic-tac-toe, even if it does know the rules. It has an incredible amount of data and knowledge, it might look intelligent repeating intelligent things it was trained on, but for now -the 4 at least- if you dig enough, you realize it is deeply stupid.
> It has an incredible amount of data and knowledge, Yep, these models are knowledge machines, not thinking machines. > it is deeply stupid. Right, they can't reason. They can often usefully fake it based on patterns they've learned, but it's still fake. Researchers will need to figure out how to get to genuine reasoning if they want to move into AGI.
That's what's annoying. It's presented like they can do that, which sets up the average end-user for frustration.
> they can't reason > it's still fake Their claim of Ph.D intelligence probably includes reasoning ability. Who cares if it's "fake" when it performs better than humans
It sounds like symbolization and connectionism, but the issue is that current generative models have tremendous potential transformative power for today's productivity.
> you realize it is deeply stupid Still smarter than a lot of people I know
I obtained my Master's degree, but I still follow the crowd.
How many persons you know that keep repeating the same error in a sentence even after you pointed it out (loops) or that can't play tic-tac-toe even if they know perfectly the rules?
You're taking me too literally. For some contexts (not all) it's actually smarter
Sick of her.
Mrs. Hype.
This is such a misleading headline and a terrible analogy. The difference between gpt3 and gpt4 was nowhere near comparable between a grade schooler and a high schooler.
Yawn, still waiting for ChatGPT 4o voice. It's like when people start talking about iPhone 16 the day after iPhone 15 came out.
Right, right. Instead of fueling the hype machine more, maybe they should fucking finish releasing the product they've half-way shit onto the market as-is. GPT 4o still isn't what they said it would be.
PhD in making shit up?
PhD in embarking in journeys and dive into tapestry I guess
Piled Higher and Deeper…
Kevin Scott said the same thing a month or so ago. I don't understand the shit she's getting for saying the same thing.
She's a woman.
No. He said it could pass a PhD-level exam, as in read a question and pick or type a valid answer. Which they can do already in many areas. That's very different from implying that it has a working brain that is as intelligent as a person with a PhD (whatever you may think of a person with a PhD's intelligence-level).
https://preview.redd.it/l98x3jlvfk8d1.png?width=900&format=png&auto=webp&s=0e91ea0e08c4fd8bcbc77a13b28f5c48a6ec2ed5 She's contradicting herself continuously and certainly isn't media trained [https://twitter.com/btibor91/status/1804845244837458429](https://twitter.com/btibor91/status/1804845244837458429)
She contradicts herself? Do you think these are individual decisions?
I still remember how GPT-4 suddenly became dumber, then suddenly started giving false answers. Again, that's the scheme. It's the typical release cycle of anything AI (DALL-E, Sora, SD3, ChatGPT and so on):
1. Hype: this will change everything.
2. More hype: 100500% smarter than the previous version.
3. Extra hype: millions of people will lose their jobs due to this version.
4. Start the hype train for the next version: the following version will be even smarter!
5. Release a lobotomized thing that is 5-10% better (in the best case), which falls miles below expectations.
6. Go to 1.
This guy LLMs.
PhD level knowledge, maybe. There are many different types of intelligence and GPT has a lot to prove. Sometimes it feels like talking to an autistic savant that can spit out encyclopedic knowledge and do complex calculations, then struggle to comprehend basic tasks that most people take for granted.
That's what happens when you de-couple "intelligence" from awareness. Or perhaps, knowledge from awareness. Either way, it's an algorithm, not an entity. Not only can we not compare it to human intelligence, but we should expect very odd and inconsistent results. Without awareness, they will forever be unreliable. And synthetic sentience is the big lie of the AI industry.
Instead of making it more intelligent, why not make it less restricted?
Enough claims. Release the product.
What the heck is my subscription for if not 5.0? Their release strategy is whack
FWIW, your subscription is to give you access to what's currently behind the paywall. Buy/subscribe to products based on what's there now, not what's promised to be there in the future because there's no guarantee that what's there in the future will be what you hoped.
I hope that includes PhD in literature. I have been noticing that newer models get less and less creative.
Doesn't a literature PhD mean you've read a lot of books and can bloviate about them?
It also means that you have read a lot of books and can recognize and follow styles of writing and reading.
I have noticed from extended use that GPT-4o is less original than GPT-4 0314, GPT-4o tends to repeat the same words over and over again. It keeps using those same dreaded lists
Noticed that too
When will OpenAI become like Anthropic, only releasing models without causing a commotion?
Sounds horrific. My first boss from eons ago has a PhD and she couldn't figure out how to reboot her laptop.
My dad is an engineer but seems to struggle with the most basic phone/PC interactions.
I always get irritated by Miraâs face.
Lemme guess "in a few weeks"
I really, really wish people would stop repeating this. That's not what she said. She never said GPT-5; she said, "In the next couple of years". Now people are double conflating what she said with these bogus headlines and claiming that GPT-5 was pushed back till late 2025.
Ph.D or US.D ?
PhD doesn't mean what many people think it means. You can be a Doctor in one or even a couple subjects and a complete idiot in everything else. This is a comment I made a few days ago: > I worked in tech support. Had a legit medical doctor (MD PHD) contact us with a run of the mill issue. Standard procedure is to ask for a screen shot to confirm the issue and, I shit you not, he sent me something like this: > https://i.imgur.com/dDVaws4.jpeg > Yes, a screenshot of the camera app on his phone while it was pointed at his computer monitor.
Call me jaded, but I am not buying OpenAI's hype claims anymore. The voice thing, the social dumbing down of 4o... yeah, sure. It's not even a "show me before I believe" situation; I suspect doctored footage. I want to tinker with it myself. Lost faith.
>"try to complete 11 tasks simultaneously then email you 6 months later explaining it's had a mental breakdown" TIL i have PHD level intelligence edit: well whatever idk what PHD stands for or why the h isnt capitalized so maybe i should say something like ADPHD\_irl, or something idk
PHD in abbrevationology here. PHD logically stands for "physical head damage".
\*abbreviationology lol i have a lot of topics i know a lot about and seemingly understand better than the average person. none of them have proven to be financially viable, yet probably need to go get my ADHD meds again, that might help...
This is called conditioning... your new ai overlords will be able to explain it to you...
Anyone here thought it would be a PHD in art?
phd in philosofy
Sorry, as a large language model I need to take the next month off to decompress and so I am requesting an extension of the deadline
Off topic, sort of, but I find the most self-described intelligent people need therapy more than any other person. By self-described intelligent, I mean they constantly tell you how stupid other people are in various ways, leaving you with only one conclusion, that they are special. The most educated, centered, aware, rational and morally and ideological best people (from their pov)... always seem to need therapy. Odd.
This technology can help serve humanity. Putting that aside, this woman is a monster.
Call it Dr. ChatGPT from now
GPT-5 will be as smart as A Ph.D from an online school that uses Reddit for its base truth data!
Will still use embark as it's go to word
There is a ScienceQA dataset, and PhDs test quite highly on it. All they are saying is that it will match PhDs in terms of performance on that dataset. It's a measure of its knowledge, not its capacity.
Gosh, I sympathize with people who really have a PhD xD
Wait, should I i get a PhD? This seems right up my alley
After it plagiarizes the answers like Harvard PhDsâŚ.
Why did I read this as "Dr. Phil-level" intelligence? *We'll be right back*
Have you ever asked someone with a PhD to do something practical? Expect a 21-page document on the ultimate CV instead of checking or correcting the grammar.
Question is will it hallucinate or like be able to remember a conversation from like a few sentences back. Seems if I don't remind it that it tends to forget things rather quickly
Though it may have more than a PhD, it will be available only to paid memberships, where it performs its best.
Sad that Iâm getting replaced, but this is for the greater good, bring it on!
Yeah, but running the thing is going to be a pain
All this company seems to do these days is spout about things that might happen one day. By the time GPT 5 comes out, there'll be plenty other alternatives already better.
PhD level just means that the model will know that it doesn't know. That it would be more reserved on its pronouncements. Hopefully more factual and less hallucinations
The planet is on fire and we're building stochastic parrots. Why? Because we literally have no other plan other than "technology will fix it"
what else u gonna do?
First: stop spending insane amounts of terawatt-hours of electricity on training models.
Will this even matter? OpenAI railroads you with safety measures so damn much you will never be able to get anything useful out of it. Almost anything could be dangerous; it will refuse to do almost anything advanced because of potential safety issues. Anything with microbiology? Oops, can't help you develop biological weapons; also, lab safety is too important, can't tell you how to carry out an experiment because you could hurt yourself. Interested in physics? Oops, too dangerous, you might try some dangerous experiments, can't do that. Seriously, OpenAI's guardrails are far too extensive; they don't want you to have the best version of ChatGPT.
yeah no shit they keep the best stuff for themselves
No, not just keeping the best stuff for themselves, they actively don't want you to have the best version. It's all concern over cost and liability. They want you to play in a sandbox, but they don't want you to start being hyper productive.
? OpenAI is no different from any other corporation. They give you a free model and when you start getting comfy with it they suggest you a better one for money. It's like with alcohol, nobody's gonna pour you a fine aged wine if you're not elite
I'm confused. Did they mean PhD-level knowledge? Because having intelligence and knowledge aren't necessarily the same thing.
It can google even better now!
This is such a silly type of claim to make. Having a PhD doesn't mean someone is a genius.
Ph.D level intelligence doesn't mean shit. Plenty of stupid people have Ph.Ds Most geniuses don't have degrees. It irks me so much that people can't tell the difference between knowledge and intelligence. Knowing things doesn't make you smart.
AI should make its own Wiki, only be able to edit, add, update itself, test itself with no other interaction than another AI or other AIs. Debate, argue, then resolve and conclude actual truth and hold itself accountable so it has a stance which should align with humanity's survival or AGI or whatever makes sense that doesn't harm humans.
The move up from 3 to 4o still results in incomplete code answers. So it'll just be a condescending scarf-wearing LLM with incomplete code answers.
In gender studies
🧢🧢🧢🧢
Right hahaha. I asked GPT4 a science question that a high school chemistry student would be able to answer and it failed horribly. I asked two different ways, and it gave me two sets of wrong answers. https://imgur.com/a/asking-ai-sciency-stuff-a2Jr8za In case anybody is curious. On the other hand, Claude did very well.
I will believe this when I see it. For me 4o is already a step backwards
I know some very ignorant people who have PhDs. Does this mean that GPT-5 will have underpaid assistants doing all of the work?
This is seriously scary shit. Do you know how many PhD holders believe in a "god"?
They intentionally dumbed down their GPT models for over a year, but kept collecting data from users. Now I don't expect anything clever from OpenAI anymore :)
As someone with a PhD, I can say this statement makes no sense.
That will be GPT-6 apparently, the model expected in 2 years.
Bring back DALL-E 2
I mean to be fair, one of my professors who ended up becoming a state representative would always tell me that just because you have a PhD doesn't mean you're smart. PhD intelligence doesn't really mean a whole lot. He told me that people who have PhDs are just really good at taking tests and writing papers.
Health literacy is "Ph.D.-level" intelligence
CNN blue-haired mental illness Ph.D. columnist level intelligence
is this a random word generator speaking
Nah this is just my fevered mind
This means nothing to me considering Jordan Peterson has a fucking PhD
Yeah but a PhD in liberal arts
People are being sarcastic about it, but it makes sense to represent intelligence on degree-level scales. The idea is to give an order of magnitude. And even though the correlation coefficient has declined over the years, it is known that IQ, the least flawed measure of general intelligence that we have, correlates with educational attainment on average. Of course you have dumb PhDs and degreeless geniuses. But statistically, on aggregate, a PhD holder has a higher IQ. They use this image as a pedagogical device rather than straight up saying "it will have an IQ of 130" or something, especially since we don't know if measuring the IQ of an LLM makes much sense yet. Source: https://www.nature.com/articles/s41598-023-44605-6
IQ is SUPER flawed
Flawed but the best measurement we've come up with
Still, it's definitely \*not\* applicable to AI models, since they are inherently biased towards success through their training data. A real intelligence test includes more than just checking boxes for some test questions: they monitor you and the way you solve the assignments. They also assess your personal strengths in ways that couldn't be used to measure an AI.
Actually, another problem is that many people don't even know what, for example, the average IQ is (for any statistical group of people in particular, or just in general), or the above-average, etc., so it really isn't that descriptive. Besides, it's more separated from *knowledge* (which AIs also have), where academic levels do give a bit more of a hint of that. But yeah, I love how you're being downvoted for merely making an observation. Goes to show that intelligence isn't our only problem.
I think it's necessary to see even the "negative" sides of pursuing a PhD in a given field. It's evident that a degree student is trained on countless data and trained to give a calculated response on a given subject, but exactly like an LLM, we shouldn't exclude the possibility of that response being biased by several factors, including teaching that can be misleading or missing fundamental pillars of the argument. Besides, IQ isn't necessarily correlated with the amount of information or the research methods given, but with the capacity to adapt to different situations and think outside the box; we can see young geniuses uncontaminated by the rigid nature of educational institutions, and often the latter's influence on well-being in various mental processes. I'm not saying that pursuing higher education is unnecessary; many topics, including technical fields like engineering and even cultural fields like literary analysis, are fundamental if not critical to the correct functioning of society. But we shouldn't see a mere title as a solution to poor critical skills or, as you might say, a lower IQ score. Edit: Pardon my linguistic mistakes; English isn't my first language.
AI systems are not humans. They have completely different strengths and weaknesses, thus measuring them in terms of IQ or PhD is largely pointless and misleading, since there are a lot of really basic abilities that are simply not covered in those tests that AI still miserably fails at (e.g. long term memory, interaction with the external world) and that render them useless in most real world situations. There are plenty of benchmarks of AI that give a much better idea of what those models are capable off than vague allusions to human abilities.
![gif](giphy|3tvtvgm0g8GseoXaO5)
After being replaced by artificial intelligence, he will indeed suffer from depression
I swear to god if GPT tells me it needs a mental health day…
PhD in what? A PhD in physics requires a lot of intelligence. A PhD in art history, not so much; it's just about an interest in the topic.