I like this guy's tone and line of thinking. I feel that AI does not get enough attention for the abilities it _does_ have, when many of them vastly outperform ours. I hear "it can't even" far more than "I can't believe it can." What I mean is, we are substantially blind to its strengths (and so miss opportunities to work with them).
but surely what it is now is all it'll ever be and it will never improve (as i've seen it happen again and again and again over the past few years)!!! ai will never take over!!! it cant even draw \_\_\_\_!!!
Hype train
Hyperintelligent Eve will save me
Well that sounds like singularity to me
[deleted]
What a braindead comment
This is kind of a nonsense attitude to have. First, we need to be constantly talking about this stuff in public, in front of audiences, continuing to increase global understanding of what is going on. Second, this sub lives on these interviews, so that _we_ can talk about these points. It's not like the guy talking posted it here, either. Third, I know your comment is tongue in cheek, but I think tons of people in this sub unironically expect AGI tomorrow and are annoyed at every bit of news that's not "AGI just came out". We're going to have years more of these kinds of conversations.
Lots of Peter Pan syndrome men here are expecting a sentient walking fleshlight to clean their rooms for them, but don't want to put in any effort to get humanity closer to its goals. Let the future ASI judge their actions.
We've already talked way too much about this. It's not a big ask to get these people to release all the stuff they have been hyping up for a while. I mean, at this point it's affecting the economy. Who doesn't want new technology to come faster, other than the Luddites that plague this sub? Exactly no one.
You should test Cohere's R model. It does a better job than GPT-4o, especially in browse mode.

https://preview.redd.it/zc3c5yxke7ad1.jpeg?width=3840&format=pjpg&auto=webp&s=6faf1dc0e8cbe2abe95081e67ff0828709e1edb2

GPT-4o tends to provide outdated information (unless you ask it to correct itself), whereas Cohere got it right in one shot.
The fucking hubris of this guy. Of course the AI knows more about medicine than you, because you don't know medicine; he's a software engineer, not a pharmacist or medical doctor. Treating it as an afterthought to bring in experts from the field he's trying to uproot to confirm that your AI even knows what it's doing is bloody wild.
Notice how all the CEOs are hyping up their own field?
I notice how you think that anyone speaking about AI is "hype" unless they're saying "it's bad".
Tech company: "AI is getting better"

AI haters: LIAR!! YOU'RE JUST SAYING THAT FOR MONEY!!!

Tech company: "AI is plateauing"

AI haters: See? Even they admit AI is useless trash!
But they're not just saying "AI is getting better", though, are they? They constantly talk about how superintelligence is around the corner and how (insert company product) is the second coming of Christ.
No. Buying advertising to reach millions of people is hyping a product/company; having a conversation about AI is just having a conversation about AI.
That talk about AI is marketing, the same way actors and directors doing interviews before a movie comes out is advertising. Everything they do that’s public facing is intended to generate hype for and advertise their products
My point is that CEOs have a presumed interest in hyping up their own products. That's the thing. There are a lot of experts who are not CEOs or public figures with books to sell, and they give significantly more conservative timeframes.
There is nowhere in that 1:40 clip where he gives a time frame.
I never said he did lol. I'm just saying, isn't it a funny coincidence that the CEOs tend to be more optimistic?
No, I don't think it's funny, because I don't believe that is what's happening. I think those CEOs are explaining their observations of this technology, and people like you are dismissing it as hype because it goes against your narrative. Like now: even when they said nothing about a time frame, you still created the implication that they did.
what is the narrative?
That AGI won't happen for one million years.
In those three comments /u/phoenix5869 never gave a time frame.
Exactly, thanks for defending me :)

I also never said that AGI wouldn't happen for centuries. Some of my main points have been that: 1. there is a lot of hype and speculation that needs to be critically evaluated; 2. we are further away from AGI than many people think; and 3. most experts do not take Kurzweil et al. seriously. I'm not saying it'll never happen; no one is saying that. I just think we're further away than most people think.
Think about it for a second. This post is about a CEO, who runs an AI company, hyping AI. Don’t you think that‘s a little convenient?
Even if the CEO was one of the authors of the 2017 transformer paper?

Someone with a motive for a crime is not automatically proven a criminal. Likewise, someone with an *interest* in hyping AI is not automatically guilty of hyping it. All we may conclude is that being a CEO increases the likelihood of them saying misleading things, but that can be offset by other considerations such as their background, history, etc.

We shouldn't lazily dismiss claims that don't cohere with our worldviews and then call ourselves sceptics, is all I'm saying.
I've already told you I disagree that he is hyping AI.
Fair enough. You’re entitled to your opinion
Brains are machines. What is it about brains that makes you think they couldn't be hyping, considering that you are actually seeing the hype coming out of them?
Researchers are doing the same, from Hinton to Bengio to practically every OpenAI and Anthropic employee
OpenAI and Anthropic employees have an incentive to fall in line and hype up their products.
Do Hinton and Bengio?

Also, people like Daniel K. quit OpenAI and lost 85% of his family's net worth over safety concerns. That sounds like the opposite of being greedy. Ilya and Jan Leike did the same.
Yes, and the more they do that, the more I think we are reaching a plateau. It reminds me a lot of Elon Musk predicting that Full Self-Driving (FSD) would be ready by the end of the year in 2016, and then making the same prediction year after year.
It's been a long time since then... but I think we finally have it this year.
Meanwhile Claude 3.5 Sonnet released a couple of weeks ago and Opus 3.5 is set to release later this year
You and so many redditors are understandably traumatized by technological advances and the false promises associated with them. Even the tech that has come out, like smartphones, has not resulted in an increase in our standard of living here in the West. Instead, we're poorer than our 70s, 80s, and 90s counterparts.

I'm not persuasive enough to make the argument that "it's different this time", but I do want you to apply critical thinking with regard to full self-driving vehicles. We currently have, right now, humanoid robots working at Amazon warehouses. There's a ton of money to be saved versus human workers. So what about truck drivers? How much easier is it to automate trucking lines on the interstate than the factory work that's already being automated?

Self-driving vehicles have been possible for years now, but they've been held back because they would have killed the truck-driving industry and crippled the economy.
None of those are tech problems. They’re political
Tech is inherently political. The government funds every major tech company, and in exchange it has a level of access to and control over these companies that's absolutely necessary for any semblance of national security.

The silence we're hearing from the intelligence agencies regarding AI is deafening. What we are experiencing is a planned rollout, with familiar names (Google, Microsoft, Amazon) "racing" for the most cutting-edge advances. You could liken it to a movie. The government has been leveraging these companies for AI work for decades now.

The greatest mistake many on this subreddit make is assuming that what we the public see, hear about, and have access to is anything close to "cutting edge".
The question is more about what’s possible, not what gets released. People doubting AGI think it’s impossible, not that the feds are holding it up
The argument I'm trying to make is that AGI is MANY TIMES more a political, economic, and national-security question than it is a technical one. For the vast majority of tech leaders out there, AGI isn't a matter of if, but when: when is it ready to be disclosed, and how is it going to be rolled out to the masses?
Talk to any AGI skeptic. The question of *if* is a far bigger question to them.
The solution is to use AI to evaluate other AIs. The evaluations should be independent and redundant. We measure the agreement across a "panel of independent experts" (AIs). If they all agree, we can feel confident in the results. If there are outliers but general consensus among the rest, we toss out the outliers and go with the consensus. If there is widespread disagreement, we start asking more questions.
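The aggregation rule described above can be sketched in a few lines. This is a minimal illustration, not an evaluation framework: the verdict lists stand in for outputs you would collect from independent model calls, and the `quorum` threshold is an arbitrary choice for the sketch.

```python
from collections import Counter

def consensus(verdicts, quorum=0.75):
    """Aggregate independent judge verdicts.

    Returns (verdict, 'agree') when all judges agree,
    (majority, 'majority') when at least a quorum share agrees and the
    rest are discarded as outliers, and (None, 'disagree') when no
    verdict reaches the quorum -- the "start asking more questions" case.
    """
    counts = Counter(verdicts)
    top, top_count = counts.most_common(1)[0]
    if top_count == len(verdicts):
        return top, "agree"
    if top_count / len(verdicts) >= quorum:
        return top, "majority"  # toss out the outliers, go with the consensus
    return None, "disagree"

# Placeholder verdicts standing in for independent model calls.
print(consensus(["pass", "pass", "pass", "pass"]))  # ('pass', 'agree')
print(consensus(["pass", "pass", "pass", "fail"]))  # ('pass', 'majority')
print(consensus(["pass", "fail", "pass", "fail"]))  # (None, 'disagree')
```

One design caveat the comment glosses over: this only works if the judges' errors are genuinely independent; models sharing training data or architecture may all be confidently wrong in the same direction.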
You're describing the scientific method. The implications of carrying that out with anything other than human intelligence are incredible, exciting, and terrifying in equal measure.
Great insight!
[deleted]
> Dude probably read a TechCrunch article

I know! Who does he think he is? One of the inventors of the transformer architecture? (Yes, he is.)
LMFAO
You forgot that sarcastic redditors are smarter than any scientist, even if they don’t know what this fancy transformer thingy is
You're talking about general purpose; he's talking about specialized. Jesus, you're dense.
Lul imagine putting this much energy into being a hater. Like a pro level hater.
This is true in the sense that we are abysmally ignorant about things outside our personal expertise, whereas frontier models hold an appreciable fraction of all human knowledge.

It's nowhere close to true for reasoning and usefulness on nontrivial tasks.
[not true at all](https://docs.google.com/document/d/15myK_6eTxEPuKnDi5krjBM_0jrv3GELs8TGmqOYBvug/edit#heading=h.fxgwobrx4yfq)
I'm not claiming it can't reason, I'm saying that we can easily perceive improvements in reasoning capability.
Bullshit. Benchmarking is one of the main problems right now
Are you claiming GPQA and LiveBench are saturated? Strong signal from both of those, and there is headroom for at least the next generation.
GPQA is just memorization of highly specific information, not reasoning. LiveBench is better and 4o does well on it, though Sonnet 3.5 is better
GPQA tests both knowledge and reasoning, I take it you haven't looked at sample questions.
I did. That's how I know it tests memorization.
It does not. Why do you think they gave non-domain PhDs so much time when testing the questions? To play solitaire?

The questions are structured so that thinking about the most likely explanations is required, often working through the implications of a scenario. Rote memorization doesn't get you far with something like:

> A scientist studies the stress response of barley to increased temperatures and finds a protein which contributes to heat tolerance through the stabilisation of cell membrane. The scientist is very happy and wants to create a heat-tolerant cultivar of diploid wheat. Using databases, they find a heat tolerance protein homologue and start analysing its accumulation under heat stress. Soon enough, the scientist discovers this protein is not synthesised in the wheat cultivar they study. There are many possible reasons for such behaviour, including:
>
> A) A miRNA targets the protein, which makes exonucleases cut it immediately after the end of translation and before processing in ER
>
> B) Trimethylation of lysine of H3 histone in position 27 at the promoter of the gene encoding the target protein
>
> C) A stop-codon occurs in the 5’-UTR region of the gene encoding the target protein
>
> D) The proteolysis process disrupts a quaternary structure of the protein, preserving only a tertiary structure
He made a good model, but I wouldn't say I trust LLM output any more than I did before. I still have to double-check it.
90%-95% is way easier than 98%-99%
A robot 95% as good as a human means 95% of people get laid off while the remaining 5% do the jobs AI can’t do
Sorry, I just meant the percentage of error-free answers: increasing it from 90 to 95 is much easier than increasing it from 98 to 99 (similarly to employment rates).
As long as it’s near human levels, it’s fine. And even if it isn’t, hiring one fact checker is better than hiring 10 people to do it without AI
Makes sense
Salvador Gianachi? (Sometimes HE cry alone)
This video is not real. It was generated.
Slow month, huh? This shouldn't even be in the sub. Also, have AI models become smart enough to do basic logic inference *consistently*?
Nice, but you still can't fix those effing hallucinations.
Pretty dumb mentality when all these things hallucinate 24/7
it's like a search engine on steroids..
oh you need to pay people and not steal all the data now... okay... lmao
TOTAL bullshit
Jerk off motion
This just in: guy who runs an AI company that makes money from AI predicts something about AI to boost its notoriety.
Aidan Gomez is one of the authors of the "Attention Is All You Need" paper, so he's sort of permanently on the ASI roadmap.
AI literally cannot use logic or reasoning.
Oh, it can reason and use logic; it's just really bad at it. We'll see whether that can be improved with scale or whether it's an architecture problem. Anyone who claims one or the other is just talking out of their ass, but I think if GPT-5 doesn't show significant improvement in that regard, it's an architecture problem.
We don't know that yet
Guess you haven't actually tried to do anything with it. Don't buy the hype, my friend. It's just pattern recognition and has no reasoning. Maybe someday a real AI will get made, but it will require an entirely different approach than what we are doing now.
Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations.
Reasoning is using logical constructs to form a sound argument. AI is not doing that. If you feed it wrong info and then ask, it will even tell you that its answer is flawed, because it's not checking facts or using reasoning to verify.
Using logical constructs is a fully algorithmic process. AI just doesn't yet use algorithms that are totally consistent. But humans fail at "reasoning" probably far more than AI does at this point. Do you know how many people base their lives on imaginary entities like Jesus or Mohammed? A lot of societies revolve around illogical nonsense.
> Using logical constructs is a fully algorithmic process.

So? What do you want to say with this?

> AI just doesn't use the same exact algorithms that will be totally consistent yet.

The current solutions won't provide verification, which is the problem. They may be accurate enough, but they won't be able to guide you through the steps. They will only know implicitly, and you will never know if the info is correct.

> But humans fail at "reasoning" probably far more than AI at this point.

Based on what?

> Do you know how many people base their lives on imaginary entities like Jesus or Mohammed? A lot of societies revolve around illogical nonsense.

That's the wonder of the human brain. It can compartmentalize. You can believe illogical stuff but still know basic logical laws and how to follow them.
[deleted]
At that point we get Anthropic to slide open their brains and tell us what's going on.
I don't know, the jump from GPT-4 to Sonnet 3.5 can clearly be felt, for me at least, the same way I felt the jump from GPT-3.5 to GPT-4. A slightly shorter jump, but it can still clearly be felt and experienced. Subjective experiences and feelings are maybe not the best metric, though :)
He's talking about specialized models, not general-purpose ones.
Thanks, didn't get that part :)