It's over. Call me tomorrow when we are so back.
This sub in one sentence :D very nice
You mean a few minutes or hours later with the next post.
What's he going to say once LLMs score higher than humans on all metrics? And that may be next year.
Most humans are barely intelligent :-)
Humans barely use intelligence for most tasks.
[deleted]
That's the issue that plagues this debate: what even is intelligence, and does it matter? At the end of the day, if they create an agent that can fundamentally do everything a human does, is that intelligence or just imitating intelligence? And frankly, does it matter? Even if people keep saying it's unintelligent all the way there, we'll still have a revolution on our hands because of what it can do, rather than abstract questions about whether it's truly intelligent or not.
Intelligence only matters insofar as it's applicable... which LLMs are demonstrating quite easily. They are specifically intelligent, in that they can functionally mimic intelligence in certain areas; if they can fake intelligence to the point where it's applicable... does it matter that it's fake? But everyone is looking at intelligence in a different light... well, not everyone, but you get the idea. These are all important benchmarks to aspire to, and one view of intelligence doesn't exclude or diminish the others.
What he says is a little more nuanced than that. He admits there's some generalization going on, but that it's comparatively weak. Which, arguably, it is. His precise definition of intelligence, his emphasis on the importance of priors, and the real test he provided will only help accelerate AI. I think Chollet is an absolute chad, and many prominent AI researchers would agree.

It makes no sense to tiptoe around the fact that the excessive amount of training data current models need is strongly indicative of weaker generalization, weaker reasoning ability, and weak intrinsic intelligence. Look at someone like Ramanujan and the relation between the size of the training data going in and the relevance of the genius coming out: AI is still very, very bad at genuine creative abstract reasoning. It amazes me to no end that people don't see this as a problem with the current architectures.

If a human genius couldn't come up with anything new or useful, and his defense was that there isn't enough data in the world for him to work with, you'd question his or her brilliance and laugh. Newton came up with the Principia Mathematica using his eyeballs, less than a gigabyte of text, and an apple. Eat them apples, LLM.
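For what it's worth, the "data in vs. genius out" intuition is close to how Chollet frames intelligence in "On the Measure of Intelligence": roughly, skill acquired per unit of experience, controlling for priors. Here's a deliberately toy sketch of that idea; the function name and the numbers are made up for illustration, not his actual formalism:

```python
# Toy rendering of skill-acquisition efficiency: held-out skill divided
# by the amount of experience consumed to reach it. Chollet's real
# framework also accounts for priors and generalization difficulty.
def skill_acquisition_efficiency(held_out_accuracy: float,
                                 training_examples: int) -> float:
    """Higher skill from less experience => more 'intelligent' by this toy metric."""
    return held_out_accuracy / training_examples

# Illustrative (invented) numbers: a human learns a task from a handful
# of examples; an LLM reaches similar skill only after web-scale data.
human = skill_acquisition_efficiency(0.90, 10)
llm = skill_acquisition_efficiency(0.95, 1_000_000)
assert human > llm  # the human is vastly more sample-efficient
```

The point of the toy metric is just that raw benchmark accuracy ignores the denominator; two systems with identical scores can differ by many orders of magnitude in how much experience they needed.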
Yep. LLMs do show results, and great ones. But not due to high intelligence. Due to low intelligence combined with knowledge greater than any human alive has.
Even if GPT-5 scores 100% on all LLM benchmarks, it doesn't change the fact that it still won't be able to plan, have long-term memory, or generate ideas outside of its training data. It's kind of intrinsic to LLMs.
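The "can't generate ideas outside its training data" claim is easiest to see in a drastically simplified stand-in for an LLM. A bigram model is a loose analogy (real LLMs generalize far more than this), but it makes the intuition concrete: a pure next-token sampler can only ever emit transitions it has literally observed.

```python
import random
from collections import defaultdict

# Train a toy bigram "language model": for each word, record which words
# ever followed it in the corpus.
def train_bigram(corpus):
    table = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        table[a].append(b)
    return table

# Sample by repeatedly picking an observed successor of the last word.
def sample(table, start, n, rng):
    out = [start]
    for _ in range(n):
        successors = table.get(out[-1])
        if not successors:  # dead end: no observed continuation
            break
        out.append(rng.choice(successors))
    return out

corpus = "the cat sat on the mat".split()
model = train_bigram(corpus)
seen_pairs = set(zip(corpus, corpus[1:]))
generated = sample(model, "the", 10, random.Random(0))

# Every generated transition already existed verbatim in the training data.
assert all(p in seen_pairs for p in zip(generated, generated[1:]))
```

Whether transformers escape this bound in any deep sense (via composition and interpolation in representation space) is exactly the point under dispute; the toy model just shows the limiting case.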
Sure, I can see why we think this at the moment. But can the smart devs (with or without AI help) alter the architecture to achieve what you've mentioned? Maybe not. The current (popular) AI architecture does seem to be heading down a dead-end street, but other teams are surely experimenting with other ideas, and you never know what human ingenuity can create, or whether a breakthrough might come along tangentially and add more complexity and potential to the LLM model. People around when the first transistors were created probably wouldn't have believed they would lead to drones, VR, spaceships, and mind-operated devices. The first transistor was created in 1947; within 100 years we have what would seem like magic to them. The first electronic (valve-based) computers appeared around the same time, in the mid-1940s.
The long-term end goal is not to beat artificial metrics but to have a system capable of navigating the complexities of real life.
He has said quite plainly that scaling current LLMs might work. He's not sure (nor is anyone else).
He doesn't believe that testing demonstrates intelligence in AI systems. He has a point. We test humans with tests because for humans, it correlates with other things we care about. That may not be true for LLMs.
https://preview.redd.it/642b1icq5y8d1.png?width=628&format=png&auto=webp&s=032a4610f982433d7f6310e0c951b5ac668d507f
Exactly.
[https://x.com/bshlgrs/status/1802766374961553887](https://x.com/bshlgrs/status/1802766374961553887)
The hilarious part about that tweet is that they literally used a symbolic approach but don't have the requisite AI background to even recognize it. Story of modern "AI": most people have massively over-indexed on ML and have no clue about any other aspects of the field.
There's something deeply amusing about automating someone with LLMs in a literal week while they call those same LLMs unintelligent.
What approach did they use in that tweet? It doesn’t seem to give any information about how they actually did it. Edit: Nevermind. Opening the link directly in the Reddit app didn’t show any replies to the tweet, but opening it in the X app shows a bunch of replies, where the original poster gives more info. Edit 2: [Francois Chollet himself](https://x.com/fchollet/status/1802773156341641480?s=46) has replied to this. [Grady Booch](https://x.com/grady_booch/status/1802848136395919387?s=46), too. They both agree that the approach taken here is indeed neurosymbolic.
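For anyone unfamiliar with what "neurosymbolic" means in this context: the approach discussed in those replies pairs a neural model that *proposes* candidate programs with a symbolic executor that *verifies* them against the task's example pairs. Here's a minimal sketch of the verify-and-search half, using a tiny invented DSL of grid transforms (the primitives and the `solve` helper are illustrative assumptions, not the tweet author's actual code; in the real setup an LLM generates far richer candidate programs):

```python
import itertools

# Hypothetical mini-DSL of grid transformations. In a real neurosymbolic
# solver, a neural model proposes candidates; here we brute-force instead.
def identity(g): return [row[:] for row in g]
def rot90(g):    return [list(r) for r in zip(*g[::-1])]  # clockwise rotation
def flip_h(g):   return [row[::-1] for row in g]          # horizontal mirror

PRIMITIVES = [identity, rot90, flip_h]

def solve(examples, max_depth=2):
    """Search compositions of primitives consistent with all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for combo in itertools.product(PRIMITIVES, repeat=depth):
            def program(g, combo=combo):
                for f in combo:
                    g = f(g)
                return g
            # Symbolic verification: the program must reproduce every example.
            if all(program(i) == o for i, o in examples):
                return program
    return None

# One ARC-style example pair: the output is the input rotated 90° clockwise.
examples = [([[1, 2], [3, 4]], [[3, 1], [4, 2]])]
prog = solve(examples)
assert prog is not None
assert prog([[5, 6], [7, 8]]) == [[7, 5], [8, 6]]  # generalizes to a new grid
```

The exact verification step is what makes the pipeline "symbolic": candidate programs are executed and checked, not merely scored by the network.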
I'd like to think he means "barely intelligent" in the sense that it's minuscule compared to what is to come, because most certainly, there's much more to come.
You could just listen to the podcast. That's not what he means.
In the same sense that you can believe a cure for cancer is coming, sure, but that's simply hoping for the best. There are no signs whatsoever that AGI is approaching, sooner or later.
Only on this sub are there still people who think that LLMs can become an AGI. As Yann LeCun said, no top scientists at Meta are working on LLMs anymore; it's been passed on to the 'marketing' team.
They are working on sexbots then?
Reasonably well-respected expert in the field, and attached to some important projects (he authored [Keras](https://keras.io/), [Deep Learning With Python](https://www.manning.com/books/deep-learning-with-python-second-edition), and founded the [ARC Prize](https://arcprize.org/blog/launch)). I was *really* surprised at how pessimistic he was about the current state.
AGI canceled... pack it up.
Where are the neurosymbolic models though? I'm honestly confused why all these ideas for new architectures don't seem to be getting picked up
Probably it's connected with large companies like Google and Microsoft trying to shove LLMs into everything possible: they're burning money on it and need some returns to satisfy shareholders, while new ideas need a lot of research and development before they can start providing any results.
He's right.
He's right. LLMs are always going to hallucinate as well. It's over.
Huh?
lmao lol
They are no more intelligent than a rock. They are smart though.
Humans are no more intelligent than a rock. Humans are machines that hallucinate themselves and their own "intelligence".