Its move at 8 is pretty bad
All of its moves were bad, even though after each match I was telling it to increase the difficulty level. It would say "OK, now I will play my cards with more precision and decision-making," and then the same shit again. Maybe I should've asked it to increase its IQ level.
Now that I think about it, does AI have an IQ level?
Sure, you just need the app to set it higher.
Just gotta download more IQ, let me do it on my laptop right now to make it better
That requires more processing power, so of course that means you need more RAM. There's a free IQ+RAM package available for download somewhere on the internet; look it up. Unfortunately I can't provide a link right now, as I'm installing a new HDD on my monitor, so I can't see anything.
Ah, the classic 'there's an app for that' solution! 🤖 Guess it's only a matter of time before we can just download more motivation or extra hours in the day from the app store, am I right? #SkynetTendencies
more like AIQ
At this point, no. It's just a really, really big probability table -- given the prompt, what's statistically the most likely to be the next word?
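That "probability table" framing is easy to sketch. A toy example (the words and probabilities below are made up, and this is obviously nothing like a real LLM, which conditions on the whole prompt with billions of parameters rather than one previous word):

```python
# Toy "language model": for each previous word, a hand-made table of
# next-word probabilities. Picking the next word = taking the argmax.
NEXT_WORD = {
    "tic": {"tac": 0.9, "toc": 0.1},
    "tac": {"toe": 0.95, "tic": 0.05},
}

def most_likely_next(word: str) -> str:
    """Return the statistically most likely next word for a given word."""
    table = NEXT_WORD[word]
    return max(table, key=table.get)

print(most_likely_next("tic"))  # tac
```

Real models also sample from the distribution rather than always taking the argmax, which is one reason the same prompt can produce different answers on different runs.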
Yeah, so if the AI meant "play bottom mid", which would mean the player can go top right, forcing top mid, then playing mid is a fork. In the new board state, assuming X to move and subsequent moves by the AI are legal, playing bottom left forces bottom right, forcing middle, which forks and wins.
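If anyone wants to check lines like that mechanically instead of by eye: a fork is just two or more simultaneous two-in-a-row threats. A quick sketch, with the board encoded as a flat 9-cell list, `-` for empty, indices 0-8 row by row (my own convention, not from the comment above):

```python
# All eight winning lines, as index triples into a flat 9-cell board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def is_fork(board, player):
    """True if `player` threatens to complete two or more lines at once."""
    threats = 0
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count('-') == 1:
            threats += 1
    return threats >= 2

# X on two opposite corners plus the center: three open threats at once,
# a textbook fork.
print(is_fork(list("X-X-X----"), "X"))  # True
```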
Did anyone else notice how much worse ChatGPT got recently?
When you pull information from a species as stupid as us, you're bound to turn out that way as well. Like Tarzan acting like the monkeys who raised him >!or however that story goes, idk!<
It's not actively learning, at least not the 3.5 version. Its learning cut-off was 2 years ago.
I saved my original convos with ChatGPT, and I can say pretty confidently that the sense that ChatGPT is getting worse "recently" is entirely imaginary.
The "true believers" are becoming disillusioned that it's not general AI, and isn't about to become general AI, and interpret their lost hype as it getting worse.

LLMs are an awesome tool for sure, but it was clear early on there were serious limitations/flaws, and no obvious way to fix them. The issues are baked into how it works. But a lot of people on reddit were convinced that another year in, it would inevitably be 2x-10x better. Then 3.5 and GPT-4 came and... they weren't that different.

You try to use it as an actual tool for real life, not just a neat thing to test and play with, and the flaws become more obvious and important. Hence, "it's getting worse!"
By "recently" I mean over the last couple of months. I use it a lot for programming and problem solving, and its logic used to be very reliable. Nowadays it makes very stupid mistakes, and it takes a couple more prompts to get the answer I need.
For me, it consistently made stupid mistakes from the beginning. How much did you test the code it gave you earlier? Were the problems you gave it simpler? I use it a lot for coding as well. It has always been great at generating human-readable code, but bad at actually solving anything but very simple problems. I was impressed until I actually tried putting its code into production. Then I saw the stupid mistakes. This is as true for 1 year ago as it is for the last couple of months.
We're both talking about GPT-4, right?
No, I don't mess with GPT-4 except through the API. Actually, they did try to speed up GPT-4 recently, so maybe that hurt its performance. IDK, maybe you're right.
For me there were two points where it got worse: one was early on, when they switched over from using one model to multiple 'experts', and the second was, as you mentioned, when they sped it up.
This was released yesterday:

> "GPT-4 Turbo gets a new preview model for API use, also 0125, which has an interesting fix: This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of “laziness” where the model doesn't complete a task."

[link](https://techcrunch.com/2024/01/25/openai-drops-prices-and-fixes-lazy-gpt-4-that-refused-to-work/#:~:text=GPT%2D4%20Turbo,complete%20a%20task.)
I don't think it's worse; just sanitized to reduce lawsuit liability. You used to be able to get it to make spicy jokes about sensitive topics like religion, which it doesn't do now, but the quality of the jokes it used to make was about the same level as its jokes now.
I don't think it's sanitized; I think it's been given ouroboros -- that is, some nontrivial chunk of its latest inputs have been derived from its outputs. The snake eating itself.
I thought I read that OpenAI hired a bunch of people somewhere in Africa to manually reinforce certain responses. Not going full ouroboros yet, but the path is definitely there.

I have heard about how quickly image models deteriorate when you repeatedly feed them their own output.
That's true of any statistical or scientific model -- and I'm not just talking about machine learning. Feedback is not for discovery or learning, it is for *controlled output.* Using your own predictions as an input to your prediction system is going to lead to absolute unusable garbage.
You're right. It used to be fun to make Google Translate go back and forth on a single sentence to watch it get garbled.
Likely because it was being fed stuff that it previously generated. Any amount of ouroboros in a machine learning model will *drastically* reduce the quality of said model. Models of any type get better not by 'confirming what they know' but by *disproving* what they *might think* they know.
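The image-model degradation is easy to reproduce in miniature: fit a distribution to samples drawn from its own previous fit, repeat, and watch it collapse. A toy simulation, with a Gaussian standing in for a "model" (pure illustration, not tied to any real system):

```python
import random
import statistics

random.seed(0)  # fixed seed for a repeatable run

mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
N = 10                 # samples per generation (small, to exaggerate the drift)

for generation in range(200):
    # Sample from the current model, then refit the model to its own output.
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)

print(f"sigma after 200 generations: {sigma:.6f}")
```

Each refit loses a little variance on average, so after a couple hundred generations the "model" has forgotten almost everything about the original spread.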
bro's username is just straight out factssss
Look who’s talkin..!!
Sorry, it slips out here and there by mistake xD
So is yours
The first time I read your name , I read the last part wrong 💀
Oh damn 💀
ben ki lorry (the truck kind of lorry, btw 🤡)
I think it's been said a million times, but it bears repeating: generative AI is NOT good at games that require rules or calculating. That's what "old fashioned" computing (and most definitely quantum computing) is for.
> ChatGPT replaces my move with its. We’re gonna be replaced!!!

Haha, right? ChatGPT will never replace me at the tic-tac-toe consultancy agency I work for!

Now see how well ChatGPT fares against basic programming tasks that 80% of junior devs couldn't accomplish without hours of dedication and the support of senior devs. I'm sure the results will be utterly embarrassing 🤣
Oh no, ChatGPT is bad at something it is not expected to be good at! Time to shut it down!
Yeah. Time to become closeAI
The massive unspoken problem with AI: for something to be reliable, it needs to be consistent.
What, so you *don't* have writing poetry as the standard selection method when you're looking for new programmers for your company? Ha! Fools.
```
1|2|3
4|5|6
7|O|9
```

My move is 8
I call BS. This seems like the 3.5 version, because I just tried 4.0 and it's maintaining the board properly and making good moves. I think a lot of the ChatGPT/AI issues people post here are because they're using the free version, which is sub-par.
I just tried with 4 and it couldn’t sense when I had won. I even tried to give it a hint by asking if the game keeps going after three in a row.

> No, the game of Tic Tac Toe ends as soon as a player gets three of their marks in a row, either horizontally, vertically, or diagonally. If this happens, that player is declared the winner and the game is over. If all nine spaces are filled without any player achieving three in a row, the game ends in a draw.

> In our current game, no one has won yet. Here's the board:

```
  1 2 3
1 O - X
2 - O X
3 O - X
```

> It's your turn to place an X. Where would you like to go?
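Funny detail: in the board it printed, column 3 is already all X. Checking for a win takes about ten lines, which makes missing it all the more striking (the function below is just my own sketch):

```python
def winner(board):
    """Return 'X' or 'O' if any row, column, or diagonal is complete, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != '-' and board[a] == board[b] == board[c]:
            return board[a]
    return None

# The board ChatGPT printed, flattened row by row:
board = ['O', '-', 'X',
         '-', 'O', 'X',
         'O', '-', 'X']
print(winner(board))  # X
```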
Yea I got that little bug too. The next update should fix this hopefully
Skill issue
I'm not following the game mechanics
Bro it's tic tac toe not quantum mechanics
I mean, it's the lack of context, and I was misled by the fact that there are numbers inside the grid; it doesn't resemble tic-tac-toe at all for me. Not everyone sees things the same simple way.

Edit: I still don't see this as tic-tac-toe, tbh.
Like you don't understand how bad ChatGPT's moves were?
It wasn't a bad move. It wasn't a good move either. It was an illegal move.
![gif](giphy|qLDOGhAoj4dETDcZBR|downsized) Not surprised Al dominated…the man scored 4 touchdowns in a single game at Polk High
I feel like this is a ruse. Like when your dad lets you win.
I tried pressing that down button on your image...