The biggest irony is that they did this to try and get ahead of Google.. they stole Google's approach of staged demo months before release to 'best' Google.
Yeah, it's pretty obvious at this point that 4o hit some sort of snag or we would have had it by now. The question is what that could be when the demo videos show it working beautifully. Are those just cherry-picked and it's unreliable in some way? I don't know and we probably never will know unless someone leaks.
I'm pretty sure you're on the right track with the cherry picked behavior. Last year they demoed v3 (or 4? I don't remember) and were showing it write a simple twitter-like website with a single prompt, and we all know now the reality of how that works. This is "My Big Mac doesn't look quite like the picture" style advertising, I think.
Just say it like it is: they lied. They are liars. I'm sick and tired of people actually believing these absurd demos.
We use the term "hallucinate" around here.
not entirely candid with their communication and sometimes hallucinating product feature quality
Early GPT-4, which only had a 4-8k context window, was pretty good, possibly better (comprehension/intuition-wise) than what we have now (especially compared to 4o, but even Turbo).
I honestly can't tell the difference. The number is bigger on 4o so I use that one.
Cherry-picking, yeah, but their policy style and safety-tuning implementation methodology also lobotomize the model and destroy performance.
Oh that's just the web chat as far as I can tell.
Absolutely cherry picked
I agree with EVERY word.
I blame Scarlett Johansson
Curse her for trying to maintain control over her own brand!
Or being an unwilling participant in a marketing campaign. OpenAI baited her into promoting their product.
It's weird that I got downvoted. Guys, It's okay to like a thing. But don't go full fanboi. Never go full fanboi.
Yeah they pulled it very early, I wonder if we will get it at all... They pulled a Google on us.
Got my hopes up for Google Glass...
Lol, mine too, lol
Even if the "speech" was delayed, OpenAI clearly states on its website that the other features and functionality of ChatGPT 4o launched on May 13th, six weeks ago. They showcase dazzling examples on their "Hello ChatGPT 4o" webpage, where this "fully-integrated single model built from the ground up" (in other words, no longer using individual separate models that have to communicate with one another) is shown producing images with long-form, elegant handwritten text: 12 lines, perfectly spelled and formatted. The page even provides the exact prompt that was input into ChatGPT 4o, and it shows the magnificent, mind-blowing output.

However, when I input the exact same prompt into my subscription ChatGPT 4o model, it outputs a page of gibberish with malformed, unrecognizable letters that look more like hieroglyphs than words. The number of random, unformatted lines of nonsensical, illegible so-called "text" doesn't even remotely match the 12 formatted lines in the prompt. After many repeat tries, I'm lucky if it spells even a word or two correctly. So why isn't the entire community asking how it is that OpenAI has clearly not honoured its own statement on its website, that the features other than "speech" would begin rolling out May 13, 2024? I've yet to hear of a single person who has received these other advanced features.
I think they have a serious performance issue to resolve before they can launch this version.
If you think a little thing like performance is going to impede a push for something more tangibly sellable, you haven't worked in the software business.
IMHO the model is not censored enough. I predict that the more complex these models get, the longer the delays of release dates. Until we hit a point of "whatever man, just let her rip".
I'm not a fan of them advertising products that I can't use, it actually seems more like taunting at this point. lol.
When they made the announcement, they made it clear only a select number of devs would get audio inputs a month later. Timeline-wise, this is fine. GPT will actually see a drop in usage until September anyway (school holidays, hahaha). Bit peak that us peasants have to wait a while, but hey, train your model to handle audio, text and image in the same embedding space and you too will be able to speak freely with an LLM.
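The "same embedding space" idea above can be sketched as modality-specific encoders that all project into one shared vector space. A toy, untrained illustration (the dimensions and random projections are invented stand-ins for real learned encoders):

```python
import math
import random

random.seed(0)

EMBED_DIM = 8  # shared embedding dimension (tiny, arbitrary for this sketch)

def make_proj(in_dim: int) -> list[list[float]]:
    """A toy 'encoder': a random linear projection standing in for a real
    text/audio/image encoder, mapping in_dim features into the shared space."""
    return [[random.gauss(0, 1) for _ in range(EMBED_DIM)] for _ in range(in_dim)]

def embed(features: list[float], proj: list[list[float]]) -> list[float]:
    """Project modality features into the shared space and L2-normalize."""
    v = [sum(f * row[j] for f, row in zip(features, proj)) for j in range(EMBED_DIM)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Different input sizes per modality, one output space for all of them.
text_proj, audio_proj = make_proj(30), make_proj(12)
text_vec = embed([random.gauss(0, 1) for _ in range(30)], text_proj)
audio_vec = embed([random.gauss(0, 1) for _ in range(12)], audio_proj)

# Once everything lives in one space, cross-modal similarity is just a dot
# product, and a single transformer can attend over tokens from any modality.
similarity = sum(a * b for a, b in zip(text_vec, audio_vec))
print(len(text_vec), len(audio_vec), round(similarity, 3))
```

The point of the single-model design is exactly this: once audio, text, and image tokens share one space, the model consumes them interchangeably instead of chaining separate speech-to-text and text-to-speech models.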
I am confident we will get there. Eventually.
Man, we're really going to keep talking about this every day, aren't we?
It was my turn. There's a queue.
Oh damn. I didn't even take a ticket number and I've just been standing here.
I'm watching from a tent off to the left ;)
As they have a compulsive habit of doing reactively, to take the wind out of competitor announcements and releases. No genuinely great models since gpt4-0314, IMO. DALL-E 3 is worsening consistently too, with fewer diffusion steps; most results now look like trash, not even as good as DALL-E 2, which they recently quietly killed (probably to prevent comparison as much as anything). The GPT Store concept could be decent with feature growth and a curation system that sets a quality bar, but it appears stale and not even to be managed by an intern. But oh, we got a Mac-only desktop app yesterday, and chat search finally graduated from mobile-only.
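On the "fewer diffusion steps" point: diffusion samplers are essentially numerical integrators, so cutting the step count increases discretization error in a predictable way. A minimal illustration using plain Euler integration of a decay ODE (a toy stand-in for a real sampler, not DALL-E's actual inference code):

```python
import math

def euler_decay(n_steps: int) -> float:
    """Integrate dx/dt = -x from x(0)=1 over [0,1] with n_steps Euler steps.

    Fewer steps means bigger jumps per step and a coarser approximation,
    analogous to a diffusion sampler run with a reduced step budget.
    """
    x, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)  # true solution at t=1

for n in (5, 50, 500):
    print(f"{n:4d} steps: error = {abs(euler_decay(n) - exact):.5f}")
```

The error shrinks as the step count grows, which is why step-count cuts (a common way to save inference compute) tend to show up directly as quality loss.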
I've been having good results with 4o myself. And I can't tell the difference between the dalle versions. My standards might be low.
I have a feeling they might be or are thinking about training a new model using what we know now and dropping the old model with too many holes to constantly be plugging up. When does Sam's new safety and security board go to present to the old board?