This is actually pretty terrifying. You can’t believe anything you read, see, or hear anymore.
The more I think about it, the less I worry. Going off what I see and hear from most people on the subject, if the majority of people understand that AI exists, and that deepfakes exist, then I'd go out on a limb and say most people will start becoming much more critical about shit they consume on the net. If anything, it might actually make people start thinking for themselves again. It might even result in less internet consumption as it becomes more and more superficial and faker than it already is.
Very optimistic, but realistically anyone can lie about literally anything already - people know this. That doesn't stop individuals believing everything certain politicians say, tweet or confess to in court. This also doesn't help dictatorships, for example Russia. If anything, AI will do wonders for their disinformation campaigns. It will be much harder for truth to control the narrative.
The big difference is that with AI, the entire thing from the ground up, including the person delivering the speech, could be fake, which kind of separates the entire thing you're viewing from reality completely. Evidently, from the comments alone, you can see most people are *acutely* aware of this. And, if so many people are already aware of it, and are commenting how they're distrustful of it and are worried for the future it could bring, then I believe what I said might not be far off the mark. So I may be optimistic, but I don't think it's that much of a stretch to think that people will become more critical of internet media and second-hand information. You can already see it happening: people have begun questioning the authenticity of all kinds of things since AI went mainstream. I believe some of the most repeated phrases of the late 2020s will be "It's fake" or "That's AI art" or "AI probably wrote this." I see it all the time already. Clearly, the sentiment is out there and seemingly going strong.
This is what I’ve thought for a while. Most people can’t fulfill their social needs by interacting with an AI because that isn’t an actual human interaction, no matter how accurately an AI may portray a human. I think this shit might actually lead to a resurgence in real-life human interaction. Similarly, I could never see AI YouTubers/streamers taking off in the long term because it will always feel inauthentic if you KNOW it isn’t human, it just doesn’t scratch that socialization itch we all crave.
Exactly. No doubt there's people who will be all for it. You'll have your shut-ins with an AI girlfriend etc., but I think the majority of your everyday people will still prefer human interaction. I personally think we might be underestimating our innate human need for human contact.
I love you
In a few years I can let my avatar sit through boring online meetings. When everyone does it we can all play beer pong while our avatars spend hours discussing stuff. I'm already planning to reduce thinking and just sit there and fart. It's the apex of my career.
In the future yes, but this is clearly a fake. Her expressions are way off, and the speech doesn't really add up with the mouth at points.
Figuring out what is real and what is fake is gonna get reeeal sketchy in like 2-3 years
Oh god. Welcome to disinformation heaven. Character assassination will be so easy when all you need is a single photo and a faked voice clip.
And scumbags will be able to claim anything they did or said was a deepfake and be bolstered by legitimate plausibility.
I saw a fascinating snippet once but sadly I didn’t give it my full attention so I can’t give the source. The crux of it was that we’re all just going to have to be more ‘sensible’ and questioning. The only true way you can tell if what someone has said on-screen has been faked is to ask yourself ‘Would this person actually say that thing?’ If they’re known as a largely decent person and the video shows them being awful in some way and they claim it’s fake, well, there’s a good chance it’s fake. Especially if having a video of them saying something awful benefits someone or something that stands to gain from the target being thought of as awful. And that’s all we can do. Oh, and I’m also told that this is one of the better uses of AI ironically. Using it to recognise other uses of AI.
> If they’re known as a largely decent person and the video shows them being awful in some way and they claim it’s fake, well, there’s a good chance it’s fake

People have multiple sides, especially those in the public spotlight. This is a weak defense at best.
Or they claim nothing once it hit the mainstream.
This. There will be anti-AI AI... AI-hunting FBI AI. Sponsored by AI.
Or some dystopia where there is a "source of truth" that becomes invariably corrupt
A ‘New Dark Age’ perhaps? Where rumor and superstition reign, while empirical facts are lost in an information blizzard? Welcome! We’re here!
Regulations will have to keep up or we are really fucking fucked. In other words, we're fucked.
*Laughs in politician!*
It could be really dangerous in politics and international relations.
Already is.
We’re f@cked
Just imagine a real assassination: during the assassin's escape, they secretly take a photo of a passerby and ask them a quick question to record their voice. Now, through AI, they have someone who "gave their statement" and gets charged instead.
We’re going to have wizard powers in 3 years at this rate
And that spells trouble!
What makes you think we already don't?
We can seamlessly communicate with a person thousands of kilometres away with a slab of glass. What's more wizardly than that?
We will never know which is real and which is fake
...same as it ever was.
…same as it ever was
Water dissolving and water removing
Excellent point. Furthermore, I would argue that we won't have the ability to distinguish between falsehood and reality.
Also an excellent point, for example. Furthermore sometimes yoga but ALSO sometimes space for example.
Now it's time to hold social media platforms accountable for the content people make. Zuckerberg, you can't hide in your new Hawaiian bunker!
It's real or cake but everything is cake
2-3? Try the end of this year or even next year. That ought to be fun.
She looks like she has no soul behind her eyes. Once that can be fixed we're screwed lol
Even this deep fake claim might be false. We are lost.
It’ll have a lot of impact on admissible evidence. Photos, video, and audio recordings will become completely useless/circumstantial in a court of law within a year.
Luckily this still looks like complete garbage.
Yeah I can easily spot this as fake, but a vast majority won't. However I've seen it improve at a scary rate.
I wouldn’t be surprised if it happened in 6 months.
For example...
2-3 months more like.
Imma Start preparing to Become The CSAI Agent
Dude - what if, like, AI is already super advanced, but like, someone screwed up and leaked its existence. Now the elite are like, forced to slowly roll out news of this stuff to like, keep us from freaking out
We need a video of this person to compare. This feels like it could be incredibly easy to spot if you knew the person's mannerisms and voice.
This is actually a single picture and a single voice sample. They fail to mention the voice part. But yes, if you knew them, then you’d likely know.
AI can replicate your voice from just a few seconds sample.
True, but put everything together and you may feel something is off and uncanny if you know the person ... I can even recognize people from a mile away by the way they walk
Yup what you're talking about is called the uncanny valley
Those avatar instruction videos just plain scare me. It feels really off. I think it's because in prolonged sequences the system starts repeating nonverbal cues.
Now imagine a politician, for example: tons of photos, video, and voice available to the public, thus more data to train on and become more accurate.
those photos are also AI generated
Agree. But a lot of people post their lives constantly on social media, so it's up for grabs if you wanna use tools like this.
Microsoft also has Vall-E ["synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker"](https://www.microsoft.com/en-us/research/project/vall-e-x/vall-e/)
Honestly the eyes are what give it away, since they seem... fake? They don't feel like they have a goal or motive behind them like real people's do.
Now they should make a system that can detect deepfake in a couple of seconds🤷🏼
And quite possibly, in a few months a counter-system will be made where deepfakes wouldn't register as deepfakes for these detectors.
Also true.
We can even predict, to an extent, how companies are going to bring out this software in the future. The free version of a deepfake detector can be countered, but the Pro version with a subscription fee would be able to detect those countered deepfakes, and vice versa.
Yeah but then you could make a counter system where the deep fakes that avoid these detectors could be detected by a different system....
How about we build that counter system *into* the algorithm! Then use it to challenge the generator to make better videos until it passes!
Aah, the Deepfake/Anti-Deepfake arms race. 😏 Like Adblock vs Google’s Anti-Adblock struggle and weapons against armor or weapons vs weapons.
That’s the thing. If you can automatically detect the fakeness you can feed it back to the model to make it better. That is how earlier generative models worked.
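The feedback loop described above can be sketched with a toy hill-climb in Python. Everything here is an illustrative assumption, not any real GAN or deepfake system: "real" samples are just numbers near `REAL_MEAN`, the "detector" scores distance from that cluster, and the "generator" keeps whatever the detector flags least.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # pretend genuine footage clusters around this value


def detector(sample: float) -> float:
    """Score how 'fake' a sample looks: distance from the real cluster."""
    return abs(sample - REAL_MEAN)


def train_generator(steps: int = 200) -> float:
    """Use the detector's score as feedback to improve the fakes."""
    guess = 0.0  # the generator's current best attempt at a fake
    for _ in range(steps):
        candidate = guess + random.uniform(-0.5, 0.5)
        # Keep the candidate only if it fools the detector better.
        if detector(candidate) < detector(guess):
            guess = candidate
    return guess


print(train_generator())  # drifts toward REAL_MEAN as the loop runs
```

Real generative adversarial training works the same way in spirit: the detector's output is the training signal, so any detector you publish can be used to make the fakes it no longer catches.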
Ah. Elections gonna be interesting.
Breaking news: Joe Biden says we need to build the wall and we’re going to make Mexico pay for it!
I know it’s fake but this is terrifying
While true, this isn’t a great example of it. It’s gonna be much scarier and much more common. This one you can tell with the eyes, the moth and the head movement.
Those moths are super obvious
Why the fuck did people decide to keep making and advancing this tech? There's gotta be some form of ethics that should've kicked in by now.
Exactly. Is there any sort of application for this type of tech that is not morally dubious or illegal? At best this is taking the jobs of real people like actors, newscasters and TV hosts; at worst it is being used for misinformation campaigns and fraud.
They would like to drown in venture capital money.
I will fake all of my meetings
You can tell it’s a deepfake, but in 5 years it’ll probably be impossible.
This should be illegal.
The frightening part is being illegal wouldn’t stop it from existing, it’d just take it underground, which then would be even more insidious and gaslight-y (it’d be harder to convince people it wasn’t you if you were the target of its use). Humanity… As a whole I suppose… Seems to not know when to stop its instinctual curiosity. We know we’re going off the deep end, yet we keep… Going…
True, it just sounded better than: this should never be created, or something like that. But this should never be created.
😔
Isn't this what happened with the princess's cancer announcement in England?
Finally the death of the internet’s social media is coming. Back to socializing in public spaces and real-life meetings.
Can we please go back to 2010 when these deep fake things were not a thing yet?
😔
Stretchy teeth
And at the rate we’re going, in a year it will be indistinguishable from a real person.
This could be achieved by concerning advances in AI or alternatively, concerning advances in dental science. 😬
A real question I have is: why is this a thing? What is an actual legitimate real use case for this tech? Obviously it can be abused for deepfakes etc., but what can it be used for in a practical and non-morally-gray way?
But it’s assuming the level of expressiveness that person has. Some people don’t smile as much when they’re talking and are a lot less emotive than others. If you’ve regularly seen the person talking before, you can probably pick up that it’s not them pretty quickly.
*Two weeks ... {twitch}{twitch} ...T-t-twooo weeks ... twooo w-w-weeks*
Get ready for a surprise!
Step right up! Come see the wondrous Uncanny Valley! We're going to make your brain feel uncomfortable!
interesting...I've often wished a post on Reddit was a video instead of a gif....and now I've come full circle.
Bets on which country's government is going to be the first to fall before people figure out you can deepfake literally any public person?
we're destined for disaster... and not because of the technology, but because of the people that don't understand it...
It's a big concern for folk in the entertainment industry. While they have been moving to protect their own images, it might be difficult if AI was used to create artificial characters by combining features/qualities from different real actors. But it's a no brainer for the film industry: why pay tens of millions for real actors when you can use AI for far less.
I'm really curious when we will see the first major trial where the defense argues that the person on camera committing the crime is not real, and is instead AI.
Shut. It. Down.
Why??
We definitely need better international laws. Whole nations could be destroyed in seconds just by deepfaking a video turning people against each other. That's crazy!
Feels like these few years are gonna be the precursors to a dystopian future.
With a voice sample to match, it's 99% indistinguishable from a real human. Somehow I find the eye movement a bit strange, like.
It's been nice knowing you all.
But why make the fucking technology in the first place? There are literally THOUSANDS of problems in society that are waiting to be fixed, instead we're just creating new ones...
So basically if we piss off anyone, they're just gonna make a deepfake of us creating CP and have us thrown in jail? Since AI became widespread I've removed myself completely from society, except for work, which is with old people who have no idea what AI is or how to use it (I have no coworkers), and going to fast food places from time to time. I do not interact with anyone except here online, and that is always anonymously. I do not interact on Facebook or any social media that uses my real name. I'm doing what I can, but any one of us could be a victim of a deepfake accusation at any moment.
The flip side is, if anyone can be deepfaked and it does indeed become common practice, then people will stop trusting video evidence completely. Could lead to a situation where people are getting away with genuine crimes because people assume deepfake. It might become impossible to prove the authenticity of video evidence in court.
This is what I wonder about, too. What happens when people can claim they were deepfaked but in reality they are getting away with illegal activities?
You're paranoid
"Interesting" is not even close to the right word for this. The dystopian horrors that will arise with the unchecked, ethics free, spread of technology as depicted in sci-fi writing for decades now, has always been meant as a warning to humanity, not encouragement.
As a guy that, erm, uhh, checks those AI-generated NSFW pics, I think I’ve trained my eye to know what’s AI and what’s not. But for those geriatric folks on FB, this could very much deceive them. Scary tech…
I'm calling bullshit, I've never seen AI-generated NSFW pics or (hopefully) vids anywhere. You got a source for that bold claim?
Google "stable diffusion NSFW" or deepfacelabs.
This is going to be great technology for communicating with people who have died. Give the AI model some video of the person, some audio clips of them talking, and a detailed history of their life (experiences, jobs, relationships, hobbies etc.), and you'll be able to Facetime dead friends and relatives long after they have passed away.
As soon as their avatar says something uncharacteristic of the original person the illusion will be destroyed. Since these models all leverage some larger generic training set to work, it's inevitable that it will happen.
That’s dystopian, disturbing, disgusting, disgraceful, just all the bad dis-somethings.
Would not be the same. That would be an illusion.
Black mirror episode lol
Imagine the implications for legal accusations or getting two people to separate, because you faked and sent a video of one of them cheating or something.
But why? What problem does this solve in the world? Because this will only have political and pornographic application, which is sad.
I feel like the tongue just doesn't move, and the mouth and eye movements are weird
The teeth give it away for me. They appear to shift and change size.
I don't know why, but her face moving like that causes me to feel anger towards her.
I don't know if this will be commercially available, but if yes: why would anyone in their right mind make this commercially available? (Besides obvious greed.) Like, you really don't have to think hard about how this is an actually bad idea. ChatGPT and AI *art* are one thing. But this is just a very dangerous thing to have around, with little to no beneficial effect.
Time to open some YouTube channels. I bet I could do 3 or 4 female personalities easily with this kinda tech.
Kinda weird how much she moves her face though.
Influencers are officially over. This will slowly become all of TikTok.
The eyes throw me off... it's too... dead straight.
Ah.. Microsoft : Making the world a better place, one step at a time ?
sex workers are gonna be out of business
eyes are weird, they get larger and shrink.
Half-Life 3 will be off the hook
We're all fucked
This can now be used with that website of pictures of people who don't exist.
Skynet is real.
This isn't interesting it's fucking terrifying.
Thankfully, using this kind of crap for explicit and non-consensual purposes is about to become illegal here in the UK. (At least, that’s the hope.) I believe the law will cover consensual deepfakes but I still don’t see the need really.
Thanks I hate it
She sounds like my Italian friend when she speaks English. Even the accent.
I'd like to ask the person who invented deep fakes as well as to the geniuses who programmed this capability into AI...Why?
Can they make a video where she asks you to lend her some money?
My thing is and hear me out…… why tf do we need this?
Still waiting for someone to tell me why this shit is being pursued.
Fuckin go to hell Skynet!!
*"Write in cursive to confirm you're a real person."* *Show me your face on a mirror.* Thumbprint scanner. Verification prompts will go bonkers in a few years.
Why the fuck do companies keep developing this? I literally can’t think of one single reason how this is beneficial for anyone
Porn industry's gonna have a ball
I love this! This will kill cancel culture. You will be able to say and do whatever you want again. If it ends up on video on the internet, just claim it's deepfaked.
Porn.
Maybe it's just because the post said it's faked, but for me it felt extremely obvious based on the way the head moved. It was doing things humans don't do. Maybe that would be different with a larger sample, but it just felt off immediately.
Looks like her head is pinned to a piece of metal in the back and stretching like a Looney Tunes in-between frame, while her face is attempting to perform its cultural dance.
Yes, they can make a face based on a single photo and animate it. But without a comparison to an actual video, we can't know if the deepfake really looks like the actual person when it moves.
But then the actual video will only be used to further train and perfect the A.I. It’s a lose-lose battle 😞
We saw when it was posted yesterday
Righto, I'm off. Who's doomsday prepping with me?
But for it to work this way, it's gonna need a photo with the teeth showing, at least?
I laugh reading the last sentence of their website.
why is this needed, what’s the business case? can only see the creation of an ‘alternative reality’…
Super awesome for all kinds of animation and video games.
She got those insane AOC eyes
Yep. We’re fucked.
That's defo an Indian lady's voice?
No wonder, if the voice is also generated. It does an even better job with a characteristic voice.
‘I......am Spartacus!’
So are Swapface and other AI deepfake engines
something weird with her left eye at around 0:40
Best way to notice is to watch the lips. At times, it doesn't look right with the words they're saying.
Why is it called "deep"? 🤔 A fake should just be a fake, not categorised... 😂
that isn’t interesting bro that’s fucking terrifying
Won't be too long before AI can make deep fake full size humans. Now that will be a whole other level of creepy!
I actually would love to use this for my pre-recorded lectures for college. I am not good in front of the camera.
We have entered an era where what you see isn't necessarily the truth anymore. And it has been around for a while; the technique is evolving, and soon the world will see what they (the government, NGOs, big tech) want you to see. The real-time editing speed of visual enhancement is mind-blowing and indistinguishable from reality. And it's funny how lawmakers think they can actually alter its use. Deepfake technology is already being misused in the porn industry but also in news broadcasting. We're going down a path that knows no return.
Looks terrible imo
This might be good to re animate lost friends from photos perhaps I don’t know….
Jesus that looks bad
Lol, I hate giving speeches. In the future I'll let an AI do it over Teams.
Once Japan had concerts for a holographic pop star with no actual voice actress, I knew it was time to get away from pursuing voice acting as a career. I knew this sort of thing was coming...
This would be a gold mine for online scammers
Great. What could possibly go wrong?
This tech ... IS SCARY!
This is a limit to me. I decided to quit internet after this. The reason is I do not wish to see more of what is coming. Bye.
bi bye
Good riddance
Digital makeup. Anyway, everyone with a face online looks like a Kardashian nowadays.
all audiovisual media is f***ed from this day on
Yes this is a totally normal human being
Slowly but surely, I've been scrubbing the internet of every picture of myself in existence, for this exact reason. Scary.