AI-generated Taylor Swift images spark outrage

Ahh yes. False accusations have never, ever driven someone to kill themselves.


Dude, come on. A false allegation is enough for young women and men to make terrible decisions.

Somehow " fake " video wouldn't be ?
I don't judge the propriety of the extent of my country's civil liberties on the whims of the people making the most terrible decisions.

If someone kills themselves because someone or a group of people call them fat, that doesn't mean it should become illegal to call someone fat.
 

Fair enough. I think it's a bit more nefarious than an insult myself.
 
I think it's less nefarious. The insult might actually be based in reality.

I think there is a sensitivity toward pornographic material that is making otherwise logical people turn their backs on their logic and principles when it comes to this subject.
 
This is completely different to most things we've experienced thus far. Especially as the AI advances, we shouldn't claim to know what all the effects are going to be.

Many didn't draw the line at celebs, as that kind of thing has happened for years with deepfakes, but never this accurately. It's not even just an outrage thing; it's another layer to the already persistent degeneracy problem here in the West.

But people in everyday life are now fair game, and this tech's ease of use will open up weirder shit in the near future. And eventually, nobody will be able to tell the difference. Eventually being like 5 years, maybe.
 
So what?

There should be no crime or civil liability unless the AI generated content is put forth as genuine.

Fraud is fraud. Free expression is free expression. Sometimes expression is intended to evoke rage or offense. That doesn't mean it should be illegal.
 
If it were real and they were embarrassed by their actions, it's a completely different story.

But if it's fake and they decide to kill themselves over it, they're being suicidally unreasonable.

They're not killing themselves directly over the image; it's the bullying and tormenting it would cause that would do the trick.
 
AI images don't cause bullying; assholes do. Their conduct, which in many cases can break laws related to communicating threats or harassment, should be dealt with by laws we already have.

We shouldn't be restricting the rights of the whole of society based on the actions of the worst actors or on the reactions of the most sensitive people in society.
 
Tbh, I could see the porn industry putting a shit ton of money into any case trying to ban deepfake porn.

Why watch pornstars when you can watch anyone you want.

Maybe set up a deepfake category under a premium subscription and try to profit off it.
 

How do you establish whether an AI image is meant to trick people into believing it's genuine?

Force them to watermark it as AI?
 
If they say it's genuine, when in fact it's AI, that can be a crime and/or open people up to civil liability.

But I feel like this isn't the issue really being discussed in this thread. The production value and other characteristics will more than likely make them obviously fake.

I think that if someone posted a celebrity's face on a pornstar in a gangbang using AI, the celeb and her fans would still be upset even if it was watermarked.

This feels more like people think the act itself is disrespectful, which it is, and they viscerally find it distasteful.

Nobody claimed the Taylor Swift pictures were real (which they obviously weren't), yet all this outrage.

Just because someone finds something offensive doesn't mean a law needs to be made about it.
 

They didn't claim they were real this time, but they could easily achieve a situation where it's unknown or seems real, with a celeb or a non-celeb.

Where are the producers of these images stating whether they are genuine or not? They don't. There are no labels. That's the point. The photos spread online without any context as to who produced them or why.

Media produced to give the impression that a real person did something they didn't, harming that person's reputation and mental health, obviously does need laws.
 

Who are "they" that are declaring they are responsible for the image and whether it's AI or not. This scenario doesn't exist. The creators names are not attached when the images make their rounds. That' what makes it so difficult to differentiate if it's real or not at times because the source is unknown.

Would they still be upset? Sure. Would they be under the assumption it was a real action the celebrity took, and thus that they are responsible for it? No. Huge difference.

It's not just disrespectful or offensive; it's violating and harmful to reputations and mental health. The "who cares, they are celebs" argument is so fucking stupid in this thread for many reasons, and this could happen to minors or anyone, so it's erroneous.
 
A few episodes into Pluto on Netflix and it's a pretty good AI murder mystery.
Next on my list. Trying to finish Vinland Saga. Season 1 was great; season 2 is tough to finish.
 
If it were from Twitter's or Elon's official account, you might be correct, but as I said before and you conveniently omitted, Section 230 prevents platforms from facing liability for content posted by users.

Unlike Trump, Taylor has competent lawyers, and Elon didn't want to roll the dice in court. He may win, but if he loses he will have to stroke an eight-figure check, possibly more. Elon just lost in court in Delaware, which cost him billions. The smart thing to do financially is just to remove all the AI images of Taylor and block the searches. Elon doing it as a favor is laughable.
 
What you're all failing to realize is that in the next 5 to 10 years, nearly all evidence for crimes will be subject to the question of whether or not it is authentic. Technology will be so advanced at that point that you will be able to create almost any visual or audio to look like anything you want.
 
Who are "they" that are declaring they are responsible for the image and whether it's AI or not. This scenario doesn't exist. The creators names are not attached when the images make their rounds. That' what makes it so difficult to differentiate if it's real or not at times because the source is unknown.

Would they still be upset, sure. Would they be under the assumption it was a real action the celebrity took and thus they are responsible for it? no. huge difference.

It's not just disrespectful or offensive, it's violating and harmful to reputations and mental health. The "who cares they are celebs" argument is so fucking stupid in this thread for many reasons and this could happen to minors or anyone so it's erroneous.
Did you honestly believe the Swift pictures were real?

Did you honestly believe that her ass grew 4 sizes larger or that she banged Oscar the Grouch?

Nobody thought the pictures were real and nobody asserted that they were, which would be fraud.

It's free expression. You not liking the content doesn't criminalize it.

Nobody thought the pictures were real, but there was all this uproar. The outrage is not about the realism of the pictures; it's about people not liking their favorite celebrities being disrespected.

No law change is necessary. People just need to stop believing that everything on the Internet is real.
 
