AI-generated Taylor Swift images spark outrage

Did you honestly believe the Swift pictures were real?

Did you honestly believe that her ass grew 4 sizes larger or that she banged Oscar the Grouch?

Nobody thought the pictures were real and nobody asserted that they were, which would be fraud.

It's free expression. You not liking the content doesn't criminalize it.

I honestly believe these programs can make images that are indistinguishable from reality if that's the intention, so ignoring that they can is dumb. The laws regarding them should take that into account.

Free expression has limits. These include incitement, defamation, fraud, obscenity, child pornography, etc. AI is a tool that can be used to achieve these, so it will be regulated to control that. As it should.
 
We already have laws against all the things you mentioned above. There is no need for more.

If someone passes off AI generated material as genuine, we have fraud laws for that.

The world's best Photoshopper is better than the world's worst AI bot. They don't deserve to be in jail unless they use their talent to commit fraud.

The Taylor pictures were obviously fake and still caused a ruckus. It's not the realism of the photos making everyone angry here. People need to start confronting whatever is actually making them angry about this.

People are angry because they feel these images are disrespectful. Expressing disrespect is a form of free expression. It's constitutionally protected.
 
Exactly. You used to have to be a highly skilled Photoshopper; now you can literally type *person's name* performing *sex act* and get believable porn that can trick the public. Which is why it will get its own laws eventually. The ease with which it can destroy someone's reputation, mental health, etc. is too great.

You could make a convincing AI video of someone committing a crime and put their life in serious danger. In fact, AI programs writing news articles have created fake arrest records for real people.

Just like social media platforms are now getting laws aimed at them specifically, so will AI. New tech brings new laws.

What's the threshold for passing off an AI image as genuine? Isn't the point of the program its ability to be as close to indistinguishable as possible when you ask it to add someone to an image? Who is responsible: the person who asked the program to create the image, or the program that obliged and created it just off a request? This shit needs to get sorted.
 
It's too easy to circumvent. You can write an algorithm in three minutes whose first command is a random binary choice of whether or not to contact a specific AI photo maker; if it randomly chooses to make contact, the second command instructs the AI to generate (or not) porn pictures of whichever celebrity it chooses; if the second command fires, another random 1/0 decides whether to spread the content on the internet.
The final command is for the program to create a copy of itself, with identical features and commands, and let it loose.

You could literally flood the internet with tiny programs like this, which are impossible to argue you control in any way. There's ZERO chance AI will be reined in, unless we all burn all our computers.
 
Should the skilled photoshopper be put in jail even if he doesn't pass his art off as genuine?

I would argue no. The only difference here is that more people can do it. It doesn't require new laws.
 
Laws being hypothetically easy to break has never stopped anyone from enacting them.
 
Well yeah, that doesn't mean they will be worth a fuck.
AI moves so fast that they will soon look like "don't hug a horse in the backyard in Georgia" type laws.
 
Again, how are you determining the photoshopper's intent to pass it off either way? The images don't go viral with a caption from the creator containing their personal info and a few lines giving context to their motivations. This is a made-up scenario.

It’s even less applicable with AI images.
 
The standard would be the creator making the affirmative statement that the media was genuine. Absent that, I don't see a crime or the need to amend existing law.

The number of people who view a fake picture should have no bearing on its potential to expose the creator to civil or criminal liability.

No matter how many people viewed the pictures of Taylor Swift with Oscar the Grouch, nobody has any reasonable belief that they are real, nor did the creator claim they were. People just want to outlaw expression they find distasteful. It's an authoritarian reflex.
 
AI won't be regulated any more than books and comics are. The law already covers this stuff.

The implications of AI use in this space are much greater than previous image manipulation work (like doctored photographs) but the usages that you're discussing are old hat. You don't need special AI fraud laws because fraud laws would already cover fraud conducted via AI.
 
It will be treated differently. There are different factors at play.

Including where the creators of the AI are sourcing the data the program bases its creations on, how much responsibility the programmers and the program share for the images with the people making requests of them, the ease with which you can achieve indistinguishable images and video; the list goes on.
 
But the creators don't make an affirmative statement regarding it. That's not the reality of the internet or how these images reach the public. They go viral, and the reposts aren't all citing who originally distributed them. Unless you are arguing that creators need to be legally required to claim ownership and state intent... then OK, that might work.

The number of people who view a fake picture they are led to believe could be real directly impacts how damaging it can be to the reputation, mental health, or safety of the person being faked. So obviously that matters. When these images are put on the internet, there is no excuse of "I didn't know there was a risk a lot of people could see them."

AI can, with extreme ease, create images the public reasonably mistakes for real. So this one example won't account for new laws, but the overall threat will.
 
No, there really aren't different factors in play when it comes to this specific type of use.

This is no different than some random person crafting the images in his basement via Photoshop and then leaving stacks of them at the bus stop, and no one knows who did it. Criminalizing the programmers is like criminalizing a newspaper because someone downloaded a picture and then manipulated it on their own.

Intent matters. Parody is not criminal, regardless of how it is created. Offensive images are not criminal unless they become obscenity (an undefined term). It doesn't matter how the images are created; if the intent crosses into something actionable, then it's criminal. If they can't prove it, then it's not.

The difference between free speech and something actionable is intent. Prove intent; otherwise, how the images were created doesn't matter.
 
It is different. In your scenario you are creating the images yourself, not asking a program created by a company to create them via its intelligence and send them to you. In your scenario you have sourced the photos to alter; with AI, they are media the programmers (who likely don't own the rights) gave the program to copy. Also, using Photoshop takes some skill and an investment of resources and time, and convincingly creating fake sex scenes or videos of people committing crimes is beyond the capabilities of most of the public; with AI, a six-year-old could type a request into a program and create an image that is indistinguishable from the real thing.

It's different.
 
Cry me a river. She's a 5 at best, and this has been going on for ages.

Artist's rendition, artistic freedom.

It's not real, you must dismiss.
 
That cracks me up almost as much as people being sucked into defending Russian propaganda as the way foreigners really view the US, just to pretend to themselves it pwns the libz.
As much as it cracks me up that you think it's 1986.
 