Quoting the earlier exchange:

1. These issues are already addressed under existing law. Stealing a likeness for brand advertisement has already been addressed. Parody, lookalikes and photoshopped porn have also already been addressed. There is no new legal frontier to be found here.

1) Yes, you can find multiple examples of serious legal issues popping up around this novel technology, and they are growing in number and scope. That is why I predicted it will be regulated with specific laws in the near future.
2) Ease of use does have a bearing. If companies are creating tools designed so that anyone can produce these fake but indistinguishable images with a few keystrokes, they will bear legal responsibility along with the person using them.
Defamation has been addressed before with laws, and this is a new frontier of it that will be met with new laws. Dismissing this as whining about celebs is erroneous and deflecting: you don't have to be famous for this to pose a threat to you. And folks who make music or movies still have rights.
3) If it's too difficult for prosecutors to prove the elements of a crime, then rather than just giving up, they will likely make companies make that process easier for investigators.
4) The number of people who could create that Scarlett AI ad without AI programs is incredibly small, and thus not a big threat to the general public. The number of people who can create a believable fake Scarlett video with AI now is pretty much everyone who can type a couple of prompts. That is an exponentially larger threat, and it calls for a different response.
My response, point by point:

2. This is just flat out wrong, legally speaking. If a company creates products and/or services with many legal uses, and a user abuses that product or service for illegal ends, the company is not responsible at all. If I buy a new set of kitchen knives and use one of them to stab my wife, the knife company cannot be held criminally or civilly responsible. There is a caveat: the company can't advertise the illegal uses. So as long as the knife company doesn't explicitly state "our new knives are excellent for stabbing your wife," it won't be culpable. I know this is an extreme example, but the idea is the same. Brands are generally smart enough not to put illegal uses in their advertisements.
As for the unnumbered paragraph: defamation has been addressed, and it doesn't cover parody, satire, lookalikes or photoshopped content that doesn't involve a specific intent to defraud. That all falls under protected free expression, even if people don't like it. AI content should be no different.
3. Prosecutors don't make laws, legislators do. Unless a law is changed, which would likely be tested at least at the state supreme court level, in-depth analysis of proprietary AI deep learning models could be protected under trade secrets. While the legal system in general is skewed toward the benefit of prosecutors, specific laws are generally designed to balance protecting individual liberties against the public interest in deterring societal harm, and that interest in all likelihood would not extend to AI-generated parody porn absent an intent to defraud.
4. Legally speaking, that doesn't matter. Whether one person is capable of committing a crime, or of opening themselves to civil liability, or a million people are, the conduct itself is what is illegal. The number of people able to commit the crime has little bearing on who actually does it.

Everybody who has a gun is capable of easily killing someone. That doesn't make guns illegal, or gun manufacturers civilly liable for illegal actions committed with their products. More people own guns than own AI software.