Face-Swapped Porn Videos Have Put Women Everywhere on Edge. How Can We Stop Them?

Jeff John Roberts 2019-01-18
In recent years, rapid advances in artificial intelligence software have made it remarkably easy to graft the face of a female celebrity, or even an ordinary woman, onto the body of an adult-film actress.

Illustration: Andrew Nusca/Fortune


In the darker corners of the Internet, you can now find celebrities like Emma Watson and Salma Hayek performing in pornographic videos. The clips are fake, of course—but it’s distressingly hard to tell. Recent improvements in artificial intelligence software have made it surprisingly easy to graft the heads of stars, and ordinary women, to the bodies of X-rated actresses to create realistic videos.

These explicit movies are just one strain of so-called “deepfakes,” which are clips that have been doctored so well they look real. Their arrival poses a threat to democracy; mischief makers can, and already have, used them to spread fake news. But another great danger of deepfakes is their use as a tool to harass and humiliate women.
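The face-grafting described above is widely reported to rest on a pair of autoencoders that share a single encoder: each decoder learns to reconstruct one person's face, so feeding one person's encoded frame through the other person's decoder renders the swap. The PyTorch sketch below is a minimal, purely schematic illustration of that shared-encoder design; the module names, sizes, and layers are assumptions, and the data collection, face alignment, and training pipeline a real system would need are all omitted.

```python
import torch
import torch.nn as nn

LATENT = 256  # hypothetical latent-code size


def down(in_ch: int, out_ch: int) -> nn.Sequential:
    """One downsampling step: halves the spatial resolution."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1), nn.ReLU())


class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 face crop to a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            down(3, 32), down(32, 64), down(64, 128),
            nn.Flatten(), nn.Linear(128 * 8 * 8, LATENT),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one person's face from the code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))


# One shared encoder, one decoder per identity. In training, each decoder only
# ever sees its own person, so the shared encoder is pushed toward a
# person-independent representation of pose and expression.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

face_b = torch.rand(1, 3, 64, 64)     # a frame showing person B
swapped = decoder_a(encoder(face_b))  # rendered with person A's features
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

Because the encoder never stores identity-specific detail, the same latent code can drive either decoder; that interchangeability is what makes the swap cheap once the models are trained, and it is why ordinary social-media photos are enough raw material.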

There are plenty of celebrity deepfakes on pornographic websites, but Internet forums dedicated to custom deepfakes—men paying to create videos of ex-partners, co-workers, and others without their knowledge or consent—are proliferating. Creating these deepfakes isn’t difficult or expensive in light of the proliferation of A.I. software and the easy access to photos on social media sites like Facebook.

Yet the legal challenges for victims to remove deepfakes can be daunting. While the law may be on their side, victims also face considerable obstacles—ones that are familiar to those who have sought to confront other forms of online harassment.

The First Amendment and Deepfakes

Charlotte Laws knows how devastating non-consensual pornography can be. A California author and former politician, Laws led a successful campaign to criminalize so-called “revenge porn” after someone posted nude photos of her teenage daughter on a notorious website. She is also alarmed by deepfakes.

“The distress of deepfakes is as bad as revenge porn,” she says. “Deepfakes are realistic, and their impact is compounded by the growth of the fake news world we’re living in.”

Laws adds that deepfakes have become a common way to humiliate or terrorize women. In a survey she conducted of 500 women who had been victims of revenge porn, Laws found that 12% had also been subjected to deepfakes.

One way to address the problem could involve lawmakers expanding state laws banning revenge porn. These laws, which now exist in 41 U.S. states, are of recent vintage and came about as politicians began to change their attitudes to non-consensual pornography.

“When I began, it wasn’t something people addressed,” Laws says. “Those who heard about it were against the victims, from media to legislators to law enforcement. But it’s really gone in the other direction, and now it’s about protecting the victims.”

New criminal laws could be one way to fight deepfakes. Another approach is to bring civil lawsuits against the perpetrators. As the Electronic Frontier Foundation notes in a blog post, those subjected to deepfakes could sue for defamation or for being portrayed in a “false light.” They could also file a “right of publicity” claim, alleging the deepfake makers profited from their image without permission.

All of these potential solutions, however, could bump up against a powerful obstacle: free speech law. Anyone sued over deepfakes could claim the videos are a form of cultural or political expression protected by the First Amendment.

Whether this argument would persuade a judge is another matter. Deepfakes are new enough that courts haven’t issued any decisive ruling on which of them might count as protected speech. The issue is even more complicated given the messy state of the law related to the right of publicity.

“The First Amendment should be the same across the country in right of publicity cases, but it’s not,” says Jennifer Rothman, a professor at Loyola Law School and author of a book about privacy and the right of publicity. “Different circuit courts are doing different things.”

In the case of deepfakes involving pornography, however, Rothman predicts that most judges would be unsympathetic to a First Amendment claim—especially in cases where the victims are not famous. A free speech defense to claims of false light or defamation, she argues, would turn in part on whether the deepfake was presented as true and would be analyzed differently for public figures. A celebrity victim would have the added hurdle of showing “actual malice,” the legal term for knowing the material was fake, in order to win the case.

Any criminal laws aimed at deepfakes would likely survive First Amendment scrutiny so long as they narrowly covered sexual exploitation and did not include material created as art or political satire.

In short, free speech laws are unlikely to be a serious impediment for targets of deepfake pornography. Unfortunately, even if the law is on their side, the victims nonetheless have few practical options to take down the videos or punish those responsible for them.

A New Takedown System?

If you discover something false or unpleasant about you on the Internet and move to correct it, you’re likely to encounter a further frustration: There are few practical ways to address it.

“Trying to protect yourself from the Internet and its depravity is basically a lost cause … The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, whose face appears in numerous deepfakes, recently told the Washington Post.

Why is Johansson so cynical? Because the fundamental design of the Internet—distributed, without a central policing authority—makes it easy for people to anonymously post deepfakes and other objectionable content. And while it’s possible to identify and punish such trolls using legal action, the process is slow and cumbersome—especially for those who lack financial resources.

According to Laws, it typically takes $50,000 to pursue such a lawsuit. That money may be hard to recoup since defendants are often broke or based in a far-flung location. This leaves the option of going after the website that published the offending material, but this, too, is likely to prove fruitless.

The reason is a powerful law known as Section 230, which creates a legal shield for website operators regarding what users post on their sites. It ensures a site like Craigslist, for instance, isn’t liable if someone uses its classified ads to write defamatory messages.

In the case of sites like 8Chan and Mr. Deepfakes, which host numerous deepfake videos, the operators can claim immunity because it is not them but their users that are uploading the clips.

The legal shield is not absolute. It contains an exception for intellectual property violations, which obliges websites to take down material if they receive a notice from a copyright owner. (The process also allows a takedown to be contested through a counter notice, after which the site may restore the material.)
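The notice-and-takedown cycle just described reduces to a small state machine: a notice arrives, the material comes down, a counter notice is filed, the material may go back up. The Python sketch below models only that sequence; the class, method, and field names are invented for illustration and do not correspond to any statute's text or any real platform's API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    LIVE = auto()
    TAKEN_DOWN = auto()
    RESTORED = auto()


@dataclass
class HostedItem:
    """A piece of user-uploaded content tracked through the takedown cycle."""
    url: str
    status: Status = Status.LIVE
    history: list[str] = field(default_factory=list)

    def receive_notice(self, claimant: str) -> None:
        # A notice from a rights holder obliges the host to remove the item.
        if self.status is Status.LIVE:
            self.status = Status.TAKEN_DOWN
            self.history.append(f"taken down after notice from {claimant}")

    def receive_counter_notice(self, uploader: str) -> None:
        # The uploader may contest the takedown; the host informs the
        # claimant and may then restore the material.
        if self.status is Status.TAKEN_DOWN:
            self.status = Status.RESTORED
            self.history.append(f"restored after counter notice from {uploader}")


item = HostedItem(url="https://example.com/clip/123")
item.receive_notice(claimant="rights holder")
item.receive_counter_notice(uploader="original poster")
print(item.status, item.history)
```

The open legal question the next paragraphs take up is who can trigger the first transition: the exception clearly covers copyright owners, but it is murkier whether a state-law right of publicity claim qualifies.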

The intellectual property exception could help deepfake victims defeat the websites’ immunity, notably if the victim invokes a right of publicity. But here again the law is muddled. According to Rothman, courts are unclear on whether the exception applies to state intellectual property laws—such as right of publicity—or only to federal ones like copyright and trademark.

All of this raises the question of whether Congress and the courts, which have been chipping away at Section 230’s broad immunity in recent years, should change the law and make it easier for deepfake victims to remove the images. Laws believes this would be a useful measure.

“I don’t feel the same as Scarlett Johansson,” Laws says. “I’ve seen the huge improvements in revenge porn being made over the past five years. I have great hope for continual improvement and amendments, and that we’ll get these issues under control eventually.”

Indeed, those who share Laws’ views have momentum on their side as more people look askance at Internet platforms that, in the words of the legal scholar Rebecca Tushnet, enjoy “power without responsibility.” And in a closely watched case involving the dating app Grindr, a court is weighing whether to require website operators to be more active in purging their platforms of abusive behavior.

Not everyone is convinced this is a good idea, however. Section 230 is regarded by many as a visionary piece of legislation, which allowed U.S. Internet companies to flourish in the absence of legal threats. The Electronic Frontier Foundation has warned that eroding immunity for websites could stifle business and free expression.

This raises the question of whether Congress could draft a law narrow enough to help victims of deepfakes without such unintended consequences. As a cautionary tale, Annemarie Bridy, a law professor at the University of Idaho, points to the misuse of the copyright takedown system in which companies and individuals have acted in bad faith to remove legitimate criticism and other legal content.

Still, given what’s at stake with pornographic deepfake videos, Bridy says, it could be worth drafting a new law.

“The seriousness of the harm from deepfakes, to me, justifies an expeditious remedy,” she says. “But to get the balance right, we’d also need an immediate, meaningful right of appeal and safeguards against abusive notices intended to censor legitimate content under false pretenses.”
