What's wrong with Facebook? Its technology keeps getting better, but it isn't solving its problems

JEREMY KAHN
2020-11-23

As long as Facebook continues to give special treatment to powerful politicians and popular, but extremist, media organizations, no amount of technical progress is likely to repair the company's battered public image.

Facebook has revealed that the artificial intelligence systems it uses to police its social media sites are now good enough to automatically flag more than 94% of hate speech on those sites, and to catch more than 96% of content linked to organized hate groups.

This represents a rapid leap in Facebook's capabilities—in some cases, these A.I. systems are five times better at catching content that violates the company's policies than they were just one year ago.

And yet this technological progress isn't likely to do much to improve Facebook's embattled public image as long as the company continues to make exceptions to its rules for powerful politicians and popular, but extremist, media organizations.

In recent weeks, Facebook has been under fire for not doing more to slow the false claims about the election made by U.S. President Donald Trump and not banning former Trump advisor Steve Bannon after he used Facebook to distribute a podcast in which he called for the beheading of two U.S. officials whose positions have sometimes angered the president.

Facebook did belatedly label some of Trump's posts, such as ones in which he said he had won the election, as misleading and appended a note saying that "ballot counting will continue for days or weeks" to some of them. But critics said it should have removed or blocked these posts completely. Rival social media company Twitter did temporarily block new posts from the official Trump campaign account as well as those from some Trump advisors during the run-up to the election. Facebook said Trump's posts fell within a "newsworthiness" exemption to its normal policies.

As for Bannon's posts, Facebook CEO Mark Zuckerberg said they had been taken down but that the right-wing firebrand had not violated the company's rules frequently enough to warrant banning him from the platform.

Mike Schroepfer, Facebook's chief technology officer, acknowledged that efforts to strengthen the company's A.I. systems so they could detect—and in many cases automatically block—content that violates the company's rules were not a complete solution to the company's problems with harmful content.

"I'm not naive about this," Schroepfer said. "I'm not saying technology is the solution to all these problems." Schroepfer said the company's efforts to police its social network rested on three legs: technology capable of identifying content that violated the company's policies, the capability to quickly act on that information to prevent that content from having an impact and the policies themselves. Technology could help with the first two of those, but could not determine the policies, he added.

The company has increasingly turned to automated systems to help augment the 15,000 human content moderators, many of them contractors, that it employs across the globe. This year, for the first time, Facebook began using A.I. to determine the order in which content is brought before these human moderators for a decision on whether it should remain up or be taken down. The software prioritizes content based on how severe the likely policy violation is and how likely the piece of content is to spread across Facebook's social networks.
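To make the ranking idea concrete, here is a minimal sketch, not Facebook's actual system, of how a review queue could order flagged posts by combining two model outputs: the estimated severity of the likely violation and the predicted chance the post spreads widely. All names and scores below are hypothetical.

```python
# Toy review queue: order flagged posts by severity x predicted spread.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float
    post_id: str = field(compare=False)

def priority(severity: float, virality: float) -> float:
    """Higher severity and higher predicted spread get reviewed sooner.
    heapq is a min-heap, so the combined score is negated."""
    return -(severity * virality)

queue: list[ReviewItem] = []
# Illustrative model outputs in [0, 1].
for post_id, severity, virality in [
    ("post_a", 0.9, 0.2),   # severe but unlikely to spread
    ("post_b", 0.6, 0.95),  # milder but likely to go viral
    ("post_c", 0.1, 0.1),
]:
    heapq.heappush(queue, ReviewItem(priority(severity, virality), post_id))

while queue:
    item = heapq.heappop(queue)
    print(f"review {item.post_id} (score={-item.priority:.2f})")
```

Under this toy scoring, a mild post that is about to go viral can outrank a severe post nobody is sharing, which matches the stated goal of acting before content has an impact.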

Schroepfer said that the aim of the system is to try to limit what Facebook calls "prevalence"—a metric which translates roughly into how many users might be able to see or interact with a given piece of content.
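As a toy illustration of a prevalence-style metric: Facebook's public enforcement reports describe prevalence in view-based terms, roughly the share of content views that land on violating material, which is the same notion of exposure the article's gloss points at. The exact methodology below is assumed, and the numbers are made up.

```python
# Toy prevalence calculation: fraction of sampled views that were of
# policy-violating content. Methodology and numbers are illustrative only.
def prevalence(violating_views: int, sampled_views: int) -> float:
    """Share of sampled content views that landed on violating posts."""
    return violating_views / sampled_views if sampled_views else 0.0

# e.g. 5 violating views out of 10,000 sampled views -> 0.05% prevalence
print(f"{prevalence(5, 10_000):.4%}")
```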

The company has moved rapidly to put several cutting-edge A.I. technologies pioneered by its own researchers into its content moderation systems. These include software that can translate between 100 languages without using a common intermediary. This has helped the company's A.I. to combat hate speech and disinformation, especially in less common languages for which it has far fewer human content moderators.
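The description matches Facebook AI's publicly released M2M-100 model, which translates directly between any pair of 100 languages without pivoting through English. Below is a minimal sketch using the open Hugging Face checkpoint, assuming the `transformers` library is installed; whatever runs inside Facebook's moderation pipeline is presumably a separate production system.

```python
# Direct French -> Swahili translation with the public M2M-100 checkpoint,
# with no English intermediary.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "fr"  # source language: French
encoded = tokenizer("La vie est belle.", return_tensors="pt")
generated = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.get_lang_id("sw"),  # target: Swahili
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```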

Schroepfer said the company had made big strides in "similarity matching," which tries to determine whether a new piece of content is broadly similar to another that has already been removed for violating Facebook's policies. He gave the example of a COVID-19-related disinformation campaign: posts falsely claiming that surgical face masks contained known carcinogens were taken down after review by human fact-checkers, and a second post that used slightly different language and a similar, but not identical, face mask image was identified by an A.I. system and automatically blocked.
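The text side of similarity matching can be approximated in a few lines with off-the-shelf embeddings. The sketch below uses the open sentence-transformers library, not Facebook's internal models, and an illustrative threshold; a real system would also fingerprint the image and tune thresholds against false positives.

```python
# Toy text similarity matching: flag a new post whose embedding sits close
# to a post already removed for violating policy.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

removed = "Surgical face masks contain known carcinogens."  # already taken down
new_post = "Doctors admit surgical masks are full of cancer-causing chemicals."

emb = model.encode([removed, new_post], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

THRESHOLD = 0.8  # illustrative; a production system would tune this carefully
if score >= THRESHOLD:
    print(f"auto-block candidate (similarity={score:.2f})")
else:
    print(f"below threshold (similarity={score:.2f}), route to human review")
```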

He also said that many of these systems were now "multi-modal"—able to analyze text in conjunction with images or video and sometimes also audio. And while Facebook has individual software designed to catch each specific type of malicious content—one for advertising spam and one for hate speech, for example—it also has a new system it calls Whole Post Integrity Embedding (WPie for short) that is a single piece of software that can identify a whole range of different types of policy violations, without having to be trained on a large number of examples of each violation type.
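WPie itself has not been published as code, but the general idea, a single whole-post embedding feeding one multi-label classifier that covers every violation type at once, can be sketched in PyTorch. Every layer size and the number of violation types below are made up for illustration, and the text and image encoders are assumed to exist upstream.

```python
# Toy whole-post classifier: fuse text and image embeddings into one post
# embedding, then emit one logit per policy-violation type (multi-label).
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden=512,
                 n_violation_types=10):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
        )
        # one shared post embedding, one logit per violation type
        self.head = nn.Linear(hidden, n_violation_types)

    def forward(self, text_emb: torch.Tensor,
                image_emb: torch.Tensor) -> torch.Tensor:
        post_emb = self.fuse(torch.cat([text_emb, image_emb], dim=-1))
        return self.head(post_emb)  # multi-label logits

model = WholePostClassifier()
logits = model(torch.randn(1, 768), torch.randn(1, 2048))
probs = torch.sigmoid(logits)  # independent probability per violation type
print(probs.shape)  # torch.Size([1, 10])
```

A sigmoid per class, rather than a softmax over classes, reflects that a single post can violate several policies at once.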

The company has also used research competitions to help it build better content moderation A.I. Last year, it announced the results of a contest in which researchers built software to automatically identify deepfakes: highly realistic fake videos that are themselves created with machine learning techniques. It is currently running a competition to find the best algorithms for detecting hateful memes, a difficult challenge because a successful system needs to understand how the image and the text in a meme jointly shape its meaning, and may also need to grasp context not found within the meme itself.
