Artificial Intelligence Is Easily Fooled: It Can't Tell a Turtle From a Rifle
Researchers from MIT’s computer science and artificial intelligence laboratory have discovered how to trick Google’s software that automatically recognizes objects in images. They created an algorithm that subtly modified a photo of a turtle so that Google’s image-recognition software thought it was a rifle. What’s especially noteworthy is that when the MIT team created a 3D printout of the turtle, Google’s software still thought it was a weapon rather than a reptile.

The confusion highlights how criminals could eventually exploit image-detecting software, especially as it becomes more ubiquitous in everyday life. Technology companies and their clients will have to consider the problem as they increasingly rely on artificial intelligence to handle vital jobs.

For example, airport scanning equipment could one day be built with technology that automatically identifies weapons in passenger luggage. But criminals could try to fool the detectors by modifying dangerous items like bombs so they are undetectable.

All the changes the MIT researchers made to the turtle image were unrecognizable to the human eye, explained Anish Athalye, an MIT researcher and PhD candidate in computer science who co-led the experiment.

After the original turtle image test, the researchers reproduced the reptile as a physical object to see if the modified image would still trick Google’s computers. The researchers then took photos and video of the 3D-printed turtle and fed that data into Google’s image-recognition software.

Sure enough, Google’s software thought the turtles were rifles.

MIT publicized an academic paper about the experiment last week. The authors are submitting the paper, which builds on previous studies testing artificial intelligence, for further review at an upcoming AI conference.

Computers designed to automatically spot objects in images are based on neural networks, software that loosely imitates how the human brain learns.
If researchers feed enough images of cats into these neural networks, they learn to recognize patterns in those images and can eventually spot felines in photos without human help.

But these neural networks can sometimes stumble if they are fed certain types of pictures, such as those with bad lighting or obstructed objects. The way these neural networks work is still somewhat mysterious, Athalye explained, and researchers still don’t know why they may or may not accurately recognize something.

The MIT team’s algorithm created what are known as adversarial examples: essentially, computer-manipulated images crafted to fool software that recognizes objects. While the turtle image may resemble a reptile to humans, the algorithm morphed it so that it shares unknown characteristics with an image of a rifle. The algorithm also took into account conditions like poor lighting or miscoloration that could have caused Google’s image-recognition software to misfire, Athalye said. The fact that Google’s software still mislabeled the turtle after it was 3D printed shows that the adversarial qualities embedded by the algorithm are retained in the physical world.

Although the research paper focuses on Google’s AI software, Athalye said that similar image-recognition tools from Microsoft and the University of Oxford also stumbled. Most other image-recognition software from companies like Facebook and Amazon would also likely blunder, he speculates, because of their similarities.

In addition to airport scanners, home security systems that rely on deep learning to recognize certain images may also be vulnerable to being fooled, Athalye explained.

Consider cameras that are increasingly set up to record only when they notice movement. To avoid being tripped by innocuous activity like cars driving by, cameras could be trained to ignore automobiles.
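To make the adversarial-example idea above concrete, here is a minimal sketch in Python. It is not the MIT team's code: their attack targets deep networks and optimizes over many simulated viewing conditions so the perturbation survives 3D printing, whereas this toy uses a fixed linear classifier and a single fast-gradient-sign step. All names and numbers here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an adversarial example against a toy linear classifier.
# Illustrates the general technique only -- the MIT attack works on deep
# networks and also accounts for lighting and color variation.

rng = np.random.default_rng(0)
n_pixels = 64

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, already-"trained" model: P(rifle | x) = sigmoid(w . x + b).
w = rng.normal(size=n_pixels)
b = 0.0

def p_rifle(x):
    return sigmoid(w @ x + b)

# A "turtle" image the model classifies correctly: shift a random image so
# the model's logit is -2 (confidently not a rifle).
x = rng.normal(size=n_pixels) * 0.1
x -= w * (w @ x + 2.0) / (w @ w)

# Fast-gradient-sign step: move each pixel by at most eps in the direction
# that raises the rifle score. For a linear model that direction is sign(w).
eps = 0.08
x_adv = x + eps * np.sign(w)

print(f"clean:       P(rifle) = {p_rifle(x):.3f}")      # well below 0.5
print(f"adversarial: P(rifle) = {p_rifle(x_adv):.3f}")  # pushed past 0.5
print(f"max per-pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

The point of the sketch is that each pixel moves by at most `eps`, yet the classifier's decision flips, mirroring how the turtle's changes were invisible to humans but decisive for the software.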
To take advantage, however, criminals could wear T-shirts specially designed to fool computers into thinking they see trucks instead of people. If so, burglars could easily bypass the security system.

Of course, this is all speculation, Athalye concedes. But considering the frequency of hacking, it’s something worth considering. Athalye said he wants to test his idea and eventually make “adversarial t-shirts” that have the ability to “mess up a security camera.”

Google and other companies like Facebook are aware that hackers are trying to figure out ways to spoof their systems. For years, Google has been studying the kind of threats that Athalye and his MIT team produced. A Google spokesperson declined to comment on the MIT project, but pointed to two recent Google research papers that highlight the company’s work on combating adversarial techniques.

“There are a lot of smart people working hard to make classifiers [like Google’s software] more robust,” Athalye said.