

When Will Killer Robots Be Stopped?

Jeremy Khan
2021-12-27

Killer robots are coming into widespread use, and the world is running out of time to stop them.



Translator: Feng Feng

Proofreader: Xia Lin


It was billed as “a historic opportunity” to stop killer robots. It failed.

That was the alarming news out of a United Nations disarmament committee held in Geneva at the end of last week. The committee had spent eight years debating what, if anything, to do about the rapid development of weapons that use artificial intelligence to locate, track, attack, and kill targets without human intervention. Many countries want to see such weapons banned, but the UN group operates by consensus and several states, including the U.S., Russia, the United Kingdom, India, and Israel, were opposed to any legally binding restrictions.

In the end, the committee could agree only to keep talking, with no clear objective for its future discussions. This kicking of the can down the road was an outcome that the U.S. representative to the UN committee called “a dream mandate” for all countries, because it did not foreclose any particular outcome.

Many other nations and activists saw that outcome as something quite different—a woefully inadequate response.

“It was a complete failure, a disaster really,” said Noel Sharkey, an emeritus professor of robotics and A.I. at the University of Sheffield, in the U.K., and one of several spokespersons for the Stop Killer Robots campaign.

“It was a car crash,” Verity Coyle, a senior adviser to Amnesty International focused on its campaign for a ban of the weapons, said of the UN committee’s meeting.

Not science fiction any more

A.I.-guided weapons were once the stuff of science fiction—and were still largely in that realm when the UN committee first began talking about autonomous weapons in 2014. But real systems with the ability to select targets without human oversight are now starting to be deployed on battlefields around the globe. Civil society groups and scientists who are convinced that they pose a grave danger have dubbed them “killer robots.” Their more technical moniker is lethal autonomous weapons, or LAWS.

A UN report on the Libyan civil war said earlier this year that an autonomous weapon, the Kargu-2 quadcopter, produced by a company in Turkey, was likely used to track and target fleeing fighters in one engagement. There have been reports of similar autonomous munitions being used by Azerbaijan in its recent war with Armenia, as well as autonomous guns being deployed by Israel in its most recent conflict with Hamas.

These developments have led many to fear that the world is running out of time to take action to stop or slow the widespread use of these weapons. “The pace of technology is really beginning to outpace the rate of diplomatic talks,” said Clare Conboy, another spokesperson for the Stop Killer Robots campaign.

Silver lining

While campaigners were bitterly disappointed with results of this month’s meetings of the UN committee, some say its failure may counterintuitively present the best opportunity in years for effective international action to restrict their development. That’s because it provides an opportunity to move the discussion of an international treaty limiting LAWS to a different diplomatic venue where a handful of states won’t be able to thwart progress.

“I think it is an exciting moment for those states calling for a legally binding instrument to come together and think about what is the best next step,” Coyle said. “The 60-odd countries that are in favor of a ban need to take a decision on starting a parallel process, somewhere where consensus rules could not be used to block the will of the majority.”

Coyle said that discussions at the UN committee, although failing to reach an agreement, had at least helped to raise awareness of the danger posed by autonomous weapons. A growing number of nations have come out in favor of a ban, including New Zealand, Germany, Austria, Norway, the Netherlands, Pakistan, China, Spain, and the 55 countries that make up the African Union.

In addition, thousands of computer scientists and artificial intelligence researchers have signed petitions calling for a ban on autonomous weapons and pledged not to work on developing them. Now, Coyle said, it was important to take that momentum and use it in another forum to push for a ban.

Coyle said that an advantage of taking the discussion outside the UN’s Convention on Certain Conventional Weapons, or CCW—the UN committee that has been discussing LAWS regulation for the better part of a decade—is that the use of these weapons by law enforcement agencies and in civil wars is outside the scope of that UN committee’s mandate. Those potential uses of killer robots are of grave concern to many civil society groups, including Amnesty International and Human Rights Watch.

Campaigners say there are several possible alternatives to the CCW. Coyle said that the Group of Eight industrialized nations might become a forum for further discussion. Another option would be for one of the states currently in favor of a ban to try to push something through the UN General Assembly.

Coyle and Sharkey also both pointed to the process that led to the international treaty prohibiting the use of anti-personnel land mines, and a similar convention barring the use of cluster munitions, as potential models for how to achieve a binding international treaty outside the formal UN process.

In both of those examples, a single nation—Canada for land mines, Norway for cluster munitions—agreed to host international negotiations. Among the countries that some campaigners think might be persuaded to take on that role for LAWS are New Zealand, whose government in November committed the country to playing “a leadership role” in pushing for a ban. Others include Norway, Germany, and the Netherlands, whose governments have all made similar statements over the past several months.

Hosting such negotiations requires a large financial commitment from one country, running into perhaps millions of dollars, Sharkey said, which is one reason that the African Union, for instance, which has come out in favor of a ban, is unlikely to volunteer to host an international negotiation process.

Another drawback of this approach is that whatever treaty is developed would be binding only on those countries that choose to sign it, rather than something that might cover all UN members or all signatories to the Geneva Convention. The U.S., for instance, has not acceded to either the land mine or the cluster munitions treaties.

But advocates of this approach note that these treaties establish an international norm and exert a high degree of moral pressure even on those countries that decline to sign them. The land mine treaty resulted in many arms makers ceasing production of the weapons, and the U.S. government in 2014 promised not to use the munitions outside of the Korean Peninsula, where American military planners have argued land mines are essential to the defense of South Korea against a possible invasion by North Korea.

“Slaughterbots”

Not everyone is convinced a process outside the CCW will work this time around. For one thing, countries that fear a particular adversary will acquire LAWS are unlikely to agree to unilaterally abandon the deterrent of having that capability themselves, said Robert Trager, a professor of international relations at the University of California, Los Angeles, who attended last week’s Geneva discussions as a representative of the Center for the Governance of AI.

Trager also noted that in the case of land mines and cluster munitions, the use—and, critically, the limitations—of those technologies were well established at the time treaties were negotiated. Countries understood exactly what they were giving up. That is not the case with LAWS, which are only just being developed and have barely ever been deployed, he said.

Max Tegmark, a physics professor at MIT and cofounder of the Future of Life Institute, which seeks to address “existential risks” to humanity, and Stuart Russell, an A.I. researcher at the University of California at Berkeley, have proposed that some countries currently standing in the way of binding restrictions on killer robots might be persuaded to support a ban on autonomous weapons below a certain size or weight threshold that are designed to primarily target individual people.

Tegmark said that these “slaughterbots,” which might be small drones that could attack in swarms, would essentially represent “a poor man’s weapon of mass destruction.” They could be deployed by terrorists or criminals to either commit mass murder or assassinate individuals, such as judges or politicians. This would be highly destabilizing to the existing international order and so existing powers, such as the U.S. and Russia, ought to be in favor of banning or restricting the proliferation of slaughterbots, he said.

Progress toward a ban had been made more difficult by the conflation of these small autonomous weapons with larger systems designed to attack ships, aircraft, tanks, or buildings. Established powers would be more hesitant to give up these weapons, he said.

Sharkey said that Tegmark’s and Russell’s position represented a fringe view and was not the position of the Campaign to Stop Killer Robots. He said civil society groups were just as concerned about larger autonomous weapons, such as A.I.-piloted drones that could take off from the U.S. and fly across oceans to bomb targets, possibly killing civilians in the process, as they were about smaller systems that could target individual people.

Campaigners also say they worry about the sincerity of some of the countries that have said they support a binding legal agreement restricting LAWS. Pakistan has been a leading proponent of a binding agreement at the CCW, but has taken the position that without a ban that would cover all countries, it needs such weapons to counter India’s development of similar systems.
