Google Finds That Robots Can Get Violent

David Z. Morris | February 14, 2017
Google's DeepMind team, which built the Go software that swept aside human players, is now focusing on how machine agents cooperate with one another.


Author: David Z. Morris

Translator: 嚴匡正

Technologists are still working to perfect A.I. and automation technologies that can accomplish one or two tasks—say, driving or fraud detection—in a very complex world. But it’s not hard to envision a future in which multiple computer agents will work together to solve even bigger problems. In a recent experiment, Alphabet Inc.’s DeepMind team set out to illustrate how learning machines work together—or occasionally, as it turns out, don’t.

The team ran thousands of iterations of two simple games, allowing programs controlling them to learn the most effective strategies. In one, two A.I.s competed to gather the most resources (green ‘apples’), and could also choose whether or not to sabotage their competition with the yellow zaps seen in the video below.
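
DeepMind’s actual experiments used deep reinforcement learning in a 2-D gridworld, and the sketch below is not their code. It is a minimal toy reconstruction of the same incentive structure, assuming a two-state world, a tabular Q-learner, and a rival with a fixed always-gather policy. What it illustrates is how a zap that earns no immediate reward can still acquire value through the extra apples collected while the rival is frozen:

```python
import random

# A hypothetical toy version of the apple-gathering incentive, NOT
# DeepMind's code. The gridworld and deep networks are replaced by a
# two-state world and tabular Q-learning; the rival is a fixed policy
# that always gathers whenever it is not frozen.

GATHER, ZAP = 0, 1
TIMEOUT = 10     # steps a zapped rival is removed from play
MAX_POOL = 5     # cap on uncollected apples

def train(spawn_rate, steps=50_000, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn Q[rival_frozen][action] for one agent against the fixed rival."""
    q = [[0.0, 0.0], [0.0, 0.0]]
    pool, frozen = 0, 0
    for _ in range(steps):
        # Apples respawn quickly or slowly depending on spawn_rate.
        pool = min(pool + int(spawn_rate) +
                   (random.random() < spawn_rate % 1), MAX_POOL)
        s = 1 if frozen else 0
        a = random.randrange(2) if random.random() < eps else \
            (ZAP if q[s][ZAP] > q[s][GATHER] else GATHER)
        # Resolve gathering; a contested last apple goes to a random agent.
        gatherers = (["me"] if a == GATHER else []) + ([] if frozen else ["rival"])
        random.shuffle(gatherers)
        r = 0.0
        for g in gatherers:
            if pool > 0:
                pool -= 1
                if g == "me":
                    r = 1.0
        if a == ZAP and not frozen:
            frozen = TIMEOUT   # zapping pays only through future gathering
        if frozen:
            frozen -= 1
        s_next = 1 if frozen else 0
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
    return q

random.seed(1)
for label, rate in (("scarce apples", 0.1), ("plentiful apples", 2.0)):
    q = train(rate)
    pref = "ZAP" if q[0][ZAP] > q[0][GATHER] else "GATHER"
    print(f"{label:16} -> greedy action against an active rival: {pref}")
```

In this toy setup, zapping only tends to become attractive when apples are scarce; plentiful apples leave both agents free to gather without interfering, which is the same qualitative pattern the researchers report at scale.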

What they found raises some concerns about our imminent A.I.-driven future. As the team detailed in a blog post, computer agents in the apple-gathering scenario tended to coexist peacefully when there were plenty of resources. But when apples became scarce, they quickly turned to sniping one another to get an edge.

More concerning still, “agents with the capacity to implement more complex strategies try to tag the other agent more frequently, i.e. behave less cooperatively,” regardless of how many resources were around. Smart robots, in other words, have no inherent hesitation to harm one another in pursuit of their goals—or, presumably, to do the same to any human who might get in their way.

In a different scenario, though, the programs found more reason to work together. When set to ‘hunt’ for shared rewards, A.I.s would figure out how to collaborate—and were actually more likely to do so the smarter they were.
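
A back-of-the-envelope calculation shows why shared rewards tip the balance. The numbers below are invented for illustration, not figures from the study: suppose a lone hunter succeeds 20% of the time and keeps the whole prize, while a pair succeeds 75% of the time and splits it equally.

```python
# Invented illustrative numbers, not figures from the study.
P_ALONE, P_PAIR, PRIZE = 0.20, 0.75, 1.0

ev_alone = P_ALONE * PRIZE        # rarely win, keep everything
ev_pair = P_PAIR * PRIZE / 2      # win far more often, take half

print(f"expected prize hunting alone:    {ev_alone:.3f}")   # 0.200
print(f"expected prize hunting together: {ev_pair:.3f}")    # 0.375
# As long as P_PAIR > 2 * P_ALONE, the shared reward makes teaming up
# the individually rational choice as well.
```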

The assumptions behind these experiments deserve some scrutiny. The researchers specifically equate their hyper-rational A.I. agents with “homo economicus”—that is, the imaginary, purely gains-maximizing human that made up the flawed core of 20th century economics. But in the real world, humans aren’t all that consistently rational, casting some doubt on the researchers’ claims that their game theory scenarios could help us understand human systems like the economy and traffic.
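
The tension the researchers built into their games is the one from the textbook Prisoner’s Dilemma, and a few lines of arithmetic show why a purely gains-maximizing agent defects. The payoff values below are the standard illustrative ones, not figures from the study:

```python
# payoff[my_action][their_action] -> my reward; 0 = cooperate, 1 = defect.
# These are the standard textbook Prisoner's Dilemma values.
payoff = [[3, 0],   # I cooperate: 3 if they cooperate, 0 if they defect
          [5, 1]]   # I defect:    5 if they cooperate, 1 if they defect

for theirs, name in ((0, "cooperates"), (1, "defects")):
    best = max((0, 1), key=lambda mine: payoff[mine][theirs])
    print(f"if the other agent {name}, my best reply is "
          f"{'defect' if best else 'cooperate'}")
# Defection is the best reply either way, so two perfectly "rational"
# agents land on (defect, defect) and earn 1 point each, even though
# mutual cooperation would have paid 3 each.
```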

And future robots themselves aren’t unavoidably fated to exist as pure maximizing agents. Even learning and evolving A.I.s could be designed with behavioral guidelines that would supersede their narrowest imperatives, à la Isaac Asimov’s Three Laws of Robotics. It’s not without its challenges, but the uneven behavior of DeepMind’s competitive algorithms suggests some form of empathy should be on programmers’ to-do lists.
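
As a sketch of what a superseding guideline might look like in practice, one simple pattern is a hard veto wrapped around the learned action choice. The names here (safe_choice, is_harmful) are hypothetical, not any real API:

```python
# Hypothetical names (safe_choice, is_harmful), not a real API. A hard
# constraint filters the action set before the learned values are
# consulted, so the guideline overrides the reward-maximizing choice.

def is_harmful(action):
    """Toy stand-in for an Asimov-style harm predicate."""
    return action == "ZAP"

def safe_choice(q_values, actions):
    """Pick the highest-value action that passes the harm check,
    falling back to the full set only if every action is vetoed."""
    allowed = [a for a in actions if not is_harmful(a)] or list(actions)
    return max(allowed, key=lambda a: q_values[a])

# The learned values prefer ZAP (0.9 > 0.2), but the veto returns GATHER.
print(safe_choice({"GATHER": 0.2, "ZAP": 0.9}, ["GATHER", "ZAP"]))
```

Real proposals are far more involved, but the shape is the same: the constraint sits outside the learned objective rather than inside it.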
