Google’s chief decision scientist is out.
Cassie Kozyrkov, who served as the internet company’s chief decision scientist and helped pioneer the field of decision intelligence, is going solo and working on projects to help business leaders navigate the tricky waters of artificial intelligence.
As AI becomes more powerful and more prevalent across industries, Kozyrkov will launch her first LinkedIn course, publish a book, and give keynote speeches about how to make informed decisions. Her goal is to give leaders the tools to think about how they deploy AI, and to help the public hold AI decision-makers accountable for the choices that impact millions of people, she told Fortune.
She spent 10 years at Google, five of them as chief decision scientist. Among her responsibilities, she guided company leaders to make informed and responsible decisions regarding AI.
“I’ve always believed Google’s heart is in the right place,” Kozyrkov said. But it is a large company, and outsiders sometimes equated her personal opinions with Google’s stance on a topic. In her new role, she won’t have to worry about how her advocacy impacts a company she represents, she told Fortune.
AI is undergoing a massive period of growth, which has stirred anxiety about the future for some. Top minds in the AI space recently warned it could end humanity as we know it. This moment feels like an inflection point for the tech world. It is essential to have leaders in place who are educated in decision-making and consumers who can hold them accountable, according to Kozyrkov.
Kozyrkov, who grew up in South Africa, received a bachelor’s degree in economics from the University of Chicago. She also has a master’s degree in mathematical statistics from North Carolina State University and a partially completed PhD in psychology and neuroscience from Duke University. Prior to working at Google, she spent 10 years as an independent data science consultant.
During Kozyrkov’s time as chief decision scientist, which began in 2018, Google’s AI division grew substantially. CEO Sundar Pichai unveiled Duplex, an add-on to Google Assistant that can make phone calls on behalf of a user, intended to help schedule appointments, restaurant reservations, and other engagements. Google has made leaps in generating text, images, and videos from prompts, and it is developing robots that can write their own code. It also released Bard, its large language model rivaling ChatGPT. Many of Google’s developments have raised ethical questions from employees and academics, which isn’t unlike what’s happening at other AI companies. Google didn’t respond to requests for comment.
Kozyrkov would not comment on decisions she helped make at Google because of her nondisclosure agreement, but it’s not difficult to imagine the hard choices the company has faced when it comes to AI. In building Bard, Google had to decide whether to scrape copyrighted information to train the AI model. A lawsuit filed against Google in July accuses the company of doing so. Google also had to decide at what point to release the technology to remain competitive with ChatGPT but not damage its reputation. It came under fire right after it published the Bard demo video in which the chatbot gave an incorrect answer.
Kozyrkov’s work revolves around the idea that individuals can make choices that affect a lot of people, and those at the top aren’t necessarily educated in the practice of decision-making. “It is easy to think of technology as autonomous,” she said. “But there are people behind that technology making very subjective decisions, with or without skill, to affect millions of lives.”
Humans have long grappled with how best to make a decision, and the methods continue to evolve. There’s Benjamin Franklin’s three-century-old pro/con model, but there are also more advanced ways to answer important questions, Kozyrkov said. While she is targeting business leaders, her methods can also be used to make other important life decisions, like where to go to college or whether to start a family.
Decision-makers should ask themselves: What would it take to change my mind? They should also use data, but prior to seeing it, set criteria for what they will do based on what the data says. This helps decision-makers avoid confirmation bias, or using data to confirm an opinion they already have. It is also helpful to document the process of coming to an important decision—including the information available at the time—to evaluate the quality of a choice after it is made, according to Kozyrkov.
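That pre-commitment advice lends itself to a concrete illustration. Below is a minimal, hypothetical Python sketch of a decision log that freezes the criteria before any data is seen and documents the trail for later review; the class, field, and example names are invented for illustration and are not drawn from Kozyrkov's materials or any real library.

```python
from dataclasses import dataclass, field
from datetime import datetime

# A minimal sketch of the pre-commitment idea described above: write down
# the decision criteria BEFORE looking at the data, so the data cannot be
# bent to fit a pre-existing opinion. All names here (DecisionRecord, the
# example criteria) are illustrative, not from any real library.

@dataclass
class DecisionRecord:
    question: str                      # the decision being made
    default_action: str                # fallback if the data is inconclusive
    criteria: dict[str, str]           # observed outcome -> pre-committed action
    log: list[str] = field(default_factory=list)

    def precommit(self) -> None:
        """Record the frozen criteria before any data is seen."""
        stamp = datetime.now().isoformat(timespec="seconds")
        self.log.append(f"{stamp} pre-committed criteria for: {self.question}")
        for outcome, action in self.criteria.items():
            self.log.append(f"  if '{outcome}' -> {action}")

    def decide(self, observed_outcome: str) -> str:
        """Apply the pre-committed rule to what the data actually showed."""
        action = self.criteria.get(observed_outcome, self.default_action)
        stamp = datetime.now().isoformat(timespec="seconds")
        self.log.append(f"{stamp} observed '{observed_outcome}' -> {action}")
        return action


# Usage: commit to actions first, then let the data pick one.
record = DecisionRecord(
    question="Ship the new model this quarter?",
    default_action="delay and gather more data",
    criteria={
        "error rate below 2%": "ship to everyone",
        "error rate 2-5%": "ship to beta users only",
        "error rate above 5%": "do not ship",
    },
)
record.precommit()
print(record.decide("error rate 2-5%"))   # -> ship to beta users only
print("\n".join(record.log))              # the documented decision trail
```

Because the mapping from outcomes to actions is written down before the data arrives, the record doubles as the kind of documentation Kozyrkov recommends: it captures what was known and promised at the time, so the quality of the choice can be judged fairly after the fact.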