AI is taking over our lives, but exactly what goes on inside AI systems is unclear. Two researchers from EQTY Lab shine a light on how to make these mechanics more visible.
Part of the magic of generative AI is that most people have no idea how it works. At a certain level, it's even fair to say that no one is entirely sure how it works, as the inner workings of ChatGPT can leave the brightest scientists stumped. It's a black box. We're not entirely sure how it's trained, which data produces which outcomes, and what IP is being trampled in the process. This is both part of the magic and part of what's terrifying.
What if there was a way to peer inside the black box, allowing a clear visualization of how AI is governed and trained and produced? This is the goal — or one of the goals — of EQTY Lab, which conducts research and creates tools to make AI models more transparent and collaborative. EQTY Lab’s Lineage Explorer, for example, gives a real-time view of how the model is built.
?如果有一種方法可以窺探黑箱內(nèi)部,讓人們清楚地看到人工智能是如何管理、訓(xùn)練和生產(chǎn)的,那會(huì)怎樣?這就是 EQTY 實(shí)驗(yàn)室的目標(biāo),或者說是目標(biāo)之一,該實(shí)驗(yàn)室開展研究并開發(fā)工具,使人工智能模型更加透明、更具協(xié)作性。例如,EQTY 實(shí)驗(yàn)室的 "線性資源管理器"(Lineage Explorer)可以實(shí)時(shí)查看模型是如何建立的。
All of these tools are meant as a check against opacity and centralization. “If you don’t understand why an AI is making the decisions it's making or who's responsible, it's really hard to interrogate why harmful things are being spewed,” says Ariana Spring, Head of Research at EQTY Lab. “So I think centralization — and keeping those secrets in black boxes — is really dangerous.”
?所有這些工具都是為了防止不透明和集中化。"EQTY實(shí)驗(yàn)室研究主管阿麗亞娜-斯普林(Ariana Spring)說:"如果你不了解人工智能為什么會(huì)做出這樣的決定,或者誰該對此負(fù)責(zé),那么就很難審問為什么會(huì)出現(xiàn)有害的東西。"因此,我認(rèn)為集中化--以及將這些秘密保存在黑盒子里--真的很危險(xiǎn)。
Joined by her colleague Andrew Stanco (head of finance), Spring shares how crypto can create more transparent AI, how these tools are already being deployed in service of climate change science, and why these open-sourced models can be more inclusive and representative of humanity at large.
Ariana Spring: We're pioneering new solutions to build trust and innovation in AI. And generative AI is kind of the hot topic right now, and that's the most emergent property, so that's something that we're focused on.
?阿麗亞娜-斯普林 我們正在開拓新的解決方案,以建立人工智能領(lǐng)域的信任和創(chuàng)新。生成式人工智能是當(dāng)下的熱門話題,也是最新興的特性,所以這也是我們關(guān)注的重點(diǎn)。 ?
But we also look at all different kinds of AI and data management, and trust and innovation are really what we lean into. We do that by using advanced cryptography to make models more transparent, but also collaborative. We see transparency and collaboration as two sides of the same coin in creating smarter and safer AI.
So, in a process as complex as AI training, having those tamper-proof and verifiable attestations — both during the training and afterwards — really helps. It creates trust and visibility.