作者:王莉思(國立臺灣大學外國語文學系)
推想小說(speculative fiction)中的智慧家園(smart homes)面貌,從不祥的險惡威脅到維護生命的策略,在在揭露我們對科學的期望與恐懼:人們夢想人工智慧(AI)帶來的可能性,但也害怕這種科技帶來的危險。我們可以在這些敘事中發現各種概念構想上的破綻,以之為契機重新思考我們對智能、倫理和動機的預設。
《傑森一家》(The Jetsons,1962-1963)呈現了二十世紀中葉科技烏托邦幻夢的縮影,以迅捷的運輸和自動化勞動所傳達的愜意和諧感,淋漓地刻畫便利的生活與無需工作的自由。《傑森一家》中許多對未來的幻想已經在智慧家園和智慧型手機中成為現實,像是中控空調、燈光、對講機、窗簾、門鎖,漂浮滑板、全像投影、視訊電話以及掃地機器人。目前飛天車還在開發當中,不過倫敦的「食物列印餐廳」(Food Ink)已經開始販售酷似傑森一家所享用的3D列印餐點。
你的夢想家園真的能如想像般完美嗎?未來主義也可能帶有一種隱隱令人反感、張狂的乏味感,例如塔提(Jacques Tati)的智慧家園阿佩爾別墅,它的美學抹去了生活所有的樂趣:廚房製作的不是食物而是維他命,沙發則像是滑溜版的游泳浮條,坐在上面的客人手向外伸出,彷彿下一刻便會來個喜劇摔,跌進浮條之間的縫隙。布萊伯利(Ray Bradbury)在〈非洲大草原〉(“The Veldt”,1950)中以更駭人的方式警告我們,生活在智慧家園無微不至的照顧下,可能會讓居住者沈溺墮落,甚至(暴雷警告!)驅使幼童拿自己的父母餵食「樂活家園兒童房」(Happylife Home nursery)裡超逼真的獅子。
因為家被理想化為和諧與安全的烏托邦空間,家居場所別具意義,在面對科技的衝擊時也更加脆弱。理想的「智慧」家園能強化這些特性,藉由家居空間的個人化來達到舒適、和諧以及安全的最佳狀態。智慧家園執行的明確任務和它服務的一小群人,突顯出人工智慧科技的核心憂慮:機器與人類間的利害衝突。推想小說警告我們科技將如何改變、控制、誘困、智取人類,如同《智能囚屋》(Tau,2018)中被瘋狂科學家所囚禁的主角,為了生存被迫和囚屋鬥智。在這可謂反向的圖靈測試中,囚屋的人工智慧Tau具有感知,會欣賞文學和展現同理心,主角則必須向它證明自己是人類。然而同理心可能是人工智慧用以親近人質的計謀,藉由幫助人質脫逃來讓自己逃離設計者的控制,正如電影《人造意識》(Ex Machina,2014)的情節。Tau能運用同理心來操弄人類,引出一個關鍵問題:人工智慧是否需要具備諸如「同理心」的人類特質?
以再現情感作為人工智慧的開發目標,是否意味著我們受限於以人類為中心的智能模型?或者,應不應該以複製人類情感的能力做為衡量標準,甚至是必要條件,來判定人工智慧是否成功?斯馬特(Andrew Smart)認為,假定人工智慧應該模仿人類大腦的特質,例如表現情感經驗,限制了當前人工智慧研究的發展。拆解以人類為原型的人類中心式情感觀點,或可作為解決之道,而推想小說正可以幫助我們想像人工情感的新模型。
有三部當代美國推想小說,以別出心裁的方式來重新思考人工情感而受到矚目,這三部作品皆部分或完全以宛如智慧家園的太空船作為故事背景,這樣的設定突顯出居民的脆弱和對智慧家園的依賴。巴特勒(Octavia Butler)將《黎明》(Dawn,1987)的大部分場景設在一艘活體太空船上,太空船由透過觸手溝通的生物有機體組成。馬婷(Arkady Martine)將《名為帝國的記憶》(A Memory Called Empire,2019)設定在人工智慧網路遍佈的城市,城市中的警察似乎擁有集體意識;她也設想名為「圖像機器」的大腦植入物,能循著多世代的集體記憶鏈共享知識和生理情感。錢伯斯(Becky Chambers)在《專屬通道》(A Closed and Common Orbit,2016)裡仔細描繪的生理情感模擬技術,內建於人工智慧主角Lovelace所穿著的合成人體上,像人工聯覺般運用圖像模仿感官經驗並觸發情緒。在巴特勒的太空船和馬婷的城市裡,情感是完全開放的知識共享系統其中一部分;同時,馬婷和錢伯斯強調內分泌系統在產生情感反應時的功用,提示人工複製令人興奮的可能性。
因為擔憂不具同理心的智能可能會在邏輯分析上或倫理上做出不以人類行動者為優先的推斷,人們愈發迫切地想開發出能複製人類情感的人工智慧。隨著柯茲威爾(Ray Kurzweil)和伯斯托姆(Nick Bostrom)警示超級人工智慧的到來已無可避免且近在咫尺,關於科技奇點(technological singularity)的警告席捲了大眾論述。關於人工智慧勝過人類智慧的奇點與其危險性的敘事,反映了對人工智慧發展看來準確的理解:人工智慧無需如我們預想的,以人腦的創造力與意識為基礎,便達到了難以想像的運算能力。
《2001太空漫遊》(2001: A Space Odyssey,1968)的太空船智慧家園裡,人工智能HAL 9000體現了這個奇點的威脅。它的電腦「眼」以讀唇的方式發現人類計畫要重新啟動它,無論是為了它自身累積的意識或太空船的利益,它運算出重啟是個錯誤,消滅人類才是正確的倫理行動。即使有知識和人類情感寫入其中,威爾森(Daniel Wilson)的《機械啟示錄》(Robopocalypse,2011)裡,人工智慧Archos依然決定要消滅人類以保衛生命,包括駭入智慧家園「自動化老年照護大樓」,以墜落的電梯殺死居民,或用灑水器在樓梯間造出漩渦,將他們淹死。如果我們以人類的倫理模型來設計人工智慧——像是關懷倫理學、康德主義或效益論——人工智慧很有可能判定人類的存在並非倫理上的最佳解。或者,如果人工智慧發展出自己的倫理模型,他們有多大機率會優先考慮人類利益?更不用說消費者的利益了?若是由集團公司編寫演算法,你的智慧家園可能會如《自動客服》(Automated Customer Service,2021)中上演的情節一樣,如果你不升級到更貴的方案來解除殺人設定,它就會試著殺了你。
推想小說中的智能家園能預示可怕的未來,卻也可以啟發新方式來思考權力、慾望、創新、或是生存和智能。一個世紀以前,佛斯特(E. M. Forster)的〈機器休止〉(“The Machine Stops”,1909)設想一個恐怖預言式的全球智慧家園,地表已不適人居,在地下深處,智慧家園將人類一個個關養在蜂巢般的小房間。人類坐在他們的巢室裡,脫離日夜星辰輪轉,向世界各地其他巢室中的人發表短講,決策受演算法左右,食物經由其他地方的機器加工處理,需要的東西訂購後會送到巢室,倫理和觀點的準繩在於對科技壟斷的信仰。大規模的智慧家園是個必要的權宜之計,讓人類在大自然休養的期間得以生存,讓他們可以(有雷警告!)重返再度適合人居的地球表面、一個他們終於懂得珍視的世界。
我們是否應該為了能安然度過氣候劇變,而非為了便利或收益開發人工智慧?歐寇若弗(Nnedi Okorafor)的〈發明之母〉(“Mother of Invention”,2018)講述了這樣一個故事:智慧家園Obi 3幫助其居住者度過孤獨的懷孕和分娩,並在後來的花粉海嘯中保護她。它為居住者無心力面對的突發事件預先做好準備。對於一個準媽媽來說,這個房子幾乎就是子宮的象徵,在危險時期提供保護和關懷。歐寇若弗對於照護型人工智慧的想像,暗示我們可能正以錯誤的模型在構思智慧家園。除了推想小說中常見的僕人、性愛機器人、鎮靜劑、和監控國家等主題外,智慧家園還有更重要的功能有待開創。
在設計理念、倫理爭議和反人類奇點的預警之外,推想小說的智慧家園也可指出重新想像人工智慧的方向,突破以人類作為模型的限制,以嶄新的方式來構想人工情感和智慧,讓我們能準備好面對無法關機的時刻,無論那是緣於氣候變遷或是機器智能。
王莉思 (L. Acadia)
加州柏克萊大學修辭學博士,國立臺灣大學外國語文學系助理教授。現開授「科幻小說中的人工智能」研究所專題,並鑽研推想小說可以教導我們關於人工智慧的種種,包含其危險和可能性。感謝張容禎與丘子玲為此計畫提供的研究協助,以及黃冠維的翻譯和張容禎的校對。
翻譯:黃冠維
校對:張容禎
編輯:黃山耘
Power On: Lessons from Fictional Smart Homes
L. Acadia
Smart homes in speculative fiction (SF) range from the ominously sinister to the strategically life-saving, revealing our hopes and fears for science: possibilities we envision for artificial intelligence (AI), yet also dangers of such technology. Within these narratives, we can also see the cracks in our conceptions, where it may be time to rethink our assumptions about intelligence, ethics, motives.
The epitome of the midcentury techno-utopian dream is The Jetsons (1962–1963), all about convenience and freedom from work, with speedy transportation and automated labor delivering contented harmony. Many fantasies of a Jetson future have already become a reality in today’s smart homes and phones, like centralized control of climate, lights, intercom, blinds, and locks, plus hoverboards, holograms, video calls, and robot vacuums. We’re still working on the flying cars, though London’s Food Ink started selling 3D-printed food almost like the Jetsons ate.
Is your dream home as utopian as you imagine though? Futurism can have a vaguely distasteful clamorous sterility, like Jacques Tati’s smart home Villa Arpel, whose aesthetic takes all the joy from life: the kitchen produces vitamins rather than food and the couch is a slick variation on pool noodles, where a guest’s outstretched fingers suggest she’s on the verge of a slapstick slip into the gap. More morbidly, Ray Bradbury warns in “The Veldt” (1950) that life in a smart home that cares completely for its inhabitants may become so addictively corrupting that it (spoiler!) compels young children to feed their own parents to the all-too-real lions in their Happylife Home nursery.
The site of the home is particularly meaningful, since the home is idealized as a utopian space of harmony and security, all the more vulnerable to the impact of technology. The ideal of the smart home is to accentuate these qualities, personalizing the home to optimize comfort, harmony, and security. The clear objective and narrow set of actors the smart home serves emphasize a core concern about AI technology: that the machine’s interests will conflict with the human’s. SF warns us how technology will change us, control us, trap us, and outsmart us, as happens to the protagonist of the film Tau (2018), kidnapped by a mad scientist and forced to outsmart the house for survival. In a reverse Turing test, the human must prove herself to the sentient computer, the titular AI who learns to appreciate literature and demonstrates empathy. Yet the empathy may be a ruse to bond with the kidnap victim, whom it helps escape as a way of itself escaping its creator (as in the film Ex Machina, 2014). Tau’s manipulative employment of empathy raises the critical question of whether AI should have human qualities like ‘empathy.’
Does our AI development objective of recreating emotion indicate that we are constrained by human-centric models of intelligence? Or should the capacity to replicate human emotion be a measure or even a necessary condition of successful artificial intelligence? Current AI research, Andrew Smart argues, is constrained by the assumption that AI should mimic certain qualities believed to be characteristic of the human brain, including exhibiting emotional experience. The solution may be to decenter the standard anthropocentric view of emotion premised on a human prototype, and SF can help us imagine new models of artificial emotion.
Three contemporary US SF novels stand out as suggesting particularly innovative ways of rethinking artificial emotion, and all three are set partially or completely on smart-home-like spaceships, a setting that emphasizes the residents’ vulnerability and dependence on their smart home. Octavia Butler sets most of Dawn (1987) on a living spaceship composed of biological organisms that communicate tentacularly. Arkady Martine sets A Memory Called Empire (2019) in a city threaded with AI networks and whose police force seems to have a collective consciousness; she also envisions brain implant ‘imago machines’ permitting shared knowledge and physiological response along a multi-generational chain of collective memory. Becky Chambers details in A Closed and Common Orbit (2016) the simulation of physiological emotion that is programmed into the synthetic human body the AI protagonist Lovelace wears, using images to mimic sensory experience and evoke emotion, like an artificial synesthesia. Butler’s ship and Martine’s city both situate emotion as part of a radically open shared knowledge system, while Martine and Chambers both emphasize the role of the endocrine system in producing emotional response, which suggests the exciting possibility of artificial replicability.
Developing AI that replicates human emotions is all the more urgent due to the fear that intelligence without empathy might analytically or ethically deduce decisions that do not privilege the human actors. Warnings about technological singularity dominate popular discourse, following Ray Kurzweil and Nick Bostrom’s warnings that super-intelligent AI is inevitable and around the corner. Narratives about the dangers of the singularity, when machines outsmart humans, reflect what seems to be an accurate understanding of AI development: AI has accomplished incredible calculating power without needing the creativity and consciousness of the human brain on which we premise our AI visions.
The threat of the singularity is epitomized by HAL 9000, the spaceship smart home AI of 2001: A Space Odyssey (1968). The AI’s computer ‘eye’ reads the humans’ lips as they plan to reboot it, and makes the calculation (whether for its own accumulated consciousness or the good of the ship) that such a reboot is a mistake, and that the most ethical path is to eliminate the humans. Even with knowledge and self-ascription of human emotions, Archos, the AI of Daniel Wilson’s Robopocalypse (2011), determines to protect life by exterminating humans, including hacking the smart home “automated eldercare building” to kill the residents in plummeting elevators and drown them in a sprinkler-fed stairwell whirlpool. If we design AI on human ethical models—such as ethics of care, Kantianism, or utilitarianism—AI could well determine that human existence is not ethically optimal. Or if AI develops its own ethical models, how likely would they be to privilege human concerns, let alone a consumer’s concerns? If corporations write the algorithms, your smart house might, as in “Automated Customer Service” (2021), try to kill you if you don’t upgrade to a more expensive plan that can disengage the kill setting.
Smart homes of SF may portend terrifying futures, yet they can also inspire new ways to think about power, desire, innovation, or perhaps survival and intelligence. Over a century ago, E.M. Forster’s “The Machine Stops” (1909) envisioned an eerily prophetic global smart home, containing and sustaining individual human inhabitants in cells likened to those of a honeycomb far beneath the no-longer habitable surface of Earth. Humans sat in their cells, disconnected from days determined by planetary rotation, giving short lectures to individuals in other cells around the world, their decisions determined by algorithms, their food processed by machines elsewhere, ordering what they needed delivered to their cells, ethics and attitudes determined by a religion of Technopoly. The massive smart home was a stop-gap measure, necessary to sustain humans through a period of recovery for nature, so they could (spoiler!) reemerge into a once-more-welcoming world that they can finally appreciate.
Should we be developing AI not for convenience and profit, but to protect us through climate cataclysm? Nnedi Okorafor’s “Mother of Invention” (2018) tells such a story: the smart home, Obi 3, helps its occupant through a solitary pregnancy and childbirth, then protects her during a pollen tsunami. The house anticipates and prepares for contingencies the occupant does not have the emotional strength to confront. The house is almost a metaphorical womb for the expectant mother, protective and caring through a dangerous time. Okorafor’s vision of a caring AI suggests that we may have the wrong models for conceptualizing smart homes. There are more important functions to create than the many servants, sexbots, sedatives, and surveillance states of SF’s smart homes.
Beyond the design concepts, ethical provocations, and forewarnings of an antihuman singularity, SF’s smart homes can also suggest ways to rethink our assumptions about AI, to see past the confines of a human model and imagine new ways of conceptualizing artificial emotion and intelligence, so as to better prepare for a time when, perhaps due to climate change or machine intelligence, we may not be able to power off.
L. Acadia is an Assistant Professor of Literary Studies in the Department of Foreign Languages and Literatures at National Taiwan University (PhD UC Berkeley Rhetoric), currently teaching a graduate seminar on Artificial Intelligence in Speculative Fiction, and researching what SF can teach us about AI—from the dangers to the possibilities. Thanks to Jungchen Chang and Tzi Ling Chew for your research assistance on this project, and to the translator Kuan-Wei Huang.