The Invariance Principle
Provisional Chinese title: 不變性原則

Lopez-Paz, David

  • Publisher: Summit Valley Press
  • Publication date: 2026-06-30
  • List price: $2,860
  • VIP price: $2,717 (5% off)
  • Language: English
  • Pages: 400
  • Binding: Paperback (trade paper)
  • ISBN: 0262053349
  • ISBN-13: 9780262053341
  • Category: Machine Learning
  • Not yet released; unavailable for order

Description

How statistical invariances will help us build AI systems exhibiting human-like performance by following human-like strategies.

Current machine learning systems crumble when the distributions of training and testing examples differ in spurious correlations. This is a major roadblock toward the development of advanced machine intelligence, which demands not only human-like performance but the deployment of human-like strategies. The prevalent approach in AI, fixated on recklessly minimizing average training error, falls short of producing AI systems capable of authentic out-of-distribution generalization. This book introduces the Invariance Principle, a new epistemological tool to unearth correlations invariant across diverse collections of empirical data.

The Invariance Principle, encapsulated in the axiom "frame your problem so its answer matches across circumstances," will not only find its practical incarnation in the family of Invariant Risk Minimization algorithms, but also illuminate our understanding of causation. It will permeate topics such as environment discovery, large language models, self-supervised learning, mixing data augmentation, uncertainty estimation, and fairness. The author argues that the Invariance Principle is a central inductive bias fueling advances across fields of knowledge, such as physics, metaphysics, and cognitive science.
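To make the "answer matches across circumstances" axiom concrete, the Invariant Risk Minimization family is commonly instantiated as IRMv1: minimize the sum of per-environment risks plus a penalty on the gradient of each risk with respect to a scalar classifier frozen at w = 1. The sketch below is a minimal NumPy illustration for a fixed linear featurizer; the toy two-environment dataset, feature names, and coefficients are assumptions for demonstration, not examples from the book.

```python
import numpy as np

def irmv1_objective(environments, phi, lam=1.0):
    """Per-environment risk plus the IRMv1 invariance penalty.

    environments: list of (X, y) arrays.
    phi: fixed linear featurizer; predictions are w * (X @ phi) with
         the scalar classifier w frozen at 1.
    Returns (total_risk, lam * total_penalty).
    """
    risks, penalties = 0.0, 0.0
    for X, y in environments:
        f = X @ phi                      # predictions at w = 1
        err = f - y
        risks += np.mean(err ** 2)       # per-environment squared error
        # d/dw of mean((w*f - y)^2), evaluated at w = 1
        grad_w = np.mean(2.0 * f * err)
        penalties += grad_w ** 2         # nonzero when w = 1 is not optimal
    return risks, lam * penalties

rng = np.random.default_rng(0)

def make_env(beta, n=5000):
    """Toy environment: x1 causes y invariantly; x2 is spurious,
    with an environment-dependent coefficient beta."""
    x1 = rng.normal(size=n)
    y = x1 + 0.1 * rng.normal(size=n)
    x2 = beta * y + rng.normal(size=n)
    return np.stack([x1, x2], axis=1), y

envs = [make_env(1.0), make_env(2.0)]
_, pen_inv = irmv1_objective(envs, np.array([1.0, 0.0]))   # invariant feature
_, pen_spur = irmv1_objective(envs, np.array([0.0, 0.5]))  # spurious feature
```

Because the spurious feature's relationship to the label changes across the two environments, no single classifier is simultaneously optimal in both, so the penalty for the spurious featurizer is large while the invariant featurizer's penalty is near zero.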

The final chapter includes personal examples of how invariance has shaped the author's understanding of his own subjective experience, as well as how he has interpreted both Eastern and Western philosophical traditions.

About the Author

David Lopez-Paz is a research scientist at FAIR, Meta. Previously, he held positions at the European Space Agency, Red Bull, Formula 1, and Google Research.
