An Introduction to Computational Learning Theory

Michael J. Kearns, Umesh Vazirani

  • Publisher: MIT
  • Publication Date: 1994-08-15
  • List Price: $3,090
  • VIP Price: $2,936 (95% of list)
  • Language: English
  • Pages: 221
  • Binding: Hardcover
  • ISBN: 0262111934
  • ISBN-13: 9780262111935
  • Not available for order

Product Description

Description

Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics.

Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning.

Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems, and new presentations of the standard proofs.

The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
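For readers unfamiliar with the Valiant model mentioned above, the following is a minimal sketch of the standard PAC-learnability criterion in common textbook notation; the book's own notation and size parameters may differ slightly.

  % Sketch: PAC learnability of a concept class C over a domain X.
  % An algorithm A PAC-learns C if, for every target concept c in C, every
  % distribution D on X, and every epsilon, delta in (0, 1/2), given i.i.d.
  % labeled examples (x, c(x)) with x drawn from D, A outputs a hypothesis h with
  \[
    \Pr\bigl[\operatorname{err}_D(h) \le \varepsilon\bigr] \ge 1 - \delta,
    \qquad
    \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr],
  \]
  % and A runs in time polynomial in 1/epsilon, 1/delta, and the relevant
  % size parameters of the target concept and the examples.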

Michael J. Kearns is Professor of Computer and Information Science at the University of Pennsylvania.
