An Introduction to Computational Learning Theory

Author: Michael J. Kearns, Umesh Vazirani

Publisher: MIT Press

ISBN: 9780262111935

Category: Computers

Page: 207

Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems, and new presentations of the standard proofs. The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
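
For orientation, the PAC criterion around which the book is organized can be stated compactly. The following is the standard formulation (notation and the exact polynomial bounds vary slightly across sources):

```latex
% A concept class C over domain X is efficiently PAC-learnable if there is a
% polynomial-time algorithm A such that for every target c in C, every
% distribution D on X, and every epsilon, delta in (0, 1), when A is given
% m = poly(1/epsilon, 1/delta) examples drawn i.i.d. from D and labeled by c,
% it outputs a hypothesis h satisfying
\Pr_{S \sim D^{m}}\!\left[\, \operatorname{err}_{D}(h) \le \varepsilon \,\right] \ge 1 - \delta,
\qquad \operatorname{err}_{D}(h) = \Pr_{x \sim D}\left[\, h(x) \ne c(x) \,\right].
```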

Computational Learning Theory

Author: M. H. G. Anthony, N. Biggs

Publisher: Cambridge University Press

ISBN: 9780521599221

Category: Computers

Page: 157

This is an introduction to the theory of computational learning.

Systems that Learn

An Introduction to Learning Theory

Author: Sanjay Jain, Daniel Osherson, James S. Royer, Arun Sharma

Publisher: MIT Press

ISBN: 9780262100779

Category: Computers

Page: 317

Formal learning theory is one of several mathematical approaches to the study of intelligent adaptation to the environment. The analysis developed in this book is based on a number-theoretical approach to learning and uses the tools of recursive-function theory to understand how learners come to an accurate view of reality. This revised and expanded edition of a successful text provides a comprehensive, self-contained introduction to the concepts and techniques of the theory. Exercises throughout the text provide experience in the use of computational arguments to prove facts about learning.
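
The success criterion at the heart of this framework, Gold-style identification in the limit, can be sketched as follows; this is the standard formulation rather than the book's exact notation:

```latex
% A learner M identifies a total computable function f "in the limit" if,
% fed the graph of f as the stream (0, f(0)), (1, f(1)), (2, f(2)), ...,
% its sequence of conjectured programs e_0, e_1, e_2, ... converges to a
% single correct program:
\exists e\; \exists n_0\; \big(\forall n \ge n_0:\; e_n = e\big)
\quad\text{and}\quad \varphi_e = f,
% where \varphi_e denotes the function computed by program e.
```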

Reinforcement Learning

An Introduction

Author: Richard S. Sutton, Andrew G. Barto

Publisher: A Bradford Book

ISBN: 0262039249

Category: Computers

Page: 552

The significantly expanded and updated new edition of a widely used text on reinforcement learning, one of the most active research areas in artificial intelligence. Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives while interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the field's key ideas and algorithms. This second edition has been significantly expanded and updated, presenting new topics and updating coverage of other topics. Like the first edition, this second edition focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes. Part I covers as much of reinforcement learning as possible without going beyond the tabular case for which exact solutions can be found. Many algorithms presented in this part are new to the second edition, including UCB, Expected Sarsa, and Double Learning. Part II extends these ideas to function approximation, with new sections on such topics as artificial neural networks and the Fourier basis, and offers expanded treatment of off-policy learning and policy-gradient methods. Part III has new chapters on reinforcement learning's relationships to psychology and neuroscience, as well as an updated case-studies chapter including AlphaGo and AlphaGo Zero, Atari game playing, and IBM Watson's wagering strategy. The final chapter discusses the future societal impacts of reinforcement learning.
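
To give a flavor of the tabular methods of Part I, here is a minimal Q-learning loop in Python. The toy corridor environment and its reset/step/actions interface are invented for illustration and are not code from the book:

```python
import random
from collections import defaultdict

class Chain:
    """Toy 5-state corridor: move left or right, reward 1 at the right end.
    This environment is invented for illustration, not taken from the book."""
    actions = (-1, +1)
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(4, max(0, self.s + a))
        done = self.s == 4
        return self.s, (1.0 if done else 0.0), done

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning in the spirit of Part I of Sutton and Barto."""
    Q = defaultdict(float)  # Q[(state, action)], defaults to 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy behaviour policy
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # bootstrap from the greedy value of the successor state
            target = r if done else r + gamma * max(Q[(s2, act)] for act in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning(Chain())
```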

Introduction to Machine Learning

Author: Ethem Alpaydin

Publisher: MIT Press

ISBN: 0262325756

Category: Computers

Page: 640

The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning exist already, including systems that analyze past sales data to predict customer behavior, optimize robot behavior so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data. Introduction to Machine Learning is a comprehensive textbook on the subject, covering a broad array of topics not usually included in introductory machine learning texts. Subjects include supervised learning; Bayesian decision theory; parametric, semi-parametric, and nonparametric methods; multivariate analysis; hidden Markov models; reinforcement learning; kernel machines; graphical models; Bayesian estimation; and statistical testing.

Machine learning is rapidly becoming a skill that computer science students must master before graduation. The third edition of Introduction to Machine Learning reflects this shift, with added support for beginners, including selected solutions for exercises and additional example data sets (with code available online). Other substantial changes include discussions of outlier detection; ranking algorithms for perceptrons and support vector machines; matrix decomposition and spectral methods; distance estimation; new kernel algorithms; deep learning in multilayered perceptrons; and the nonparametric approach to Bayesian methods. All learning algorithms are explained so that students can easily move from the equations in the book to a computer program. The book can be used by both advanced undergraduates and graduate students. It will also be of interest to professionals who are concerned with the application of machine learning methods.
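
As a small taste of the Bayesian decision theory and parametric methods the book covers, here is a toy one-dimensional Gaussian classifier in Python; the data layout and function names are illustrative assumptions, not drawn from the book:

```python
import math

def fit_gaussian_classes(data):
    """Fit a one-dimensional Gaussian per class by maximum likelihood.
    `data` maps class label -> list of feature values (toy setup,
    invented for illustration)."""
    params = {}
    for label, xs in data.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        params[label] = (mu, var, len(xs))
    return params

def classify(x, params):
    """Bayes decision rule: choose the class maximizing prior * likelihood."""
    total = sum(n for _, _, n in params.values())
    def score(label):
        mu, var, n = params[label]
        prior = n / total
        likelihood = math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        return prior * likelihood
    return max(params, key=score)

params = fit_gaussian_classes({"a": [1.0, 1.2, 0.8], "b": [3.0, 3.3, 2.9]})
print(classify(2.5, params))  # -> "b"
```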

An Introduction to Natural Computation

Author: Dana Harry Ballard

Publisher: MIT Press

ISBN: 9780262522588

Category: Computers

Page: 307

"This is a wonderful book that brings together in one place the modern view of computation as found in nature. It is well written and has something for everyone from the undergraduate to the advanced researcher." -- Terrence J. Sejnowski, Howard Hughes Medical Institute at The Salk Institute for Biological Studies, La Jolla, California It is now clear that the brain is unlikely to be understood without recourse to computational theories. The theme of An "Introduction to Natural Computation" is that ideas from diverse areas such as neuroscience, information theory, and optimization theory have recently been extended in ways that make them useful for describing the brain's programs. This book provides a comprehensive introduction to the computational material that forms the underpinnings of the currently evolving set of brain models. It stresses the broad spectrum of learning models--ranging from neural network learning through reinforcement learning to genetic learning--and situates the various models in their appropriate neural context. To write about models of the brain before the brain is fully understood is a delicate matter. Very detailed models of the neural circuitry risk losing track of the task the brain is trying to solve. At the other extreme, models that represent cognitive constructs can be so abstract that they lose all relationship to neurobiology. An "Introduction to Natural Computation" takes the middle ground and stresses the computational task while staying near the neurobiology. The material is accessible to advanced undergraduates as well as beginning graduate students. CONTENTS: 1. Introduction Part I "Core Concepts" 2. Fitness 3. Programs 4. Data 5. Dynamics 6. Optimization Part II "Memories" 7. Content Addressible Memories 8. Supervised Learning 9. Unsupervised Learning Part III "Programs" 10. Markov Models 11. Reinforcement Learning Part IV "Systems" 12. Genetic Algorithms

An Introduction to Genetic Algorithms

Author: Melanie Mitchell

Publisher: MIT Press

ISBN: 9780262631853

Category: Computers

Page: 209

Genetic algorithms are used in science and engineering for problem solving and as computational models. This brief introduction enables readers to implement and experiment with genetic algorithms on their own. The descriptions of applications and modeling projects stretch beyond the boundaries of computer science to include systems theory, game theory, biology, ecology, and population genetics. 20 illustrations.
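
A minimal bit-string genetic algorithm of the kind the book invites readers to experiment with might look like the following Python sketch; the operator choices (tournament selection, one-point crossover, bit-flip mutation) are common defaults, not necessarily Mitchell's own examples:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=50, generations=100,
                      crossover_rate=0.7, mutation_rate=0.01):
    """Minimal bit-string genetic algorithm; `fitness` maps a list of
    bits to a non-negative number."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            a, b = random.sample(pop, 2)  # binary tournament selection
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:
                cut = random.randrange(1, length)  # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Usage: maximize the number of ones ("one-max").
best = genetic_algorithm(sum)
```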

Perceptrons

An Introduction to Computational Geometry

Author: Marvin Minsky, Seymour A. Papert, Léon Bottou

Publisher: MIT Press

ISBN: 0262534770

Category: Computers

Page: 316

Reissue of the 1988 Expanded Edition with a new foreword by Léon Bottou. In 1969, ten years after the discovery of the perceptron -- which showed that a machine could be taught to perform certain tasks using examples -- Marvin Minsky and Seymour Papert published Perceptrons, their analysis of the computational capabilities of perceptrons for specific tasks. As Léon Bottou writes in his foreword to this edition, "Their rigorous work and brilliant technique does not make the perceptron look very good." Perhaps as a result, research turned away from the perceptron. Then the pendulum swung back, and machine learning became the fastest-growing field in computer science. Minsky and Papert's insistence on its theoretical foundations is newly relevant. Perceptrons -- the first systematic study of parallelism in computation -- marked a historic turn in artificial intelligence, returning to the idea that intelligence might emerge from the activity of networks of neuron-like entities. Minsky and Papert provided mathematical analysis that showed the limitations of a class of computing machines that could be considered as models of the brain. Minsky and Papert added a new chapter in 1987 in which they discuss the state of parallel computers, and note a central theoretical challenge: reaching a deeper understanding of how "objects" or "agents" with individuality can emerge in a network. Progress in this area would link connectionism with what the authors have called "society theories of mind."
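
The learning rule at the center of the book's analysis is simple to state. Below is a standard textbook sketch of perceptron training in Python, not code from Minsky and Papert; convergence is guaranteed only when the data are linearly separable, which is exactly the limitation the book probes:

```python
def perceptron(samples, epochs=100):
    """Rosenblatt's perceptron learning rule.
    `samples` is a list of (x, y) pairs with x a tuple of numbers
    and y in {-1, +1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge weights toward y
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:  # converged (guaranteed if linearly separable)
            break
    return w, b
```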

Introduction to Computation and Programming Using Python

With Application to Understanding Data

Author: John V. Guttag

Publisher: MIT Press

ISBN: 0262529629

Category: Computers

Page: 472

The new edition of an introductory text that teaches students the art of computational problem solving, covering topics ranging from simple algorithms to information visualization.

Deep Learning

Author: Ian Goodfellow, Yoshua Bengio, Aaron Courville

Publisher: MIT Press

ISBN: 0262337371

Category: Computers

Page: 800

"Written by three experts in the field, Deep Learning is the only comprehensive book on the subject." -- Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

An Introduction to Neural Networks

Author: James A. Anderson

Publisher: MIT Press

ISBN: 9780262510813

Category: Computers

Page: 650

An Introduction to Neural Networks falls into a new ecological niche for texts. Based on notes that have been class-tested for more than a decade, it is aimed at cognitive science and neuroscience students who need to understand brain function in terms of computational modeling, and at engineers who want to go beyond formal algorithms to applications and computing strategies. It is the only current text to approach networks from a broad neuroscience and cognitive science perspective, with an emphasis on the biology and psychology behind the assumptions of the models, as well as on what the models might be used for. It describes the mathematical and computational tools needed and provides an account of the author's own ideas. Students learn how to teach arithmetic to a neural network and get a short course on linear associative memory and adaptive maps. They are introduced to the author's brain-state-in-a-box (BSB) model and are provided with some of the neurobiological background necessary for a firm grasp of the general subject. The field now known as neural networks has split in recent years into two major groups, mirrored in the texts that are currently available: the engineers who are primarily interested in practical applications of the new adaptive, parallel computing technology, and the cognitive scientists and neuroscientists who are interested in scientific applications. As the gap between these two groups widens, Anderson notes that the academics have tended to drift off into irrelevant, often excessively abstract research while the engineers have lost contact with the source of ideas in the field. Neuroscience, he points out, provides a rich and valuable source of ideas about data representation, and setting up the data representation is the major part of neural network programming. Both cognitive science and neuroscience give insights into how this can be done effectively: cognitive science suggests what to compute and neuroscience suggests how to compute it.

Introduction to Description Logic

Author: Franz Baader, Ian Horrocks, Carsten Lutz, Uli Sattler

Publisher: Cambridge University Press

ISBN: 0521873614

Category: Business & Economics

Page: 262

The first introductory textbook on description logics, relevant to computer science, knowledge representation and the semantic web.

Learning with Kernels

Support Vector Machines, Regularization, Optimization, and Beyond

Author: Bernhard Schölkopf, Alexander J. Smola

Publisher: MIT Press

ISBN: 9780262194754

Category: Computers

Page: 626

A comprehensive introduction to Support Vector Machines and related kernel methods.

Neural Network Learning

Theoretical Foundations

Author: Martin Anthony, Peter L. Bartlett

Publisher: Cambridge University Press

ISBN: 9780521118620

Category: Computers

Page: 389

This book describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. The authors also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient constructive learning algorithms. The book is essentially self-contained, since it introduces the necessary background material on probability, statistics, combinatorics and computational complexity; and it is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
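
A representative result from this literature is the VC-dimension generalization bound. One common form, with constants absorbed into the O-notation (exact statements vary across sources), is:

```latex
% With probability at least 1 - delta over an i.i.d. sample of size m, every
% hypothesis h in a class H of VC dimension d satisfies
R(h) \;\le\; \widehat{R}(h) \;+\; O\!\left(\sqrt{\frac{d \log(m/d) + \log(1/\delta)}{m}}\right),
% where R(h) is the true risk and \widehat{R}(h) the empirical risk.
```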

A Probabilistic Theory of Pattern Recognition

Author: Luc Devroye, Laszlo Györfi, Gabor Lugosi

Publisher: Springer Science & Business Media

ISBN: 1461207118

Category: Mathematics

Page: 638

A self-contained and coherent account of probabilistic techniques, covering: distance measures, kernel rules, nearest neighbour rules, Vapnik-Chervonenkis theory, parametric classification, and feature extraction. Each chapter concludes with problems and exercises to further the reader's understanding. Both research workers and graduate students will benefit from this wide-ranging and up-to-date account of a fast-moving field.
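
Among the rules analyzed, the nearest neighbour rule is the simplest to state. A small Python sketch follows; Euclidean distance and majority voting are illustrative defaults rather than the book's specific formulation:

```python
def nearest_neighbour(train, x, k=1):
    """k-nearest-neighbour classification by majority vote.
    `train` is a list of (point, label) pairs with points as tuples."""
    def dist2(p, q):
        # squared Euclidean distance (ordering is the same as Euclidean)
        return sum((a - b) ** 2 for a, b in zip(p, q))
    neighbours = sorted(train, key=lambda pair: dist2(pair[0], x))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b")]
print(nearest_neighbour(train, (0.9, 0.8), k=1))  # -> "b"
```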

Probably Approximately Correct

Nature's Algorithms for Learning and Prospering in a Complex World

Author: Leslie Valiant

Publisher: Basic Books

ISBN: 0465032710

Category: Science

Page: 208

Presenting a theory of the theoryless, a computer scientist provides a model of how effective behavior can be learned even in a world as complex as our own, shedding new light on human nature.

Computational Learning Theory and Natural Learning Systems

Intersections between theory and experiment

Author: Stephen José Hanson, Thomas Petsche, Ronald L. Rivest

Publisher: MIT Press

ISBN: 9780262581332

Category: Computers

Page: 449

As with Volume I, this second volume represents a synthesis of issues in three historically distinct areas of learning research: computational learning theory, neural network research, and symbolic machine learning. While the first volume provided a forum for building a science of computational learning across fields, this volume attempts to define plausible areas of joint research: the contributions are concerned with finding constraints for theory while at the same time interpreting theoretic results in the context of experiments with actual learning systems. Subsequent volumes will focus on areas identified as research opportunities. Computational learning theory, neural networks, and AI machine learning appear to be disparate fields; in fact they have the same goal: to build a machine or program that can learn from its environment. Accordingly, many of the papers in this volume deal with the problem of learning from examples. In particular, they are intended to encourage discussion between those trying to build learning algorithms (for instance, algorithms addressed by learning theoretic analyses are quite different from those used by neural network or machine-learning researchers) and those trying to analyze them. The first section provides theoretical explanations for the learning systems addressed, the second section focuses on issues in model selection and inductive bias, the third section presents new learning algorithms, the fourth section explores the dynamics of learning in feedforward neural networks, and the final section focuses on the application of learning algorithms. "A Bradford Book"

Learning Kernel Classifiers

Theory and Algorithms

Author: Ralf Herbrich

Publisher: MIT Press

ISBN: 9780262263047

Category: Computers

Page: 384

An overview of the theory and application of kernel classification methods. Linear classifiers in kernel spaces have emerged as a major topic within the field of machine learning. The kernel technique takes the linear classifier—a limited, but well-established and comprehensively studied model—and extends its applicability to a wide range of nonlinear pattern-recognition tasks such as natural language processing, machine vision, and biological sequence analysis. This book provides the first comprehensive overview of both the theory and algorithms of kernel classifiers, including the most recent developments. It begins by describing the major algorithmic advances: kernel perceptron learning, kernel Fisher discriminants, support vector machines, relevance vector machines, Gaussian processes, and Bayes point machines. Then follows a detailed introduction to learning theory, including VC and PAC-Bayesian theory, data-dependent structural risk minimization, and compression bounds. Throughout, the book emphasizes the interaction between theory and algorithms: how learning algorithms work and why. The book includes many examples, complete pseudocode of the algorithms presented, and an extensive source code library.
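
The kernel technique described here can be seen in miniature in the kernel perceptron, which keeps the perceptron's mistake-driven updates but scores points through a kernel function. The following Python sketch is the standard dual construction, not pseudocode taken from the book:

```python
import math

def kernel_perceptron(samples, kernel, epochs=100):
    """Kernel perceptron: mistake-driven updates in the dual representation.
    `samples` is a list of (x, y) pairs with y in {-1, +1}."""
    alpha = [0] * len(samples)  # one mistake counter per training point

    def predict(x):
        s = sum(a * y * kernel(xi, x)
                for a, (xi, y) in zip(alpha, samples) if a > 0)
        return 1 if s >= 0 else -1

    for _ in range(epochs):
        mistakes = 0
        for i, (x, y) in enumerate(samples):
            if predict(x) != y:
                alpha[i] += 1  # remember this point as a "support" example
                mistakes += 1
        if mistakes == 0:
            break
    return predict

def rbf(p, q, gamma=1.0):
    """A Gaussian (RBF) kernel, one common nonlinear choice."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(p, q)))
```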

Elements of Formal Semantics

An Introduction to the Mathematical Theory of Meaning in Natural Language

Author: Yoad Winter

Publisher: Edinburgh University Press

ISBN: 0748677771

Category: Language Arts & Disciplines

Page: 272

Introducing some of the foundational concepts, principles and techniques in the formal semantics of natural language, Elements of Formal Semantics outlines the mathematical principles that underlie linguistic meaning. Making use of a wide range of concrete English examples, the book presents the most useful tools and concepts of formal semantics in an accessible style and includes a variety of practical exercises so that readers can learn to utilise these tools effectively. For readers with an elementary background in set theory and linguistics or with an interest in mathematical modelling, this fascinating study is an ideal introduction to natural language semantics. Designed as a quick yet thorough introduction to one of the most vibrant areas of research in modern linguistics today, this volume reveals the beauty and elegance of the mathematical study of meaning.