The Elements of Statistical Learning

Data Mining, Inference, and Prediction, Second Edition

Author: Trevor Hastie, Robert Tibshirani, Jerome Friedman

Publisher: Springer Science & Business Media

ISBN: 9780387848587

Category: Computers

Page: 745

This book describes the important ideas in a variety of fields such as medicine, biology, finance, and marketing in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of colour graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting (the first comprehensive treatment of this topic in any book). This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorisation, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.
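
As a small, hypothetical illustration of one topic listed above (path algorithms for the lasso), the Python sketch below traces lasso coefficient paths with scikit-learn on synthetic data. It is not taken from the book; the data and all variable names are invented for illustration.

# Hypothetical illustration (not from the book): tracing lasso coefficient
# paths over a decreasing grid of penalties with scikit-learn's lasso_path.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

# Synthetic data in which only a few predictors are truly relevant.
X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

# alphas: penalty values from large to small; coefs: one path per feature.
alphas, coefs, _ = lasso_path(X, y)

for j in range(coefs.shape[0]):
    entry = np.argmax(coefs[j] != 0) if np.any(coefs[j] != 0) else None
    print(f"feature {j:2d} enters the path at alpha index {entry}")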

An Introduction to Statistical Learning

with Applications in R

Author: Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani

Publisher: Springer Science & Business Media

ISBN: 1461471389

Category: Mathematics

Page: 426

An Introduction to Statistical Learning provides an accessible overview of the field of statistical learning, an essential toolset for making sense of the vast and complex data sets that have emerged in fields ranging from biology to finance to marketing to astrophysics in the past twenty years. This book presents some of the most important modeling and prediction techniques, along with relevant applications. Topics include linear regression, classification, resampling methods, shrinkage approaches, tree-based methods, support vector machines, clustering, and more. Color graphics and real-world examples are used to illustrate the methods presented. Since the goal of this textbook is to facilitate the use of these statistical learning techniques by practitioners in science, industry, and other fields, each chapter contains a tutorial on implementing the analyses and methods presented in R, an extremely popular open source statistical software platform. Two of the authors co-wrote The Elements of Statistical Learning (Hastie, Tibshirani and Friedman, 2nd edition 2009), a popular reference book for statistics and machine learning researchers. An Introduction to Statistical Learning covers many of the same topics, but at a level accessible to a much broader audience. This book is targeted at statisticians and non-statisticians alike who wish to use cutting-edge statistical learning techniques to analyze their data. The text assumes only a previous course in linear regression and no knowledge of matrix algebra.
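
The book's labs are written in R; as a rough Python analogue of the kind of resampling analysis it teaches, the hedged sketch below compares ordinary least squares with a shrinkage method by cross-validation. The data are synthetic and every name in it is our own.

# Hypothetical Python analogue (the book's own labs are in R): comparing
# least squares with a shrinkage method via 5-fold cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=10.0, random_state=1)

for name, model in [("least squares", LinearRegression()),
                    ("ridge (alpha=10)", Ridge(alpha=10.0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.3f}")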


Statistical Learning from a Regression Perspective

Author: Richard A. Berk

Publisher: Springer

ISBN: 3319440489

Category: Mathematics

Page: 347

This textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. As in the first edition, a unifying theme is supervised learning that can be treated as a form of regression analysis. Key concepts and procedures are illustrated with real applications, especially those with practical implications. The material is written for upper undergraduate level and graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems. The author uses this book in a course on modern regression for the social, behavioral, and biological sciences. All of the analyses included are done in R with code routinely provided.

All of Statistics

A Concise Course in Statistical Inference

Author: Larry Wasserman

Publisher: Springer Science & Business Media

ISBN: 0387217363

Category: Mathematics

Page: 442

Taken literally, the title "All of Statistics" is an exaggeration. But in spirit, the title is apt, as the book does cover a much broader range of topics than a typical introductory book on mathematical statistics. This book is for people who want to learn probability and statistics quickly. It is suitable for graduate or advanced undergraduate students in computer science, mathematics, statistics, and related disciplines. The book includes modern topics like non-parametric curve estimation, bootstrapping, and classification, topics that are usually relegated to follow-up courses. The reader is presumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. Statistics, data mining, and machine learning are all concerned with collecting and analysing data.
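
Since the bootstrap is one of the modern topics named above, here is a minimal Python sketch of the nonparametric bootstrap, estimating the standard error of a sample median. The data are simulated and the whole example is ours, not the book's.

# Hypothetical sketch of the nonparametric bootstrap: estimate the standard
# error of the sample median by resampling the data with replacement.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # made-up sample

B = 2000                                      # number of bootstrap resamples
boot_medians = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(B)
])

print("sample median:", np.median(data))
print("bootstrap estimate of its standard error:", boot_medians.std(ddof=1))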

Bayesian and Frequentist Regression Methods

Author: Jon Wakefield

Publisher: Springer Science & Business Media

ISBN: 1441909257

Category: Mathematics

Page: 697

Bayesian and Frequentist Regression Methods provides a modern account of both Bayesian and frequentist methods of regression analysis. Many texts cover one or the other of the approaches, but this is the most comprehensive combination of Bayesian and frequentist methods that exists in one place. The two philosophical approaches to regression methodology are featured here as complementary techniques, with theory and data analysis providing supplementary components of the discussion. In particular, methods are illustrated using a variety of data sets. The majority of the data sets are drawn from biostatistics but the techniques are generalizable to a wide range of other disciplines.

Spectral Analysis of Large Dimensional Random Matrices

Author: Zhidong Bai, Jack W. Silverstein

Publisher: Springer Science & Business Media

ISBN: 1441906614

Category: Mathematics

Page: 552

The aim of the book is to introduce basic concepts, main results, and widely applied mathematical tools in the spectral analysis of large dimensional random matrices. The core of the book focuses on results established under moment conditions on random variables using probabilistic methods, and is thus easily applicable to statistics and other areas of science. The book introduces fundamental results, most of them investigated by the authors, such as the semicircular law of Wigner matrices, the Marcenko-Pastur law, the limiting spectral distribution of the multivariate F matrix, limits of extreme eigenvalues, spectrum separation theorems, convergence rates of empirical distributions, central limit theorems of linear spectral statistics, and the partial solution of the famous circular law. While deriving the main results, the book simultaneously emphasizes the ideas and methodologies of the fundamental mathematical tools, among them being: truncation techniques, matrix identities, moment convergence theorems, and the Stieltjes transform. Its treatment is especially fitting to the needs of mathematics and statistics graduate students and beginning researchers, having a basic knowledge of matrix theory and an understanding of probability theory at the graduate level, who desire to learn the concepts and tools in solving problems in this area. It can also serve as a detailed handbook on results of large dimensional random matrices for practical users. This second edition includes two additional chapters, one on the authors' results on the limiting behavior of eigenvectors of sample covariance matrices, another on applications to wireless communications and finance. While attempting to bring this edition up-to-date on recent work, it also provides summaries of other areas which are typically considered part of the general field of random matrix theory.
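
As a quick, hypothetical numerical check of the semicircular law mentioned above (not code from the book), the Python snippet below forms a large symmetric random matrix with the standard 1/sqrt(n) scaling and inspects its eigenvalues, which should spread over roughly [-2, 2] with about 61% of them in [-1, 1].

# Hypothetical check of the semicircular law for Wigner matrices: eigenvalues
# of a large scaled symmetric Gaussian matrix concentrate on [-2, 2].
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
W = (A + A.T) / np.sqrt(2 * n)        # off-diagonal entries have variance 1/n

eigs = np.linalg.eigvalsh(W)
print("smallest / largest eigenvalue:", eigs.min(), eigs.max())   # close to -2 and 2
print("fraction of eigenvalues in [-1, 1]:", np.mean(np.abs(eigs) <= 1.0))
# The semicircle density puts roughly 0.61 of its mass on [-1, 1].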

Statistics for High-Dimensional Data

Methods, Theory and Applications

Author: Peter Bühlmann, Sara van de Geer

Publisher: Springer Science & Business Media

ISBN: 364220192X

Category: Mathematics

Page: 558

Modern statistics deals with large and complex data sets, and consequently with models containing a large number of parameters. This book presents a detailed account of recently developed approaches, including the Lasso and versions of it for various models, boosting methods, undirected graphical modeling, and procedures controlling false positive selections. A special characteristic of the book is that it contains comprehensive mathematical theory on high-dimensional statistics combined with methodology, algorithms and illustrations with real data examples. This in-depth approach highlights the methods’ great potential and practical applicability in a variety of settings. As such, it is a valuable resource for researchers, graduate students and experts in statistics, applied mathematics and computer science.
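
One of the procedures alluded to above, controlling false positive selections, can be illustrated with a false discovery rate rule. The short Python sketch below implements the Benjamini-Hochberg step-up procedure on made-up p-values; it is our own illustration, not code from the book.

# Hypothetical sketch of a false-discovery-rate controlling procedure
# (Benjamini-Hochberg step-up), applied to made-up p-values.
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])        # largest rank meeting the bound
        rejected[order[:k + 1]] = True
    return rejected

rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=90),             # nulls
                        rng.uniform(0, 0.001, size=10)])  # a few real signals
print("number of rejections:", benjamini_hochberg(pvals).sum())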

The Nature of Statistical Learning Theory

Author: Vladimir N. Vapnik

Publisher: Springer Science & Business Media

ISBN: 1475724403

Category: Mathematics

Page: 188

The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning from the general point of view of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. These include: the general setting of learning problems and the general model of minimizing the risk functional from empirical data; a comprehensive analysis of the empirical risk minimization principle, showing how it leads to necessary and sufficient conditions for consistency; non-asymptotic bounds on the risk achieved using the empirical risk minimization principle; principles for controlling the generalization ability of learning machines using small sample sizes; and a new type of universal learning machine that controls the generalization ability.
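
For readers unfamiliar with the terms used above, the risk functional and the empirical risk minimization (ERM) principle can be written, in standard notation rather than the book's exact typography, as

R(f) = \int L(y, f(x)) \, dP(x, y), \qquad R_{\mathrm{emp}}(f) = \frac{1}{n} \sum_{i=1}^{n} L(y_i, f(x_i)),

where L is a loss function and P is the unknown data distribution. ERM selects the function in a given class that minimizes R_{\mathrm{emp}}, and the theory asks when this is a sound surrogate for minimizing the unknown true risk R.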

Statistical Learning with Sparsity

The Lasso and Generalizations

Author: Trevor Hastie, Robert Tibshirani, Martin Wainwright

Publisher: CRC Press

ISBN: 1498712177

Category: Business & Economics

Page: 367

Discover New Methods for Dealing with High-Dimensional Data A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data. Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of l1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing. It concludes with a survey of theoretical results for the lasso. In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.
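
The "simple coordinate descent algorithm" mentioned above can be sketched in a few lines. The Python below is a minimal, hypothetical implementation of cyclic coordinate descent with soft-thresholding for the lasso objective (1/(2n))||y - Xb||^2 + lambda*||b||_1, written for illustration rather than taken from the book; all names and data are invented.

# Hypothetical sketch of cyclic coordinate descent for the lasso.
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=200):
    n, p = X.shape
    beta = np.zeros(p)
    col_scale = (X ** 2).sum(axis=0) / n          # (1/n) * X_j^T X_j
    resid = y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = resid + X[:, j] * beta[j]       # partial residual without feature j
            rho = X[:, j] @ r_j / n
            beta_j_new = soft_threshold(rho, lam) / col_scale[j]
            resid = r_j - X[:, j] * beta_j_new
            beta[j] = beta_j_new
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + 0.5 * rng.standard_normal(100)
print(np.round(lasso_coordinate_descent(X, y, lam=0.1), 2))   # sparse, near beta_true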

Elements of Nonlinear Time Series Analysis and Forecasting

Author: Jan G. De Gooijer

Publisher: Springer

ISBN: 3319432524

Category: Mathematics

Page: 618

This book provides an overview of the current state-of-the-art of nonlinear time series analysis, richly illustrated with examples, pseudocode algorithms and real-world applications. Avoiding a “theorem-proof” format, it shows concrete applications on a variety of empirical time series. The book can be used in graduate courses in nonlinear time series and at the same time also includes interesting material for more advanced readers. Though it is largely self-contained, readers require an understanding of basic linear time series concepts, Markov chains and Monte Carlo simulation methods. The book covers time-domain and frequency-domain methods for the analysis of both univariate and multivariate (vector) time series. It makes a clear distinction between parametric models on the one hand, and semi- and nonparametric models/methods on the other. This offers the reader the option of concentrating exclusively on one of these nonlinear time series analysis methods. To make the book as user friendly as possible, major supporting concepts and specialized tables are appended at the end of every chapter. In addition, each chapter concludes with a set of key terms and concepts, as well as a summary of the main findings. Lastly, the book offers numerous theoretical and empirical exercises, with answers provided by the author in an extensive solutions manual.

Recursive Partitioning and Applications

Author: Heping Zhang, Burton H. Singer

Publisher: Springer Science & Business Media

ISBN: 9781441968241

Category: Mathematics

Page: 262

Multiple complex pathways, characterized by interrelated events and conditions, represent routes to many illnesses, diseases, and ultimately death. Although there are substantial data and plausibility arguments supporting many conditions as contributory components of pathways to illness and disease end points, we have, historically, lacked an effective methodology for identifying the structure of the full pathways. Regression methods, with strong linearity assumptions and data-based constraints on the extent and order of interaction terms, have traditionally been the strategies of choice for relating outcomes to potentially complex explanatory pathways. However, nonlinear relationships among candidate explanatory variables are a generic feature that must be dealt with in any characterization of how health outcomes come about. It is noteworthy that similar challenges arise from data analyses in Economics, Finance, Engineering, etc. Thus, the purpose of this book is to demonstrate the effectiveness of a relatively recently developed methodology, recursive partitioning, as a response to this challenge. We also compare and contrast what is learned via recursive partitioning with results obtained on the same data sets using more traditional methods. This serves to highlight exactly where, and for what kinds of questions, recursive partitioning-based strategies have a decisive advantage over classical regression techniques.
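
To make the contrast drawn above concrete, the hypothetical Python sketch below fits a linear regression and a small regression tree (a recursive-partitioning method) to data in which the response depends on a predictor in a strongly nonlinear way; the simulated data and all names are ours, not the book's.

# Hypothetical contrast: a regression tree vs. a linear fit when the response
# is a nonlinear (quadratic) function of one predictor.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))             # second predictor is irrelevant
y = X[:, 0] ** 2 + 0.5 * rng.standard_normal(500)

for name, model in [("linear regression", LinearRegression()),
                    ("regression tree (depth 4)", DecisionTreeRegressor(max_depth=4))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.2f}")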

Computer Age Statistical Inference

Algorithms, Evidence, and Data Science

Author: Bradley Efron, Trevor Hastie

Publisher: Cambridge University Press

ISBN: 1108107958

Category: Mathematics

Page: N.A

The twenty-first century has seen a breathtaking expansion of statistical methodology, both in scope and in influence. 'Big data', 'data science', and 'machine learning' have become familiar terms in the news, as statistical methods are brought to bear upon the enormous data sets of modern science and commerce. How did we get here? And where are we going? This book takes us on an exhilarating journey through the revolution in data analysis following the introduction of electronic computation in the 1950s. Beginning with classical inferential theories - Bayesian, frequentist, Fisherian - individual chapters take up a series of influential topics: survival analysis, logistic regression, empirical Bayes, the jackknife and bootstrap, random forests, neural networks, Markov chain Monte Carlo, inference after model selection, and dozens more. The distinctly modern approach integrates methodology and algorithms with statistical inference. The book ends with speculation on the future direction of statistics and data science.

Pattern Recognition and Machine Learning

Author: Christopher M. Bishop

Publisher: Springer

ISBN: 9781493938438

Category: Computers

Page: 738

This is the first textbook on pattern recognition to present the Bayesian viewpoint. The book presents approximate inference algorithms that permit fast approximate answers in situations where exact answers are not feasible. It uses graphical models to describe probability distributions, at a time when no other machine learning textbooks applied graphical models in this way. No previous knowledge of pattern recognition or machine learning concepts is assumed. Familiarity with multivariate calculus and basic linear algebra is required, and some experience in the use of probabilities would be helpful though not essential, as the book includes a self-contained introduction to basic probability theory.

Principles and Theory for Data Mining and Machine Learning

Author: Bertrand Clarke, Ernest Fokoue, Hao Helen Zhang

Publisher: Springer Science & Business Media

ISBN: 0387981357

Category: Computers

Page: 786

Extensive treatment of the most up-to-date topics. Provides the theory and concepts behind popular and emerging methods. Range of topics drawn from Statistics, Computer Science, and Electrical Engineering.

Bayesian Reasoning and Machine Learning

Author: David Barber

Publisher: Cambridge University Press

ISBN: 0521518148

Category: Computers

Page: 697

A practical introduction perfect for final-year undergraduate and graduate students without a solid background in linear algebra and calculus.

Applied Predictive Modeling

Author: Max Kuhn, Kjell Johnson

Publisher: Springer Science & Business Media

ISBN: 1461468493

Category: Medical

Page: 600

Applied Predictive Modeling covers the overall predictive modeling process, beginning with the crucial steps of data preprocessing, data splitting and foundations of model tuning. The text then provides intuitive explanations of numerous common and modern regression and classification techniques, always with an emphasis on illustrating and solving real data problems. The text illustrates all parts of the modeling process through many hands-on, real-life examples, and every chapter contains extensive R code for each step of the process. This multi-purpose text can be used as an introduction to predictive models and the overall modeling process, a practitioner’s reference handbook, or as a text for advanced undergraduate or graduate level predictive modeling courses. To that end, each chapter contains problem sets to help solidify the covered concepts and uses data available in the book’s R package. This text is intended for a broad audience as both an introduction to predictive models as well as a guide to applying them. Non-mathematical readers will appreciate the intuitive explanations of the techniques while an emphasis on problem-solving with real data across a wide variety of applications will aid practitioners who wish to extend their expertise. Readers should have knowledge of basic statistical ideas, such as correlation and linear regression analysis. While the text is biased against complex equations, a mathematical background is needed for advanced topics.
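
The workflow described above (preprocessing, data splitting, and model tuning) can be sketched in Python even though the book's own code is in R. The example below, with invented data and settings, chains a scaling step and a classifier into one pipeline and tunes it by cross-validated grid search.

# Hypothetical sketch of the predictive modeling workflow: preprocessing,
# data splitting, and model tuning in one pipeline (the book itself uses R).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)

print("best C:", grid.best_params_["clf__C"])
print("held-out accuracy:", grid.score(X_test, y_test))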

Data Mining: Concepts and Techniques

Author: Jiawei Han, Jian Pei, Micheline Kamber

Publisher: Elsevier

ISBN: 9780123814807

Category: Computers

Page: 744

Data Mining: Concepts and Techniques provides the concepts and techniques for processing gathered data or information, which will be used in various applications. Specifically, it explains data mining and the tools used in discovering knowledge from the collected data; this process is referred to as knowledge discovery from data (KDD). The book focuses on the feasibility, usefulness, effectiveness, and scalability of techniques for large data sets. After describing data mining, this edition explains the methods for understanding, preprocessing, processing, and warehousing data. It then presents information about data warehouses, online analytical processing (OLAP), and data cube technology. Next, the methods involved in mining frequent patterns, associations, and correlations for large data sets are described. The book details the methods for data classification and introduces the concepts and methods for data clustering. The remaining chapters discuss outlier detection and the trends, applications, and research frontiers in data mining. This book is intended for computer science students, application developers, business professionals, and researchers who seek information on data mining. It presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects. It addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of your data.
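
As a toy, hypothetical illustration of the frequent-pattern mining mentioned above (not an example from the book), the Python snippet below counts which items and pairs of items occur together in at least three of five tiny transactions.

# Hypothetical sketch of frequent-itemset counting on a small transaction
# list: keep the items and pairs that reach a minimum support count.
from itertools import combinations
from collections import Counter

transactions = [{"bread", "milk"},
                {"bread", "diapers", "beer", "eggs"},
                {"milk", "diapers", "beer", "cola"},
                {"bread", "milk", "diapers", "beer"},
                {"bread", "milk", "diapers", "cola"}]
min_support = 3          # itemset must appear in at least 3 transactions

counts = Counter()
for t in transactions:
    for size in (1, 2):
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1

frequent = {s: c for s, c in counts.items() if c >= min_support}
print(frequent)   # e.g. ('beer', 'diapers') appears in 3 of 5 transactions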

Theory of Point Estimation

Author: E.L. Lehmann, George Casella

Publisher: Springer Science & Business Media

ISBN: 0387227288

Category: Mathematics

Page: 590

This second, much enlarged edition by Lehmann and Casella of Lehmann's classic text on point estimation maintains the outlook and general style of the first edition. All of the topics are updated, while an entirely new chapter on Bayesian and hierarchical Bayesian approaches is provided, and there is much new material on simultaneous estimation. Each chapter concludes with a Notes section which contains suggestions for further study. This is a companion volume to the second edition of Lehmann's "Testing Statistical Hypotheses".