LEARNING FROM DATA: CONCEPTS, THEORY, AND METHODS, 2E, 2007 (Hardcover)
ISBN: 9780471681823
Category: Computers / Computer Science & Engineering
Publisher: JOHN WILEY & SONS
Author: CHERKASSKY
Year: 2007
Binding: Hardcover
Pages: 538
List Price: NTD 1,380
Sale Price: NTD 1,242
Original Price: USD 109.95
Status: Out of stock
An interdisciplinary framework for learning methodologies, covering statistics, neural networks, and fuzzy logic. This book provides a unified treatment of the principles and methods for learning dependencies from data. It establishes a general conceptual framework in which learning methods from statistics, neural networks, and fuzzy logic can be applied, showing that a few fundamental principles underlie most of the new methods being proposed today in statistics, engineering, and computer science. Complete with over one hundred illustrations, case studies, and examples, this is an invaluable text.

Table Of Contents

Preface.

Notation.

1. Introduction.

1.1 Learning and Statistical Estimation.

1.2 Statistical Dependency and Causality.

1.3 Characterization of Variables.

1.4 Characterization of Uncertainty.

References.

2. Problem Statement, Classical Approaches, and Adaptive Learning.

2.1 Formulation of the Learning Problem.

2.1.1 Objective of Learning.

2.1.2 Common Learning Tasks.

2.1.3 Scope of the Learning Problem Formulation.

2.2 Classical Approaches.

2.2.1 Density Estimation.

2.2.2 Classification.

2.2.3 Regression.

2.2.4 Stochastic Approximation.

2.2.5 Solving Problems with Finite Data.

2.2.6 Nonparametric Methods.

2.3 Adaptive Learning: Concepts and Inductive Principles.

2.3.1 Philosophy, Major Concepts, and Issues.

2.3.2 A priori Knowledge and Model Complexity.

2.3.3 Inductive Principles.

2.3.4 Alternative Learning Formulations.

2.4 Summary.

References.

3. Regularization Framework.

3.1 Curse and Complexity of Dimensionality.

3.2 Function Approximation and Characterization of Complexity.

3.3 Penalization.

3.3.1 Parametric Penalties.

3.3.2 Nonparametric Penalties.

3.4 Model Selection (Complexity Control).

3.4.1 Analytical Model Selection Criteria.

3.4.2 Model Selection via Resampling.

3.4.3 Bias-Variance Trade-off.

3.4.4 Example of Model Selection.

3.4.5 Function Approximation versus Predictive Learning.

3.5 Summary.

References.

4. Statistical Learning Theory.

4.1 Conditions for Consistency and Convergence of ERM.

4.2 Growth Function and VC-Dimension.

4.2.1 VC-Dimension for Classification and Regression Problems.

4.2.2 Examples of Calculating VC-Dimension.

4.3 Bounds on the Generalization.

4.3.1 Classification.

4.3.2 Regression.

4.3.3 Generalization Bounds and Sampling Theorem.

4.4 Structural Risk Minimization.

4.5 Comparisons of Model Selection for Regression.

4.5.1 Model Selection for Linear Estimators.

4.5.2 Model Selection for k-Nearest Neighbors Regression.

4.5.3 Model Selection for Linear Subset Regression.

4.5.4 Discussion.

4.6 Measuring the VC-Dimension.

4.7 Summary and Discussion.

References.

5. Nonlinear Optimization Strategies.

5.1 Stochastic Approximation Methods.

5.1.1 Linear Parameter Estimation.

5.1.2 Backpropagation Training of MLP Networks.

5.2 Iterative Methods.

5.2.1 Expectation-Maximization Methods for Density Estimation.

5.2.2 Generalized Inverse Training of MLP Networks.

5.3 Greedy Optimization.

5.3.1 Neural Network Construction Algorithms.

5.3.2 Classification and Regression Trees (CART).

5.4 Feature Selection, Optimization, and Statistical Learning Theory.

5.5 Summary.

References.

6. Methods for Data Reduction and Dimensionality Reduction.

6.1 Vector Quantization.

6.1.1 Optimal Source Coding in Vector Quantization.

6.1.2 Generalized Lloyd Algorithm.

6.1.3 Clustering and Vector Quantization.

6.1.4 EM Algorithm for VQ and Clustering.

6.2 Dimensionality Reduction: Statistical Methods.

6.2.1 Linear Principal Components.

6.2.2 Principal Curves and Surfaces.

6.2.3 Multidimensional Scaling.

6.3 Dimensionality Reduction: Neural Network Methods.

6.3.1 Discrete Principal Curves and Self-Organizing Map Algorithm.

6.3.2 Statistical Interpretation of the SOM Method.

6.3.3 Flow-Through Version of the SOM and Learning Rate Schedules.

6.3.4 SOM Applications and Modifications.

6.3.5 Self-Supervised MLP.

6.4 Methods for Multivariate Data Analysis.

6.4.1 Factor Analysis.

6.4.2 Independent Component Analysis.

6.5 Summary.

References.

7. Methods for Regression.

7.1 Taxonomy: Dictionary versus Kernel Representation.

7.2 Linear Estimators.

7.2.1 Estimation of Linear Models and Equivalence of Representations.

7.2.2 Analytic Form of Cross-Validation.

7.2.3 Estimating Complexity of Penalized Linear Models.

7.2.4 Nonadaptive Methods.

7.2.4.1 Local Polynomial Estimators and Splines.

7.2.4.2 Radial Basis Function Networks.

7.3 Adaptive Dictionary Methods.

7.3.1 Additive Methods and Projection Pursuit Regression.

7.3.2 Multilayer Perceptrons and Backpropagation.

7.3.3 Multivariate Adaptive Regression Splines.

7.3.4 Orthogonal Basis Functions and Wavelet Signal Denoising.

7.4 Adaptive Kernel Methods and Local Risk Minimization.

7.4.1 Generalized Memory-Based Learning.

7.4.2 Constrained Topological Mapping.

7.5 Empirical Studies.

7.5.1 Predicting Net Asset Value of Mutual Funds.

7.5.2 Comparison of Adaptive Methods for Regression.

7.6 Combining Predictive Models.

7.7 Summary.

References.

8. Classification.

8.1 Statistical Learning Theory Formulation.

8.2 Classical Formulation.

8.2.1 Statistical Decision Theory.

8.2.2 Fisher’s Linear Discriminant Analysis.

8.3 Methods for Classification.

8.3.1 Regression-Based Methods.

8.3.2 Tree-Based Methods.

8.3.3 Nearest Neighbor and Prototype Methods.

8.3.4 Empirical Comparisons.

8.4 Combining Methods and Boosting.

8.4.1 Boosting as an Additive Model.

8.4.2 Boosting for Regression Problems.

8.5 Summary.

References.

9. Support Vector Machines.

9.1 Motivation for Margin-Based Loss.

9.2 Margin-Based Loss, Robustness, and Complexity Control.

9.3 Optimal Separating Hyperplane.

9.4 High-Dimensional Mapping and Inner Product Kernels.

9.5 Support Vector Machine for Classification.

9.6 Support Vector Implementations.

9.7 Support Vector Machine for Regression.

9.8 SVM Model Selection.

9.9 SVM vs. Regularization Approach.

9.10 Single-Class SVM and Novelty Detection.

9.11 Summary and Discussion.

References.

10. Non-Inductive Inference and Alternative Learning Formulations.

10.1 Sparse High-Dimensional Data.

10.2 Transduction.

10.3 Inference Through Contradictions.

10.4 Multiple Model Estimation.

10.5 Summary.

References.

Appendix A: Review of Nonlinear Optimization.

Appendix B: Eigenvalues and Singular Value Decomposition.

Index.