Fr. 166.00

Combining Pattern Classifiers - Methods and Algorithms

English · Hardback

Shipping usually within 1 to 3 weeks (not available at short notice)

Description


About the author

Ludmila Kuncheva is a Professor of Computer Science at Bangor University, United Kingdom. She has received two IEEE Best Paper awards. In 2012, Dr. Kuncheva was awarded a Fellowship of the International Association for Pattern Recognition (IAPR) for her contributions to multiple classifier systems.

Blurb

A unified, coherent treatment of current classifier ensemble methods, from fundamentals of pattern recognition to ensemble feature selection, now in its second edition.

The art and science of combining pattern classifiers has flourished into a prolific discipline since the first edition of Combining Pattern Classifiers was published in 2004. Dr. Kuncheva has plucked from the rich landscape of recent classifier ensemble literature the topics, methods, and algorithms that will guide the reader toward a deeper understanding of the fundamentals, design, and applications of classifier ensemble methods.

Thoroughly updated, with MATLAB® code and practice data sets throughout, Combining Pattern Classifiers includes:

* Coverage of Bayes decision theory and experimental comparison of classifiers
* Essential ensemble methods such as Bagging, Random forest, AdaBoost, Random subspace, Rotation forest, Random oracle, and Error Correcting Output Code, among others
* Chapters on classifier selection, diversity, and ensemble feature selection

With firm grounding in the fundamentals of pattern recognition, and featuring more than 140 illustrations, Combining Pattern Classifiers, Second Edition is a valuable reference for postgraduate students, researchers, and practitioners in computing and engineering.

Summary

Combined classifiers, which are central to the ubiquitous performance of pattern recognition and machine learning, are generally considered more accurate than single classifiers.
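The core idea behind the ensemble methods listed in the blurb is combining the outputs of several classifiers into one decision. As a purely illustrative aside (not the author's code — the book's own examples are in MATLAB), here is a minimal Python sketch of the simplest combiner, plurality (majority) voting over class labels; the function name and toy labels are assumptions for the example:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine label predictions from several classifiers by plurality vote.

    `predictions` is a list of equal-length label sequences, one sequence
    per classifier. The combined label for each object is the most common
    vote across classifiers.
    """
    n_objects = len(predictions[0])
    combined = []
    for i in range(n_objects):
        # Collect the i-th object's label from every classifier and count votes.
        votes = [clf_preds[i] for clf_preds in predictions]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers' labels for four objects:
clf1 = ["a", "a", "b", "b"]
clf2 = ["a", "b", "b", "a"]
clf3 = ["b", "a", "b", "b"]
print(majority_vote([clf1, clf2, clf3]))  # ['a', 'a', 'b', 'b']
```

Voting is only the entry point; the book's chapters cover weighted and trainable combiners, classifier selection, and diversity measures beyond this sketch.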

List of contents

Preface xv
 
Acknowledgements xxi
 
1 Fundamentals of Pattern Recognition 1
 
1.1 Basic Concepts: Class, Feature, Data Set 1
 
1.1.1 Classes and Class Labels 1
 
1.1.2 Features 2
 
1.1.3 Data Set 3
 
1.1.4 Generate Your Own Data 6
 
1.2 Classifier, Discriminant Functions, Classification Regions 9
 
1.3 Classification Error and Classification Accuracy 11
 
1.3.1 Where Does the Error Come From? Bias and Variance 11
 
1.3.2 Estimation of the Error 13
 
1.3.3 Confusion Matrices and Loss Matrices 14
 
1.3.4 Training and Testing Protocols 15
 
1.3.5 Overtraining and Peeking 17
 
1.4 Experimental Comparison of Classifiers 19
 
1.4.1 Two Trained Classifiers and a Fixed Testing Set 20
 
1.4.2 Two Classifier Models and a Single Data Set 22
 
1.4.3 Two Classifier Models and Multiple Data Sets 26
 
1.4.4 Multiple Classifier Models and Multiple Data Sets 27
 
1.5 Bayes Decision Theory 30
 
1.5.1 Probabilistic Framework 30
 
1.5.2 Discriminant Functions and Decision Boundaries 31
 
1.5.3 Bayes Error 33
 
1.6 Clustering and Feature Selection 35
 
1.6.1 Clustering 35
 
1.6.2 Feature Selection 37
 
1.7 Challenges of Real-Life Data 40
 
Appendix 41
 
1.A.1 Data Generation 41
 
1.A.2 Comparison of Classifiers 42
 
1.A.2.1 MATLAB Functions for Comparing Classifiers 42
 
1.A.2.2 Critical Values for Wilcoxon and Sign Test 45
 
1.A.3 Feature Selection 47
 
2 Base Classifiers 49
 
2.1 Linear and Quadratic Classifiers 49
 
2.1.1 Linear Discriminant Classifier 49
 
2.1.2 Nearest Mean Classifier 52
 
2.1.3 Quadratic Discriminant Classifier 52
 
2.1.4 Stability of LDC and QDC 53
 
2.2 Decision Tree Classifiers 55
 
2.2.1 Basics and Terminology 55
 
2.2.2 Training of Decision Tree Classifiers 57
 
2.2.3 Selection of the Feature for a Node 58
 
2.2.4 Stopping Criterion 60
 
2.2.5 Pruning of the Decision Tree 63
 
2.2.6 C4.5 and ID3 64
 
2.2.7 Instability of Decision Trees 64
 
2.2.8 Random Trees 65
 
2.3 The Naïve Bayes Classifier 66
 
2.4 Neural Networks 68
 
2.4.1 Neurons 68
 
2.4.2 Rosenblatt's Perceptron 70
 
2.4.3 Multi-Layer Perceptron 71
 
2.5 Support Vector Machines 73
 
2.5.1 Why Would It Work? 73
 
2.5.2 Classification Margins 74
 
2.5.3 Optimal Linear Boundary 76
 
2.5.4 Parameters and Classification Boundaries of SVM 78
 
2.6 The k-Nearest Neighbor Classifier (k-nn) 80
 
2.7 Final Remarks 82
 
2.7.1 Simple or Complex Models? 82
 
2.7.2 The Triangle Diagram 83
 
2.7.3 Choosing a Base Classifier for Ensembles 85
 
Appendix 85
 
2.A.1 MATLAB Code for the Fish Data 85
 
2.A.2 MATLAB Code for Individual Classifiers 86
 
2.A.2.1 Decision Tree 86
 
2.A.2.2 Naïve Bayes 89
 
2.A.2.3 Multi-Layer Perceptron 90
 
2.A.2.4 1-nn Classifier 92
 
3 An Overview of the Field 94
 
3.1 Philosophy 94
 
3.2 Two Examples 98
 
3.2.1 The Wisdom of the "Classifier Crowd" 98
 
3.2.2 The Power of Divide-and-Conquer 98
 
3.3 Structure of the Area 100
 
3.3.1 Terminology 100
 
3.3.2 A Taxonomy of Classifier Ensemble Methods 100
 
3.3.3 Classifier Fusion and Classifier Selection 104
 
3.4 Quo Vadis? 105
 
3.4.1 Reinventing the Wheel? 105
 
3.4.2 The Illusion of Progress? 106
 
3.4.3 A Bibliometri
