Price: CHF 211.00
Olivier Sigaud, Olivier Buffet (Editors)
Markov Decision Processes in Artificial Intelligence
English · Hardback
Shipping usually within 1 to 3 weeks (not available at short notice)
Description
About the authors: Olivier Sigaud is a Professor of Computer Science at the University of Paris 6 (UPMC) and Head of the "Motion" Group at the Institute of Intelligent Systems and Robotics (ISIR). Olivier Buffet has been an INRIA researcher in the Autonomous Intelligent Machines (MAIA) team of the LORIA laboratory since November 2007.
Blurb: Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria), then presents more advanced research trends in the domain and gives concrete examples using illustrative applications.
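To make the framework concrete, here is a minimal, hypothetical sketch (not taken from the book) of value iteration on a toy two-state MDP; the states, actions, transition probabilities, rewards and discount factor below are invented purely for illustration.

```python
# Hypothetical illustration only: value iteration on a made-up two-state MDP.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor (assumed)

# Repeatedly apply the Bellman optimality operator until values stabilize.
V = {s: 0.0 for s in P}
for _ in range(200):
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        for s in P
    }

# Extract a greedy policy with respect to the estimated value function.
policy = {
    s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in P
}
print(V, policy)
```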
List of contents
Preface xvii
List of Authors xix
PART 1. MDPS: MODELS AND METHODS 1
Chapter 1. Markov Decision Processes 3
Frédérick GARCIA and Emmanuel RACHELSON
1.1. Introduction 3
1.2. Markov decision problems 4
1.3. Value functions 9
1.4. Markov policies 12
1.5. Characterization of optimal policies 14
1.6. Optimization algorithms for MDPs 28
1.7. Conclusion and outlook 37
1.8. Bibliography 37
Chapter 2. Reinforcement Learning 39
Olivier SIGAUD and Frédérick GARCIA
2.1. Introduction 39
2.2. Reinforcement learning: a global view 40
2.3. Monte Carlo methods 45
2.4. From Monte Carlo to temporal difference methods 45
2.5. Temporal difference methods 46
2.6. Model-based methods: learning a model 59
2.7. Conclusion 63
2.8. Bibliography 63
Chapter 3. Approximate Dynamic Programming 67
Rémi MUNOS
3.1. Introduction 68
3.2. Approximate value iteration (AVI) 70
3.3. Approximate policy iteration (API) 77
3.4. Direct minimization of the Bellman residual 87
3.5. Towards an analysis of dynamic programming in Lp-norm 88
3.6. Conclusions 93
3.7. Bibliography 93
Chapter 4. Factored Markov Decision Processes 99
Thomas DEGRIS and Olivier SIGAUD
4.1. Introduction 99
4.2. Modeling a problem with an FMDP 100
4.3. Planning with FMDPs 108
4.4. Perspectives and conclusion 122
4.5. Bibliography 123
Chapter 5. Policy-Gradient Algorithms 127
Olivier BUFFET
5.1. Reminder about the notion of gradient 128
5.2. Optimizing a parameterized policy with a gradient algorithm 130
5.3. Actor-critic methods 143
5.4. Complements 147
5.5. Conclusion 150
5.6. Bibliography 150
Chapter 6. Online Resolution Techniques 153
Laurent PÉRET and Frédérick GARCIA
6.1. Introduction 153
6.2. Online algorithms for solving an MDP 155
6.3. Controlling the search 167
6.4. Conclusion 180
6.5. Bibliography 180
PART 2. BEYOND MDPS 185
Chapter 7. Partially Observable Markov Decision Processes 187
Alain DUTECH and Bruno SCHERRER
7.1. Formal definitions for POMDPs 188
7.2. Non-Markovian problems: incomplete information 196
7.3. Computation of an exact policy on information states 202
7.4. Exact value iteration algorithms 207
7.5. Policy iteration algorithms 222
7.6. Conclusion and perspectives 223
7.7. Bibliography 225
Chapter 8. Stochastic Games 229
Andriy BURKOV, Laëtitia MATIGNON and Brahim CHAIB-DRAA
8.1. Introduction 229
8.2. Background on game theory 230
8.3. Stochastic games 245
8.4. Conclusion and outlook 269
8.5. Bibliography 270
Chapter 9. DEC-MDP/POMDP 277
Aurélie BEYNIER, François CHARPILLET, Daniel SZER and Abdel-Illah MOUADDIB
9.1. Introduction 277
9.2. Preliminaries 278
9.3. Multi-agent Markov decision processes 279
9.4. Decentralized control and local observability 280
9.5. Sub-classes of DEC-POMDPs 285
9.6. Algorithms for solving DEC-POMDPs 295
9.7. Applicative scenario: multirobot exploration 310
9.8. Conclusion and outlook 312
9.9. Bibliography 313
Chapter 10. Non-Standard Criteria 319
Matthieu BOUSSARD, Maroua BOUZID, Abdel-Illah MOUADDIB, Régis SABBADIN and Paul WENG
10.1. Introduction 319
10.2. Multicriteria approaches 320
10.3. Robustness in MDPs 327
10.4. Possibilistic MDPs 329
10.5. Algebraic MDPs 342
10.6. Conclusion 354
10.7. Bibliography 355
PART 3. APPLICATIONS 361
Chapter 11. Online Learning for Micro-Object Manipulation 363
Guillaume LAURENT
11.1. Introduction 363
11.2. Manipulation device 364
11.3. Choice of the reinforcement learning algorithm 367
11.4. Experimental results 370
11.5. Conclusion 373
11.6. Bibliography 373
Chapter 12. Conservation of Biodiversity 375
Iadine CHADÈS
12.1. Introduction 375
12.2. When to protect, survey or surrender cryptic endangered species 376
12.3. Can sea otters and abalone co-exist? 381
12.4. Other applications in conservation
Reviews
"As an overall conclusion, this book is an extensive presentation of MDPs and their applications in modeling uncertain decision problems and in reinforcement learning." (Zentralblatt MATH, 2011)
"The range of subjects covered is fascinating, however, from game-theoretical applications to reinforcement learning, conservation of biodiversity and operations planning. Oriented towards advanced students and researchers in the fields of both artificial intelligence and the study of algorithms as well as discrete mathematics." ( Book News , September 2010)
Product details
Editors | Olivier Sigaud, Olivier Buffet |
Publisher | John Wiley and Sons Ltd |
Languages | English |
Product format | Hardback |
Released | 16.02.2010 |
EAN | 9781848211674 |
ISBN | 978-1-84821-167-4 |
No. of pages | 480 |
Subject | Natural sciences, medicine, IT, technology > Technology > Electronics, electrical engineering, communications engineering |