
Uncertainty Modelling in Data Science

By Sébastien Destercke, Thierry Denoeux, María Ángeles Gil, Przemysław Grzegorzewski & Olgierd Hryniewicz

  • Release Date: 2018-07-24
  • Genre: Computers & Internet

Description

This book features 29 peer-reviewed papers presented at the 9th International Conference on Soft Methods in Probability and Statistics (SMPS 2018), held in conjunction with the 5th International Conference on Belief Functions (BELIEF 2018) in Compiègne, France, on September 17–21, 2018. It includes foundational, methodological and applied contributions on topics as varied as imprecise data handling, linguistic summaries, model coherence, imprecise Markov chains, and robust optimisation. These proceedings were produced using EasyChair.
Over recent decades, interest in extensions and alternatives to probability and statistics has increased significantly in diverse areas, including decision-making, data mining and machine learning, and optimisation. This interest stems from the need to enrich existing models so that they capture different facets of uncertainty, such as ignorance, vagueness, randomness, conflict or imprecision. Frameworks such as rough sets, fuzzy sets, fuzzy random variables, random sets, belief functions, possibility theory, imprecise probabilities, lower previsions, and desirable gambles all share this goal, but have emerged from different needs.
The advances, results and tools presented in this book are important to the ubiquitous and fast-growing fields of data science, machine learning and artificial intelligence. Indeed, a key aspect of learned predictive models is the trust that can be placed in them.
Carefully modelling the uncertainty associated with the data and the models, using principled methods, is one way of increasing this trust, as the resulting model can then distinguish between reliable and less reliable predictions. In addition, extensions such as fuzzy sets can be explicitly designed to provide interpretable predictive models, facilitating user interaction and increasing trust.