Webinar ID: 984 0628 8966
Random forests (Breiman, 2001), now part of the essential toolbox of every data scientist, are a very powerful non-parametric statistical learning method. The traditional description of a random forest is that of an ensemble method that averages the predictions of many individual trees, each built by injecting randomness into the process, and that is useful for prediction problems. This view hides the fact that random forests can be applied to far more general problems. This becomes apparent when we switch to another view, in which a random forest is a weight-generating machine that automatically finds observations similar to the one for which we want an estimate (prediction). From this perspective, a random forest is a practical, flexible, and powerful method for local estimation and prediction. In this talk, I will present 1) the basic traditional description of a random forest, 2) the modern view of a random forest as a weight-generating machine for local estimation and prediction, and 3) various examples of applications of random forests to complex problems beyond the usual supervised prediction problem.
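To make the weight-generating view concrete, here is a minimal sketch (not part of the talk materials) using scikit-learn. For each training observation, it computes a weight equal to the fraction of trees in which that observation falls in the same leaf as the query point, normalized by leaf size. With the simplifying assumption `bootstrap=False` (every tree sees the full training set), the weighted average of the training responses reproduces the forest's prediction exactly; the dataset and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# bootstrap=False so each tree trains on the full sample; this makes the
# weight identity below exact (a simplifying assumption for this demo).
rf = RandomForestRegressor(n_estimators=50, bootstrap=False,
                           random_state=0).fit(X, y)

x_new = X[:1]                       # query point (any new observation works)
leaves_train = rf.apply(X)          # (n_samples, n_trees) leaf indices
leaves_new = rf.apply(x_new)        # (1, n_trees)

# w[i] = average over trees of 1{x_i shares the query's leaf} / leaf size,
# i.e. the forest's implicit similarity weight for training point i.
same_leaf = leaves_train == leaves_new          # broadcasts to (n, n_trees)
leaf_sizes = same_leaf.sum(axis=0)              # training points per matched leaf
w = (same_leaf / leaf_sizes).mean(axis=1)

# The weights sum to one, and the weighted average of the training
# responses equals the forest prediction for the query point.
assert np.isclose(w.sum(), 1.0)
assert np.isclose(w @ y, rf.predict(x_new)[0])
```

The point of the sketch is that the weights `w` depend only on where observations land in the trees, not on `y`; the same weights can therefore be reused to locally estimate other quantities (quantiles, survival curves, treatment effects), which is what makes the weight-generating view more general than plain prediction.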