Learning Classifier Systems in a Nutshell
Description:
This video offers an accessible introduction to the basics of how Learning Classifier Systems (LCS), also known as Rule-Based Machine Learning (RBML), operate to learn patterns and make predictions. To simplify these concepts, we focus on a generic ‘Michigan-style LCS’ algorithm architecture designed for supervised learning. The example algorithm described in this video is probably closest to the UCS algorithm described by Bernadó-Mansilla and Garrell-Guiu in their 2003 publication. However, the modern concept of the LCS algorithm is the result of founding work by John Henry Holland (https://en.wikipedia.org/wiki/John_Henry_Holland).

While the video focuses on how the algorithm itself works, here we provide a brief background on why LCS algorithms are valuable and unique compared to other machine learning strategies. LCSs are a family of advanced machine learning algorithms that learn to represent patterns of association in a distributed, piecewise fashion. These systems break down associations between independent and dependent variables into simple ‘IF:THEN’ statements. This makes them flexible, adaptive learners that can approach data in a model-free and assumption-free manner.

Research and development of LCS algorithms was initially focused on reinforcement learning problems such as behavior modeling, but in the last decade the advantages of applying these systems as supervised learners have become clear. In particular, LCS algorithms have been demonstrated to perform particularly well on the detection, modeling, and characterization of complex, multivariate, epistatic, or heterogeneous patterns of association. Additionally, LCS algorithms are naturally multi-objective (balancing accuracy and generality), are niche learners, and can easily be thought of as implicit ensemble learners.
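The ‘IF:THEN’ rule representation mentioned above can be sketched in a few lines of Python. This is an illustrative example, not code from the video: the bit-string encoding and the ‘#’ wildcard (a "don't care" position) are conventions commonly used in Michigan-style LCS implementations.

```python
def rule_matches(condition, instance):
    """Return True if a rule's condition matches a binary instance.

    Each position of the condition is '0', '1', or '#', where '#' is a
    wildcard ("don't care") that matches either bit value. The more '#'
    positions a rule has, the more general it is.
    """
    return all(c == '#' or c == x for c, x in zip(condition, instance))

# A hypothetical rule: IF bit 0 is 1 AND bit 3 is 0 THEN predict class 1.
rule = {"condition": "1##0##", "action": 1}

print(rule_matches(rule["condition"], "110011"))  # True: specified bits agree
print(rule_matches(rule["condition"], "010011"))  # False: bit 0 disagrees
```

An LCS maintains a whole population of such rules, so the overall pattern is represented piecewise across many partially general rules rather than in a single monolithic model.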
Furthermore, LCSs can be adapted to handle missing data values, imbalanced data, discrete and continuous features, as well as binary-class, multi-class, and regression learning/prediction.

The flagship benchmark problem for these systems has traditionally been the n-bit multiplexer problem. The multiplexer is a binary classification problem that is both epistatic and heterogeneous, where no single feature is predictive of class on its own. This benchmark can be scaled up in dimensional complexity to include the 6-bit, 11-bit, 20-bit, 37-bit, 70-bit, and 135-bit variations. Most other machine learners struggle, in particular, with heterogeneous relationships. As of 2016, our own LCS algorithm, called ‘ExSTraCS’, was still the only algorithm reported to solve the 135-bit multiplexer problem directly.

For a complete introduction, review, and roadmap to LCS algorithms, check out my review paper from 2009: http://dl.acm.org/citation.cfm?id=1644491

The first introductory textbook on LCS algorithms (authored by Will Browne and myself) will be published by Springer this fall: (link will be found here once it's available)

To follow research and software developed by Ryan Urbanowicz, PhD, on rule-based machine learning methods and other topics, check out the following links:
http://www.ryanurbanowicz.com
https://github.com/ryanurbs

To follow research and software development by Jason H. Moore, PhD, and his Computational Genetics Lab at the University of Pennsylvania’s Institute for Biomedical Informatics, check out the following links:
http://epistasis.org/
http://upibi.org/
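To make the multiplexer benchmark mentioned above concrete, here is a minimal sketch (not from the video) of the n-bit multiplexer function. For a bit string of length k + 2^k, the first k address bits select which of the remaining 2^k register bits determines the class; because the predictive bit changes from instance to instance, the problem is heterogeneous as well as epistatic.

```python
def multiplexer(bits):
    """n-bit multiplexer: the first k address bits (where k + 2**k == n)
    index into the remaining 2**k register bits; the selected register
    bit is the class label."""
    k = 1
    while k + 2 ** k < len(bits):
        k += 1
    if k + 2 ** k != len(bits):
        raise ValueError("length must equal k + 2**k (6, 11, 20, 37, ...)")
    address = int(bits[:k], 2)      # interpret the address bits as an integer
    return int(bits[k + address])   # the addressed register bit is the label

# 6-bit example: address bits '10' (= 2) select the third register bit.
print(multiplexer("100010"))  # register bits '0010', position 2 -> 1
```

Scaling k from 2 up to 7 yields the 6-bit through 135-bit variants cited in the description.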
YouTube url:
https://youtu.be/CRge_cZ2cJc
Created:
10. 1. 2021 00:38:32