Feature Dimension Reduction Using LDA and PCA in Python - Principal Component Analysis in Python
Description:
Hi, you've got a new video on ML. Please watch: "TensorFlow 2.0 Tutorial for Beginners 10 - Breast Cancer Detection Using CNN in Python" https://www.youtube.com/watch?v=Y6UDeGRyNZk

Download Working File: https://github.com/laxmimerit/Feature-Selection-in-Machine-Learning-using-Python-All-Code

Linear Discriminant Analysis (LDA) is a supervised algorithm, as it takes the class labels into consideration. It reduces dimensionality while preserving as much of the class-discrimination information as possible. LDA helps you find the boundaries around clusters of classes: it projects your data points onto a line (or a lower-dimensional subspace) so that the clusters are as separated as possible, with the points of each cluster lying close to their centroid. So how are these clusters defined, and how do we get the reduced feature set with LDA? Basically, LDA finds the centroid of each class's data points. For example, with thirteen different features, LDA will compute the centroid of each class using all thirteen features.

PCA: Principal Component Analysis (PCA) is a linear dimensionality-reduction technique that extracts information from a high-dimensional space by projecting it into a lower-dimensional subspace. It tries to preserve the essential directions, those along which the data varies the most, and discard the non-essential directions with less variation. One important thing to note about PCA is that it is an unsupervised dimensionality-reduction technique: you can cluster similar data points based on the feature correlation between them without any supervision (or labels), and you will learn how to achieve this practically using Python in later sections of this tutorial. According to Wikipedia, PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components.
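To make the supervised/unsupervised contrast concrete, here is a minimal Python sketch using scikit-learn. This is not the video's exact code (the working file is in the GitHub repository linked above); the wine dataset is an assumption chosen for illustration because it has thirteen features, matching the example in the description.

# A minimal sketch (not the video's exact code): reducing the 13-feature
# wine dataset to 2 dimensions with PCA (unsupervised) and LDA (supervised).
# The dataset choice is an assumption for illustration.
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_wine(return_X_y=True)      # 178 samples, 13 features, 3 classes
X = StandardScaler().fit_transform(X)  # scale features before projecting

# PCA ignores the labels y: it keeps the directions of maximum variance.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA uses the labels y: it keeps the directions that best separate
# the class centroids.
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (178, 2) (178, 2)

Note that with three classes LDA can produce at most two discriminant components (n_classes - 1), whereas PCA could keep anywhere up to all thirteen.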
YouTube URL:
https://www.youtube.com/watch?v=CI7dIwMCRlk
Created:
16. 3. 2020 14:35:16