
L1-Norm Principal Components Analysis by Linear Programming Minimization

Abstract: 
Principal Components Analysis (PCA) is widely used for dimensionality reduction because it extracts a small number of orthonormal vectors that explain most of the variation in a dataset. Conventional PCA is sensitive to outliers because it is based on the L2-norm, so several algorithms based on the L1-norm have been introduced in the literature to improve robustness. We present a new L1-PCA algorithm based on linear programming (LP) minimization in tangent hyperplanes, which iteratively finds components with minimum absolute deviation. It has only one runtime parameter, so it is simple to tune, and we show that it performs well on common benchmarks compared to other methods.

This seminar will not only be a presentation of our work. It will include a short lecture about PCA, so no previous knowledge is needed. We will then go through its pros and cons: what makes a PCA variant robust, why one would use these versions instead of the original, and when you should and should not apply it. Finally, I will present two real-world applications: one in computer vision, where PCA is widely used in face recognition and reconstruction, and one in finance, where I will introduce an application in stock analysis.
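As a quick intuition for the robustness contrast the abstract draws, here is a minimal Python/NumPy sketch. It is not the LP-based algorithm described above: it replaces the LP minimization with a brute-force direction search in 2-D, purely to compare the component chosen by classical (L2) PCA with the direction of minimum absolute (L1) deviation on data containing one gross outlier. The data and the grid-search resolution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data lying roughly along one direction, plus one gross outlier.
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0]])
X = np.vstack([X, [[0.0, 25.0]]])  # a single outlier off the main direction
X = X - X.mean(axis=0)             # center the data, as PCA assumes

def abs_deviation(X, v):
    """L1 objective: sum of absolute residuals after projecting onto unit v."""
    residual = X - np.outer(X @ v, v)
    return np.abs(residual).sum()

# Classical (L2) first principal component via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v_l2 = Vt[0]

# Stand-in for the LP minimization (NOT the speaker's method): brute-force
# search over unit directions in the plane for minimum absolute deviation.
angles = np.linspace(0.0, np.pi, 1800, endpoint=False)
candidates = np.column_stack([np.cos(angles), np.sin(angles)])
v_l1 = min(candidates, key=lambda v: abs_deviation(X, v))

print("L2 component:", v_l2, " L1 objective:", abs_deviation(X, v_l2))
print("L1 component:", v_l1, " L1 objective:", abs_deviation(X, v_l1))
```

On data like this, the single outlier tends to pull the L2 component noticeably toward itself, while the minimum-absolute-deviation direction stays much closer to the underlying line, which is the behaviour that motivates L1-PCA variants.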
Speaker Name: 
Andrea Visentin
Speaker Bio: 
PhD student at the Insight Centre for Data Analytics
Seminar Date: 
Wednesday, 6 April, 2016 (All day)
Seminar Location: 
UCC
Room: 
WGB 2.16