Learning Kernel-based Approximate Isometries
Title: | Learning Kernel-based Approximate Isometries | 26 views, 9 downloads |
---|---|---|
Name(s): | Sedghi, Mahlagha, Author; Georgiopoulos, Michael, Committee Chair; Anagnostopoulos, Georgios, Committee Co-Chair; Atia, George, Committee Member; Liu, Fei, Committee Member; University of Central Florida, Degree Grantor | |
Type of Resource: | text | |
Date Issued: | 2017 | |
Publisher: | University of Central Florida | |
Language(s): | English | |
Abstract/Description: | The increasing availability of public datasets offers an unprecedented opportunity to conduct data-driven studies. Metric Multi-Dimensional Scaling aims to find a low-dimensional embedding of the data that preserves the pairwise dissimilarities among the data points in the original space. Along with enabling visualization, this dimensionality reduction plays a pivotal role in analyzing and disclosing hidden structures in the data. This work introduces a Sparse Kernel-based Least Squares Multi-Dimensional Scaling approach for exploratory data analysis and, when desirable, data visualization. We assume our embedding map belongs to a Reproducing Kernel Hilbert Space of vector-valued functions, which allows for embeddings of previously unseen data. Also, given appropriate positive-definite kernel functions, it extends the applicability of our method to non-numerical data. Furthermore, the framework employs Multiple Kernel Learning for implicitly identifying an effective feature map and, hence, kernel function. Finally, via the use of sparsity-promoting regularizers, the technique is capable of embedding the data on a, typically, lower-dimensional manifold by naturally inferring the embedding dimension from the data itself. In the process, key training samples are identified whose participation in the embedding map's kernel expansion is most influential. As we will show, such influence may be given interesting interpretations in the context of the data at hand. The resulting multiple-kernel learning, non-convex framework can be effectively trained via a block coordinate descent approach, which alternates between an accelerated proximal average method-based iterative majorization for learning the kernel expansion coefficients and a simple quadratic program, which deduces the multiple-kernel learning coefficients. Experimental results showcase potential uses of the proposed framework on artificial as well as real-world datasets and underline the merits of our embedding framework. Our method discovers genuine hidden structure in the data that, in the case of network data, matches the results of the well-known Multi-level Modularity Optimization community structure detection algorithm. | |
Identifier: | CFE0007132 (IID), ucf:52315 (fedora) | |
Note(s): | 2017-08-01; M.S.E.E.; Engineering and Computer Science, Electrical Engineering and Computer Engineering; Masters; This record was generated from author-submitted information. | |
Subject(s): | Data visualization -- Exploratory data analysis -- Multi-dimensional scaling -- Kernel methods -- Structured sparsity -- Iterative Majorization | |
Persistent Link to This Record: | http://purl.flvc.org/ucf/fd/CFE0007132 | |
Restrictions on Access: | Campus-only access until 2019-02-15 | |
Host Institution: | UCF | |
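
For readers skimming the record, the core objects in the abstract admit a compact illustration. The following is a minimal sketch, not the thesis's actual algorithm: it keeps the kernel expansion f(x) = Σₙ k(x, xₙ) aₙ and a row-wise ℓ2,1 penalty that switches off uninfluential training samples, but it replaces the accelerated proximal-average iterative majorization and the multiple-kernel learning step with plain gradient descent on the raw stress under a single, fixed RBF kernel. All names (`kernel_mds`, `gamma`, `lam`) and the step-size/kernel-width choices are hypothetical, not taken from the thesis.

```python
import numpy as np


def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF Gram matrix between the rows of X and Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)


def kernel_mds(X, D, d=2, gamma=1.0, lam=1e-2, lr=1e-4, iters=2000, seed=0):
    """Sketch only: fit an embedding f(x) = sum_n k(x, x_n) a_n by
    (sub)gradient descent on the raw metric-MDS stress, with a proximal
    l2,1 (group-lasso) step on the rows of the coefficient matrix A so
    that uninfluential training samples drop out of the expansion."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X, X, gamma)                 # n x n Gram matrix
    A = rng.standard_normal((X.shape[0], d))    # kernel expansion coefficients
    for _ in range(iters):
        Y = K @ A                               # current embedding of the training set
        diff = Y[:, None, :] - Y[None, :, :]    # (n, n, d) pairwise differences
        E = np.sqrt((diff ** 2).sum(-1) + 1e-12)  # embedded pairwise distances
        # gradient of (1/2) * sum_ij (E_ij - D_ij)^2 with respect to Y
        grad_Y = 2.0 * (((E - D) / E)[:, :, None] * diff).sum(axis=1)
        A -= lr * (K @ grad_Y)                  # chain rule through Y = K A (K symmetric)
        # proximal shrinkage for the penalty lam * sum_n ||a_n||_2:
        # rows with small norm are zeroed, marking non-key samples
        norms = np.linalg.norm(A, axis=1, keepdims=True)
        A *= np.maximum(0.0, 1.0 - lr * lam / (norms + 1e-12))
    return A, K @ A


# Toy usage: points near a 2-D plane inside a 10-dimensional ambient space.
rng = np.random.default_rng(1)
B = rng.standard_normal((2, 10))
X = rng.standard_normal((60, 2)) @ B + 0.01 * rng.standard_normal((60, 10))
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # target dissimilarities
A, Y = kernel_mds(X, D, d=2, gamma=0.05)
print("active samples in the expansion:", int((np.linalg.norm(A, axis=1) > 1e-8).sum()))
```

The row-wise shrinkage at the end of each iteration is the standard group-lasso proximal operator; in this simplified setting it plays the role the abstract assigns to the sparsity-promoting regularizers, identifying which training samples remain influential in the learned map.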