DECISION THEORY CLASSIFICATION OF HIGH-DIMENSIONAL VECTORS BASED ON SMALL SAMPLES


Abstract/Description:
In this paper, we review existing classification techniques and suggest an entirely new procedure for the classification of high-dimensional vectors on the basis of a few training samples. The proposed method is based on the Bayesian paradigm and provides posterior probabilities that a new vector belongs to each of the classes; it therefore adapts naturally to any number of classes. Our classification technique is based on a small vector related to the projection of the observation onto the space spanned by the training samples. This is achieved by employing matrix-variate distributions in classification, which is an entirely new idea. In addition, our method mimics time-tested classification techniques based on the assumption of normally distributed samples. By assuming that the samples have a matrix-variate normal distribution, we are able to replace classification on the basis of a large covariance matrix with classification on the basis of a smaller matrix that describes the relationship of the sample vectors to each other.
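The dimension-reduction idea the abstract describes — replacing a high-dimensional observation with the small vector of its coordinates in the span of the training samples — can be sketched in code. The following Python example is a minimal illustration, not the dissertation's actual procedure: it performs only the projection step and then applies an ordinary Gaussian posterior with unit variance in the reduced space, whereas the dissertation's method uses matrix-variate normal distributions. All names and the synthetic data are hypothetical.

```python
# A minimal sketch, NOT the dissertation's method: project a new
# high-dimensional observation onto the span of the training samples,
# then classify with a naive Gaussian posterior in the reduced
# coordinates. All names and data below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
p, n_per_class = 500, 4                  # dimension >> sample size

# Synthetic training samples (rows) for two classes.
X0 = rng.normal(size=(n_per_class, p))            # class 0: mean 0
X1 = rng.normal(size=(n_per_class, p)) + 1.0      # class 1: mean 1

# All training samples stacked as columns: a p x 8 basis matrix.
B = np.vstack([X0, X1]).T

def reduce_coords(y):
    """Least-squares coordinates of y in the span of the columns of B.
    This replaces the p-vector y with a small 8-vector."""
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef

# Reduced class means. Each training sample is itself a column of B,
# so it reduces exactly to a standard unit vector.
m0 = np.mean([reduce_coords(x) for x in X0], axis=0)
m1 = np.mean([reduce_coords(x) for x in X1], axis=0)

def posterior_class1(y):
    """P(class 1 | y) under equal priors and unit-variance Gaussian
    likelihoods centred at each reduced class mean."""
    z = reduce_coords(y)
    d = np.array([np.sum((z - m0) ** 2), np.sum((z - m1) ** 2)])
    w = np.exp(-0.5 * (d - d.min()))     # numerically stable weights
    return w[1] / w.sum()

y_new = rng.normal(size=p) + 1.0         # a fresh draw near class 1
prob1 = posterior_class1(y_new)
```

Note the payoff the abstract points to: all distance computations happen with 8-vectors rather than 500-vectors, so no large covariance matrix ever needs to be estimated from the few available samples.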
Name(s): Bradshaw, David, Author
Pensky, Marianna, Committee Chair
University of Central Florida, Degree Grantor
Type of Resource: text
Date Issued: 2005
Publisher: University of Central Florida
Language(s): English
Identifier: CFE0000753 (IID), ucf:46593 (fedora)
Note(s): 2005-12-01
Ph.D.
Arts and Sciences, Department of Mathematics
Doctorate
This record was generated from author submitted information.
Subject(s): Support Vector Machine
decision theory
posterior probabilities
matrix-variate normal
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0000753
Restrictions on Access: public
Host Institution: UCF
