STUDY OF HUMAN ACTIVITY IN VIDEO DATA WITH AN EMPHASIS ON VIEW-INVARIANCE
- Date Issued: 2012
- Abstract/Description:
- The perception and understanding of human motion and action is an important area of research in computer vision that plays a crucial role in applications such as surveillance, HCI, and ergonomics. In this thesis, we focus on the recognition of actions under varying viewpoints and different, unknown camera intrinsic parameters. The challenges to be addressed include perspective distortions, differences in viewpoint, anthropometric variations, and the large number of degrees of freedom of articulated bodies. In addition, we are interested in methods that require little or no training. Current solutions to action recognition usually assume that a huge dataset of actions is available so that a classifier can be trained. This means that in order to define a new action, the user has to record a number of videos from different viewpoints with varying camera intrinsic parameters and then retrain the classifier, which is not very practical from a development point of view. We propose algorithms that overcome these challenges and require just a few instances of the action, observed from any viewpoint with any intrinsic camera parameters.

  Our first algorithm is based on the rank constraint on the family of planar homographies associated with triplets of body points. We represent an action as a sequence of poses and decompose each pose into triplets, so that a pose transition is broken down into a set of movements of body-point planes. In this way, we transform the non-rigid motion of the body points into a rigid motion of body-point planes. We use the fact that the family of homographies associated with two identical poses has rank 4 to gauge the similarity of poses between two subjects observed by different perspective cameras and from different viewpoints. This method requires only one instance of the action.

  We then show that it is possible to extend the concept of triplets to line segments. In particular, we establish that if we look at the movement of line segments instead of triplets, we have more redundancy in the data, leading to better results. We demonstrate this concept using "fundamental ratios." We decompose a human body pose into line segments instead of triplets and examine the set of movements of these line segments. This method needs only three instances of the action. If a larger dataset is available, we can also apply weighting to the line segments for better accuracy.

  The last method is based on the concept of "projective depth." Given a plane, we can find the depth of a point relative to that plane. We propose three different ways of using projective depth: (i) triplets: the three points of a triplet, together with the epipole, define a plane, and the movement of points relative to these body planes can be used to recognize actions; (ii) ground plane: if we are able to extract the ground plane, we can find the projective depth of the body points with respect to it, so the problem of action recognition translates to curve matching; and (iii) mirror person: we can use the mirror view of the person to extract mirror-symmetric planes. This method also needs only one instance of the action.

  Extensive experiments are reported on view invariance, robustness to noisy localization and occlusion of body points, and action recognition. The experimental results are very promising and demonstrate the efficiency of our proposed invariants.
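Two geometric tools recur in the abstract: the rank-4 constraint on a family of plane-induced homographies and the projective depth of a point relative to a reference plane. The NumPy sketch below illustrates them; it is not the author's implementation, and the synthetic setup, function names (`family_rank`, `projective_depth`), and tolerance are illustrative assumptions. The first snippet uses the standard fact that every homography induced by a scene plane between one fixed pair of views can be written, up to scale, as H = A + e'v^T (A a fixed 3x3 matrix, e' the epipole in the second view, v depending only on the plane), so stacking the flattened, normalized homographies of such a family gives a matrix of rank at most 4, whereas unrelated homographies do not satisfy this.

```python
import numpy as np

def family_rank(H_list, tol=1e-8):
    """Numerical rank of the matrix whose rows are the flattened,
    unit-norm homographies of one candidate family."""
    M = np.stack([H.ravel() / np.linalg.norm(H) for H in H_list])
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(0)

# One fixed pair of views: A is the shared 3x3 part, ep the epipole in
# the second view.  Each scene plane contributes H = A + ep v^T.
A = rng.standard_normal((3, 3))
ep = rng.standard_normal(3)

consistent = [A + np.outer(ep, rng.standard_normal(3)) for _ in range(10)]
unrelated = [rng.standard_normal((3, 3)) for _ in range(10)]

print(family_rank(consistent))  # 4 -> planes seen by the same two views
print(family_rank(unrelated))   # 9 -> no common two-view geometry
```

When two frames show the same pose, the planes spanned by corresponding body-point triplets are genuine world planes seen by both cameras, so their homographies should behave like the consistent family above; a higher rank signals a pose mismatch. For the third method, a generic textbook formulation of projective depth (again an illustrative sketch, not the thesis code) is: with H the homography induced by a reference plane such as the ground plane and e' the epipole in the second view, corresponding homogeneous points satisfy x' ~ Hx + k e', and k vanishes exactly for points lying on the reference plane.

```python
def projective_depth(x1, x2, H, e2):
    """Depth of the point (x1 <-> x2) relative to the plane inducing H.

    From s*x2 = H @ x1 + k*e2, taking the cross product with x2 removes
    the unknown scale s: 0 = x2 x (H @ x1) + k*(x2 x e2).  Solve for k in
    the least-squares sense.  The scale of k depends on the scales chosen
    for H, e2 and the points, so only normalized depth curves (as in the
    curve-matching formulation above) are meaningful.
    """
    a = np.cross(x2, e2)
    b = np.cross(x2, H @ x1)
    return float(-(a @ b) / (a @ a))
```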
| Title: | STUDY OF HUMAN ACTIVITY IN VIDEO DATA WITH AN EMPHASIS ON VIEW-INVARIANCE |
| --- | --- |
| Name(s): | Ashraf, Nazim (Author); Foroosh, Hassan (Committee Chair); Hughes, Charles (Committee Member); Tappen, Marshall (Committee Member); Moshell, Jack (Committee Member); University of Central Florida (Degree Grantor) |
| Type of Resource: | text |
| Date Issued: | 2012 |
| Publisher: | University of Central Florida |
| Language(s): | English |
| Identifier: | CFE0004352 (IID), ucf:49449 (fedora) |
| Note(s): | 2012-08-01; Ph.D.; Engineering and Computer Science, Computer Science; Doctoral; This record was generated from author submitted information. |
| Subject(s): | Action Recognition -- Geometric Invariants -- View Invariance |
| Persistent Link to This Record: | http://purl.flvc.org/ucf/fd/CFE0004352 |
| Restrictions on Access: | public; 2012-08-15 |
| Host Institution: | UCF |