Title
-
AUTONOMOUS ROBOTIC GRASPING IN UNSTRUCTURED ENVIRONMENTS.
-
Creator
-
Jabalameli, Amirhossein, Behal, Aman, Haralambous, Michael, Pourmohammadi Fallah, Yaser, Boloni, Ladislau, Xu, Yunjun, University of Central Florida
-
Abstract / Description
-
A crucial problem in robotics is interacting with known or novel objects in unstructured environments. While the convergence of a multitude of research advances is required to address this problem, our goal is to describe a framework that employs the robot's visual perception to identify and execute an appropriate grasp to pick and place novel objects. Analytical approaches search for solutions through kinematic and dynamic formulations. On the other hand, data-driven methods retrieve grasps based on prior knowledge of the target object, human experience, or information obtained from acquired data. In this dissertation, we propose a framework based on the supporting principle that potential contacting regions for a stable grasp can be found by searching for (i) sharp discontinuities and (ii) regions of locally maximal principal curvature in the depth map. In addition to support from empirical evidence, we justify this principle using the concepts of force closure and wrench convexes. The key point is that no prior knowledge of objects is utilized in the grasp planning process; nevertheless, the obtained results show that the approach deals successfully with objects of different shapes and sizes. We believe that the proposed work is novel because describing the visible portion of an object by the aforementioned edges in the depth map reduces grasp set-point extraction to image-processing operations on small 2D image regions, rather than clustering and analyzing large sets of 3D point-cloud coordinates. In fact, this approach avoids object reconstruction altogether. These features result in low computational costs and make it possible to run the proposed algorithm in real time. Finally, the performance of the approach is successfully validated on scenes with both single and multiple objects, in both simulation and real-world experimental setups.
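To illustrate the core principle stated in the abstract, the following is a minimal Python sketch (not taken from the dissertation; the function name and threshold values are hypothetical, and the curvature term is only a Hessian-based approximation) that flags the two cue types in a depth map: sharp depth discontinuities and regions of locally maximal principal curvature.

    import numpy as np

    def grasp_cue_masks(depth, jump_thresh=0.02, curv_percentile=95.0):
        """Flag candidate contact regions in a metric depth map (H x W, in meters)."""
        # (i) Sharp discontinuities: large depth jumps between neighboring pixels.
        gy, gx = np.gradient(depth)                 # first derivatives along rows, cols
        jump_mask = np.hypot(gx, gy) > jump_thresh

        # (ii) Principal curvature (approximated): eigenvalues of a finite-difference
        # Hessian of the depth surface; keep pixels whose dominant curvature is large.
        gxy, gxx = np.gradient(gx)                  # d2z/dxdy, d2z/dx2
        gyy, _ = np.gradient(gy)                    # d2z/dy2
        mean = 0.5 * (gxx + gyy)
        diff = np.sqrt((0.5 * (gxx - gyy)) ** 2 + gxy ** 2)
        k_dominant = np.abs(mean) + diff            # largest-magnitude Hessian eigenvalue
        curv_mask = k_dominant > np.percentile(k_dominant, curv_percentile)

        return jump_mask, curv_mask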
-
Date Issued
-
2019
-
Identifier
-
CFE0007892, ucf:52757
-
Format
-
Document (PDF)
-
PURL
-
http://purl.flvc.org/ucf/fd/CFE0007892