Robotics and Autonomous Systems
journal homepage: www.elsevier.com/locate/robot
Autonomous grasp and manipulation planning using a ToF camera
Zhixing Xue ∗ , Steffen W. Ruehl, Andreas Hermann, Thilo Kerscher, Ruediger Dillmann
FZI Forschungszentrum Informatik, Haid-und-Neu-Str. 10-14,76137 Karlsruhe, Germany
A time-of-flight camera can help a service robot to sense its 3D environment. In this paper, we introduce our methods for sensor calibration and 3D data segmentation, which enable a service robot to automatically plan grasps and manipulation actions. Impedance control is used intensively to compensate for the remaining modeling error and to apply the computed forces. The methods are demonstrated in three service robotic applications. Sensor-based motion planning allows the robot to move within a dynamic and cluttered environment without collision. Unknown objects can be detected and grasped. In the autonomous ice cream serving scenario, the robot captures the surface of the ice cream and plans a manipulation trajectory to scoop a portion of ice cream. © 2011 Elsevier B.V. All rights reserved.
Article history: Available online 27 August 2011. Keywords: Manipulation planning; Grasp planning; Time-of-flight sensors; Impedance control.
1. Introduction

A domestic service robot should help people handle daily tasks in the household environment. Autonomous grasping and manipulation of household items are key functions that such a robot should have. To accomplish grasping and manipulation tasks, a robot needs the ability to perceive the 3D environment and the ability to interact with it adaptively. For the former, range sensors have been developed that provide depth information at each pixel instead of the gray or color information of their 2D counterparts. For the latter, soft robots with passively or mechanically compliant joints have been developed, which cause less impact on the environment and yield better compliant behavior. In this article, we present our approach to combining these two technologies for autonomous grasping and manipulation. We use a time-of-flight camera to capture object models and environment information. Grasp planning, motion planning and manipulation planning are employed to plan grasps, collision-free paths and manipulation actions, respectively.

1.1. Range sensors

In the past 25 years, range sensors have been intensively developed and commercialized, and they now enable robots to capture their 3D environment. Depending on the applied measurement principle, they can be categorized into four sensor technologies. The laser scanner measures the time delay between an emitted, reflected and received laser signal to infer the distance to a target point. By mechanically deflecting the laser signal in azimuthal and longitudinal directions and sequentially acquiring
Corresponding author. E-mail address: email@example.com (Z. Xue).
multiple measurement points, a 3D point cloud can be obtained. Since laser scanners can measure ranges over hundreds of meters, they are widely applied in both indoor and outdoor environments, especially for navigation, 3D SLAM, indoor modeling and collision avoidance for mobile robots. The slit scanner projects a laser line onto the object and simultaneously detects the laser profile in a single video frame. It is the most widely used triangulation-based 3D laser camera because of its optical and mechanical simplicity and low cost. Instead of a single laser line, multiple stripes or patterns can also be projected onto the object; this is the pattern projection principle. These projection techniques work only at short range. Since they need multiple projections of lines or patterns to acquire the whole scene, the scan time is relatively long and not suited for dynamic scenes. For large...
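The two measurement principles described above can be sketched numerically. The following is an illustrative example (not from the paper, and with idealized assumptions: a direct round-trip time measurement for the laser scanner, and a rectified pinhole geometry for the triangulation-based scanners); the function names and parameter values are hypothetical.

```python
# Sketch of the two range-measurement principles from Section 1.1,
# under simplified, idealized assumptions.

C = 299_792_458.0  # speed of light in m/s

def tof_range(round_trip_time_s: float) -> float:
    """Time-of-flight principle: the signal travels to the target and
    back, so distance = c * t / 2."""
    return C * round_trip_time_s / 2.0

def triangulation_depth(baseline_m: float, focal_px: float,
                        disparity_px: float) -> float:
    """Triangulation principle (slit scanner / pattern projection),
    idealized rectified geometry: depth = baseline * focal / disparity."""
    return baseline_m * focal_px / disparity_px

# A round-trip time of about 66.7 ns corresponds to roughly 10 m range:
print(tof_range(66.7e-9))

# With a 10 cm baseline, 800 px focal length and 40 px disparity,
# the triangulated depth is 2 m:
print(triangulation_depth(0.1, 800.0, 40.0))
```

The example also makes the range trade-off concrete: time-of-flight accuracy is limited by timing resolution (1 ns of timing error is already 15 cm of range error), whereas triangulation accuracy degrades with distance because disparity shrinks, which is why the projection techniques work only at short range.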