|Reference Type||Conference Proceedings|
|Author(s)||Chen, N.; Urban, S.; Osendorfer, C.; Bayer, J.; Smagt, P. van der|
|Title||Estimating finger grip force from an image of the hand using Convolutional Neural Networks and Gaussian Processes|
|Journal/Conference/Book Title||Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA)|
|Abstract||Estimating human fingertip forces is required to understand force distribution in grasping and manipulation. Human grasping behavior can then be used to develop force- and impedance-based grasping and manipulation strategies for robotic hands. However, estimating human grip force in natural settings has only been possible with instrumented objects or unnatural gloves, greatly limiting the types of objects that can be used.
In this paper we describe an approach which uses images of the human fingertip to reconstruct grip force and torque at the finger. Our approach does not use finger-mounted equipment, but instead a steady camera observing the fingers of the hand from a distance. This allows for finger force estimation without any physical interference with the hand or object itself, and is therefore universally applicable.
We construct a 3-dimensional finger model from 2D images. Convolutional Neural Networks (CNNs) are used to predict the transformation matrix that maps the 2D image to the 3D model. Two CNN variants are designed, producing orientation and position either separately or jointly. After training, our system shows an alignment accuracy of over 98% on unseen data.
In the final step, a Gaussian process estimates finger force and torque from the aligned images, based on color changes and deformations of the nail and its surrounding skin. Experimental results show an accuracy of about 95% for force estimation and 90% for torque estimation.|
|Link to PDF||http://www.brml.org/uploads/tx_sibibtex/CheSma2014.pdf|