Humans can feel, weigh and grasp diverse objects, and simultaneously
infer their material properties while applying the right amount of
force—a challenging set of tasks for a modern robot[1]. Mechanoreceptor networks that provide sensory feedback and enable the dexterity of the human grasp[2] remain difficult to replicate in robots. Whereas computer-vision-based robot grasping strategies[3,4,5]
have progressed substantially with the abundance of visual data and
emerging machine-learning tools, there are as yet no equivalent sensing
platforms and large-scale datasets with which to probe the use of the
tactile information that humans rely on when grasping objects. Studying
the mechanics of how humans grasp objects will complement vision-based
robotic object handling. Importantly, the inability to record and
analyse tactile signals currently limits our understanding of the role
of tactile information in the human grasp itself—for example, how
tactile maps are used to identify objects and infer their properties is
unknown[6].
Here we use a scalable tactile glove and deep convolutional neural networks to show that sensors uniformly distributed over the hand can be used to identify individual objects, estimate their weight and explore the typical tactile patterns that emerge while grasping objects.
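As a concrete illustration, the sketch below shows what such a classifier could look like in PyTorch. It assumes the sensor readings are rasterized into 32×32 pressure maps and uses illustrative layer widths; it is a minimal sketch, not the network architecture used in this work.

```python
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Minimal CNN over single-hand tactile frames.

    Assumes each frame is rasterized into a 32x32 pressure map;
    the map size and layer widths are illustrative choices.
    """

    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of eight synthetic frames, shape (N, channels, H, W).
frames = torch.rand(8, 1, 32, 32)
logits = TactileCNN()(frames)  # shape (8, 26): one score per object
```

Weight estimation would follow the same pattern, with a single regression output replacing the 26 class scores.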
The sensor array (548 sensors) is assembled on a knitted glove and consists of a piezoresistive film connected by a network of conductive thread electrodes that are passively probed.
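A common realization of such passive probing is a raster scan: one row electrode is driven at a time and the film's resistance at each row–column crossing is sampled in turn. The sketch below, in Python with a hypothetical read_adc standing in for the unspecified hardware interface, illustrates the idea; the 32×32 grid size is an assumption of the sketch.

```python
import numpy as np

N_ROWS, N_COLS = 32, 32  # electrode grid; 548 crossings carry active sensors

def read_adc(row: int, col: int) -> float:
    """Placeholder for the hardware read-out: drive one row electrode
    and sample the voltage across the piezoresistive film at the
    selected row-column crossing. Synthetic data stands in here."""
    return float(np.random.rand())

def scan_frame() -> np.ndarray:
    """Raster-scan the array one crossing at a time to build a
    full-hand pressure map (one frame of the dataset)."""
    frame = np.empty((N_ROWS, N_COLS))
    for r in range(N_ROWS):
        for c in range(N_COLS):
            frame[r, c] = read_adc(r, c)
    return frame

frame = scan_frame()
```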
Using this low-cost (about US$10) scalable tactile glove, we record a large-scale tactile dataset of 135,000 frames, each covering the full hand, while interacting with 26 different objects. This set of interactions reveals the key correspondences between different regions of the human hand as it manipulates objects.
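One simple way to make such correspondences quantitative is to correlate region-averaged signals across recorded frames, as in the sketch below; the region names and sensor index ranges are hypothetical, since the true assignment depends on the glove's sensor layout.

```python
import numpy as np

# Synthetic stand-in for the recordings: frames x sensors.
n_frames, n_sensors = 5_000, 548  # a subsample of the 135,000 frames
recordings = np.random.rand(n_frames, n_sensors)

# Hypothetical grouping of sensor indices into hand regions; the real
# assignment would come from the glove's sensor layout.
regions = {
    "thumb": np.arange(0, 60),
    "index": np.arange(60, 120),
    "palm": np.arange(120, 260),
}

# Average signal per region in every frame, then the region-to-region
# correlation across all grasps.
region_signals = np.stack(
    [recordings[:, idx].mean(axis=1) for idx in regions.values()]
)
correspondence = np.corrcoef(region_signals)  # 3x3 correlation matrix
```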
Insights from the tactile signatures of the human grasp—through the lens of an artificial analogue of the natural mechanoreceptor network—can thus aid the future design of prosthetics[7], robot grasping tools and human–robot interactions[1,8,9,10].