To reliably complete manual tasks, robots should be able to handle a wide variety of objects, ranging from everyday household items to tools used in specialized professional settings. While many existing robotic systems can now complete basic manual tasks, such as picking up objects and carrying them to a set location, most still struggle with tasks that entail the dexterous manipulation of objects.
The term dexterous manipulation describes the ability to skillfully and precisely move objects in nuanced ways, an ability central to many of the tasks humans tackle daily. Replicating it in robots is difficult, as it typically requires gathering and interpreting several types of sensory information at once.
Conventional approaches to robot manipulation rely on visual sensors, such as cameras, and tactile sensors, which detect contact forces and surface properties. Yet most existing tactile sensors only provide feedback after a robot touches an object, which makes it difficult to plan manipulation strategies in advance.