Controlling Prosthetic Limbs With Thought and AI

Bionics is no longer the stuff of science fiction. For some amputees, the ability to carry out activities of daily living depends on how efficiently and reliably signals from the brain can be routed around broken neural pathways to control a prosthetic limb. Robust brain-machine interfaces can help restore mobility for these people.

A variety of conditions such as stroke, spinal cord injury, traumatic limb injury and several neuromuscular and neurological diseases can also cause loss of limb function. In these conditions, the motor cortex is often intact and can potentially benefit from assistive technologies. A durable brain-machine interface could help rehabilitate these people by enabling natural control of limbs: it decodes movement intentions from the brain and translates them into executable actions for robotic actuators attached to the affected limb.

Brain-machine interface

For the first time, we have demonstrated an end-to-end proof of concept for such a brain-machine interface by combining custom-developed AI code with exclusively low-cost, commercially available system components. In two peer-reviewed scientific papers, presented and published in parallel at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) and the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), we describe how novel deep-learning algorithms can decode activity intentions solely from a take-home scalp electroencephalography (EEG) system and execute the intended activities by operating an off-the-shelf robotic arm in a real-world environment.

Brain-machine interfaces built around expensive, medical-grade EEG systems run in carefully controlled laboratory environments are impractical for take-home use. Previous studies employed low-cost systems; however, their reported performance was suboptimal or inconclusive. We evaluated a low-cost EEG system, the OpenBCI headset, in a natural environment to decode the intentions of test subjects purely by analysing their thought patterns. By testing the largest cohort of healthy subjects to date, in combination with neurofeedback training techniques and deep learning, we show that our AI-based method decodes brain states from OpenBCI data more robustly than previously reported approaches.
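
The papers describe the decoding models in detail; as a rough illustration of the kind of decoder involved, the sketch below shows a compact convolutional classifier that maps a short window of multi-channel EEG to one of a handful of intention classes. The channel count, window length, layer sizes and number of classes are illustrative assumptions, not the published architecture.

```python
# Illustrative EEG intention decoder (NOT the published architecture).
# Assumes 8 OpenBCI channels, 250 Hz sampling, 2 s windows and 4 intention classes.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES, N_CLASSES = 8, 500, 4

class IntentionDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters along time.
            nn.Conv2d(1, 16, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(16),
            # Spatial convolution: mixes information across the 8 electrodes.
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1), groups=16, bias=False),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.LazyLinear(N_CLASSES)  # infers the flattened size

    def forward(self, x):                  # x: (batch, 1, channels, samples)
        x = self.features(x)
        return self.classifier(x.flatten(1))

decoder = IntentionDecoder()
window = torch.randn(1, 1, N_CHANNELS, N_SAMPLES)   # one 2 s EEG window
intention = decoder(window).argmax(dim=1)           # predicted intention class
```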

GraspNet

Once intended activities were decoded, we translated them into instructions for a robotic arm capable of grasping and positioning objects in a real-world environment. We linked the robotic arm to a camera and a custom-developed deep-learning framework, which we call GraspNet. GraspNet determines the best positions for the robotic gripper to pick up objects of interest. It is a novel deep-learning architecture that outperforms state-of-the-art models in grasp accuracy while using fewer parameters, occupying a memory footprint of only 7.2 MB and running inference in real time on an Nvidia Jetson TX1. These attributes make GraspNet well suited to embedded systems that require fast over-the-air model updates.
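
The post does not spell out GraspNet's internals, so the sketch below is only a minimal stand-in showing the typical input/output contract of such a model: an RGB view of the workspace goes in, and a grasp rectangle for a parallel-jaw gripper comes out. The layer sizes, image resolution and angle encoding are assumptions for illustration, not the published design.

```python
# Illustrative grasp-rectangle regressor (NOT the published GraspNet design).
# Given an RGB view of the scene, predict a grasp for a parallel-jaw gripper.
import torch
import torch.nn as nn

class TinyGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Five outputs: gripper centre (x, y), orientation encoded as
        # (sin 2θ, cos 2θ) to avoid the 180° ambiguity, and gripper opening width.
        self.head = nn.Linear(64, 5)

    def forward(self, image):              # image: (batch, 3, H, W)
        return self.head(self.backbone(image).flatten(1))

net = TinyGraspNet()
rgb = torch.randn(1, 3, 224, 224)          # camera frame (illustrative size)
x, y, s, c, w = net(rgb)[0]
theta = 0.5 * torch.atan2(s, c)            # recover the grasp angle
```

Encoding the grasp angle as (sin 2θ, cos 2θ) is a common trick for parallel-jaw grippers, whose grasps are symmetric under a 180° rotation; the published model may represent orientation differently.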

In more than 50 experiments, we demonstrated that the entire pipeline, from decoding an intended task by analysing a test subject’s thought patterns to executing that task with a robotic arm, runs in real time in a real-world environment. We plan to extend the GraspNet architecture for simultaneous object recognition and grasp detection, and to further reduce the overall latency of visual recognition systems, while maintaining a compact model design and real-time inference speed.
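
As a rough picture of how such a pipeline fits together, the sketch below shows a simple fixed-rate control loop that reuses the illustrative decoder and grasp network above. Every name in it (the EEG stream, camera, grasp-intention class, arm controller and polling period) is a hypothetical stand-in, not the actual interface used in the experiments.

```python
# Illustrative end-to-end loop; all component interfaces are hypothetical stand-ins.
import time

GRASP_INTENTION = 2          # hypothetical class index meaning "pick up the object"

def run_pipeline(eeg_stream, camera, decoder, grasp_net, arm, period_s=0.5):
    """Poll the EEG stream; when a grasp intention is decoded, plan a grasp
    from the current camera frame and hand it to the robotic arm."""
    while True:
        window = eeg_stream.latest_window()          # most recent EEG window
        intention = decoder(window).argmax(dim=1).item()
        if intention == GRASP_INTENTION:
            frame = camera.capture()                 # RGB frame of the workspace
            grasp = grasp_net(frame)                 # grasp pose for the gripper
            arm.execute(grasp)                       # move, close gripper, lift
        time.sleep(period_s)                         # simple fixed-rate loop
```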

Our demonstration marks an important step toward robust, low-cost, low-power brain-machine interfaces that control artificial limbs through the power of thought. One day, this technology may mature into devices that drive prosthetic limbs in everyday life.
