Selected finalist for the Best Student Paper award at ICORR 2019
Updated: Feb 6
The 16th IEEE/RAS-EMBS International Conference on Rehabilitation Robotics (ICORR 2019), held in Toronto, Canada, on June 24–28, is a biennial meeting on theoretical and experimental issues in the fields of Rehabilitation Robotics and Neuroscience applied to healthcare. Prosthetic, assistive, diagnostic, and therapeutic robots continue to face challenges in user adoption.
Karoline Heiwolt, a CNCR student, has developed under my supervision a novel approach to making myocontrol more reliable and therefore more appealing for patients with upper-limb amputations. Our paper 'Automatic Detection of Myocontrol Failures Based upon Situational Context Information', written in collaboration with ERL and the German Aerospace Center (DLR), has been selected as one of the six finalists for the Best Student Paper award at ICORR 2019. Karoline will present the paper orally at the conference, and it will be published in the conference proceedings, available from June 24th, 2019.
The problem with myocontrol
When designing a prosthesis, we aim to restore the full functionality of the missing body part. In the case of a prosthetic hand, that includes a multitude of complex grasps and simultaneous, proportional control of single fingers (S/P control). Mechanically, this is quite tractable: some commercially available prostheses, such as the i-limb ultra by Touch Bionics, already come with independent motors for each digit and provide a high degree of flexibility. But as functionality increases, it becomes more challenging to control the device reliably. While manual interfaces work well for simple manipulators, they can be very restricting, and they often become increasingly complicated and unintuitive as more features are added.
Furthermore, the user’s input signals can become unstable over time due to, e.g., fatigue, electrode displacement, or sweat. If an electrode moves, or the signal is corrupted in any other way, the commands interpreted by the controller can differ greatly from the intention of the user. Advancements have been made in this field recently: Gijsberts et al. (2014), for instance, proposed a supervised incremental learning method, called iRR-RFF, that enables continuous adjustment of the control model with minimal training effort. Although this mechanism stabilises reliability over time, it still relies heavily on human-driven feedback: the user must continuously monitor performance and intervene to trigger updates. To facilitate day-to-day use of an electric prosthesis, it would be desirable to develop a complete system that assesses its own state automatically and triggers updates when needed.
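To give a feel for how such incremental learning can work, here is a minimal sketch of ridge regression on random Fourier features with per-sample (Sherman-Morrison) updates, in the spirit of iRR-RFF. The feature dimension, kernel bandwidth, and regulariser below are illustrative choices of ours, not the parameters used in the paper.

```python
import numpy as np

class IncrementalRFFRidge:
    """Ridge regression on random Fourier features, updated one sample at a
    time via the Sherman-Morrison identity (a sketch in the spirit of the
    iRR-RFF method of Gijsberts et al. 2014; hyperparameters illustrative)."""

    def __init__(self, n_inputs, n_features=200, gamma=1.0, lam=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection approximating a Gaussian kernel of bandwidth gamma.
        self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(n_features, n_inputs))
        self.b = rng.uniform(0, 2 * np.pi, size=n_features)
        self.scale = np.sqrt(2.0 / n_features)
        # P is the running inverse of (Phi^T Phi + lam * I).
        self.P = np.eye(n_features) / lam
        self.w = np.zeros(n_features)

    def _phi(self, x):
        return self.scale * np.cos(self.W @ x + self.b)

    def update(self, x, y):
        """Incorporate one (sEMG feature vector, target activation) pair."""
        phi = self._phi(x)
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)      # Sherman-Morrison gain
        self.w += k * (y - phi @ self.w)   # correct the prediction error
        self.P -= np.outer(k, Pphi)        # rank-1 update of the inverse

    def predict(self, x):
        return self._phi(x) @ self.w
```

Because each update is a rank-1 correction, retraining after a signal drift costs only a handful of new samples rather than a full recalibration session, which is precisely what makes the incremental scheme attractive for daily prosthesis use.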
In a previous attempt to automate the incremental model updates, Nowak et al. (2017) trained a standard linear classifier to detect failures in the sEMG input signal. The model classified examples of myocontrol use in different tasks as good versus poor control performance. This classification relies mostly on detecting features that are generally undesirable, such as oscillatory behaviour or high accelerations. It remains unclear whether such a classifier could detect a shift in sEMG patterns that results in plausible predictions but produces the wrong hand configuration.
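To make this concrete, here is a hypothetical sketch of such a feature-based detector: two illustrative motion features (mean absolute acceleration and the rate of velocity-sign oscillations) feed a plain logistic regression. Both the feature set and the training details are our assumptions for illustration, not the actual pipeline of Nowak et al. (2017).

```python
import numpy as np

def motion_features(cmd):
    """Two illustrative features over a window of 1-D myocontrol commands:
    mean absolute acceleration and oscillation rate (velocity sign changes).
    Stand-ins for the kind of 'undesirable behaviour' features described."""
    vel = np.diff(cmd)
    acc = np.diff(vel)
    osc = np.mean(np.sign(vel[:-1]) * np.sign(vel[1:]) < 0)
    return np.array([np.mean(np.abs(acc)), osc])

def train_logistic(X, y, lr=0.5, epochs=500):
    """Plain batch-gradient logistic regression; returns (weights, bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted failure probability
        g = p - y                                # gradient of the logistic loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict_failure(w, b, cmd):
    """True if the window of commands looks like poor control performance."""
    f = motion_features(cmd)
    return 1.0 / (1.0 + np.exp(-(f @ w + b))) > 0.5
```

A detector of this shape flags jitter and overshoot well, but a stable, smooth command trajectory that simply drives the wrong grasp would score as "good" on both features, which is exactly the limitation noted above.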
The underpinning idea of this work is that automatic failure detection should instead be able to spot every instance where the myocontrol output does not match the user’s intention. Our hypothesis is that we can detect more subtle shifts in the control mapping by incorporating situational context information; in other words, we provide the controller with information about the task/context in which it is used. This can be done, for example, by capturing the scene with an RGB-D camera.
From human demonstrations we learn the relation between context and the user's movements. We then use this information as a prior estimate of the user's intention in similar local contexts. By comparing new motions to our model's predictions, we can assess which movement commands do not reflect the intention and are therefore more likely caused by a myocontrol failure. We developed and demonstrated our approach in a virtual-reality (VR) simulation in which both patients and able-bodied users can control a model of a prosthetic hand.
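A minimal sketch of the context-prior idea, under the assumption that the movement commands observed in a given context (say, an object class recognised by the camera) can be modelled as a Gaussian fitted from demonstrations; the model class and the failure threshold are illustrative choices of ours, not necessarily those of the paper.

```python
import numpy as np

class ContextPrior:
    """For each local context, fit a Gaussian over demonstrated movement
    commands; flag new commands whose Mahalanobis distance from that prior
    is unusually large as likely myocontrol failures (illustrative sketch)."""

    def __init__(self):
        self.priors = {}

    def fit(self, context, commands):
        cmds = np.asarray(commands)
        mu = cmds.mean(axis=0)
        # Small ridge on the covariance keeps the inverse well-conditioned.
        cov = np.cov(cmds, rowvar=False) + 1e-6 * np.eye(cmds.shape[1])
        self.priors[context] = (mu, np.linalg.inv(cov))

    def mismatch(self, context, command):
        """Squared Mahalanobis distance of a new command from the prior."""
        mu, cov_inv = self.priors[context]
        d = np.asarray(command) - mu
        return d @ cov_inv @ d

    def is_failure(self, context, command, threshold=9.0):
        """Roughly a 3-sigma criterion; the threshold is an assumption."""
        return self.mismatch(context, command) > threshold
```

The key property is that the detector does not judge the command's smoothness at all: a perfectly stable command that is simply implausible for the current context still produces a large mismatch and triggers an update.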
Our results show that our approach detects highly significant differences (p < 0.001) between the error distributions of successful and failed grasp attempts in both cases.
(Gijsberts et al. 2014) A. Gijsberts, R. Bohra, D. Sierra Gonzalez, A. Werner, M. Nowak, B. Caputo, M. A. Roa, and C. Castellini, “Stable myoelectric control of a hand prosthesis using non-linear incremental learning,” Frontiers in Neurorobotics, vol. 8, 2014.
(Nowak et al. 2017) M. Nowak, S. Engel, and C. Castellini, “A preliminary study towards automatic detection of failures in myocontrol,” in MEC17 - A Sense of What’s to Come, 2017.