Online Machine Learning-Based Control of Lower Limb Exoskeletons
ICRA 2022 Workshop
Our workshop is planned for an in-person format (all sessions will be live-streamed) while following the ICRA 2022 guidelines.
Robotic lower-limb exoskeletons are capable of augmenting human mobility and assisting individuals with mobility impairments. Conventionally, these systems generate parallel joint torques that mimic the user’s underlying biological joint demand during ambulation. Unfortunately, due to the dynamic nature of human movement during daily locomotor activities, it is challenging to develop a control framework that captures the full range of intended movements. However, recent breakthroughs in machine learning (ML) have enabled real-time estimation of the user’s state, which in turn enables robust control of these wearable systems during dynamic locomotion. While these ML-based strategies show exciting promise, critical hurdles remain before such interventions can be deployed in the real world. Challenges include positive feedback loops between actuation and sensing, data size requirements for user-independent models, model robustness to unseen mobility contexts, transitions between locomotion modes, and sensor shifting. In general, there have been few attempts to tackle the critical problem of translating and generalizing laboratory-based ML approaches to real-world, large-scale applications. In this workshop, we will tackle these challenges from multiple perspectives (both high-level and practical, academic and industrial) and provide roadmaps for future exoskeleton developers who wish to incorporate ML-based controllers in their applications.
Our full-day, in-person workshop will involve seminar talks, panel discussions, and a poster session, with ample coffee and networking breaks. All sessions except the poster session will be live-streamed via video call to enable access for participants joining the conference remotely. Detailed information about each session is given below. We have structured the sessions to encourage junior and senior researchers and engineers to engage and network organically with one another.
Call For Abstracts (CLOSED)
We are pleased to invite 1-page extended abstract submissions for the Online Machine Learning-Based Control of Lower-Limb Exoskeletons Workshop at ICRA 2022, which will be reviewed and selected for short talks and/or a poster session.
Abstract topics of interest include all aspects of ML-based exoskeleton control, including (but not limited to): estimation of environmental state; intent recognition; simulation and data augmentation for informing controller design; adaptive exoskeleton control; novel sensing methods for exoskeleton control.
- A series of short talks given by junior researchers in the field (students or postdoctoral researchers)
- 8 minutes each, followed by a 2-minute Q&A
- To extend active discussion of relevant topics, a poster session and a short networking session will follow this event
- A small symposium where both junior and senior researchers interact by presenting work via a poster session
- The entire poster session will be held for 2 hours during lunchtime
- Potential short talk presenters will be solicited through the same abstract submission
Invited Seminar Speakers
User and Task Invariant Control Strategies using AI & Deep Learning for Wearable Robotics
New advanced robotic prostheses and orthoses are helping to restore function to individuals with lower-limb disability by reducing the metabolic cost of walking and restoring normal biomechanics. An important function of these devices is to recognize user intent promptly and accurately and to optimize control so as to provide biomechanically appropriate assistance across multimodal task paradigms. Key challenges in the wearable robotics control community include generalizing control systems across a rich variety of real-world tasks and diverse individuals while simultaneously personalizing control systems to each individual’s specific biomechanical needs. Our research has focused on data-driven approaches using deep learning to tackle these challenges, with applications in a variety of lower-limb exoskeletons and prostheses. This talk will examine approaches and evaluation metrics for AI-driven personalization of controllers to individual subjects and for generalizing controllers across a rich variety of real-world tasks. New open-source datasets that facilitate research in this field, including some of our own, will also be discussed.
Optimal Adaptive Control of Wearable Robots for Personalized Walking Assistance
Wearable assistive robots must be personalized to each individual wearer to achieve the desired motor performance, because wearers with neuromotor deficits often present large inter- and intra-individual variability. Control personalization of wearable robots can be formulated as optimal adaptive control. However, challenges such as the lack of a closed-form model of the wearer-robot system and the need to tune high-dimensional control parameters make the design of optimal adaptive controllers difficult. In this talk, I will focus on our solution based on reinforcement learning, which can heuristically tune wearable robot control to achieve personalized gait assistance effectively and efficiently. Our current effort is directed toward translating our innovation to rehabilitation clinics to improve quality of life for people with lower-limb movement disabilities.
Application of LSTM-Based Gait Phase Estimator to Control a Powered Ankle Exoskeleton
We developed an ankle exoskeleton actuated in both directions (dorsiflexion and plantarflexion) to compensate for impaired gait function in stroke patients. To control the device as intended, it was important to estimate the gait phase, either as discrete states or as a continuous value. Our first approach, placing a pressure sensor in the insole, suffered from calibration issues caused by the pressure inside the shoe, especially when the device was actuated. We also used a shank-mounted IMU, but the complexity of the signal made it difficult to determine the gait phase. To overcome these issues, we applied a machine learning model to the IMU signals. An LSTM model was trained on data collected from 8 subjects to predict the gait phase as a continuous value from 0 to 1. The trained model was deployed on the ankle exoskeleton as part of the controller. Our results suggest that the proposed approach is feasible even when walking speed changes. Drawing on our experience in industry, we will share practical issues and requirements in applying machine learning to a product.
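As a minimal sketch of the kind of model described in this talk (not the actual product code; the network sizes and the sin/cos phase parameterization are assumptions for illustration), an LSTM that maps a window of IMU samples to a continuous gait phase in [0, 1) could be written as:

```python
import torch
import torch.nn as nn

class GaitPhaseLSTM(nn.Module):
    """Maps a window of IMU samples to a continuous gait phase in [0, 1)."""

    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Predict sin/cos of the phase angle so the regression target
        # stays continuous across the 1 -> 0 wrap at heel strike.
        self.head = nn.Linear(hidden, 2)

    def forward(self, imu):
        # imu: (batch, time, channels), e.g. shank gyro + accel signals
        out, _ = self.lstm(imu)
        sincos = self.head(out[:, -1])                   # last time step
        phase = torch.atan2(sincos[:, 0], sincos[:, 1])  # angle in (-pi, pi]
        return torch.remainder(phase / (2 * torch.pi), 1.0)  # wrap to [0, 1)

model = GaitPhaseLSTM()
phase = model(torch.randn(4, 100, 6))  # 4 windows of 100 IMU samples
```

Predicting the sine and cosine of the phase angle, rather than the phase directly, is one common way to avoid a discontinuity in the training target at the stride boundary.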
Computer Vision and Deep Learning for Robotic Exoskeleton Control
Robotic exoskeletons can replace the propulsive function of impaired biological muscles and allow users with mobility impairments to perform daily locomotor activities. However, the current locomotion mode recognition systems being developed for automated high-level control and decision-making rely on mechanical, inertial, and/or neuromuscular sensors, which inherently have limited prediction horizons (i.e., analogous to walking blindfolded). In this talk, I will present our research on the development of bioinspired environment recognition systems powered by computer vision and deep learning to predict real-world walking environments prior to physical interaction, therein allowing for more accurate and robust locomotion mode transitions. These environment recognition systems based on state-of-the-art deep convolutional neural networks serve to improve the automated control and decision-making of next-generation robotic exoskeletons for daily locomotor assistance and rehabilitation.
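The speaker’s environment recognition systems use state-of-the-art deep convolutional networks; as a minimal sketch of the idea only (the architecture and terrain class names below are assumptions, not the actual system), a terrain classifier feeding a high-level mode-selection controller might look like:

```python
import torch
import torch.nn as nn

# Hypothetical terrain classes preceding a locomotion-mode decision.
TERRAINS = ["level-ground", "incline", "decline", "stairs-up", "stairs-down"]

class TerrainCNN(nn.Module):
    """Small convolutional classifier over egocentric camera frames."""

    def __init__(self, n_classes=len(TERRAINS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> size-agnostic
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, frames):
        # frames: (batch, 3, H, W) RGB images from a wearable camera
        z = self.features(frames).flatten(1)
        return self.classifier(z)      # logits over terrain classes

logits = TerrainCNN()(torch.randn(2, 3, 64, 64))
```

In practice, predictions from such a network would be fused with mechanical and inertial sensing before committing to a locomotion-mode transition, since a misclassified terrain can be more costly than a delayed one.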
Data-Driven Approach to Personalize Wearable Robots Considering Unknown Human-Robot Interaction
The drive to discover effective strategies for human gait assistance has led to investigations into human-wearable robots. In response, my research strives to advance the field by focusing on wearable robots that respond to individual users, resulting in a smart assistance strategy in which the robot adapts to the human wearer. In this talk, I will introduce a method for adapting the robot to a user, human-in-the-loop (HIL) optimization, and a user guidance method that facilitates robot use. HIL optimization is a machine learning approach using biofeedback that significantly reduced walking and squatting effort when users wore various wearable robots. In addition, my research considers how users adjust their movement patterns to maximize their benefit from the personalized robot when user guidance is given. The talk will conclude with a discussion of the challenges and opportunities offered by the human-in-the-loop assistance controller and the user guidance method.
Innovations in Peripheral Sensing and Predictive Modeling to Account for the Intention, Form and Objective of Wearers of Assistive Devices during Ambulation
Wearable robots are commonly used to provide basic functionality such as mediating standing and level-ground walking. To achieve robust control of these devices during widely-varying ambulation tasks, hierarchical control systems have been implemented that classify an individual’s high-level intention using discrete labels and delegate the phase-varying joint-level response of the device within a predicted mode to mid-level controllers that render motion, torque or impedance reference trajectories. This presentation will first highlight ongoing efforts to eliminate this hierarchy by developing volitional and semi-volitional control systems of robotic knee-ankle prostheses using new sensing modalities of peripheral muscles and shared robot control paradigms. These systems consider the continuous nature of human ambulation and acknowledge that some forms of ambulation are difficult to characterize using discrete labels. Secondly, this presentation will emphasize that each device wearer is unique in their physical form as well as the priority they place on specific neuromotor objectives during movement. Recently-developed predictive neuromusculoskeletal modeling systems that incorporate an individual’s anthropometric shape and underlying task objectives will be discussed within the context of soft-hip flexion exosuits. These systems can account for such human factors in order to optimize a device’s design and control during specific ambulation tasks, with the overarching goal of reducing the time needed (and thus the number of strides taken) to configure devices to individuals with lower-limb impairment.
Online Optimization in Preference-Based Exoskeleton Control
A key challenge to the widespread success of augmentative exoskeletons is correctly “tuning” the controller to provide helpful and cooperative assistance. Often, the controller parameters are tuned to optimize a physiological or biomechanical objective (e.g., metabolic rate or kinematics). However, these approaches are time-consuming and resource-intensive, and they optimize only a single objective. In reality, exoskeleton user experience derives from a broad array of factors, including comfort, exertion, and stability, among others. The goal of this project is to conveniently and automatically tune the exoskeleton controller settings to maximize user preference in real time (i.e., preference-in-the-loop control). We propose a machine learning-based optimization framework to personalize controller settings across four controller dimensions. We use a previously collected dataset to learn a generic preference landscape with a neural network; the learned landscape then informs suggestions for an evolutionary strategy that optimizes the controller settings of a novel user. The user provides feedback through a ‘like’/‘dislike’ touchscreen interface. Our innovations are threefold: (1) optimizing user preference for a lower-limb exoskeleton in a multi-dimensional controller space; (2) using a neural network to learn the preference landscape across multiple users; and (3) employing black-box optimization together with a neural network to efficiently identify a user’s preferred settings. Our preliminary results indicate that a user was able to identify optimized controller settings over randomly generated settings with greater than 90% accuracy.
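To illustrate the black-box optimization half of this framework (hypothetical sketch: the preference landscape below is a hand-made stand-in for the learned neural-network model, and its optimum is invented), a simple evolutionary strategy over a four-dimensional controller space might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def preference_landscape(x):
    """Stand-in for the learned neural-network preference model.

    Hypothetical: a smooth bump with a known optimum, used only so the
    sketch is self-contained. The real landscape is learned from data.
    """
    optimum = np.array([0.3, -0.1, 0.5, 0.0])  # invented preferred settings
    return np.exp(-np.sum((x - optimum) ** 2))

def evolve(landscape, dim=4, pop=20, iters=60, sigma=0.3):
    """(1+lambda) evolutionary strategy over the controller parameters."""
    best = rng.uniform(-1, 1, dim)          # random initial settings
    best_score = landscape(best)
    for _ in range(iters):
        # Propose a population of perturbed candidate settings.
        candidates = best + sigma * rng.standard_normal((pop, dim))
        scores = np.array([landscape(c) for c in candidates])
        i = scores.argmax()
        if scores[i] > best_score:          # keep only improvements
            best, best_score = candidates[i], scores[i]
        sigma *= 0.97                       # slowly narrow the search
    return best, best_score

best, score = evolve(preference_landscape)
```

In the actual framework, each candidate would be rendered on the exoskeleton and scored from the user’s like/dislike feedback via the learned landscape, rather than evaluated on a closed-form function as here.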
Panel 1: Practical considerations for model training and deployment in wearable robotics
Postdoctoral Fellow University of Toronto
Univ. of Illinois Chicago
Director & Scientific Chair Shirley Ryan Ability Lab
Panel 2: Roadblocks and Solutions in Machine Learning Model Deployment
X, the moonshot factory
Univ. of Texas at Austin
08:45 Welcome and Workshop Overview
09:00 Seminar Talk – Aaron Young, Georgia Tech
09:50 Seminar Talk – Keehong Seo, Samsung Electronics
10:20 Short Break
10:30 Seminar Talk – Helen Huang, NCSU/UNC
11:00 Short Talks (from abstract submission)
12:00 Lunch and Poster Session
13:30 Seminar Talk – Nick Fey, University of Texas at Austin
13:45 Seminar Talk – Myunghee Kim, University of Illinois Chicago
14:00 Seminar Talk – Brokoslaw Laschowski, University of Toronto
14:15 Panel Discussion 1
15:00 Coffee Break and Networking Session
15:30 Seminar Talk – Elliott Rouse, University of Michigan
15:45 Panel Discussion 2
16:30 Wrap-Up