TY - JOUR
T1 - Cross-LSTM
T2 - integrating cross attention with long short-term memory neural networks in estimating joint moments from wearable sensors
AU - Niu, Wenlong
AU - Bian, Qingyao
AU - Yang, Li
AU - Zhou, Hui
AU - Ding, Ziyun
PY - 2025/9/29
N2 - Precise estimation of joint moments is essential for designing rehabilitation interventions and optimising the control of assistive devices. Current methods, including physics-based modelling and data-driven approaches, heavily depend on motion capture equipment such as bulky cameras and force plates, which limits their real-time use in real-world environments. To address these limitations, we propose the Cross-LSTM, a dual-stream neural network architecture integrating Long Short-Term Memory (LSTM) networks with Residual Connected Cross-Attention (RCCA) mechanisms to estimate joint moments using signals from wearable sensors. Cross-LSTM fuses IMU and EMG signals dynamically using a cross-attention mechanism, significantly enhancing the estimation accuracy of lower limb joint moments. Cross-LSTM achieved superior predictive performance, demonstrating significantly lower root mean squared errors (RMSE) compared to existing benchmarks. Incorporating transfer learning further improved model robustness and accuracy with limited training data, showcasing its adaptability for deployment in more challenging functional tasks such as incline walking and stair navigation. The interpretability analysis identified IMU data as the primary predictive contributor, suggesting the possibility of reducing sensor complexity and enhancing clinical usability. Comprehensive evaluations highlight Cross-LSTM’s potential as an accurate, robust, and cost-effective solution for lower limb joint moment estimation, aimed at personalised rehabilitation and low-cost assistive device development.
UR - https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7333
DO - 10.1109/TNSRE.2025.3610281
M3 - Article
SN - 1534-4320
VL - 33
SP - 3793
EP - 3804
JO - IEEE Transactions on Neural Systems and Rehabilitation Engineering
JF - IEEE Transactions on Neural Systems and Rehabilitation Engineering
ER -