Linglong Mao
Master, School of Computer and Cybersecurity, Chengdu University of Technology, Chengdu, China, 610059
Chao Lin
Master, School of Computer and Cybersecurity, Chengdu University of Technology, Chengdu, China, 610059
Wei Dong
Bachelor, School of Computer and Cybersecurity, Chengdu University of Technology, Chengdu, China, 610059
Yangchen Zhang
Bachelor, School of Computer and Cybersecurity, Chengdu University of Technology, Chengdu, China, 610059
Zhi Li
Master, Bachelor, Department of Rehabilitation Medicine, 416 Hospital of Nuclear Industry, Chengdu, China, 610050
Nan Jiang
Master, Bachelor, Department of Rehabilitation Medicine, 416 Hospital of Nuclear Industry, Chengdu, China, 610050
Zhanyong Mei
Associate Professor, School of Computer and Cybersecurity, Chengdu University of Technology, Chengdu, China, 610059

Abstract:

Leveraging pressure sensor arrays enables the quantification of spatiotemporal gait parameters and the assessment of symmetry between the affected and unaffected limbs. However, existing methods still face critical challenges in precise footprint segmentation and automated foot-side classification. To address these issues, the following contributions are made: 1) A robust plantar pressure extraction algorithm, termed Spatiotemporal Footprint Segmentation (STF-Seg), is proposed; it integrates spatial and temporal cues into an optimized DBSCAN clustering framework to enhance extraction robustness. 2) A multi-task learning model, the Cumulative Foot Pressure Image Network (CFPINet), is developed to simultaneously perform foot-side identification and footprint completeness assessment, incorporating a dynamic task-weighting mechanism that balances task importance and mitigates the influence of hard samples. 3) Extensive evaluations are conducted on a shod plantar pressure dataset collected from 60 young participants, along with cross-dataset validation on elderly and hemiplegic cohorts, demonstrating the effectiveness and generalizability of the proposed methods. Experimental results show that CFPINet improves foot-side and footprint completeness classification accuracy by 7.14% and 8.82%, respectively, compared with the Center of Pressure Temporal Convolutional Network (COP-TCN). Additionally, STF-Seg achieves 100% and 97% extraction accuracy on the elderly and hemiplegic datasets, respectively, confirming its strong generalizability.
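The core idea behind STF-Seg, clustering active pressure cells jointly in space and time so that successive footprints on the same walkway region are separated, can be illustrated with a minimal DBSCAN-style sketch. This is not the paper's implementation; the function name, parameters (`eps_xy`, `eps_t`, `min_pts`), and neighbourhood rule are illustrative assumptions.

```python
def st_dbscan(points, eps_xy, eps_t, min_pts):
    """Minimal spatiotemporal DBSCAN over (x, y, t) tuples.

    Two cells are neighbours only if they are close both spatially
    (within eps_xy) and temporally (within eps_t), so two footprints
    occupying the same floor area at different times split apart.
    Hypothetical sketch, not the STF-Seg algorithm itself.
    """
    n = len(points)
    labels = [-1] * n  # -1 marks noise / unassigned cells
    visited = [False] * n
    cluster = 0

    def neighbours(i):
        xi, yi, ti = points[i]
        return [j for j, (xj, yj, tj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps_xy ** 2
                and abs(ti - tj) <= eps_t]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        nb = neighbours(i)
        if len(nb) < min_pts:
            continue  # not a core point; stays noise unless absorbed later
        labels[i] = cluster
        seeds = list(nb)
        k = 0
        while k < len(seeds):  # expand the cluster from each core point
            j = seeds[k]
            if not visited[j]:
                visited[j] = True
                nb_j = neighbours(j)
                if len(nb_j) >= min_pts:
                    seeds.extend(nb_j)
            if labels[j] == -1:
                labels[j] = cluster
            k += 1
        cluster += 1
    return labels


# Two footprints at the same spatial location, 5 s apart: the temporal
# threshold keeps them in separate clusters.
points = [(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0), (1, 1, 0.0),
          (0, 0, 5.0), (1, 0, 5.0), (0, 1, 5.0), (1, 1, 5.0)]
labels = st_dbscan(points, eps_xy=1.5, eps_t=1.0, min_pts=3)
# → [0, 0, 0, 0, 1, 1, 1, 1]
```

With a purely spatial DBSCAN (large `eps_t`) the two footprints above would merge into one cluster, which is exactly the failure mode a spatiotemporal neighbourhood avoids.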