Human recognition by pattern of running
- Authors: 1
- Affiliations: Samara University
- Issue: Vol 2 (2025)
- Pages: 206-207
- Section: PART II. Foreign Language in the Field of Professional Communication
- Submitted: 20.05.2025
- Accepted: 03.06.2025
- Published: 06.11.2025
- URL: https://consilium.orscience.ru/osnk-sr2025/article/view/679883
- ID: 679883
Abstract
Background. In the modern world, individuals can be identified through various biometric parameters such as facial features, fingerprints, or retinal patterns. While these methods offer high accuracy, they are not always practical due to certain limitations—most notably, the need for the subject’s cooperation during data collection. An alternative method of identification involves analyzing unique physical or behavioral characteristics, particularly gait recognition. Gait analysis presents several advantages: it can be observed from a distance and is difficult to disguise. Most current approaches are statistical in nature and focus solely on walking. However, by examining leg movements, we demonstrate that it is also possible to identify individuals while running [1].
According to biomechanics, walking and running differ in parameters such as step length, step duration, speed, and the amplitude of limb movements. Nevertheless, both are rhythmic forms of human motion, characterized by coordinated oscillatory movements and bilateral symmetry with a phase shift of half a period (fig. 1).
Fig. 1. Walking and running steps
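The half-period phase shift mentioned above can be sketched numerically. This is an illustrative model, not code from the study: each leg's swing is approximated by a sinusoid, and bilateral symmetry means the two legs differ by a phase of pi radians.

```python
import math

# Illustrative sketch (assumption, not from the paper): model one leg's
# swing as a sinusoid; the opposite leg is shifted by half a period.
def leg_angle(t, period=1.0, amplitude=30.0, phase=0.0):
    """Hypothetical swing angle (degrees) of one leg at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * t / period + phase)

# Bilateral symmetry with a half-period phase shift: the right leg now
# matches the left leg half a period later.
t = 0.2
right_now = leg_angle(t, phase=math.pi)
left_half_period_later = leg_angle(t + 0.5)
assert abs(right_now - left_half_period_later) < 1e-9
```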
Aim. To identify features that can be used to recognize individuals based on their running patterns.
Methods. To analyze human pose over time, we used motion capture technologies, which encompass various systems and techniques for digitally recording the trajectories of moving objects or human bodies. With advancements in artificial intelligence and machine learning, human pose estimation from video input is commonly performed using convolutional neural networks (CNNs), which identify individuals and locate their key points in each frame [2, 3].
There are two primary approaches to 2D human pose estimation: top-down and bottom-up. In this study, we employed the bottom-up approach, which begins by detecting key body points and subsequently connecting them to form a skeletal model. This method is implemented in the MediaPipe framework [4–6], which is highly effective in detecting human movement under various conditions and from multiple viewing angles—a significant advantage for our research.
We collected a dataset of video clips, each featuring a single individual taking more than three running steps. These materials were sourced from open-access platforms such as Freepik.com and Vecteezy.com. Each video was then divided into individual frames.
Person detection in each frame was performed using a custom Python program based on the MediaPipe framework. This process yielded a dataset of 3D coordinates for each body key point. The data were normalized, and the center of mass for each subject was calculated. Limb coordinates were then recalculated relative to the center of mass, and the data were smoothed using the Savitzky–Golay filter.
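The per-frame post-processing described above can be sketched as follows. This is a minimal, hedged illustration: the centroid of the key points stands in for the true (mass-weighted) center of mass, the keypoint layout is hypothetical, and the Savitzky–Golay smoothing is shown with hard-coded coefficients for a window of 5 and a quadratic fit rather than the exact parameters used in the study.

```python
def recenter(frame):
    """frame: list of (x, y, z) key points -> coordinates relative to the
    centroid (a simple proxy for the center of mass)."""
    n = len(frame)
    cx = sum(p[0] for p in frame) / n
    cy = sum(p[1] for p in frame) / n
    cz = sum(p[2] for p in frame) / n
    return [(x - cx, y - cy, z - cz) for x, y, z in frame]

# Savitzky-Golay coefficients for window length 5, polynomial order 2.
SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def smooth(track):
    """Smooth a 1D coordinate track over time; the two samples at each
    edge are left unfiltered for simplicity."""
    out = list(track)
    for i in range(2, len(track) - 2):
        out[i] = sum(c * track[i + k - 2] for k, c in enumerate(SG5))
    return out
```

In practice a library routine such as `scipy.signal.savgol_filter` would replace the hand-rolled kernel; the sketch only shows the shape of the computation.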
In our research, a step is defined as the motion of the ankle joint relative to the hip, measured over a specific time interval. To enable meaningful comparisons between steps, we interpolated the data to standardize step duration. We then calculated the average of all steps to define a reference step. Each step was compared to this reference using two metrics: Root Mean Square Error (RMSE) and cosine distance.
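The comparison stage described above can be sketched in a few lines. Function names and the fixed resampling length are illustrative assumptions; the original study's exact interpolation scheme is not specified beyond "interpolation to standardize step duration", so linear interpolation is used here.

```python
import math

def resample(step, n=100):
    """Linearly interpolate a 1D step signal to n samples (n >= 2)."""
    m = len(step)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(step[lo] * (1 - frac) + step[hi] * frac)
    return out

def reference_step(steps, n=100):
    """Average of all steps after resampling to a common length."""
    resampled = [resample(s, n) for s in steps]
    return [sum(col) / len(col) for col in zip(*resampled)]

def rmse(a, b):
    """Root mean square error between two equal-length signals."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length signals."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)
```

Each resampled step would then be scored against `reference_step(...)` with both metrics, yielding the similarity matrices shown in the Results.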
Results. Figures 2 and 3 display the RMSE and cosine distance values obtained during the experiment using the parameter static_image_mode=False, which preserves the continuity of poses across consecutive frames.
Fig. 2. Similarity matrix between steps (metric RMSE)
Fig. 3. Similarity matrix between steps (metric cosine distance)
Conclusions. The results confirm the effectiveness of the proposed method. We observed significant similarities between individual running steps. Low RMSE values indicate that the model accurately captures the characteristics of running movements, while the cosine distance values demonstrate strong consistency between analyzed steps.
About the authors
Samara University
Author for correspondence.
Email: feodorowa.sof@yandex.ru
student, group 6301-010302D, Institute of Informatics and Cybernetics
Russian Federation, Samara

References
- Yam C.Y., Nixon M.S., Carter J.N., et al. Automated person recognition by walking and running via model-based approaches // Pattern Recognition. 2004. Vol. 37, N 5. P. 1057–1072. doi: 10.1016/j.patcog.2003.09.012
- Yang Y., Zeng Y., Yang L., et al. Action recognition and sports evaluation of running pose based on pose estimation // International Journal of Human Movement and Sports Sciences. 2024. Vol. 12, N 1. P. 148–163. doi: 10.13189/saj.2024.120118 EDN: PCDJBN
- MediaPipe Pose [Internet]. Available from: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/pose.md (accessed: 11.02.2025).
- Bao W., Niu T., Wang N., et al. Human pose estimation of ski jumpers based on video. In: 7th International Symposium on Advances in Electrical, Electronics, and Computer Engineering. 2022. Vol. 12294. P. 732–737. doi: 10.1117/12.2639724
- Singh A.K., Kumbhare V.A., Arthi K. Real-time human pose detection and recognition using MediaPipe. In: Advances in Intelligent Systems and Computing. International Conference on Soft Computing and Signal Processing. Singapore: Springer Nature Singapore, 2021. P. 145–154. doi: 10.1007/978-981-16-7088-6_12
- Kim W., Choi J.Y., Ha E.J., et al. Human pose estimation using MediaPipe Pose and optimization method based on a humanoid model // Applied Sciences. 2023. Vol. 13, N 4. P. 2700. doi: 10.3390/app13042700 EDN: PVMJKC