Human motion estimation has received considerable attention over the last decades. A vast range of applications employ human motion tracking, and the industry continuously offers novel motion tracking systems that open new paths compared to traditionally used passive cameras. Motion tracking algorithms, in their general form, estimate the skeletal structure of the human body, modeling it as a set of joints and limbs. However, human motion tracking systems usually operate on a single-sensor basis, hypothesizing about occluded parts. We present a methodology for fusing information from multiple sensors (Microsoft Kinect sensors were used in this work) based on a series of factors that alleviate the problems of occlusion and noisy estimates of 3D joint positions.
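The abstract does not specify the fusion factors used; as a rough illustration of the general idea, the sketch below performs a confidence-weighted average of per-joint 3D position estimates from several sensors, so that a sensor with an occluded or noisy view of a joint contributes less to the fused position. The function name, the array layout, and the use of per-joint confidence weights are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_joints(estimates, confidences):
    """Fuse per-joint 3D positions from multiple sensors.

    estimates:   array of shape (n_sensors, n_joints, 3), one 3D
                 position per joint per sensor.
    confidences: array of shape (n_sensors, n_joints), non-negative
                 weights (e.g. lower for occluded or noisy joints).

    Returns an (n_joints, 3) array of confidence-weighted averages.
    This is a hypothetical scheme; the paper's actual fusion factors
    are not given in the abstract.
    """
    estimates = np.asarray(estimates, dtype=float)
    confidences = np.asarray(confidences, dtype=float)
    # Normalize weights across sensors for each joint.
    weights = confidences / confidences.sum(axis=0, keepdims=True)
    # Broadcast weights over the xyz axis and sum over sensors.
    return (weights[..., None] * estimates).sum(axis=0)
```

With this weighting, a joint that one Kinect reports with zero confidence (fully occluded) is recovered entirely from the other sensors, while equally confident sensors contribute equally.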
Title of host publication: Proceedings of the 6th International Conference on Computer Vision / Computer Graphics Collaboration Techniques and Applications - MIRAGE '13
Number of pages: 1
Publication status: Published - 2013
Series: Proceedings of the 6th International Conference on Computer Vision / Computer Graphics Collaboration Techniques and Applications - MIRAGE '13
- Kinect-based motion detection
- Multiple Kinects