CN111950392A - Human body sitting posture identification method based on depth camera Kinect - Google Patents

Human body sitting posture identification method based on depth camera Kinect

Info

Publication number
CN111950392A
Authority
CN
China
Prior art keywords
sitting posture
angle
joint
human body
depth camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010719422.XA
Other languages
Chinese (zh)
Other versions
CN111950392B (en)
Inventor
冀晶晶
姚子恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202010719422.XA
Publication of CN111950392A
Application granted
Publication of CN111950392B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of depth camera recognition, and particularly relates to a human body sitting posture identification method based on the depth camera Kinect. The method comprises the following steps: S1, capturing an image of the object to be recognized with a Kinect camera, and recognizing the three-dimensional position coordinates and quaternion position coordinates of a plurality of bone joint feature points of the object; S2, calculating the distance from the two eyes of the object to the reading object, the height difference between the two shoulders and the spine angle, and then calculating the three-way rotation angle of each segment of bone; S3, comparing the calculation results with their corresponding preset acceptable ranges, wherein if all results lie within the acceptable ranges, the sitting posture of the object is qualified, and otherwise it is unqualified. The invention solves the problem of human sitting posture recognition effectively, with high accuracy, high comfort and low cost.

Description

Human body sitting posture identification method based on depth camera Kinect
Technical Field
The invention belongs to the technical field of depth camera identification, and particularly relates to a human body sitting posture identification method based on a depth camera Kinect.
Background
With the rising competitive pressure of modern society and academic life, hidden health risks in the growth and development of teenagers and children have become increasingly evident. On entering a working or learning state, every part of the spine simultaneously enters a tense, loaded posture. To observe books and screens closely, the cervical vertebrae must be held stiff for long periods, and during reading and writing the user furthermore stays in a rigid head-lowered posture, one of the worst postures for the cervical spine. Prolonged desk work and irregular sitting postures are direct factors causing myopia, hunchback, lateral spine inclination and similar symptoms in teenagers and children. The hunchback rate among Chinese teenagers is 47 percent, the myopia rate among college students is 77 percent, and more than 1.4 billion people worldwide are myopic. The method of the invention uses the depth camera Kinect and human posture recognition technology to remind users to correct their sitting posture while studying, providing help for the healthy growth of teenagers and children; it therefore has considerable practical significance.
For human sitting posture recognition, position information covering 32 bone joint feature points of the human body is acquired by human skeleton tracking based on the depth camera Kinect. The captured bone joint feature points are linked to the motion of the real bones, and sitting posture judgment indices such as the distance from the two eyes to the reading object, the height difference between the two shoulders and the lumbar bending angle are obtained by geometry. The three-way rotation angle of each spine segment is obtained using 3D graphics, converting the quaternion position information into Euler angle position information. Comprehensive human sitting posture identification and evaluation is then carried out according to these judgment indices.
At present, human sitting posture recognition is still at an early exploratory stage. Existing mechanical sitting posture detection systems offer high accuracy and real-time performance but suffer from low comfort and high cost; existing optical systems offer high comfort and low cost but have high hardware requirements and many usage restrictions; existing RGB-image systems are efficient but lack depth information. A human sitting posture identification method with high accuracy, high comfort and low cost is therefore urgently needed.
Disclosure of Invention
Aiming at the above defects or improvement needs of the prior art, the invention provides a human body sitting posture identification method based on the depth camera Kinect, in which the depth camera Kinect captures the human joint feature points together with their three-dimensional position coordinates and quaternion position coordinates, and the captured results are used to identify and judge the human sitting posture. The method offers high accuracy and comfort at low cost and solves the problem of human sitting posture recognition efficiently.
In order to achieve the above object, according to the present invention, there is provided a human body sitting posture identifying method based on a depth camera Kinect, the method comprising the steps of:
S1, for an object to be recognized, capturing an image of the object with a Kinect camera, and recognizing in the image the three-dimensional position coordinates and quaternion position coordinates of a plurality of bone joint feature points of the object;
S2, calculating, from the obtained three-dimensional position coordinates of the joint feature points, the distance from the two eyes of the object to be recognized to the reading object, the height difference between the two shoulders and the spine angle; then calculating, from the obtained quaternion position coordinates of the joint feature points, the three-way rotation angle of each segment of bone;
S3, comparing each calculation result obtained in step S2 with its corresponding preset acceptable range, wherein if every result lies within its acceptable range, the sitting posture of the object to be recognized is qualified, and otherwise it is unqualified.
Further preferably, in step S2, the spine angle is calculated according to the following steps: three joint feature points on the spine are acquired, namely the cervical, thoracic and lumbar joint feature points; the three points are connected, and the included angle between the cervical-thoracic connecting line and the thoracic-lumbar connecting line is calculated, this included angle being the spine angle.
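As an illustration of this calculation, the following minimal C sketch (C being the implementation language named in the embodiment) computes the included angle from three captured 3D joint coordinates with the standard dot-product formula; the type and function names are illustrative and not part of the patent:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double x, y, z; } Vec3;

    /* Included angle (in degrees) at the thoracic joint between the line to the
       cervical joint and the line to the lumbar joint; a perfectly straight
       spine gives an angle near 180 degrees, a hunched back a smaller one. */
    double spine_angle_deg(Vec3 cervical, Vec3 thoracic, Vec3 lumbar)
    {
        Vec3 u = { cervical.x - thoracic.x, cervical.y - thoracic.y,
                   cervical.z - thoracic.z };
        Vec3 v = { lumbar.x - thoracic.x, lumbar.y - thoracic.y,
                   lumbar.z - thoracic.z };
        double dot = u.x * v.x + u.y * v.y + u.z * v.z;
        double len = sqrt(u.x * u.x + u.y * u.y + u.z * u.z)
                   * sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return acos(dot / len) * 180.0 / M_PI;
    }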
Further preferably, in step S2, calculating the three-way rotation angle of each segment of bone from the obtained quaternion position coordinates of the joint feature points is performed according to the following steps:
S21, for each joint feature point, selecting the adjacent joint feature point as its parent joint point;
S22, constructing a ground coordinate system O-XgYgZg, wherein the direction pointing horizontally behind the object to be recognized is the positive Zg-axis direction, the vertically downward direction is the positive Yg-axis direction, and the origin is the child joint feature point;
S23, constructing a rotating coordinate system O-XbYbZb, wherein the direction from the child joint feature point to its corresponding parent joint feature point is the positive Xb-axis direction, the positive directions of Yb and Zb are the same as those of Yg and Zg, and the origin is the child joint feature point;
S24, determining the positive Xb-axis direction from the quaternion position coordinates of the child joint feature point and the parent joint feature point;
S25, calculating, respectively: the angle between Xb and the XgOYg plane, namely the pitch angle θ; the angle between the projection of the Xb axis on the XgOYg plane and the Xg axis, namely the yaw angle ψ; and the angle between the Zb axis and the vertical plane through the Xb axis, namely the roll angle φ. The pitch angle θ, the yaw angle ψ and the roll angle φ are the required three-way rotation angles (a numerical sketch of steps S24-S25 follows).
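The following C fragment is a minimal numerical sketch of steps S24-S25: rotating the unit X vector by the bone quaternion yields the Xb axis (the first column of the corresponding rotation matrix), from which the pitch and yaw angles follow directly; the roll angle additionally requires the rotated Zb axis and is omitted here. The sketch assumes the quaternion is already expressed in the ground frame O-XgYgZg of step S22 (in practice a fixed camera-to-ground transform would be applied first); all names are illustrative:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double w, x, y, z; } Quat;
    typedef struct { double x, y, z; } Vec3;

    /* Xb axis of the bone frame (S24): the unit X vector rotated by q,
       i.e. the first column of the rotation matrix corresponding to q. */
    static Vec3 bone_x_axis(Quat q)
    {
        Vec3 xb = {
            1.0 - 2.0 * (q.y * q.y + q.z * q.z),
            2.0 * (q.x * q.y + q.w * q.z),
            2.0 * (q.x * q.z - q.w * q.y)
        };
        return xb;
    }

    /* Pitch theta (S25): angle between Xb and the XgOYg plane, whose normal
       is the Zg axis; xb is unit length, so no normalization is needed. */
    double pitch_deg(Quat q)
    {
        return asin(bone_x_axis(q).z) * 180.0 / M_PI;
    }

    /* Yaw psi (S25): angle between the projection of Xb onto the XgOYg
       plane and the Xg axis. */
    double yaw_deg(Quat q)
    {
        Vec3 xb = bone_x_axis(q);
        return atan2(xb.y, xb.x) * 180.0 / M_PI;
    }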
Further preferably, in step S3, the acceptable range of the distance between the two eyes and the reading object is: the absolute distance from the eyes to the reading object is 200-400 mm, and the difference between the distances of the two eyes to the reading object is 0-20 mm.
Further preferably, in step S3, the acceptable range of the shoulder height difference is: 0 to 100 mm.
Further preferably, in step S3, the acceptable range of the spine angle is: 125-160 degrees.
Further preferably, in step S1, the number of bone joint feature points is 32.
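Taken together, the comparison of step S3 reduces to a per-index range check against the acceptable ranges listed above. A minimal C sketch follows; the struct layout and names are illustrative, while the thresholds are those of the patent:

    #include <stdbool.h>

    typedef struct {
        double eye_distance_mm;   /* absolute distance from eyes to reading object */
        double eye_diff_mm;       /* difference between the two eye-to-object distances */
        double shoulder_diff_mm;  /* height difference between the two shoulders */
        double spine_angle_deg;   /* included angle at the thoracic joint */
    } PostureIndices;

    /* Returns true when every index lies inside its preset acceptable
       range, i.e. the sitting posture is qualified (step S3). */
    bool posture_qualified(PostureIndices p)
    {
        return p.eye_distance_mm  >= 200.0 && p.eye_distance_mm  <= 400.0
            && p.eye_diff_mm      >=   0.0 && p.eye_diff_mm      <=  20.0
            && p.shoulder_diff_mm >=   0.0 && p.shoulder_diff_mm <= 100.0
            && p.spine_angle_deg  >= 125.0 && p.spine_angle_deg  <= 160.0;
    }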
Generally, compared with the prior art, the technical scheme of the invention has the following beneficial effects:
1. The invention outputs accurate parameters from the main detection data, such as the distance from the eyes to the reading object, the distance difference between the two eyes, the height difference between the shoulders, the spine included angle and the three-way rotation angle of each bone segment, and uses them to evaluate whether the subject's sitting posture is reasonable and healthy; combined with reading time, it assesses the user's fatigue and concentration, and gives a corresponding prompt when the posture is improper. Accuracy and comfort are high, while the cost remains low;
2. Compared with existing RGB-image human sitting posture detection systems, the invention has the following advantages: the Kinect is an inexpensive depth camera, so the cost is low; its software and working principle are simple; and because depth information is used, the system performs well in non-planar, three-dimensional motion without being affected by illumination or clothing;
3. Compared with existing mechanical sitting posture detection systems, the invention has the following advantages: comfort is high, since the subject wears no additional sensor, is not hindered in movement and needs no sensor fitting, which also keeps the cost low and the cost-performance ratio high; moreover, the Kinect is a high-frame-rate depth camera, and this high frame rate guarantees the real-time performance of the sitting posture recognition;
4. Compared with existing optical sitting posture detection systems, the invention has the following advantages: practicality is high, hardware requirements are low and usage restrictions are few; it is mainly applied to scenes such as sitting posture correction and spine protection for students and long-seated office workers.
Drawings
Fig. 1 is a schematic flow chart of a human body sitting posture recognition method based on a depth camera Kinect according to a preferred embodiment of the invention;
FIG. 2 is a schematic diagram of a depth camera constructed in accordance with a preferred embodiment of the present invention capturing 32 skeletal joint feature points of a human body;
FIG. 3 is a schematic diagram of the distance between the two eyes and the object to be read according to the preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of a dual shoulder height difference detection configuration constructed in accordance with a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of spine angle detection according to a preferred embodiment of the present invention;
fig. 6 is a schematic structural diagram for three-way rotation angle detection of each bone segment constructed in accordance with a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the invention provides a human body sitting posture identification method based on a depth camera Kinect, which specifically comprises the following steps:
1) Software and hardware development is performed for the Kinect, and the position information of 32 human skeleton joint feature points is acquired through skeleton tracking;
Specifically: code is written in the C language on the Visual Studio platform, realizing joint point capture, image data output, three-dimensional spatial position data output and quaternion position data output.
2) Human body depth image information is collected by the Kinect and combined with the three-dimensional bone joint feature coordinates; the data are processed by geometry, and the processing results serve as judgment indices for posture recognition, specifically as follows:
2.1) The reference point position required by the judgment indices is preset, and the three-dimensional coordinates of the reference point are captured.
2.2) The captured bone joint feature points are linked to the motion of the real bones: the two-eye feature points are linked to the cervical vertebra motion, and the distance from the two eyes to the reading-object reference point serves as the judgment index for myopia tendency; the two-shoulder feature points are linked to the thoracic vertebra motion, and the height difference between the shoulders serves as the judgment index for lateral spine bending; the included angle between the thoracic, lumbar and caudal vertebrae serves as the judgment index for kyphosis tendency. The parameters associated with the feature points are obtained by geometry from the reference point coordinates and used as action recognition judgment indices (a minimal geometric sketch follows step 2.3).
2.3) The value range of each judgment index is set according to scientific standards, and the processed results are judged against these ranges.
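The geometric processing of step 2.2 reduces to distances between captured 3D points. The following minimal C sketch assumes joint coordinates in millimetres with a vertical y axis; the function names are illustrative:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    /* Euclidean distance between a joint feature point and the reference
       point on the reading object (myopia-tendency index of step 2.2). */
    double point_distance_mm(Vec3 a, Vec3 b)
    {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return sqrt(dx * dx + dy * dy + dz * dz);
    }

    /* Shoulder height difference along the vertical axis; fabs() because
       only the magnitude of the tilt matters for the lateral-bending index. */
    double shoulder_height_diff_mm(Vec3 left_shoulder, Vec3 right_shoulder)
    {
        return fabs(left_shoulder.y - right_shoulder.y);
    }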
3) Human body depth image information is collected by the Kinect and combined with the bone joint quaternion position coordinates; the data are processed by 3D graphics to obtain the bone joint rotation angles, which serve as discrimination indices for posture recognition, specifically as follows:
3.1) A target bone joint feature point and its parent joint point are selected; the bone direction points from the child joint to the parent joint.
3.2) Data processing with 3D graphics converts the quaternion of the joint feature point into three Euler angles (one common conversion convention is sketched after step 3.4).
3.3) Combining the positions and angles of the tested human body and the Kinect device, the positions and directions of the ground coordinate system and the rotating coordinate system are selected, and the Euler angles (pitch, yaw and roll) are defined.
3.4) The Euler angle information of the bone joint points is analyzed, and the three-way rotation angle of the bone between a pair of parent and child joints is determined by combining the two coordinate systems, so as to identify the human sitting posture.
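The patent does not fix a particular quaternion-to-Euler convention for step 3.2. As one common possibility, the following C sketch uses the aerospace Z-Y-X (yaw-pitch-roll) convention; the type and function names are ours, not the patent's:

    #include <math.h>

    typedef struct { double w, x, y, z; } Quat;
    typedef struct { double pitch, yaw, roll; } Euler;  /* radians */

    /* Standard Z-Y-X (yaw-pitch-roll) quaternion-to-Euler conversion. */
    Euler quat_to_euler(Quat q)
    {
        Euler e;
        e.roll = atan2(2.0 * (q.w * q.x + q.y * q.z),
                       1.0 - 2.0 * (q.x * q.x + q.y * q.y));
        double s = 2.0 * (q.w * q.y - q.z * q.x);
        if (s > 1.0)  s = 1.0;    /* clamp against numeric drift */
        if (s < -1.0) s = -1.0;   /* at the gimbal-lock boundary */
        e.pitch = asin(s);
        e.yaw = atan2(2.0 * (q.w * q.z + q.x * q.y),
                      1.0 - 2.0 * (q.y * q.y + q.z * q.z));
        return e;
    }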
4) The judgment indices and their results are analyzed and processed, so as to identify the human sitting posture comprehensively.
As shown in fig. 2, which is a schematic diagram of the 32 bone joint feature points of the human body captured by skeleton tracking: software and hardware development for the Kinect is carried out on the Visual Studio platform, programming in the C language realizes joint point capture, image data output, three-dimensional spatial position data output and quaternion position data output, and the output object is the 32 bone feature points covering the human body.
The position of the external reference point is selected, and the web between the thumb and index finger of the right hand (the "tiger's mouth") is placed at the preset reference point; since the human palm is flat and sheet-like, the coordinates of the corresponding hand skeleton joint point can be approximately regarded as the spatial coordinates of the reference point.
As shown in fig. 3, which illustrates the judgment index for human myopia tendency after the two eyes are linked to the cervical vertebra motion: the captured bone joint feature points are linked to the motion of the real bones. Because the motion range of the cervical vertebra itself is small, the two-eye feature points are linked to the cervical vertebra motion, the distance from the two eyes to the reading-object reference point is obtained by geometry, and the distance difference between the two eyes is also obtained and used as a judgment index for myopia tendency. In a normal sitting posture, the absolute distance from the eyes to the reference point is normal between 200 mm and 400 mm, and the distance difference between the two eyes and the reference point is normal between 0 mm and 20 mm; a myopia tendency exists when the parameter is too low, and a hyperopia tendency when it is too high.
As shown in fig. 4, which illustrates the judgment index for lateral spine inclination after the two shoulders are linked to the thoracic vertebra motion: because the motion amplitude of the thoracic vertebra is small, the two-shoulder feature points are linked to the thoracic vertebra motion, and the height difference between the shoulders is used as the judgment index for lateral spine bending. In a normal sitting posture the shoulder height difference is normal between 0 and 100 mm; a parameter beyond this range indicates a lateral spine bending tendency in the sitting posture.
As shown in fig. 5, which illustrates the spine bending angle and the judgment index for human kyphosis tendency: the included angle between the thoracic, lumbar and caudal vertebrae is obtained by geometry as the judgment index for kyphosis tendency. In a normal sitting posture this included angle is normal between 125 and 160 degrees; a kyphosis tendency exists when the parameter is below this range.
Using the 32 human skeleton joint points and their quaternion position information captured in step 1), one pair of parent and child joints on the human spine is selected; the spine bone between the pair is taken as the research object, with its direction pointing from the child joint to the parent joint.
As shown in fig. 6, two coordinate systems are determined and the three-way rotation angle is used as the judgment index to obtain a bone rotation diagram of the spine: the quaternion position of the selected child joint is processed with 3D graphics, the quaternion information is converted into Euler angle information, the positions and directions of the ground coordinate system and the rotating coordinate system are determined by combining the positions and angles of the human body and the Kinect, and the three-way rotation angle of the selected bone object is represented by three Euler angles (pitch, yaw and roll). The human sitting posture is evaluated with the three-way rotation angle as the index: the smaller the angle, the more extended the human posture and the smaller the pressure on the spine.
4) Comprehensive evaluation of human sitting posture
The kyphosis tendency, myopia tendency and lateral spine inclination of the current sitting posture are evaluated through step 2); the extension and load of the several spine segments are evaluated through step 3). These indices are analyzed together for a reliable sitting posture assessment: if any index in step 2) or 3) is unqualified, the sitting posture is unqualified, and otherwise it is qualified. In the unqualified case, correction can be made to the position or angle corresponding to the failing index until the corresponding requirement is met.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A human body sitting posture identification method based on a depth camera Kinect is characterized by comprising the following steps:
s1, for an object to be recognized, capturing an image of the object to be recognized by using a Kinect camera, and recognizing three-dimensional position coordinates and quaternion position coordinates of a plurality of bone joint characteristic points of the object to be recognized in the image;
S2, calculating, from the obtained three-dimensional position coordinates of the joint feature points, the distance from the two eyes of the object to be recognized to the reading object, the height difference between the two shoulders and the spine angle; then calculating, from the obtained quaternion position coordinates of the joint feature points, the three-way rotation angle of each segment of bone;
S3, comparing each calculation result obtained in step S2 with its corresponding preset acceptable range, wherein if every result lies within its acceptable range, the sitting posture of the object to be recognized is qualified, and otherwise it is unqualified.
2. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S2, the spine angle is calculated according to the following steps: three joint feature points on the spine are acquired, namely the cervical, thoracic and lumbar joint feature points; the three points are connected, and the included angle between the cervical-thoracic connecting line and the thoracic-lumbar connecting line is calculated, this included angle being the spine angle.
3. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S2, calculating the three-way rotation angle of each segment of bone from the obtained quaternion position coordinates of the joint feature points is performed according to the following steps:
S21, for each joint feature point, selecting the adjacent joint feature point as its parent joint point;
S22, constructing a ground coordinate system O-XgYgZg, wherein the direction pointing horizontally behind the object to be recognized is the positive Zg-axis direction, the vertically downward direction is the positive Yg-axis direction, and the origin is the child joint feature point;
S23, constructing a rotating coordinate system O-XbYbZb, wherein the direction from the child joint feature point to its corresponding parent joint feature point is the positive Xb-axis direction, the positive directions of Yb and Zb are the same as those of Yg and Zg, and the origin is the child joint feature point;
S24, determining the positive Xb-axis direction from the quaternion position coordinates of the child joint feature point and the parent joint feature point;
S25, calculating, respectively: the angle between Xb and the XgOYg plane, namely the pitch angle θ; the angle between the projection of the Xb axis on the XgOYg plane and the Xg axis, namely the yaw angle ψ; and the angle between the Zb axis and the vertical plane through the Xb axis, namely the roll angle φ; the pitch angle θ, the yaw angle ψ and the roll angle φ being the required three-way rotation angles.
4. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S3, the acceptable range of the distance between the two eyes and the reading object is: the absolute distance from the eyes to the reading object is 200-400 mm, and the difference between the distances of the two eyes to the reading object is 0-20 mm.
5. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S3, the acceptable range of the shoulder height difference is: 0 to 100 mm.
6. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S3, the acceptable range of the spine angle is: 125-160 degrees.
7. The human body sitting posture identifying method based on the depth camera Kinect as claimed in claim 1, wherein in step S1, the number of the plurality of bone joint feature points is preferably 32.
CN202010719422.XA 2020-07-23 2020-07-23 Human body sitting posture identification method based on depth camera Kinect Active CN111950392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010719422.XA CN111950392B (en) 2020-07-23 2020-07-23 Human body sitting posture identification method based on depth camera Kinect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010719422.XA CN111950392B (en) 2020-07-23 2020-07-23 Human body sitting posture identification method based on depth camera Kinect

Publications (2)

Publication Number Publication Date
CN111950392A (en) 2020-11-17
CN111950392B CN111950392B (en) 2022-08-05

Family

ID=73340933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010719422.XA Active CN111950392B (en) 2020-07-23 2020-07-23 Human body sitting posture identification method based on depth camera Kinect

Country Status (1)

Country Link
CN (1) CN111950392B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104157107A (en) * 2014-07-24 2014-11-19 燕山大学 Human body posture correction device based on Kinect sensor
CN106643708A (en) * 2016-09-21 2017-05-10 苏州坦特拉自动化科技有限公司 IMU-based interactive sitting posture correction device, sitting posture correction appliance and monitoring software
WO2018120964A1 (en) * 2016-12-30 2018-07-05 山东大学 Posture correction method based on depth information and skeleton information
CN107169456A (en) * 2017-05-16 2017-09-15 湖南巨汇科技发展有限公司 A kind of sitting posture detecting method based on sitting posture depth image
CN107153829A (en) * 2017-06-09 2017-09-12 南昌大学 Incorrect sitting-pose based reminding method and device based on depth image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许运程 et al., "Sitting posture detection method based on Kinect somatosensory interaction technology" (基于Kinect的体感交互技术的坐姿检测方法), Technology Innovation and Application (《科技创新与应用》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113065532A (en) * 2021-05-19 2021-07-02 南京大学 Sitting posture geometric parameter detection method and system based on RGBD image
CN113065532B (en) * 2021-05-19 2024-02-09 南京大学 Sitting posture geometric parameter detection method and system based on RGBD image
CN114092447A (en) * 2021-11-23 2022-02-25 北京阿尔法三维科技有限公司 Method, device and equipment for measuring scoliosis based on human body three-dimensional image
CN114092447B (en) * 2021-11-23 2022-07-22 北京阿尔法三维科技有限公司 Method, device and equipment for measuring scoliosis based on human body three-dimensional image

Also Published As

Publication number Publication date
CN111950392B (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN111950392B (en) Human body sitting posture identification method based on depth camera Kinect
WO2020042345A1 (en) Method and system for acquiring line-of-sight direction of human eyes by means of single camera
CN104157107B (en) A kind of human posture's apparatus for correcting based on Kinect sensor
CN110913751B (en) Wearable eye tracking system with slip detection and correction functions
CN112069933A (en) Skeletal muscle stress estimation method based on posture recognition and human body biomechanics
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN111553229B (en) Worker action identification method and device based on three-dimensional skeleton and LSTM
CN107103309A (en) A kind of sitting posture of student detection and correcting system based on image recognition
CN107633240B (en) Sight tracking method and device and intelligent glasses
CN109543651B (en) Method for detecting dangerous driving behavior of driver
CN113856186B (en) Pull-up action judging and counting method, system and device
Arar et al. A regression-based user calibration framework for real-time gaze estimation
CN112435731B (en) Method for judging whether real-time gesture meets preset rules
CN111881888A (en) Intelligent table control method and device based on attitude identification
CN112184898A (en) Digital human body modeling method based on motion recognition
CN115661862A (en) Pressure vision convolution model-based sitting posture sample set automatic labeling method
CN110163113B (en) Human behavior similarity calculation method and device
CN113989936A (en) Desk lamp capable of recognizing sitting posture of child and automatically correcting voice
CN113065532B (en) Sitting posture geometric parameter detection method and system based on RGBD image
Pagnon et al. Pose2Sim: An open-source Python package for multiview markerless kinematics
CN111241926A (en) Attendance checking and learning condition analysis method, system, equipment and readable storage medium
US10036902B2 (en) Method of determining at least one behavioural parameter
CN114612526A (en) Joint point tracking method, and Parkinson auxiliary diagnosis method and device
CN115631155A (en) Bone disease screening method based on space-time self-attention
CN115331281A (en) Anxiety and depression detection method and system based on sight distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant