CN112364694B - Human body sitting posture identification method based on key point detection


Info

Publication number
CN112364694B
Authority
CN
China
Prior art keywords
sitting posture
value
deviation
data set
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011088718.2A
Other languages
Chinese (zh)
Other versions
CN112364694A (en)
Inventor
郑佳罄
石守东
胡加钿
房志远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo University
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University
Priority to CN202011088718.2A
Publication of CN112364694A
Application granted
Publication of CN112364694B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Dentistry (AREA)
  • Physiology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human body sitting posture identification method based on key point detection, in which four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are determined in advance from key point coordinates. When the sitting posture of a human body is detected, the key point coordinates of the correct sitting posture are first acquired as a reference, and the sitting posture is then judged in real time from the real-time key point coordinates together with the four judgment thresholds; when the same incorrect sitting posture is detected 3 consecutive times, a voice announcement reminds the user, and when two or more incorrect sitting postures each occur 3 consecutive times, the one with the highest priority is announced. The method has the advantages of a simple implementation process, low demands on hardware computing power, high practicability, low cost, high real-time performance and good interactivity, and can subsequently be ported to embedded devices.

Description

Human body sitting posture identification method based on key point detection
Technical Field
The invention relates to a human body sitting posture identification method, in particular to a human body sitting posture identification method based on key point detection.
Background
In work and life, people sit for most of the time and pay little attention to incorrect sitting postures, yet long-term incorrect sitting postures can cause scoliosis, cervical spondylosis, myopia and a series of complications. A good sitting posture has an important influence on improving people's living and working efficiency and maintaining physical and mental health, and correctly recognizing a person's sitting posture can help them form good sitting habits. For this reason, human sitting posture recognition technology has been widely studied.
Most existing human sitting posture recognition technologies are based on machine learning. For example, Chinese patent application publication No. CN111414780A discloses a human sitting posture recognition method that collects a user sitting posture image in real time, recognizes human characteristic key points, and calculates current sitting posture data from them; the key point data include eye coordinates, mouth coordinates, neck coordinates and shoulder coordinates, the current sitting posture data include the current head inclination angle, the current shoulder inclination angle, the current height difference between the neck and the face, and the current height difference between the shoulder and the face, and the current sitting posture data are finally compared with standard sitting posture data to determine whether the current sitting posture is abnormal. The standard sitting posture data comprise a standard head inclination angle threshold, a standard shoulder inclination angle threshold, a standard eyes-too-close difference ratio threshold and a standard lying-on-desk difference ratio threshold, and these four thresholds are obtained by big-data training with a supervised machine learning classification algorithm. Supervised classification algorithms place high demands on hardware computing power, need a large amount of data during training to guarantee accuracy, and take a certain time to compute a result. Therefore, when such a human sitting posture identification method is implemented, the demands on hardware computing power are high, the cost is high, a large amount of time must be spent producing training data to guarantee accuracy, the implementation process is complex, computing the recognition result takes considerable time, and the real-time performance is poor.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a human body sitting posture identification method based on key point detection that has a simple implementation process, low demands on hardware computing power, high practicability, low cost, high real-time performance and good interactivity.
The technical solution adopted by the invention to solve the above technical problem is as follows: a human body sitting posture identification method based on key point detection, comprising the following steps:
(1) A PC preloaded with an image processing program, an infrared distance measuring sensor and a camera are provided, and the infrared distance measuring sensor and the camera are connected to the PC; the infrared distance measuring sensor and the camera lie in the same vertical plane no more than 5 cm apart; a coordinate system is established with the upper-left corner of the picture captured by the camera in real time as the origin of coordinates, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are prestored in the PC and are determined in advance by the following method:
Step 1-1, sitting posture behaviours are divided into 9 categories: too close, too far, head left deviation, head right deviation, body leaning left, body leaning right, shoulders not parallel, spine bending and correct sitting posture;
Step 1-2, 120 females with heights of 120 cm to 180 cm and 120 males with heights of 130 cm to 190 cm are selected as pre-test persons, where 120 cm to 180 cm is divided into 6 grades of 10 cm each with 20 females per grade, and 130 cm to 190 cm is divided into 6 grades of 10 cm each with 20 males per grade; the 240 pre-test persons are randomly numbered 1 to 240, and the person numbered i is called the i-th pre-test person, i = 1, 2, …, 240;
Step 1-3, the 240 pre-test persons are pre-tested respectively, the specific process being:
S1, the camera directly faces the pre-test person's face at a distance of 30 to 50 cm, and the person's face and shoulders must not be occluded;
S2, each pre-test person takes, in front of the camera, the 7 sitting postures of correct sitting posture, head left deviation, head right deviation, body leaning left, body leaning right, spine bending and shoulders not parallel in sequence, and the camera captures images of the 7 sitting postures and sends them to the PC; the 7 sitting postures are numbered 1 to 7 in this order, the sitting posture numbered j is called the j-th sitting posture, j = 1, 2, …, 7; the correct sitting posture means the waist and back are naturally straight, the chest is open, the shoulders are level, and the neck, chest and waist are kept upright, while the other 6 sitting postures are taken according to each person's normal habits;
S3, at the PC, an image processing program is used to acquire and record the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of each pre-test person in each of the 7 sitting postures, giving 240 groups of coordinate data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of one pre-test person in the 7 sitting postures; for the $i$-th pre-test person in the $j$-th sitting posture, the coordinates of the left eye pupil are recorded as $(lx_i^j, ly_i^j)$, the right eye pupil as $(rx_i^j, ry_i^j)$, the nose tip as $(nx_i^j, ny_i^j)$, the neck as $(bx_i^j, by_i^j)$, the left shoulder as $(lsx_i^j, lsy_i^j)$ and the right shoulder as $(rsx_i^j, rsy_i^j)$;
S4, taking the left deviation of the left eye on the x axis when the ith pre-inspected person inclines to the left as the left deviation, and recording the left deviation as delta L i And the right deviation of the right eye on the x axis in the right inclination of the body is taken as the right inclination deviation and is recorded as delta R i The amount of cervical offset in the y-axis during spinal flexion is denoted as Δ C i When the shoulders are not parallel, the difference value of the key points of the two shoulders on the y axis is taken as the shoulder deviation and is recorded as delta H i Respectively calculating by adopting formulas (1), (2), (3) and (4) to obtain delta L i 、ΔR i 、ΔC i And Δ H i
Figure BDA0002721249650000032
Figure BDA0002721249650000033
Figure BDA0002721249650000034
Figure BDA0002721249650000035
In the formula (4), | is an absolute value symbol;
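By way of illustration only, the following Python sketch evaluates formulas (1) to (4) for one pre-test person; the keypoints dictionary layout is hypothetical, with keypoints[j] holding that person's recorded coordinates in the j-th sitting posture (1 = correct, 4 = body leaning left, 5 = body leaning right, 6 = spine bending, 7 = shoulders not parallel):

    def deviations(keypoints):
        """Evaluate formulas (1)-(4) for one pre-test person.

        keypoints[j] = {'l': (lx, ly), 'r': (rx, ry), 'b': (bx, by),
                        'ls': (lsx, lsy), 'rs': (rsx, rsy)}
        for sitting posture j.
        """
        dL = keypoints[1]['l'][0] - keypoints[4]['l'][0]          # (1) left-eye x shift when leaning left
        dR = keypoints[5]['r'][0] - keypoints[1]['r'][0]          # (2) right-eye x shift when leaning right
        dC = keypoints[6]['b'][1] - keypoints[1]['b'][1]          # (3) neck y offset when the spine bends (y grows downward)
        dH = abs(keypoints[7]['ls'][1] - keypoints[7]['rs'][1])   # (4) shoulder y difference when shoulders are not parallel
        return dL, dR, dC, dH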
S5, the 240 groups of coordinate data are regrouped by sitting posture category into 7 groups of sitting posture data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of all 240 pre-test persons in that sitting posture;
S6, the judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are determined separately, where the specific process for determining $T_1$ is:
A. the 240 left-lean deviations $\Delta L_1 \sim \Delta L_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
B. an iteration variable $t$ is set and initialized with $t = 1$;
C. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
C1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
C2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_1 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest left-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_1$; if $K_{t-1} > 3$, the squared difference $(\Delta L - \mu_{t-1})^2$ is calculated for every left-lean deviation $\Delta L$ in the generation-$(t-1)$ data set, the left-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step C is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
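By way of illustration only, the iterative trimming of steps A to C can be sketched in Python; the sketch assumes SciPy's Pearson kurtosis (fisher=False, so a normal sample scores about 3), reads the first branch of step C2 as the mean plus three standard deviations in line with the reconstruction above, and omits any guard for degenerate data sets:

    import numpy as np
    from scipy.stats import kurtosis

    def decision_threshold(deviations):
        """Trim the sample farthest from the mean until the Pearson
        kurtosis is <= 3, then pick the threshold (step C2)."""
        data = np.asarray(deviations, dtype=float)
        while True:
            k = kurtosis(data, fisher=False)       # Pearson kurtosis: normal -> 3
            mu, sigma = data.mean(), data.std()
            if k <= 3:
                # near-normal: mean + 3*sigma; otherwise fall back to the sample maximum
                return mu + 3 * sigma if abs(k - 3) <= 1 else data.max()
            worst = int(np.argmax((data - mu) ** 2))   # sample farthest from the mean
            data = np.delete(data, worst)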
The specific process for determining $T_2$ is:
D. the 240 right-lean deviations $\Delta R_1 \sim \Delta R_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
E. an iteration variable $t$ is set and initialized with $t = 1$;
F. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
F1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
F2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_2 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest right-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_2$; if $K_{t-1} > 3$, the squared difference $(\Delta R - \mu_{t-1})^2$ is calculated for every right-lean deviation $\Delta R$ in the generation-$(t-1)$ data set, the right-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step F is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_3$ is:
G. the 240 neck offsets $\Delta C_1 \sim \Delta C_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
H. an iteration variable $t$ is set and initialized with $t = 1$;
I. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
I1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
I2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_3 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest neck offset in the generation-$(t-1)$ data set is taken as the judgment threshold $T_3$; if $K_{t-1} > 3$, the squared difference $(\Delta C - \mu_{t-1})^2$ is calculated for every neck offset $\Delta C$ in the generation-$(t-1)$ data set, the neck offset with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step I is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_4$ is:
J. the 240 shoulder deviations $\Delta H_1 \sim \Delta H_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
K. an iteration variable $t$ is set and initialized with $t = 1$;
L. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
L1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
L2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_4 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest shoulder deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_4$; if $K_{t-1} > 3$, the squared difference $(\Delta H - \mu_{t-1})^2$ is calculated for every shoulder deviation $\Delta H$ in the generation-$(t-1)$ data set, the shoulder deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step L is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
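Under the same assumptions, the four thresholds would then be obtained by running the routine sketched after step C once per deviation type; the variable names are hypothetical:

    # dL_all, dR_all, dC_all, dH_all: the 240 recorded deviations of each type
    T1 = decision_threshold(dL_all)   # body leaning left
    T2 = decision_threshold(dR_all)   # body leaning right
    T3 = decision_threshold(dC_all)   # spine bending
    T4 = decision_threshold(dH_all)   # shoulders not parallel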
(2) When the sitting posture of a user needs to be identified, the camera is installed at the corresponding position where sitting posture identification is to be carried out, and the user's reference data are acquired first, the specific process being: the user sits in front of the camera in the correct sitting posture, with the camera directly facing the user's face at a distance of 30 to 50 cm and the user's face and shoulders unoccluded; the camera captures an image of the user's correct sitting posture and sends it to the PC; the PC processes the image with the prestored image processing program, and determines and records the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of the user in the correct sitting posture; the coordinates of the left eye pupil are recorded as $(lx, ly)$, the right eye pupil as $(rx, ry)$, the nose tip as $(nx, ny)$, the neck as $(bx, by)$, the left shoulder as $(lsx, lsy)$ and the right shoulder as $(rsx, rsy)$;
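The patent does not name the image processing program used to locate the key points; by way of illustration only, a pose estimator such as MediaPipe Pose exposes landmarks from which the six key points can be approximated (MediaPipe reports eye landmarks rather than pupils, and the neck point, the hollow between the clavicles, is approximated here as the shoulder midpoint):

    import cv2                  # pip install opencv-python
    import mediapipe as mp      # pip install mediapipe

    mp_pose = mp.solutions.pose

    def six_keypoints(bgr_image):
        """Approximate the six key points in pixel coordinates (origin top-left)."""
        h, w = bgr_image.shape[:2]
        with mp_pose.Pose(static_image_mode=True) as pose:
            res = pose.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
        if res.pose_landmarks is None:
            return None
        lm = res.pose_landmarks.landmark
        def px(p):  # normalized landmark -> pixel coordinates
            return (p.x * w, p.y * h)
        left_eye  = px(lm[mp_pose.PoseLandmark.LEFT_EYE])
        right_eye = px(lm[mp_pose.PoseLandmark.RIGHT_EYE])
        nose      = px(lm[mp_pose.PoseLandmark.NOSE])
        l_sh      = px(lm[mp_pose.PoseLandmark.LEFT_SHOULDER])
        r_sh      = px(lm[mp_pose.PoseLandmark.RIGHT_SHOULDER])
        neck      = ((l_sh[0] + r_sh[0]) / 2, (l_sh[1] + r_sh[1]) / 2)  # approximation
        return {'l': left_eye, 'r': right_eye, 'n': nose, 'b': neck, 'ls': l_sh, 'rs': r_sh}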
(3) After the reference data of the user is determined, the sitting posture of the user is identified in real time, and the specific process is as follows:
Step 3-1, the PC acquires an image of the user's sitting posture from the camera every 2 seconds, processes the real-time sitting posture image with the image processing program, determines and records the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of the user in the current sitting posture, and at the same time receives the distance between the user and the camera measured by the infrared distance measuring sensor; the coordinates of the user's left eye pupil in the current sitting posture are recorded as $(lx_N, ly_N)$, the right eye pupil as $(rx_N, ry_N)$, the nose tip as $(nx_N, ny_N)$, the neck as $(bx_N, by_N)$, the left shoulder as $(lsx_N, lsy_N)$ and the right shoulder as $(rsx_N, rsy_N)$, and the distance between the user and the camera is recorded as $D$; the line connecting the left eye pupil key point and the nose tip key point is recorded as segment a, the line connecting the right eye pupil key point and the nose tip key point as segment b, the line connecting the nose tip key point and the neck key point as segment c, the line connecting the left shoulder key point and the neck key point as segment d, and the line connecting the right shoulder key point and the neck key point as segment e; the included angle between segment c and segment d is recorded as angle α, and the included angle between segment c and segment e as angle β;
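By way of illustration only, the angles α and β of step 3-1 are ordinary vector angles at the neck key point; the coordinates below are made-up example values in the step 3-1 notation:

    import math

    def angle_at(q, p, r):
        """Angle at vertex q, in degrees, between segments q-p and q-r."""
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    # example pixel coordinates (origin at the top-left, y grows downward)
    nose, neck = (320, 200), (320, 300)
    left_shoulder, right_shoulder = (240, 310), (400, 310)
    alpha = angle_at(neck, nose, left_shoulder)   # angle between segments c and d
    beta  = angle_at(neck, nose, right_shoulder)  # angle between segments c and e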
step 3-2, the sitting posture of the user is judged in real time according to the real-time data condition in the step 3-1, and the specific judgment standard is as follows:
if $D < 30$ cm, the current sitting posture is judged as too close;
if $D > 50$ cm, the current sitting posture is judged as too far;
if $0° < α \le 70°$, the current sitting posture is judged as head left deviation;
if $0° < β \le 70°$, the current sitting posture is judged as head right deviation;
if $lx - lx_N > T_1$, the current sitting posture is judged as body leaning left;
if $rx_N - rx > T_2$, the current sitting posture is judged as body leaning right;
if $|lsy_N - rsy_N| > T_4$, the current sitting posture is judged as shoulders not parallel;
if $by_N - by > T_3$, the current sitting posture is judged as spine bending;
in all other cases, the current sitting posture is judged as the correct sitting posture;
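By way of illustration only, the judgment standard of step 3-2 can be sketched as a single routine in Python; the dictionary layout of ref (reference coordinates from step (2)) and cur (current coordinates from step 3-1) is hypothetical, and returning the set of all violated rules, rather than a single label, is an assumption that lets the "two or more incorrect sitting postures" case of step 3-3 arise:

    def detect_incorrect(D, alpha, beta, ref, cur, T1, T2, T3, T4):
        """Return the set of incorrect postures flagged by the step 3-2 rules
        (an empty set means the current sitting posture is judged correct)."""
        flags = set()
        if D < 30:
            flags.add("too close")
        if D > 50:
            flags.add("too far")
        if 0 < alpha <= 70:
            flags.add("head left deviation")
        if 0 < beta <= 70:
            flags.add("head right deviation")
        if ref["lx"] - cur["lx"] > T1:
            flags.add("body leaning left")
        if cur["rx"] - ref["rx"] > T2:
            flags.add("body leaning right")
        if abs(cur["lsy"] - cur["rsy"]) > T4:
            flags.add("shoulders not parallel")
        if cur["by"] - ref["by"] > T3:
            flags.add("spine bending")
        return flags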
Step 3-3, if the same incorrect sitting posture is detected 3 consecutive times, a voice announcement is made to remind the user; when two or more incorrect sitting postures each occur 3 consecutive times, the sitting posture with the highest priority is announced, the priorities of the 8 incorrect sitting postures being, from high to low: too close, too far, head left deviation, head right deviation, body leaning left, body leaning right, shoulders not parallel and spine bending.
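By way of illustration only, the reminder logic of step 3-3 (announce only after 3 consecutive detections, break ties by priority) can be sketched as follows, consuming the flag sets produced by the step 3-2 sketch:

    PRIORITY = ["too close", "too far", "head left deviation", "head right deviation",
                "body leaning left", "body leaning right", "shoulders not parallel",
                "spine bending"]  # high to low, per step 3-3

    streak = {p: 0 for p in PRIORITY}   # consecutive detections per incorrect posture

    def posture_to_announce(detected):
        """detected: set of incorrect postures flagged for the current frame.
        Returns the posture to announce, or None if no reminder is due."""
        for p in PRIORITY:
            streak[p] = streak[p] + 1 if p in detected else 0
        due = [p for p in PRIORITY if streak[p] >= 3]
        if not due:
            return None
        announced = due[0]              # highest priority wins
        for p in due:
            streak[p] = 0               # restart the count after a reminder
        return announced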
Compared with the prior art, the method has the advantage that a hardware environment is built from a PC prestored with an image processing program, an infrared distance measuring sensor and a camera, and a coordinate system is established with the upper-left corner of the picture captured by the camera in real time as the origin, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are prestored in the PC and determined in advance from key point coordinates; when the sitting posture of a human body is detected, the key point coordinates of the correct sitting posture are first acquired as a reference, and the sitting posture is then judged in real time from the real-time key point coordinates together with the four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$; compared with existing machine learning methods, this way of determining the four judgment thresholds needs no large amount of training data, simplifies the computation while guaranteeing high accuracy, and shortens the time required for computation.
Drawings
Fig. 1 is a schematic diagram of the key points, the key point connecting lines and the included angles between the connecting lines in the human body sitting posture identification method based on key point detection.
Detailed Description
The invention is described in further detail below with reference to the embodiment.
Example (b): a human body sitting posture identification method based on key point detection comprises the following steps:
(1) A PC preloaded with an image processing program, an infrared distance measuring sensor and a camera are provided, and the infrared distance measuring sensor and the camera are connected to the PC; the infrared distance measuring sensor and the camera lie in the same vertical plane no more than 5 cm apart; a coordinate system is established with the upper-left corner of the picture captured by the camera in real time as the origin of coordinates, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are prestored in the PC and are determined in advance by the following method:
Step 1-1, sitting posture behaviours are divided into 9 categories: too close, too far, head left deviation, head right deviation, body leaning left, body leaning right, shoulders not parallel, spine bending and correct sitting posture;
Step 1-2, 120 females with heights of 120 cm to 180 cm and 120 males with heights of 130 cm to 190 cm are selected as pre-test persons, where 120 cm to 180 cm is divided into 6 grades of 10 cm each with 20 females per grade, and 130 cm to 190 cm is divided into 6 grades of 10 cm each with 20 males per grade; the 240 pre-test persons are randomly numbered 1 to 240, and the person numbered i is called the i-th pre-test person, i = 1, 2, …, 240;
Step 1-3, the 240 pre-test persons are pre-tested respectively, the specific process being:
S1, the camera directly faces the pre-test person's face at a distance of 30 to 50 cm, and the person's face and shoulders must not be occluded;
S2, each pre-test person takes, in front of the camera, the 7 sitting postures of correct sitting posture, head left deviation, head right deviation, body leaning left, body leaning right, spine bending and shoulders not parallel in sequence, and the camera captures images of the 7 sitting postures and sends them to the PC; the 7 sitting postures are numbered 1 to 7 in this order, the sitting posture numbered j is called the j-th sitting posture, j = 1, 2, …, 7; the correct sitting posture means the waist and back are naturally straight, the chest is open, the shoulders are level, and the neck, chest and waist are kept upright, while the other 6 sitting postures are taken according to each person's normal habits;
S3, at the PC, an image processing program is used to acquire and record the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of each pre-test person in each of the 7 sitting postures, giving 240 groups of coordinate data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of one pre-test person in the 7 sitting postures; for the $i$-th pre-test person in the $j$-th sitting posture, the coordinates of the left eye pupil are recorded as $(lx_i^j, ly_i^j)$, the right eye pupil as $(rx_i^j, ry_i^j)$, the nose tip as $(nx_i^j, ny_i^j)$, the neck as $(bx_i^j, by_i^j)$, the left shoulder as $(lsx_i^j, lsy_i^j)$ and the right shoulder as $(rsx_i^j, rsy_i^j)$;
S4, taking the left deviation of the left eye on the x axis when the ith pre-inspected person inclines to the left as the left deviation, and recording the left deviation as delta L i And the right deviation of the right eye on the x axis in the right inclination of the body is taken as the right inclination deviation and is recorded as delta R i The amount of cervical offset in the y-axis during spinal flexion is denoted as Δ C i When the shoulders are not parallel, the difference value of the key points of the two shoulders on the y axis is taken as the shoulder deviation and is recorded as delta H i Respectively calculating by adopting formulas (1), (2), (3) and (4) to obtain delta L i 、ΔR i 、ΔC i And Δ H i
Figure BDA0002721249650000085
Figure BDA0002721249650000086
Figure BDA0002721249650000087
Figure BDA0002721249650000088
In the formula (4), | | | is an absolute value symbol;
S5, the 240 groups of coordinate data are regrouped by sitting posture category into 7 groups of sitting posture data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of all 240 pre-test persons in that sitting posture;
S6, the judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are determined separately, where the specific process for determining $T_1$ is:
A. the 240 left-lean deviations $\Delta L_1 \sim \Delta L_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
B. an iteration variable $t$ is set and initialized with $t = 1$;
C. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
C1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
C2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_1 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest left-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_1$; if $K_{t-1} > 3$, the squared difference $(\Delta L - \mu_{t-1})^2$ is calculated for every left-lean deviation $\Delta L$ in the generation-$(t-1)$ data set, the left-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step C is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_2$ is:
D. the 240 right-lean deviations $\Delta R_1 \sim \Delta R_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
E. an iteration variable $t$ is set and initialized with $t = 1$;
F. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
F1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
F2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_2 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest right-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_2$; if $K_{t-1} > 3$, the squared difference $(\Delta R - \mu_{t-1})^2$ is calculated for every right-lean deviation $\Delta R$ in the generation-$(t-1)$ data set, the right-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step F is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_3$ is:
G. the 240 neck offsets $\Delta C_1 \sim \Delta C_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
H. an iteration variable $t$ is set and initialized with $t = 1$;
I. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
I1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
I2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_3 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest neck offset in the generation-$(t-1)$ data set is taken as the judgment threshold $T_3$; if $K_{t-1} > 3$, the squared difference $(\Delta C - \mu_{t-1})^2$ is calculated for every neck offset $\Delta C$ in the generation-$(t-1)$ data set, the neck offset with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step I is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_4$ is:
J. the 240 shoulder deviations $\Delta H_1 \sim \Delta H_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
K. an iteration variable $t$ is set and initialized with $t = 1$;
L. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
L1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
L2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_4 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest shoulder deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_4$; if $K_{t-1} > 3$, the squared difference $(\Delta H - \mu_{t-1})^2$ is calculated for every shoulder deviation $\Delta H$ in the generation-$(t-1)$ data set, the shoulder deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step L is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
(2) When the sitting posture of a user needs to be identified, the camera is installed at the corresponding position where sitting posture identification is to be carried out, and the user's reference data are acquired first, the specific process being: the user sits in front of the camera in the correct sitting posture, with the camera directly facing the user's face at a distance of 30 to 50 cm and the user's face and shoulders unoccluded; the camera captures an image of the user's correct sitting posture and sends it to the PC; the PC processes the image with the prestored image processing program, and determines and records the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of the user in the correct sitting posture; the coordinates of the left eye pupil are recorded as $(lx, ly)$, the right eye pupil as $(rx, ry)$, the nose tip as $(nx, ny)$, the neck as $(bx, by)$, the left shoulder as $(lsx, lsy)$ and the right shoulder as $(rsx, rsy)$;
(3) After the reference data of the user is determined, the sitting posture of the user is identified in real time, and the specific process is as follows:
Step 3-1, the PC acquires an image of the user's sitting posture from the camera every 2 seconds, processes the real-time sitting posture image with the image processing program, determines and records the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder, of the user in the current sitting posture, and at the same time receives the distance between the user and the camera measured by the infrared distance measuring sensor; the coordinates of the user's left eye pupil in the current sitting posture are recorded as $(lx_N, ly_N)$, the right eye pupil as $(rx_N, ry_N)$, the nose tip as $(nx_N, ny_N)$, the neck as $(bx_N, by_N)$, the left shoulder as $(lsx_N, lsy_N)$ and the right shoulder as $(rsx_N, rsy_N)$, and the distance between the user and the camera is recorded as $D$; the line connecting the left eye pupil key point and the nose tip key point is recorded as segment a, the line connecting the right eye pupil key point and the nose tip key point as segment b, the line connecting the nose tip key point and the neck key point as segment c, the line connecting the left shoulder key point and the neck key point as segment d, and the line connecting the right shoulder key point and the neck key point as segment e; the included angle between segment c and segment d is recorded as angle α, and the included angle between segment c and segment e as angle β;
step 3-2, the sitting posture of the user is judged in real time according to the real-time data condition of the step 3-1, and the specific judgment standard is as follows:
if $D < 30$ cm, the current sitting posture is judged as too close;
if $D > 50$ cm, the current sitting posture is judged as too far;
if $0° < α \le 70°$, the current sitting posture is judged as head left deviation;
if $0° < β \le 70°$, the current sitting posture is judged as head right deviation;
if $lx - lx_N > T_1$, the current sitting posture is judged as body leaning left;
if $rx_N - rx > T_2$, the current sitting posture is judged as body leaning right;
if $|lsy_N - rsy_N| > T_4$, the current sitting posture is judged as shoulders not parallel;
if $by_N - by > T_3$, the current sitting posture is judged as spine bending;
in all other cases, the current sitting posture is judged as the correct sitting posture;
Step 3-3, if the same incorrect sitting posture is detected 3 consecutive times, a voice announcement is made to remind the user; when two or more incorrect sitting postures each occur 3 consecutive times, the sitting posture with the highest priority is announced, the priorities of the 8 incorrect sitting postures being, from high to low: too close, too far, head left deviation, head right deviation, body leaning left, body leaning right, shoulders not parallel and spine bending.

Claims (1)

1. A human body sitting posture identification method based on key point detection is characterized by comprising the following steps:
(1) A PC preloaded with an image processing program, an infrared distance measuring sensor and a camera are provided, and the infrared distance measuring sensor and the camera are connected to the PC; the infrared distance measuring sensor and the camera lie in the same vertical plane no more than 5 cm apart; a coordinate system is established with the upper-left corner of the picture captured by the camera in real time as the origin of coordinates, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are also prestored in the PC and are determined in advance by the following method:
Step 1-1, sitting posture behaviours are divided into 9 categories: too close, too far, head left deviation, head right deviation, body leaning left, body leaning right, shoulders not parallel, spine bending and correct sitting posture;
Step 1-2, 120 females with heights of 120 cm to 180 cm and 120 males with heights of 130 cm to 190 cm are selected as pre-test persons, where 120 cm to 180 cm is divided into 6 grades of 10 cm each with 20 females per grade, and 130 cm to 190 cm is divided into 6 grades of 10 cm each with 20 males per grade; the 240 pre-test persons are randomly numbered 1 to 240, and the person numbered i is called the i-th pre-test person, i = 1, 2, …, 240;
Step 1-3, the 240 pre-test persons are pre-tested respectively, the specific process being:
S1, the camera directly faces the pre-test person's face at a distance of 30 to 50 cm, and the person's face and shoulders must not be occluded;
S2, each pre-test person takes, in front of the camera, the 7 sitting postures of correct sitting posture, head left deviation, head right deviation, body leaning left, body leaning right, spine bending and shoulders not parallel in sequence, and the camera captures images of the 7 sitting postures and sends them to the PC; the 7 sitting postures are numbered 1 to 7 in this order, the sitting posture numbered j is called the j-th sitting posture, j = 1, 2, …, 7; the correct sitting posture means the waist and back are naturally straight, the chest is open, the shoulders are level, and the neck, chest and waist are kept upright, while the other 6 sitting postures are taken according to each person's normal habits;
S3, at the PC, an image processing program is used to acquire and record the coordinates of the 6 key points, namely the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder, of each pre-test person in each of the 7 sitting postures, giving 240 groups of coordinate data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of one pre-test person in the 7 sitting postures; for the $i$-th pre-test person in the $j$-th sitting posture, the coordinates of the left eye pupil are recorded as $(lx_i^j, ly_i^j)$, the right eye pupil as $(rx_i^j, ry_i^j)$, the nose tip as $(nx_i^j, ny_i^j)$, the neck as $(bx_i^j, by_i^j)$, the left shoulder as $(lsx_i^j, lsy_i^j)$ and the right shoulder as $(rsx_i^j, rsy_i^j)$;
S4, taking the left deviation of the left eye on the x axis when the ith pre-inspected person inclines to the left as the left deviation, and recording the left deviation as delta L i And the right deviation of the right eye on the x axis in the right inclination of the body is taken as the right deviation and recorded as delta R i The amount of cervical offset in the y-axis during spinal flexion is denoted as Δ C i When the shoulders are not parallel, the difference value of the key points of the two shoulders on the y axis is taken as the shoulder deviation and is recorded as delta H i Respectively calculating by adopting formulas (1), (2), (3) and (4) to obtain delta L i 、ΔR i 、ΔC i And Δ H i
Figure QLYQS_7
Figure QLYQS_8
Figure QLYQS_9
Figure QLYQS_10
In the formula (4), | is an absolute value symbol;
S5, the 240 groups of coordinate data are regrouped by sitting posture category into 7 groups of sitting posture data, where each group contains the left eye pupil, right eye pupil, nose tip, neck, left shoulder and right shoulder coordinates of all 240 pre-test persons in that sitting posture;
S6, the judgment thresholds $T_1$, $T_2$, $T_3$ and $T_4$ are determined separately, where the specific process for determining $T_1$ is:
A. the 240 left-lean deviations $\Delta L_1 \sim \Delta L_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
B. an iteration variable $t$ is set and initialized with $t = 1$;
C. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
C1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
C2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_1 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest left-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_1$; if $K_{t-1} > 3$, the squared difference $(\Delta L - \mu_{t-1})^2$ is calculated for every left-lean deviation $\Delta L$ in the generation-$(t-1)$ data set, the left-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step C is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_2$ is:
D. the 240 right-lean deviations $\Delta R_1 \sim \Delta R_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
E. an iteration variable $t$ is set and initialized with $t = 1$;
F. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
F1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
F2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_2 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest right-lean deviation in the generation-$(t-1)$ data set is taken as the judgment threshold $T_2$; if $K_{t-1} > 3$, the squared difference $(\Delta R - \mu_{t-1})^2$ is calculated for every right-lean deviation $\Delta R$ in the generation-$(t-1)$ data set, the right-lean deviation with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step F is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
The specific process for determining $T_3$ is:
G. the 240 neck offsets $\Delta C_1 \sim \Delta C_{240}$ form the original data set, and the original data set is taken as the generation-0 data set;
H. an iteration variable $t$ is set and initialized with $t = 1$;
I. the $t$-th iterative update is carried out to obtain the generation-$t$ data set, the specific process being:
I1, the kurtosis $K_{t-1}$, mean $\mu_{t-1}$ and standard deviation $\sigma_{t-1}$ of the generation-$(t-1)$ data set are calculated;
I2, whether $K_{t-1} > 3$ is judged: if $K_{t-1} \le 3$ and $|K_{t-1} - 3| \le 1$, the judgment threshold is taken as $T_3 = \mu_{t-1} + 3\sigma_{t-1}$; if $K_{t-1} \le 3$ and $|K_{t-1} - 3| > 1$, the value of the largest neck offset in the generation-$(t-1)$ data set is taken as the judgment threshold $T_3$; if $K_{t-1} > 3$, the squared difference $(\Delta C - \mu_{t-1})^2$ is calculated for every neck offset $\Delta C$ in the generation-$(t-1)$ data set, the neck offset with the largest squared difference is deleted from the generation-$(t-1)$ data set to obtain the generation-$t$ data set, the value of $t$ is then incremented by 1, step I is returned to, and the next iteration is carried out, until $K_{t-1} \le 3$;
determining the decision threshold T 4 , the specific process being as follows:
J. the 240 shoulder deviations ΔH 1 ~ΔH 240 form the original data set, and this original data set serves as the 0th generation data set;
K. an iteration variable t is set and initialized to t=1;
L. the t-th iterative update is carried out to obtain the t-th generation data set, the specific process being as follows:
L1. the kurtosis K, the mean μ and the standard deviation σ of the (t-1)-th generation data set are calculated;
L2. whether K is greater than 3 is judged: if K is not greater than 3 and the difference between K and 3 is not greater than 1, the decision threshold is set to T 4 = μ + 3σ; if K is not greater than 3 and the difference between K and 3 is greater than 1, the value of the maximum shoulder deviation in the (t-1)-th generation data set is taken as the decision threshold T 4 ; if K is greater than 3, the square of the difference between each shoulder deviation in the (t-1)-th generation data set and μ is calculated, the shoulder deviation corresponding to the maximum squared value is deleted from the (t-1)-th generation data set to obtain the t-th generation data set, 1 is added to the current value of t, the process returns to step L, and the next iteration is performed until K is not greater than 3 (a code sketch of this shared trimming procedure follows);
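The four thresholds T 1 ~T 4 are all produced by the same kurtosis-driven trimming loop, only applied to different deviation sets. A minimal Python sketch of that shared procedure is given below; the function name determine_threshold and the use of numpy are illustrative choices, the kurtosis is the non-excess form (a normal distribution gives a value near 3), and the μ + 3σ formula in the near-normal branch is an assumption, since the original expression is not legible in this text:

    import numpy as np

    def determine_threshold(deviations, max_iter=1000):
        """Iterative threshold determination shared by steps C, F, I and L
        (a sketch under the stated assumptions, not the verbatim procedure).

        `deviations` is the 0th-generation data set, e.g. the 240 right-lean
        deviations ΔR1..ΔR240 when computing T2.
        """
        data = np.asarray(deviations, dtype=float).copy()
        for _ in range(max_iter):
            mu = data.mean()
            sigma = data.std()
            if sigma == 0:                  # degenerate data set: no spread left
                return float(data.max())
            # Non-excess sample kurtosis: E[(x - mu)^4] / sigma^4.
            kurt = float(((data - mu) ** 4).mean() / sigma ** 4)
            if kurt <= 3:
                if 3 - kurt <= 1:
                    return mu + 3 * sigma   # near-normal: three-sigma rule (assumed formula)
                return float(data.max())    # otherwise: maximum deviation as threshold
            # Heavy-tailed: drop the sample farthest from the mean and iterate.
            data = np.delete(data, int(np.argmax((data - mu) ** 2)))
        return float(data.max())

For example, T 2 would be obtained as determine_threshold(right_deviations) with the 240 recorded right-lean deviations; the loop deletes at most one sample per iteration, so it always terminates.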
(2) When the sitting posture of a user is to be recognized, the camera is installed at the position where sitting posture recognition is required, and the user's reference data is acquired first. The specific process is: the user first sits in front of the camera in a correct sitting posture, with the camera directly facing the user's face at a distance of 30-50 cm and the user's face and shoulders unobstructed; the camera captures an image of the user's correct sitting posture and sends it to the PC; the PC processes this image with an image processing program pre-stored on the PC, and determines and records the coordinates of 6 key points of the user in the correct sitting posture: the left-eye pupil, recorded as (lx, ly); the right-eye pupil, recorded as (rx, ry); the nose tip, recorded as (nx, ny); the neck, recorded as (bx, by); the left shoulder, recorded as (lsx, lsy); and the right shoulder, recorded as (rsx, rsy);
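For illustration only, the reference coordinates recorded in this step can be kept in a simple mapping keyed by the patent's variable names; the numeric values below are hypothetical pixel coordinates, not values from the source:

    # Reference key points from the correct-sitting-posture image
    # (hypothetical pixel values; keys follow the patent's notation).
    reference = {
        "lx": 300, "ly": 175,    # left-eye pupil (lx, ly)
        "rx": 340, "ry": 175,    # right-eye pupil (rx, ry)
        "nx": 320, "ny": 200,    # nose tip (nx, ny)
        "bx": 320, "by": 260,    # neck (bx, by)
        "lsx": 250, "lsy": 300,  # left shoulder (lsx, lsy)
        "rsx": 390, "rsy": 300,  # right shoulder (rsx, rsy)
    }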
(3) After the user's reference data has been determined, the user's sitting posture is recognized in real time. The specific process is:
Step 3-1: the PC collects an image of the user's sitting posture from the camera every 2 seconds, processes the real-time image with the image processing program, determines and records the coordinates of the 6 key points of the user in the current sitting posture, and receives the distance between the user and the camera measured by the infrared distance-measuring sensor. The coordinates of the user's left-eye pupil in the current sitting posture are recorded as (lx N , ly N ), those of the right-eye pupil as (rx N , ry N ), those of the nose tip as (nx N , ny N ), those of the neck as (bx N , by N ), those of the left shoulder as (lsx N , lsy N ), and those of the right shoulder as (rsx N , rsy N ); the distance between the user and the camera is recorded as D. The left-eye pupil key point is connected to the nose-tip key point, and their connecting line is recorded as segment a; the right-eye pupil key point is connected to the nose-tip key point, recorded as segment b; the nose-tip key point is connected to the neck key point, recorded as segment c; the left-shoulder key point is connected to the neck key point, recorded as segment d; the right-shoulder key point is connected to the neck key point, recorded as segment e. The included angle between segment c and segment d is recorded as angle α, and the included angle between segment c and segment e as angle β;
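Since segments c, d and e all share the neck key point, α and β reduce to the angle at the neck between the nose-tip direction and each shoulder direction. A minimal sketch of that computation follows; the coordinate values and the helper name angle_at are illustrative:

    import math

    def angle_at(q, p, r):
        """Angle at vertex q (in degrees) between segments q->p and q->r."""
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

    # Hypothetical pixel coordinates, not values from the source.
    nose, neck = (320, 200), (320, 260)
    left_shoulder, right_shoulder = (250, 300), (390, 300)

    alpha = angle_at(neck, nose, left_shoulder)    # between segment c and segment d
    beta = angle_at(neck, nose, right_shoulder)    # between segment c and segment e

With an upright posture both angles come out well above 70°, which is consistent with the head-tilt criteria in step 3-2 below.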
Step 3-2: the sitting posture of the user is judged in real time from the data of step 3-1, with the following criteria (a code sketch of these rules follows the list):
if D is less than 30 cm, the distance is judged to be too close;
if D is greater than 50 cm, the distance is judged to be too far;
if α is greater than 0° and not greater than 70°, the current sitting posture is judged to be head tilted left;
if β is greater than 0° and not greater than 70°, the current sitting posture is judged to be head tilted right;
if lx - lx N > T 1 , the current sitting posture is judged to be body leaning left;
if rx N - rx > T 2 , the current sitting posture is judged to be body leaning right;
if |lsy N - rsy N | > T 4 , the current sitting posture is judged to be shoulders not level;
if by N - by > T 3 , the current sitting posture is judged to be spine bent;
otherwise, the current sitting posture is judged to be correct;
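A minimal Python sketch of these decision rules; classify_posture is a name introduced here, and the dict-based layout of cur and ref (current and reference coordinates, keyed by the patent's variable names) matches the illustrative reference mapping shown earlier. It returns the set of incorrect postures found in one frame, which fits the multi-posture handling of step 3-3:

    def classify_posture(d, alpha, beta, cur, ref, T1, T2, T3, T4):
        """Step 3-2 rules (sketch): return the set of incorrect postures
        detected in one frame; an empty set means the posture is correct."""
        found = set()
        if d < 30:
            found.add("too close")
        if d > 50:
            found.add("too far")
        if 0 < alpha <= 70:
            found.add("head tilted left")
        if 0 < beta <= 70:
            found.add("head tilted right")
        if ref["lx"] - cur["lx"] > T1:
            found.add("body leaning left")
        if cur["rx"] - ref["rx"] > T2:
            found.add("body leaning right")
        if abs(cur["lsy"] - cur["rsy"]) > T4:
            found.add("shoulders not level")
        if cur["by"] - ref["by"] > T3:
            found.add("spine bent")
        return found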
Step 3-3: if the same incorrect sitting posture is detected 3 times in a row, a voice announcement is made to remind the user; when more than one incorrect sitting posture occurs 3 times in a row, only the posture with the highest priority is announced. The priorities of the 8 incorrect sitting postures, from high to low, are: too close, too far, head tilted left, head tilted right, body leaning left, body leaning right, shoulders not level, and spine bent (a sketch of this announcement logic follows).
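A minimal sketch of the 3-in-a-row, highest-priority announcement rule, consuming the per-frame sets produced by the classify_posture sketch above; announce is a hypothetical stand-in for the PC's voice-broadcast hook:

    from collections import deque

    PRIORITY = ["too close", "too far", "head tilted left", "head tilted right",
                "body leaning left", "body leaning right",
                "shoulders not level", "spine bent"]

    recent = deque(maxlen=3)  # incorrect-posture sets from the last 3 frames

    def announce(label):
        # Hypothetical stand-in for the voice broadcast on the PC.
        print("voice alert:", label)

    def on_new_frame(postures):
        """Feed the set of incorrect postures detected in one frame (every 2 s)."""
        recent.append(set(postures))
        if len(recent) == 3:
            persistent = set.intersection(*recent)  # present in all 3 consecutive frames
            for label in PRIORITY:                  # announce only the highest priority
                if label in persistent:
                    announce(label)
                    break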
CN202011088718.2A 2020-10-13 2020-10-13 Human body sitting posture identification method based on key point detection Active CN112364694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011088718.2A CN112364694B (en) 2020-10-13 2020-10-13 Human body sitting posture identification method based on key point detection

Publications (2)

Publication Number Publication Date
CN112364694A CN112364694A (en) 2021-02-12
CN112364694B true CN112364694B (en) 2023-04-18

Family

ID=74507159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011088718.2A Active CN112364694B (en) 2020-10-13 2020-10-13 Human body sitting posture identification method based on key point detection

Country Status (1)

Country Link
CN (1) CN112364694B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112770B (en) * 2021-03-11 2022-05-17 合肥视其佳科技有限公司 Detection method for recognizing head postures of students
CN113034322B (en) * 2021-04-01 2024-02-02 珠海爱浦京软件股份有限公司 Internet-based online education supervision system and method
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene
CN113657271B (en) * 2021-08-17 2023-10-03 上海科技大学 Sitting posture detection method and system combining quantifiable factors and unquantifiable factor judgment
CN113743255A (en) * 2021-08-18 2021-12-03 广东机电职业技术学院 Neural network-based child sitting posture identification and correction method and system
CN114550099A (en) * 2022-03-01 2022-05-27 常莫凡 Comprehensive health management system based on digital twins
CN118592939A (en) * 2023-06-21 2024-09-06 圣奥科技股份有限公司 Sitting posture detection system based on human body key points
CN117746505A (en) * 2023-12-21 2024-03-22 武汉星巡智能科技有限公司 Learning accompanying method and device combined with abnormal sitting posture dynamic detection and robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10667697B2 (en) * 2015-06-14 2020-06-02 Facense Ltd. Identification of posture-related syncope using head-mounted sensors

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382653A (en) * 2018-12-29 2020-07-07 沈阳新松机器人自动化股份有限公司 Human body sitting posture monitoring method
CN111414780A (en) * 2019-01-04 2020-07-14 卓望数码技术(深圳)有限公司 Sitting posture real-time intelligent distinguishing method, system, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Yuan; GUO Chenxu; SHI Xin; SHEN Liming. Research on the human-machine adaptability of desks and chairs based on OpenPose sitting-posture analysis. Journal of Forestry Engineering. 2020, (02), full text. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant