CN112364694B - Human body sitting posture identification method based on key point detection - Google Patents

Human body sitting posture identification method based on key point detection

Info

Publication number
CN112364694B
CN112364694B (application CN202011088718.2A)
Authority
CN
China
Prior art keywords
sitting posture
coordinates
data set
shoulder
deviation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011088718.2A
Other languages
Chinese (zh)
Other versions
CN112364694A (en)
Inventor
郑佳罄
石守东
胡加钿
房志远
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202011088718.2A priority Critical patent/CN112364694B/en
Publication of CN112364694A publication Critical patent/CN112364694A/en
Application granted granted Critical
Publication of CN112364694B publication Critical patent/CN112364694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/1116Determining posture transitions
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
    • A61B5/1128Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Dentistry (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Radiology & Medical Imaging (AREA)
  • Human Computer Interaction (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

本发明公开了一种基于关键点检测的人体坐姿识别方法,通过PC机、红外测距传感器以及摄像头搭建硬件环境,PC机内预存的判定阈值T1、T2、T3和T4通过关键点坐标确定,在对人体坐姿进行检测时,先获取人体正确坐姿下的关键点坐标作为基准,然后结合人体实时关键点坐标和四个判定阈值实时判断人体坐姿,且当连续3次判定为同一不正确的坐姿,则语音进行播报提醒使用者,当两种以上不正确坐姿连续3次同时出现时,语音播报时播报优先级别最高的坐姿;优点是实现过程简单,对硬件的计算能力要求较低,后续可移植到嵌入式设备上,实用性较高,成本较低,实时性较高,且交互性良好。


The invention discloses a human body sitting posture recognition method based on key point detection. A hardware environment is built from a PC, an infrared ranging sensor and a camera, and the judgment thresholds T 1 , T 2 , T 3 and T 4 pre-stored in the PC are determined from key point coordinates. When detecting a sitting posture, the key point coordinates of the correct sitting posture are first obtained as a reference, and the posture is then judged in real time from the real-time key point coordinates together with the four judgment thresholds. When the same incorrect posture is judged three times in a row, a voice broadcast reminds the user; when two or more incorrect postures occur simultaneously three times in a row, the posture with the highest priority is announced. The advantages are a simple implementation process, low requirements on hardware computing power, easy later porting to embedded devices, high practicability, low cost, high real-time performance, and good interactivity.


Description

一种基于关键点检测的人体坐姿识别方法A method for human sitting posture recognition based on key point detection

技术领域Technical Field

本发明涉及一种人体坐姿识别方法,尤其是涉及一种基于关键点检测的人体坐姿识别方法。The invention relates to a method for recognizing a sitting posture of a human body, and in particular to a method for recognizing a sitting posture of a human body based on key point detection.

背景技术Background Art

在工作和生活中,人们大多数时间采取的都是坐姿,并且稍不注意就会采取不正确的坐姿,而长期的坐姿不正确可能会引发脊柱侧弯、颈椎病、近视以及一系列并发症。良好的坐姿对于提高人们生活工作效率以及保持身心健康有着重要影响,正确识别人们的坐姿可以辅助人们养成良好的坐姿习惯。为此,人体坐姿识别技术得到了广泛研究。In work and life, people sit most of the time, and if they are not careful, they will adopt an incorrect sitting posture. Long-term incorrect sitting posture may cause scoliosis, cervical spondylosis, myopia and a series of complications. Good sitting posture has an important impact on improving people's life and work efficiency and maintaining physical and mental health. Correctly identifying people's sitting posture can help people develop good sitting habits. For this reason, human sitting posture recognition technology has been widely studied.

现有的人体坐姿识别技术大多是基于机器学习的，例如申请公布号为CN111414780A的中国专利中公开了一种人体坐姿识别方法，该方法实时采集用户坐姿图像，识别人体特征关键点并根据人体特征关键点数据计算当前坐姿数据，关键点数据包括眼坐标、嘴坐标、脖坐标、肩坐标，当前坐姿数据包括当前头部倾斜角度、当前肩部倾斜角度、当前脖子与脸部之间高度差值、当前肩与脸部之间高度差值，最后将当前坐姿数据与标准坐姿数据进行比较，判断当前坐姿是否异常。标准坐姿数据包括标准头部倾斜角度阈值、标准肩部倾斜角度阈值、标准用眼过近差值比阈值、标准趴桌差值比阈值，这四个阈值通过机器学习的监督学习分类算法进行大数据训练来获取。机器学习的监督学习分类算法对硬件的计算能力要求较高，训练时需要大量的数据以确保算法的准确率，并且需要一定的时间才能计算出相应的结果。由此，上述人体坐姿识别方法在实现时，对硬件的计算能力要求较高，成本较高，且为了保证其准确性，需要花费大量的时间制作大量的训练数据，实现过程复杂，且在识别时计算结果花费时间较多，实时性不高。Most existing human sitting posture recognition technologies are based on machine learning. For example, the Chinese patent with application publication number CN111414780A discloses a sitting posture recognition method that collects images of the user's sitting posture in real time, identifies key points of human body features, and calculates current posture data from them. The key point data include eye, mouth, neck and shoulder coordinates; the current posture data include the current head tilt angle, the current shoulder tilt angle, the height difference between neck and face, and the height difference between shoulder and face. Finally, the current posture data are compared with standard posture data to determine whether the current posture is abnormal. The standard posture data comprise a standard head tilt angle threshold, a standard shoulder tilt angle threshold, a standard eyes-too-near difference ratio threshold and a standard lying-on-the-desk difference ratio threshold, all four obtained by training a supervised machine-learning classifier on large amounts of data. Such supervised classification algorithms place high demands on hardware computing power, require large amounts of training data to ensure accuracy, and take a certain amount of time to compute results. Therefore, the above method has high hardware requirements and high cost; guaranteeing its accuracy requires a great deal of time to produce a large amount of training data, the implementation process is complicated, recognition takes a long time, and the real-time performance is poor.

发明内容Summary of the invention

本发明所要解决的技术问题是提供一种实现过程简单,对硬件的计算能力要求较低,实用性较高,成本较低,实时性较高,且交互性良好的基于关键点检测的人体坐姿识别方法。The technical problem to be solved by the present invention is to provide a human sitting posture recognition method based on key point detection, which has a simple implementation process, low requirements on hardware computing power, high practicality, low cost, high real-time performance, and good interactivity.

本发明解决上述技术问题所采用的技术方案为:一种基于关键点检测的人体坐姿识别方法,包括以下步骤:The technical solution adopted by the present invention to solve the above technical problems is: a method for human sitting posture recognition based on key point detection, comprising the following steps:

(1)、配备一台预存有图像处理程序的PC机、一个红外测距传感器以及一个摄像头,将红外测距传感器与摄像头组装并和PC机连接,红外测距传感器与摄像头在同一竖直平面上且距离不超过5厘米,以摄像头实时采集的画面左上角为坐标原点,水平向右方向为x轴正方向,垂直向下方向为y轴正方向,建立坐标系,PC机内还预先存储有四个判定阈值T1、T2、T3和T4,这四个判定阈值采用以下方法预先确定:(1) A PC with a pre-stored image processing program, an infrared distance sensor and a camera are equipped. The infrared distance sensor and the camera are assembled and connected to the PC. The infrared distance sensor and the camera are on the same vertical plane and the distance between them is no more than 5 cm. The upper left corner of the image captured by the camera in real time is the origin of the coordinate system, the horizontal right direction is the positive direction of the x-axis, and the vertical downward direction is the positive direction of the y-axis. A coordinate system is established. Four judgment thresholds T 1 , T 2 , T 3 and T 4 are also pre-stored in the PC. These four judgment thresholds are pre-determined by the following method:

步骤1-1、将坐姿行为分为距离过近、距离过远、头部左偏、头部右偏、身体左倾、身体右倾、肩膀不平行、脊椎弯曲以及正确坐姿9种类别;Step 1-1, divide the sitting posture into 9 categories: too close distance, too far distance, head tilted to the left, head tilted to the right, body leaning to the left, body leaning to the right, shoulders not parallel, spine curvature and correct sitting posture;

步骤1-2、选取身高在120cm~180cm之间的120名女性以及身高在130cm~190cm之间的120名男性作为预检人员,其中,120cm~180cm每10cm为一档,共分为6档,每一档女性为20人,130cm~190cm每10cm为一档,共分为6档,每一档男性为20人;将240名预检人员随机编号为1~240,将编号为i的预检人员称为第i个预检人员,i=1,2,…,240;Step 1-2, select 120 females with a height between 120 cm and 180 cm and 120 males with a height between 130 cm and 190 cm as pre-inspection personnel, wherein 120 cm to 180 cm is divided into 6 levels every 10 cm, each level has 20 females, and 130 cm to 190 cm is divided into 6 levels every 10 cm, each level has 20 males; 240 pre-inspection personnel are randomly numbered from 1 to 240, and the pre-inspection personnel numbered i is called the i-th pre-inspection personnel, i = 1, 2, ..., 240;

步骤1-3、对240名预检人员分别进行预检测,具体过程为:Step 1-3: Conduct pre-tests on 240 pre-test personnel. The specific process is as follows:

S1、摄像头正对预检人员脸部,两者距离为30~50厘米,预检人员脸部和肩膀不能被遮挡;S1. The camera is facing the face of the pre-inspection personnel, with a distance of 30 to 50 cm. The face and shoulders of the pre-inspection personnel cannot be blocked;

S2、每个预检人员在摄像头前依次采取正确坐姿、头部左偏、头部右偏、身体左倾、身体右倾、脊椎弯曲以及肩膀不平行共7种坐姿,摄像头拍摄预检人员这7种坐姿的图像并发送给PC机,其中这7种坐姿按顺序依次编号为1-7,将编号为j的坐姿称为第j种坐姿,j=1,2,…,7,正确坐姿为腰背自然挺直,胸部张开,双肩放平,颈、胸和腰都要保持平直,除正确坐姿以外的其他6种坐姿按个人平时习惯实施;S2. Each pre-inspection person takes seven sitting postures in front of the camera, namely, correct sitting posture, head tilted to the left, head tilted to the right, body tilted to the left, body tilted to the right, spine bent, and shoulders not parallel. The camera takes images of the seven sitting postures of the pre-inspection person and sends them to the PC. The seven sitting postures are numbered 1-7 in sequence. The sitting posture numbered j is called the jth sitting posture, j=1, 2, ..., 7. The correct sitting posture is to keep the back naturally straight, the chest open, the shoulders flat, and the neck, chest and waist all kept straight. The other six sitting postures except the correct sitting posture are implemented according to personal usual habits.

S3、在PC机处采用图像处理程序分别获取并记录每个预检人员在7种坐姿下的左眼瞳孔、右眼瞳孔、鼻尖、颈部（两边锁骨连接处的凹点）、左肩和右肩这6个关键点的坐标，得到240组坐标数据，每组坐标数据分别包括一个预检人员在7种坐姿下的左眼瞳孔坐标、右眼瞳孔坐标、鼻尖坐标、颈部坐标、左肩坐标和右肩坐标，将第i个预检人员第j种坐姿下左眼瞳孔的坐标记为(lx_i^j, ly_i^j)，右眼瞳孔的坐标记为(rx_i^j, ry_i^j)，鼻尖的坐标记为(nx_i^j, ny_i^j)，颈部的坐标记为(bx_i^j, by_i^j)，左肩的坐标记为(lsx_i^j, lsy_i^j)，右肩的坐标记为(rsx_i^j, rsy_i^j)；S3. Use an image processing program on the PC to obtain and record the coordinates of the six key points (left pupil, right pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder) of each pre-inspection person in the 7 sitting postures, obtaining 240 sets of coordinate data, where each set includes one pre-inspection person's left pupil, right pupil, nose tip, neck, left shoulder and right shoulder coordinates in the 7 sitting postures. For the i-th pre-inspection person in the j-th sitting posture, the coordinates of the left pupil are marked as (lx_i^j, ly_i^j), the right pupil as (rx_i^j, ry_i^j), the nose tip as (nx_i^j, ny_i^j), the neck as (bx_i^j, by_i^j), the left shoulder as (lsx_i^j, lsy_i^j) and the right shoulder as (rsx_i^j, rsy_i^j);

S4、将第i个预检人员身体左倾时左眼在x轴上的左偏量作为左倾偏量,记为ΔLi、身体右倾时右眼在x轴上的右偏量作为右倾偏量,记为ΔRi,脊椎弯曲时颈部在y轴上的偏移量作为颈部偏量,记为ΔCi,肩膀不平行时两个肩部关键点在y轴上的差值作为肩膀偏量,记为ΔHi,采用公式(1)、(2)、(3)、(4)分别计算得到ΔLi、ΔRi、ΔCi和ΔHiS4. The left deviation of the left eye on the x-axis when the body of the i-th pre-inspection person leans to the left is taken as the left deviation, recorded as ΔL i ; the right deviation of the right eye on the x-axis when the body leans to the right is taken as the right deviation, recorded as ΔR i ; the displacement of the neck on the y-axis when the spine is bent is taken as the neck deviation, recorded as ΔC i ; the difference between the two shoulder key points on the y-axis when the shoulders are not parallel is taken as the shoulder deviation, recorded as ΔH i ; ΔL i , ΔR i , ΔC i and ΔH i are calculated using formulas (1), (2), (3) and (4) respectively:

ΔL_i = lx_i^1 − lx_i^4    (1)

ΔR_i = rx_i^5 − rx_i^1    (2)

ΔC_i = by_i^6 − by_i^1    (3)

ΔH_i = |lsy_i^7 − rsy_i^7|    (4)

式(4)中,||为取绝对值符号;In formula (4), || is the absolute value symbol;
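As a concrete illustration of formulas (1)–(4), the four deviations can be computed directly from the recorded key point coordinates, using the posture numbering of step S2 (1 = correct, 4 = body leaning left, 5 = body leaning right, 6 = spine bent, 7 = shoulders not parallel). The sketch below is illustrative only: the data layout (a dict mapping the posture index j to named key points) and the function name are assumptions for exposition, not part of the patent.

```python
# Sketch of formulas (1)-(4): per-subject deviations between the correct
# posture (j = 1) and each incorrect posture. Key points are (x, y) pixel
# coordinates in the image frame (origin top-left, y pointing down).

def deviations(kp):
    """kp[j] maps a posture index j (1..7) to a dict of key points:
    'le' left pupil, 're' right pupil, 'neck' neck,
    'ls' left shoulder, 'rs' right shoulder, each an (x, y) tuple."""
    dL = kp[1]['le'][0] - kp[4]['le'][0]       # (1) left lean: left pupil shifts left (x decreases)
    dR = kp[5]['re'][0] - kp[1]['re'][0]       # (2) right lean: right pupil shifts right (x increases)
    dC = kp[6]['neck'][1] - kp[1]['neck'][1]   # (3) bent spine: neck drops (y increases downward)
    dH = abs(kp[7]['ls'][1] - kp[7]['rs'][1])  # (4) uneven shoulders: |y difference|
    return dL, dR, dC, dH
```

Each subject contributes one value per deviation type, yielding the 240-element sets used in step S6.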

S5、按坐姿类别对240组坐标数据进行整合后按照7种坐姿类别重新分为7组，得到7组坐姿数据，每组坐姿数据分别包括240名预检人员在该坐姿下的左眼瞳孔坐标、右眼瞳孔坐标、鼻尖坐标、颈部坐标、左肩坐标和右肩坐标；S5. Integrate the 240 sets of coordinate data by sitting posture category and regroup them into 7 sets according to the 7 posture categories, obtaining 7 sets of posture data, where each set includes the left pupil, right pupil, nose tip, neck, left shoulder and right shoulder coordinates of the 240 pre-inspection persons in that posture;

S6、分别确定判定阈值T1、T2、T3和T4的值,其中,确定判定阈值T1的具体过程为:S6. Determine the values of the determination thresholds T 1 , T 2 , T 3 and T 4 respectively, wherein the specific process of determining the determination threshold T 1 is as follows:

A、采用ΔL1~ΔL240这240个左倾偏量构成原始数据组,将原始数据组作为第0代数据组;A. Using 240 left-leaning deviations from ΔL 1 to ΔL 240 to form an original data set, and taking the original data set as the 0th generation data set;

B、设定迭代变量t,对t进行初始化,令t=1;B. Set the iteration variable t and initialize t to 1;

C、进行第t次迭代更新,得到第t代数据组,具体过程为:C. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

C1、计算第t-1代数据组的峰度K_{t-1}、均值μ_{t-1}和标准差σ_{t-1}；C1. Calculate the kurtosis K_{t-1}, the mean μ_{t-1} and the standard deviation σ_{t-1} of the (t-1)-th generation data set;

C2、判断K_{t-1}是否大于3：如果K_{t-1}不大于3，且K_{t-1}与3的差值不大于1，则令判定阈值T 1 =μ_{t-1}+3σ_{t-1}；如果K_{t-1}不大于3，且K_{t-1}与3的差值大于1，则将第t-1代数据组中最大的左倾偏量的值作为判定阈值T 1 ；如果K_{t-1}大于3，则计算第t-1代数据组中每个左倾偏量与μ_{t-1}之差的平方值，将最大平方值对应的左倾偏量从第t-1代数据组中删除，得到第t代数据组，然后采用t的当前值加1的和更新t的取值，返回步骤C，进行下一次迭代，直至K_{t-1}不大于3；C2. Judge whether K_{t-1} is greater than 3. If K_{t-1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T 1 = μ_{t-1} + 3σ_{t-1}; if K_{t-1} is not greater than 3 and its difference from 3 is greater than 1, take the largest left-lean deviation in the (t-1)-th generation data set as T 1 ; if K_{t-1} is greater than 3, compute the squared difference between each left-lean deviation in the (t-1)-th generation data set and μ_{t-1}, delete the left-lean deviation with the largest squared difference from the (t-1)-th generation data set to obtain the t-th generation data set, update t to t+1, and return to step C for the next iteration, until K_{t-1} is not greater than 3;
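Steps A–C (and likewise D–F, G–I, J–L) amount to an iterative outlier-trimming procedure: the sample farthest from the mean is removed until the kurtosis (Pearson definition, equal to 3 for a normal distribution) drops to 3 or below, and the threshold is then read off. The sketch below is non-normative; in particular, the closed-form branch uses the conventional three-sigma bound μ + 3σ as an assumption, since the patent's formula image is not reproduced in this text.

```python
import statistics

def kurtosis(xs):
    # Pearson (non-excess) kurtosis: 3.0 for a normal distribution.
    m = statistics.fmean(xs)
    s2 = statistics.fmean((x - m) ** 2 for x in xs)   # population variance
    m4 = statistics.fmean((x - m) ** 4 for x in xs)   # fourth central moment
    return m4 / (s2 ** 2)

def decide_threshold(data):
    """Iteratively trim the most extreme sample until kurtosis <= 3,
    then pick the threshold (sketch of steps A-C / D-F / G-I / J-L)."""
    xs = list(data)  # generation-0 data set
    while True:
        k = kurtosis(xs)
        mu = statistics.fmean(xs)
        sigma = statistics.pstdev(xs)
        if k <= 3:
            # near-normal (3 - k <= 1): closed-form bound, mu + 3*sigma assumed;
            # otherwise fall back to the largest remaining deviation
            return mu + 3 * sigma if 3 - k <= 1 else max(xs)
        # remove the sample with the largest squared distance to the mean
        xs.remove(max(xs, key=lambda x: (x - mu) ** 2))
```

Running `decide_threshold` on each of the four deviation sets (ΔL, ΔR, ΔC, ΔH) yields T 1 –T 4 respectively.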

确定判定阈值T2的具体过程为:The specific process of determining the judgment threshold T2 is:

D、采用ΔR1~ΔR240这240个右倾偏量构成原始数据组,将原始数据组作为第0代数据组;D. Using 240 right-leaning deviations from ΔR 1 to ΔR 240 to form an original data set, and taking the original data set as the 0th generation data set;

E、设定迭代变量t,对t进行初始化,令t=1;E. Set the iteration variable t and initialize t to 1;

F、进行第t次迭代更新,得到第t代数据组,具体过程为:F. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

F1、计算第t-1代数据组的峰度K_{t-1}、均值μ_{t-1}和标准差σ_{t-1}；F1. Calculate the kurtosis K_{t-1}, the mean μ_{t-1} and the standard deviation σ_{t-1} of the (t-1)-th generation data set;

F2、判断K_{t-1}是否大于3：如果K_{t-1}不大于3，且K_{t-1}与3的差值不大于1，则令判定阈值T 2 =μ_{t-1}+3σ_{t-1}；如果K_{t-1}不大于3，且K_{t-1}与3的差值大于1，则将第t-1代数据组中最大的右倾偏量的值作为判定阈值T 2 ；如果K_{t-1}大于3，则计算第t-1代数据组中每个右倾偏量与μ_{t-1}之差的平方值，将最大平方值对应的右倾偏量从第t-1代数据组中删除，得到第t代数据组，然后采用t的当前值加1的和更新t的取值，返回步骤F，进行下一次迭代，直至K_{t-1}不大于3；F2. Judge whether K_{t-1} is greater than 3. If K_{t-1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T 2 = μ_{t-1} + 3σ_{t-1}; if K_{t-1} is not greater than 3 and its difference from 3 is greater than 1, take the largest right-lean deviation in the (t-1)-th generation data set as T 2 ; if K_{t-1} is greater than 3, compute the squared difference between each right-lean deviation in the (t-1)-th generation data set and μ_{t-1}, delete the right-lean deviation with the largest squared difference from the (t-1)-th generation data set to obtain the t-th generation data set, update t to t+1, and return to step F for the next iteration, until K_{t-1} is not greater than 3;

确定判定阈值T3的具体过程为:The specific process of determining the judgment threshold T3 is:

G、采用ΔC1~ΔC240这240个颈部偏量构成原始数据组,将原始数据组作为第0代数据组;G. Using 240 neck deflections from ΔC 1 to ΔC 240 to form an original data set, and using the original data set as the 0th generation data set;

H、设定迭代变量t,对t进行初始化,令t=1;H. Set the iteration variable t and initialize t to 1;

I、进行第t次迭代更新,得到第t代数据组,具体过程为:I. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

I1、计算第t-1代数据组的峰度K_{t-1}、均值μ_{t-1}和标准差σ_{t-1}；I1. Calculate the kurtosis K_{t-1}, the mean μ_{t-1} and the standard deviation σ_{t-1} of the (t-1)-th generation data set;

I2、判断K_{t-1}是否大于3：如果K_{t-1}不大于3，且K_{t-1}与3的差值不大于1，则令判定阈值T 3 =μ_{t-1}+3σ_{t-1}；如果K_{t-1}不大于3，且K_{t-1}与3的差值大于1，则将第t-1代数据组中最大的颈部偏量的值作为判定阈值T 3 ；如果K_{t-1}大于3，则计算第t-1代数据组中每个颈部偏量与μ_{t-1}之差的平方值，将最大平方值对应的颈部偏量从第t-1代数据组中删除，得到第t代数据组，然后采用t的当前值加1的和更新t的取值，返回步骤I，进行下一次迭代，直至K_{t-1}不大于3；I2. Judge whether K_{t-1} is greater than 3. If K_{t-1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T 3 = μ_{t-1} + 3σ_{t-1}; if K_{t-1} is not greater than 3 and its difference from 3 is greater than 1, take the largest neck deviation in the (t-1)-th generation data set as T 3 ; if K_{t-1} is greater than 3, compute the squared difference between each neck deviation in the (t-1)-th generation data set and μ_{t-1}, delete the neck deviation with the largest squared difference from the (t-1)-th generation data set to obtain the t-th generation data set, update t to t+1, and return to step I for the next iteration, until K_{t-1} is not greater than 3;

确定判定阈值T4的具体过程为:The specific process of determining the decision threshold T4 is:

J、采用ΔH1~ΔH240这240个肩膀偏量构成原始数据组,将原始数据组作为第0代数据组;J. Using 240 shoulder deviations from ΔH 1 to ΔH 240 to form an original data set, and using the original data set as the 0th generation data set;

K、设定迭代变量t,对t进行初始化,令t=1;K. Set the iteration variable t and initialize t to 1;

L、进行第t次迭代更新,得到第t代数据组,具体过程为:L. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

L1、计算第t-1代数据组的峰度K_{t-1}、均值μ_{t-1}和标准差σ_{t-1}；L1. Calculate the kurtosis K_{t-1}, the mean μ_{t-1} and the standard deviation σ_{t-1} of the (t-1)-th generation data set;

L2、判断K_{t-1}是否大于3：如果K_{t-1}不大于3，且K_{t-1}与3的差值不大于1，则令判定阈值T 4 =μ_{t-1}+3σ_{t-1}；如果K_{t-1}不大于3，且K_{t-1}与3的差值大于1，则将第t-1代数据组中最大的肩膀偏量的值作为判定阈值T 4 ；如果K_{t-1}大于3，则计算第t-1代数据组中每个肩膀偏量与μ_{t-1}之差的平方值，将最大平方值对应的肩膀偏量从第t-1代数据组中删除，得到第t代数据组，然后采用t的当前值加1的和更新t的取值，返回步骤L，进行下一次迭代，直至K_{t-1}不大于3；L2. Judge whether K_{t-1} is greater than 3. If K_{t-1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T 4 = μ_{t-1} + 3σ_{t-1}; if K_{t-1} is not greater than 3 and its difference from 3 is greater than 1, take the largest shoulder deviation in the (t-1)-th generation data set as T 4 ; if K_{t-1} is greater than 3, compute the squared difference between each shoulder deviation in the (t-1)-th generation data set and μ_{t-1}, delete the shoulder deviation with the largest squared difference from the (t-1)-th generation data set to obtain the t-th generation data set, update t to t+1, and return to step L for the next iteration, until K_{t-1} is not greater than 3;

(2)、当需对使用者进行坐姿识别时,摄像头安装在需要进行坐姿识别的对应位置处,先进行使用者的参考数据采集,具体过程为:使用者先采用正确坐姿坐在摄像头前,摄像头正对使用者脸部,两者距离为30~50厘米,使用者脸部和肩膀不能被遮挡,摄像头拍摄使用者正确坐姿的图像发送给PC机,PC机采用其内预存的图像处理程序对使用者正确坐姿的图像进行处理,确定并记录正确坐姿下使用者的左眼瞳孔、右眼瞳孔、鼻尖、颈部(两边锁骨连接处的凹点)、左肩和右肩这6个关键点的坐标,将使用者正确坐姿下左眼瞳孔的坐标记为(lx,ly),右眼瞳孔的坐标记为(rx,ry)、鼻尖的坐标记为(nx,ny)、颈部的坐标记为(bx,by)、左肩的坐标记为(lsx,lsy)、右肩的坐标记为(rsx,rsy);(2) When the user's sitting posture needs to be recognized, the camera is installed at the corresponding position where the sitting posture recognition needs to be performed, and the user's reference data is first collected. The specific process is as follows: the user first sits in front of the camera with a correct sitting posture, with the camera facing the user's face, the distance between the two is 30 to 50 cm, and the user's face and shoulders cannot be blocked. The camera takes an image of the user's correct sitting posture and sends it to the PC. The PC uses the image processing program pre-stored therein to process the image of the user's correct sitting posture, determine and record the coordinates of the user's left pupil, right pupil, nose tip, neck (the concave point at the connection of the two clavicles), left shoulder and right shoulder in the correct sitting posture, and mark the coordinates of the left pupil of the user in the correct sitting posture as (lx, ly), the coordinates of the right pupil as (rx, ry), the coordinates of the nose tip as (nx, ny), the coordinates of the neck as (bx, by), the coordinates of the left shoulder as (lsx, lsy), and the coordinates of the right shoulder as (rsx, rsy);

(3)确定使用者的参考数据后,对使用者的坐姿进行实时识别,具体过程为:(3) After determining the user's reference data, the user's sitting posture is recognized in real time. The specific process is as follows:

步骤3-1、PC机每隔2秒从摄像头处采集一次使用者坐姿的图像，并采用图像处理程序对使用者坐姿的实时图像进行处理，确定并记录当前坐姿下使用者的左眼瞳孔、右眼瞳孔、鼻尖、颈部（两边锁骨连接处的凹点）、左肩和右肩这6个关键点的坐标，同时接收红外测距传感器测得的使用者与摄像头的距离，将使用者当前坐姿下左眼瞳孔的坐标记为(lx N , ly N )，右眼瞳孔的坐标记为(rx N , ry N )、鼻尖的坐标记为(nx N , ny N )、颈部的坐标记为(bx N , by N )、左肩的坐标记为(lsx N , lsy N )、右肩的坐标记为(rsx N , rsy N )，使用者与摄像头的距离记为D，将左眼瞳孔关键点与鼻尖关键点相连接且两者的连线记为线段a，将右眼瞳孔关键点与鼻尖关键点相连接且两者的连线记为线段b，将鼻尖关键点与颈部关键点相连接且两者的连线记为线段c，将左肩关键点与颈部关键点相连接且两者的连线记为线段d，将右肩关键点与颈部关键点相连接且两者的连线记为线段e，将线段c和线段d之间的夹角记为角α，线段c和线段e之间的夹角记为角β；Step 3-1: The PC collects an image of the user's sitting posture from the camera every 2 seconds and processes it with the image processing program, determining and recording the coordinates of the user's left pupil, right pupil, nose tip, neck (the concave point at the junction of the two clavicles), left shoulder and right shoulder in the current posture, while also receiving the user-to-camera distance measured by the infrared ranging sensor. The coordinates of the left pupil in the current posture are marked as (lx N , ly N ), the right pupil as (rx N , ry N ), the nose tip as (nx N , ny N ), the neck as (bx N , by N ), the left shoulder as (lsx N , lsy N ) and the right shoulder as (rsx N , rsy N ), and the user-to-camera distance is recorded as D. The line connecting the left pupil key point and the nose tip key point is recorded as segment a, the line connecting the right pupil key point and the nose tip key point as segment b, the line connecting the nose tip key point and the neck key point as segment c, the line connecting the left shoulder key point and the neck key point as segment d, and the line connecting the right shoulder key point and the neck key point as segment e; the angle between segments c and d is recorded as angle α, and the angle between segments c and e as angle β;
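The angles α and β defined in step 3-1 can be computed from the key point coordinates with elementary vector geometry: segments c and d (and c and e) share the neck key point as a common endpoint, so both angles sit at the neck vertex. The sketch below is illustrative; the function names are assumptions.

```python
import math

def angle_at(vertex, p, q):
    """Angle in degrees at `vertex` between segments vertex->p and vertex->q."""
    v1 = (p[0] - vertex[0], p[1] - vertex[1])
    v2 = (q[0] - vertex[0], q[1] - vertex[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))  # clamp for safety

def alpha_beta(nose, neck, left_shoulder, right_shoulder):
    """alpha = angle between segment c (neck-nose) and segment d (neck-left shoulder);
    beta  = angle between segment c and segment e (neck-right shoulder)."""
    return (angle_at(neck, nose, left_shoulder),
            angle_at(neck, nose, right_shoulder))
```

For an upright posture the two angles are roughly equal; a head tilt shrinks one of them, which is what the 0°–70° tests in step 3-2 detect.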

步骤3-2、根据步骤3-1的实时数据对使用者坐姿进行实时判定，具体判定标准为：Step 3-2: Judge the user's sitting posture in real time according to the real-time data obtained in step 3-1. The specific judgment criteria are:

如果D小于30厘米,则判定为距离过近;If D is less than 30 cm, it is judged as too close;

如果D大于50厘米,则判定为距离过远;If D is greater than 50 cm, it is judged to be too far away;

如果α大于0°且小于等于70°,则判定当前坐姿为头部左偏;If α is greater than 0° and less than or equal to 70°, the current sitting posture is judged as head tilted to the left;

如果β大于0°且小于等于70°,则判定当前坐姿为头部右偏;If β is greater than 0° and less than or equal to 70°, the current sitting posture is judged as head right tilt;

如果lx-lxN>T1,则判定当前坐姿为身体左倾;If lx-lx N >T 1 , the current sitting posture is determined to be a left leaning posture;

如果rxN-rx>T2,则判定当前坐姿为身体右倾;If rx N -rx>T 2 , the current sitting posture is determined to be right-leaning;

如果|lsyN-rsyN|>T4,则判定当前坐姿为肩膀不平行;If |lsy N -rsy N |>T 4 , the current sitting posture is judged as shoulders not being parallel;

如果byN-by>T3,则判定当前坐姿为脊椎弯曲;If by N -by>T 3 , the current sitting posture is determined to be spinal curvature;

如果为上述情况以外的其他情况,则判定当前坐姿为正确坐姿;If it is any other situation than the above situation, the current sitting posture is determined to be the correct sitting posture;

步骤3-3、如果连续3次判定为同一不正确的坐姿,则语音进行播报提醒使用者,当两种以上不正确坐姿连续3次同时出现时,语音播报时播报优先级别最高的坐姿,8种不正确坐姿的优先级从高到低依次为距离过近、距离过远、头部左偏、头部右偏、身体左倾、身体右倾、肩膀不平行和脊椎弯曲。Step 3-3, if the same incorrect sitting posture is determined for three consecutive times, a voice broadcast will be used to remind the user. When two or more incorrect sitting postures appear for three consecutive times at the same time, the sitting posture with the highest priority will be broadcasted. The priorities of the eight incorrect sitting postures from high to low are: distance too close, distance too far, head tilted to the left, head tilted to the right, body leaning to the left, body leaning to the right, shoulders not parallel and spine curvature.
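The per-frame rules of step 3-2 and the three-consecutive-frame alert with priority ordering of step 3-3 can be sketched as follows. The label names and data layout are illustrative assumptions; the thresholds T 1 –T 4 and the reference coordinates come from steps (1) and (2).

```python
# Priority order from step 3-3, highest first (labels are illustrative).
PRIORITY = ["too_close", "too_far", "head_left", "head_right",
            "body_left", "body_right", "shoulders_uneven", "spine_bent"]

def classify(D, alpha, beta, cur, ref, T1, T2, T3, T4):
    """Return the set of incorrect-posture labels for one frame (step 3-2).
    cur/ref: dicts with 'lx', 'rx', 'by', 'lsy', 'rsy' for the current
    and the reference (correct) posture; D is the camera distance in cm."""
    issues = set()
    if D < 30: issues.add("too_close")
    if D > 50: issues.add("too_far")
    if 0 < alpha <= 70: issues.add("head_left")
    if 0 < beta <= 70: issues.add("head_right")
    if ref["lx"] - cur["lx"] > T1: issues.add("body_left")
    if cur["rx"] - ref["rx"] > T2: issues.add("body_right")
    if abs(cur["lsy"] - cur["rsy"]) > T4: issues.add("shoulders_uneven")
    if cur["by"] - ref["by"] > T3: issues.add("spine_bent")
    return issues  # empty set -> correct posture

def alert(last3):
    """Step 3-3: given the issue sets of the last 3 frames, return the label
    to announce (highest-priority issue present in all 3), or None."""
    persistent = set.intersection(*last3)
    for label in PRIORITY:
        if label in persistent:
            return label
    return None
```

With one frame every 2 seconds, three consecutive detections correspond to roughly 6 seconds of sustained incorrect posture before a voice reminder is issued.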

与现有技术相比，本发明的优点在于通过一台预存有图像处理程序的PC机、一个红外测距传感器以及一个摄像头搭建硬件环境，以摄像头实时采集的画面左上角为坐标原点，水平向右方向为x轴正方向，垂直向下方向为y轴正方向，建立坐标系，PC机内还预先存储有四个判定阈值T1、T2、T3和T4，四个判定阈值预先通过关键点坐标确定，在对人体坐姿进行检测时，先获取人体正确坐姿下的关键点坐标作为基准，然后结合人体实时关键点坐标和四个判定阈值实时判断人体坐姿，另外，使用红外测距传感器来获取使用者与摄像头之间的距离，红外测距传感器价格低廉，响应速度快，且当连续3次判定为同一不正确的坐姿，则语音进行播报提醒使用者，当两种以上不正确坐姿连续3次同时出现时，语音播报时播报优先级别最高的坐姿，本发明中四个判定阈值的确定方法相对于现有的机器学习，不需要制作大量的训练数据，在保证高准确率的同时，简化了计算的过程，缩短了计算所需的时间，由此本发明实现过程简单，对硬件的计算能力要求较低，后续可移植到嵌入式设备上，实用性较高，成本较低，实时性较高，且交互性良好。Compared with the prior art, the advantages of the present invention are as follows: a hardware environment is built from a PC pre-stored with an image processing program, an infrared ranging sensor and a camera; a coordinate system is established with the upper-left corner of the picture captured by the camera in real time as the origin, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds T 1 , T 2 , T 3 and T 4 , determined in advance from key point coordinates, are pre-stored in the PC. When detecting a sitting posture, the key point coordinates of the correct sitting posture are first obtained as a reference, and the posture is then judged in real time by combining the real-time key point coordinates with the four judgment thresholds. In addition, an infrared ranging sensor, which is inexpensive and responds quickly, is used to obtain the distance between the user and the camera. When the same incorrect sitting posture is judged three times in a row, a voice broadcast reminds the user; when two or more incorrect sitting postures appear simultaneously three times in a row, the posture with the highest priority is announced. Compared with existing machine learning, the threshold determination method of the present invention does not require producing a large amount of training data; while ensuring high accuracy, it simplifies the calculation process and shortens the calculation time. The implementation process is therefore simple, the requirements on hardware computing power are low, the method can later be ported to embedded devices, and it offers high practicality, low cost, high real-time performance and good interactivity.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1为本发明的基于关键点检测的人体坐姿识别方法的各关键点、关键点连线以及连线间夹角示意图。FIG1 is a schematic diagram of the key points, key point connecting lines and angles between connecting lines of the human sitting posture recognition method based on key point detection of the present invention.

具体实施方式DETAILED DESCRIPTION

以下结合附图实施例对本发明作进一步详细描述。The present invention is further described in detail below with reference to the accompanying drawings.

实施例:一种基于关键点检测的人体坐姿识别方法,包括以下步骤:Embodiment: A method for human sitting posture recognition based on key point detection comprises the following steps:

(1)、配备一台预存有图像处理程序的PC机、一个红外测距传感器以及一个摄像头,将红外测距传感器与摄像头组装并和PC机连接,红外测距传感器与摄像头在同一竖直平面上且距离不超过5厘米,以摄像头实时采集的画面左上角为坐标原点,水平向右方向为x轴正方向,垂直向下方向为y轴正方向,建立坐标系,PC机内还预先存储有四个判定阈值T1、T2、T3和T4,这四个判定阈值采用以下方法预先确定:(1) A PC with a pre-stored image processing program, an infrared distance sensor and a camera are equipped. The infrared distance sensor and the camera are assembled and connected to the PC. The infrared distance sensor and the camera are on the same vertical plane and the distance between them is no more than 5 cm. The upper left corner of the image captured by the camera in real time is the origin of the coordinate system. The horizontal right direction is the positive direction of the x-axis and the vertical downward direction is the positive direction of the y-axis. A coordinate system is established. Four judgment thresholds T 1 , T 2 , T 3 and T 4 are also pre-stored in the PC. These four judgment thresholds are pre-determined by the following method:

步骤1-1、将坐姿行为分为距离过近、距离过远、头部左偏、头部右偏、身体左倾、身体右倾、肩膀不平行、脊椎弯曲以及正确坐姿9种类别;Step 1-1, divide the sitting posture into 9 categories: too close distance, too far distance, head tilted to the left, head tilted to the right, body leaning to the left, body leaning to the right, shoulders not parallel, spine curvature and correct sitting posture;

步骤1-2、选取身高在120cm~180cm之间的120名女性以及身高在130cm~190cm之间的120名男性作为预检人员,其中,120cm~180cm每10cm为一档,共分为6档,每一档女性为20人,130cm~190cm每10cm为一档,共分为6档,每一档男性为20人;将240名预检人员随机编号为1~240,将编号为i的预检人员称为第i个预检人员,i=1,2,…,240;Step 1-2, select 120 females with a height between 120 cm and 180 cm and 120 males with a height between 130 cm and 190 cm as pre-inspection personnel, wherein 120 cm to 180 cm is divided into 6 levels every 10 cm, each level has 20 females, and 130 cm to 190 cm is divided into 6 levels every 10 cm, each level has 20 males; randomly number the 240 pre-inspection personnel from 1 to 240, and the pre-inspection personnel numbered i is called the i-th pre-inspection personnel, i = 1, 2, ..., 240;

步骤1-3、对240名预检人员分别进行预检测,具体过程为:Step 1-3: Conduct pre-tests on 240 pre-test personnel. The specific process is as follows:

S1、摄像头正对预检人员脸部,两者距离为30~50厘米,预检人员脸部和肩膀不能被遮挡;S1. The camera is facing the face of the pre-inspection personnel, with a distance of 30 to 50 cm. The face and shoulders of the pre-inspection personnel cannot be blocked;

S2、每个预检人员在摄像头前依次采取正确坐姿、头部左偏、头部右偏、身体左倾、身体右倾、脊椎弯曲以及肩膀不平行共7种坐姿,摄像头拍摄预检人员这7种坐姿的图像并发送给PC机,其中这7种坐姿按顺序依次编号为1-7,将编号为j的坐姿称为第j种坐姿,j=1,2,…,7,正确坐姿为腰背自然挺直,胸部张开,双肩放平,颈、胸和腰都要保持平直,除正确坐姿以外的其他6种坐姿按个人平时习惯实施;S2. Each pre-inspection person takes seven sitting postures in front of the camera, namely, correct sitting posture, head tilted to the left, head tilted to the right, body tilted to the left, body tilted to the right, spine bent, and shoulders not parallel. The camera takes images of the seven sitting postures of the pre-inspection person and sends them to the PC. The seven sitting postures are numbered 1-7 in sequence. The sitting posture numbered j is called the jth sitting posture, j=1, 2, ..., 7. The correct sitting posture is to keep the back naturally straight, the chest open, the shoulders flat, and the neck, chest and waist all kept straight. The other six sitting postures except the correct sitting posture are implemented according to personal usual habits.

S3. On the PC, the image processing program obtains and records, for each pre-inspection person in each of the 7 sitting postures, the coordinates of 6 key points: left-eye pupil, right-eye pupil, nose tip, neck (the hollow where the two clavicles meet), left shoulder and right shoulder, giving 240 sets of coordinate data; each set contains one pre-inspection person's left-eye pupil, right-eye pupil, nose-tip, neck, left-shoulder and right-shoulder coordinates in the 7 postures. Denote the left-eye pupil coordinates of the i-th pre-inspection person in the j-th posture as (lx_i^j, ly_i^j), the right-eye pupil coordinates as (rx_i^j, ry_i^j), the nose-tip coordinates as (nx_i^j, ny_i^j), the neck coordinates as (bx_i^j, by_i^j), the left-shoulder coordinates as (lsx_i^j, lsy_i^j) and the right-shoulder coordinates as (rsx_i^j, rsy_i^j), the subscript i indexing the person and the superscript j the posture;

S4. For the i-th pre-inspection person, take the leftward shift of the left eye along the x-axis when the body leans left as the left-lean deviation, denoted ΔL_i; the rightward shift of the right eye along the x-axis when the body leans right as the right-lean deviation, denoted ΔR_i; the shift of the neck along the y-axis when the spine is bent as the neck deviation, denoted ΔC_i; and the difference along the y-axis between the two shoulder key points when the shoulders are uneven as the shoulder deviation, denoted ΔH_i. They are computed by formulas (1), (2), (3) and (4) respectively:

ΔL_i = lx_i^1 − lx_i^4    (1)

ΔR_i = rx_i^5 − rx_i^1    (2)

ΔC_i = by_i^6 − by_i^1    (3)

ΔH_i = |lsy_i^7 − rsy_i^7|    (4)

In formula (4), |·| denotes the absolute value;
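Once the six key points have been extracted for each posture, the four deviations are simple coordinate differences. The sketch below assumes the posture numbering of S2 (1 = correct, 4 = body leaning left, 5 = body leaning right, 6 = spine bent, 7 = shoulders uneven) and stores each posture's key points as a dict of (x, y) tuples in image coordinates (y grows downward); the function and key names are illustrative, not from the patent.

```python
def posture_deviations(kp):
    """kp[j] holds the key points of one pre-inspection person in posture j,
    e.g. kp[1]["left_eye"] == (lx, ly) in image coordinates (y grows downward)."""
    # (1) left-lean deviation: leftward x-shift of the left eye when leaning left
    dL = kp[1]["left_eye"][0] - kp[4]["left_eye"][0]
    # (2) right-lean deviation: rightward x-shift of the right eye when leaning right
    dR = kp[5]["right_eye"][0] - kp[1]["right_eye"][0]
    # (3) neck deviation: downward y-shift of the neck when the spine is bent
    dC = kp[6]["neck"][1] - kp[1]["neck"][1]
    # (4) shoulder deviation: |y difference| of the two shoulders when uneven
    dH = abs(kp[7]["left_shoulder"][1] - kp[7]["right_shoulder"][1])
    return dL, dR, dC, dH
```

Running this over all 240 pre-inspection persons yields the ΔL, ΔR, ΔC and ΔH lists used in S6.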

S5. The 240 sets of coordinate data are regrouped by sitting-posture category into 7 groups, giving 7 groups of posture data; each group contains the left-eye pupil, right-eye pupil, nose-tip, neck, left-shoulder and right-shoulder coordinates of all 240 pre-inspection persons in that posture;

S6、分别确定判定阈值T1、T2、T3和T4的值,其中,确定判定阈值T1的具体过程为:S6. Determine the values of the determination thresholds T 1 , T 2 , T 3 and T 4 respectively, wherein the specific process of determining the determination threshold T 1 is as follows:

A、采用ΔL1~ΔL240这240个左倾偏量构成原始数据组,将原始数据组作为第0代数据组;A. Using 240 left-leaning deviations from ΔL 1 to ΔL 240 to form an original data set, and taking the original data set as the 0th generation data set;

B、设定迭代变量t,对t进行初始化,令t=1;B. Set the iteration variable t and initialize t to 1;

C、进行第t次迭代更新,得到第t代数据组,具体过程为:C. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

C1. Compute the kurtosis K_{t−1}, the mean μ_{t−1} and the standard deviation σ_{t−1} of the (t−1)-th generation data set;

C2. Judge whether K_{t−1} is greater than 3. If K_{t−1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T1 = μ_{t−1} + 3σ_{t−1}; if K_{t−1} is not greater than 3 and its difference from 3 is greater than 1, take the value of the largest left-lean deviation in the (t−1)-th generation data set as the judgment threshold T1; if K_{t−1} is greater than 3, compute the square of the difference between each left-lean deviation in the (t−1)-th generation data set and μ_{t−1}, delete the left-lean deviation corresponding to the largest squared value from the (t−1)-th generation data set to obtain the t-th generation data set, update t with the sum of its current value plus 1, and return to step C for the next iteration, until K_{t−1} is not greater than 3;
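Steps A to C2 amount to iteratively trimming the sample farthest from the mean until the kurtosis of the remaining data is at most 3, then reading off a threshold; the identical routine serves T2, T3 and T4 on the other three deviation lists. A minimal sketch follows; note that the closed-form threshold for the near-normal branch appears only as an image in the source, so the μ + 3σ form used here is an assumption, as is the fallback when the data set shrinks too far.

```python
import statistics

def kurtosis(xs):
    # Pearson (non-excess) kurtosis: E[(x - mean)^4] / sigma^4; equals 3 for a normal law.
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / var ** 2

def derive_threshold(deviations):
    data = list(deviations)              # generation 0
    while len(data) > 2:
        m = statistics.fmean(data)
        k = kurtosis(data)
        if k <= 3:
            if 3 - k <= 1:               # near-normal distribution
                return m + 3 * statistics.pstdev(data)   # ASSUMED closed form
            return max(data)             # light-tailed but irregular: take the maximum
        # heavy-tailed: drop the sample with the largest squared distance to the mean
        data.remove(max(data, key=lambda x: (x - m) ** 2))
    return max(data)                     # degenerate fallback, not specified in the source
```

The `k > 3` branch mirrors the patent's deletion step exactly: one outlier is removed per iteration, so termination is guaranteed.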

确定判定阈值T2的具体过程为:The specific process of determining the judgment threshold T2 is:

D、采用ΔR1~ΔR240这240个右倾偏量构成原始数据组,将原始数据组作为第0代数据组;D. Using 240 right-leaning deviations from ΔR 1 to ΔR 240 to form an original data set, and taking the original data set as the 0th generation data set;

E、设定迭代变量t,对t进行初始化,令t=1;E. Set the iteration variable t and initialize t to 1;

F、进行第t次迭代更新,得到第t代数据组,具体过程为:F. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

F1. Compute the kurtosis K_{t−1}, the mean μ_{t−1} and the standard deviation σ_{t−1} of the (t−1)-th generation data set;

F2. Judge whether K_{t−1} is greater than 3. If K_{t−1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T2 = μ_{t−1} + 3σ_{t−1}; if K_{t−1} is not greater than 3 and its difference from 3 is greater than 1, take the value of the largest right-lean deviation in the (t−1)-th generation data set as the judgment threshold T2; if K_{t−1} is greater than 3, compute the square of the difference between each right-lean deviation in the (t−1)-th generation data set and μ_{t−1}, delete the right-lean deviation corresponding to the largest squared value from the (t−1)-th generation data set to obtain the t-th generation data set, update t with the sum of its current value plus 1, and return to step F for the next iteration, until K_{t−1} is not greater than 3;

确定判定阈值T3的具体过程为:The specific process of determining the judgment threshold T3 is:

G、采用ΔC1~ΔC240这240个颈部偏量构成原始数据组,将原始数据组作为第0代数据组;G. Using 240 neck deflections from ΔC 1 to ΔC 240 to form an original data set, and using the original data set as the 0th generation data set;

H、设定迭代变量t,对t进行初始化,令t=1;H. Set the iteration variable t and initialize t to 1;

I、进行第t次迭代更新,得到第t代数据组,具体过程为:I. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

I1. Compute the kurtosis K_{t−1}, the mean μ_{t−1} and the standard deviation σ_{t−1} of the (t−1)-th generation data set;

I2. Judge whether K_{t−1} is greater than 3. If K_{t−1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T3 = μ_{t−1} + 3σ_{t−1}; if K_{t−1} is not greater than 3 and its difference from 3 is greater than 1, take the value of the largest neck deviation in the (t−1)-th generation data set as the judgment threshold T3; if K_{t−1} is greater than 3, compute the square of the difference between each neck deviation in the (t−1)-th generation data set and μ_{t−1}, delete the neck deviation corresponding to the largest squared value from the (t−1)-th generation data set to obtain the t-th generation data set, update t with the sum of its current value plus 1, and return to step I for the next iteration, until K_{t−1} is not greater than 3;

确定判定阈值T4的具体过程为:The specific process of determining the decision threshold T4 is:

J、采用ΔH1~ΔH240这240个肩膀偏量构成原始数据组,将原始数据组作为第0代数据组;J. Using 240 shoulder deviations from ΔH 1 to ΔH 240 to form an original data set, and using the original data set as the 0th generation data set;

K、设定迭代变量t,对t进行初始化,令t=1;K. Set the iteration variable t and initialize t to 1;

L、进行第t次迭代更新,得到第t代数据组,具体过程为:L. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is:

L1. Compute the kurtosis K_{t−1}, the mean μ_{t−1} and the standard deviation σ_{t−1} of the (t−1)-th generation data set;

L2. Judge whether K_{t−1} is greater than 3. If K_{t−1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T4 = μ_{t−1} + 3σ_{t−1}; if K_{t−1} is not greater than 3 and its difference from 3 is greater than 1, take the value of the largest shoulder deviation in the (t−1)-th generation data set as the judgment threshold T4; if K_{t−1} is greater than 3, compute the square of the difference between each shoulder deviation in the (t−1)-th generation data set and μ_{t−1}, delete the shoulder deviation corresponding to the largest squared value from the (t−1)-th generation data set to obtain the t-th generation data set, update t with the sum of its current value plus 1, and return to step L for the next iteration, until K_{t−1} is not greater than 3;

(2)、当需对使用者进行坐姿识别时,摄像头安装在需要进行坐姿识别的对应位置处,先进行使用者的参考数据采集,具体过程为:使用者先采用正确坐姿坐在摄像头前,摄像头正对使用者脸部,两者距离为30~50厘米,使用者脸部和肩膀不能被遮挡,摄像头拍摄使用者正确坐姿的图像发送给PC机,PC机采用其内预存的图像处理程序对使用者正确坐姿的图像进行处理,确定并记录正确坐姿下使用者的左眼瞳孔、右眼瞳孔、鼻尖、颈部(两边锁骨连接处的凹点)、左肩和右肩这6个关键点的坐标,将使用者正确坐姿下左眼瞳孔的坐标记为(lx,ly),右眼瞳孔的坐标记为(rx,ry)、鼻尖的坐标记为(nx,ny)、颈部的坐标记为(bx,by)、左肩的坐标记为(lsx,lsy)、右肩的坐标记为(rsx,rsy);(2) When the user's sitting posture needs to be recognized, the camera is installed at the corresponding position where the sitting posture recognition needs to be performed, and the user's reference data is first collected. The specific process is as follows: the user first sits in front of the camera with a correct sitting posture, with the camera facing the user's face, the distance between the two is 30 to 50 cm, and the user's face and shoulders cannot be blocked. The camera takes an image of the user's correct sitting posture and sends it to the PC. The PC uses the image processing program pre-stored therein to process the image of the user's correct sitting posture, determine and record the coordinates of the user's left pupil, right pupil, nose tip, neck (the concave point at the connection of the two clavicles), left shoulder and right shoulder in the correct sitting posture, and mark the coordinates of the left pupil of the user in the correct sitting posture as (lx, ly), the coordinates of the right pupil as (rx, ry), the coordinates of the nose tip as (nx, ny), the coordinates of the neck as (bx, by), the coordinates of the left shoulder as (lsx, lsy), and the coordinates of the right shoulder as (rsx, rsy);

(3)确定使用者的参考数据后,对使用者的坐姿进行实时识别,具体过程为:(3) After determining the user's reference data, the user's sitting posture is recognized in real time. The specific process is as follows:

步骤3-1、PC机每隔2秒从摄像头处采集一次使用者坐姿的图像,并采用图像处理程序对使用者坐姿的实时图像进行处理,确定并记录当前坐姿下使用者的左眼瞳孔、右眼瞳孔、鼻尖、颈部(两边锁骨连接处的凹点)、左肩和右肩这6个关键点的坐标,同时接收红外测距传感器测得的使用者与摄像头的距离,将使用者当前坐姿下左眼瞳孔的坐标记为(lxN,lyN),右眼瞳孔的坐标记为(rxN,ryN)、鼻尖的坐标记为(nxN,nyN)、颈部的坐标记为(bxN,byN)、左肩的坐标记为(lsxN,lsyN)、右肩的坐标记为(rsxN,rsyN),使用者与摄像头的距离记为D,将左眼瞳孔关键点与鼻尖关键点相连接且两者的连线记为线段a,将右眼瞳孔关键点分别与鼻尖关键点相连接且两者的连线记为线段b,将鼻尖关键点与颈部关键点相连接且两者的连线记为线段c,将左肩关键点与颈部关键点相连接且两者的连线记为线段d,将右肩关键点与颈部关键点相连接且两者的连线记为线段e,将线段c和线段d之间的夹角记为角α,线段c和线段e之间的夹角记为角β;Step 3-1, the PC collects an image of the user's sitting posture from the camera every 2 seconds, and uses an image processing program to process the real-time image of the user's sitting posture, determine and record the coordinates of the user's left pupil, right pupil, nose tip, neck (the concave point at the connection of the two clavicles), left shoulder and right shoulder in the current sitting posture, and at the same time receive the distance between the user and the camera measured by the infrared ranging sensor, and mark the coordinates of the left pupil of the user in the current sitting posture as (lx N , ly N ), the coordinates of the right pupil as (rx N , ry N ), the coordinates of the nose tip as (nx N , ny N ), the coordinates of the neck as (bx N , by N ), the coordinates of the left shoulder as (lsx N , lsy N ), and the coordinates of the right shoulder as (rsx N , rsy N ), the distance between the user and the camera is recorded as D, the left eye pupil key point is connected to the nose tip key point and the line connecting the two is recorded as line segment a, the right eye pupil key point is connected to the nose tip key point respectively and the line connecting the two is recorded as line segment b, the nose tip key point is connected to the neck key point and the line connecting the two is recorded as line segment c, the left shoulder key point is connected to the neck key point and the line connecting the two is recorded as line segment d, the right shoulder key point is connected to the neck key 
point and the line connecting the two is recorded as line segment e, the angle between the line segment c and the line segment d is recorded as angle α, and the angle between the line segment c and the line segment e is recorded as angle β;
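With the key points in image coordinates, the angles α (between segments c and d) and β (between segments c and e) are ordinary vector angles at the shared neck vertex, obtainable from the dot product. A small sketch with illustrative names:

```python
import math

def neck_angle(neck, nose, shoulder):
    """Angle in degrees at the neck between segment c (neck-nose) and the
    segment joining the neck to one shoulder (d for the left, e for the right)."""
    v1 = (nose[0] - neck[0], nose[1] - neck[1])
    v2 = (shoulder[0] - neck[0], shoulder[1] - neck[1])
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))  # clamp for safety

# alpha = neck_angle(neck_xy, nose_xy, left_shoulder_xy)
# beta  = neck_angle(neck_xy, nose_xy, right_shoulder_xy)
```

Clamping the cosine guards against floating-point values marginally outside [-1, 1] for nearly collinear key points.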

步骤3-2、对根据步骤3-1的实时数据情况对使用者坐姿进行实时判定,具体判定标准为:Step 3-2: Based on the real-time data from step 3-1, the user's sitting posture is judged in real time. The specific judgment criteria are:

如果D小于30厘米,则判定为距离过近;If D is less than 30 cm, it is judged as too close;

如果D大于50厘米,则判定为距离过远;If D is greater than 50 cm, it is judged to be too far away;

如果α大于0°且小于等于70°,则判定当前坐姿为头部左偏;If α is greater than 0° and less than or equal to 70°, the current sitting posture is judged as head tilted to the left;

如果β大于0°且小于等于70°,则判定当前坐姿为头部右偏;If β is greater than 0° and less than or equal to 70°, the current sitting posture is judged as head right tilt;

如果lx-lxN>T1,则判定当前坐姿为身体左倾;If lx-lx N >T 1 , the current sitting posture is determined to be a left leaning posture;

如果rxN-rx>T2,则判定当前坐姿为身体右倾;If rx N -rx>T 2 , the current sitting posture is determined to be right-leaning;

如果|lsyN-rsyN|>T4,则判定当前坐姿为肩膀不平行;If |lsy N -rsy N |>T 4 , the current sitting posture is judged as shoulders not being parallel;

如果byN-by>T3,则判定当前坐姿为脊椎弯曲;If by N -by>T 3 , the current sitting posture is determined to be spinal curvature;

如果为上述情况以外的其他情况,则判定当前坐姿为正确坐姿;If it is any other situation than the above situation, the current sitting posture is determined to be the correct sitting posture;
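The decision rules of step 3-2 translate directly into a chain of threshold tests; checking them in this order also matches the priority ranking of step 3-3, so the first hit is the highest-priority posture. A sketch under the same notation, with illustrative label strings and dict keys:

```python
def classify_posture(D, ref, cur, alpha, beta, T1, T2, T3, T4):
    """ref/cur hold the reference and current key-point coordinates, e.g.
    ref["lx"] is the reference left-pupil x; D is the measured distance in cm."""
    if D < 30:
        return "distance too close"
    if D > 50:
        return "distance too far"
    if 0 < alpha <= 70:
        return "head tilted left"
    if 0 < beta <= 70:
        return "head tilted right"
    if ref["lx"] - cur["lx"] > T1:       # left pupil shifted left of reference
        return "body leaning left"
    if cur["rx"] - ref["rx"] > T2:       # right pupil shifted right of reference
        return "body leaning right"
    if abs(cur["lsy"] - cur["rsy"]) > T4:  # shoulder heights differ
        return "shoulders uneven"
    if cur["by"] - ref["by"] > T3:       # neck dropped below reference
        return "spine bent"
    return "correct"
```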

步骤3-3、如果连续3次判定为同一不正确的坐姿,则语音进行播报提醒使用者,当两种以上不正确坐姿连续3次同时出现时,语音播报时播报优先级别最高的坐姿,8种不正确坐姿的优先级从高到低依次为距离过近、距离过远、头部左偏、头部右偏、身体左倾、身体右倾、肩膀不平行和脊椎弯曲。Step 3-3, if the same incorrect sitting posture is determined for three consecutive times, a voice broadcast will be used to remind the user. When two or more incorrect sitting postures appear for three consecutive times at the same time, the sitting posture with the highest priority will be broadcasted. The priorities of the eight incorrect sitting postures from high to low are: distance too close, distance too far, head tilted to the left, head tilted to the right, body leaning to the left, body leaning to the right, shoulders not parallel and spine curvature.
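Step 3-3's debouncing (three consecutive detections) and priority selection can be kept in a small state machine fed once per 2-second frame. A sketch with illustrative labels; `detected` is the set of incorrect postures found in the current frame:

```python
PRIORITY = ["distance too close", "distance too far", "head tilted left",
            "head tilted right", "body leaning left", "body leaning right",
            "shoulders uneven", "spine bent"]  # highest priority first

class PostureReminder:
    def __init__(self):
        self.streak = {p: 0 for p in PRIORITY}  # consecutive-hit counters

    def update(self, detected):
        """Feed one frame's set of incorrect postures; return the label to
        announce by voice, or None if nothing has reached 3 consecutive hits."""
        for p in PRIORITY:
            self.streak[p] = self.streak[p] + 1 if p in detected else 0
        due = [p for p in PRIORITY if self.streak[p] >= 3]
        return due[0] if due else None  # highest-priority posture wins
```

A single missed frame resets a posture's counter, which implements the "three consecutive times" requirement literally.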

Claims (1)

1. A human body sitting posture recognition method based on key point detection, characterized by comprising the following steps:

(1) Provide a PC pre-loaded with an image processing program, an infrared ranging sensor and a camera; assemble the infrared ranging sensor with the camera and connect them to the PC, the sensor and the camera lying in the same vertical plane at a distance of no more than 5 cm; establish a coordinate system taking the upper-left corner of the picture captured by the camera in real time as the origin, the horizontal rightward direction as the positive x-axis and the vertical downward direction as the positive y-axis; four judgment thresholds T1, T2, T3 and T4 are also pre-stored in the PC and are determined in advance by the following method:

Step 1-1. Divide sitting-posture behaviour into 9 categories: distance too close, distance too far, head tilted left, head tilted right, body leaning left, body leaning right, shoulders uneven, spine bent, and correct sitting posture;

Step 1-2. Select as pre-inspection persons 120 females between 120 cm and 180 cm tall and 120 males between 130 cm and 190 cm tall, each height range being divided into 6 bands of 10 cm with 20 persons per band; randomly number the 240 pre-inspection persons from 1 to 240, the person numbered i being called the i-th pre-inspection person, i = 1, 2, ..., 240;

Step 1-3. Pre-test each of the 240 pre-inspection persons as follows:

S1. The camera faces the pre-inspection person's face at a distance of 30 to 50 cm; the face and shoulders must not be occluded;

S2. Each pre-inspection person adopts, in turn, 7 sitting postures in front of the camera: correct posture, head tilted left, head tilted right, body leaning left, body leaning right, spine bent and shoulders uneven; the camera photographs the 7 postures and sends the images to the PC; the 7 postures are numbered 1-7 in order, the posture numbered j being called the j-th posture, j = 1, 2, ..., 7; the correct posture keeps the back naturally straight, the chest open, the shoulders level, and the neck, chest and waist straight; the other 6 postures are performed according to personal habit;

S3. On the PC, the image processing program obtains and records, for each pre-inspection person in each of the 7 postures, the coordinates of 6 key points: left-eye pupil, right-eye pupil, nose tip, neck, left shoulder and right shoulder, giving 240 sets of coordinate data, each set containing one person's left-eye pupil, right-eye pupil, nose-tip, neck, left-shoulder and right-shoulder coordinates in the 7 postures; denote the left-eye pupil coordinates of the i-th pre-inspection person in the j-th posture as
(lx_i^j, ly_i^j), the right-eye pupil coordinates as (rx_i^j, ry_i^j), the nose-tip coordinates as (nx_i^j, ny_i^j), the neck coordinates as (bx_i^j, by_i^j), the left-shoulder coordinates as (lsx_i^j, lsy_i^j) and the right-shoulder coordinates as (rsx_i^j, rsy_i^j), the subscript i indexing the person and the superscript j the posture;
S4、将第i个预检人员身体左倾时左眼在x轴上的左偏量作为左倾偏量,记为ΔLi、身体右倾时右眼在x轴上的右偏量作为右倾偏量,记为ΔRi,脊椎弯曲时颈部在y轴上的偏移量作为颈部偏量,记为ΔCi,肩膀不平行时两个肩部关键点在y轴上的差值作为肩膀偏量,记为ΔHi,采用公式(1)、(2)、(3)、(4)分别计算得到ΔLi、ΔRi、ΔCi和ΔHiS4. The left deviation of the left eye on the x-axis when the body of the i-th pre-inspection person leans to the left is taken as the left deviation, recorded as ΔL i ; the right deviation of the right eye on the x-axis when the body leans to the right is taken as the right deviation, recorded as ΔR i ; the displacement of the neck on the y-axis when the spine is bent is taken as the neck deviation, recorded as ΔC i ; the difference between the two shoulder key points on the y-axis when the shoulders are not parallel is taken as the shoulder deviation, recorded as ΔH i ; ΔL i , ΔR i , ΔC i and ΔH i are calculated using formulas (1), (2), (3) and (4) respectively:
ΔL_i = lx_i^1 − lx_i^4    (1)

ΔR_i = rx_i^5 − rx_i^1    (2)

ΔC_i = by_i^6 − by_i^1    (3)

ΔH_i = |lsy_i^7 − rsy_i^7|    (4)
式(4)中,||为取绝对值符号;In formula (4), || is the absolute value symbol; S5、按坐姿类别对240组坐标数据进行整合后按照7种坐姿类别重新分别7组,得到7组坐姿数据,每组坐姿数据分别包括240名测试人员在该坐姿下的左眼瞳孔坐标、右眼瞳孔坐标、鼻尖坐标、颈部坐标、左肩坐标和右肩坐标;S5. After integrating the 240 sets of coordinate data according to the sitting posture categories, the data are divided into 7 sets according to the 7 sitting posture categories to obtain 7 sets of sitting posture data. Each set of sitting posture data includes the left eye pupil coordinates, right eye pupil coordinates, nose tip coordinates, neck coordinates, left shoulder coordinates and right shoulder coordinates of the 240 test persons in the sitting posture. S6、分别确定判定阈值T1、T2、T3和T4的值,其中,确定判定阈值T1的具体过程为:S6. Determine the values of the determination thresholds T 1 , T 2 , T 3 and T 4 respectively, wherein the specific process of determining the determination threshold T 1 is as follows: A、采用ΔL1~ΔL240这240个左倾偏量构成原始数据组,将原始数据组作为第0代数据组;A. Using 240 left-leaning deviations from ΔL 1 to ΔL 240 to form an original data set, and taking the original data set as the 0th generation data set; B、设定迭代变量t,对t进行初始化,令t=1;B. Set the iteration variable t and initialize t to 1; C、进行第t次迭代更新,得到第t代数据组,具体过程为:C. Perform the t-th iteration update to obtain the t-th generation data set. The specific process is: C1、计算第t-1代数据组的峰度
K_{t−1}, the mean μ_{t−1} and the standard deviation σ_{t−1};
C2. Judge whether K_{t−1} is greater than 3. If K_{t−1} is not greater than 3 and its difference from 3 is not greater than 1, set the judgment threshold T1 = μ_{t−1} + 3σ_{t−1}; if K_{t−1} is not greater than 3 and its difference from 3 is greater than 1, take the value of the largest left-lean deviation in the (t−1)-th generation data set as the judgment threshold T1; if K_{t−1} is greater than 3, compute the square of the difference between each left-lean deviation in the (t−1)-th generation data set and μ_{t−1}, delete the left-lean deviation corresponding to the largest squared value from the (t−1)-th generation data set to obtain the t-th generation data set, update t with the sum of its current value plus 1, and return to step C for the next iteration, until K_{t−1} is not greater than 3;
The specific process of determining the judgment threshold T2 is:

D. Take the 240 right-leaning deviations ΔR1 to ΔR240 as the original data set, and use the original data set as the 0th generation data set;

E. Set an iteration variable t and initialize it, letting t = 1;

F. Perform the t-th iterative update to obtain the t-th generation data set, as follows:

F1. Compute the kurtosis K, mean μ and standard deviation σ of the (t-1)-th generation data set;
F2. Check whether the kurtosis K is greater than 3. If K is not greater than 3 and the difference between K and 3 is not greater than 1, set the judgment threshold T2 from the mean μ and standard deviation σ of the (t-1)-th generation data set (the defining formula is given only as an image in the source); if K is not greater than 3 and the difference between K and 3 is greater than 1, take the largest right-leaning deviation in the (t-1)-th generation data set as the judgment threshold T2; if K is greater than 3, compute the squared difference between each right-leaning deviation in the (t-1)-th generation data set and the mean μ, delete the right-leaning deviation with the largest squared difference to obtain the t-th generation data set, update t to t + 1, and return to step F for the next iteration, until the kurtosis is not greater than 3;
The specific process of determining the judgment threshold T3 is:

G. Take the 240 neck deviations ΔC1 to ΔC240 as the original data set, and use the original data set as the 0th generation data set;

H. Set an iteration variable t and initialize it, letting t = 1;

I. Perform the t-th iterative update to obtain the t-th generation data set, as follows:

I1. Compute the kurtosis K, mean μ and standard deviation σ of the (t-1)-th generation data set;
I2. Check whether the kurtosis K is greater than 3. If K is not greater than 3 and the difference between K and 3 is not greater than 1, set the judgment threshold T3 from the mean μ and standard deviation σ of the (t-1)-th generation data set (the defining formula is given only as an image in the source); if K is not greater than 3 and the difference between K and 3 is greater than 1, take the largest neck deviation in the (t-1)-th generation data set as the judgment threshold T3; if K is greater than 3, compute the squared difference between each neck deviation in the (t-1)-th generation data set and the mean μ, delete the neck deviation with the largest squared difference to obtain the t-th generation data set, update t to t + 1, and return to step I for the next iteration, until the kurtosis is not greater than 3;
The specific process of determining the judgment threshold T4 is:

J. Take the 240 shoulder deviations ΔH1 to ΔH240 as the original data set, and use the original data set as the 0th generation data set;

K. Set an iteration variable t and initialize it, letting t = 1;

L. Perform the t-th iterative update to obtain the t-th generation data set, as follows:

L1. Compute the kurtosis K, mean μ and standard deviation σ of the (t-1)-th generation data set;
L2. Check whether the kurtosis K is greater than 3. If K is not greater than 3 and the difference between K and 3 is not greater than 1, set the judgment threshold T4 from the mean μ and standard deviation σ of the (t-1)-th generation data set (the defining formula is given only as an image in the source); if K is not greater than 3 and the difference between K and 3 is greater than 1, take the largest shoulder deviation in the (t-1)-th generation data set as the judgment threshold T4; if K is greater than 3, compute the squared difference between each shoulder deviation in the (t-1)-th generation data set and the mean μ, delete the shoulder deviation with the largest squared difference to obtain the t-th generation data set, update t to t + 1, and return to step L for the next iteration, until the kurtosis is not greater than 3;
(2) When sitting-posture recognition is required, the camera is installed at the corresponding position, and the user's reference data is collected first. The specific process is: the user first sits in front of the camera in the correct posture, with the camera facing the user's face at a distance of 30 to 50 cm, and with the user's face and shoulders unoccluded. The camera captures an image of the user in the correct posture and sends it to the PC, which processes the image with its pre-stored image-processing program to determine and record the coordinates of the six key points: left-eye pupil, right-eye pupil, nose tip, neck, left shoulder and right shoulder. In the correct posture, the left-eye pupil coordinates are denoted (lx, ly), the right-eye pupil (rx, ry), the nose tip (nx, ny), the neck (bx, by), the left shoulder (lsx, lsy) and the right shoulder (rsx, rsy);

(3) After the user's reference data is determined, the user's sitting posture is recognized in real time.
The specific process is as follows:

Step 3-1. Every 2 seconds, the PC captures an image of the user's sitting posture from the camera, processes the real-time image with the image-processing program, and determines and records the coordinates of the six key points in the current posture, while also receiving the user-to-camera distance measured by the infrared ranging sensor. In the current posture, the left-eye pupil coordinates are denoted (lxN, lyN), the right-eye pupil (rxN, ryN), the nose tip (nxN, nyN), the neck (bxN, byN), the left shoulder (lsxN, lsyN) and the right shoulder (rsxN, rsyN); the user-to-camera distance is denoted D. The line segment joining the left-eye-pupil and nose-tip key points is denoted a, the segment joining the right-eye-pupil and nose-tip key points is denoted b, the segment joining the nose-tip and neck key points is denoted c, the segment joining the left-shoulder and neck key points is denoted d, and the segment joining the right-shoulder and neck key points is denoted e. The angle between segments c and d is denoted α, and the angle between segments c and e is denoted β;

Step 3-2. The user's sitting posture is judged in real time from the data of step 3-1, according to the following criteria:

If D is less than 30 cm, the posture is judged as too close;
If D is greater than 50 cm, the posture is judged as too far;
If α is greater than 0° and not greater than 70°, the posture is judged as head tilted left;
If β is greater than 0° and not greater than 70°, the posture is judged as head tilted right;
If lx - lxN > T1, the posture is judged as body leaning left;
If rxN - rx > T2, the posture is judged as body leaning right;
If |lsyN - rsyN| > T4, the posture is judged as shoulders not level;
If byN - by > T3, the posture is judged as spine curved;
In all other cases, the current posture is judged to be correct;

Step 3-3. If the same incorrect posture is detected 3 times in a row, a voice broadcast reminds the user; when two or more incorrect postures occur together 3 times in a row, only the posture with the highest priority is broadcast. The priorities of the 8 incorrect postures, from high to low, are: too close, too far, head tilted left, head tilted right, body leaning left, body leaning right, shoulders not level, and spine curved.
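The per-frame rules of steps 3-1 and 3-2 and the priority ordering of step 3-3 can be condensed into a small classifier. The following is a sketch under assumptions: the dictionary keys and function names are illustrative, the angles α and β are computed with ordinary vector geometry at the neck key point, and the 3-consecutive-detection debouncing and voice broadcast are omitted.

```python
import math

# the 8 incorrect postures of step 3-3, highest priority first
PRIORITY = ["too close", "too far", "head tilted left", "head tilted right",
            "leaning left", "leaning right", "shoulders not level", "spine curved"]

def angle(p, q, r):
    """Angle at vertex q, in degrees, between segments q-p and q-r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos))

def judge(ref, cur, dist, t1, t2, t3, t4):
    """ref/cur: key-point name -> (x, y); dist: camera distance in cm."""
    faults = []
    if dist < 30:
        faults.append("too close")
    if dist > 50:
        faults.append("too far")
    alpha = angle(cur["nose"], cur["neck"], cur["left_shoulder"])   # c vs d
    beta = angle(cur["nose"], cur["neck"], cur["right_shoulder"])   # c vs e
    if 0 < alpha <= 70:
        faults.append("head tilted left")
    if 0 < beta <= 70:
        faults.append("head tilted right")
    if ref["left_eye"][0] - cur["left_eye"][0] > t1:    # lx - lxN > T1
        faults.append("leaning left")
    if cur["right_eye"][0] - ref["right_eye"][0] > t2:  # rxN - rx > T2
        faults.append("leaning right")
    if abs(cur["left_shoulder"][1] - cur["right_shoulder"][1]) > t4:
        faults.append("shoulders not level")
    if cur["neck"][1] - ref["neck"][1] > t3:            # byN - by > T3
        faults.append("spine curved")
    if not faults:
        return "correct"
    return min(faults, key=PRIORITY.index)  # report the highest-priority fault
```

In a deployment this would be called on every captured frame, with the alert raised only after the same fault is returned three times in succession, as step 3-3 specifies.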
CN202011088718.2A 2020-10-13 2020-10-13 Human body sitting posture identification method based on key point detection Active CN112364694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011088718.2A CN112364694B (en) 2020-10-13 2020-10-13 Human body sitting posture identification method based on key point detection


Publications (2)

Publication Number Publication Date
CN112364694A CN112364694A (en) 2021-02-12
CN112364694B true CN112364694B (en) 2023-04-18

Family

ID=74507159


Country Status (1)

Country Link
CN (1) CN112364694B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112770B (en) * 2021-03-11 2022-05-17 合肥视其佳科技有限公司 Detection method for recognizing head postures of students
CN113034322B (en) * 2021-04-01 2024-02-02 珠海爱浦京软件股份有限公司 Internet-based online education supervision system and method
CN113627369A (en) * 2021-08-16 2021-11-09 南通大学 Action recognition and tracking method in auction scene
CN113657271B (en) * 2021-08-17 2023-10-03 上海科技大学 Sitting posture detection method and system combining quantifiable factors and unquantifiable factor judgment
CN113743255A (en) * 2021-08-18 2021-12-03 广东机电职业技术学院 Neural network-based child sitting posture identification and correction method and system
CN114550099A (en) * 2022-03-01 2022-05-27 常莫凡 Comprehensive health management system based on digital twins
CN116884083B (en) * 2023-06-21 2024-05-28 圣奥科技股份有限公司 Sitting posture detection method, medium and equipment based on key points of human body
CN117746505B (en) * 2023-12-21 2024-11-12 武汉星巡智能科技有限公司 Learning and accompanying method, device and robot combined with abnormal sitting posture dynamic detection

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382653A (en) * 2018-12-29 2020-07-07 沈阳新松机器人自动化股份有限公司 Human body sitting posture monitoring method
CN111414780A (en) * 2019-01-04 2020-07-14 卓望数码技术(深圳)有限公司 Sitting posture real-time intelligent distinguishing method, system, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10667697B2 (en) * 2015-06-14 2020-06-02 Facense Ltd. Identification of posture-related syncope using head-mounted sensors


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Guo Yuan; Guo Chenxu; Shi Xin; Shen Liming. Research on the ergonomic adaptability of desks and chairs based on OpenPose sitting-posture analysis. Journal of Forestry Engineering, 2020, (02), full text. *


Similar Documents

Publication Publication Date Title
CN112364694B (en) Human body sitting posture identification method based on key point detection
CN107169453B (en) Sitting posture detection method based on depth sensor
Qiu et al. Pose-guided matching based on deep learning for assessing quality of action on rehabilitation training
CN106022378B (en) Sitting posture judgment method and cervical spondylosis recognition system based on camera and pressure sensor
CN106022304B (en) A kind of real-time body's sitting posture situation detection method based on binocular camera
CN102194131B (en) Fast human face recognition method based on geometric proportion characteristic of five sense organs
CN101833672B (en) Sparse Representation Face Recognition Method Based on Constrained Sampling and Shape Features
CN112990137B (en) Classroom student sitting posture analysis method based on template matching
CN109785396B (en) Writing posture monitoring method, system and device based on binocular camera
JP7531168B2 (en) Method and system for detecting a child's sitting posture based on child's face recognition
CN105740780A (en) Method and device for human face in-vivo detection
CN105740779A (en) Method and device for human face in-vivo detection
CN110934591A (en) Sitting posture detection method and device
CN112749684A (en) Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN112232128B (en) Eye tracking based method for identifying care needs of old disabled people
CN114120357B (en) Neural network-based myopia prevention method and device
CN101833654A (en) Sparse Representation Face Recognition Method Based on Constrained Sampling
CN101539989A (en) Human face detection-based method for testing incorrect reading posture
CN110226913A (en) A kind of self-service examination machine eyesight detection intelligent processing method and device
Bei et al. Sitting posture detection using adaptively fused 3D features
CN109674477A (en) Computer vision Postural Analysis method based on deep learning
CN113609963B (en) Real-time multi-human-body-angle smoking behavior detection method
CN112527118B (en) Head posture recognition method based on dynamic time warping
CN106327484B (en) A method of it is assessed for dentist's operation posture
CN114550099A (en) Comprehensive health management system based on digital twins

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant