CN109674477A - Computer vision Postural Analysis method based on deep learning - Google Patents

Computer vision Postural Analysis method based on deep learning

Info

Publication number
CN109674477A
CN109674477A CN201810884943.3A
Authority
CN
China
Prior art keywords
point
human body
deep learning
key point
inflection point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810884943.3A
Other languages
Chinese (zh)
Inventor
Li Zhinan (李志男)
Ding Zheng (丁正)
An Senwen (安森文)
Li Qin (李勤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sennotech Co Ltd
Original Assignee
Shenzhen Sennotech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sennotech Co Ltd filed Critical Shenzhen Sennotech Co Ltd
Priority to CN201810884943.3A priority Critical patent/CN109674477A/en
Publication of CN109674477A publication Critical patent/CN109674477A/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1079Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1077Measuring of profiles

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Dentistry (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention provides a computer-vision posture analysis method based on deep learning. Its deep-learning method for automatically locating human skeletal key points first locates certain key points on the human contour edge, the Direct Recognition points, and then, based on these, locates the body key points. On that basis, an abnormal-posture assessment method based on the skeletal key points evaluates the subject's degree of postural abnormality from the above "skeletal key points" and "Direct Recognition points". The posture risks include: forward head posture, head tilt, hunchback, uneven shoulders, scoliosis, pelvic tilt, knee hyperextension, O-shaped legs, and X-shaped legs. The present invention uses an ordinary photographing device (such as a mobile phone) to quickly identify abnormal human posture, detected by computer-vision techniques based on deep learning. The technique is leading in several respects: it not only greatly improves the speed and accessibility of abnormal-posture detection and reduces testing cost, it also eliminates interference from the operator's subjective judgment and removes the barrier of specialized operator knowledge.

Description

Computer vision Postural Analysis method based on deep learning
Technical field
The present invention relates to posture analysis methods, and in particular to a method, based on deep-learning technology, for assessing human body posture by computer-vision means.
Background
" posture " refers to the posture of human body, and the human body external morphology generated due to organization of human body characterization is refered in particular in health is learned. For the posture of human body not only with shape makings close association, bad posture more reflects physical health issues.Common anomalous body State include: head lean forward, head inclination, hunchback, high-low shoulder, scoliosis, pelvis inclination, knee hyperextension, O-shaped leg, X-type leg etc.. These characterize abnormal posture and often lead to function of human body exception, for example, abnormal stature is usually associated with muscle skeleton position and function Can exception, and exception of muscle skeleton may cause strain of joint, poor circulation, it is neural oppressed, internal organ function Can be impaired etc. health problems.
Abnormal posture arises from congenital and acquired causes. Because of heavy workloads and poor living habits, the share of acquired causes has risen greatly in modern life, and acquired postural problems are spreading through the population; high-risk groups include children, office workers, and the overweight. If left uncontrolled, these abnormal postures can develop into more serious conditions, such as cervical and lumbar spine disease, that severely affect people's lives.
The best response to abnormal posture, and to the diseases it induces, is early detection and early intervention, and detecting abnormal posture generally requires some means of measurement. Traditional posture assessment falls into two categories: 1. Manual measurement: a professional palpates the subject's body surface to locate surface bony landmarks and measures directly with tools such as tape measures, protractors, and stadiometers. 2. Instrument measurement, including two-dimensional image analysis and three-dimensional anthropometry. In two-dimensional image analysis, front, back, and side photographs of the subject are taken, key points are marked on the photographs by hand, and a scale is then used to analyze the abnormal posture. Three-dimensional anthropometry can be divided into optical pattern methods, photoelectric methods based on image sensors, and so on.
Traditional posture assessment methods, however, have shortcomings that prevent wide adoption, and their social utilization remains low. Manual measurement places high demands on the operator's professional knowledge, takes a long time, and often requires undressing or physical contact, which easily causes the subject psychological and physiological discomfort; the results also carry a large subjective component. Two-dimensional image analysis still requires the operator to have sufficient professional knowledge, and the manual marking step easily introduces operating error and hence measurement error. Other traditional machine-based detection methods require specific instruments, environments, and professional operators, and are costly and inconvenient.
Summary of the invention
In view of the above, the purpose of this application is to provide a posture analysis method based on deep-learning computer vision that can quickly identify and detect abnormal human posture using an ordinary photographing device. It can greatly improve the speed and accessibility of abnormal-posture detection, reduce testing cost, exclude interference from the operator's subjective judgment, and remove the barrier of specialized operator knowledge.
To achieve the above object, the technical solution adopted by the present invention is to provide a computer-vision posture analysis method based on deep learning, divided into two steps:
Step 1: automatic localization of human skeletal key points based on deep learning;
Step 2: abnormal-posture assessment based on the human skeletal key points.
The input data required by the deep-learning skeletal key point localization method are:
1. A front-view photo of the subject standing normally and relaxed (hereinafter photo A);
2. The tilt angle of the imaging device when photo A was taken (in the plane perpendicular to the shooting direction, the angle between the downward direction of the photo and the direction of gravity);
3. A side-view photo of the subject standing normally and relaxed (hereinafter photo B);
4. The tilt angle of the imaging device when photo B was taken (defined as above).
The output data of the method are a numerical representation of the severity of each of the subject's abnormal postures. The abnormal postures include: forward head posture, head tilt, hunchback, uneven shoulders, scoliosis, pelvic tilt, knee hyperextension, O-shaped legs, and X-shaped legs. A minimal data-structure sketch of these inputs and outputs is given below.
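As an illustrative, non-limiting sketch of how the above inputs and output might be organized in code (Python); the field names are assumptions of this sketch rather than terms defined by the patent:

from dataclasses import dataclass, field
from typing import Dict
import numpy as np

@dataclass
class PostureAnalysisInput:
    photo_a: np.ndarray       # front-view photo of the relaxed standing subject
    alpha_deg: float          # device tilt angle when photo A was taken
    photo_b: np.ndarray       # side-view photo of the relaxed standing subject
    beta_deg: float           # device tilt angle when photo B was taken

@dataclass
class PostureAnalysisOutput:
    # severity R per abnormal-posture item, e.g. {"forward_head": 30.0, "head_tilt": 72.5}
    severity: Dict[str, float] = field(default_factory=dict)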
Step 1: automatic localization of human skeletal key points based on deep learning.
In vision-based posture analysis, the subject's body key points are affected by the clothing worn. To minimize the influence of clothing on measurement accuracy when the subject is measured while dressed, a "Relative localization method" is used to locate the "body key points": certain key points on the human contour edge, the Direct Recognition points, are located first, and the body key points are then located relative to them. Because the Direct Recognition points remain identifiable on the body surface even when clothed, and are easy to locate automatically by computer, they provide high positioning accuracy and robustness.
The human skeletal key points comprise facial key points and body key points. The facial key points include: the external auditory meatus, earlobe, eyes, and nose. The body key points include: the sternal angle, acromion, xiphoid process, anterior superior iliac spine (ASIS), knee joint center, ankle joint center, elbow joint center, and wrist joint center.
The Direct Recognition points are points on the human contour edge in the image that are easy to identify and locate. The Direct Recognition points of the front-view image include: the neck-trunk inflection point, shoulder inflection point, armpit, upper-arm-forearm inflection point, thigh-trunk inflection point, crotch point (intersection of the two legs), forearm-hand inflection point, thigh-calf inflection point, and calf-foot inflection point.
Depending on the application, only a subset of this key point set may be used, or additional key points may be added.
The "Direct Recognition points" and "facial key points" are located by a deep-learning method, in the following steps:
A) Prepare training data: by manual annotation, mark the positions of the "Direct Recognition points" and "facial key points" on human body images; prepare no fewer than 100,000 front-view images and 100,000 side-view images.
B) Train the deep-learning model: use the prepared training data to train a deep neural network and tune it to the highest accuracy. This process is conventional technology rather than a new technique proposed by this patent, so it is not elaborated here.
C) Locate the Direct Recognition points: input a front-view or side-view human body image into the trained deep neural network, which locates the "Direct Recognition points" and "facial key points" in it to the accuracy required in advance.
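As an illustrative sketch of step C, assuming a trained heatmap-style keypoint network is available under the hypothetical name keypoint_net (the patent does not prescribe a network architecture), the located points can be decoded from per-keypoint heatmaps as follows (Python):

import numpy as np

def decode_keypoints(heatmaps: np.ndarray, img_w: int, img_h: int):
    """heatmaps: array of shape (num_keypoints, H, W) -> list of (x, y, score)."""
    points = []
    num_kp, hm_h, hm_w = heatmaps.shape
    for k in range(num_kp):
        idx = np.argmax(heatmaps[k])                 # location of the heatmap maximum
        y, x = np.unravel_index(idx, (hm_h, hm_w))
        # Scale heatmap coordinates back to image coordinates.
        points.append((x * img_w / hm_w, y * img_h / hm_h, float(heatmaps[k, y, x])))
    return points

# Hypothetical usage: heatmaps = keypoint_net(front_image)
#                     direct_points = decode_keypoints(heatmaps, img_w, img_h)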
" Relative localization method " determines " body key point " in direct picture that is, based on " Direct Recognition point " position as above Position, it is specific as follows: human body boundary to be identified first, the boundary of the identification includes the clothing that human body is dressed.Boundary Identification technology is traditional technology, and non-this patent proposes, is not discussed in this patent.
In the front-view image:
(1) Acromion: take the body boundary curve from the "shoulder inflection point" to the "neck-trunk inflection point"; the point whose curve length from the shoulder inflection point is 36% of the total curve length is the "acromion";
(2) Sternal angle: let A be the midpoint of the line joining the left and right shoulder inflection points and B the midpoint of the line joining the left and right armpits; the sternal angle lies on segment AB, at a distance from A equal to 20% of the length of AB;
(3) Xiphoid process: with A and B as above, the xiphoid process lies on the extension of line AB, at a distance from B equal to 40% of the length of AB;
(4) Anterior superior iliac spine (ASIS): the left ASIS lies on the line joining the left "thigh-trunk inflection point" and the "crotch point", at a distance from the left "thigh-trunk inflection point" equal to 2% of the line length; the right side is analogous;
(5) Knee joint center: the left knee joint center is located as follows: from the left thigh-calf inflection point, draw a horizontal line to the right until it intersects the inner edge of the left leg; the midpoint of that segment is the left knee joint center; the right side is analogous;
(6) Ankle joint center, elbow joint center, wrist joint center: located in the same way as the knee joint center.
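As an illustrative sketch of rules (1)-(4) above (Python), assuming the Direct Recognition points are available as (x, y) pixel coordinates; the function and parameter names are hypothetical, while the percentages come from the rules:

import numpy as np

def midpoint(p, q):
    return (np.asarray(p, float) + np.asarray(q, float)) / 2.0

def point_along(p_from, p_to, fraction):
    """Point on the line through p_from and p_to at the given fraction of its length."""
    p_from, p_to = np.asarray(p_from, float), np.asarray(p_to, float)
    return p_from + fraction * (p_to - p_from)

def acromion(boundary_curve, frac=0.36):
    """boundary_curve: ordered (N, 2) array from the shoulder inflection point to the
    neck-trunk inflection point; returns the point at 36% of the arc length."""
    pts = np.asarray(boundary_curve, float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    i = int(np.searchsorted(cum, frac * cum[-1]))
    return pts[min(i, len(pts) - 1)]

def trunk_keypoints(l_shoulder, r_shoulder, l_armpit, r_armpit, l_thigh_trunk, crotch):
    A = midpoint(l_shoulder, r_shoulder)        # midpoint of the shoulder inflection points
    B = midpoint(l_armpit, r_armpit)            # midpoint of the armpits
    sternal_angle = point_along(A, B, 0.20)     # 20% of AB from A
    xiphoid = point_along(A, B, 1.40)           # on the extension of AB, 40% of AB past B
    left_asis = point_along(l_thigh_trunk, crotch, 0.02)  # 2% of the line from the inflection point
    return sternal_angle, xiphoid, left_asis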
In the side-view image, the body key points are strongly affected by the subject's build, so the "Relative localization method" is not used; the "Direct Recognition points" are used directly.
Step 2: abnormal-posture assessment based on the human skeletal key points.
This step assesses the subject's degree of postural abnormality from the "skeletal key points" and "Direct Recognition points" above. The posture risks include: forward head posture, head tilt, hunchback, uneven shoulders, scoliosis, pelvic tilt, knee hyperextension, O-shaped legs, and X-shaped legs.
For each abnormal-posture item, an "interval grading method" is used to evaluate its degree of abnormality R, as follows. For a given abnormal-posture item:
Step 1: compute the "posture item key value K" and determine the "expected interval" (0, A), where A is the maximum acceptable value of the posture item key value for a subject judged normal; once the key value exceeds A, the item is judged abnormal.
Step 2: the grading scale for the degree of abnormality R is: (0, 60] normal, (60, 80] mildly abnormal, (80, 100] severely abnormal.
Step 3: quantification formulas:
If K ≤ A, R = 60 * K / A;
If A < K ≤ 2A, R = 20 * (K - A) / A + 60;
If 2A < K, R = 20 * Sigmoid(S * (K - 2A)) + 80;
where Sigmoid(x) = 1 / (1 + e^(-x)),
and S is a sensitivity coefficient.
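As an illustrative sketch of the quantification formulas above (Python):

import math

def grade(K: float, A: float, S: float) -> float:
    """Return R in (0, 100]: (0,60] normal, (60,80] mildly abnormal, (80,100] severe."""
    if K <= A:
        return 60.0 * K / A
    if K <= 2 * A:
        return 20.0 * (K - A) / A + 60.0
    sigmoid = 1.0 / (1.0 + math.exp(-S * (K - 2 * A)))
    return 20.0 * sigmoid + 80.0

# Example with the forward-head-posture parameters given below (A=3, S=10):
# grade(K=1.5, A=3, S=10) -> 30.0 (normal); grade(K=4.5, A=3, S=10) -> 70.0 (mild).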
For each abnormal-posture item, the values of A and S and the calculation method of K must be determined in advance so that R can be obtained. For every item, A and S are statistical values derived from population data, and K is computed by a formula specific to the item. The A, S, and K of each item are given below (hereinafter, the tilt angle of the imaging device when the front-view photo was taken is denoted α, and the tilt angle when the side-view photo was taken is denoted β); a code sketch of two of the K formulas follows the list:
(1) Forward head posture: A = 3, S = 10
Calculation of K:
In the side view: connect the "external auditory meatus" and the "earlobe" and let M be the midpoint of that segment; connect the "rear back apex point" and M, and let θ be the angle between that segment and the vertical;
K = θ - 25.3 - β;
(2) Head tilt: A = 2, S = 20
Calculation of K:
In the front view: connect the left and right earlobes, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(3) Hunchback: A = 8, S = 20
Calculation of K:
In the side view: connect the "rear back apex point" and the "rear neck inflection point", and let θ be the angle between that segment and the vertical;
K = θ - 18.2 - β;
(4) Uneven shoulders: A = 2, S = 10
Calculation of K:
In the front view: connect the left acromion and the sternal angle, and let θ1 be the angle between that segment and the horizontal; connect the right acromion and the sternal angle, and let θ2 be the angle between that segment and the horizontal;
K = |θ1 - θ2 - 2α|;
(5) Scoliosis: A = 1.1, S = 20
Calculation of K:
In the front view: connect the left "acromion" and the left "anterior superior iliac spine", and let L1 be the segment length; connect the right "acromion" and the right "anterior superior iliac spine", and let L2 be the segment length;
K = Max(L1, L2) / Min(L1, L2);
(6) Pelvic tilt: A = 2, S = 15
Calculation of K:
In the front view: connect the left and right anterior superior iliac spines, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(7) Knee hyperextension: A = 1.5, S = 20
Calculation of K:
In the side view: let R1 be the vector from the "rear calf-foot inflection point" to the "rear thigh-calf inflection point"; let R2 be the vertically upward vector; let θ be the angle of rotation from R2 to R1, positive clockwise and negative counterclockwise;
K = θ - β;
(8) O-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, positive clockwise and negative counterclockwise; for the right leg the same, but with clockwise negative and counterclockwise positive;
K = θ;
(9) X-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, negative clockwise and positive counterclockwise; for the right leg the same, but with clockwise positive and counterclockwise negative;
K = θ.
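As an illustrative sketch of two of the simpler K formulas above, items (2) and (6) (Python), assuming key points are given as (x, y) image coordinates; the patent does not specify a sign convention for θ, so the unsigned angle to the horizontal is used here as an assumption of this sketch:

import math

def segment_angle_to_horizontal_deg(p_left, p_right) -> float:
    """Unsigned angle (degrees) between the segment p_left -> p_right and the horizontal."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

def head_tilt_K(left_earlobe, right_earlobe, alpha_deg: float) -> float:
    # Item (2): earlobe-to-earlobe angle to the horizontal, minus the device tilt α.
    return segment_angle_to_horizontal_deg(left_earlobe, right_earlobe) - alpha_deg

def pelvic_tilt_K(left_asis, right_asis, alpha_deg: float) -> float:
    # Item (6): same construction with the left and right anterior superior iliac spines.
    return segment_angle_to_horizontal_deg(left_asis, right_asis) - alpha_deg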
The beneficial effects of the present invention are: the method can quickly identify and detect abnormal human posture using an ordinary photographing device; it greatly improves the speed and accessibility of abnormal-posture detection, reduces testing cost, excludes interference from the operator's subjective judgment, and removes the barrier of specialized operator knowledge.
Brief description of the drawings
Fig. 1 is a schematic diagram of the facial key points;
Fig. 2 is a schematic diagram of the body key points;
Fig. 3 is a schematic diagram of the front-view Direct Recognition points;
Fig. 4 is a schematic diagram of the side-view Direct Recognition points.
Specific embodiments
The application is described below by way of example, with reference to the accompanying drawings.
Step 1: automatic localization of human skeletal key points based on deep learning.
Determine the human skeletal key points to be located, as shown in Figs. 1 and 2: the facial key points include the external auditory meatus, earlobe, eyes, and nose; the body key points include the sternal angle, acromion, xiphoid process, anterior superior iliac spine, knee joint center, ankle joint center, elbow joint center, and wrist joint center. Depending on the application, only a subset of this key point set may be used, or additional key points may be added.
The "Direct Recognition points" and "facial key points" are located by a deep-learning method, broadly in the following steps:
A) Prepare training data: by manual annotation, mark the positions of the "Direct Recognition points" and "facial key points" on human body images; prepare no fewer than 100,000 front-view images and 100,000 side-view images;
B) Train the deep-learning model: use the prepared training data to train a deep neural network and tune it to the highest accuracy; this process is conventional technology rather than a new technique proposed by this patent, so it is not elaborated here;
C) Locate the Direct Recognition points: input a front-view or side-view human body image into the trained deep neural network, which locates the "Direct Recognition points" and "facial key points" in it to the accuracy required in advance.
In practice, the subject's body key points are affected by the clothing worn. To minimize the influence of clothing on measurement accuracy when the subject is measured while dressed, certain key points on the human contour edge (hereinafter the Direct Recognition points) are located first, and the body key points above are then located relative to them.
As shown in Fig. 3, the Direct Recognition points of the front-view image include: the neck-trunk inflection point, shoulder inflection point, armpit, upper-arm-forearm inflection point, thigh-trunk inflection point, crotch point, forearm-hand inflection point, thigh-calf inflection point, and calf-foot inflection point. As shown in Fig. 4, the Direct Recognition points of the side-view image include: the rear neck inflection point, rear back apex point, waist inflection point, buttock apex point, rear thigh-calf inflection point, and rear calf-foot inflection point.
" Relative localization method " determines " body key point " in direct picture that is, based on " Direct Recognition point " position as above Position, it is specific as follows: human body boundary to be identified first, the boundary of the identification includes the clothing that human body is dressed.Boundary Identification technology is traditional technology, and non-this patent proposes, is not discussed in this patent.
As shown in Fig. 1, in the front-view image:
(1) Acromion: take the body boundary curve from the "shoulder inflection point" to the "neck-trunk inflection point"; the point whose curve length from the shoulder inflection point is 36% of the total curve length is the "acromion";
(2) Sternal angle: let A be the midpoint of the line joining the left and right shoulder inflection points and B the midpoint of the line joining the left and right armpits; the sternal angle lies on segment AB, at a distance from A equal to 20% of the length of AB;
(3) Xiphoid process: with A and B as above, the xiphoid process lies on the extension of line AB, at a distance from B equal to 40% of the length of AB;
(4) Anterior superior iliac spine (ASIS): the left ASIS lies on the line joining the left "thigh-trunk inflection point" and the "crotch point", at a distance from the left "thigh-trunk inflection point" equal to 2% of the line length; the right side is analogous;
(5) Knee joint center: the left knee joint center is located as follows: from the left thigh-calf inflection point, draw a horizontal line to the right until it intersects the inner edge of the left leg; the midpoint of that segment is the left knee joint center; the right side is analogous;
(6) Ankle joint center, elbow joint center, wrist joint center: located in the same way as the knee joint center.
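As an illustrative sketch of rule (5) (Python), assuming the identified body boundary is available as a binary mask (True = inside the body or clothing); the mask representation and the scan direction parameter are assumptions of this sketch, not specified by the patent:

import numpy as np

def knee_center_from_inflection(inflection_xy, body_mask: np.ndarray, step: int = 1):
    """Starting at the thigh-calf inflection point, walk horizontally across the leg
    (step = +1 or -1 chooses the image direction that leads toward the inner edge)
    while still inside the mask; the knee joint center is the midpoint between the
    start point and the inner-edge crossing."""
    x, y = int(inflection_xy[0]), int(inflection_xy[1])
    w = body_mask.shape[1]
    x_inner = x
    while 0 <= x_inner + step < w and body_mask[y, x_inner + step]:
        x_inner += step                      # stay inside the leg until the inner edge
    return ((x + x_inner) / 2.0, float(y))   # midpoint of the horizontal segment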
In the side-view image, the body key points are strongly affected by the subject's build, so the "Relative localization method" is not used; the "Direct Recognition points" are used directly.
Step 2: abnormal-posture assessment based on the human skeletal key points.
This step assesses the subject's degree of postural abnormality from the "skeletal key points" and "Direct Recognition points" above. The posture risks include: forward head posture, head tilt, hunchback, uneven shoulders, scoliosis, pelvic tilt, knee hyperextension, O-shaped legs, and X-shaped legs.
For each abnormal-posture item, an "interval grading method" is used to evaluate its degree of abnormality R, as follows. For a given abnormal-posture item:
Step 1: compute the "posture item key value K" and determine the "expected interval" (0, A), where A is the maximum acceptable value of the posture item key value for a subject judged normal; once the key value exceeds A, the item is judged abnormal.
Step 2: the grading scale for the degree of abnormality R is: (0, 60] normal, (60, 80] mildly abnormal, (80, 100] severely abnormal.
Step 3: quantification formulas:
If K ≤ A, R = 60 * K / A;
If A < K ≤ 2A, R = 20 * (K - A) / A + 60;
If 2A < K, R = 20 * Sigmoid(S * (K - 2A)) + 80;
where Sigmoid(x) = 1 / (1 + e^(-x)),
and S is a sensitivity coefficient.
For each abnormal-posture item, the values of A and S and the calculation method of K must be determined in advance so that R can be obtained. For every item, A and S are statistical values derived from population data, and K is computed by a formula specific to the item. The A, S, and K of each item are given below (hereinafter, the tilt angle of the imaging device when the front-view photo was taken is denoted α, and the tilt angle when the side-view photo was taken is denoted β); a code sketch of the leg-angle computation used in items (8) and (9) follows the list:
(1) Forward head posture: A = 3, S = 10
Calculation of K:
In the side view: connect the "external auditory meatus" and the "earlobe" and let M be the midpoint of that segment; connect the "rear back apex point" and M, and let θ be the angle between that segment and the vertical;
K = θ - 25.3 - β;
(2) Head tilt: A = 2, S = 20
Calculation of K:
In the front view: connect the left and right earlobes, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(3) Hunchback: A = 8, S = 20
Calculation of K:
In the side view: connect the "rear back apex point" and the "rear neck inflection point", and let θ be the angle between that segment and the vertical;
K = θ - 18.2 - β;
(4) Uneven shoulders: A = 2, S = 10
Calculation of K:
In the front view: connect the left acromion and the sternal angle, and let θ1 be the angle between that segment and the horizontal; connect the right acromion and the sternal angle, and let θ2 be the angle between that segment and the horizontal;
K = |θ1 - θ2 - 2α|;
(5) Scoliosis: A = 1.1, S = 20
Calculation of K:
In the front view: connect the left "acromion" and the left "anterior superior iliac spine", and let L1 be the segment length; connect the right "acromion" and the right "anterior superior iliac spine", and let L2 be the segment length;
K = Max(L1, L2) / Min(L1, L2);
(6) Pelvic tilt: A = 2, S = 15
Calculation of K:
In the front view: connect the left and right anterior superior iliac spines, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(7) Knee hyperextension: A = 1.5, S = 20
Calculation of K:
In the side view: let R1 be the vector from the "rear calf-foot inflection point" to the "rear thigh-calf inflection point"; let R2 be the vertically upward vector; let θ be the angle of rotation from R2 to R1, positive clockwise and negative counterclockwise;
K = θ - β;
(8) O-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, positive clockwise and negative counterclockwise; for the right leg the same, but with clockwise negative and counterclockwise positive;
K = θ;
(9) X-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, negative clockwise and positive counterclockwise; for the right leg the same, but with clockwise positive and counterclockwise negative;
K = θ.
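As an illustrative sketch of the signed rotation angle used in items (8) and (9) (Python); mapping the on-screen clockwise convention onto image coordinates with the y axis pointing down is an interpretation of this sketch, and the per-leg sign flips in the text are applied by the caller:

import math

def signed_rotation_deg(r1, r2) -> float:
    """Signed angle (degrees) from vector r1 to vector r2; with image coordinates
    (y down), a positive value corresponds to a clockwise rotation on screen."""
    cross = r1[0] * r2[1] - r1[1] * r2[0]
    dot = r1[0] * r2[0] + r1[1] * r2[1]
    return math.degrees(math.atan2(cross, dot))

def o_leg_K_left(asis, knee_center, ankle_center) -> float:
    # Item (8), left leg: clockwise positive (the convention of signed_rotation_deg).
    r1 = (knee_center[0] - asis[0], knee_center[1] - asis[1])
    r2 = (ankle_center[0] - knee_center[0], ankle_center[1] - knee_center[1])
    return signed_rotation_deg(r1, r2)

def x_leg_K_left(asis, knee_center, ankle_center) -> float:
    # Item (9), left leg: counterclockwise positive, i.e. the opposite sign.
    return -o_leg_K_left(asis, knee_center, ankle_center)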

Claims (9)

1. A computer-vision posture analysis method based on deep learning, characterized in that it is divided into two steps:
Step 1: automatic localization of human skeletal key points based on deep learning, in which certain key points on the human contour edge, the Direct Recognition points, are located first, and the body key points are then located relative to them;
the Direct Recognition points and facial key points are located by a deep-learning method, in the following steps:
preparing training data: manually annotating the positions of the Direct Recognition points and facial key points on human body images, with no fewer than 100,000 front-view images and 100,000 side-view images;
training a deep-learning model: using the prepared training data to train a deep neural network and tuning it to the highest accuracy;
locating the Direct Recognition points: inputting a front-view or side-view human body image into the trained deep neural network, which locates the Direct Recognition points and facial key points in it to the accuracy required in advance;
Step 2: abnormal-posture assessment based on the human skeletal key points;
assessing the subject's degree of postural abnormality from the above skeletal key points and Direct Recognition points;
for each abnormal-posture item, an "interval grading method" is used to evaluate its degree of abnormality R, as follows:
for a given abnormal-posture item:
Step 1: compute the "posture item key value K" and determine the "expected interval" (0, A), where A is the maximum acceptable value of the key value for a subject judged normal; once the key value exceeds A, the item is judged abnormal;
Step 2: the grading scale for the degree of abnormality R is: (0, 60] normal, (60, 80] mildly abnormal, (80, 100] severely abnormal;
Step 3: quantification formulas:
if K ≤ A, R = 60 * K / A;
if A < K ≤ 2A, R = 20 * (K - A) / A + 60;
if 2A < K, R = 20 * Sigmoid(S * (K - 2A)) + 80;
where Sigmoid(x) = 1 / (1 + e^(-x)) and S is a sensitivity coefficient.
2. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that the input data required by the deep-learning skeletal key point localization method are:
(1) a front-view photo of the subject standing normally and relaxed, i.e. photo A;
(2) the tilt angle of the imaging device when photo A was taken, i.e., in the plane perpendicular to the shooting direction, the angle between the downward direction of the photo and the direction of gravity;
(3) a side-view photo of the subject standing normally and relaxed, i.e. photo B;
(4) the tilt angle of the imaging device when photo B was taken, defined as above;
and the output data of the method are a numerical representation of the severity of the subject's abnormal postures.
3. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that the posture risks include: forward head posture, head tilt, hunchback, uneven shoulders, scoliosis, pelvic tilt, knee hyperextension, O-shaped legs, and X-shaped legs.
4. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that in the deep-learning skeletal key point localization method, the body key points are located by a Relative localization method: certain key points on the human contour edge, the Direct Recognition points, are located first, and the body key points are then located relative to them.
5. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that the human skeletal key points comprise facial key points and body key points; the facial key points include: the external auditory meatus, earlobe, eyes, and nose; the body key points include: the sternal angle, acromion, xiphoid process, anterior superior iliac spine, knee joint center, ankle joint center, elbow joint center, and wrist joint center.
6. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that the Direct Recognition points are points on the human contour edge in the image that are easy to identify and locate; the Direct Recognition points of the front-view image include: the neck-trunk inflection point, shoulder inflection point, armpit, upper-arm-forearm inflection point, thigh-trunk inflection point, crotch point, forearm-hand inflection point, thigh-calf inflection point, and calf-foot inflection point.
7. The computer-vision posture analysis method based on deep learning according to claims 1 and 4, characterized in that the human skeletal key points in the front-view image are located as follows:
(1) Acromion: take the body boundary curve from the "shoulder inflection point" to the "neck-trunk inflection point"; the point whose curve length from the shoulder inflection point is 36% of the total curve length is the "acromion";
(2) Sternal angle: let A be the midpoint of the line joining the left and right shoulder inflection points and B the midpoint of the line joining the left and right armpits; the sternal angle lies on segment AB, at a distance from A equal to 20% of the length of AB;
(3) Xiphoid process: with A and B as above, the xiphoid process lies on the extension of line AB, at a distance from B equal to 40% of the length of AB;
(4) Anterior superior iliac spine (ASIS): the left ASIS lies on the line joining the left "thigh-trunk inflection point" and the "crotch point", at a distance from the left "thigh-trunk inflection point" equal to 2% of the line length; the right side is analogous;
(5) Knee joint center: the left knee joint center is located as follows: from the left thigh-calf inflection point, draw a horizontal line to the right until it intersects the inner edge of the left leg; the midpoint of that segment is the left knee joint center; the right side is analogous;
(6) Ankle joint center, elbow joint center, wrist joint center: located in the same way as the knee joint center.
8. The computer-vision posture analysis method based on deep learning according to claims 1 and 4, characterized in that in the side-view image the human skeletal key points use the Direct Recognition points directly.
9. The computer-vision posture analysis method based on deep learning according to claim 1, characterized in that for each abnormal-posture item the preset parameter values and the calculation method of K are as follows (the tilt angle of the imaging device when the front-view photo was taken is denoted α, and the tilt angle when the side-view photo was taken is denoted β):
(1) Forward head posture: A = 3, S = 10
Calculation of K:
In the side view: connect the "external auditory meatus" and the "earlobe" and let M be the midpoint of that segment; connect the "rear back apex point" and M, and let θ be the angle between that segment and the vertical;
K = θ - 25.3 - β;
(2) Head tilt: A = 2, S = 20
Calculation of K:
In the front view: connect the left and right earlobes, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(3) Hunchback: A = 8, S = 20
Calculation of K:
In the side view: connect the "rear back apex point" and the "rear neck inflection point", and let θ be the angle between that segment and the vertical;
K = θ - 18.2 - β;
(4) Uneven shoulders: A = 2, S = 10
Calculation of K:
In the front view: connect the left acromion and the sternal angle, and let θ1 be the angle between that segment and the horizontal; connect the right acromion and the sternal angle, and let θ2 be the angle between that segment and the horizontal;
K = |θ1 - θ2 - 2α|;
(5) Scoliosis: A = 1.1, S = 20
Calculation of K:
In the front view: connect the left "acromion" and the left "anterior superior iliac spine", and let L1 be the segment length; connect the right "acromion" and the right "anterior superior iliac spine", and let L2 be the segment length;
K = Max(L1, L2) / Min(L1, L2);
(6) Pelvic tilt: A = 2, S = 15
Calculation of K:
In the front view: connect the left and right anterior superior iliac spines, and let θ be the angle between that segment and the horizontal;
K = θ - α;
(7) Knee hyperextension: A = 1.5, S = 20
Calculation of K:
In the side view: let R1 be the vector from the "rear calf-foot inflection point" to the "rear thigh-calf inflection point"; let R2 be the vertically upward vector; let θ be the angle of rotation from R2 to R1, positive clockwise and negative counterclockwise;
K = θ - β;
(8) O-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, positive clockwise and negative counterclockwise; for the right leg the same, but with clockwise negative and counterclockwise positive;
K = θ;
(9) X-shaped legs: A = 2, S = 20
Calculation of K:
In the front view: for the left leg, let R1 be the vector from the "anterior superior iliac spine" to the "knee joint center", and R2 the vector from the "knee joint center" to the "ankle joint center"; let θ be the angle of rotation from R1 to R2, negative clockwise and positive counterclockwise; for the right leg the same, but with clockwise positive and counterclockwise negative;
K = θ.
CN201810884943.3A 2018-08-06 2018-08-06 Computer vision Postural Analysis method based on deep learning Pending CN109674477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810884943.3A CN109674477A (en) 2018-08-06 2018-08-06 Computer vision Postural Analysis method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810884943.3A CN109674477A (en) 2018-08-06 2018-08-06 Computer vision Postural Analysis method based on deep learning

Publications (1)

Publication Number Publication Date
CN109674477A true CN109674477A (en) 2019-04-26

Family

ID=66184436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810884943.3A Pending CN109674477A (en) 2018-08-06 2018-08-06 Computer vision Postural Analysis method based on deep learning

Country Status (1)

Country Link
CN (1) CN109674477A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
CN112070031A (en) * 2020-09-09 2020-12-11 中金育能教育科技集团有限公司 Posture detection method, device and equipment
CN112107318A (en) * 2020-09-24 2020-12-22 自达康(北京)科技有限公司 Physical activity ability assessment system
WO2021179230A1 (en) * 2020-03-12 2021-09-16 南方科技大学 Scoliosis detection model generating method and computer device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. Abobakr et al.: "RGB-D human posture analysis for ergonomic studies using deep convolutional neural network", Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC) *
Adankon MM: "Scoliosis follow-up using noninvasive trunk surface acquisition", IEEE Trans Biomed Eng *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110495889A (en) * 2019-07-04 2019-11-26 平安科技(深圳)有限公司 Postural assessment method, electronic device, computer equipment and storage medium
WO2021000401A1 (en) * 2019-07-04 2021-01-07 平安科技(深圳)有限公司 Posture assessment method, electronic apparatus, computer device, and storage medium
WO2021179230A1 (en) * 2020-03-12 2021-09-16 南方科技大学 Scoliosis detection model generating method and computer device
CN112070031A (en) * 2020-09-09 2020-12-11 中金育能教育科技集团有限公司 Posture detection method, device and equipment
CN112107318A (en) * 2020-09-24 2020-12-22 自达康(北京)科技有限公司 Physical activity ability assessment system
CN112107318B (en) * 2020-09-24 2024-02-27 自达康(北京)科技有限公司 Physical activity ability evaluation system


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
CB03 Change of inventor or designer information

Inventor after: Ding Zheng

Inventor after: Li Zhinan

Inventor after: An Senwen

Inventor after: Li Qin

Inventor before: Li Zhinan

Inventor before: Ding Zheng

Inventor before: An Senwen

Inventor before: Li Qin

CB03 Change of inventor or designer information
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190426

WD01 Invention patent application deemed withdrawn after publication