CN106022378B - Sitting posture judgment method and cervical spondylosis identification system based on camera and pressure sensors - Google Patents

Sitting posture judgment method and cervical spondylosis identification system based on camera and pressure sensors

Info

Publication number
CN106022378B
CN106022378B CN201610343035.4A CN201610343035A
Authority
CN
China
Prior art keywords
sitting posture
user
face
current
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610343035.4A
Other languages
Chinese (zh)
Other versions
CN106022378A (en)
Inventor
朱卫平
尹韶升
刘国檩
邵泽宇
周旺
夏天一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201610343035.4A priority Critical patent/CN106022378B/en
Publication of CN106022378A publication Critical patent/CN106022378A/en
Application granted granted Critical
Publication of CN106022378B publication Critical patent/CN106022378B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a cervical spondylosis identification method based on a camera and pressure sensors, comprising: S1, collecting test samples of the user in different sitting postures with a camera and pressure sensors; S2, recognizing the test samples with a face classifier and a sitting-posture classifier respectively, and obtaining the accuracy of each classifier; S3, adjusting the weights of the face classifier and the sitting-posture classifier; S4, collecting the user's current face image and current pressure data; S5, identifying the probability of each sitting posture from the current face image with the face classifier and from the current pressure data with the sitting-posture classifier, and computing the combined probability of each posture, the posture with the largest combined probability being the user's current sitting posture; S6, separately counting the time the user spends in the standard and non-standard postures over a predetermined period, and determining whether the user is at risk of cervical spondylosis. The invention combines facial images with body pressure to judge neck posture, increasing judgment accuracy.

Description

Sitting posture judgment method and cervical spondylosis identification system based on camera and pressure sensors
Technical field
The present invention relates to the technical fields of image recognition and pressure distribution, and in particular to a cervical spondylosis identification method based on a camera and pressure sensors.
Background technique
Cervical spondylosis is a common and very harmful chronic disease that is difficult to cure. It develops gradually, progressing from mild to severe; if long-term monitoring can be used for prevention and early diagnosis, patients can be examined in time and treatment is not delayed.
Camera- and pressure-sensor-based methods for preventing and diagnosing cervical spondylosis must judge head movement from images. Image recognition is typically used to accurately locate salient features on the face and in the background, and head movement is judged from the relationship (usually distance and angle) between the located facial and background features. However, head-movement judgment still faces the following technical bottlenecks:
(1) Because many targets must be located and hardware is limited, positioning is slow, and delays occur under normal conditions;
(2) Salient background features used as reference points are generally selected by feature value, so they are not guaranteed to be stationary; if a selected reference point moves during judgment, the result is erroneous;
(3) When the user's head turns away from the camera and then turns back, judgment becomes inaccurate or even impossible.
At present, multiple cameras can be used for omnidirectional recognition to solve these problems, but such arrangements are difficult to deploy and costly.
Summary of the invention
To solve the technical problems mentioned in the background, the present invention provides an efficient and accurate method for preventing and diagnosing cervical spondylosis based on a camera and pressure sensors.
To solve the above technical problems, the present invention adopts the following technical solutions:
One. An image-based motion-capture sitting posture judgment method, comprising:
S1: A camera collects facial images of the user in different sitting postures as training samples;
S2: A random forest is trained with the training samples and the sitting posture corresponding to each sample, yielding a face classifier;
S3: The camera collects the user's current face image; the face classifier obtains the facial-feature coordinates, i.e., the center-point coordinates of the left eye, right eye, and mouth, from the current face image; the average of these coordinates is computed and recorded as the facial average coordinate;
S4: The user's current sitting posture is identified from the facial-feature coordinates in the current face image. This step further comprises:
4.1: The face classifier obtains the facial-feature coordinates, i.e., the facial standard coordinates, from the training samples of the standard sitting posture; the average of the facial standard coordinates is computed and recorded as the standard average coordinate;
4.2: The x-coordinate of the standard average coordinate is subtracted from the x-coordinate of the facial average coordinate, and the result is recorded as the first difference; the y-coordinate of the standard average coordinate is subtracted from the y-coordinate of the facial average coordinate, and the result is recorded as the second difference;
4.3: The direction in which the face in the current image has shifted relative to the standard-posture training samples is judged from (1) the magnitudes of the first and second differences, (2) the angle between the horizontal and the line joining the facial average coordinate to the standard average coordinate, and (3) the differences between the inter-feature distances in the current face image and those in the standard-posture training samples, thereby identifying the user's current sitting posture;
S5: Based on the current face image, the face classifier identifies the user's current sitting posture;
S6: The results of steps S4 and S5 are compared. If they agree, the shared result is the user's current sitting posture, and the user is advised or warned accordingly; steps S3-S6 are then executed on the next face-image frame. If they differ, steps S3-S6 are executed directly on the next frame.
The different sitting postures include the standard posture, head up, head down, head left, head right, head far from the camera, and head close to the camera.
Sub-step 4.3 is specifically:
(a) If the first and second differences are both greater than a, compute the tangent of the angle between the horizontal and the line joining the facial average coordinate to the standard average coordinate; if the tangent is greater than 1, the user's current posture is judged to be head up; otherwise it is judged to be head right;
(b) If the first difference is greater than a and the second difference is less than -a, compute the same tangent; if it is greater than 1, the posture is judged to be head down; otherwise head right;
(c) If the first difference is greater than a and the second difference lies in [-a, a], the posture is judged to be head right;
(d) If the first difference is less than -a and the second difference is greater than a, compute the same tangent; if it is greater than 1, the posture is judged to be head up; otherwise head left;
(e) If the first and second differences are both less than -a, compute the same tangent; if it is greater than 1, the posture is judged to be head down; otherwise head left;
(f) If the first difference is less than -a and the second difference lies in [-a, a], the posture is judged to be head left;
(g) If the first difference lies in [-a, a] and the second difference is greater than a, the posture is judged to be head up;
(h) If the first difference lies in [-a, a] and the second difference is less than -a, the posture is judged to be head down;
(i) If both differences lie in [-a, a], compute the distances in the standard-posture training samples between the left eye and the right eye, the left eye and the mouth, and the right eye and the mouth, recorded as standard distances 1, 2, and 3; compute the corresponding distances in the current face image, recorded as distances 1, 2, and 3; then compute the difference between each distance and its standard distance. If every distance exceeds its standard distance by more than threshold b, the posture is judged to be head close to the camera; if every distance falls short of its standard distance by more than b, head far from the camera; otherwise, the standard posture.
Thresholds a and b are empirical values related to the face-width pixel value in the standard-posture training samples, obtained by repeated testing and adjustment.
Two. A sitting posture judgment method based on pressure-distribution detection, comprising:
S1: Pressure sensor nodes are arranged on a seat cushion; the pressure value of each node is collected for the user's different sitting postures; the node pressure values and corresponding postures form training sample set D;
S2: With node pressure values as features and the different posture types as classes, the weight of each node is analyzed with the Relief or ReliefF method, and the N nodes with the largest weights are selected as effective nodes, N taking a value in the range 15-20;
S3: The pressure values of the effective nodes are collected for the different postures; the effective-node pressure values and corresponding postures form training sample set D', with which a random forest is trained to obtain the sitting-posture classifier;
S4: The effective nodes collect current pressure data, from which, using the sitting-posture classifier, the user's current sitting posture is identified.
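A minimal sketch of steps S3-S4, under the assumption that scikit-learn's `RandomForestClassifier` stands in for the random forest; the node count, pressure readings, and posture labels are synthetic values invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: readings from 20 effective pressure nodes for two
# postures (0 = standard, 1 = leaning forward). All values are invented.
rng = np.random.default_rng(0)
n_nodes = 20
X_std = rng.normal(5.0, 0.3, size=(20, n_nodes))     # standard posture
X_lean = rng.normal(5.0, 0.3, size=(20, n_nodes))
X_lean[:, :5] += 4.0                                 # leaning loads the front nodes
X = np.vstack([X_std, X_lean])                       # training sample set D'
y = np.array([0] * 20 + [1] * 20)

# Step S3: train the random forest sitting-posture classifier on D'.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Step S4: classify a current pressure reading from the effective nodes.
current = rng.normal(5.0, 0.3, size=(1, n_nodes))
current[:, :5] += 4.0                                # front-loaded reading
posture = int(clf.predict(current)[0])
probs = clf.predict_proba(current)[0]                # per-posture probabilities b_i
```

`predict_proba` is what method three below would consume as the b_i values when fusing with the face classifier.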
In step S2, analyzing the weight of each node with the Relief method further comprises:
2a.1: A sample R is drawn at random from training sample set D; the nearest sample H to R is found among training samples of the same class as R, and the nearest sample M to R among training samples of the other class;
2a.2: The initial weight of each pressure sensor node is set to 0; then, for each node in turn:
On the current node, the distance between R and H is compared with the distance between R and M. If the R-H distance is smaller, the node's weight is increased, the increment being the R-M distance on the current node; otherwise, the node's weight is decreased, the decrement being the R-H distance on the current node;
2a.3: If the difference between the variance of the current weights of all sensor nodes and that of the previous weights is less than a preset threshold, sub-step 2a.4 is executed; otherwise, sub-step 2a.1 is repeated;
2a.4: The N nodes with the largest weights are selected as effective nodes.
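Sub-steps 2a.1-2a.4 can be sketched as follows. Two simplifications are assumed: a fixed iteration count replaces the variance-based stopping test of sub-step 2a.3, and the function name and data layout are invented for illustration.

```python
import math
import random

def relief_weights(D, n_iter=200, seed=0):
    """Patent-style Relief for a two-class posture problem.

    D is a list of (pressure_vector, posture_label) pairs; returns one
    weight per pressure node.
    """
    rng = random.Random(seed)
    n_nodes = len(D[0][0])
    w = [0.0] * n_nodes                      # sub-step 2a.2: initial weights 0
    for _ in range(n_iter):
        R, cls = rng.choice(D)               # sub-step 2a.1: draw sample R
        H = min((x for x, c in D if c == cls and x is not R),
                key=lambda x: math.dist(x, R))   # nearest same-class sample
        M = min((x for x, c in D if c != cls),
                key=lambda x: math.dist(x, R))   # nearest other-class sample
        for j in range(n_nodes):             # per-node update of 2a.2
            d_hit = abs(R[j] - H[j])         # "distance" on node j = pressure gap
            d_miss = abs(R[j] - M[j])
            if d_hit < d_miss:
                w[j] += d_miss               # node helps separate the classes
            else:
                w[j] -= d_hit                # node confuses the classes
    return w
```

Sub-step 2a.4 then amounts to `sorted(range(len(w)), key=w.__getitem__, reverse=True)[:N]`.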
In step S2, analyzing the weight of each node with the ReliefF method further comprises:
2b.1: A sample R is drawn at random from training sample set D; the k nearest samples to R among training samples of the same class are found and recorded as Set(H); the k nearest samples among training samples of the other classes are found and recorded as Setc(M);
2b.2: The initial weight of each pressure sensor node is set to 0; then, for each node in turn:
On the current node, the distance between R and Set(H) is compared with the distance between R and Setc(M). If the R-Set(H) distance is smaller, the node's weight is increased, the increment being the sum of the distances on the current node between R and each sample Hi in Set(H); otherwise, the node's weight is decreased, the decrement being the weighted sum of the distances on the current node between R and each sample Mci in Setc(M), where the weight of the R-Mci distance is the proportion that samples of Mci's class form of all samples in Setc(M);
2b.3: If the difference between the variance of the current weights of all sensor nodes and that of the previous weights is less than a preset threshold, sub-step 2b.4 is executed; otherwise, sub-step 2b.1 is repeated;
2b.4: The N nodes with the largest weights are selected as effective nodes.
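A sketch of the ReliefF variant above, under two stated assumptions: a fixed iteration count replaces the variance test of sub-step 2b.3, and because the translated text is ambiguous about which quantity is the increment and which the decrement, the update mirrors the Relief rule (reward by the miss distances, penalise by the hit distances).

```python
import math
import random

def relieff_weights(D, k=3, n_iter=200, seed=0):
    """Patent-style ReliefF for multi-class posture data.

    D is a list of (pressure_vector, posture_label) pairs; returns one
    weight per pressure node.
    """
    rng = random.Random(seed)
    n_nodes = len(D[0][0])
    w = [0.0] * n_nodes                      # sub-step 2b.2: initial weights 0
    for _ in range(n_iter):
        R, cls = rng.choice(D)               # sub-step 2b.1: draw sample R
        hits = sorted((x for x, c in D if c == cls and x is not R),
                      key=lambda x: math.dist(x, R))[:k]         # Set(H)
        misses = sorted(((x, c) for x, c in D if c != cls),
                        key=lambda xc: math.dist(xc[0], R))[:k]  # Setc(M)
        # Weight of each miss distance: its class's share of Setc(M).
        share = {c: sum(1 for _, cc in misses if cc == c) / len(misses)
                 for _, c in misses}
        for j in range(n_nodes):
            d_hit = sum(abs(R[j] - h[j]) for h in hits)
            d_miss = sum(share[c] * abs(R[j] - m[j]) for m, c in misses)
            if d_hit < d_miss:
                w[j] += d_miss
            else:
                w[j] -= d_hit
    return w
```

Compared with `relief_weights`, the only changes are the k-neighbour sets and the class-proportion weighting of the miss distances.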
Three. A cervical spondylosis identification method based on camera and pressure sensors, comprising:
S1: The camera and the pressure sensors respectively collect facial images and effective-node pressure values of the user in different sitting postures; the facial images, effective-node pressure values, and corresponding postures constitute the test samples;
S2: The test samples are recognized with the face classifier obtained in step S2 of claim 1 and the sitting-posture classifier obtained in step S3 of claim 4; the percentage of correctly recognized samples out of the total test samples, i.e., the accuracy, is computed for each classifier;
S3: The classifier weights are adjusted according to the accuracies: if the face classifier's accuracy is lower than the sitting-posture classifier's, the face classifier's weight is lowered by a preset adjustment value and the sitting-posture classifier's weight raised; conversely, the face classifier's weight is raised and the sitting-posture classifier's lowered by the preset value. The initial weights of both classifiers are 0.5; the adjustment value is set manually from experience;
S4: The camera collects the user's current face image, and the effective nodes collect the user's current pressure data;
S5: The face classifier identifies the probability of each sitting posture from the current face image, and the sitting-posture classifier identifies the probability of each sitting posture from the current pressure data; the combined probability of each posture is computed as c_i = m·a_i + n·b_i, and the posture with the largest combined probability is the user's current sitting posture. Here m and n are the weights of the face classifier and the sitting-posture classifier, and a_i and b_i are the probabilities of the i-th posture given by the face classifier and the sitting-posture classifier respectively;
S6: The durations the user spends in standard and non-standard postures over a predetermined period are counted separately, and the ratio of non-standard-posture duration to the period is computed; when the ratio reaches a specified level, the user is judged to be at risk of cervical spondylosis.
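Steps S5-S6 reduce to a few lines. The 0.6 risk threshold and the encoding of the standard posture as index 0 are invented placeholders, not values from the patent.

```python
def fuse_posture(face_probs, pressure_probs, m, n):
    """Step S5: combined probability c_i = m*a_i + n*b_i; returns the
    index of the posture with the largest combined probability."""
    combined = [m * a + n * b for a, b in zip(face_probs, pressure_probs)]
    return max(range(len(combined)), key=combined.__getitem__)

def spondylosis_risk(postures, standard=0, threshold=0.6):
    """Step S6: fraction of the predetermined period spent in a
    non-standard posture; flags risk when it reaches the threshold.
    `postures` is the per-frame posture index over the period."""
    ratio = sum(p != standard for p in postures) / len(postures)
    return ratio, ratio >= threshold
```

With equal classifier weights m = n = 0.5, the fusion is a plain average of the two probability vectors; the weight adjustment of step S3 shifts the average toward the more accurate classifier.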
When the user is judged to be at risk of cervical spondylosis, whether the risk is real is further determined from the user's level of cervical performance, specifically including:
(1) The user bows the head as far as possible; the camera collects the user's face image, obtains the depression angle from it, and compares the angle with a predetermined normal value;
(2) The user raises the head as far as possible; the camera collects the user's face image, obtains the elevation angle from it, and compares the angle with a set normal value;
(3) The user tilts the head to each side in turn as far as possible; the camera collects the user's face images, obtains the tilt angles, compares them with a set normal value, and also checks whether the two tilt angles are equal;
(4) The user turns the head to each side in turn as far as possible; the camera collects the user's face images and obtains the head-rotation trajectory;
(5) The user shrugs; the camera collects the user's face image and obtains the distance from the user's shoulder line to the edge of the chin;
(6) The user rolls the shoulders; the camera collects the user's face image;
(7) The user bends the wrists and elbows inward as far as possible; the camera collects the user's face image and obtains the bending arc;
(8) The user performs a chest-expansion movement; the camera collects the user's face image and obtains the diameter of the largest circle traced by the arm about the shoulder during the movement;
(9) The user rolls the head clockwise and then counterclockwise; the camera collects the user's face images and obtains the head-rolling frequency.
Four. A cervical spondylosis identification system based on camera and pressure sensors, comprising:
a video monitoring device, a data transmission module, pressure sensor nodes set on a seat cushion, a processing unit, and a warning system; the video monitoring device and the warning system are both connected to the processing unit, and the pressure sensor nodes are connected to the processing unit through the data transmission module;
The video monitoring device collects the video stream of the user's face; the pressure sensor nodes collect the user's pressure information; the processing unit identifies the user's current sitting posture from the data collected by the video monitoring device and the pressure sensor nodes, computes the ratio of non-standard-posture duration to the predetermined period, and determines whether the user is at risk of cervical spondylosis; the warning system issues a warning when the user is judged to be at risk.
In contrast to approaches that must recognize many targets, the present invention only needs to recognize the eyes and the mouth, which saves a considerable amount of computation, and in actual testing an optimal detection frequency is used so that the system runs faster. In addition, to monitor the user's state and track motion when the user does not face the screen, the invention also trains with the random forest method on each user's individual samples to achieve accurate monitoring.
In the present invention, the facial image information collected by the camera and the body pressure measurements collected by the pressure sensors are combined to judge neck posture, increasing judgment accuracy.
Detailed description of the invention
Fig. 1 is a structural diagram of the system of the invention;
Fig. 2 is an operating flowchart of the system of the invention;
Fig. 3 is the weight map of ReliefF;
Fig. 4 is the node deployment diagram.
Specific embodiment
The specific embodiments of the three technical solutions of the invention are described in detail below.
One. The image-based motion-capture sitting posture judgment method, with the following specific steps:
Step 1: Collect training samples.
Images of the user in different sitting postures are collected with the camera as training samples; each image covers the user from the shoulders up. In this embodiment, seven postures are collected: standard, head up, head down, head left, head right, head far from the camera, and head close to the camera. The sampling rate is 24 frames per second, 1000 samples are collected per posture, and sampling takes 8-10 minutes.
Step 2: Train a random forest with the training samples from step 1 and each sample's corresponding posture to obtain the face classifier.
The random forest method builds a forest of several mutually independent decision trees in a random fashion. Once the forest is built, each input training sample (the current sample) is judged by every decision tree in the forest, and the posture chosen by the most trees is taken as the sample's posture.
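The majority vote described here can be illustrated independently of any concrete forest implementation: each "tree" below is a placeholder function, not a trained decision tree.

```python
from collections import Counter

def forest_vote(trees, sample):
    """The posture chosen by the most trees is the forest's judgment."""
    votes = Counter(tree(sample) for tree in trees)
    return votes.most_common(1)[0][0]

# Three toy "trees", each thresholding one facial-feature value;
# the thresholds and labels are invented for illustration.
trees = [
    lambda s: "standard" if s[0] < 10 else "head right",
    lambda s: "standard" if s[1] < 10 else "head up",
    lambda s: "head right",                 # a deliberately noisy tree
]
```

Because the trees vote independently, a single noisy tree is outvoted whenever the other trees agree.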
Step 3: The camera collects the user's current image in real time, and the face classifier obtains, in real time, the coordinates of the facial features in the current image; the facial features here are the left eye, the right eye, and the mouth. In the present invention, three circles centered on the left eye, right eye, and mouth represent their positions, and the circle centers serve as the left-eye, right-eye, and mouth coordinates; the facial-feature coordinates thus comprise the coordinates of the left eye, right eye, and mouth.
Step 4: Identify the user's current sitting posture from the facial-feature coordinates in the current image.
This step further comprises:
(1) The facial-feature coordinates in the training samples of the standard posture serve as the facial standard coordinates.
(2) The x-coordinates of the left eye, right eye, and mouth in the current image are added and divided by 3, and likewise the y-coordinates; the resulting point is recorded as the facial average coordinate. The same averaging of the left-eye, right-eye, and mouth standard coordinates gives the standard average coordinate.
(3) The x-coordinate of the standard average coordinate is subtracted from the x-coordinate of the facial average coordinate, and the result is recorded as the first difference; the y-coordinate of the standard average coordinate is subtracted from the y-coordinate of the facial average coordinate, and the result is recorded as the second difference.
The user's current sitting posture is then judged from the first and second differences, specifically:
(a) If the first and second differences are both greater than a, compute the tangent of the angle between the horizontal and the line joining the facial average coordinate to the standard average coordinate; if the tangent is greater than 1, the user's current posture is judged to be head up; otherwise it is judged to be head right;
(b) If the first difference is greater than a and the second difference is less than -a, compute the same tangent; if it is greater than 1, the posture is judged to be head down; otherwise head right;
(c) If the first difference is greater than a and the second difference lies in [-a, a], the posture is judged to be head right;
(d) If the first difference is less than -a and the second difference is greater than a, compute the same tangent; if it is greater than 1, the posture is judged to be head up; otherwise head left;
(e) If the first and second differences are both less than -a, compute the same tangent; if it is greater than 1, the posture is judged to be head down; otherwise head left;
(f) If the first difference is less than -a and the second difference lies in [-a, a], the posture is judged to be head left;
(g) If the first difference lies in [-a, a] and the second difference is greater than a, the posture is judged to be head up;
(h) If the first difference lies in [-a, a] and the second difference is less than -a, the posture is judged to be head down;
(i) If both differences lie in [-a, a], compute the distances in the standard-posture training samples between the left eye and the right eye, the left eye and the mouth, and the right eye and the mouth, recorded as standard distances 1, 2, and 3; compute the corresponding distances in the current image, recorded as distances 1, 2, and 3; then compute the difference between each distance and its standard distance. If every distance exceeds its standard distance by more than threshold b, the posture is judged to be head close to the camera; if every distance falls short of its standard distance by more than b, head far from the camera; otherwise, the standard posture.
Threshold value a and b are empirical value related with face width pixel value in standard sitting posture training sample, pass through test of many times Adjustment obtains.In the present embodiment, threshold value a is set as the 1/3 of face width pixel value;Threshold value b is the 1/4 of face width pixel value.
In the present invention coordinate use absolute coordinate, used coordinate system are as follows: using the image upper left corner as origin, horizontally to the right for Positive direction of the x-axis is straight down positive direction of the y-axis, and coordinate unit length is pixel.
Step 5: Based on the current image, the face classifier identifies the user's current sitting posture.
Step 6: The results of steps 4 and 5 are compared. If they agree, the shared result is the user's current sitting posture, and the user is advised or warned accordingly; steps 3-6 are then executed on the next frame. If they differ, the current image is skipped and steps 3-6 are executed on the next frame.
II. The sitting posture judgment method based on pressure-distribution detection comprises the following steps:
Step 1: arrange pressure sensor nodes on a cushion and collect the pressure value of every node under different sitting postures; the node pressure values and the corresponding sitting postures form training sample set D.
Step 2: taking the node pressure values as features and the sitting posture types as classes, analyze the weight of each pressure sensor node with the Relief or ReliefF method, and select the N largest-weight nodes as effective nodes; N is preferably 15~20.
The Relief method serves for two-class sitting posture classification. Analyzing the node weights with the Relief method proceeds as follows:
2a.1 Randomly select a sample R from training set D; find R's nearest sample H among the training samples of the same class as R, and R's nearest sample M among the training samples of the other class;
2a.2 For each pressure sensor node in turn:
On the current node, compare the distance between R and H with the distance between R and M. If the R-H distance is smaller than the R-M distance, the current node helps discrimination, so increase its weight; otherwise, decrease its weight.
At every weight update, the increment is the R-M distance on the current node and the decrement is the R-H distance on the current node.
The initial weight of every node is set to 0. In this step, the distance between a sample and a nearest sample may be represented by their pressure difference at the current node.
2a.3 Judge whether the difference between the variance of the current weights of all sensor nodes and that of the previous weights is below a preset threshold. If so, the node weights are considered to no longer change sharply, that is, they have converged; execute sub-step 2a.4. Otherwise, re-execute sub-step 2a.1;
2a.4 Take the N largest-weight nodes as effective nodes; pressure sensors are arranged only at the effective nodes.
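A minimal sketch of the Relief weighting above, assuming the pressure samples are given as a NumPy array; a fixed iteration count stands in for the variance-based convergence test of sub-step 2a.3, and the per-node distance is the absolute pressure difference, as in the text:

```python
import numpy as np

def relief_weights(X, y, n_iter=100, seed=0):
    """Relief node weighting (sub-steps 2a.1-2a.2) for two classes.
    X: (n_samples, n_nodes) pressure values; y: class labels."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])                 # initial weight of every node is 0
    for _ in range(n_iter):
        i = int(rng.integers(len(X)))
        R = X[i]
        d = np.linalg.norm(X - R, axis=1)    # distances to R over all nodes
        hits = np.where(y == y[i])[0]
        hits = hits[hits != i]               # exclude R itself
        H = X[hits[np.argmin(d[hits])]]      # nearest same-class sample
        misses = np.where(y != y[i])[0]
        M = X[misses[np.argmin(d[misses])]]  # nearest other-class sample
        # per node: increase the weight by |R - M|, decrease it by |R - H|
        w += np.abs(R - M) - np.abs(R - H)
    return w

def effective_nodes(w, N):
    """Sub-step 2a.4: indices of the N largest-weight nodes."""
    return np.argsort(w)[::-1][:N]
```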
The ReliefF method serves for classification of two or more sitting postures. Analyzing the node weights with the ReliefF method proceeds as follows:
2b.1 Randomly select a sample R from training set D; find R's k nearest samples among the training samples of the same class, denoted Set(H), and R's k nearest samples among the training samples of each class c different from R's, denoted Setc(M);
2b.2 For each pressure sensor node in turn:
On the current node, compare the distance between R and Set(H) with the distance between R and Setc(M). If the R-Set(H) distance is smaller, the current node helps discrimination, so increase its weight; otherwise, decrease its weight.
At every weight update, the increment is the weighted sum of the distances on the current node between R and each sample Mci in Setc(M), where the weight of the R-Mci distance is the proportion of samples of the same type as Mci in Setc(M) relative to the total number of samples in Setc(M); the decrement is the sum of the distances on the current node between R and each sample Hi in Set(H).
The initial weight of every node is set to 0. In this step, the distance between a sample and a k-nearest sample may be represented by their pressure difference at the current node.
2b.3 Judge whether the difference between the variance of the current weights of all sensor nodes and that of the previous weights is below a preset threshold. If so, the node weights are considered to no longer change sharply, that is, they have converged; execute sub-step 2b.4. Otherwise, re-execute sub-step 2b.1;
2b.4 Take the N largest-weight nodes as effective nodes; pressure sensors are arranged only at the effective nodes.
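A corresponding sketch of the ReliefF weighting, under the same assumptions as the Relief sketch; here each miss class is weighted by its share of the miss samples, which is one reading of the Setc(M) proportion described above:

```python
import numpy as np

def relieff_weights(X, y, k=3, n_iter=100, seed=0):
    """ReliefF node weighting (sub-steps 2b.1-2b.2) for two or more
    classes.  A fixed iteration count stands in for convergence test
    2b.3; the per-node distance is the absolute pressure difference."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    w = np.zeros(X.shape[1])                     # every node starts at weight 0
    for _ in range(n_iter):
        i = int(rng.integers(len(X)))
        R = X[i]
        d = np.linalg.norm(X - R, axis=1)        # distances to R over all nodes
        hits = np.where(y == y[i])[0]
        hits = hits[hits != i]
        hits = hits[np.argsort(d[hits])][:k]     # Set(H): k nearest hits
        w -= np.abs(X[hits] - R).sum(axis=0)     # hits decrease the weight
        misses = np.where(y != y[i])[0]
        classes, counts = np.unique(y[misses], return_counts=True)
        for c, cnt in zip(classes, counts):
            mc = np.where(y == c)[0]
            mc = mc[np.argsort(d[mc])][:k]       # Setc(M): k nearest of class c
            w += (cnt / len(misses)) * np.abs(X[mc] - R).sum(axis=0)
    return w
```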
Step 3: collect the pressure value of every effective node under different sitting postures; the effective-node pressure values and the corresponding sitting postures form training sample set D'. Train a random forest with D' to obtain the sitting posture classifier, which outputs the user's current sitting posture in real time from real-time pressure information by voting.
Step 4: collect the current pressure data of the effective nodes and, based on them, identify the user's current sitting posture with the sitting posture classifier.
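Steps 3~4 can be sketched with scikit-learn's `RandomForestClassifier`, whose `predict` is a majority vote over the trees and whose `predict_proba` returns per-class vote proportions. The pressure values, labels and node count below are invented for illustration; the real D' has N effective nodes and all sitting-posture classes.

```python
from sklearn.ensemble import RandomForestClassifier

# Training set D': rows of effective-node pressure values with posture labels
# (three nodes and two postures only, for brevity; numbers are made up).
D_pressures = [[12, 40, 38], [11, 42, 37],    # standard sitting posture
               [30, 10, 15], [28, 12, 14]]    # leaning forward
D_labels = ["standard", "standard", "lean_forward", "lean_forward"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(D_pressures, D_labels)                # train the sitting posture classifier

current = [[29, 11, 16]]                      # step 4: current pressure data
posture = clf.predict(current)[0]             # majority vote over the trees
per_class = clf.predict_proba(current)[0]     # vote proportions per class
```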
III. The cervical spondylosis recognition method based on camera and pressure sensors comprises the following steps:
Step 1: adjust the weights of the face classifier and the sitting posture classifier according to their accuracy in identifying sitting postures, specifically:
Collect data of the user in different sitting postures with the camera and the pressure sensors, denoted test samples; each test sample comprises a face image, the pressure data of the effective nodes and the corresponding sitting posture. In this embodiment, the face images of the 7 sitting postures are labeled a1, a2, a3, a4, a5, a6, a7 respectively, and the pressure data of the 7 sitting postures are labeled b1, b2, b3, b4, b5, b6, b7 respectively.
Identify the test samples with the face classifier and the sitting posture classifier respectively, and compute the percentage of correctly identified samples over the total number of test samples, obtaining the accuracy of each classifier.
Adjust the weights according to these accuracies: if the face classifier's accuracy is lower than the sitting posture classifier's, decrease the face classifier's weight by a preset weight adjustment value and increase the sitting posture classifier's weight; conversely, increase the face classifier's weight and decrease the sitting posture classifier's weight by the same value. In the present invention the initial weights of both classifiers are set to 0.5; the weight adjustment value is set manually, and in this embodiment it is 0.01.
Step 2: judge the user's current sitting posture type according to the adjusted weights.
The camera collects the user's current image and the effective nodes collect the user's current pressure data. The face classifier identifies the probability ai of each sitting posture from the current image, and the sitting posture classifier identifies the probability bi of each sitting posture from the current pressure data. The combined probability of each sitting posture is computed as ci = m·ai + n·bi, where m and n are the weights of the face classifier and the sitting posture classifier respectively, ai is the probability of the i-th sitting posture class identified by the face classifier from the current image, and bi is the probability of the i-th class identified by the sitting posture classifier from the current pressure data. The sitting posture with the largest combined probability is the user's current sitting posture type.
The probability of the i-th sitting posture class is the proportion of decision trees voting for the i-th class over the total number of decision trees.
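The weight adjustment of step 1 and the fusion rule of step 2 can be sketched as two small functions; passing the per-class probabilities as dicts keyed by posture name is an assumed representation:

```python
def adjust_weights(m, n, acc_face, acc_posture, step=0.01):
    """Step 1: move `step` of weight toward the more accurate classifier
    (both weights start at 0.5; step = 0.01 in this embodiment)."""
    if acc_face < acc_posture:
        return m - step, n + step
    return m + step, n - step

def fuse(face_probs, posture_probs, m, n):
    """Step 2: combined probability c_i = m*a_i + n*b_i per sitting
    posture; the class with the largest c_i is the current posture."""
    combined = {cls: m * face_probs[cls] + n * posture_probs[cls]
                for cls in face_probs}
    return max(combined, key=combined.get)
```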
Step 3: separately count the durations in which the user is in the standard sitting posture and in non-standard sitting postures within a predetermined period, and compute the proportion of the period each occupies. When the non-standard proportion reaches a specified level, determine that the user is at risk of cervical spondylosis and remind the user.
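A minimal sketch of the step 3 risk check; the `level` value is illustrative, since the patent specifies a "specified degree" without fixing a number:

```python
def cervical_risk(posture_log, period_s, level=0.5):
    """Step 3: proportion of the predetermined period spent in
    non-standard sitting postures, and whether it reaches the level.
    posture_log: list of (posture_name, duration_seconds) pairs."""
    non_standard = sum(dur for posture, dur in posture_log
                       if posture != "standard")
    ratio = non_standard / period_s
    return ratio, ratio >= level
```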
The working mode of the present invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1~2, the present system comprises a video monitoring device, a data transmission module, a cushion module, a processing unit, a warning system and a server. The video monitoring device is the computer's built-in camera or an external camera. The data transmission module uses the Bluetooth 4.0 protocol: the Bluetooth module connected to the processing unit is configured as host, the one connected to the cushion module as slave, and data are transmitted in transparent pass-through mode. The cushion module comprises the pressure sensors and an Arduino control module. The processing unit is a computer, and warnings are shown on its screen. The server is responsible for data backup and storage.
The video monitoring device obtains the video stream. Following prompts, the user performs each sitting posture in front of the device and holds it for a certain time, during which the device collects training samples; a random forest is trained on these samples to obtain the face classifier for the current user.
The video stream is split into images at an interval of 1 frame per second and input to the processing unit, where image processing is performed with the random forest model in the OpenCV library, mainly to detect the coordinates of the user's eyes and mouth in the image. The preliminary direction of head movement is judged from the eye and mouth coordinates, and the precise facial movement is then obtained by comparison with thresholds a and b. Incorrect sitting postures trigger a warning.
The system's default coordinate system takes the upper-left corner of the image as origin, with the positive x-axis horizontally to the right and the positive y-axis vertically downward, which guarantees that point coordinates on the image are positive.
Referring to Figs. 3~4, in the sitting posture judgment method based on pressure-distribution detection, 121 pressure sensor nodes are first arranged in 11 rows and 11 columns and the pressure value of each node is collected under different sitting postures; the node pressure values and the corresponding sitting posture types form training set D. Effective nodes are then chosen among the 121 nodes by the ReliefF method. In this embodiment, the 20 largest-weight nodes found by the ReliefF method serve as effective nodes, see Fig. 4.
A random forest in fact builds multiple decision trees in a random fashion and makes decisions with them: the input test data are classified by every decision tree, and the final classification is obtained by voting. In the voting, every tree produces one sitting posture type; the types produced by all trees are counted, and the type with the most votes is the final output. Random forests perform well on this data set, can handle very high-dimensional data without feature selection, train quickly, and are easily parallelized. By feeding the pressure data of the sensor nodes into the trained random forest, the user's sitting posture is obtained in real time.
When it is preliminarily judged that the user is at risk of cervical spondylosis, the system also provides cervical-vertebra health exercises for the user to perform and, according to how well the user completes the actions, further judges whether the user is at risk of cervical spondylosis:
1. Bowing the head. Action point: bow the head to the greatest extent. The camera collects images, records the bowing angle at the greatest extent and compares it with a set normal value.
2. Raising the head. Action point: raise the head to the greatest extent. The camera collects images, records the raising angle at the greatest extent and compares it with a set normal value.
3. Tilting the head. Action point: tilt the head to each side to the greatest extent. The camera collects images, records the two tilt angles, compares them with set normal values, and judges whether the two angles are equal.
4. Turning the head. Action point: turn the head to each side to the greatest extent; turning here means pushing the crown forward as far as possible while stretching the chin toward the side and rear as far as possible. The camera collects images; during the rotation the module shows the user a simulated rotation trajectory and records the user's head-turning trajectory.
5. Shrugging. Action point: one hand naturally holds the other, the neck is drawn back, and both shoulders are shrugged. The camera collects images and records the relative distance between the shoulder line and the chin edge.
6. Hunching the shoulders. Action point: hunch both shoulders forward as far as possible.
7. Bending the wrists and elbows. Action point: bend the wrists and elbows inward as far as possible. The module records the bending radian.
8. Chest expansion. The camera collects images and records, during the expansion, the diameter and trajectory of the largest circle traced by the hand with the shoulder as center.
9. Shaking the head. Action point: shake the head clockwise and counterclockwise respectively. The module records the head-shaking frequency and reminds the user to shake at the set frequency, so as to avoid shaking too fast or too slow.
Whether each of the above actions is performed in place is judged through the camera.

Claims (4)

1. An image-based motion-capture sitting posture judgment method, characterized by comprising:
S1: a camera collects face images of a user in different sitting postures as training samples;
S2: a face classifier is obtained by training a random forest with the training samples and the sitting posture corresponding to each training sample;
S3: the camera collects the user's current face image; the facial-feature coordinates, i.e. the center-point coordinates of the left eye, the right eye and the mouth, are obtained from the current face image using the face classifier, and their average, denoted the facial average coordinate, is computed;
S4: the user's current sitting posture is identified according to the facial-feature coordinates in the current face image; this step further comprises:
4.1 obtaining the facial-feature coordinates, i.e. the facial-feature standard coordinates, from the training sample of the standard sitting posture using the face classifier, and computing their average, denoted the standard average coordinate;
4.2 computing the difference between the x-axis coordinates of the standard average coordinate and the facial average coordinate, denoted the first difference, and the difference between their y-axis coordinates, denoted the second difference;
4.3 judging, according to (1) the magnitudes of the first difference and the second difference, (2) the angle between the horizontal direction and the line connecting the facial average coordinate and the standard average coordinate, and (3) the differences between the corresponding facial-feature distances in the current face image and in the standard-sitting-posture training sample, the offset direction of the face in the current face image relative to the standard-sitting-posture training sample, thereby identifying the user's current sitting posture;
sub-step 4.3 being specifically:
(a) if the first difference and the second difference are both greater than a, calculating the tangent of the angle between the horizontal direction and the line connecting the facial average coordinate and the standard average coordinate; if the tangent is greater than 1, determining that the user's current sitting posture is raising the head; otherwise, determining that it is head tilted right;
(b) if the first difference is greater than a while the second difference is less than -a, calculating the same tangent; if it is greater than 1, determining that the current sitting posture is bowing the head; otherwise, head tilted right;
(c) if the first difference is greater than a while the second difference lies in the range [-a, a], determining that the current sitting posture is head tilted right;
(d) if the first difference is less than -a while the second difference is greater than a, calculating the same tangent; if it is greater than 1, determining that the current sitting posture is raising the head; otherwise, head tilted left;
(e) if the first difference and the second difference are both less than -a, calculating the same tangent; if it is greater than 1, determining that the current sitting posture is bowing the head; otherwise, head tilted left;
(f) if the first difference is less than -a while the second difference lies in the range [-a, a], determining that the current sitting posture is head tilted left;
(g) if the first difference lies in the range [-a, a] while the second difference is greater than a, determining that the current sitting posture is raising the head;
(h) if the first difference lies in the range [-a, a] while the second difference is less than -a, determining that the current sitting posture is bowing the head;
(i) if the first difference and the second difference both lie in the range [-a, a], computing, in the standard-sitting-posture training sample, the distance between the left eye and the right eye, between the left eye and the mouth, and between the right eye and the mouth, denoted standard distance 1, standard distance 2 and standard distance 3 respectively; computing the same three distances in the current face image, denoted distance 1, distance 2 and distance 3; and computing the differences between standard distance 1 and distance 1, between standard distance 2 and distance 2, and between standard distance 3 and distance 3; if all the differences are greater than threshold b, judging that the user's current sitting posture is head close to the camera; if all the differences are less than threshold b, judging that it is head away from the camera; otherwise, judging that it is the standard sitting posture;
thresholds a and b being empirical values related to the face-width pixel value in the standard-sitting-posture training sample, obtained through repeated testing and adjustment;
S5: based on the current face image, the user's current sitting posture is identified using the face classifier;
S6: the recognition results of steps S4 and S5 are compared; if they are identical, the result is the user's current sitting posture, and the user is advised or warned according to it, after which steps S3~S6 are executed on the next frame of face image; if they differ, steps S3~S6 are directly executed on the next frame of face image.
2. The image-based motion-capture sitting posture judgment method of claim 1, characterized in that:
the different sitting postures in S1 comprise the standard sitting posture, raising the head, bowing the head, head tilted left, head tilted right, head away from the camera, and head close to the camera.
3. A cervical spondylosis identifying system based on a camera and pressure sensors, characterized by comprising:
the camera and the pressure sensors, for respectively collecting face images and effective-node pressure values of the user in different sitting postures, the face images, effective-node pressure values and corresponding sitting postures constituting test samples, and for respectively collecting the user's current face image and current pressure data;
an accuracy computing module, for identifying the test samples with the face classifier obtained in step S2 of claim 1 and with the sitting posture classifier respectively, and computing the percentage of correctly identified samples over the total number of test samples, i.e. the accuracy of the face classifier and of the sitting posture classifier;
the sitting posture classifier being obtained as follows:
collecting the pressure value of each effective node under different sitting postures, the effective-node pressure values and corresponding sitting postures constituting training sample set D', and training a random forest with D' to obtain the sitting posture classifier;
a weight adjusting module, for adjusting the weights according to the accuracies of the face classifier and the sitting posture classifier, that is: if the face classifier's accuracy is lower than the sitting posture classifier's, decreasing the face classifier's weight by a preset weight adjustment value and increasing the sitting posture classifier's weight; conversely, increasing the face classifier's weight and decreasing the sitting posture classifier's weight by the same value; the initial weights of both classifiers being set to 0.5, and the weight adjustment value being set manually according to experience;
a sitting posture probability identification module, for identifying the probability of each sitting posture from the current face image with the face classifier and from the current pressure data with the sitting posture classifier, and computing the combined probability ci = m·ai + n·bi of each sitting posture, the sitting posture with the largest combined probability being the user's current sitting posture; m and n are the weights of the face classifier and the sitting posture classifier respectively, and ai, bi respectively denote the probability of the i-th sitting posture class identified by the face classifier from the current face image and by the sitting posture classifier from the current pressure data;
a cervical spondylosis risk judgment module, for separately counting the durations in which the user is in the standard sitting posture and in non-standard sitting postures within a predetermined period, computing the proportion of the period occupied by non-standard sitting postures, and determining that the user is at risk of cervical spondylosis when that proportion reaches a specified level.
4. The cervical spondylosis identifying system based on a camera and pressure sensors of claim 3, characterized in that:
it further comprises a cervical spondylosis judgment module, for, when the user is determined to be at risk of cervical spondylosis, further determining whether the user is at risk according to the user's degree of completion of cervical-vertebra health exercises, specifically comprising:
(1) the user bows the head to the greatest extent; the depression angle is obtained from the face images collected by the camera and compared with a preset normal value;
(2) the user raises the head to the greatest extent; the elevation angle is obtained from the face images collected by the camera and compared with a set normal value;
(3) the user tilts the head to each side to the greatest extent; the tilt angles are obtained from the face images collected by the camera and compared with set normal values, and whether the two tilt angles are equal is judged;
(4) the user turns the head to each side to the greatest extent; the head-turning trajectory is obtained from the face images collected by the camera;
(5) the user shrugs; the distance between the user's shoulder line and chin edge is obtained from the face images collected by the camera;
(6) the user hunches the shoulders; face images of the user are collected by the camera;
(7) the user bends the wrists and elbows inward to the greatest extent; the bending radian is obtained from the face images collected by the camera;
(8) the user performs chest expansion; during the expansion, the diameter and trajectory of the largest circle, with the shoulder as center and the arm as diameter, are obtained from the face images collected by the camera;
(9) the user shakes the head clockwise and counterclockwise respectively; the head-shaking frequency is obtained from the face images collected by the camera.
CN201610343035.4A 2016-05-23 2016-05-23 Sitting posture judgment method and cervical spondylosis identifying system based on camera and pressure sensor Active CN106022378B (en)

Publications (2)

Publication Number Publication Date
CN106022378A CN106022378A (en) 2016-10-12
CN106022378B true CN106022378B (en) 2019-05-10



