CN113197572A - Human body work correction system based on vision - Google Patents
- Publication number
- CN113197572A CN113197572A CN202110501374.1A CN202110501374A CN113197572A CN 113197572 A CN113197572 A CN 113197572A CN 202110501374 A CN202110501374 A CN 202110501374A CN 113197572 A CN113197572 A CN 113197572A
- Authority
- CN
- China
- Prior art keywords
- action
- human body
- detected
- correction
- joint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/1116: Determining posture transitions
- A61B5/1121: Determining geometric values, e.g. centre of rotation or angular range of movement
- A61B5/1122: Determining geometric values of movement trajectories
- A61B5/1128: Measuring movement of the entire body or parts thereof using image analysis
- G06V40/20: Movements or behaviour, e.g. gesture recognition
- G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The invention belongs to the technical field of motion analysis and its application to intelligent medical first aid by computer-vision technology, and in particular discloses a vision-based human motion correction system. The system estimates the human pose by a computer-vision method to obtain joint-point information, and corrects motions in real time according to the change of the limb trajectories formed by the corresponding joint points of the action to be detected and the standard action. To produce an overall motion-analysis result, a feature-vector-based method assisted by joint-angle similarity performs error analysis, and a joint-angle trajectory diagram visualizes the motion process to obtain an overall evaluation. The system comprises the following steps: S1, an intelligent medical first-aid motion correction method based on computer vision, in which correction is performed on the human skeletal joint points obtained by pose estimation; S2, constructing a judgment basis for medical first-aid motion correction; S3, correcting the human motion; S4, motion comparison analysis.
Description
Technical Field
The invention belongs to the technical field of motion analysis and the application of computer-vision technology to intelligent medical first aid, and particularly relates to a vision-based human motion correction system.
Background
With the development of Internet technology, the role of computer vision has become increasingly prominent. Video-based comparison and analysis of human motion has attracted attention in fields such as intelligent surveillance, and it can also be applied to medical first aid. At present, medical first-aid knowledge is mainly taught by lecture and demonstration; the teaching effect is limited, and it is hard to verify whether the learned first-aid skills meet the standard. Some improper methods even amount to "rescue that injures, worse than no rescue at all": many people who apply first aid are questioned because their actions are not standard, and such mishandling poses an even greater threat to the injured. Experts therefore advise non-professionals without rescue training not to attempt rescue personally, but to call the emergency number and leave the rescue to professionals. In reality, however, basic first-aid actions are fundamental skills that a modern civilized society advocates everyone should learn.
Popularizing these first-aid actions is also part of building a civilized, healthy, and safe society. To popularize medical first-aid knowledge, teach first-aid actions, and improve the public-health system in line with the development of the information society, a computer-vision-based method can provide real-time motion detection for learners of first-aid knowledge, judging the correctness of their first-aid actions and correcting them. A learner only needs to practice the first-aid actions in front of a camera to receive corrective feedback. Computer vision is a distinctive way of acquiring information, and its vigorous development offers a new approach to human motion correction analysis and a way to solve the above problems. In the field of human-computer interaction, for example, computers are gradually coming to understand human behavior, actions, and expressions, so that they can communicate with humans in a more intelligent and convenient way. In the field of motion analysis, starting from the physical health of the elderly and the rehabilitation training of patients, such methods can analyze elderly gait for signs of illness and evaluate whether rehabilitation exercises are performed to standard.
the current adopted human body real-time action correction needs to analyze the action of a trainer by means of a camera, and extract various technical parameters of the trainer, such as the position of each main limb joint point, the speed and the angular speed of the main limb joint point in the movement process. Or detecting the positions of the key points of the human body captured by the camera in the image by adopting a convolutional neural network model in deep learning, and then correcting by combining the angle information of the bone joints;
the medical emergency action has stronger specialty and higher required correction precision, so the difficulty is higher, the research of the type has blanks at present, a special camera Kinect or an infrared camera is generally adopted in the existing similar research to obtain key points of human skeleton, the price of the external equipment is high, the requirements on indoor illumination and the like are very high, a great research space is provided for achieving popularization and use, aiming at the key point of obtaining human skeleton by adopting a neural network, the problems are the skeleton data obtaining method, the action correcting process and the action detail comparison, in computer vision, there is a very difficult task to acquire human skeletal data, mainly related to problems, people in the picture have the problems of shading, mutual contact, clothes or body type difference and the like, and the test points and the distance of the camera bring difficulty to the prediction of the joint points. Therefore, when performing the task of bone data acquisition, the following problems should be considered in essence: first, an indefinite number of people may be included in each picture, and may appear in various sizes at any one position in the picture. Secondly, the interaction between people can generate complex spatial reasoning, and the contact between people is very complicated. And, the task of estimating the human body posture also has real-time performance. In the motion correction system, it is necessary to consider how to perform real-time motion correction and how to set evaluation criteria. However, the problems are not solved well at present.
To solve the above problems, the present application provides a vision-based human motion correction system.
Disclosure of Invention
To solve the problems set forth in the background art, the invention provides a vision-based human motion correction system, which mainly addresses the timeliness and standardization problems of traditional medical first aid. The system estimates the human pose through computer vision to obtain skeletal joint-point information, preprocesses the standard action and the action to be detected according to the arm span, and then corrects the action in real time: it estimates the joint-point coordinates of the action to be detected from the corresponding limbs of the standard action and, from this estimate, judges whether the action to be detected is standard and how to correct it. In the motion-analysis stage, after the standard video action is aligned with the video action to be detected, feature vectors are obtained from the limbs formed between adjacent joint points, and a motion comparison result is obtained from the joint-angle similarity and the angle trajectory diagram.
In order to achieve this purpose, the invention provides the following technical scheme: a vision-based human motion correction system estimates the human pose by a computer-vision method to obtain joint-point information, and corrects the motion in real time according to the change of the limb trajectories formed by the corresponding joint points of the action to be detected and the standard action. To produce an overall motion-analysis result, a feature-vector-based method assisted by joint-angle similarity is used for error analysis, and a joint-angle trajectory diagram visualizes the motion process to obtain an overall evaluation. The system comprises the following steps:
S1: an intelligent medical first-aid motion correction method based on computer vision, in which correction is performed on the human skeletal joint points obtained by pose estimation;
S2: constructing a judgment basis for medical first-aid motion correction;
S3: correcting the human motion;
S4: motion comparison analysis.
Preferably, the vision-based human motion correction system of the present invention further comprises the following steps:
S11: opening the camera;
S12: collecting the standard action;
S13: obtaining the skeletal data of the standard action;
S121: collecting the action to be detected;
S131: obtaining the skeletal data of the action to be detected.
Preferably, the vision-based human motion correction system of the present invention further comprises the following steps:
S21: correcting the limb direction of the standard action in real time;
S31: correcting the action to be detected in real time;
S211: DTW alignment of the action to be detected;
S311: feature-vector-based motion evaluation;
S41: motion evaluation.
Compared with the prior art, the invention has the following beneficial effects: the human pose is estimated by a computer-vision method to obtain joint-point information, and the motion is then corrected in real time according to the change of the limb trajectories formed by the corresponding joint points of the action to be detected and the standard action; to produce an overall motion-analysis result, a feature-vector-based method assisted by joint-angle similarity performs error analysis, and a joint-angle trajectory diagram visualizes the motion process to obtain an overall evaluation.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic view of a human skeletal joint according to the present invention;
fig. 3 is a neural network structure employed in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
As shown in fig. 1, 2 and 3;
A vision-based human motion correction system estimates the human pose by a computer-vision method to obtain joint-point information, and corrects the motion in real time according to the change of the limb trajectories formed by the corresponding joint points of the action to be detected and the standard action. To produce an overall motion-analysis result, a feature-vector-based method assisted by joint-angle similarity is used for error analysis, and a joint-angle trajectory diagram visualizes the motion process to obtain an overall evaluation. The system comprises the following steps:
S1: an intelligent medical first-aid motion correction method based on computer vision, in which correction is performed on the human skeletal joint points obtained by pose estimation;
S2: constructing a judgment basis for medical first-aid motion correction;
S3: correcting the human motion;
S4: motion comparison analysis.
Wherein:
the invention relates to an intelligent medical first-aid action correction method based on computer vision, which carries out action correction on the basis of human body posture estimation to obtain human body skeletal joint points.
Obtaining image characteristics of the collected video sequence through a neural network;
Further, these features are input to the two branches of the first stage, where the network generates a set of part confidence maps S^1 = ρ^1(F) and a set of part affinity fields L^1 = φ^1(F),
where ρ^1 and φ^1 are the CNNs of the first stage and F is the image feature.
In each subsequent stage, the predictions of both branches from the previous stage are concatenated with the image features and used together to produce refined predictions, which can be expressed as:

S^t = ρ^t(F, S^(t-1), L^(t-1)), t ≥ 2
L^t = φ^t(F, S^(t-1), L^(t-1)), t ≥ 2

where ρ^t and φ^t are the CNNs at stage t.
To guide the network toward convergence, an L2 loss between the predictions and the ground-truth maps and fields is applied at each stage; at stage t, the two branches have the loss functions:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j*(p)||_2^2
f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c*(p)||_2^2

where S_j* is the ground-truth part confidence map, L_c* is the ground-truth part affinity field, and W is a binary mask over pixels p, with W(p) = 0 when the annotation is missing at pixel p; this avoids penalizing true positive predictions.
Further, the overall training objective of the network is:

f = Σ_t (f_S^t + f_L^t)

For the part confidence maps, the ground-truth map of joint j is computed from the annotated two-dimensional key points x_(j,k) in the image, where x_(j,k) denotes the j-th joint of the k-th person. The individual map S*_(j,k) generated for person k takes the value

S*_(j,k)(p) = exp(−||p − x_(j,k)||_2^2 / σ^2)

at pixel p, i.e. a normal distribution whose peak lies at x_(j,k) and whose spread is controlled by σ.
Further, the ground-truth confidence map that the network is trained to predict aggregates the individual maps by taking the maximum, so that nearby peaks remain distinct:

S_j*(p) = max_k S*_(j,k)(p)
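The per-person normal-distribution confidence map and its per-pixel maximum aggregation described above can be sketched in a few lines of pure Python. The grid size, σ, and list-of-lists representation below are illustrative choices for this sketch, not details from the patent:

```python
import math

def confidence_map(keypoints, width, height, sigma=2.0):
    """Ground-truth confidence map for one joint type.

    keypoints: list of (x, y) annotated positions, one per person.
    Each person contributes exp(-||p - x||^2 / sigma^2); the map takes
    the per-person maximum so nearby peaks stay distinct.
    """
    grid = [[0.0] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            best = 0.0
            for (x, y) in keypoints:
                d2 = (px - x) ** 2 + (py - y) ** 2
                best = max(best, math.exp(-d2 / sigma ** 2))
            grid[py][px] = best
    return grid
```

The map peaks at 1.0 exactly on each annotated key point and decays with distance.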
for the partial affinity domain, for the p-dots in the image, the partial affinity domainComprises the following steps:
wherein, use xj1,kAnd xj2,kThe true coordinates of locations j1 and j2 representing limb c of individual k, v ═ xj2,k-xj1,k)/||xj2,k-xj1,k||2Is the unit vector of the limb.
For all point sets on the limb:
0≤v·(P-xj1,k)≤lc,kand|v⊥·(P-xj1,k)|≤σl
wherein σlIs the width of the limb, /)c,k=||xj2,k-xj1,k||2Representing the length of the limb.
Further, the actual value of the partial affinity domain for point p is the average of the partial affinity domains for all persons at this point:
wherein n isc(P) represents the number of non-zero vectors.
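The rectangular limb-support test and the unit vector v above translate directly into code. This is a minimal single-person sketch (the multi-person average over n_c(p) is omitted); function and parameter names are illustrative:

```python
import math

def paf_for_limb(p, x1, x2, sigma_l=1.0):
    """Part-affinity value at pixel p for the limb from joint x1 to x2.

    Returns the unit vector v along the limb if p lies inside the limb's
    rectangular support (length l_ck along v, half-width sigma_l across
    it), else the zero vector.
    """
    lx, ly = x2[0] - x1[0], x2[1] - x1[1]
    length = math.hypot(lx, ly)          # l_ck = ||x2 - x1||
    if length == 0:
        return (0.0, 0.0)
    vx, vy = lx / length, ly / length    # unit vector v along the limb
    dx, dy = p[0] - x1[0], p[1] - x1[1]
    along = vx * dx + vy * dy            # v . (p - x1)
    perp = abs(-vy * dx + vx * dy)       # |v_perp . (p - x1)|
    if 0 <= along <= length and perp <= sigma_l:
        return (vx, vy)
    return (0.0, 0.0)
```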
To judge whether two candidate joints are connected as a limb, the part affinity field is integrated along the segment between them:

E = ∫_0^1 L_c(P(u)) · (d_j2 − d_j1) / ||d_j2 − d_j1||_2 du

with P(u) = (1 − u) d_j1 + u d_j2

where d_j1 and d_j2 are the coordinates of the two candidate human joints and P(u) is any point on the line between them; if the value of E is comparatively large, the candidate connection is a limb.
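The line integral E is approximated in practice by sampling uniformly spaced points along the segment P(u). A minimal sketch, with `paf` as a caller-supplied callable standing in for the predicted field L_c (an assumption of this sketch):

```python
import math

def association_score(paf, d1, d2, samples=10):
    """Approximate E = integral over u of paf(P(u)) . unit(d2 - d1),
    with P(u) = (1 - u) * d1 + u * d2, by uniform sampling.

    paf: callable (x, y) -> (px, py), the part-affinity vector there.
    """
    ux, uy = d2[0] - d1[0], d2[1] - d1[1]
    norm = math.hypot(ux, uy)
    if norm == 0:
        return 0.0
    ux, uy = ux / norm, uy / norm        # unit vector from d1 to d2
    total = 0.0
    for i in range(samples):
        u = i / (samples - 1)
        x = (1 - u) * d1[0] + u * d2[0]
        y = (1 - u) * d1[1] + u * d2[1]
        fx, fy = paf((x, y))
        total += fx * ux + fy * uy       # dot product with the field
    return total / samples
```

A field aligned with the segment yields a score near 1; a perpendicular or zero field yields a score near 0.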
Then, joint connection is carried out: from the confidence maps, a set of candidate connections is obtained, with a binary variable

z_(j1 j2)^(mn) ∈ {0, 1}

indicating whether the two candidate detections d_j1^m and d_j2^n can be connected as a limb.
Further, the optimal matching is obtained using the Hungarian algorithm:

max_(Z_c) E_c = max_(Z_c) Σ_m Σ_n E_mn · z_(j1 j2)^(mn)
subject to Σ_n z_(j1 j2)^(mn) ≤ 1 and Σ_m z_(j1 j2)^(mn) ≤ 1

where E_c is the total weight of the matching for limb type c, Z_c is the subset of Z for limb type c, and E_mn is the line-integral association score between the body-part candidates d_j1^m and d_j2^n; the constraints ensure that no two limbs of the same type share a joint candidate.
The final optimization problem decomposes over limb types and can be expressed as:

max_Z E = Σ_c max_(Z_c) E_c
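For the handful of candidates per joint type found in one image, the per-limb-type matching can be solved exactly by brute force. The sketch below is a stand-in for the Hungarian algorithm named above (same optimum, worse asymptotic cost) and assumes the score matrix has at most as many rows as columns:

```python
from itertools import permutations

def best_limb_matching(scores):
    """Exhaustive bipartite matching for small candidate sets.

    scores[m][n] is the association score E_mn between candidate m of
    joint type j1 and candidate n of type j2; each candidate joins at
    most one limb. Assumes len(scores) <= len(scores[0]).
    Returns (best total score, list of (m, n) pairs).
    """
    m, n = len(scores), len(scores[0])
    best, best_pairs = float("-inf"), []
    for cols in permutations(range(n), m):  # injective row -> column maps
        total = sum(scores[r][c] for r, c in enumerate(cols))
        if total > best:
            best, best_pairs = total, list(enumerate(cols))
    return best, best_pairs
```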
(II) Constructing a judgment basis for medical first-aid motion correction. The specific method is as follows: for a set of medical first-aid actions consisting of a standard action and an action to be detected, the skeletal joint-point information of the person to be detected and of the demonstrator is collected and represented by coordinates; adjacent joint points are connected to obtain the corresponding limbs; the standard action and the action to be detected are preprocessed according to the arm span; after the joint-point positions of the two sets of actions are approximately aligned, the direction and position of each limb of the action to be detected are estimated from the direction and position of the corresponding limb of the standard action, and whether the action to be detected is standard is judged from this estimate.
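One plausible reading of the arm-span preprocessing is to translate each skeleton to its centroid and scale it so the arm span equals 1, making skeletons of different body sizes comparable. The helper below, including the joint dictionary layout and key names, is a hypothetical sketch, not the patent's exact procedure:

```python
import math

def normalize_skeleton(joints, left_wrist, right_wrist):
    """Centre joints on their mean position and scale by the arm span.

    joints: dict mapping joint name -> (x, y) coordinates.
    left_wrist / right_wrist: keys of the two joints whose distance
    defines the arm span used as the scale unit.
    """
    cx = sum(x for x, _ in joints.values()) / len(joints)
    cy = sum(y for _, y in joints.values()) / len(joints)
    lw, rw = joints[left_wrist], joints[right_wrist]
    span = math.hypot(rw[0] - lw[0], rw[1] - lw[1]) or 1.0
    return {k: ((x - cx) / span, (y - cy) / span)
            for k, (x, y) in joints.items()}
```

After this step, the two skeletons can be compared limb by limb without body-size bias.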
(III) Correcting the human motion. The specific steps are as follows:
1. A real-time human motion video is acquired through a camera, and the human pose is estimated by a computer-vision method to obtain skeletal key-point information;
2. Based on the obtained joint-point information, the correction criterion constructed in stage (II) can be summarized as:

θ − ε ≤ θ' ≤ θ + ε

where θ' is the joint slope (limb direction) of the action to be detected, θ is the joint slope of the standard action, and ε is the error tolerance interval. For example, θ' may be taken as the slope of the segment from the right shoulder to the right elbow of the action to be detected, and θ as the slope of the segment from the right shoulder to the right elbow of the standard action.
3. According to the above criterion, when the error between the joint points of the action to be detected and the estimated values is within the allowable range, the action is judged to conform to the standard. Conversely, when the action to be detected is not standard, the direction of deviation can be judged from the sign (positive/negative) of the error between the joint point and the estimated value, so that a corrective prompt is given.
4. As for the corrective prompt: because limb motion is complex, real-time prompts are given for the main limb positions of the human body; the prompts run through the whole correction process, and a real-time correction strategy achieves frame-by-frame comparison and correction.
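The tolerance criterion of step 2 can be sketched as follows. This sketch substitutes the limb angle (via `atan2`) for the raw slope, a deliberate swap that avoids the infinite slope of vertical limbs; the tolerance `eps` and the signed-error return are illustrative assumptions:

```python
import math

def limb_correction(j1, j2, s1, s2, eps=0.15):
    """Check a detected limb (joint j1 -> j2) against the standard limb
    (s1 -> s2) within an angular tolerance eps (radians).

    Returns (is_standard, signed_error); the sign of the error tells
    the learner in which angular direction the limb deviates.
    """
    a_det = math.atan2(j2[1] - j1[1], j2[0] - j1[0])
    a_std = math.atan2(s2[1] - s1[1], s2[0] - s1[0])
    # wrap the difference into (-pi, pi] so the sign is meaningful
    err = math.atan2(math.sin(a_det - a_std), math.cos(a_det - a_std))
    return abs(err) <= eps, err
```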
(IV) Motion comparison analysis. The specific steps are as follows:
1. Building on the real-time correction of stage (III), a comprehensive evaluation is needed for the correction system to be practical. Raw joint coordinates alone do not support effective comparison: because the person to be detected and the standard demonstrator differ in body shape, the joint coordinates change with the action in different ways. Feature vectors carry category information, so feature-vector-based motion comparison is adopted;
2. For the adopted comparison method, the first problem is that different people perform the same action at different speeds, so dynamic time warping is used to align the standard video and the video to be detected in time;
3. From the joint-point information obtained in stage (III), feature vectors are obtained from the limbs formed between adjacent joint points; each feature vector contains position and direction information, and the motion comparison result is obtained through the joint-angle similarity and the joint-angle trajectory diagram. The trajectory diagram and the angle similarity are two complementary ways of visually representing how the human joint angles change over time during the motion, which is convenient for the user. Specifically: for motion comparison, the skeletal coordinate data cannot be used directly to build a human model, because body shape and the distance to the camera vary, so suitable feature vectors are used for modeling. Since human motion has many degrees of freedom and the joint angles carry rich motion information, the vector angle at each human joint is chosen as the main basis for describing the comparison. The judgment basis can be expressed as:
θ_(i,j) = arccos( v_i · v_j / (||v_i|| ||v_j||) )

where v_i and v_j respectively represent the feature vectors of the corresponding limbs; the cosine-similarity angle θ_(i,j) measures the similarity and difference between angles, and the correlation corr(θ_(i,j)) of the angle trajectories measures the amplitude and similarity of the limb movement.
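The angle formula and the trajectory correlation above can be sketched in pure Python. Pearson correlation is used for `corr`, a reasonable assumption since the patent does not spell out the correlation measure:

```python
import math

def joint_angle(v_i, v_j):
    """Angle between the feature vectors of two adjacent limbs."""
    dot = v_i[0] * v_j[0] + v_i[1] * v_j[1]
    ni, nj = math.hypot(*v_i), math.hypot(*v_j)
    c = max(-1.0, min(1.0, dot / (ni * nj)))  # clamp rounding error
    return math.acos(c)

def angle_series_similarity(a, b):
    """Pearson correlation of two joint-angle trajectories of equal
    length (e.g. after DTW alignment)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0
```

A correlation near 1 means the limb angle evolves with the same shape as the standard trajectory, even if the absolute amplitudes differ.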
A quantitative result is obtained through error analysis, and the corresponding angle-change trajectory is drawn as the visual expression of the overall evaluation.
In summary: a comprehensive analysis result of the human motion correction is obtained through these two different evaluation modes.
In an alternative embodiment, a vision-based human motion correction system further comprises the following steps:
S11: opening the camera;
S12: collecting the standard action;
S13: obtaining the skeletal data of the standard action;
S121: collecting the action to be detected;
S131: obtaining the skeletal data of the action to be detected.
In an alternative embodiment, a vision-based human motion correction system further comprises the following steps:
S21: correcting the limb direction of the standard action in real time;
S31: correcting the action to be detected in real time;
S211: DTW alignment of the action to be detected;
S311: feature-vector-based motion evaluation;
S41: motion evaluation.
The method uses a multi-stage iterative neural network with part confidence maps and part affinity fields to predict the positions of the human skeletal joints and the directions of the limbs, and infers the human pose with a greedy algorithm, thereby obtaining the position information of the skeletal joint points. A motion sequence can be regarded as a collection of skeletal data, so comparing two motion sequences amounts to comparing the features of those sequences; the limb directions formed by adjacent joints are used for correction. Then, after aligning the motion time series with dynamic time warping, motion similarity is judged from the feature vectors combined with cosine similarity. Finally, the motion trajectory is represented by the angle features to obtain the overall comparison result. On the basis of CNN-derived skeletal data, human motion correction is realized. The system's main functions are: recording the standard video, correcting the human motion against the standard action, and finally comparing and analyzing the motions.
Using the acquired skeletal data, the joint-point coordinates of the action to be detected are estimated from the corresponding limbs of the standard action; when the limb skeletal data and limb directions of the action to be detected are within the allowable range, the action is judged to conform to the standard, otherwise the deviation direction is determined from the sign (positive/negative) of the error between the joint point and the estimated value and the action is corrected accordingly. In the motion comparison stage, time series of different lengths are aligned with dynamic time warping, the four limb-joint angles formed by connecting eight joint points are computed in the feature-vector manner, and similarity is evaluated from the change of the angle values.
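The dynamic time warping used here to align sequences of different lengths can be sketched as a standard textbook dynamic program (not the patent's exact implementation); `dist` is a caller-supplied per-frame distance:

```python
def dtw_align(a, b, dist):
    """Dynamic time warping cost between frame sequences a and b.

    Fills the cumulative-cost table D; backtracking through D would
    yield the frame-to-frame alignment path. Returns the total cost.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = min(D[i - 1][j],      # insertion
                       D[i][j - 1],      # deletion
                       D[i - 1][j - 1])  # match
            D[i][j] = dist(a[i - 1], b[j - 1]) + step
    return D[n][m]
```

A sequence that merely repeats a frame aligns at zero cost, which is exactly the fast-versus-slow invariance the comparison stage needs.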
The invention also provides a judgment basis for correcting the movement. The content is to set a certain tolerance according to the direction of the joint point to correct. The specific method is to predict the coordinates of the joint points of the motion to be detected according to the standard motion corresponding to the arthropods. Wherein the judgment formula can be expressed as:
wherein the content of the first and second substances,θ is the standard action joint slope ∈ is the error tolerance zone.
For example:is the right shoulder to be detected for the motion,the right elbow of the motion to be detected,is the right shoulder of the standard motion,the right elbow of the standard motion.
Further, according to the above judgment criterion, when the error between a joint point of the action to be detected and its estimated value is within the allowable range, the action is judged to meet the standard. Conversely, when the action to be detected is not standard, the direction in which it deviates can be determined from the sign (positive/negative) of the error between the joint point and the estimated value, so that a correction prompt can be given.
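A minimal sketch of this tolerance judgment, assuming 2-D joint coordinates, a y-up coordinate system, and a non-vertical standard limb segment; the function name, the default ε, and the returned prompt strings are illustrative, not from the patent:

```python
def judge_limb(std_start, std_end, det_start, det_end, eps=0.05):
    # Slope theta of the standard limb segment (e.g. right shoulder -> right
    # elbow); assumes the segment is not vertical.
    theta = (std_end[1] - std_start[1]) / (std_end[0] - std_start[0])
    # Estimated coordinate of the detected end joint, propagated from the
    # detected start joint along the standard slope.
    y_hat = det_start[1] + theta * (det_end[0] - det_start[0])
    err = det_end[1] - y_hat
    if abs(err) <= eps:
        return "meets standard", err
    # The sign of the error indicates the deviation direction
    # (with y pointing up in this sketch).
    return ("limb too high" if err > 0 else "limb too low"), err
```

For instance, with a standard segment of slope 1 and a detected elbow 0.5 above the estimated position, the sketch reports "limb too high", matching the sign-based correction prompt described above.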
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (3)
1. A vision-based human body action correction system, characterized in that: the system estimates human body posture by a computer vision method to obtain human joint-point information; performs real-time action correction according to the change of the limb-segment trajectories formed by the corresponding joint points of the action to be detected and the standard action; performs error analysis with a feature-vector-based method, assisted by joint-angle similarity, so as to obtain an overall action analysis result; and visually represents the motion process with a joint-angle trajectory change diagram to obtain an overall evaluation, comprising the following steps:
s1-based on computer vision intelligent medical first aid action correction method, action correction is carried out on the basis that human body posture estimation obtains human body skeleton joint points;
s2, constructing a judgment basis for medical emergency action correction;
s3-correcting human body movement;
s4-action alignment analysis.
2. The vision-based action correction system of claim 1, further comprising the steps of:
s11-opening the camera;
s12-standard action collection;
s13-obtaining standard action bone data;
s121, acquiring actions to be detected;
s131, obtaining the bone data of the action to be detected.
3. The vision-based action correction system of claim 1, further comprising the steps of:
s21, correcting the standard action limb direction in real time;
s31, correcting the action to be detected in real time;
s211, aligning the DTW of the action to be detected;
s311-action evaluation based on feature vectors;
s41-action evaluation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110501374.1A CN113197572A (en) | 2021-05-08 | 2021-05-08 | Human body work correction system based on vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113197572A true CN113197572A (en) | 2021-08-03 |
Family
ID=77030551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110501374.1A Pending CN113197572A (en) | 2021-05-08 | 2021-05-08 | Human body work correction system based on vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113197572A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294652A (en) * | 2022-08-05 | 2022-11-04 | 河南农业大学 | Behavior similarity calculation method and system based on deep learning |
CN115880774A (en) * | 2022-12-01 | 2023-03-31 | 湖南工商大学 | Body-building action recognition method and device based on human body posture estimation and related equipment |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160047702A (en) * | 2014-10-23 | 2016-05-03 | 이진용 | Measuring method and system for 3-dimensional position of human body |
CN110210284A (en) * | 2019-04-12 | 2019-09-06 | 哈工大机器人义乌人工智能研究院 | A kind of human body attitude behavior intelligent Evaluation method |
CN111347438A (en) * | 2020-02-24 | 2020-06-30 | 五邑大学 | Learning type robot and learning correction method based on same |
CN111652192A (en) * | 2020-07-07 | 2020-09-11 | 泰州职业技术学院 | Tumble detection system based on kinect sensor |
CN111895997A (en) * | 2020-02-25 | 2020-11-06 | 哈尔滨工业大学 | Human body action acquisition method based on inertial sensor without standard posture correction |
CN111931804A (en) * | 2020-06-18 | 2020-11-13 | 南京信息工程大学 | RGBD camera-based automatic human body motion scoring method |
CN112258555A (en) * | 2020-10-15 | 2021-01-22 | 佛山科学技术学院 | Real-time attitude estimation motion analysis method, system, computer equipment and storage medium |
CN112568898A (en) * | 2019-09-29 | 2021-03-30 | 杭州福照光电有限公司 | Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image |
Non-Patent Citations (1)
Title |
---|
XIE, Hui (解辉): "Vision-based human body action correction system" (基于视觉的人体动作矫正系统), China Master's Theses Full-text Database * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018120964A1 (en) | Posture correction method based on depth information and skeleton information | |
CN111144217B (en) | Motion evaluation method based on human body three-dimensional joint point detection | |
CN106055091B (en) | A kind of hand gestures estimation method based on depth information and correcting mode | |
CN110222665B (en) | Human body action recognition method in monitoring based on deep learning and attitude estimation | |
CN108597578B (en) | Human motion assessment method based on two-dimensional skeleton sequence | |
CN112069933A (en) | Skeletal muscle stress estimation method based on posture recognition and human body biomechanics | |
CN113197572A (en) | Human body work correction system based on vision | |
CN114067358A (en) | Human body posture recognition method and system based on key point detection technology | |
CN111199207B (en) | Two-dimensional multi-human body posture estimation method based on depth residual error neural network | |
CN112435731B (en) | Method for judging whether real-time gesture meets preset rules | |
CN113920326A (en) | Tumble behavior identification method based on human skeleton key point detection | |
CN113255522A (en) | Personalized motion attitude estimation and analysis method and system based on time consistency | |
CN114550027A (en) | Vision-based motion video fine analysis method and device | |
Yang et al. | Human exercise posture analysis based on pose estimation | |
CN102156994B (en) | Joint positioning method for single-view unmarked human motion tracking | |
Yu et al. | A deep-learning-based strategy for kidnapped robot problem in similar indoor environment | |
CN109993116A (en) | A kind of pedestrian mutually learnt based on skeleton recognition methods again | |
CN111539364B (en) | Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting | |
CN116704603A (en) | Action evaluation correction method and system based on limb key point analysis | |
Li et al. | Intelligent correction method of shooting action based on computer vision | |
CN115953834A (en) | Multi-head attention posture estimation method and detection system for sit-up | |
CN114360052A (en) | Intelligent somatosensory coach system based on AlphaPose and joint point angle matching algorithm | |
CN117671738B (en) | Human body posture recognition system based on artificial intelligence | |
CN113327267A (en) | Action evaluation method based on monocular RGB video | |
CN112836544A (en) | Novel sitting posture detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210803 |