CN115331304A - Running identification method - Google Patents
Running identification method
- Publication number: CN115331304A
- Application number: CN202210938565.9A
- Authority
- CN
- China
- Prior art keywords
- human body
- human
- aspect ratio
- key points
- running
- Prior art date
- 2022-08-05
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T7/60—Analysis of geometric attributes (image analysis)
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
- G06V10/34—Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/82—Image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V40/161—Human faces: Detection; Localisation; Normalisation
- G06T2207/30196—Human being; Person (subject of image)
Abstract
The application provides a running identification method, which comprises the following steps: acquiring an initial image during a motion process; detecting the rectangular area where a human body is located in the initial image; cropping the rectangular area, detecting human body key points, and connecting the key points according to their positional relationship to form a human skeleton map; acquiring the human body viewing angle in the rectangular area and adjusting the aspect ratio of the human skeleton map; and performing running recognition by analyzing the distribution relationship of the adjusted human skeleton map. Because the human body key points are detected in the cropped rectangular area and connected into a skeleton map, and the aspect ratio of the skeleton map is adjusted according to the human body viewing angle obtained from that area, the posture viewing angle of the human skeleton is converted before the distribution relationship of the adjusted skeleton map is analyzed for running recognition. The method improves the adaptability of the running recognition algorithm, improves running recognition accuracy at the two-dimensional image level, and reduces the computing power required.
Description
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a running recognition method.
Background
Classification and recognition of sports and fitness actions are widely applied in scenarios such as evaluating fitness activity and assisting fitness training. Running is a convenient daily form of physical exercise and an effective form of aerobic exercise.
In the early days, running postures were judged manually by a coach. In the prior art, running can be recognized by detecting human postures with an image recognition algorithm, but the running postures captured at different viewing angles differ, so existing running recognition algorithms can only recognize human postures at a fixed viewing angle, including a fixed body orientation and pitch angle. As a result, such algorithms can only be applied to settings with a fixed position and viewing angle, such as a treadmill; in complex scenes such as outdoor environments the running recognition rate is insufficient, which limits the wide application of the technology.
Disclosure of Invention
An object of the embodiments of the present application is to provide a running recognition method, so as to solve the technical problems of insufficient adaptability and poor recognition accuracy in running recognition in the prior art.
In order to achieve this object, the technical solution adopted by the application is as follows: a running identification method is provided, which comprises the following steps:
acquiring an initial image in a motion process;
detecting a rectangular area where a human body is located in an initial image;
cropping the rectangular area, detecting human body key points, and connecting the key points according to their positional relationship to form a human body skeleton diagram;
acquiring a human body visual angle in the rectangular area, and adjusting the aspect ratio of a human body skeleton diagram;
and performing running recognition by analyzing the distribution relation of the adjusted human skeleton diagram.
Preferably, the method for acquiring the human body view angle in the rectangular area and adjusting the aspect ratio of the human body skeleton map comprises the following steps:
identifying a face orientation in a rectangular region;
adjusting the width of the human body skeleton diagram transversely according to the face orientation;
and obtaining a side human skeleton diagram.
Preferably, the method for transversely adjusting the width of the human skeleton map according to the face orientation comprises the following steps:
setting the frontal orientation of the human face to 0°, the face orientation in the current rectangular area to θ, the width of the rectangular area to W, and the width of the stretched human skeleton map to w, where w is given by:
w = W / sin θ.
preferably, after obtaining the side human skeleton map, the method further comprises the following steps:
acquiring the aspect ratio of a side human skeleton map;
and stretching or compressing the height of the side human skeleton map so that the aspect ratio of the side human skeleton map is within the range of the set aspect ratio threshold.
Preferably, the method for identifying running by analyzing the distribution relationship of the adjusted human skeleton map comprises the following steps:
analyzing the geometrical relationship between the coordinate positions of the key points of the human body;
and judging whether the geometric relation meets a geometric threshold value.
Preferably, the method for analyzing the geometrical relationship between the coordinate positions of the key points of the human body comprises the following steps:
respectively acquiring, in the human body skeleton diagram, the length a of the upper arm (big arm), the length b of the forearm (small arm), and the distance c from the shoulder to the wrist of the same arm;
letting α be the included angle between the upper arm and the forearm, given by the law of cosines as
α = arccos((a² + b² - c²) / (2ab));
and judging whether α meets the arm included-angle threshold.
Preferably, the method for analyzing the geometrical relationship between the coordinate positions of the key points of the human body comprises the following steps:
respectively acquiring, in the human body skeleton diagram, the length d of the thigh, the length e of the shank, and the distance f from the hip (thigh key point) to the foot of the same leg;
letting β be the included angle between the thigh and the shank, given by the law of cosines as
β = arccos((d² + e² - f²) / (2de));
and judging whether β meets the leg included-angle threshold.
Preferably, the method for identifying running by analyzing the distribution relationship of the adjusted human skeleton map comprises the following steps:
analyzing characteristic components corresponding to the coordinate set of the key points;
and judging whether the characteristic component meets a set threshold value.
Preferably, the method for analyzing the feature components corresponding to the coordinate set of the key points includes the following steps:
setting the elbow key point of the same arm as A, the shoulder key point as B, the wrist key point as C, and W as the feature component corresponding to the arm; then
W = (AB · AC) / (|AC| · |BC|),
i.e. the length of the projection of vector AB onto vector AC divided by the length of vector BC;
and judging whether W meets the feature-component threshold.
Compared with the prior art, the running recognition method of the application crops the rectangular area to detect the human body key points, connects the key points according to their positional relationship to form a human skeleton map, obtains the human body viewing angle in the rectangular area, and adjusts the aspect ratio of the skeleton map so that the posture viewing angle of the human skeleton is converted; running recognition is then performed by analyzing the distribution relationship of the adjusted skeleton map. This improves the adaptability of the running recognition algorithm, improves running recognition accuracy at the two-dimensional image level, and reduces the computing power required.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flowchart of a running identification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating the effect of detecting a rectangular region where a human body is located in an initial image based on the running recognition method in FIG. 1;
FIG. 3 is a schematic diagram illustrating the effect of cutting out a rectangular area based on the initial image in FIG. 2;
FIG. 4 is a schematic diagram of the distribution of the 18 human body key points detected by OpenPose;
FIG. 5 is a schematic diagram of a skeleton map of a human body based on the rectangular area in FIG. 3;
FIG. 6 is a schematic diagram of the side human skeleton map obtained by transversely stretching the width of the human skeleton map of FIG. 5;
FIG. 7 is a schematic diagram of the human skeleton map obtained by stretching the height of the side human skeleton map of FIG. 6;
FIG. 8 is a schematic diagram of the human skeleton map of FIG. 7 labeled with a, b, c, d, e, f, α and β;
FIG. 9 is a schematic diagram of the human skeleton map of FIG. 7 labeled with A, B and C.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantageous effects of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the application and are not intended to limit it.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience of describing the application and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Referring to fig. 1, a running recognition method according to an embodiment of the present application will be described. The running identification method comprises the following steps:
S1, acquiring an initial image during a motion process;
S2, detecting the rectangular area where the human body is located in the initial image;
S3, cropping the rectangular area, detecting human body key points, and connecting the key points according to their positional relationship to form a human body skeleton diagram;
S4, acquiring the human body viewing angle in the rectangular area and adjusting the aspect ratio of the human body skeleton diagram;
S5, performing running recognition by analyzing the distribution relationship of the adjusted human skeleton diagram.
It is understood that, in step S2, referring to fig. 2 and fig. 3 together, the rectangular area where the human body is located in the initial image is detected by using the YOLOX algorithm. When multiple human targets are present in the picture, multiple rectangular areas are detected, and running recognition can be performed on all of them at the same time. Compared with the traditional heat-map (thermal imaging) based human body detection method, the method can acquire data in real time and perform running recognition on any frame of a video, and in addition no thermal imager is needed, which saves cost.
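As a non-limiting illustration of step S2, the sketch below detects person rectangles and crops them. It substitutes OpenCV's built-in HOG pedestrian detector for the YOLOX detector named above, since this description does not reproduce YOLOX code; the function names are chosen only for this example.

```python
# Sketch of step S2: detect the rectangular area(s) where a human body is located
# and crop them for the key point step. OpenCV's HOG person detector is used here
# as a stand-in for the YOLOX detector named in the text.
import cv2

def detect_person_rectangles(image):
    """Return a list of (x, y, w, h) rectangles, one per detected person."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _weights = hog.detectMultiScale(image, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    return [tuple(int(v) for v in r) for r in rects]

def crop_rectangle(image, rect):
    """Step S3 starts from this crop of the detected rectangular area."""
    x, y, w, h = rect
    return image[y:y + h, x:x + w]
```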
In step S3, referring to fig. 3 and fig. 4 together, the human body key points can be detected with OpenPose, which detects 18 key points: left eye, right eye, left ear, right ear, nose, left shoulder, right shoulder, neck, left elbow, right elbow, left wrist, right wrist, left thigh (hip), right thigh (hip), left knee, right knee, left foot and right foot. The key points are then connected according to their positional relationship to form the human skeleton map.
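The skeleton-building part of step S3 can be sketched as follows. The key point input format (a dictionary mapping the 18 key point names above to (x, y) pixel coordinates) and the edge list are assumptions made for illustration; any pose estimator, such as OpenPose, that supplies these points can feed it.

```python
# Sketch of step S3: connect detected key points into a human skeleton map by drawing
# the limb segments between them. Key point names follow the 18-point list in the text.
import cv2

SKELETON_EDGES = [
    ("neck", "left shoulder"), ("neck", "right shoulder"),
    ("left shoulder", "left elbow"), ("left elbow", "left wrist"),
    ("right shoulder", "right elbow"), ("right elbow", "right wrist"),
    ("neck", "left thigh"), ("neck", "right thigh"),
    ("left thigh", "left knee"), ("left knee", "left foot"),
    ("right thigh", "right knee"), ("right knee", "right foot"),
    ("neck", "nose"), ("nose", "left eye"), ("nose", "right eye"),
    ("left eye", "left ear"), ("right eye", "right ear"),
]

def draw_skeleton(canvas, keypoints):
    """Draw the skeleton map by connecting every pair of key points that were both detected."""
    for a, b in SKELETON_EDGES:
        if a in keypoints and b in keypoints:
            pa = tuple(int(v) for v in keypoints[a])
            pb = tuple(int(v) for v in keypoints[b])
            cv2.line(canvas, pa, pb, color=(0, 255, 0), thickness=2)
    return canvas
```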
In steps S4-S5, referring to fig. 6 and fig. 7 together, the geometric relationships between the coordinate positions of the human body key points, and the feature components corresponding to the key point coordinate set, differ with the human posture and have fixed characteristics in the running posture. Thresholds for the running state can therefore be preset: the geometric relationships and the feature components are analyzed and compared with the set thresholds, and if they meet the thresholds the human body is recognized as running, otherwise as not running. However, because the camera can only capture the scene from one side, the two-dimensional image contains no depth information, so the human skeleton map obtained in step S3 differs under different human body viewing angles. Therefore, in step S4 the human body viewing angle in the rectangular area is obtained first, and the aspect ratio of the human skeleton map is adjusted according to it, so that the skeleton map undergoes a viewing-angle conversion; a problem that would otherwise have to be solved in three dimensions is thus handled with a two-dimensional technique, and running recognition is then performed in step S5 by analyzing the distribution relationship of the adjusted skeleton map.
Compared with the prior art, the running recognition method of the application crops the rectangular area to detect the human body key points, connects the key points according to their positional relationship to form a human skeleton map, obtains the human body viewing angle in the rectangular area, and adjusts the aspect ratio of the skeleton map so that the posture viewing angle of the human skeleton is converted; running recognition is then performed by analyzing the distribution relationship of the adjusted skeleton map. This improves the adaptability of the running recognition algorithm, improves running recognition accuracy at the two-dimensional image level, and reduces the computing power required.
In another embodiment of the present application, referring to fig. 5 to 6, in step S4, the method for obtaining the human body view angle in the rectangular area and adjusting the aspect ratio of the human body skeleton map includes the following steps:
identifying a face orientation in a rectangular area;
adjusting the width of the human skeleton map transversely according to the face orientation;
and obtaining a side human skeleton map.
It can be understood that the face orientation in the rectangular area is identified with a face-orientation recognition model based on an artificial neural network, which may be a multi-class classification model: the face orientation is divided into several categories by angle range, and when the rectangular-area image is input to the model, the category corresponding to the face orientation is output, thereby obtaining the face orientation in the rectangular area. Because a side-view human skeleton map reflects the running posture of the human body most vividly, and the limb postures during motion can be expressed in a side-facing skeleton map within the two-dimensional plane, the width of the skeleton map is adjusted transversely according to the face orientation to obtain the side human skeleton map.
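As a non-limiting sketch of such a face-orientation classifier, the small PyTorch network below maps the cropped rectangular region to one of several orientation categories. The 15° bin width and the layer sizes are illustrative assumptions, not values taken from this application.

```python
# Sketch of the multi-class face-orientation model: input is the cropped person region,
# output is an orientation bin that approximates the face orientation angle theta.
import torch
import torch.nn as nn

ANGLE_BINS = list(range(0, 181, 15))  # class k corresponds to roughly k * 15 degrees

class FaceOrientationNet(nn.Module):
    def __init__(self, num_classes=len(ANGLE_BINS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):              # x: (N, 3, H, W) cropped rectangular-area image
        h = self.features(x).flatten(1)
        return self.classifier(h)      # logits over orientation bins

def predict_orientation_degrees(model, image_tensor):
    """Return the approximate face orientation theta (degrees) for a single image tensor."""
    with torch.no_grad():
        k = model(image_tensor.unsqueeze(0)).argmax(dim=1).item()
    return ANGLE_BINS[k]
```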
The method for transversely adjusting the width of the human skeleton map according to the face orientation comprises the following steps: setting the frontal orientation of the human face to 0°, the face orientation in the current rectangular area to θ, the width of the rectangular area to W, and the width of the stretched human skeleton map to w, where w is given by:
w = W / sin θ.
For example, when the face orientation recognized in the rectangular area is 30°, the width of the human skeleton map is stretched to 2 times its original value according to the face orientation, so that the posture viewing angle of the human skeleton is converted; the converted skeleton map is the side human skeleton map, and running recognition is performed by analyzing its distribution relationship, which improves running recognition accuracy.
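A minimal Python sketch of this lateral adjustment is given below. The stretch factor 1/sin θ is stated as an assumption that is consistent with the 30° to 2× example above, and the guard against near-frontal angles is an implementation choice of the sketch rather than part of this application.

```python
# Sketch of the width adjustment: stretch the skeleton-map width so the result
# approximates a side view, assuming w = W / sin(theta) (consistent with the
# example above: theta = 30 degrees gives a factor of 2).
import math
import cv2

def stretch_to_side_view(skeleton_map, theta_deg, min_theta_deg=10.0):
    """Resize the skeleton map from width W to w = W / sin(theta), keeping the height."""
    theta = math.radians(max(theta_deg, min_theta_deg))  # avoid division blow-up near 0 degrees
    h, W = skeleton_map.shape[:2]
    w = int(round(W / math.sin(theta)))
    return cv2.resize(skeleton_map, (w, h), interpolation=cv2.INTER_LINEAR)
```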
Further, referring to fig. 6 to 7, after the side skeleton diagram of the human body is obtained in step S4, the method further includes the steps of:
acquiring the aspect ratio of a side human skeleton map;
and stretching or compressing the height of the side human body skeleton map to ensure that the aspect ratio of the side human body skeleton map is within the set aspect ratio threshold range.
It can be understood that, because the pitch angle of the camera at image-acquisition time is uncertain, an image captured at eye level is needed for the two-dimensional image to reflect the true distribution of the human skeleton map. Under a downward-looking (overhead) viewing angle the apparent height of the human body becomes smaller while the body width changes relatively little, and under an upward-looking viewing angle the apparent height becomes larger while the body width again changes relatively little. Therefore the aspect ratio of the side human skeleton map is obtained, and its height is stretched or compressed so that the aspect ratio falls within the set aspect-ratio threshold range. For example, if the aspect ratio of the side human skeleton map is less than the minimum of the threshold range, its height is compressed; if the aspect ratio is greater than the maximum of the threshold range, its height is stretched; and if the aspect ratio is already within the threshold range, it is kept unchanged.
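The height adjustment can be sketched as follows; the threshold range of 0.4 to 0.6 (width divided by height) is a placeholder chosen for illustration, not a value specified by this application.

```python
# Sketch of the aspect-ratio normalisation: stretch or compress the height of the
# side skeleton map until width / height falls inside the set threshold range.
import cv2

def normalise_aspect_ratio(side_map, ratio_min=0.4, ratio_max=0.6):
    h, w = side_map.shape[:2]
    ratio = w / h
    if ratio < ratio_min:        # too tall for its width: compress the height
        new_h = int(round(w / ratio_min))
    elif ratio > ratio_max:      # too short for its width: stretch the height
        new_h = int(round(w / ratio_max))
    else:                        # already within the threshold range: keep as is
        return side_map
    return cv2.resize(side_map, (w, new_h), interpolation=cv2.INTER_LINEAR)
```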
In another embodiment of the present application, referring to fig. 8, in step S5, a method for performing running recognition by analyzing a distribution relationship of an adjusted human skeleton map includes the following steps:
analyzing the geometric relation between the coordinate positions of the key points of the human body;
and judging whether the geometric relation meets a geometric threshold value.
It can be understood that, because the geometric relationships between the coordinate positions of the human body key points in the running posture have fixed characteristics, geometric thresholds for these relationships in the running state can be preset; if the thresholds are met, the running state is recognized, otherwise a non-running state is recognized.
Specifically, referring to fig. 8, in step S5, the method for analyzing the geometric relationship between the coordinate positions of the key points of the human body includes the following steps:
respectively acquiring, in the human skeleton map, the length a of the upper arm (big arm), the length b of the forearm (small arm), and the distance c from the shoulder to the wrist of the same arm; letting α be the included angle between the upper arm and the forearm, given by the law of cosines as α = arccos((a² + b² - c²) / (2ab));
and judging whether α meets the arm included-angle threshold.
It will be appreciated that the shoulder here refers to the left shoulder when the left arm is calculated and to the right shoulder when the right arm is calculated; the same left/right correspondence applies to the wrist, arm, thigh, foot and so on mentioned later. Because a person's upper arm and forearm form a certain included angle in the running state, the arm included-angle threshold may be, for example, 60° to 120°; that is, if 60° ≤ α ≤ 120°, the person is judged to be running, otherwise the person is judged not to be in the running state.
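A Python sketch of this arm test is given below: the included angle is recovered from the three lengths by the law of cosines and checked against the 60° to 120° range from the example above.

```python
# Sketch of the arm-angle test: alpha from the law of cosines, then a threshold check.
import math

def included_angle(a, b, c):
    """Angle in degrees between the sides of length a and b of a triangle whose third side is c."""
    cos_val = (a * a + b * b - c * c) / (2.0 * a * b)
    cos_val = max(-1.0, min(1.0, cos_val))  # guard against key point noise pushing |cos| above 1
    return math.degrees(math.acos(cos_val))

def arm_in_running_range(a, b, c, low=60.0, high=120.0):
    """a = upper-arm length, b = forearm length, c = shoulder-to-wrist distance."""
    alpha = included_angle(a, b, c)
    return low <= alpha <= high
```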
Specifically, referring to fig. 8, in step S5, the method for analyzing the geometric relationship between the coordinate positions of the key points of the human body includes the following steps:
respectively acquiring, in the human skeleton map, the length d of the thigh, the length e of the shank, and the distance f from the hip (thigh key point) to the foot of the same leg;
letting β be the included angle between the thigh and the shank, given by the law of cosines as
β = arccos((d² + e² - f²) / (2de));
and judging whether β meets the leg included-angle threshold.
It can be understood that, because a person's thigh and shank form a certain included angle in the running state, the leg included-angle threshold may be, for example, 45° to 140°; that is, if 45° ≤ β ≤ 140°, the person is judged to be running, otherwise the person is judged not to be in the running state.
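The same law-of-cosines test applies to the leg, as in the short sketch below, with the 45° to 140° range from the example above.

```python
# Sketch of the leg-angle test: d = thigh length, e = shank length, f = hip-to-foot distance.
import math

def leg_in_running_range(d, e, f, low=45.0, high=140.0):
    cos_beta = max(-1.0, min(1.0, (d * d + e * e - f * f) / (2.0 * d * e)))
    beta = math.degrees(math.acos(cos_beta))
    return low <= beta <= high
```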
In another embodiment of the present application, referring to fig. 9, in step S5, a method for performing running recognition by analyzing a distribution relationship of an adjusted human skeleton diagram includes the following steps:
analyzing characteristic components corresponding to the coordinate set of the key points;
and judging whether the characteristic component meets a set threshold value.
It can be understood that, because the feature components corresponding to the key point coordinate set in the running posture have fixed characteristics, thresholds for these feature components in the running state can be preset; if the thresholds are met, the running state is recognized, otherwise a non-running state is recognized. Because the feature component is computed as a ratio, it effectively avoids the influence on the extracted feature vector of changes in apparent figure size caused by the person's distance from the camera.
Specifically, referring to fig. 9, in step S5, the method for analyzing the feature component corresponding to the coordinate set of the key point includes the following steps:
setting the elbow key point of the same arm as A, the shoulder key point as B, the wrist key point as C, and W as the feature component corresponding to the arm; then
W = (AB · AC) / (|AC| · |BC|);
and judging whether W meets the feature-component threshold.
It can be understood that human key point relationship features are extracted for both arms, yielding the corresponding feature components. The three key points on the arm define two vectors that represent the limb segments between the key points; the relationship between the two vectors is obtained, and the feature component corresponding to that relationship is then calculated. The formula above is in effect the length of the projection of vector AB onto vector AC divided by the length of vector BC, and W serves as the feature component describing the relationship between the upper arm and the forearm. If W meets the relationship feature-component threshold, the running state is recognized, otherwise a non-running state is recognized.
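A Python sketch of this feature component is given below. The expression is reconstructed from the projection description above (the projection of vector AB onto vector AC, divided by the length of BC), so it should be read as an interpretation rather than the literal formula of this application.

```python
# Sketch of the arm feature component W built from the elbow (A), shoulder (B) and wrist (C).
import numpy as np

def arm_feature_component(elbow, shoulder, wrist):
    """Project AB onto AC and divide the projection length by |BC|."""
    A, B, C = (np.asarray(p, dtype=float) for p in (elbow, shoulder, wrist))
    AB, AC, BC = B - A, C - A, C - B
    projection = np.dot(AB, AC) / np.linalg.norm(AC)  # signed length of AB projected onto AC
    return projection / np.linalg.norm(BC)

# Example: elbow at (0, 0), shoulder at (0, 10), wrist at (8, 6)
# arm_feature_component((0, 0), (0, 10), (8, 6))
```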
Similarly, by adopting the method for analyzing the characteristic components corresponding to the coordinate set of the key points, the human body key point relation characteristics can be extracted from the two legs, and the corresponding characteristic components are extracted.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.
Claims (10)
1. A running identification method is characterized by comprising the following steps:
acquiring an initial image in a motion process;
detecting a rectangular area where a human body is located in an initial image;
cropping the rectangular area, detecting human body key points, and connecting the key points according to their positional relationship to form a human body skeleton diagram;
acquiring a human body visual angle in the rectangular area, and adjusting the aspect ratio of a human body skeleton diagram;
and performing running recognition by analyzing the distribution relation of the adjusted human skeleton diagram.
2. The running recognition method of claim 1, wherein the method for obtaining the human body view angle in the rectangular area and adjusting the aspect ratio of the human body skeleton map comprises the following steps:
identifying a face orientation in a rectangular region;
adjusting the width of the human body skeleton map transversely according to the face orientation;
and obtaining a side human skeleton map.
3. A method for identifying a run as claimed in claim 2, wherein the method for adjusting the width of the skeleton map of the human body laterally according to the orientation of the human face comprises:
setting the frontal orientation of the human face to 0°, the face orientation in the current rectangular area to θ, the width of the rectangular area to W, and the width of the stretched human skeleton map to w, where w is given by:
w = W / sin θ.
4. a method of identifying a run as claimed in claim 2, wherein after obtaining the side skeleton map, further comprising the steps of:
acquiring the aspect ratio of a side human skeleton map;
and stretching or compressing the height of the side human body skeleton map to ensure that the aspect ratio of the side human body skeleton map is within the set aspect ratio threshold range.
5. A method of identifying a run as in claim 4 wherein the method of bringing the aspect ratio of the side skeleton map within a set aspect ratio threshold comprises:
if the aspect ratio of the side body skeleton map is less than the minimum value of the aspect ratio threshold range, the height of the side body skeleton map is compressed, if the aspect ratio of the side body skeleton map is greater than the maximum value of the aspect ratio threshold range, the height of the side body skeleton map is stretched, and if the aspect ratio of the side body skeleton map is within the aspect ratio threshold range, the aspect ratio of the side body skeleton map is maintained.
6. The running recognition method according to any one of claims 1 to 5, wherein the method for performing running recognition by analyzing the distribution relationship of the adjusted human skeleton map comprises the following steps:
analyzing the geometrical relationship between the coordinate positions of the key points of the human body;
and judging whether the geometric relation meets a geometric threshold value.
7. A method of identifying a run as claimed in claim 6 wherein the method of analysing the geometric relationship between the coordinate locations of key points of the human body comprises the steps of:
respectively acquiring, in the human skeleton map, the length a of the upper arm (big arm), the length b of the forearm (small arm), and the distance c from the shoulder to the wrist of the same arm;
letting α be the included angle between the upper arm and the forearm, given by the law of cosines as
α = arccos((a² + b² - c²) / (2ab));
and judging whether α meets the arm included-angle threshold.
8. A method of identifying a run as claimed in claim 6 wherein the method of analysing the geometric relationship between the coordinate locations of key points of the human body comprises the steps of:
respectively acquiring, in the human skeleton map, the length d of the thigh, the length e of the shank, and the distance f from the hip (thigh key point) to the foot of the same leg;
letting β be the included angle between the thigh and the shank, given by the law of cosines as
β = arccos((d² + e² - f²) / (2de));
and judging whether β meets the leg included-angle threshold.
9. The running recognition method according to any one of claims 1 to 5, wherein the method for performing running recognition by analyzing the distribution relationship of the adjusted human skeleton map comprises the following steps:
analyzing characteristic components corresponding to the coordinate set of the key points;
and judging whether the characteristic component accords with a set threshold value.
10. A method for identifying a run as claimed in claim 9, wherein the method for analyzing the feature components corresponding to the set of coordinates of the key points comprises the steps of:
setting the elbow key point of the same arm as A, the shoulder key point as B, the wrist key point as C, and W as the feature component corresponding to the arm; then
W = (AB · AC) / (|AC| · |BC|);
and judging whether W meets the feature-component threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210938565.9A CN115331304A (en) | 2022-08-05 | 2022-08-05 | Running identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115331304A true CN115331304A (en) | 2022-11-11 |
Family
ID=83921675
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210938565.9A Pending CN115331304A (en) | 2022-08-05 | 2022-08-05 | Running identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115331304A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105550984A (en) * | 2015-12-30 | 2016-05-04 | 北京奇艺世纪科技有限公司 | Fisheye image correction and wandering display method and apparatus |
CN110321795A (en) * | 2019-05-24 | 2019-10-11 | 平安科技(深圳)有限公司 | User's gesture recognition method, device, computer installation and computer storage medium |
CN111881705A (en) * | 2019-09-29 | 2020-11-03 | 深圳数字生命研究院 | Data processing, training and recognition method, device and storage medium |
CN112287759A (en) * | 2020-09-26 | 2021-01-29 | 浙江汉德瑞智能科技有限公司 | Tumble detection method based on key points |
CN113822250A (en) * | 2021-11-23 | 2021-12-21 | 中船(浙江)海洋科技有限公司 | Ship driving abnormal behavior detection method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116030411A (en) * | 2022-12-28 | 2023-04-28 | 宁波星巡智能科技有限公司 | Human privacy shielding method, device and equipment based on gesture recognition |
CN116030411B (en) * | 2022-12-28 | 2023-08-18 | 宁波星巡智能科技有限公司 | Human privacy shielding method, device and equipment based on gesture recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |