CN115690902A - Abnormal posture early warning method for body building action - Google Patents
- Publication number
- CN115690902A (application CN202211276841.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention relates to the field of image processing, in particular to an abnormal-posture early-warning method for fitness actions. The method comprises the following steps: arranging the device modules of a smart fitness mirror system; within a detection period, comparing each frame image with the template image and calculating an accumulated posture abnormal value; and judging from the obtained accumulated posture abnormality information amount and issuing an abnormality-pause early warning. Using the idea of transparency superposition, the invention calculates the accumulated posture abnormal value within a certain time: a keypoint detection technique compares the differences between the acquired image and the keypoints of the corresponding standard fitness template image, a cumulative coefficient is calculated, the accumulated posture abnormality information amount of each frame image is computed, and prompts and early warnings are issued according to that amount. The invention avoids misrecognition of actions by the fitness mirror and allows the user's fitness actions to be corrected quickly and accurately.
Description
Technical Field
The application relates to the field of image processing, in particular to an abnormal posture early warning method for fitness actions.
Background
With the development of artificial intelligence technology, intelligent fitness mirrors have become the choice of more and more people, freeing workouts from the confines of the gym. An intelligent fitness mirror is a mirror combined with AI technology: it lets the user watch various fitness course videos in real time while a virtual coach corrects the user's fitness actions in real time. However, in the process of recognizing fitness actions, the accuracy of abnormal-action recognition affects the performance of the intelligent fitness mirror.
The traditional intelligent fitness mirror recognizes abnormal actions mainly by detecting action keypoints and computing their differences from the keypoints of the standard action. But when the user has stopped exercising for some reason, which is not an abnormal exercise posture, the exercise posture is misrecognized.
Disclosure of Invention
To realize abnormality analysis of fitness actions, the invention aims to provide an abnormal-posture early-warning method for fitness actions.
The invention provides an abnormal posture early warning method for body building actions, which comprises the following steps:
arranging an intelligent fitness mirror system device module;
acquiring multiple frames of user images as a detection period, and acquiring the three-dimensional coordinates of the keypoints and key edges in each frame of user image and in the corresponding standard template image within the detection period; obtaining a spatial position difference degree from the difference between the three-dimensional keypoint coordinates of each frame of user image and those of the corresponding standard template image; obtaining a plurality of key edge structure direction difference degrees from the differences between the three-dimensional key edge coordinates of the current frame of user image and those of the frames of user images before it, and obtaining the key edge structure direction change degree of the current frame of user image from these direction difference degrees; obtaining a cumulative coefficient from the spatial position difference degree and the key edge structure direction change degree of each frame of user image; acquiring the posture information amount of each frame of user image by a motion-analysis optical flow method, and obtaining the accumulated posture abnormal value of the current frame of user image from the posture information amounts of all the user images and the cumulative coefficient;
and judging according to the obtained accumulated posture abnormal value, and issuing an abnormality-pause early warning.
Further, the intelligent fitness mirror system device modules comprise an image acquisition module, a standard template library module, an image analysis module and an early warning module. The image acquisition module acquires consecutive frames of motion images while the user exercises; the standard template library module stores and retrieves the standard templates; the image analysis module performs the image analysis calculations; the early warning module warns the user to adjust the fitness action.
Further, the step of obtaining the spatial position difference degree according to the three-dimensional coordinate difference of the key points in each frame of user image and the corresponding standard template image comprises:
for any one frame of user image:
acquiring an L2 norm between the three-dimensional coordinate of the ith key point in the user image and the three-dimensional coordinate of the ith key point in the corresponding standard template image, wherein i is a positive integer and is not more than the number of the key points in the user image;
and the average value of the L2 norms corresponding to all the key points is the spatial position difference degree of the user image.
Further, the step of obtaining the structural direction difference degrees of the plurality of key edges according to the three-dimensional coordinate difference of the key edges of the current frame user image and the previous frames of user images comprises the following steps:
assuming the current frame of user image is the p-th frame, obtaining a key edge direction difference degree from the difference between the three-dimensional key edge coordinates of the p-th frame of user image and those of the r-th frame of user image before it, where p and r are positive integers;
for the three-dimensional coordinates of the j-th key edge in the p-th frame of user image and in the r-th frame of user image before it, where j is a positive integer no greater than the number of key edges in the user image: calculating the cosine similarity between the three-dimensional coordinates of the j-th key edge in the two frames; the average of the cosine similarities over all key edges in the p-th frame of user image is the key edge direction difference degree.
Further, the step of obtaining the direction variation degree of the key edge structure of the current frame user image according to the direction difference degree of the key edge structure includes:
presetting an allowable error value and a key edge direction difference threshold, and obtaining an acceptable interval according to the key edge direction difference threshold and the allowable error value;
comparing the key edge direction difference degree between the p-th frame of user image and each frame of user image before it with the acceptable interval to obtain a first value;
when the key edge direction difference degree between the p-th frame of user image and the r-th frame of user image before it lies within the acceptable interval, the first value is a fixed value;
when that key edge direction difference degree is smaller than the minimum of the acceptable interval, the first value is the difference between the minimum of the acceptable interval and the key edge direction difference degree;
when that key edge direction difference degree is greater than the maximum of the acceptable interval, the first value is the difference between the key edge direction difference degree and the maximum of the acceptable interval;
and accumulating the first values corresponding to the user images of all frames before the p-th frame of user image to obtain the direction change degree of the key edge structure.
Further, the step of obtaining an accumulated coefficient according to the spatial position difference and the key edge structure direction variation of each frame of user image includes:
calculating the product of the spatial position difference degree and the key edge structure direction change degree, taking the negative of the product as the exponent, and constructing an exponential function with the natural constant e as the base; this exponential function gives the cumulative coefficient.
Further, the step of acquiring the posture information amount of each frame of user image by the motion-analysis optical flow method comprises:
extracting the key frames of the user images within the detection period using the motion-analysis optical flow method; the posture information amount of a key-frame user image is 1, and that of a non-key-frame user image is 0.8.
Further, the step of obtaining the accumulated posture abnormal value of the current frame of user image from the posture information amounts of all the user images and the cumulative coefficient comprises:
acquiring the posture information amount of each frame of user image before the current frame, and accumulating the posture information amounts of all frames of user images before the current frame to obtain an accumulated posture abnormality information amount;
taking the cumulative coefficient of the current frame of user image as a weight, and performing a weighted summation of the posture information amount of the current frame of user image and the accumulated posture abnormality information amount of all the frames before it to obtain the accumulated posture abnormal value.
Beneficial effects: the embodiment of the invention analyzes the user images within a detection period. A spatial position difference degree is obtained from the difference between the three-dimensional keypoint coordinates of each frame of user image and those of the corresponding standard template image. Key edge structure direction difference degrees are then obtained from the differences between the three-dimensional key edge coordinates of each frame of user image and those of the frames before it, and the key edge structure direction change degree of the current frame is obtained from the direction difference degrees corresponding to those earlier frames. A comprehensive analysis of the keypoint difference information and the key edge change information yields a cumulative coefficient, ensuring the soundness of the subsequent analysis data. Finally, the posture information amount of each frame of user image is acquired, and the accumulated posture abnormal value is computed from each image's cumulative coefficient and posture information amount. Based on the accumulated posture abnormal value, it can be determined whether the user's fitness action is non-standard or the user is simply not exercising at that moment, making the fitness mirror's abnormality-pause early warning more accurate and reliable.
Drawings
Fig. 1 is a flowchart of an abnormal posture pre-warning method for fitness activities according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
The invention uses the idea of transparency superposition to calculate the accumulated posture abnormal value within a certain time: a keypoint detection technique compares the differences between the acquired image and the keypoints of the corresponding standard fitness template image, a cumulative coefficient is calculated, and the accumulated posture abnormal value of each frame image is computed.
The invention is directed at the following scenario: when exercising, a user often follows the fitness actions of the fitness mirror's virtual character. When the user's action differs from the standard fitness action, the fitness mirror issues an early warning prompting the user to adjust the action. However, if the user has stopped or paused the exercise for some reason, which is obviously not an abnormal exercise posture, the exercise posture may be misrecognized.
In order to implement the abnormal analysis of the fitness activity, the embodiment provides an abnormal posture pre-warning method of the fitness activity, as shown in fig. 1, including the following steps:
(1) Arranging an intelligent fitness mirror system device module.
An image acquisition module, a standard template library module, an image analysis module and an early warning module are constructed in the fitness mirror system. The image acquisition module acquires consecutive frames of motion images while the user exercises; the standard template library module stores and retrieves the standard templates; the image analysis module performs the image analysis calculations; the early warning module warns the user to adjust the fitness action. The size of the acquired image is the same as the size of the standard action template image.
(2) Acquire multiple frames of user images as a detection period, and acquire the three-dimensional coordinates of the keypoints and key edges in each frame of user image and in the corresponding standard template image within the detection period. Obtain a spatial position difference degree from the difference between the three-dimensional keypoint coordinates of each frame of user image and those of the corresponding standard template image. Obtain a plurality of key edge structure direction difference degrees from the differences between the three-dimensional key edge coordinates of the current frame of user image and those of the frames before it, and obtain the key edge structure direction change degree of the current frame from these direction difference degrees. Obtain a cumulative coefficient from the spatial position difference degree and the key edge structure direction change degree of each frame of user image. Acquire the posture information amount of each frame of user image by a motion-analysis optical flow method, and obtain the accumulated posture abnormal value of the current frame of user image from the posture information amounts of all the user images and the cumulative coefficient.
When detecting abnormal postures during a user's exercise, it is usually judged whether the posture is abnormal from the change in the posture information of the current frame image. However, when the user has stopped or paused the workout for some reason, which is obviously not an abnormal workout posture, the workout posture may be misrecognized. Therefore, this embodiment uses the idea of transparency superposition to calculate the accumulated posture abnormal value within a certain time: a keypoint detection technique compares the differences between the acquired image and the keypoints of the corresponding standard fitness template image, the cumulative coefficient is calculated, and the accumulated posture abnormal value of each frame image is computed. The accumulated posture abnormal value is obtained as follows:
a. Obtain an adaptive template from historical standard fitness images of users of different age groups, and perform keypoint detection on the adaptive template and each frame image.
b. Analyze the keypoint matching result and calculate the cumulative coefficient.
c. Accumulate the information amount of each frame image to obtain the accumulated posture abnormality information amount.
The following are specific developments:
a. Obtain an adaptive template from historical standard fitness images of users of different age groups, and perform keypoint detection on the adaptive template and each frame image.
Logic description: establish an adaptive template from the standard fitness images, and detect the keypoints of each frame image and of the adaptive standard template.
Input: the user's body-structure information such as height and weight, and the acquired user images. Output: the template corresponding to the body-structure information, and the detected keypoints and key edges.
In this embodiment, the posture of the fitness action needs to be recognized, so a template must be established from the standard fitness action. Since users of different age groups differ in body structure, such as height and weight, an adaptive standard action template must be established.
The standard template is established from a priori historical standard fitness-action data of a large number of users of different age groups. The standard action template is built as follows: for a set of standard fitness actions, the video is sampled at a frequency of one frame every k frames, and the sampled frames are taken as template images; the action subject is annotated manually to construct the standard action template, yielding the k-th, 2k-th, 3k-th, … frame template images. The size of each template image is the same as the size of the captured image of the user's workout.
Corresponding standard template libraries are established for users of different ages according to body-structure parameters such as height and weight and to the different fitness actions, and a standard template module is built into the fitness mirror system for storing and retrieving the standard templates. The user inputs data such as height, weight and age together with the fitness action; the corresponding demonstration video and standard action template are output.
Keypoint detection and recognition are performed on the fitness action in the captured user image and on the template image matching the user's input information, since the skeleton describes the action accurately. As there are many algorithms for keypoint detection and recognition, this embodiment uses the PoseC3D technique for keypoint recognition; the choice of keypoint detection and recognition method can be determined by the implementer according to the specific situation.
Consecutive F frame images are taken as one detection period. Each of the F frame images in the period is compared, keypoint by keypoint, with the standard template image of the same action selected according to the user's body-structure parameters obtained in step one. (When exercising, the user follows the standard fitness images, so the case of the same frame image showing a different action is not considered; that is, each frame image corresponds to a standard action in the template library.) The three-dimensional coordinates of the n keypoints of the captured image are obtained and denoted A_i = (x_i, y_i, z_i), and those of the n corresponding keypoints of the standard template image are denoted B_i = (x'_i, y'_i, z'_i), i = 1, …, n. The three-dimensional coordinates of the corresponding m key edges are also obtained: the key edges of the captured user image are denoted e_j and those of the corresponding standard template image E_j, j = 1, …, m. Here n is the number of keypoints determined by the PoseC3D technique. In PoseC3D, the two images are recognized in one-to-one correspondence (head keypoint to head keypoint, shoulder keypoint to shoulder keypoint, and so on), so each keypoint of one image corresponds to exactly one keypoint of the other, and each key edge to exactly one key edge.
b. And analyzing according to the matching result of the key points, and calculating an accumulative coefficient.
Logic description: analyze the matching result according to the differences between the detected keypoints and the template keypoints, and calculate the cumulative coefficient from the directions and positions of the keypoints.
Input: the keypoints detected in the acquired image and the keypoints of the template. Output: the cumulative coefficient (calculated from the keypoint three-dimensional spatial position difference degree D and the key edge structure direction difference degree C between adjacent frames).
If the abnormality of the current frame image may be caused by a non-standard fitness action, the three-dimensional spatial positions of the keypoints of the current frame differ from the three-dimensional spatial position coordinates of the keypoints of the standard template image, but not by much; the key edge structure directions of the current frame and the adjacent frames also differ, but the motion trends are the same, i.e. the key edge structure direction difference between the current frame image and the adjacent frame images is small.
If the abnormality of the current frame image may not be caused by a non-standard fitness action, but by the exercise being paused or stopped for some reason (such as answering a call or something falling), the three-dimensional spatial positions of the keypoints of the current frame differ greatly from those of the standard template image; the key edge structure directions of the current frame and the adjacent frames also differ, and the motion trends differ as well, i.e. the key edge structure direction difference between the current frame image and the adjacent frame images is large.
The cumulative coefficient is therefore calculated from the change in the difference between the three-dimensional spatial keypoint positions of the current frame and the standard template image, together with the change in the key edge structure direction difference between the current frame and the adjacent frame images; the accumulated posture abnormal value is then obtained from the accumulated changes over multiple frame images.
(1) Change in the difference of the three-dimensional spatial positions of the keypoints.
A keypoint contrast analysis is performed between each frame image and the standard template image: the differences between the three-dimensional keypoint coordinates of each frame image and those of the standard template image are computed, giving the keypoint three-dimensional spatial position difference degree. The larger the keypoint three-dimensional spatial position difference degree D_p, the more the fitness posture deviates in spatial position from the standard template image; the smaller D_p, the less it deviates. For the p-th frame, D_p is computed as:

D_p = (1/n) * Σ_{i=1}^{n} ||A_i − B_i||_2

where A_i denotes the three-dimensional spatial coordinates of the i-th keypoint of the captured user fitness image; B_i denotes the three-dimensional spatial coordinates of the corresponding i-th keypoint of the standard fitness-action template image; n is the number of recognized keypoint pairs in the PoseC3D technique; and ||·||_2 denotes the L2 norm.
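The mean-L2 computation described above (average keypoint distance between the captured frame and the standard template) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name `spatial_position_difference` and the use of NumPy are assumptions.

```python
import numpy as np

def spatial_position_difference(user_kpts, template_kpts):
    """Mean L2 distance between corresponding 3-D keypoints.

    user_kpts, template_kpts: (n, 3) array-likes of keypoint coordinates,
    in one-to-one correspondence (as produced by a pose estimator).
    """
    user_kpts = np.asarray(user_kpts, dtype=float)
    template_kpts = np.asarray(template_kpts, dtype=float)
    # ||A_i - B_i||_2 per keypoint pair, averaged over all n keypoints
    return float(np.mean(np.linalg.norm(user_kpts - template_kpts, axis=1)))
```

A large return value means the captured posture deviates strongly from the template in spatial position; identical poses give 0.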
(2) Change in the key edge structure direction difference relative to earlier frame images.
if the judgment of the abnormal attitude degree is carried out only by considering the variation of the difference degree of the three-dimensional space position, the judgment is misjudged, so that the calculation of the accumulative coefficient is carried out by considering the difference of the direction trends of the key edge structures between the adjacent frames, and the calculation of the accumulative abnormal attitude value is determined.
The accumulated coefficient is determined by determining the difference in the direction of the critical edge structure of the image of the current frame and the image of the previous frame. When the difference of the key edge structure direction of the current frame image and the previous frame image is smaller, the difference of the transformation direction of the posture is smaller, and the accumulative coefficient is smaller; when the difference of the key edge structure direction of the current frame image and the previous frame image is larger, the difference of the transformation direction of the posture of the current frame image is larger, and the accumulated coefficient is larger.
Within the detection period, the key edge set {e_j^(p)} of the p-th frame is compared with the key edge set {e_j^(p−r)} of the (p−r)-th frame before it, and the difference in the posture's transformation direction is determined from the differences in direction and angle of the corresponding key edge structures. The larger the direction difference of the key edge structures, the larger the difference in their posture directions; the smaller the direction-angle difference of the key edge structures, the smaller their posture difference. The key edge structure direction difference degree C_{p,r} between the p-th frame and the (p−r)-th frame before it is computed as:

C_{p,r} = (1/m) * Σ_{j=1}^{m} cos(e_j^(p), e_j^(p−r))

where e_j^(p) and e_j^(p−r) are the j-th key edges in the key edge sets of the p-th frame and of the (p−r)-th frame before it; each key edge vector is computed from the coordinates of its two endpoint keypoints, i.e. as the difference of the three-dimensional coordinates of the two keypoints forming the edge; m is the number of key edges; and cos(e_j^(p), e_j^(p−r)) is the cosine similarity of the three-dimensional coordinates of the j-th key edge in the p-th frame and in the (p−r)-th frame.
Analogously, the key edge structure direction difference degrees C_{p,r} between the p-th frame and all earlier frames in the period, r = 1, …, p−1, are obtained.
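The per-frame-pair comparison above can be sketched as follows: a mean cosine similarity over corresponding key-edge vectors. A hedged sketch only; the function name `edge_direction_difference` is an assumption, and edge vectors are assumed precomputed as endpoint-coordinate differences.

```python
import numpy as np

def edge_direction_difference(edges_p, edges_r):
    """Mean cosine similarity between corresponding key-edge vectors.

    edges_p, edges_r: (m, 3) array-likes; each row is an edge vector obtained
    by subtracting the 3-D coordinates of the edge's two endpoint keypoints.
    """
    a = np.asarray(edges_p, dtype=float)
    b = np.asarray(edges_r, dtype=float)
    # cosine similarity of each corresponding pair of edge vectors
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.mean(cos))
```

Note that, as in the text, a value near 1 means the edge directions agree; the acceptable-interval test in the next step decides whether that counts as normal.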
An allowable error value and a key edge direction difference degree threshold are preset, and an acceptable interval is obtained from the threshold and the allowable error value. The key edge direction difference degree between the p-th frame of user image and each frame of user image before it is compared with the acceptable interval to obtain a first value. When the key edge direction difference degree between the p-th frame and the r-th frame before it lies within the acceptable interval, the first value is a fixed value, which in this embodiment of the invention is 0.0001. When that key edge direction difference degree is smaller than the minimum of the acceptable interval, the first value is the difference between the minimum of the acceptable interval and the key edge direction difference degree. When it is greater than the maximum of the acceptable interval, the first value is the difference between the key edge direction difference degree and the maximum of the acceptable interval. The first values corresponding to all frames of user images before the p-th frame are accumulated to obtain the key edge structure direction change degree.
Specifically, to measure the degree of variation of the key edge direction difference degree, a key edge direction difference degree threshold T is set, and, taking the allowable error between frames into account, an allowable error value of the key edge direction difference degree is set. The larger the frame gap between two frames, the larger the allowable error value; the smaller the gap, the smaller the allowable error value. The allowable error value w_{p,r} between the p-th frame and the (p−r)-th frame before it is computed as:

w_{p,r} = w_0 * r

where w_0 is the allowable error value of a unit frame; p is the current frame number; and p−r denotes the r-th frame before the current frame. The unit-frame allowable error value w_0 and the key edge direction difference degree threshold T can be set by the implementer and can be computed from historical fitness image data.
The acceptable interval is then obtained from the allowable error value and the key edge direction difference degree threshold as [T − w_{p,r}, T + w_{p,r}].
The key edge structure direction change degree G_p of the current frame image is calculated from the key edge structure direction difference degrees between the current p-th frame image and the earlier frame images:

G_p = Σ_{r=1}^{p−1} g_{p,r},  where
g_{p,r} = 0.0001, if C_{p,r} ∈ [T − w_{p,r}, T + w_{p,r}];
g_{p,r} = (T − w_{p,r}) − C_{p,r}, if C_{p,r} < T − w_{p,r};
g_{p,r} = C_{p,r} − (T + w_{p,r}), if C_{p,r} > T + w_{p,r};

in which C_{p,r} is the key edge structure direction difference degree between the p-th frame and the (p−r)-th frame before it, and [T − w_{p,r}, T + w_{p,r}] is the acceptable interval.
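The interval test and accumulation of "first values" described above can be sketched as follows. This is an illustrative reading under stated assumptions: the function name `direction_change_degree` is invented, the acceptable interval is assumed to widen linearly with the frame gap, and the in-interval fixed value defaults to the 0.0001 given in the text.

```python
def direction_change_degree(diffs, threshold, unit_error, fixed_value=0.0001):
    """Accumulate per-frame 'first values' into the direction change degree.

    diffs: key edge direction difference degrees between the current frame
           and each earlier frame, ordered by frame gap r = 1, 2, ...
    threshold: key edge direction difference degree threshold.
    unit_error: allowable error value of a unit frame; the acceptable
                interval for gap r is [threshold - unit_error * r,
                threshold + unit_error * r].
    """
    total = 0.0
    for r, c in enumerate(diffs, start=1):
        lo = threshold - unit_error * r
        hi = threshold + unit_error * r
        if c < lo:
            total += lo - c        # below the interval: distance to its minimum
        elif c > hi:
            total += c - hi        # above the interval: distance to its maximum
        else:
            total += fixed_value   # inside the interval: small fixed value
    return total
```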
From the key point three-dimensional spatial position difference degree S(p) between the current frame image and the corresponding standard template image obtained in the preceding steps, and the key edge structure direction change degree G(p) of the current frame image, the accumulation coefficient of the current frame image is calculated: the product of the spatial position difference degree and the key edge structure direction change degree is computed, the negative of the product is used as the exponent, and an exponential function with the natural constant e as its base is constructed; this exponential function is the accumulation coefficient. The accumulation coefficient α(p) of the current p-th frame is:

α(p) = e^(−S(p)·G(p))

In the formula, S(p) represents the spatial position difference degree of the current p-th frame image; G(p) represents the key edge structure direction change degree of the current p-th frame image.
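The accumulation coefficient is a plain exponential of the (negated) product of the two difference measures; a minimal sketch:

```python
import math

def cumulative_coefficient(s_p: float, g_p: float) -> float:
    """alpha(p) = exp(-(S(p) * G(p))): a large combined deviation drives
    the coefficient toward 0, a close match toward 1."""
    return math.exp(-(s_p * g_p))
```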
c. The information amounts of the frame images are accumulated to obtain the accumulated attitude abnormal information amount.
Logic description: a posture value is calculated from each frame's matching result, and the accumulated attitude abnormal information amount is calculated using the accumulation coefficient.
Input: the accumulation coefficients and the preset attitude information amounts. Output: the accumulated attitude abnormal information amount.
In this embodiment, an attitude information amount is set for each frame image, and the user is warned of abnormal fitness actions according to the calculated accumulated attitude information amount. When the key point three-dimensional spatial position difference degree S(p) of the current frame image is large and its key edge structure direction change degree G(p) is large, the user is likely not performing the fitness action at the current frame, so the attitude information amount of the current frame receives a small weight in the accumulated attitude information amount; when S(p) and G(p) are small, the user may be performing an incorrect fitness action at the current frame, so the attitude information amount of the current frame receives a large weight.
A fixed attitude information amount is set for all frame images in one detection period, and the accumulated attitude abnormal value is calculated from them. This embodiment extracts key frames using the motion analysis optical flow method and sets the attitude information amount of key frames to 1 and that of the remaining frames to 0.8. Key frame extraction by the motion analysis optical flow method is a known technique and is not described in detail in this embodiment.
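Assuming the key-frame indices have already been obtained upstream by the optical-flow analysis (that step is not sketched here), assigning the per-frame attitude information amounts might look like:

```python
def pose_info_amounts(num_frames: int, key_frames: set) -> list:
    """Per-frame attitude information amount: 1.0 for frames marked as
    key frames by the optical-flow analysis, 0.8 for the rest."""
    return [1.0 if f in key_frames else 0.8 for f in range(num_frames)]
```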
The accumulated attitude abnormal information amount is obtained by accumulating the information amounts of the multi-frame images, and the degree of attitude abnormality is judged from the accumulated result. Following the idea of transparency superposition, the accumulated attitude abnormal information amount Q(p) of the current frame image is modeled as:

Q(p) = α(p)·q(p) + (1 − α(p))·Q(p − 1)

In the formula, α(p) is the accumulation coefficient; q(p) is the attitude information amount of the current frame image; Q(p − 1) is the accumulated attitude abnormal information amount of the previous frame image.
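The transparency-superposition (alpha-blending) recursion can be sketched as follows. Initialising the running value with the first frame's information amount is an assumption; the patent does not state the initial value:

```python
def accumulate_outlier(alphas, infos, q0=None):
    """Alpha-blend each frame's attitude information amount q(p) into the
    running accumulated value: Q(p) = a(p)*q(p) + (1 - a(p))*Q(p-1)."""
    q = infos[0] if q0 is None else q0   # assumed initialisation
    history = []
    for a, info in zip(alphas, infos):
        q = a * info + (1.0 - a) * q
        history.append(q)
    return history
```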
Thus, the accumulated attitude abnormal value of each frame of image is obtained.
(3) Judging according to the obtained accumulated attitude abnormal value, and issuing an abnormal pause early warning.
The accumulated attitude abnormal value of each frame image in each detection period is obtained as described above, and an attitude abnormal information amount threshold is set. When the accumulated attitude abnormal value is less than the set threshold, the intelligent fitness mirror issues a pause early warning for the user's fitness action. The attitude abnormal information amount threshold is determined by the implementer according to the specific implementation; an empirical reference value is given in the scheme.
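The warning decision itself is a simple comparison; a sketch (the threshold value is implementer-chosen, so none is hard-coded here):

```python
def should_pause(q_accumulated: float, threshold: float) -> bool:
    """Issue the pause early warning when the accumulated attitude
    abnormal value falls below the chosen threshold."""
    return q_accumulated < threshold
```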
In this embodiment, the idea of transparency superposition is used to calculate the accumulated attitude abnormal value over a certain time: key point detection compares the acquired images with the corresponding standard fitness template images, an accumulation coefficient is calculated from the differences, the accumulated attitude abnormal information amount of each frame image is obtained, and prompts and early warnings are issued accordingly. This method accurately prompts the user to adjust the action through early warning, is computationally simple with low time complexity, avoids misrecognized actions by the fitness mirror, and allows the user's fitness actions to be adjusted quickly and accurately.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.
Claims (8)
1. An abnormal posture early warning method for body building actions is characterized by comprising the following steps:
arranging an intelligent fitness mirror system device module;
acquiring multiple frames of user images as a detection period, and acquiring three-dimensional coordinates of key points and key edges in each frame of user image and a corresponding standard template image in the detection period; obtaining a spatial position difference degree according to the three-dimensional coordinate difference of key points in each frame of user image and the corresponding standard template image; obtaining a plurality of key edge structure direction difference degrees according to the three-dimensional coordinate difference of key edges of a current frame user image and a plurality of frames of user images before the current frame user image, and obtaining the key edge structure direction change degree of the current frame user image according to the key edge structure direction difference degrees; obtaining an accumulative coefficient according to the spatial position difference degree and the key edge structure direction change degree of each frame of user image; acquiring a corresponding attitude information amount of each frame of user image based on a motion analysis optical flow method, and obtaining an accumulated attitude abnormal value of the current frame of user image according to the attitude information amounts of all the user images and the accumulated coefficient;
and judging according to the obtained accumulated attitude abnormal value, and performing abnormal pause early warning.
2. The abnormal posture early warning method for fitness actions according to claim 1, wherein the intelligent fitness mirror system device module comprises an image acquisition module, a standard template library module, an image analysis module and an early warning module; the image acquisition module is used for acquiring continuous frames of action images while the user exercises; the standard template library module is used for storing and retrieving standard templates; the image analysis module is used for image analysis and calculation; and the early warning module is used for warning the user to adjust the fitness action.
3. The method for early warning the abnormal posture of the fitness activity according to claim 1, wherein the step of obtaining the spatial position difference degree according to the three-dimensional coordinate difference of the key points in each frame of the user image and the corresponding standard template image comprises the following steps:
for any one frame of user image:
acquiring an L2 norm between the three-dimensional coordinate of the ith key point in the user image and the three-dimensional coordinate of the ith key point in the corresponding standard template image, wherein i is a positive integer and is not more than the number of the key points in the user image;
and the average value of the L2 norms corresponding to all the key points is the spatial position difference degree of the user image.
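A minimal sketch of claim 3's spatial position difference degree (the mean L2 norm over matched key points; `math.dist` computes the per-point Euclidean distance):

```python
import math

def spatial_position_difference(user_pts, template_pts):
    """Mean L2 norm between matched 3-D key points of the user image
    and the corresponding standard template image."""
    norms = [math.dist(u, t) for u, t in zip(user_pts, template_pts)]
    return sum(norms) / len(norms)
```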
4. The method as claimed in claim 1, wherein the step of obtaining the structural direction difference degree of the plurality of key edges according to the three-dimensional coordinate difference of the key edges of the user image of the current frame and the user image of the previous frames comprises:
assuming that the current frame user image is a p-th frame user image, obtaining the direction difference degree of a key edge according to the three-dimensional coordinate difference of the key edge between the p-th frame user image and an r-th frame user image before the p-th frame user image, wherein p and r are positive integers;
for the j-th key edge, where j is a positive integer not greater than the number of key edges in the user image, calculating the cosine similarity between the three-dimensional coordinates of the j-th key edge in the p-th frame user image and those of the j-th key edge in the r-th frame user image; and taking the average value of the cosine similarities corresponding to all key edges in the p-th frame user image as the key edge direction difference degree.
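A sketch of claim 4's key edge direction difference degree (the mean cosine similarity over matched key-edge vectors between the two frames):

```python
import math

def key_edge_direction_difference(edges_p, edges_r):
    """Mean cosine similarity between matched 3-D key-edge vectors of
    frame p and the earlier frame r."""
    sims = []
    for u, v in zip(edges_p, edges_r):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        sims.append(dot / (norm_u * norm_v))
    return sum(sims) / len(sims)
```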
5. The abnormal posture early warning method for body building actions according to claim 4, wherein the step of obtaining the direction change degree of the key side structure of the current frame user image according to the direction difference degree of the key side structure comprises the following steps:
presetting an allowable error value and a key edge direction difference threshold, and obtaining an acceptable interval according to the key edge direction difference threshold and the allowable error value;
comparing the key edge direction difference between the p-th frame of user image and each frame of user image before the p-th frame of user image with the acceptable interval to obtain a first value;
when the difference degree of the key edge direction between the p-th frame user image and the previous r-th frame user image is in the acceptable interval, the first value is a fixed value;
when the direction difference degree of the key edge between the p-th frame of user image and the r-th frame of user image before the p-th frame of user image is smaller than the minimum value of the acceptable interval, the first value is the difference value between the minimum value of the acceptable interval and the direction difference degree of the key edge;
when the difference degree of the key edge direction between the p-th frame user image and the r-th frame user image before the p-th frame user image is larger than the maximum value of the acceptable interval, the first value is the difference value of the difference degree of the key edge direction and the maximum value of the acceptable interval;
and accumulating the first values corresponding to the user images of all frames before the p-th frame of user image to obtain the direction change degree of the key edge structure.
6. The method as claimed in claim 1, wherein the step of obtaining an accumulated coefficient according to the spatial position difference and the direction change of the key edge structure of each frame of user image comprises:
and calculating the product of the spatial position difference degree and the key edge structure direction change degree, taking the negative number of the product as a power exponent, and constructing an exponential function by taking a natural exponent e as a base, wherein the exponential function is the accumulative coefficient.
7. The method as claimed in claim 1, wherein the step of obtaining the corresponding pose information amount of each frame of user image based on the motion analysis optical flow method comprises:
extracting key frames of the user images in the detection period by utilizing a motion analysis optical flow method; the amount of pose information of the user image of the key frame is 1, and the amount of pose information of the user image of the non-key frame is 0.8.
8. The method as claimed in claim 1, wherein the step of obtaining the accumulated attitude outlier of the current frame user image according to the attitude information of all user images and the accumulated coefficient comprises:
acquiring the corresponding attitude information quantity of each frame of user image before the current frame of user image, and performing accumulated calculation on the attitude information quantities of all frames of user images before the current frame of user image to obtain an accumulated attitude abnormal information quantity;
and taking the accumulated coefficient of the current frame user image as a weight, and carrying out weighted summation on the attitude information quantity of the current frame user image and the accumulated attitude abnormal information quantity of all the frame user images before the current frame user image to obtain the accumulated attitude abnormal value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211276841.6A CN115690902A (en) | 2022-10-19 | 2022-10-19 | Abnormal posture early warning method for body building action |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115690902A true CN115690902A (en) | 2023-02-03 |
Family
ID=85065680
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211276841.6A Pending CN115690902A (en) | 2022-10-19 | 2022-10-19 | Abnormal posture early warning method for body building action |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115690902A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116652396A (en) * | 2023-08-01 | 2023-08-29 | 南通大学 | Safety early warning method and system for laser inner carving machine |
CN116652396B (en) * | 2023-08-01 | 2023-10-10 | 南通大学 | Safety early warning method and system for laser inner carving machine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111666857B (en) | Human behavior recognition method, device and storage medium based on environment semantic understanding | |
CN106650687B (en) | Posture correction method based on depth information and skeleton information | |
CN111931701B (en) | Gesture recognition method and device based on artificial intelligence, terminal and storage medium | |
CN110674785A (en) | Multi-person posture analysis method based on human body key point tracking | |
CN110472612B (en) | Human behavior recognition method and electronic equipment | |
CN110633004B (en) | Interaction method, device and system based on human body posture estimation | |
CN114067358A (en) | Human body posture recognition method and system based on key point detection technology | |
CN110232308A (en) | Robot gesture track recognizing method is followed based on what hand speed and track were distributed | |
CN112464793A (en) | Method, system and storage medium for detecting cheating behaviors in online examination | |
CN111046825A (en) | Human body posture recognition method, device and system and computer readable storage medium | |
CN111914643A (en) | Human body action recognition method based on skeleton key point detection | |
CN115690902A (en) | Abnormal posture early warning method for body building action | |
CN113516005A (en) | Dance action evaluation system based on deep learning and attitude estimation | |
CN107886057B (en) | Robot hand waving detection method and system and robot | |
CN115227234A (en) | Cardiopulmonary resuscitation pressing action evaluation method and system based on camera | |
Li et al. | Fitness Action Counting Based on MediaPipe | |
CN116958584B (en) | Key point detection method, regression model training method and device and electronic equipment | |
Omelina et al. | Interaction detection with depth sensing and body tracking cameras in physical rehabilitation | |
Zhang et al. | Human deep squat detection method based on MediaPipe combined with Yolov5 network | |
CN115205750B (en) | Motion real-time counting method and system based on deep learning model | |
CN106406507B (en) | Image processing method and electronic device | |
CN114639168B (en) | Method and system for recognizing running gesture | |
CN113239849B (en) | Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium | |
CN115205737A (en) | Real-time motion counting method and system based on Transformer model | |
Bernier et al. | Human gesture segmentation based on change point model for efficient gesture interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||