CN111079616B - Single-person movement posture correction method based on neural network

Single-person movement posture correction method based on neural network

Info

Publication number
CN111079616B
CN111079616B (application CN201911258388.4A)
Authority
CN
China
Prior art keywords
human body
joint point
convolution
body joint
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911258388.4A
Other languages
Chinese (zh)
Other versions
CN111079616A
Inventor
谢雪梅 (Xie Xuemei)
高旭 (Gao Xu)
孔龙飞 (Kong Longfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201911258388.4A
Publication of CN111079616A
Application granted
Publication of CN111079616B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

The invention discloses a single-person movement posture correction method based on a neural network, which mainly addresses the low accuracy and efficiency of current teacher-led movement guidance for students. The implementation scheme is as follows: download an image data set containing human body joint points together with its annotation files, and construct a training data set; build a human body joint point detection network based on spatial domain conversion and train it on the training data set; collect standard motion pictures and common motion pictures, input each into the trained network to obtain joint point coordinates, form a standard motion data set and a common motion data set, and match each common picture to a standard matching picture; compute the Euclidean distance between every pair of corresponding joint points in the common moving picture and its standard matching picture, and count the joint points whose distance exceeds a scoring threshold; these are the joint points that need correction. The invention improves the accuracy and training efficiency of movement posture correction and can be used for correcting single-person movement postures.

Description

Single-person movement posture correction method based on neural network
Technical Field
The invention belongs to the technical field of image recognition and computer vision, and mainly relates to a single-person movement posture correction method which can be used for guiding the training of ordinary people.
Background
With the rapid socioeconomic development of modern society, many people neglect the importance of exercise for health. To address this, the state has introduced a series of sports events, such as throwing a solid ball and long-distance running, into the middle-school entrance examination to urge students to exercise. Because of the country's large population base, the gap between the number of sports teachers and the number of students is large, and students cannot be guided promptly and effectively. There is therefore an urgent need for an intelligent method of correcting the exercise postures of ordinary people.
At present, correction of a single person's motion posture is mainly guided by a sports teacher, who evaluates and corrects students' actions based on personal experience in the sport. This mode of guidance depends heavily on the instructor's own level of exercise, and when the instructor's experience is biased, training often has the opposite effect. In addition, because China's population base is huge and the number of sports teachers is limited, not every student can be fully guided, which is unfair to those left without guidance.
Disclosure of Invention
The invention aims to provide a single-person motion posture correction method based on a neural network aiming at the defects of the existing motion posture correction method so as to improve the accuracy and efficiency of motion posture correction.
The idea of the invention is to build a human body joint point detection network based on spatial domain conversion, construct a standard motion data set and a common motion data set, set a scoring threshold of 50, and determine the action points that need correction. The method comprises the following implementation steps:
(1) collecting a training data set:
(1a) downloading an image data set containing human body joint points and storing the image data set into a training image folder A;
(1b) downloading a label file corresponding to the data set, and storing the label file into a training label folder B;
(1c) putting the image folder and the label folder into the same folder to form a training data set;
(2) constructing a human body joint point detection network based on spatial domain conversion, which is formed by cascading an image spatial domain conversion sub-network and a human body joint point detection sub-network, wherein:
the image space domain conversion sub-network consists of 3 convolutional layers in sequence;
the human body joint point detection sub-network comprises 9 convolution layers and 4 deconvolution layers, with the 4 deconvolution layers connected in sequence between the first 8 cascaded convolution layers and the last convolution layer;
(3) training a human body joint point detection network based on spatial domain conversion:
(3a) reading a training data set image from the training image folder A, inputting it into the human body joint point detection network based on spatial domain conversion constructed in step (2), generating a spatial conversion image through the image spatial conversion sub-network, and passing the spatial conversion image through the human body joint point detection sub-network to output the predicted coordinate values of the human body joint points;
(3b) reading the labeled coordinate values corresponding to the images of the training data set from the training labeled folder B, calculating the loss value L of the human body joint point detection network, and training the network constructed in step (2) with this loss value by a stochastic gradient descent algorithm to obtain the trained human body joint point detection network based on spatial domain conversion;
(4) constructing a standard motion data set:
(4a) shooting a standard action video demonstrated by a standard athlete;
(4b) collecting each frame of the shot standard action video into a picture, and storing the picture into a standard picture folder C;
(4c) respectively inputting the collected pictures into a trained human body joint point detection network based on spatial domain conversion to obtain coordinate information of each human body joint point, and storing the obtained coordinate information into a standard labeling folder D;
(5) constructing a common motion data set:
(5a) shooting a non-standard motion video demonstrated by a common athlete;
(5b) collecting each frame of the shot non-standard action video into an image, and storing the image into a test image folder E;
(5c) respectively inputting the collected pictures into a trained human body joint point detection network based on spatial domain conversion to obtain coordinate information of each human body joint point, and storing the obtained coordinate information into a test labeling folder F;
(6) setting a scoring threshold of 50 and determining the action points needing correction:
(6a) reading coordinate information corresponding to the test picture from the test labeling folder F;
(6b) reading coordinate information corresponding to the standard picture from the standard labeling folder D;
(6c) sequentially calculating the Euclidean distance sum of the coordinates of the joint points of the test picture and the standard picture, and taking the standard picture with the minimum Euclidean distance sum as a standard matching picture of the test picture;
(6d) calculating the Euclidean distance between the test picture and the standard matching picture at each joint point, and counting the joint points whose distance exceeds the set scoring threshold; these are the joint points to be corrected.
Compared with the prior art, the invention has the following advantages:
1. The identification accuracy is high.
The existing posture correction method depends heavily on a teacher's exercise experience and level; when the teacher's experience is biased or the teacher is not proficient in a given sport, students' exercise and training are often misled. The invention establishes a human body joint point detection network based on spatial domain conversion, collects standard motion videos, and defines standard actions strictly and uniformly, which greatly improves the accuracy of guidance.
2. The training efficiency is high.
In the existing posture correction approach, because the number of teachers is far smaller than the number of students, students often cannot be guided whenever needed. By establishing a universal motion posture detection method, the invention enables students to receive guidance at any time, greatly improving training efficiency.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
FIG. 2 is a picture of the standard action collected in the present invention.
FIG. 3 is a picture of the test action collected in the present invention.
Detailed Description
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Referring to fig. 1, the specific implementation steps for this example are as follows.
Step 1, a training data set is collected.
(1.1) downloading an image data set containing human body joint points from an open website and storing the image data set into a training image folder A;
(1.2) downloading a label file corresponding to the data set from the public website, and storing the label file into a training label folder B;
the label file contains coordinate information of 18 joint points in the human body, and the 18 joint points are respectively as follows: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear, and left ear;
(1.3) putting the image folder and the label folder into the same folder to form the training data set.
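For illustration, the 18 joint points can be held in a fixed index order so that coordinates read from the annotation files line up with the network outputs; a minimal Python sketch (the constant name is ours, not part of the annotation format):

```python
# The 18 human body joint points in the order listed above; index i in an
# annotation or prediction array refers to JOINT_NAMES[i].
JOINT_NAMES = [
    "nose", "neck", "right shoulder", "right elbow", "right wrist",
    "left shoulder", "left elbow", "left wrist", "right hip", "right knee",
    "right ankle", "left hip", "left knee", "left ankle",
    "right eye", "left eye", "right ear", "left ear",
]
```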
Step 2, building a human body joint point detection network based on spatial domain conversion.
(2.1) constructing an image spatial domain conversion sub-network:
the sub-network is composed of 3 convolutional layers in sequence, where:
the convolution kernel size of the 1st convolution layer is 1 × 1, the number of convolution kernels is 3, and the step size is 1;
the convolution kernel size of the 2nd convolution layer is 1 × 1, the number of convolution kernels is 64, and the step size is 1;
the convolution kernel size of the 3rd convolution layer is 1 × 1, the number of convolution kernels is 3, and the step size is 1.
(2.2) constructing a human joint point detection sub-network:
the sub-network comprises 9 convolution layers and 4 deconvolution layers, with the following structure: first convolution layer → second convolution layer → third convolution layer → fourth convolution layer → fifth convolution layer → sixth convolution layer → seventh convolution layer → eighth convolution layer → first deconvolution layer → second deconvolution layer → third deconvolution layer → fourth deconvolution layer → ninth convolution layer, wherein:
the convolution kernel size of the first convolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 1;
the convolution kernel size of the second convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the third convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the fourth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the fifth convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the sixth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the seventh convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the eighth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the first deconvolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the second deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 2;
the convolution kernel size of the third deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 2;
the convolution kernel size of the fourth deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 1;
the convolution kernel size of the ninth convolution layer is 1 × 1, the number of convolution kernels is 18, and the step size is 1;
(2.3) cascading the built image spatial domain conversion sub-network with the human body joint point detection sub-network to form the human body joint point detection network based on spatial domain conversion.
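For illustration only, the two sub-networks and their cascade can be sketched in PyTorch as below. This is a minimal sketch under stated assumptions: the patent does not name the activation function, the padding, or how coordinates are decoded from the ninth layer's 18 output maps, so the ReLU activations, the 'same'-style padding for 3 × 3 kernels, and output_padding=1 on the stride-2 deconvolutions (so the three 2× downsamplings are undone) are our choices, not the patent's.

```python
import torch
import torch.nn as nn

def conv(in_c, out_c, k, s):
    # Convolution block; ReLU and 'same'-style padding are assumptions.
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, k, stride=s, padding=k // 2),
        nn.ReLU(inplace=True),
    )

class SpatialDomainConversion(nn.Module):
    """Image spatial domain conversion sub-network: three 1x1 convolutions (step 2.1)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            conv(3, 3, 1, 1),    # 1st: 1x1, 3 kernels, stride 1
            conv(3, 64, 1, 1),   # 2nd: 1x1, 64 kernels, stride 1
            conv(64, 3, 1, 1),   # 3rd: 1x1, 3 kernels, stride 1
        )

    def forward(self, x):
        return self.body(x)

class JointPointDetection(nn.Module):
    """Human body joint point detection sub-network: 8 convs -> 4 deconvs -> 1 conv (step 2.2)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            conv(3, 128, 3, 1), conv(128, 256, 1, 2), conv(256, 256, 3, 1),
            conv(256, 256, 1, 2), conv(256, 256, 3, 1), conv(256, 256, 1, 2),
            conv(256, 256, 3, 1), conv(256, 256, 1, 1),
            nn.ConvTranspose2d(256, 256, 3, stride=2, padding=1, output_padding=1),
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),
            nn.ConvTranspose2d(128, 128, 3, stride=2, padding=1, output_padding=1),
            nn.ConvTranspose2d(128, 128, 3, stride=1, padding=1),
            nn.Conv2d(128, 18, 1, stride=1),  # ninth conv: one output map per joint point
        )

    def forward(self, x):
        return self.layers(x)

class JointDetectionNetwork(nn.Module):
    """Cascade of the two sub-networks (step 2.3)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(SpatialDomainConversion(), JointPointDetection())

    def forward(self, x):     # x: (B, 3, H, W) RGB image batch
        return self.net(x)    # (B, 18, H, W) per-joint output maps
```

Under these assumptions the three stride-2 convolutions downsample by 8× and the three stride-2 deconvolutions restore the input resolution, so each of the 18 output maps stays pixel-aligned with the input picture; joint coordinates can then be read off each map, e.g. by its argmax.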
Step 3, training the human body joint point detection network based on spatial domain conversion.
(3.1) reading a training data set image from the training image folder A, inputting it into the human body joint point detection network based on spatial domain conversion constructed in step 2, generating a spatial conversion image through the image spatial conversion sub-network, and passing the spatial conversion image through the human body joint point detection sub-network to output the predicted coordinate values of the human body joint points;
(3.2) reading the labeled coordinate values corresponding to the images of the training data set from the training labeled folder B, and calculating the loss value L of the human body joint point detection network based on spatial domain conversion:
L = ∑_{i=1}^{18} [(x'_i − x_i)² + (y'_i − y_i)²]
where i denotes the serial number of a human body joint point, x'_i and y'_i respectively denote the labeled abscissa and ordinate values of the joint point with that serial number, and x_i and y_i respectively denote the abscissa and ordinate of the predicted coordinate value output by the human body joint point detection network based on spatial domain conversion;
(3.3) using the loss value L of the human body joint point detection network based on spatial domain conversion, training the network constructed in step 2 by a stochastic gradient descent algorithm:
(3.3.1) taking the derivative of the loss value of the human body joint point detection network based on the spatial domain conversion:
F = ∂L/∂θ
where F denotes the derivative of the loss value L of the human body joint point detection network based on spatial domain conversion with respect to its network parameters, and θ denotes the parameters of that network;
(3.3.2) calculating an updated value of the human body joint point detection network parameter based on the spatial domain conversion:
θ_2 = θ − αF
where θ_2 denotes the updated value of the parameters of the human body joint point detection network based on spatial domain conversion, and α is the learning rate of the network, set to 0.00025;
(3.3.3) replacing the parameter θ of the original network with the updated value θ_2;
(3.3.4) iterating steps (3.3.1) to (3.3.3) 150000 times to obtain the trained human body joint point detection network based on spatial domain conversion.
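For illustration, the loss and update rule above can be sketched as a PyTorch training loop over the JointDetectionNetwork sketch after step 2. The patent does not specify how predicted coordinates are obtained from the 18 output maps, so the differentiable soft-argmax decoding below is our assumption, and the synthetic loader stands in for real (image, labeled coordinates) pairs from folders A and B:

```python
import torch

def soft_argmax(heatmaps):
    # (B, 18, H, W) -> (B, 18, 2) expected (x, y) under a spatial softmax.
    # Differentiable stand-in for coordinate decoding; an assumption, not the patent's method.
    b, j, h, w = heatmaps.shape
    probs = heatmaps.view(b, j, -1).softmax(dim=-1).view(b, j, h, w)
    xs = torch.arange(w, dtype=torch.float32)
    ys = torch.arange(h, dtype=torch.float32)
    x = (probs.sum(dim=2) * xs).sum(dim=-1)   # marginalize over rows, then E[x]
    y = (probs.sum(dim=3) * ys).sum(dim=-1)   # marginalize over columns, then E[y]
    return torch.stack((x, y), dim=-1)

net = JointDetectionNetwork()
optimizer = torch.optim.SGD(net.parameters(), lr=0.00025)   # alpha = 0.00025

# Synthetic stand-in for the real training data read from folders A and B.
loader = [(torch.randn(1, 3, 256, 256), torch.rand(1, 18, 2) * 256)] * 150000

for step, (image, labeled_xy) in enumerate(loader):          # labeled_xy: (B, 18, 2)
    pred_xy = soft_argmax(net(image))
    # L = sum_i [(x'_i - x_i)^2 + (y'_i - y_i)^2]
    loss = ((labeled_xy - pred_xy) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()    # F = dL/dtheta
    optimizer.step()   # theta <- theta - alpha * F  (stochastic gradient descent)
    if step + 1 == 150000:                                   # iteration count of (3.3.4)
        break
```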
Step 4, constructing a standard motion data set:
(4.1) shooting a standard motion video demonstrated by a standard athlete, wherein the shooting equipment is Canon EOS 5D Mark IV, and the video frame rate is 60 frames/second;
(4.2) collecting each frame of the shot standard motion video into a picture as shown in figure 2, and storing the picture into a standard picture folder C;
(4.3) inputting each collected picture into the trained human body joint point detection network based on spatial domain conversion to obtain the coordinate information of each human body joint point, and storing the obtained coordinate information into a standard labeling folder D; this collection-and-annotation pipeline is shared with step 5, and a code sketch of it follows step 5.
Step 5, constructing a common motion data set.
(5.1) shooting a nonstandard motion video demonstrated by a common athlete, wherein the shooting equipment is Canon EOS 5D Mark IV, and the video frame rate is 60 frames/second;
(5.2) collecting each frame of the shot non-standard motion video into a picture as shown in figure 3, and storing the picture into a test picture folder E;
(5.3) inputting each collected picture into the trained human body joint point detection network based on spatial domain conversion to obtain the coordinate information of each human body joint point, and storing the obtained coordinate information into a test labeling folder F.
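For illustration, steps 4 and 5 share the same frame-extraction and annotation pipeline; a minimal sketch assuming OpenCV, where detect_joints is a hypothetical wrapper that runs the trained network on one picture and returns the 18 (x, y) coordinates:

```python
import json
import os
import cv2

def video_to_frames(video_path, picture_dir):
    """Collect every frame of the shot video into a picture folder (steps 4b / 5b)."""
    os.makedirs(picture_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(picture_dir, f"{idx:06d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx

def annotate_folder(picture_dir, label_dir, detect_joints):
    """Run the trained network on every picture and store its 18 joint point
    coordinates (steps 4c / 5c); detect_joints(path) -> list of 18 (x, y) pairs."""
    os.makedirs(label_dir, exist_ok=True)
    for name in sorted(os.listdir(picture_dir)):
        coords = detect_joints(os.path.join(picture_dir, name))
        with open(os.path.join(label_dir, name + ".json"), "w") as f:
            json.dump(coords, f)

# Usage, following the folder letters in the text (file names are hypothetical):
# video_to_frames("standard.mp4", "C"); annotate_folder("C", "D", detect_joints)
# video_to_frames("ordinary.mp4", "E"); annotate_folder("E", "F", detect_joints)
```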
Step 6, determining the action points needing to be corrected.
(6.1) reading the coordinate information corresponding to the test picture from the test labeling folder F;
(6.2) reading coordinate information corresponding to the standard picture from the standard labeling folder D;
(6.3) sequentially calculating the sum of Euclidean distances between the coordinates of the test picture and the coordinates of the joint points of the standard picture:
P = ∑_{i=1}^{18} [(a'_i − a_i)² + (b'_i − b_i)²]
where P denotes the sum of Euclidean distances between the joint point coordinates of the test picture and those of the standard picture, i denotes the serial number of a human body joint point, a'_i and b'_i respectively denote the abscissa and ordinate values of the joint point with that serial number in the test picture, and a_i and b_i respectively denote the abscissa and ordinate values of the joint point with that serial number in the standard picture.
(6.4) from the computed sums of Euclidean distances between the joint point coordinates of the test picture and each standard picture, taking the standard picture with the minimum sum as the standard matching picture of the test picture;
(6.5) calculating the Euclidean distance between the test picture and each joint point in the standard matching picture:
Q_j = (c'_j − c_j)² + (d'_j − d_j)², j = 1, 2, ..., 18
where Q_j denotes the Euclidean distance between the coordinates of the j-th joint point in the test picture and in the standard matching picture, j denotes the serial number of a human body joint point, c'_j and d'_j respectively denote the abscissa and ordinate values of the joint point with that serial number in the test picture, and c_j and d_j respectively denote the abscissa and ordinate values of the joint point with that serial number in the standard matching picture.
(6.6) setting the scoring threshold to 50 and counting the joint points for which the Euclidean distance Q_j between the test picture and its standard matching picture exceeds the threshold; these are the joint points to be corrected.
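For illustration, step 6 can be sketched as plain Python over the per-picture coordinate lists produced by the pipeline above; the distance definitions follow the formulas as given (which omit the square root), the threshold of 50 follows the text, and the function names are ours:

```python
def distance_sum(test_xy, std_xy):
    # P = sum_i [(a'_i - a_i)^2 + (b'_i - b_i)^2]  (formula of step 6.3)
    return sum((a2 - a1) ** 2 + (b2 - b1) ** 2
               for (a2, b2), (a1, b1) in zip(test_xy, std_xy))

def joints_to_correct(test_xy, standard_set, threshold=50):
    """test_xy: 18 (x, y) pairs for one test picture; standard_set: a list of such
    lists, one per standard picture. Returns indices of joint points to correct."""
    # (6.3)-(6.4): the standard picture with the minimum distance sum is the match.
    match = min(standard_set, key=lambda std_xy: distance_sum(test_xy, std_xy))
    # (6.5)-(6.6): Q_j = (c'_j - c_j)^2 + (d'_j - d_j)^2; keep joints with Q_j > threshold.
    return [j for j, ((c2, d2), (c1, d1)) in enumerate(zip(test_xy, match))
            if (c2 - c1) ** 2 + (d2 - d1) ** 2 > threshold]
```

With the JOINT_NAMES list from step 1, the returned indices map directly to the joints the student should adjust.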
The foregoing description is only an example of the present invention and is not intended to limit the invention; it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the invention.

Claims (8)

1. A single-person movement posture correction method based on a neural network is characterized by comprising the following steps:
(1) collecting a training data set:
(1a) downloading an image data set containing human body joint points and storing the image data set into a training image folder A;
(1b) downloading a label file corresponding to the data set, and storing the label file into a training label folder B;
(1c) putting the image folder and the label folder into the same folder to form a training data set;
(2) constructing a human body joint point detection network based on spatial domain conversion, which is formed by cascading an image spatial domain conversion sub-network and a human body joint point detection sub-network, wherein:
the image space domain conversion sub-network consists of 3 convolutional layers in sequence;
the human body joint point detection sub-network comprises 9 convolution layers and 4 deconvolution layers, with the 4 deconvolution layers connected in sequence between the first 8 cascaded convolution layers and the last convolution layer;
(3) training a human body joint point detection network based on spatial domain conversion:
(3a) reading a training data set image from the training image folder A, inputting it into the human body joint point detection network based on spatial domain conversion constructed in step (2), generating a spatial conversion image through the image spatial conversion sub-network, and passing the spatial conversion image through the human body joint point detection sub-network to output the predicted coordinate values of the human body joint points;
(3b) reading the labeled coordinate values corresponding to the training data set image from the training labeled folder B, calculating the loss value L of the human body joint point detection network from the labeled coordinate values and the predicted coordinate values output in step (3a), and training the network constructed in step (2) with this loss value by a stochastic gradient descent algorithm to obtain the trained human body joint point detection network based on spatial domain conversion;
(4) constructing a standard motion data set:
(4a) shooting a standard action video demonstrated by a standard athlete;
(4b) collecting each frame of the shot standard action video into a picture, and storing the picture into a standard picture folder C;
(4c) respectively inputting the collected pictures into a trained human body joint point detection network based on spatial domain conversion to obtain coordinate information of each human body joint point, and storing the obtained coordinate information into a standard labeling folder D;
(5) constructing a common motion data set:
(5a) shooting a non-standard motion video demonstrated by a common athlete;
(5b) collecting each frame of the shot non-standard action video into an image, and storing the image into a test image folder E;
(5c) respectively inputting the collected pictures into a trained human body joint point detection network based on spatial domain conversion to obtain coordinate information of each human body joint point, and storing the obtained coordinate information into a test labeling folder F;
(6) setting a scoring threshold of 50 and determining the action points needing correction:
(6a) reading coordinate information corresponding to the test picture from the test labeling folder F;
(6b) reading coordinate information corresponding to the standard picture from the standard labeling folder D;
(6c) sequentially calculating the Euclidean distance sum of the coordinates of the joint points of the test picture and the standard picture, and taking the standard picture with the minimum Euclidean distance sum as a standard matching picture of the test picture;
(6d) calculating the Euclidean distance between the test picture and the standard matching picture at each joint point, and counting the joint points whose distance exceeds the set scoring threshold; these are the joint points to be corrected.
2. The method according to claim 1, wherein (1b) downloading a label file corresponding to the data set, the label file comprising the pictures of the human body and position coordinate information of 18 joints of the human body in each picture, the 18 joints being respectively: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, right eye, left eye, right ear, and left ear.
3. The method of claim 1, wherein the 3 convolutional layers of the image spatial domain conversion sub-network in (2) have the following parameters:
the convolution kernel size of the 1st convolution layer is 1 × 1, the number of convolution kernels is 3, and the step size is 1;
the convolution kernel size of the 2nd convolution layer is 1 × 1, the number of convolution kernels is 64, and the step size is 1;
the convolution kernel size of the 3rd convolution layer is 1 × 1, the number of convolution kernels is 3, and the step size is 1.
4. The method according to claim 1, wherein the human body joint point detection sub-network built in (2) has the following structure in sequence: first convolution layer → second convolution layer → third convolution layer → fourth convolution layer → fifth convolution layer → sixth convolution layer → seventh convolution layer → eighth convolution layer → first deconvolution layer → second deconvolution layer → third deconvolution layer → fourth deconvolution layer → ninth convolution layer, with layer parameters as follows:
the convolution kernel size of the first convolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 1;
the convolution kernel size of the second convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the third convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the fourth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the fifth convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the sixth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the seventh convolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the eighth convolution layer is 1 × 1, the number of convolution kernels is 256, and the step size is 1;
the convolution kernel size of the first deconvolution layer is 3 × 3, the number of convolution kernels is 256, and the step size is 2;
the convolution kernel size of the second deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 2;
the convolution kernel size of the third deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 2;
the convolution kernel size of the fourth deconvolution layer is 3 × 3, the number of convolution kernels is 128, and the step size is 1;
the convolution kernel size of the ninth convolution layer is 1 × 1, the number of convolution kernels is 18, and the step size is 1.
5. The method according to claim 1, wherein the loss value L of the human body joint point detection network in (3b) is calculated by the formula:
L = ∑_{i=1}^{18} [(x'_i − x_i)² + (y'_i − y_i)²]
where i denotes the serial number of a human body joint point, x'_i and y'_i respectively denote the labeled abscissa and ordinate values of the joint point with that serial number, and x_i and y_i respectively denote the abscissa and ordinate of the predicted coordinate value output by the human body joint point detection network.
6. The method according to claim 1, wherein in (3b) the loss value is used to train the human body joint point detection network based on spatial domain conversion by a stochastic gradient descent algorithm, implemented as follows:
(3b1) obtaining the derivative of the loss value of the human body joint point detection network based on spatial domain conversion according to the following formula:
F = ∂L/∂θ
where F denotes the derivative of the loss value L of the human body joint point detection network based on spatial domain conversion with respect to its network parameters, and θ denotes the parameters of that network;
(3b2) calculating the updated value of the human body joint point detection network parameters based on the spatial domain conversion according to the following formula:
θ_2 = θ − αF
where θ_2 denotes the updated value of the parameters of the human body joint point detection network based on spatial domain conversion, and α is the learning rate of the network, set to 0.00025;
(3b3) replacing the parameter θ of the original network with the updated value θ_2;
(3b4) iterating steps (3b1) to (3b3) 150000 times to obtain the trained human body joint point detection network based on spatial domain conversion.
7. The method according to claim 1, wherein the sum of Euclidean distances between the joint point coordinates of the test picture and those of the standard picture is calculated in (6c) by the following formula:
P = ∑_{i=1}^{18} [(a'_i − a_i)² + (b'_i − b_i)²]
where P denotes the sum of Euclidean distances between the joint point coordinates of the test picture and those of the standard picture, i denotes the serial number of a human body joint point, a'_i and b'_i respectively denote the abscissa and ordinate values of the joint point with that serial number in the test picture, and a_i and b_i respectively denote the abscissa and ordinate values of the joint point with that serial number in the standard picture.
8. The method of claim 1, wherein the Euclidean distance between each joint point in the test picture and in its standard matching picture is calculated in (6d) as follows:
Q_j = (c'_j − c_j)² + (d'_j − d_j)², j = 1, 2, ..., 18
where Q_j denotes the Euclidean distance between the coordinates of the j-th joint point in the test picture and in the standard matching picture, j denotes the serial number of a human body joint point, c'_j and d'_j respectively denote the abscissa and ordinate values of the joint point with that serial number in the test picture, and c_j and d_j respectively denote the abscissa and ordinate values of the joint point with that serial number in the standard matching picture.
CN201911258388.4A 2019-12-10 2019-12-10 Single-person movement posture correction method based on neural network Active CN111079616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911258388.4A CN111079616B (en) 2019-12-10 2019-12-10 Single-person movement posture correction method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911258388.4A CN111079616B (en) 2019-12-10 2019-12-10 Single-person movement posture correction method based on neural network

Publications (2)

Publication Number Publication Date
CN111079616A CN111079616A (en) 2020-04-28
CN111079616B (en) 2022-03-04

Family

ID=70313971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911258388.4A Active CN111079616B (en) 2019-12-10 2019-12-10 Single-person movement posture correction method based on neural network

Country Status (1)

Country Link
CN (1) CN111079616B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019035586A1 (en) * 2017-08-18 2019-02-21 강다겸 Method and apparatus for providing posture guide
CN108762495A (en) * 2018-05-18 2018-11-06 深圳大学 The virtual reality driving method and virtual reality system captured based on arm action
CN109086754A (en) * 2018-10-11 2018-12-25 天津科技大学 A kind of human posture recognition method based on deep learning
CN110175566A (en) * 2019-05-27 2019-08-27 大连理工大学 A kind of hand gestures estimating system and method based on RGBD converged network
CN110245623A (en) * 2019-06-18 2019-09-17 重庆大学 A kind of real time human movement posture correcting method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Behavior recognition based on human body pose sequence extraction and analysis; 陈聪 (Chen Cong); 《中国博士学位论文全文数据库 (基础科学辑)》 (China Doctoral Dissertations Full-text Database, Basic Sciences); 2013-11-15; full text *

Also Published As

Publication number Publication date
CN111079616A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
Yan et al. Crowd counting via perspective-guided fractional-dilation convolution
US11417095B2 (en) Image recognition method and apparatus, electronic device, and readable storage medium using an update on body extraction parameter and alignment parameter
CN109410190B (en) Tower pole reverse-breaking detection model training method based on high-resolution remote sensing satellite image
CN112464915B (en) Push-up counting method based on human skeleton point detection
CN105976395A (en) Video target tracking method based on sparse representation
CN111967407B (en) Action evaluation method, electronic device, and computer-readable storage medium
Zhang et al. Semi-supervised action quality assessment with self-supervised segment feature recovery
CN113705540A (en) Method and system for recognizing and counting non-instrument training actions
CN114783043B (en) Child behavior track positioning method and system
Zhao et al. 3d pose based feedback for physical exercises
CN115131879A (en) Action evaluation method and device
CN114495169A (en) Training data processing method, device and equipment for human body posture recognition
CN113033721B (en) Title correction method and computer storage medium
CN111079616B (en) Single-person movement posture correction method based on neural network
Guo et al. PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training
CN110070036B (en) Method and device for assisting exercise motion training and electronic equipment
CN107274388A (en) It is a kind of based on global information without refer to screen image quality evaluating method
CN116386136A (en) Action scoring method, equipment and medium based on human skeleton key points
CN110175531B (en) Attitude-based examinee position positioning method
CN115205332A (en) Moving object identification and motion track calculation method
CN115188051A (en) Object behavior-based online course recommendation method and system
CN114998803A (en) Body-building movement classification and counting method based on video
CN113361928A (en) Crowdsourcing task recommendation method based on special-pattern attention network
CN107392102A (en) Based on the family of local image characteristics and multi-instance learning group photo and non-family safe group photo sorting technique
CN111738343A (en) Image labeling method based on semi-supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant