CN110163086A - Body-building action identification method, device, equipment and medium neural network based - Google Patents
- Publication number: CN110163086A
- Application number: CN201910278750.8A
- Authority
- CN
- China
- Prior art keywords
- data
- feature
- standard
- characteristic
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Abstract
The invention discloses a neural-network-based fitness action recognition method, device, equipment and medium. The method sequentially inputs at least one piece of segmented feature data, in chronological order, into a trained fitness action recognition model to obtain the action type corresponding to each piece of segmented feature data; obtains the corresponding standard feature group data based on the action type; selects specific feature data groups from the feature data to be recognized according to the standard feature group data and performs a standard judgment on them to obtain feature recognition results; and counts the feature recognition results of all specific feature data groups to determine whether each specific feature data group is standard. When the feature recognition result of every specific feature data group is "feature standard", the fitness recognition result of the action sequence data to be recognized is determined to be "action standard". This guarantees the accuracy of recognizing the action sequence data while avoiding the use of expensive sensors, saving cost.
Description
Technical field
The present invention relates to the field of fitness action recognition, and in particular to a neural-network-based fitness action recognition method, device, equipment and medium.
Background technique
People's living standards keep rising and they pay increasing attention to health, so fitness exercise has become a common choice for a healthy life. The most scientific and well-founded option today is the gym, where a professional coach guides each exercise so that every movement is performed to standard and the training effect is improved. But this option is not available to everyone and is difficult to popularize. To let exercisers check, anytime and anywhere, whether their movements are standard and whether the training effect meets expectations, two approaches are currently used to judge whether a user's fitness action is standard. The first obtains the body's acceleration and displacement information through a wearable device and judges the fitness action from that information; the judgment process is simple, but its accuracy is low. The second captures video of the person with one or more depth-image sensors and judges whether the body's fitness action is standard by pattern matching on the video; this effectively improves the accuracy of the judgment, but depth-image sensors are expensive.
Summary of the invention
The embodiments of the present invention provide a neural-network-based fitness action recognition method, device, equipment and medium, to solve the prior-art problem that low cost and high accuracy cannot be achieved simultaneously when judging whether a fitness action is standard.
A neural-network-based fitness action recognition method, comprising:

obtaining action sequence data to be recognized collected by a data acquisition device;

performing feature extraction on the action sequence data to be recognized using a direction cosine matrix algorithm to obtain posture feature data to be recognized, and taking the action sequence data and the posture feature data to be recognized together as feature data to be recognized;

normalizing the feature data to be recognized to obtain normalized feature data;

segmenting the normalized feature data with a time segmentation algorithm to obtain at least one piece of segmented feature data corresponding to the normalized feature data;

sequentially inputting the at least one piece of segmented feature data, in chronological order, into a trained fitness action recognition model to obtain the action type corresponding to each piece of segmented feature data;

obtaining corresponding standard feature group data based on the action type, selecting specific feature data groups from the feature data to be recognized according to the standard feature group data, and performing a standard judgment to obtain the feature recognition result of each specific feature data group;

counting the feature recognition results of all specific feature data groups; when the feature recognition result of every specific feature data group is "feature standard", the fitness recognition result of the action sequence data to be recognized is "action standard".
A neural-network-based fitness action recognition device, comprising:

a feature-data acquisition module, for obtaining action sequence data to be recognized collected by a data acquisition device;

a feature extraction module, for performing feature extraction on the action sequence data to be recognized using a direction cosine matrix algorithm to obtain posture feature data to be recognized, and taking the action sequence data and the posture feature data to be recognized together as feature data to be recognized;

a normalization module, for normalizing the feature data to be recognized to obtain normalized feature data;

a segmentation module, for segmenting the normalized feature data with a time segmentation algorithm to obtain at least one piece of segmented feature data corresponding to the normalized feature data;

a model recognition module, for sequentially inputting the at least one piece of segmented feature data, in chronological order, into a trained fitness action recognition model to obtain the action type corresponding to each piece of segmented feature data;

a feature-standard judgment module, for obtaining corresponding standard feature group data based on the action type, selecting specific feature data groups from the feature data to be recognized according to the standard feature group data, and performing a standard judgment to obtain the feature recognition result of each specific feature data group;

an action-standard judgment module, for counting the feature recognition results of all specific feature data groups; when the feature recognition result of every specific feature data group is "feature standard", the fitness recognition result of the action sequence data to be recognized is "action standard".
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above neural-network-based fitness action recognition method when executing the computer program.

A computer-readable storage medium storing a computer program that, when executed by a processor, implements the above neural-network-based fitness action recognition method.
In the above neural-network-based fitness action recognition method, device, computer equipment and storage medium, the action sequence data to be recognized are obtained first, and the corresponding feature data to be recognized are derived from them, converting the action sequence data into data a computer can compute with. The feature data to be recognized are then normalized into a form that fits the sigmoid function, facilitating subsequent recognition by the fitness action recognition model and improving recognition accuracy. Next, the normalized feature data are cut into at least one piece of segmented feature data, which are input one by one into the fitness action recognition model to obtain each piece's action type, improving the accuracy of the action type. Finally, the corresponding standard feature group data are obtained via the action type and used to determine whether each specific feature data group is standard, and thereby whether the action sequence data to be recognized are standard; this guarantees the accuracy of recognizing the action sequence data while avoiding expensive sensors and saving cost.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the neural-network-based fitness action recognition method in an embodiment of the invention;
Fig. 2 is a detailed flow chart of step S20 in Fig. 1;
Fig. 3 is a detailed flow chart of step S60 in Fig. 1;
Fig. 4 is another flow chart of the neural-network-based fitness action recognition method in an embodiment of the invention;
Fig. 5 is a detailed flow chart of step S503 in Fig. 4;
Fig. 6 is a detailed flow chart of step S505 in Fig. 4;
Fig. 7 is a schematic diagram of the neural-network-based fitness action recognition device in an embodiment of the invention;
Fig. 8 is a schematic diagram of a computer device in an embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the invention.
The neural-network-based fitness action recognition method provided by this application can be applied to various electronic devices, including but not limited to personal computers, laptops, smartphones, tablets and portable wearable devices. It can also be applied to a server, with the electronic device and the server exchanging information over a network to realize the method.

Specifically, when applied in an electronic device, the fitness action recognition method integrated in the device performs the standard judgment on the data corresponding to the fitness action to obtain the action recognition result. When applied in a server, the electronic device obtains the data corresponding to the fitness action and sends them to the server; the server performs the standard judgment on the received data with the stored neural-network-based fitness action recognition method, obtains the action recognition result, and sends it back to the corresponding electronic device. The action recognition result is the outcome of recognizing the data corresponding to the fitness action with this method, i.e. whether the collected fitness action is standard; it is either "action standard" or "action non-standard".
In one embodiment, as shown in Fig. 1, a neural-network-based fitness action recognition method is provided, comprising the following steps:
S10: obtain the action sequence data to be recognized collected by the data acquisition device.

Here, the data acquisition device is the device that collects the data related to the user's fitness actions. In this embodiment it includes, but is not limited to, devices equipped with sensors such as an acceleration sensor, a gyroscope and a magnetometer.

The action sequence data to be recognized are the data corresponding to at least one fitness action, collected by the acquisition device as a time series. In this embodiment they include, but are not limited to, the translational acceleration, angular acceleration and magnetic field data corresponding to each fitness action. Fitness actions in this embodiment include, but are not limited to, clap jumps, jumping jacks and turning in place.

Specifically, the electronic device or server obtains the action sequence data to be recognized collected by the data acquisition device, providing an effective data source for the subsequent judgment of whether the fitness action is standard.
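As a concrete illustration of the kind of record step S10 collects, the sketch below models one time-stamped sensor reading and orders a batch of readings into a time series. All names (`MotionSample`, `to_sequence`) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MotionSample:
    """One time-stamped reading from the wearable data-acquisition device."""
    t: float                            # timestamp, seconds
    accel: Tuple[float, float, float]   # translational acceleration, m/s^2
    gyro: Tuple[float, float, float]    # angular velocity, rad/s
    mag: Tuple[float, float, float]     # magnetic field, uT

def to_sequence(samples: List[MotionSample]) -> List[MotionSample]:
    """The action sequence to be recognized is the samples ordered by time."""
    return sorted(samples, key=lambda s: s.t)
```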
S20: perform feature extraction on the action sequence data to be recognized using the direction cosine matrix algorithm to obtain posture feature data to be recognized, and take the action sequence data and the posture feature data to be recognized together as the feature data to be recognized.

Here, the direction cosine matrix algorithm computes the attitude from the acceleration data measured by the acceleration sensor and the angular velocity data measured by the gyroscope, yielding a direction cosine matrix that characterizes the posture. The posture feature data to be recognized are the posture-characterizing data obtained from the action sequence data through this algorithm; they can be used to estimate the spatial position of the sequence data to be recognized. After the direction cosine matrix is obtained, it can be converted into a quaternion or Euler angles as required.

Specifically, after the action sequence data to be recognized are obtained, feature extraction is performed on them with the direction cosine matrix algorithm to obtain the posture feature data to be recognized. To reflect the action sequence data more completely, once the posture feature data are obtained, the action sequence data and the posture feature data to be recognized are taken together as the feature data to be recognized, i.e. the data comprising both.
S30: normalize the feature data to be recognized to obtain normalized feature data.

Specifically, after the feature data to be recognized are obtained, they are normalized so that they fit the form of the sigmoid function, which in this embodiment is the activation function of the LSTM model; this facilitates the subsequent steps. The normalized feature data are the feature data to be recognized after being processed into that form.
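The patent does not give the normalization formula; a minimal sketch, assuming plain min-max scaling into [0, 1] (the output range of the sigmoid gates), could look like this:

```python
import numpy as np

def normalize(x, eps=1e-8):
    """Min-max normalization of one feature channel into [0, 1], matching the
    range of the sigmoid activation used by the LSTM recognition model."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + eps)   # eps guards against a constant channel
```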
S40: segment the normalized feature data with the time segmentation algorithm to obtain at least one piece of segmented feature data corresponding to the normalized feature data.

Specifically, after the normalized feature data are obtained, it is uncertain how many fitness actions' feature data they contain. To improve the accuracy of step S50, the normalized feature data must therefore be cut, before step S50, according to a preset time segmentation algorithm, so that they are converted into at least one piece of segmented feature data. The time segmentation algorithm is a method that cuts the normalized feature data according to a preset time span; the segmented feature data are the pieces obtained from the normalized feature data by this cut.

Further, if the normalized feature data were not segmented before step S50, they might contain multiple fitness actions; inputting such data directly into the fitness action recognition model would make the obtained action type highly inaccurate and severely degrade the model's recognition accuracy. The fitness action recognition model is the model used to recognize the action type of a fitness action.
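The time segmentation described above can be sketched as cutting the sequence into windows of a fixed sample count (derived from the preset time span); the window and step sizes here are illustrative assumptions.

```python
def segment(features, window, step=None):
    """Cut a normalized feature sequence into fixed-length windows.
    `window` is the sample count of the preset time span; `step` allows
    overlapping windows, defaulting to non-overlapping ones."""
    step = step or window
    return [features[i:i + window]
            for i in range(0, len(features) - window + 1, step)]
```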
S50: sequentially input the at least one piece of segmented feature data, in chronological order, into the trained fitness action recognition model to obtain the action type corresponding to each piece of segmented feature data.

The fitness action recognition model in this embodiment is obtained by training an LSTM model. The LSTM model is a recurrent neural network suitable for processing and predicting events with a time series.

Specifically, after the segmented feature data are obtained, the at least one piece of segmented feature data is input one by one, in chronological order, into the trained fitness action recognition model to obtain each piece's action type. The action type characterizes which action the fitness movement belongs to, e.g. clap jump, jumping jack or turning in place. Recognizing the segmented feature data with the fitness action recognition model effectively improves the accuracy of the action type.
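The patent only names the LSTM model; to make the gating structure concrete, here is a minimal single-cell forward pass with random placeholder weights, purely illustrative and in no way the patent's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyLSTMCell:
    """Minimal LSTM cell illustrating the sigmoid-gated recurrence that the
    fitness-action recognition model is trained from (placeholder weights)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # stacked weights for the input, forget, cell and output gates
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.n_hidden = n_hidden

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # gates lie in (0, 1)
        c = f * c + i * np.tanh(g)                     # new cell state
        h = o * np.tanh(c)                             # new hidden state
        return h, c
```

A real model would stack such cells over every sample of a segmented window and feed the final hidden state to a classifier over the action types.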
S60: obtain the corresponding standard feature group data based on the action type, select specific feature data groups from the feature data to be recognized according to the standard feature group data, and perform a standard judgment to obtain the feature recognition result of each specific feature data group.

Here, the standard feature group data are pre-stored data that characterize a standard fitness action; understandably, one set of standard feature group data contains at least one standard feature. In this embodiment a standard feature is a feature determination range determined from the motion trajectory of a standard fitness action; a feature determination range is the pre-stored determination range around a characteristic point of that trajectory, and a characteristic point is a point that characterizes the trajectory. For example, if the motion trajectory of a standard fitness action A is a downward-opening waveform, the characteristic points of the standard fitness action are the waveform's two minima and one maximum, and the corresponding standard features are a1 (time t1, rotation angle 45° to 60°), a2 (time t2, rotation angle 90° to 120°) and a3 (time t3, rotation angle 20° to 30°).

Specifically, after the action type is obtained, the standard feature group data corresponding to that action type are obtained. Specific feature data groups are then selected from the feature data to be recognized according to the standard feature group data and subjected to the standard judgment to obtain the feature recognition results. A specific feature data group is a group of feature data, within the feature data to be recognized, that matches all the standard features in the standard feature group data. The feature recognition result is the outcome of judging a specific feature data group against the standard feature group data, and is either "feature standard" or "feature non-standard".
S70: count the feature recognition results of all specific feature data groups; when the feature recognition result of every specific feature data group is "feature standard", the fitness recognition result of the action sequence data to be recognized is "action standard".

Specifically, after the feature recognition results are obtained, the results of all specific feature data groups are counted. When every result is "feature standard", the fitness recognition result of the action sequence data to be recognized is "action standard"; when even one result is "feature non-standard", the fitness recognition result of the action sequence data to be recognized is "action non-standard".

Further, when the fitness recognition result is "action non-standard", a preset indication (e.g. vibration, an indicator light or a sound prompt) is triggered to inform the user that the performed fitness action is non-standard.
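The judgment logic of steps S60 and S70 can be sketched as a range check per standard feature followed by an all-or-nothing aggregation; the dictionary layout is an illustrative assumption, reusing the a1/a2/a3 example above.

```python
def judge_group(measured, standard_group):
    """Feature recognition result for one specific feature data group:
    'feature standard' (True) only if every standard feature's determination
    range contains the matching measured value, keyed by characteristic
    attribute, e.g. standard_group = {'a1': (45, 60), ...}."""
    return all(attr in measured and lo <= measured[attr] <= hi
               for attr, (lo, hi) in standard_group.items())

def judge_sequence(groups, standard_group):
    """Step S70: the action is 'action standard' only when every specific
    feature data group passes the standard judgment."""
    return all(judge_group(g, standard_group) for g in groups)
```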
In steps S10 to S70, the action sequence data to be recognized are obtained first, and the corresponding feature data to be recognized are derived from them, converting the action sequence data into data a computer can compute with. The feature data to be recognized are then normalized into a form that fits the sigmoid function, facilitating subsequent recognition by the fitness action recognition model and improving recognition accuracy. Next, the normalized feature data are cut into at least one piece of segmented feature data, which are input one by one into the fitness action recognition model to obtain each piece's action type, improving the accuracy of the action type. Finally, the corresponding standard feature group data are obtained via the action type and used to determine whether each specific feature data group is standard, and thereby whether the action sequence data to be recognized are standard; this guarantees the accuracy of recognizing the action sequence data while avoiding expensive sensors and saving cost.
In one embodiment, as shown in Fig. 2, step S20, performing feature extraction on the action sequence data to be recognized with the direction cosine matrix algorithm to obtain the posture feature data to be recognized, specifically comprises the following steps:

S21: preprocess the action sequence data to be recognized to obtain effective action sequence data.

Because the acquired action sequence data to be recognized are read directly from the acceleration sensor, gyroscope and magnetometer, they contain a great deal of noise, including non-gravitational inertial forces and magnetic interference caused by the local magnetic field. Therefore, after the action sequence data are obtained, they must be preprocessed, for example denoised and filtered, to obtain the effective action sequence data, i.e. the action sequence data left after removing the noise from the action sequence data to be recognized. Preprocessing the action sequence data improves the accuracy of the posture feature data obtained in the subsequent step.

S22: compute the effective action sequence data with the direction cosine matrix algorithm to obtain the posture feature data to be recognized.

Specifically, after the effective action sequence data are obtained, they are converted by the direction cosine matrix algorithm into the posture feature data to be recognized, i.e. the direction cosine matrix.

In steps S21 and S22, the action sequence data are denoised, filtered and otherwise preprocessed into effective action sequence data, improving the accuracy of the feature data to be recognized.
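The patent does not name a specific filter; as one plausible preprocessing step for S21, a moving-average low-pass filter can strip high-frequency sensor noise from a raw channel before the attitude computation:

```python
import numpy as np

def denoise(signal, k=5):
    """Moving-average low-pass filter over one sensor channel. `k` is an
    assumed window length; real preprocessing might use a Butterworth or
    complementary filter instead."""
    signal = np.asarray(signal, dtype=float)
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode='same')
```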
In one embodiment, as shown in figure 3, step S60, obtains corresponding standard feature group data based on type of action,
Specific feature data group is chosen from characteristic to be identified by standard feature group data and carries out standard judgement, is obtained specific
The corresponding feature recognition result of characteristic group, specifically comprises the following steps:
S61: standard feature group data corresponding with type of action are obtained based on type of action, standard feature group data include
At least one standard feature and standard time range, standard feature carry characteristic attribute.
Wherein, standard time range refers to the range being arranged according to standard feature group time span.The standard feature group time is long
Degree refers to the time span terminated first standard feature since standard feature group data to the last one standard feature.It is special
Sign attribute refers to the attribute for unique identification standard feature.
S62: by the characteristic attribute of all standard features in standard feature group data, identification feature data is treated and are divided
Group, obtains specific feature data group, and specific feature data group carries the time to be identified.
Specifically, after obtaining standard feature group data, according to the feature of all standard features in standard feature group data
Attribute is treated identification feature data and is grouped, and specific feature data group is obtained.If type of action includes A and B, the corresponding mark of A
Quasi- feature group data include 3 standard features, respectively a1、a2And a3, a1Characteristic attribute be A in first minimum, a2's
Characteristic attribute is first maximum in A, a3Characteristic attribute be A in second minimum.The standard feature group data of B include
2 standard features, respectively b1And b2, b1Characteristic attribute be B in first minimum, b2Characteristic attribute be in B the
One maximum.Characteristic to be identified is a1、a2、a3、b1、b2、a1、a2、a3, then treated according to this 2 standard feature group data
Identification feature data grouping, the specific feature data group of acquisition are 2 a1、a2And a3, 1 b1And b2。
Having time is carried by action sequence data to be identified in this present embodiment, it is therefore, to be identified in the present embodiment
Characteristic is also to carry having time, it will be appreciated that ground is also to carry having time by being grouped obtained specific feature data group
, the time which carries is the time to be identified.The time to be identified refers to from specific feature data group
First characteristic starts the time span terminated to the last one characteristic.
S63: treating recognition time based on standard time range and screened, by the spy to be identified within the standard time
For sign group data as feature group data to be analyzed, feature group to be analyzed includes at least one feature to be analyzed.
Specifically, after obtaining the time to be identified, recognition time is treated using the standard time and is screened, mark is only remained in
Feature group data to be identified in quasi- time range, and subsequent step is executed as feature group data to be analyzed.Wherein, to
Analysis feature group data refer to the feature group data to be identified within the standard time.One feature group to be analyzed includes at least one
A feature to be analyzed.
S64: judge whether each feature to be analyzed in the feature group data to be analyzed meets the corresponding standard feature in the corresponding standard feature group data.
S65: if every feature to be analyzed in the feature group data to be analyzed meets its corresponding standard feature in the corresponding standard feature group data, the feature recognition result is feature standard; if any feature to be analyzed in the feature group data to be analyzed does not meet its corresponding standard feature in the corresponding standard feature group data, the feature recognition result is feature nonstandard.
Specifically, if every feature to be analyzed in the feature group data to be analyzed meets its corresponding standard feature in the corresponding standard feature group data, it indicates that the features to be analyzed differ little from the standard features of the same characteristic attribute, and the corresponding feature recognition result is feature standard; if any feature to be analyzed does not meet its corresponding standard feature, it indicates that the difference from the standard feature of the same characteristic attribute is large, and the corresponding feature recognition result is feature nonstandard.
Further, if the feature recognition result is feature nonstandard, it indicates that the user's body-building movement is nonstandard, and a preset prompt (such as vibration, an indicator light, or a sound alert) is triggered to remind the user.
Through step S61 to step S65, standard feature group data corresponding to the action type is obtained by action type, and the feature recognition result is obtained by judging whether each feature to be analyzed in the feature group data to be analyzed meets each standard feature in the corresponding standard feature group data, thereby determining whether the feature recognition result is standard, which can improve the judgment accuracy.
In one embodiment, as shown in Figure 4, the neural-network-based body-building action identification method further includes:
S501: obtain raw motion data; the raw motion data carries an action type label.
Wherein, raw motion data refers to action data used for training the model. In order to make the subsequently trained model more accurate when determining the action type, the large amount of raw motion data acquired in this embodiment includes both standard action data and nonstandard action data. For example, if a standard body-building action A takes 5 s, some users may perform action A nonstandardly, taking perhaps 3 s or 7 s; although the durations differ, the action the user performs is still body-building action A. In order to conveniently judge whether the subsequently trained model is trained successfully, each acquired raw motion data carries a corresponding action type label. The action type labels of this embodiment include but are not limited to clap jump, jumping jack, and turning in place.
S502: perform feature extraction on the raw motion data using the direction cosine matrix algorithm to obtain original posture feature data, and take the raw motion data and the original posture feature data as initial characteristic data.
Wherein, the original posture feature data refers to the direction cosine matrix obtained by converting the raw motion data through the direction cosine matrix algorithm. The initial characteristic data refers to the data including the raw motion data and the original posture feature data.
S503: perform standard processing on the initial characteristic data to obtain target signature data.
The standard processing in this embodiment includes preprocessing and feature adjustment processing. The specific preprocessing procedure is consistent with the processing method in step S21 and, to avoid repetition, is not described again. Feature adjustment processing refers to filling in missing values for the initial characteristic data corresponding to different durations within the same action type, according to the maximum duration length corresponding to that initial characteristic data. When the time length of a piece of initial characteristic data is less than the maximum duration length, missing values are filled in so as to unify the data dimensions. It should be noted that the premise of filling is that the magnitude of the initial characteristic data must not be changed. The target signature data refers to the data obtained after denoising and filtering the initial characteristic data.
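The feature adjustment padding can be sketched as follows. Zero-padding is an assumption here: the text only requires that filling not change the magnitudes of the existing initial characteristic data, and does not name a fill value.

```python
# Sketch of the feature adjustment step in S503: pad every feature
# sequence to the maximum duration length within its action type.
# Zero-fill is an assumed choice, not the patent's stated method.

def pad_to_length(series, max_len, fill=0.0):
    """Pad a feature sequence with `fill` so all samples share one length;
    existing values are never altered."""
    if len(series) >= max_len:
        return series[:max_len]
    return series + [fill] * (max_len - len(series))

samples = [[1.0, 2.0, 3.0], [4.0, 5.0]]        # hypothetical feature data
max_len = max(len(s) for s in samples)          # maximum duration length
padded = [pad_to_length(s, max_len) for s in samples]
```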
S504: divide the target signature data into a training set and a test set.
Wherein, the training set refers to the target signature data used to train the model, and the test set refers to the target signature data used to test the trained model.
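A minimal sketch of the division in S504. The 80/20 ratio and the random shuffling are assumptions, since the text does not specify a split method or ratio.

```python
import random

# Sketch of S504 (assumed 80/20 random split with a fixed seed for
# reproducibility; the patent does not state the split procedure).

def split_dataset(data, train_ratio=0.8, seed=42):
    """Shuffle samples and split them into a training set and a test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(10)))
```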
S505: input the target signature data corresponding to the training set into the original LSTM model for training to obtain an effective LSTM model.
This specifically includes the following three steps: 1) In the hidden layer of the LSTM, process the target signature data using a first activation function to obtain neurons carrying activation state identifiers. The first activation function is a function for activating the neuron state. The neuron state determines the information discarded, added, and output by each gate (i.e., the input gate, the forget gate, and the output gate). The activation state identifiers include a passed identifier and a not-passed identifier; in this embodiment, the identifiers corresponding to the input gate, the forget gate, and the output gate are i, f, and o respectively. In this embodiment, the sigmoid (S-shaped growth curve) function is specifically selected as the first activation function. The sigmoid function is a common S-shaped function in biology; in information science, owing to properties such as monotonic increase and a monotonically increasing inverse function, the sigmoid function is often used as the threshold function of a neural network, mapping a variable into the interval between 0 and 1. The calculation formula of the activation function is σ(z) = 1/(1 + e^(−z)), wherein z indicates the output value of the forget gate.
2) In the hidden layer of the LSTM, process the neurons carrying activation state identifiers using a second activation function to obtain the output of the LSTM output layer. In this embodiment, since the expressive ability of a linear model is insufficient, tanh (hyperbolic tangent) is used as the second activation function; the tanh activation function has the advantage of fast convergence, which saves training time and improves training efficiency.
3) Based on the output of the LSTM model output layer, construct the loss function E_loss = −ln Σ_{(x,z)∈S} p(z|x), wherein p(z|x) indicates the probability that the output of the LSTM model output layer for the target signature data x is z, and z refers to the action type label. Then calculate the gradient of the loss function with respect to the weight and bias parameters in the LSTM model through the Adam algorithm, and update the weights and biases in the direction opposite to the gradient of the loss function to obtain the effective LSTM model.
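Steps 1) to 3) can be illustrated with a scalar sketch: sigmoid gates and tanh inside a single LSTM cell step, plus the loss E_loss = −ln Σ p(z|x) computed over softmax outputs. The weight w, bias b, and logits below are placeholders for illustration, not trained model parameters.

```python
import math

# Scalar sketch of steps 1)-3). Weights are illustrative assumptions.

def sigmoid(z):
    # First activation function: maps any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w=0.5, b=0.0):
    """One LSTM cell step: sigmoid gates i/f/o plus tanh candidate state."""
    i = sigmoid(w * x + w * h_prev + b)    # input gate
    f = sigmoid(w * x + w * h_prev + b)    # forget gate
    o = sigmoid(w * x + w * h_prev + b)    # output gate
    g = math.tanh(w * x + w * h_prev + b)  # candidate cell state (tanh)
    c = f * c_prev + i * g                 # new cell state
    h = o * math.tanh(c)                   # new hidden state (output)
    return h, c

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def loss(batch_logits, labels):
    # E_loss = -ln( sum over (x, z) in S of p(z | x) )
    total = sum(softmax(lg)[z] for lg, z in zip(batch_logits, labels))
    return -math.log(total)

h, c = lstm_step(1.0, 0.0, 0.0)
better = loss([[5.0, 0.0, 0.0]], [0])  # correct label dominates -> lower loss
worse = loss([[1.0, 0.0, 0.0]], [0])   # correct label less certain
```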
S506: input the target signature data corresponding to the test set into the effective LSTM model for testing, and take the outputs whose probability value is greater than the preset probability as test outputs.
Wherein, the preset probability value is a preset value used to screen which of the outputs, obtained after the test set is input into the effective LSTM model, can serve as test outputs.
S507: obtain the detection probability that the test outputs are consistent with the action type labels; if the detection probability is not less than the preset probability, take the effective LSTM model as the body-building action recognition model.
Wherein, the detection probability refers to the percentage of test outputs consistent with the action type labels out of the total number of test outputs. The preset probability is a preset probability used to reflect whether the trained effective LSTM meets the requirements.
Specifically, when the detection probability is less than the preset probability, it indicates that the trained effective LSTM does not meet the requirements; when the detection probability is not less than the preset probability, it indicates that the trained effective LSTM meets the requirements and can be used as the body-building action recognition model to identify the action sequence data to be identified, so as to determine the action type of the action sequence data to be identified.
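Steps S506 and S507 can be sketched as follows. The threshold screening and the label-matching rule are paraphrased from the text; the labels and probability values are illustrative.

```python
# Sketch of S506 (threshold screening) and S507 (detection probability).
# All labels and probabilities below are hypothetical examples.

def detection_probability(predictions, labels, probs, prob_threshold):
    """Keep outputs whose probability exceeds the preset threshold (S506),
    then return the share of kept outputs matching their label (S507)."""
    kept = [(p, l) for p, l, pr in zip(predictions, labels, probs)
            if pr > prob_threshold]
    if not kept:
        return 0.0
    correct = sum(1 for p, l in kept if p == l)
    return correct / len(kept)

dp = detection_probability(["jump", "clap", "jump"],
                           ["jump", "jump", "jump"],
                           [0.9, 0.8, 0.4],
                           prob_threshold=0.5)
# two outputs pass the threshold, one matches its label -> dp == 0.5
```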
In step S501 to step S507, raw motion data is obtained and standard processing is performed on the initial characteristic data corresponding to the raw motion data, so as to reduce differences in the target signature data and avoid deviations during training of the original LSTM model. The target signature data is then divided into a training set and a test set; the original LSTM model is trained using the target signature data corresponding to the training set to obtain the effective LSTM model, and the effective LSTM model is tested using the target signature data corresponding to the test set to obtain the detection probability. If the detection probability reaches the preset probability, it indicates that the trained effective LSTM meets the requirements and can be used as the body-building action recognition model to identify the action sequence data to be identified, thereby achieving the purpose of determining the action type of the action sequence data to be identified.
In one embodiment, as shown in Figure 5, step S503, in which standard processing is performed on the initial characteristic data to obtain target signature data, specifically includes the following steps:
S5031: based on the original times of the initial characteristic data corresponding to the same action type label, choose the original time with the largest value from the original times as the effective time.
Wherein, the original time refers to the actual time of the initial characteristic data. The effective time refers to the original time with the largest value chosen from the original times. Choosing the largest original time as the effective time can guarantee the completeness of the initial characteristic data. If the largest original time were not taken as the effective time, the initial characteristic data corresponding to the largest original time would have incomplete features when input into the original LSTM model, causing a large error in the trained effective LSTM model.
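Step S5031 amounts to taking, per action type label, the maximum original time. A minimal sketch follows; the sample labels and durations are hypothetical.

```python
from collections import defaultdict

# Sketch of S5031: per action type label, keep the maximum original time
# as the effective time. Sample data is illustrative.

def effective_times(samples):
    """samples: list of (action_type_label, original_time_seconds)."""
    best = defaultdict(float)
    for label, t in samples:
        best[label] = max(best[label], t)
    return dict(best)

eff = effective_times([("A", 5.0), ("A", 3.0), ("A", 7.0), ("B", 4.0)])
# eff == {"A": 7.0, "B": 4.0}
```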
S5032: perform feature adjustment processing on the initial characteristic data based on the effective time to obtain validity feature data.
Specifically, after the effective time is obtained, missing values are filled in for the initial characteristic data whose original time is less than the effective time. The missing-value filling method can be designed according to the actual situation, but it must be ensured that the magnitude of the initial characteristic data is not changed.
S5033: normalize the validity feature data to obtain target signature data.
The specific normalization process is the same as in step S30 and, to avoid repetition, is not described again.
In step S5031 to step S5033, feature adjustment processing is performed on the initial characteristic data according to the effective time, so that the obtained target signature data have a consistent length, providing a data basis for subsequently improving the accuracy of the valid model.
In one embodiment, as shown in Figure 6, step S505, in which the target signature data corresponding to the training set is input into the original LSTM model for training to obtain the effective LSTM model, specifically includes the following steps:
S5051: initialize the parameters in the original LSTM model.
Specifically, after the original LSTM model is obtained, the parameters in the original LSTM model need to be initialized based on experience; the parameters in this embodiment include weights and biases. Reasonably initializing the weights and biases in the original LSTM model can improve model training efficiency and save training time.
S5052: input the target signature data corresponding to the training set into the original LSTM model for training, and obtain the model output corresponding to each target signature data in the training set.
Wherein, the model output refers to the data output by the output layer of the original LSTM model when the target signature data corresponding to the training set is input into the original LSTM model for training.
S5053: based on the model output, update the parameters in the original LSTM model using the Adam algorithm to obtain the effective LSTM model.
Specifically, after the model output is obtained, a loss function is constructed based on the model output and the action type label corresponding to the target signature data, and the parameters are updated through the Adam algorithm to obtain the effective LSTM model.
In step S5051 to step S5053, the parameters in the original LSTM model are initialized to improve model training efficiency and save training time. The target signature data corresponding to the training set is input into the original LSTM model for training to obtain the model output, and the parameters in the original LSTM model are updated using the Adam algorithm; compared with other adaptive learning rate algorithms, the convergence speed is faster and the learning effect is better.
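A single scalar Adam update step, sketching how a parameter moves opposite the loss gradient as described above. The hyperparameters are the commonly used defaults, an assumption since the text does not state them.

```python
import math

# One Adam update of a scalar parameter. Hyperparameter defaults
# (lr, b1, b2, eps) are the conventional values, assumed here.

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """Update weight w given its loss gradient `grad` at step t >= 1."""
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)  # move against gradient
    return w, m, v

w, m, v = adam_step(0.5, 2.0, 0.0, 0.0, t=1)  # positive gradient -> w shrinks
```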
The neural-network-based body-building action identification method provided by the present application preprocesses the acquired action sequence data to be identified, so as to remove noise data from the action sequence data to be identified, and obtains the characteristic data to be identified corresponding to the action sequence data to be identified, converting the action sequence data to be identified into data computable by a computer language. Then, the characteristic data to be identified is normalized, so that it is converted into a form suitable for the sigmoid function, facilitating subsequent identification by the body-building action recognition model and improving recognition accuracy. Next, the normalization characteristic data is cut into at least one cutting characteristic data, which is sequentially input into the body-building action recognition model to obtain the action type corresponding to each cutting characteristic data, thereby improving the accuracy of the action type. Finally, the corresponding standard feature group data is obtained by action type, and whether each feature to be analyzed in the feature group data to be analyzed meets each standard feature in the corresponding standard feature group data is judged to obtain the feature recognition result, so as to determine according to the feature recognition result whether the specific feature data group is a standard feature, and thereby whether the action sequence data to be identified is standard. This guarantees the accuracy of identifying the action sequence data to be identified and, at the same time, avoids the use of high-cost sensors, saving solution cost.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In one embodiment, a neural-network-based body-building action recognition device is provided, and this neural-network-based body-building action recognition device corresponds one-to-one to the neural-network-based body-building action identification method in the above embodiment. As shown in Figure 7, the neural-network-based body-building action recognition device includes a characteristic-data-to-be-identified acquisition module 10, a characteristic extraction module 20, a normalization processing module 30, a cutting processing module 40, a model identification module 50, a characteristic standard judgment module 60, and an action standard judgment module 70. The detailed description of each functional module is as follows:
The characteristic-data-to-be-identified acquisition module 10 is used to obtain the action sequence data to be identified acquired by the data acquisition device.
The characteristic extraction module 20 is used to perform feature extraction on the action sequence data to be identified using the direction cosine matrix algorithm to obtain posture feature data to be identified, and to take the body-building action sequence data and the posture feature data to be identified as the characteristic data to be identified.
The normalization processing module 30 is used to normalize the characteristic data to be identified to obtain normalization characteristic data.
The cutting processing module 40 is used to perform cutting processing on the normalization characteristic data through the time segmentation algorithm to obtain at least one cutting characteristic data corresponding to the normalization characteristic data.
The model identification module 50 is used to sequentially input the at least one cutting characteristic data into the trained body-building action recognition model in chronological order to obtain the action type corresponding to each cutting characteristic data.
The characteristic standard judgment module 60 is used to obtain corresponding standard feature group data based on the action type, choose a specific feature data group from the characteristic data to be identified through the standard feature group data, and perform standard judgment to obtain the feature recognition result corresponding to the specific feature data group.
The action standard judgment module 70 is used to count the feature recognition results corresponding to all specific feature data groups; when the feature recognition results corresponding to all specific feature data groups are feature standard, the body-building recognition result corresponding to the action sequence data to be identified is action standard.
Further, the characteristic extraction module 20 includes a characteristic-data-to-be-identified preprocessing unit and a posture feature acquisition unit.
The characteristic-data-to-be-identified preprocessing unit is used to preprocess the action sequence data to be identified to obtain effective action sequence data.
The posture feature acquisition unit is used to calculate the effective action sequence data using the direction cosine matrix algorithm to obtain posture feature data to be identified.
Further, the characteristic standard judgment module 60 includes a standard feature group data acquisition unit 61, a characteristic-data-to-be-identified grouping unit 62, a feature-group-data-to-be-analyzed acquisition unit 63, a feature judging unit 64, and a feature recognition result acquisition unit 65.
The standard feature group data acquisition unit 61 is used to obtain standard feature group data corresponding to the action type based on the action type; the standard feature group data includes at least one standard feature and a standard time range, and each standard feature carries a characteristic attribute.
The characteristic-data-to-be-identified grouping unit 62 is used to group the characteristic data to be identified by the characteristic attributes of all standard features in the standard feature group data to obtain a specific feature data group; the specific feature data group carries a time to be identified.
The feature-group-data-to-be-analyzed acquisition unit 63 is used to screen the time to be identified based on the standard time range and take the feature group data to be identified within the standard time range as feature group data to be analyzed; the feature group data to be analyzed includes at least one feature to be analyzed.
The feature judging unit 64 is used to judge whether each feature to be analyzed in the feature group data to be analyzed meets each standard feature in the corresponding standard feature group data.
The feature recognition result acquisition unit 65 is used to determine that the feature recognition result is feature standard if all features to be analyzed in the feature group data to be analyzed meet the corresponding standard features in the corresponding standard feature group data, and that the feature recognition result is feature nonstandard if any feature to be analyzed in the feature group data to be analyzed does not meet the corresponding standard feature in the corresponding standard feature group data.
Further, the neural-network-based body-building action recognition device further includes a raw motion data acquisition module, an initial characteristic data acquisition module, a target signature data acquisition module, a target signature data division module, a model training module, a model testing module, and a body-building action recognition model acquisition module.
The raw motion data acquisition module is used to obtain raw motion data; the raw motion data carries an action type label.
The initial characteristic data acquisition module is used to perform feature extraction on the raw motion data using the direction cosine matrix algorithm to obtain original posture feature data, and to take the raw motion data and the original posture feature data as initial characteristic data.
The target signature data acquisition module is used to perform standard processing on the initial characteristic data to obtain target signature data.
The target signature data division module is used to divide the target signature data into a training set and a test set.
The model training module is used to input the target signature data corresponding to the training set into the original LSTM model for training to obtain the effective LSTM model.
The model testing module is used to input the target signature data corresponding to the test set into the effective LSTM model for testing and take the outputs whose probability value is greater than the preset probability as test outputs.
The body-building action recognition model acquisition module is used to obtain the detection probability that the test outputs are consistent with the action type labels; if the detection probability is not less than the preset probability, the effective LSTM model is taken as the body-building action recognition model.
Further, the target signature data acquisition module includes an effective time determination unit, a validity feature data acquisition unit, and a target signature data acquisition unit.
The effective time determination unit is used to choose, based on the original times of the initial characteristic data corresponding to the same action type label, the original time with the largest value from the original times as the effective time.
The validity feature data acquisition unit is used to perform feature adjustment processing on the initial characteristic data based on the effective time to obtain validity feature data.
The target signature data acquisition unit is used to normalize the validity feature data to obtain target signature data.
Further, the model training module includes a parameter initialization setting unit, a model output acquisition unit, and an effective LSTM model acquisition unit.
The parameter initialization setting unit is used to initialize the parameters in the original LSTM model.
The model output acquisition unit is used to input the target signature data corresponding to the training set into the original LSTM model for training and obtain the model output corresponding to each target signature data in the training set.
The effective LSTM model acquisition unit is used to update the parameters in the original LSTM model using the Adam algorithm based on the model output, so as to obtain the effective LSTM model.
For the specific limitations of the neural-network-based body-building action recognition device, reference may be made to the limitations of the neural-network-based body-building action identification method above; details are not described here again. Each module in the above neural-network-based body-building action recognition device can be realized in whole or in part through software, hardware, and combinations thereof. Each of the above modules can be embedded in hardware form in, or independent of, the processor in the computer device, or stored in software form in the memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device can be a terminal, and its internal structure diagram can be as shown in Figure 8. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external server through a network connection. The computer program, when executed by the processor, realizes a neural-network-based body-building action identification method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor. When the processor executes the computer program, the neural-network-based body-building action identification method of the above embodiment is realized, such as step S10 to step S70 shown in Figure 1 or the steps shown in Figures 2 to 6; to avoid repetition, details are not described here again. Alternatively, when the processor executes the computer program, the functions of each module/unit in the above embodiment of the neural-network-based body-building action recognition device are realized, such as the functions of module 10 to module 70 shown in Figure 7; to avoid repetition, details are not described here again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the neural-network-based body-building action identification method of the above embodiment is realized, such as step S10 to step S70 shown in Figure 1 or the steps shown in Figures 2 to 6; to avoid repetition, details are not described here again. Alternatively, when the computer program is executed by a processor, the functions of each module/unit in the above embodiment of the neural-network-based body-building action recognition device are realized, such as the functions of module 10 to module 70 shown in Figure 7; to avoid repetition, details are not described here again.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program. The computer program can be stored in a non-volatile computer-readable storage medium, and the computer program, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is apparent to those skilled in the art that, for convenience and conciseness of description, only the division of the above functional units and modules is used for illustration. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the device can be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are merely illustrative of the technical solutions of the present invention and do not limit them. Although the present invention has been explained in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements of some of the technical features; and these modifications or replacements, insofar as they do not depart in essence from the spirit and scope of the technical solutions of the embodiments of the present invention, shall all be included within the protection scope of the present invention.
Claims (10)
1. A neural-network-based body-building action identification method, characterized by comprising:
obtaining action sequence data to be identified acquired by a data acquisition device;
performing feature extraction on the action sequence data to be identified using a direction cosine matrix algorithm to obtain posture feature data to be identified, and taking the body-building action sequence data and the posture feature data to be identified as characteristic data to be identified;
normalizing the characteristic data to be identified to obtain normalization characteristic data;
performing cutting processing on the normalization characteristic data through a time segmentation algorithm to obtain at least one cutting characteristic data corresponding to the normalization characteristic data;
sequentially inputting the at least one cutting characteristic data into a trained body-building action recognition model in chronological order to obtain an action type corresponding to each cutting characteristic data;
obtaining corresponding standard feature group data based on the action type, choosing a specific feature data group from the characteristic data to be identified through the standard feature group data and performing standard judgment, and obtaining a feature recognition result corresponding to the specific feature data group;
counting the feature recognition results corresponding to all specific feature data groups, wherein when the feature recognition results corresponding to all the specific feature data groups are feature standard, the body-building recognition result corresponding to the action sequence data to be identified is action standard.
2. The neural-network-based body-building action identification method according to claim 1, characterized in that the performing feature extraction on the action sequence data to be identified using a direction cosine matrix algorithm to obtain posture feature data to be identified comprises:
preprocessing the action sequence data to be identified to obtain effective action sequence data;
calculating the effective action sequence data using the direction cosine matrix algorithm to obtain the posture feature data to be identified.
3. The neural-network-based body-building action recognition method according to claim 1, wherein obtaining corresponding standard feature group data based on the action type, selecting a specific feature data group from the feature data to be identified according to the standard feature group data and performing a standard judgment on it, and obtaining the feature recognition result corresponding to the specific feature data group comprises:
obtaining, based on the action type, standard feature group data corresponding to the action type, the standard feature group data comprising at least one standard feature and a standard time range, each standard feature carrying a feature attribute;
grouping the feature data to be identified according to the feature attributes of all standard features in the standard feature group data to obtain specific feature data groups, each specific feature data group carrying a time to be identified;
screening the times to be identified based on the standard time range, and taking the feature group data to be identified whose time falls within the standard time range as feature group data to be analyzed, the feature group data to be analyzed comprising at least one feature to be analyzed;
judging whether each feature to be analyzed in the feature group data to be analyzed satisfies the corresponding standard feature in the corresponding standard feature group data;
if all features to be analyzed in the feature group data to be analyzed satisfy their corresponding standard features in the corresponding standard feature group data, the feature recognition result is "feature standard"; if any feature to be analyzed in the feature group data to be analyzed does not satisfy its corresponding standard feature in the corresponding standard feature group data, the feature recognition result is "feature nonstandard".
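The per-feature standard judgment can be sketched as a range check, assuming each standard feature specifies an allowed numeric interval for one feature attribute. The dictionary layout and the feature name below are illustrative assumptions, not taken from the patent:

```python
def judge_feature_group(features, standards):
    """features: {name: measured value} for one specific feature data group.
    standards: {name: (low, high)} allowed range for each standard feature.
    Returns "feature standard" only if every standard feature is present
    and its measured value falls inside the allowed range."""
    for name, (low, high) in standards.items():
        if name not in features or not (low <= features[name] <= high):
            return "feature nonstandard"
    return "feature standard"
```

A missing feature is treated here as nonstandard, which is one plausible reading of "does not satisfy the corresponding standard feature".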
4. The neural-network-based body-building action recognition method according to claim 1, further comprising:
obtaining raw motion data, the raw motion data carrying an action type label;
performing feature extraction on the raw motion data using the direction cosine matrix algorithm to obtain original posture feature data, and taking the raw motion data and the original posture feature data as initial feature data;
performing standardization processing on the initial feature data to obtain target feature data;
dividing the target feature data into a training set and a test set;
inputting the target feature data corresponding to the training set into an original LSTM model for training, and obtaining an effective LSTM model;
inputting the target feature data corresponding to the test set into the effective LSTM model for testing, and taking the outputs whose probability values are greater than a preset probability as test outputs;
obtaining the detection probability that the test outputs are consistent with the action type labels; if the detection probability is not less than the preset probability, taking the effective LSTM model as the body-building action recognition model.
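The test-and-accept procedure of claim 4 can be sketched as follows, assuming the model emits a (label, probability) pair per test sample and that the same preset probability serves as both the output filter and the acceptance threshold. This dual use of the threshold is an interpretation of the claim wording, not stated verbatim:

```python
def accept_model(predictions, labels, preset_probability=0.5):
    """predictions: list of (predicted_label, probability) pairs from the
    LSTM on the test set; labels: ground-truth action type labels.
    Keep only outputs whose probability exceeds the preset probability
    (the 'test outputs'), then accept the model if the fraction agreeing
    with the labels (the 'detection probability') is not less than it."""
    tested = [(pred, truth)
              for (pred, prob), truth in zip(predictions, labels)
              if prob > preset_probability]
    if not tested:
        return False
    detection_probability = sum(p == t for p, t in tested) / len(tested)
    return detection_probability >= preset_probability
```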
5. The neural-network-based body-building action recognition method according to claim 4, wherein performing standardization processing on the initial feature data to obtain the target feature data comprises:
selecting, from the original times of the initial feature data corresponding to the same action type label, the largest original time as the effective time;
performing length-adjustment processing on the initial feature data based on the effective time to obtain effective feature data;
normalizing the effective feature data to obtain the target feature data.
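The standardization of claim 5 can be sketched as padding every sequence to the longest duration (the "effective time"), then min-max normalizing to [0, 1]. Zero-padding and min-max scaling are assumptions here; the claim names the adjustment and normalization steps without fixing the exact scheme:

```python
def standardize(sequences):
    """Pad every sequence of scalar features to the length of the
    longest one (the 'effective time'), then min-max normalize all
    values jointly into [0, 1]."""
    effective = max(len(s) for s in sequences)
    padded = [s + [0.0] * (effective - len(s)) for s in sequences]
    flat = [v for s in padded for v in s]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    return [[(v - lo) / span for v in s] for s in padded]
```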
6. The neural-network-based body-building action recognition method according to claim 4, wherein inputting the target feature data corresponding to the training set into the original LSTM model for training and obtaining the effective LSTM model comprises:
initializing the parameters in the original LSTM model;
inputting the target feature data corresponding to the training set into the original LSTM model for training, and obtaining the model output corresponding to each piece of target feature data in the training set;
updating the parameters in the original LSTM model based on the model outputs using the Adam algorithm, and obtaining the effective LSTM model.
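The Adam update named in claim 6 can be illustrated for a single scalar parameter. This is the textbook Adam step with common default hyperparameters; the patent does not specify the values used:

```python
import math

def adam_step(param, grad, state, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter. state holds the running
    first/second moment estimates m, v and the step counter t."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])  # bias-corrected mean
    v_hat = state["v"] / (1 - b2 ** state["t"])  # bias-corrected variance
    return param - lr * m_hat / (math.sqrt(v_hat) + eps)
```

On the first step the bias correction makes the update magnitude approximately equal to the learning rate regardless of the gradient's scale, one reason Adam is a common choice for training LSTMs.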
7. A neural-network-based body-building action recognition device, comprising:
a feature-data-to-be-identified obtaining module, configured to obtain the action sequence data to be identified collected by a data acquisition device;
a feature extraction module, configured to perform feature extraction on the action sequence data to be identified using a direction cosine matrix algorithm to obtain posture feature data to be identified, and to take the action sequence data to be identified and the posture feature data to be identified as feature data to be identified;
a normalization module, configured to normalize the feature data to be identified to obtain normalized feature data;
a segmentation processing module, configured to segment the normalized feature data using a time segmentation algorithm to obtain at least one piece of segmented feature data corresponding to the normalized feature data;
a model recognition module, configured to input the at least one piece of segmented feature data into a trained body-building action recognition model in chronological order, and to obtain the action type corresponding to each piece of segmented feature data;
a feature standard judgment module, configured to obtain corresponding standard feature group data based on the action type, select a specific feature data group from the feature data to be identified according to the standard feature group data, perform a standard judgment on it, and obtain the feature recognition result corresponding to the specific feature data group;
an action standard judgment module, configured to count the feature recognition results corresponding to all specific feature data groups; when the feature recognition results corresponding to all specific feature data groups are "feature standard", the body-building recognition result corresponding to the action sequence data to be identified is "action standard".
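The time segmentation performed by the segmentation module can be sketched as a fixed-window split over a time-stamped feature stream. The window-based approach and all names below are assumptions; the patent does not specify the segmentation algorithm:

```python
def segment_by_time(samples, window):
    """Split a time-stamped stream into consecutive windows of `window`
    seconds; samples are (timestamp, value) pairs in ascending order."""
    segments, current, start = [], [], None
    for t, v in samples:
        if start is None:
            start = t
        if t - start >= window:      # current window is full
            segments.append(current)
            current, start = [], t   # open a new window at this sample
        current.append((t, v))
    if current:
        segments.append(current)     # flush the trailing partial window
    return segments
```

Each returned segment would then be fed to the recognition model in order, matching the "in chronological order" requirement of the model recognition module.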
8. The neural-network-based body-building action recognition device according to claim 7, wherein the feature standard judgment module comprises:
a standard feature group data obtaining unit, configured to obtain, based on the action type, standard feature group data corresponding to the action type, the standard feature group data comprising at least one standard feature and a standard time range, each standard feature carrying a feature attribute;
a feature-data-to-be-identified grouping unit, configured to group the feature data to be identified according to the feature attributes of all standard features in the standard feature group data to obtain specific feature data groups, each specific feature data group carrying a time to be identified;
a feature-group-data-to-be-analyzed obtaining unit, configured to screen the times to be identified based on the standard time range, and to take the feature group data to be identified whose time falls within the standard time range as feature group data to be analyzed, the feature group data to be analyzed comprising at least one feature to be analyzed;
a feature judgment unit, configured to judge whether each feature to be analyzed in the feature group data to be analyzed satisfies the corresponding standard feature in the corresponding standard feature group data;
a feature recognition result obtaining unit, configured to set the feature recognition result to "feature standard" if all features to be analyzed in the feature group data to be analyzed satisfy their corresponding standard features in the corresponding standard feature group data, and to "feature nonstandard" if any feature to be analyzed does not satisfy its corresponding standard feature in the corresponding standard feature group data.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the neural-network-based body-building action recognition method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the neural-network-based body-building action recognition method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910278750.8A CN110163086B (en) | 2019-04-09 | 2019-04-09 | Body-building action recognition method, device, equipment and medium based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110163086A true CN110163086A (en) | 2019-08-23 |
CN110163086B CN110163086B (en) | 2021-07-09 |
Family
ID=67639263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910278750.8A Active CN110163086B (en) | 2019-04-09 | 2019-04-09 | Body-building action recognition method, device, equipment and medium based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163086B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110765939A (en) * | 2019-10-22 | 2020-02-07 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Identity recognition method and device, mobile terminal and storage medium |
CN112001566A (en) * | 2020-09-11 | 2020-11-27 | Chengdu Fiture Technology Co., Ltd. | Fitness training model optimization method, device, equipment and medium |
CN112036291A (en) * | 2020-08-27 | 2020-12-04 | Northeast Electric Power University | Kinematic data model construction method based on motion big data and deep learning |
CN114330575A (en) * | 2021-12-31 | 2022-04-12 | Shenzhen Qian'an Technology Co., Ltd. | Fitness action discrimination method based on a Siamese neural network, and smart sandbag |
CN117666788A (en) * | 2023-12-01 | 2024-03-08 | Shiyou (Beijing) Technology Co., Ltd. | Action recognition method and system based on wearable interaction equipment |
CN117666788B (en) * | 2023-12-01 | 2024-06-11 | Shiyou (Beijing) Technology Co., Ltd. | Action recognition method and system based on wearable interaction equipment |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102024151A (en) * | 2010-12-02 | 2011-04-20 | Institute of Computing Technology, Chinese Academy of Sciences | Training method for a gesture motion recognition model, and gesture motion recognition method |
WO2013056472A1 (en) * | 2011-10-21 | 2013-04-25 | Zhidian Technology (Shenzhen) Co., Ltd. | Drive method for active touch control system |
CN103345627A (en) * | 2013-07-23 | 2013-10-09 | Tsinghua University | Action recognition method and device |
CN104700069A (en) * | 2015-01-13 | 2015-06-10 | Xi'an Jiaotong University | System and method for recognizing and monitoring exercise actions through unbound radio-frequency tags |
CN105809144A (en) * | 2016-03-24 | 2016-07-27 | Chongqing University of Posts and Telecommunications | Gesture recognition system and method using action segmentation |
CN105930767A (en) * | 2016-04-06 | 2016-09-07 | Nanjing Huajie IMI Software Technology Co., Ltd. | Human-skeleton-based action recognition method |
CN106730760A (en) * | 2016-12-06 | 2017-05-31 | Guangzhou Shiyuan Electronic Technology Co., Ltd. | Body-building motion detection method, system, wearable device and terminal |
CN107049324A (en) * | 2016-11-23 | 2017-08-18 | Shenzhen University | Limb motion posture determination method and device |
CN107239767A (en) * | 2017-06-08 | 2017-10-10 | Beijing Niulun Intelligent Technology Co., Ltd. | Mouse behavior recognition method and system |
CN107329563A (en) * | 2017-05-22 | 2017-11-07 | Beijing Hongqi Shengli Technology Development Co., Ltd. | Action type recognition method, device and equipment |
CN107679522A (en) * | 2017-10-31 | 2018-02-09 | Neijiang Normal University | Action recognition method based on multi-stream LSTM |
CN108985157A (en) * | 2018-06-07 | 2018-12-11 | Beijing University of Posts and Telecommunications | Gesture recognition method and device |
Non-Patent Citations (2)
Title |
---|
Wang Yi et al.: "Kinect-based fitness action recognition and evaluation", Computer Science and Application * |
Jia Feifei: "Research on motion state recognition and heart rate detection algorithms based on acceleration and pulse signals", China Master's Theses Full-text Database, Medicine and Health Sciences * |
Also Published As
Publication number | Publication date |
---|---|
CN110163086B (en) | 2021-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163086A (en) | Neural-network-based body-building action recognition method, device, equipment and medium | |
CN109389030A (en) | Facial feature point detection method, apparatus, computer equipment and storage medium | |
CN105631439B (en) | Face image processing method and device | |
CN110245598A (en) | Adversarial sample generation method, device, medium and computing equipment | |
CN107358157A (en) | Face liveness detection method, device and electronic equipment | |
CN107609519A (en) | Facial feature point localization method and device | |
CN108416198A (en) | Human-machine recognition model building device and method, and computer-readable storage medium | |
WO2020215573A1 (en) | Captcha identification method and apparatus, and computer device and storage medium | |
CN108600212A (en) | Threat intelligence credibility discrimination method and device based on multi-dimensional trust features | |
CN108985232A (en) | Face image comparison method, device, computer equipment and storage medium | |
CN102136024B (en) | Biometric feature recognition performance assessment and diagnosis optimization system | |
CN105303179A (en) | Fingerprint identification method and fingerprint identification device | |
CN106778867A (en) | Object detection method and device, and neural network training method and device | |
CN109754012A (en) | Entity semantic relationship classification method, model training method, device and electronic equipment | |
CN107784282A (en) | Object attribute recognition method, apparatus and system | |
CN108875963A (en) | Machine learning model optimization method, device, terminal device and storage medium | |
CN110147732A (en) | Finger vein recognition method, device, computer equipment and storage medium | |
CN109271870A (en) | Pedestrian re-identification method, device, computer equipment and storage medium | |
CN108304842A (en) | Meter reading recognition method, device and electronic equipment | |
CN109086652A (en) | Handwriting model training method, Chinese character recognition method, device, equipment and medium | |
CN108124486A (en) | Cloud-based face liveness detection method, electronic device and program product | |
CN106570516A (en) | Obstacle recognition method using a convolutional neural network | |
CN103679160B (en) | Face recognition method and device | |
CN110222566A (en) | Face feature acquisition method, device, terminal and storage medium | |
CN107958230A (en) | Facial expression recognition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |

Effective date of registration: 20210617 Address after: 518000 room 304, wanwei building, No.5, Gongye 5th Road, Shekou street, Nanshan District, Shenzhen City, Guangdong Province Applicant after: Youpin International Science and Technology (Shenzhen) Co., Ltd. Address before: No.312, block C, 28 xinjiekouwai street, Xicheng District, Beijing Applicant before: Binke Puda (Beijing) Technology Co., Ltd. |

GR01 | Patent grant | ||