CN114783046B - CNN and LSTM-based human body continuous motion similarity scoring method - Google Patents

CNN and LSTM-based human body continuous motion similarity scoring method

Info

Publication number
CN114783046B
CN114783046B (granted publication of application CN202210195641.1A)
Authority
CN
China
Prior art keywords
human body
action
frame
video
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210195641.1A
Other languages
Chinese (zh)
Other versions
CN114783046A (en)
Inventor
谢铭
索帅
董建武
吴林涛
郑博文
王立刚
蔡荣华
胡小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Scistor Technologies Co ltd
Original Assignee
Beijing Scistor Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Scistor Technologies Co ltd filed Critical Beijing Scistor Technologies Co ltd
Priority to CN202210195641.1A priority Critical patent/CN114783046B/en
Publication of CN114783046A publication Critical patent/CN114783046A/en
Application granted granted Critical
Publication of CN114783046B publication Critical patent/CN114783046B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a CNN- and LSTM-based human body continuous action similarity scoring method. Key frames of a standard action video and an action video to be tested are aligned to obtain corresponding action sequence frame sets. For each frame in the standard set and the test set, human body key points are detected with a convolutional neural network, and the resulting key-point coordinate information is converted into human body key angle information. The key angle sequence of each video is fed into a recurrent neural network to obtain that video's continuous-action feature vector. The distance between the feature vectors of the standard video and the video under test is computed and converted into an action similarity score. The invention can provide results for competitions judged against an action standard, and can score in place of, or alongside, the referees of such competitions.

Description

CNN and LSTM-based human body continuous motion similarity scoring method
Technical Field
The invention relates to the field of video data analysis, in particular to a human body continuous action similarity scoring method based on a convolutional neural network and a recurrent neural network.
Background
In many areas of sports, fitness, and dance competition and teaching, athletes or trainees must perform standard movements, and referees or instructors must score how well they complete a set of continuous movements. In practice, a referee needs specialized expertise and must compare the standard continuous action against the performer's set of continuous actions from visual observation and memory. The difficulty is that a single referee with such expertise must score many athletes one by one, which is inefficient.
In addition, video-based teaching in sports-training software has become increasingly popular in recent years; students need to know how their own movements are evaluated, yet judges cannot directly evaluate the movements of the mass of online users.
Disclosure of Invention
The invention provides a human body continuous action similarity scoring method based on a convolutional neural network and a recurrent neural network, which combines the two-dimensional image information of a standard action video and a video to be tested with the temporal information of action continuity, and can effectively assist or replace a referee with professional knowledge in scoring athletic movements.
The invention discloses a CNN and LSTM-based human body continuous motion similarity scoring method, which comprises a data preparation stage, a human body key information extraction stage, a continuous motion characteristic vector extraction stage and a scoring stage.
In the data preparation stage, key frames of the standard action video and the video to be tested are aligned to generate a standard action sequence frame set and a test action sequence frame set.
In the human body key information extraction stage, the coordinates of human action key points in each action frame are extracted by a trained convolutional neural network, and the key-point coordinate information is converted into human key angle information, yielding a key angle set for the standard action sequence and one for the test action sequence.
The continuous action feature vector extraction stage extracts each video's feature vector through a trained recurrent neural network (LSTM).
The scoring stage establishes a distance-score mapping relation, computes the distance between the standard video's feature vector and the test video's feature vector, and completes the final mapping from distance to score according to that relation, yielding the score.
The invention has the advantages that:
(1) Through an intelligent, automated process, the invention completes human continuous-action similarity scoring from only the standard action video and the video to be tested. This effectively reduces the referees' workload, is efficient, and lowers labor cost.
(2) The method is not limited by time or location, and suits both offline and online motion-evaluation scenarios, so its usage restrictions are small.
(3) The invention converts human key-point coordinate information into key angle information. Unlike raw coordinates, key angles are unaffected by body shape, so the information is more accurate and scoring accuracy is effectively improved.
(4) The invention feeds the key angle sequence into a recurrent neural network for feature extraction, which effectively fuses the continuity information of the movement and further improves scoring accuracy.
Drawings
FIG. 1 is a general flowchart of a human body continuous motion similarity scoring method according to the present invention;
FIG. 2 is a flow chart of extracting human body key angle information in the present invention;
FIG. 3 is a schematic diagram illustrating a process of converting coordinate information of key points of a human body into information of key included angles of the human body according to the present invention;
FIG. 4 is a flowchart of the present invention for extracting continuous motion feature vectors;
FIG. 5 is a flowchart of the scoring stage of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the overall process of the human body continuous action similarity scoring method based on the convolutional neural network and the recurrent neural network is divided into four stages: a data preparation stage, a human body key information extraction stage, a continuous action feature vector extraction stage, and a scoring stage.
In the data preparation stage, the video to be tested is key-frame-aligned to the frame count of the standard action video, as follows:
a. Let the video to be tested have N_d frames and the standard action video N_s frames; the frame-count difference is:
N_diff = N_s − N_d    (1)
If N_diff = 0, no frame insertion or deletion is performed.
If N_diff > 0, frame insertion must be performed on the video to be tested: duplicate frame data are inserted into the test frame sequence at the insertion positions given by the patent's formula (rendered as images in the original document), where i denotes the insertion index, i ∈ [1, N_diff].
If N_diff < 0, frame deletion must be performed on the video to be tested, deleting frames at the positions given by the corresponding formula (rendered as an image in the original document).
Finally, the frames of the processed video to be tested and of the standard action video are each stored in order as action sequence frame sets.
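As a concrete illustration of the alignment step, here is a minimal Python sketch. It assumes uniformly spaced insertion/deletion positions, since the patent's exact position formula is rendered as an image in the original and is not recoverable here; `align_frames` is a hypothetical helper name, not from the patent.

```python
import numpy as np

def align_frames(test_frames, n_standard):
    """Align the test video to the standard video's frame count by
    uniformly duplicating (N_diff > 0) or dropping (N_diff < 0) frames.
    Uniform spacing is an assumption; the patent gives its own formula."""
    n_test = len(test_frames)
    n_diff = n_standard - n_test
    if n_diff == 0:
        return list(test_frames)
    # Uniformly spaced (possibly repeated or skipped) indices into the
    # test sequence, one per output frame.
    idx = np.linspace(0, n_test - 1, n_standard)
    return [test_frames[int(round(i))] for i in idx]
```

Both cases reduce to resampling the frame index axis, which keeps the first and last frames fixed.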
In the human body key information extraction stage, as shown in fig. 2, the specific method is as follows:
A. The standard action sequence frame set and the test action sequence frame set produced in the data preparation stage are fed into the trained convolutional neural network.
B. The convolutional neural network extracts single-person human key-point coordinate information from each frame of both sets and stores it in frame order, yielding a key-point coordinate set for the standard action sequence and one for the test action sequence. The human key points are: top of head, upper neck, lower neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, pelvis, left knee, right knee, left ankle, and right ankle.
The convolutional neural network is SimplePose, a single-person key-point extraction network with an encoder-decoder structure. The encoder performs feature coding with a ResNet, a convolutional network with residual connections that extracts image features well. The decoder upsamples the encoder's feature map with three transposed-convolution layers to obtain, for each pixel, the probability that it is a human key point, from which the key-point coordinate positions are finally obtained.
C. The 16 human key-point coordinates of each frame obtained in step B are converted into 11 key angles (angles 1–11). Concretely, three point coordinates determine one angle and its two sides, and the angle's radian value is obtained with an inverse trigonometric function. This yields the standard-sequence and test-sequence human key angle sets. As shown in fig. 3, the left graph shows the key-point coordinate information and the right graph the converted key angle information.
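The three-point angle computation in step C can be sketched as follows. `joint_angle` is a hypothetical helper name; it uses the arccosine of the normalized dot product, one standard choice of inverse trigonometric function consistent with the description.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at vertex b formed by points a-b-c: the two
    sides are the vectors b->a and b->c, and the angle comes from the
    arccosine of their normalized dot product."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

For example, the elbow angle would be `joint_angle(shoulder, elbow, wrist)`; a right angle such as `joint_angle((1, 0), (0, 0), (0, 1))` evaluates to π/2.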
As shown in fig. 4, in the continuous motion feature vector extraction stage, the specific method is as follows:
a. Each frame of the processed standard-sequence and test-sequence human key angle sets is input to the LSTM model in sequence order;
b. For the key angle information of each frame in the standard-sequence and test-sequence key angle sets, the recurrent neural network outputs a 32-dimensional feature vector that also incorporates information from preceding frames. The per-frame 32-dimensional feature vectors are concatenated in order to obtain the standard action sequence feature vector and the test action sequence feature vector, completing the feature extraction of the continuous action.
The LSTM model is a special recurrent neural network structure. An LSTM module's inputs are the input at the current time, the hidden-layer output state at the previous time, and the cell state at the previous time; it outputs the current-time output, the current hidden-layer output state, and the current cell state. The hidden-layer output state mainly carries short-term information while the cell state carries long-term state information, hence the name long short-term memory network. A forget gate, an input gate, and an output gate inside each LSTM module control how information is transmitted through time: the forget gate discards information from the previous cell state, the input gate decides how much of the current input is added to the current cell state, and the output gate decides which information is output at the current time.
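To make the gate mechanics concrete, here is a minimal NumPy LSTM cell. This is a generic textbook formulation, not the patent's trained model; the weights are random, and the 32-dimensional hidden size simply mirrors the per-frame feature size described above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with forget/input/output gates (illustrative
    only; weights are random rather than trained)."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = input_size + hidden_size
        # One stacked weight matrix for the three gates + candidate.
        self.W = rng.standard_normal((4 * hidden_size, k)) * 0.1
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def step(self, x, h, c):
        z = self.W @ np.concatenate([h, x]) + self.b
        H = self.hidden_size
        f = sigmoid(z[:H])          # forget gate: discard old cell info
        i = sigmoid(z[H:2 * H])     # input gate: admit current input
        o = sigmoid(z[2 * H:3 * H]) # output gate: expose cell state
        g = np.tanh(z[3 * H:])      # candidate cell state
        c_new = f * c + i * g       # long-term memory update
        h_new = o * np.tanh(c_new)  # short-term (hidden) output
        return h_new, c_new

def encode_sequence(cell, angles_seq):
    """Run a sequence of 11-angle frames through the cell and
    concatenate the per-frame hidden states, mirroring the per-frame
    feature concatenation described in step b."""
    h = np.zeros(cell.hidden_size)
    c = np.zeros(cell.hidden_size)
    feats = []
    for x in angles_seq:
        h, c = cell.step(np.asarray(x, float), h, c)
        feats.append(h)
    return np.concatenate(feats)
```

With an 11-angle input and 32-dimensional hidden state, a 5-frame sequence yields a 5 × 32 = 160-dimensional concatenated feature vector.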
Training of the LSTM model uses a self-made data set, produced as follows. The 11 key angles shown in fig. 3 form one sample, and each angle is categorized by its range. For example, if angle 2 uses one category per π/6 radians (the step size), angle 2 is divided into the 12 intervals [0, π/6), [π/6, 2π/6), [2π/6, 3π/6), [3π/6, 4π/6), [4π/6, 5π/6), [5π/6, 6π/6), [6π/6, 7π/6), [7π/6, 8π/6), [8π/6, 9π/6), [9π/6, 10π/6), [10π/6, 11π/6), [11π/6, 12π/6), each interval being one angle category, giving 2π/(π/6) = 12 categories. The number of categories per angle is freely configurable: let a_i be the number of categories of the ith angle and β the step length of the interval, so that 2π/a_i = β. If angle 2 is configured with 12 categories, it yields 12 intervals with step 2π/12 = π/6; if angle 1 is configured with 6 categories, it yields 6 intervals with step 2π/6 = π/3. The number of classes in the training data set of a sample is:
N = a_1 × a_2 × … × a_11    (2)
where a_i denotes the number of categories of the ith angle.
A certain amount of training sample data is randomly generated for each class, according to the value range of each angle in that class. For example, for a sample [θ_{x,1}, θ_{x,2}, …, θ_{x,11}], each angle θ_{x,y} is randomly generated and classified by the interval it falls in; since each sample has 11 angles and the 11 per-angle categories are combined, the generated data set contains a_1 × a_2 × … × a_11 different classes of samples.
The self-made data set is used to train the LSTM model as follows: a fully connected layer is added to the output of the LSTM model so that the training model can perform classification, namely classifying a sample [θ_{x,1}, θ_{x,2}, …, θ_{x,11}] of the data set so that the model learns which of the a_1 × a_2 × … × a_11 specific classes the sample belongs to. After LSTM training is completed, the fully connected layer is removed and only the LSTM's ability to output feature vectors is retained, giving the trained LSTM model.
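A sketch of how such a synthetic data set and its class labels could be generated. The mixed-radix combination of per-angle bins into a single class index is an assumption (the patent only states that the 11 per-angle categories are combined), and both helper names are hypothetical.

```python
import numpy as np

def sample_class_id(angles, n_classes_per_angle):
    """Map an angle sample to its overall class index: angle i falls
    into one of a_i bins of width 2*pi/a_i, and the per-angle bins are
    combined into one of prod(a_i) classes via mixed-radix encoding
    (one reasonable realization, assumed here)."""
    cid = 0
    for theta, a_i in zip(angles, n_classes_per_angle):
        bin_idx = min(int(theta / (2 * np.pi / a_i)), a_i - 1)
        cid = cid * a_i + bin_idx
    return cid

def make_samples(n_classes_per_angle, n_samples, seed=0):
    """Randomly generate angle samples in [0, 2*pi) and their labels."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 2 * np.pi,
                    size=(n_samples, len(n_classes_per_angle)))
    y = [sample_class_id(x, n_classes_per_angle) for x in X]
    return X, y
```

With two angles of 4 categories each there are 4 × 4 = 16 classes; the sample (0, π) lands in bins (0, 2) and hence class index 2.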
In the scoring stage, the Euclidean distance between the standard action sequence feature vector and the test action sequence feature vector is calculated, and the mapping from that distance to the final score of the two videos is completed according to the distance-score mapping relation set in advance, yielding the score, as shown in FIG. 5.
The Euclidean distance determining method of the two vectors comprises the following steps: and substituting the obtained characteristic vector of the standard action sequence and the characteristic vector of the action sequence to be detected into an Euclidean distance calculation formula to obtain the Euclidean distance between the two vectors.
The distance-score mapping relation is a mapping between the Euclidean distance of two action videos and a score, built as follows. X groups of videos are given, each group containing one standard action video and one video to be tested: if n sets of standard actions must be scored, n standard action videos are given, and each standard action is paired with m corresponding test videos, producing X = n × m groups: [standard video 1, test video 1_1], …, [standard video i, test video i_j], …, [standard video n, test video n_m], where i ∈ [1, n] and j ∈ [1, m]. Experts in the sport shown in the videos score the action similarity of the X groups from 1 to 100, and the Euclidean distance of each group is then obtained through the flow above. The Euclidean distances of the groups with expert score 100 are found and averaged; this average is called the full-score truncation distance. The Euclidean distances of the groups with expert score 0 are found and averaged; this average is called the zero-score truncation distance. The expert scores and Euclidean distances of the remaining groups are fitted by least squares to a quadratic equation in one variable. The specific scoring mapping relation is:
Score = 100, if D ≤ D_full
Score = A·D² + B·D + C, if D_full < D < D_zero
Score = 0, if D ≥ D_zero    (3)
where A, B, and C are the quadratic, linear, and constant coefficients of the fitted quadratic equation; D is the Euclidean distance; D_full and D_zero are the full-score and zero-score truncation distances; and Score ∈ [0, 100].
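The scoring pipeline (least-squares quadratic fit plus truncation at the full-score and zero-score distances) can be sketched as follows, assuming NumPy's `polyfit` for the least-squares fit; clipping intermediate scores to [0, 100] is an added assumption, and both function names are hypothetical.

```python
import numpy as np

def fit_distance_to_score(distances, scores):
    """Least-squares fit of Score = A*D^2 + B*D + C from the
    intermediate expert-scored video groups (the 100- and 0-score
    groups instead define the truncation distances)."""
    A, B, C = np.polyfit(distances, scores, 2)
    return A, B, C

def score(d, coeffs, d_full, d_zero):
    """Map a Euclidean feature distance to a 0-100 similarity score,
    truncated at the full-score and zero-score distances."""
    if d <= d_full:
        return 100.0
    if d >= d_zero:
        return 0.0
    A, B, C = coeffs
    return float(np.clip(A * d * d + B * d + C, 0.0, 100.0))
```

In use, `d` would be `np.linalg.norm(v_standard - v_test)` over the two continuous-action feature vectors.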

Claims (2)

1. A CNN and LSTM-based human body continuous motion similarity scoring method is characterized in that:
in the data preparation stage, key frame alignment is carried out on the standard action video and the action video to be detected, and a standard action sequence frame set and an action sequence frame set to be detected are generated; in the data preparation stage, the method for aligning the key frames of the standard action video and the action video to be detected comprises the following steps:
a. Let the number of frames of the video to be tested be N_d and the number of frames of the standard action video be N_s; the frame-count difference is:
N_diff = N_s − N_d
If N_diff = 0, no frame insertion or deletion is performed;
If N_diff > 0, frame insertion must be performed on the video to be tested: duplicate frame data are inserted into the test frame sequence at the insertion positions given by the claimed formula (rendered as images in the original document), where i denotes the frame index, i ∈ [1, N_diff];
If N_diff < 0, frame deletion must be performed on the video to be tested, deleting frames at the positions given by the corresponding formula (rendered as an image in the original document);
in the human body key information extraction stage, coordinates of human body action key points in action frames are extracted through a trained convolutional neural network, and the human body key point coordinate information is converted into human body key included angle information to obtain a standard action sequence human body key included angle set and a to-be-detected action sequence human body key included angle set; in the human body key information extraction stage, the method for converting the human body key point coordinate information into the human body key included angle information comprises the following steps: determining an included angle and two edges according to the coordinates of the three points, and obtaining the radian of the included angle by using an inverse trigonometric function;
the continuous action feature vector extraction stage extracts a feature vector of a video through a trained recurrent neural network (LSTM); in the continuous action feature vector extraction stage, a self-made data set is used to train the recurrent neural network LSTM, prepared as follows: all key angles in a picture form one sample, and each angle is categorized by its range; let a_i be the number of categories of the ith angle and β the step length of the interval, so that 2π/a_i = β; the number of classes in the training data set of the final sample is:
a_1 × a_2 × … × a_11
in the continuous action characteristic vector extraction stage, the training method of the recurrent neural network LSTM comprises the following steps: adding a full connection layer to the output of the LSTM model to enable the training model to have the function of finishing classification, wherein the classification refers to classifying a certain sample in a data set; after the LSTM model training is finished, removing the full-connection layer, and only keeping the capability of the LSTM for outputting the characteristic vector, namely obtaining the well-trained LSTM model;
the scoring stage establishes a distance-score mapping relation; at the same time, distance calculation is performed on the standard action video feature vector and the feature vector of the video to be tested, and the final mapping from distance to score is completed according to the distance-score mapping relation, completing the scoring; the distance-score mapping relation is a mapping between the Euclidean distance of two action videos and a score, built as follows: X groups of videos are given, each group comprising a standard action video and an action video to be tested, and experts in the sport shown in the videos score the action similarity of the X groups from 1 to 100; the Euclidean distances of the X groups are then calculated; the Euclidean distances of the groups with expert score 100 are found and averaged, the average being called the full-score truncation distance; the Euclidean distances of the groups with expert score 0 are found and averaged, the average being called the zero-score truncation distance; the expert scores and Euclidean distances of the remaining groups are fitted by least squares to a quadratic equation in one variable, and the distance-score mapping relation is:
Score = 100, if D ≤ D_full
Score = A·D² + B·D + C, if D_full < D < D_zero
Score = 0, if D ≥ D_zero
where A, B, and C are the quadratic, linear, and constant coefficients of the fitted quadratic equation; D is the Euclidean distance; D_full and D_zero are the full-score and zero-score truncation distances; and Score ∈ [0, 100].
2. The human body continuous motion similarity scoring method based on CNN and LSTM as claimed in claim 1, characterized in that: in the human body key information extraction stage, a SimplePose single human body key point extraction network is adopted for human body key point extraction.
CN202210195641.1A 2022-03-01 2022-03-01 CNN and LSTM-based human body continuous motion similarity scoring method Active CN114783046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210195641.1A CN114783046B (en) 2022-03-01 2022-03-01 CNN and LSTM-based human body continuous motion similarity scoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210195641.1A CN114783046B (en) 2022-03-01 2022-03-01 CNN and LSTM-based human body continuous motion similarity scoring method

Publications (2)

Publication Number Publication Date
CN114783046A CN114783046A (en) 2022-07-22
CN114783046B true CN114783046B (en) 2023-04-07

Family

ID=82423039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210195641.1A Active CN114783046B (en) 2022-03-01 2022-03-01 CNN and LSTM-based human body continuous motion similarity scoring method

Country Status (1)

Country Link
CN (1) CN114783046B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018191555A1 (en) * 2017-04-14 2018-10-18 Drishti Technologies. Inc Deep learning system for real time analysis of manufacturing operations
CN108446594B (en) * 2018-02-11 2021-08-06 四川省北青数据技术有限公司 Emergency response capability evaluation method based on action recognition
CN109829442A (en) * 2019-02-22 2019-05-31 焦点科技股份有限公司 A kind of method and system of the human action scoring based on camera

Also Published As

Publication number Publication date
CN114783046A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN108734104B (en) Body-building action error correction method and system based on deep learning image recognition
CN109522850B (en) Action similarity evaluation method based on small sample learning
CN110448870B (en) Human body posture training method
CN110728220A (en) Gymnastics auxiliary training method based on human body action skeleton information
CN109584290A (en) A kind of three-dimensional image matching method based on convolutional neural networks
CN109815930B (en) Method for evaluating action simulation fitting degree
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN113627409B (en) Body-building action recognition monitoring method and system
CN112287891A (en) Method for evaluating learning concentration through video based on expression and behavior feature extraction
CN111709301B (en) Curling ball motion state estimation method
CN112464915B (en) Push-up counting method based on human skeleton point detection
CN111985579A (en) Double-person diving synchronism analysis method based on camera cooperation and three-dimensional skeleton estimation
CN111860157A (en) Motion analysis method, device, equipment and storage medium
CN115482580A (en) Multi-person evaluation system based on machine vision skeletal tracking technology
CN112488047A (en) Piano fingering intelligent identification method
Liao et al. Ai golf: Golf swing analysis tool for self-training
CN114926762A (en) Motion scoring method, system, terminal and storage medium
CN114783046B (en) CNN and LSTM-based human body continuous motion similarity scoring method
CN116844084A (en) Sports motion analysis and correction method and system integrating blockchain
CN115050048B (en) Cross-modal pedestrian re-identification method based on local detail features
CN116386137A (en) Mobile terminal design method for lightweight recognition of Taiji boxing
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
CN112669180B (en) Preschool education method and system based on image recognition
CN113688790A (en) Human body action early warning method and system based on image recognition
CN114092863A (en) Human body motion evaluation method for multi-view video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method for Evaluating the Similarity of Human Continuous Actions Based on CNN and LSTM

Effective date of registration: 20230802

Granted publication date: 20230407

Pledgee: Zhongguancun Beijing technology financing Company limited by guarantee

Pledgor: BEIJING SCISTOR TECHNOLOGIES CO.,LTD.

Registration number: Y2023990000389