CN113893517B - Rope skipping true and false judgment method and system based on difference frame method - Google Patents

Publication number: CN113893517B
Authority: CN (China)
Legal status: Active
Application number: CN202111381904.XA
Other languages: Chinese (zh)
Other versions: CN113893517A
Inventors: 吴友银, 樊康, 吕瑞, 朱绍共
Current and original assignee: Movers Technology Hangzhou Co ltd
Application filed by Movers Technology Hangzhou Co ltd; priority to CN202111381904.XA. First published as CN113893517A; the granted patent was published as CN113893517B.

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63B — APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B 71/0605 — Indicating or scoring devices: decision makers and devices using detection means facilitating arbitration
    • A63B 5/20 — Apparatus for jumping: skipping-ropes or similar devices rotating in a vertical plane
    • G — PHYSICS
    • G06F 18/214 — Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/08 — Computing arrangements based on biological models: neural networks; learning methods


Abstract

The invention discloses a rope skipping true and false judgment method and system based on a difference frame method, belonging to the technical field of moving target detection. The judgment method comprises the following steps: S1, performing inter-frame difference calculation on a plurality of segments of rope-skipping video to obtain, for each video segment, a difference frame image sequence arranged in time order; S2, superimposing all difference frame images in each sequence into one image, and performing model training with the superimposed image associated with each video segment as a sample for training the rope skipping true and false judgment model; and S3, performing inter-frame difference calculation on an acquired real-time rope-skipping video to obtain a difference frame sequence, merging all difference frames in the sequence into one difference frame overlay, and inputting the overlay into the rope skipping true and false judgment model for judgment. By superimposing the difference frame images, the invention enhances the motion features of the swinging rope, and the judgment model trained on the superimposed images achieves higher judgment accuracy.

Description

Rope skipping true and false judgment method and system based on difference frame method
Technical Field
The invention relates to the technical field of moving target detection, in particular to a rope skipping true and false judgment method and system based on a difference frame method.
Background
In rope skipping, when the rope swings too fast it is difficult to distinguish by the naked eye whether a person is skipping with a real rope or performing ropeless (cordless) skipping. Because the rope is thin and long, even existing moving-target detection algorithms struggle to identify a rope swung at high speed accurately. In the prior art, the main approach to detecting whether a skipping rope is real is deep-learning-based target detection: an existing SSD or YOLO model performs target detection on an input rope-skipping image to decide whether a rope is present; if a rope is detected, the skipping is judged true, otherwise false. However, in high-speed rope skipping the rope is highly blurred, and existing SSD or YOLO models can hardly identify it accurately, so the judgment result has low reference value.
Disclosure of Invention
The invention provides a skipping rope true and false judgment method and system based on a difference frame method, aiming at improving the accuracy of skipping rope true and false judgment.
In order to achieve the purpose, the invention adopts the following technical scheme:
the rope skipping true and false judgment method based on the difference frame method comprises the following steps:
s1, performing interframe difference calculation on a plurality of sections of skipping rope videos to obtain a difference frame image sequence which is arranged in time sequence and corresponds to each section of skipping rope video, and recording the difference frame image sequence as Q, Q being p1、p2、…、pi、…、pn,piRepresenting the ith difference frame image in the difference frame time sequence images, wherein i is 1, 2, …, n, and n is the number of the difference frame images in the difference frame image sequence Q;
s2, superposing all the difference frame images in the difference frame image sequence Q into one image, and performing model training by taking each superposed image associated with each section of rope skipping video as a sample for rope skipping true and false judgment model training;
and S3, performing interframe difference calculation on the obtained real-time rope skipping video to obtain a difference frame sequence of the real-time rope skipping video, merging each difference frame in the difference frame sequence into a difference frame overlay, inputting the difference frame overlay into the rope skipping true and false judgment model obtained by training in the step S2 to perform rope skipping true and false judgment, and outputting a judgment result.
As a preferred embodiment of the present invention, n = 20.
As a preferable aspect of the present invention, the difference frame image is a binary image.
As a preferable scheme of the present invention, the neural network structure for training the rope skipping true and false judgment model includes a first convolution layer, a first normalization layer, a first maximum pooling layer, a second convolution layer, a second normalization layer, a second maximum pooling layer, a third convolution layer, a third normalization layer, a third maximum pooling layer, a fourth convolution layer, a fourth normalization layer, a fourth maximum pooling layer, a fifth convolution layer, a fifth normalization layer, a flatten layer, a dropout layer and a dense layer, which are sequentially cascaded, wherein:
the first convolutional layer, the second convolutional layer, the third convolutional layer, the fourth convolutional layer and the fifth convolutional layer are used for extracting image features of the overlay graph as model training samples;
the first normalization layer, the second normalization layer, the third normalization layer, the fourth normalization layer and the fifth normalization layer are respectively used for performing batch normalization processing on the pixels of the feature images output by the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer and the fifth convolution layer;
the first maximum pooling layer, the second maximum pooling layer, the third maximum pooling layer and the fourth maximum pooling layer are respectively used for reducing the dimension of the feature by two-dimensional maximum pooling on the feature graphs output by the first normalization layer, the second normalization layer, the third normalization layer and the fourth normalization layer so as to compress the number of data and parameters;
the flatten layer is used for unidimensionalizing the multidimensional data output by the fifth normalization layer;
the dropout layer is used for reducing model overfitting;
and the dense layer is used as a network output layer and is used for mapping the extracted image features to an output space.
As a preferred scheme of the present invention, the method for training the rope skipping true and false judgment model comprises the steps of:
s21, performing data enhancement on each overlay;
s22, inputting the overlay after data enhancement as a model training sample into the neural network for training to form the rope skipping true and false judgment model;
s23, acquiring the difference frame to obtain an added image, and inputting the added image into the rope skipping true and false judgment model to perform rope skipping true and false judgment;
and S24, calculating the judgment error of the skipping rope true and false judgment model, adjusting the model training parameters according to the error calculation result, returning to the step S21 to perform updating training on the skipping rope true and false judgment model until the termination condition of iterative updating is met, and obtaining the final skipping rope true and false judgment model.
As a preferred aspect of the present invention, the method for enhancing data of the overlay includes any one or more of scaling, translating, rotating, randomly cropping, left flipping, and right flipping the overlay.
As a preferred scheme of the invention, the superposition graph after data enhancement is subjected to pixel normalization and then is used as a sample for training the rope skipping true and false judgment model.
As a preferred scheme of the present invention, the difference frame overlay image is subjected to pixel normalization and then input into the rope skipping true and false judgment model to perform rope skipping true and false judgment.
The invention also provides a rope skipping true and false judgment system based on the difference frame method, which can implement the above rope skipping true and false judgment method based on the difference frame method; the rope skipping true and false judgment system comprises:
the inter-frame difference calculation module is used for performing inter-frame difference calculation on the rope skipping videos to obtain a difference frame image sequence which is arranged according to a time sequence and corresponds to each section of rope skipping video;
the difference frame image overlapping module is connected with the inter-frame difference calculating module and is used for overlapping all difference frame images in the difference frame image sequence into one image and outputting the image;
the model training module is connected with the difference frame image superposition module and used for performing model training by taking each superposition image associated with each section of rope skipping video as a sample of rope skipping true and false judgment model training to obtain a rope skipping true and false judgment model;
the real-time rope skipping video acquisition module is used for acquiring a real-time rope skipping video and inputting the real-time rope skipping video to the interframe difference module;
the inter-frame difference module is also connected with the real-time rope skipping video acquisition module and used for performing inter-frame difference calculation on the acquired real-time rope skipping video to obtain a difference frame sequence of the real-time rope skipping video;
the difference frame image superposition module is further used for merging the difference frames in the difference frame sequence into a difference frame superposition image and outputting the difference frame superposition image;
and the skipping rope true and false judging module is respectively connected with the difference frame image superposition module and the model training module, and is used for judging whether the rope skipping is true or false by utilizing the rope skipping true and false judgment model according to the input difference frame overlay, and outputting a judgment result.
According to the invention, the inter-frame difference calculation is carried out on a plurality of rope skipping videos, a plurality of difference frame images related to the same rope skipping video are superposed into one image, and each superposed image related to each rope skipping video is used as a training sample of the rope skipping true and false judgment model, so that the accuracy of the model for judging the rope skipping true and false is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a diagram illustrating an implementation procedure of a rope skipping true and false determination method based on a difference frame method according to an embodiment of the present invention;
FIG. 2 is a diagram of method steps for training a rope jump truth judgment model;
FIG. 3 is a logic diagram of the present invention for determining whether a jump rope is true or false;
FIG. 4 is a logic diagram of the present invention for training a rope skipping true and false judgment model;
fig. 5 is a schematic diagram of an overlay after superimposing the difference frame image;
fig. 6 is a schematic structural diagram of a rope skipping true and false judgment system based on a difference frame method according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are shown in schematic rather than actual form, and are not to be construed as limiting this patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings, and their descriptions, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if the terms "upper", "lower", "left", "right", "inner", "outer", etc. are used for indicating the orientation or positional relationship based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and the specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" or the like, if appearing to indicate a connection relationship between the components, is to be understood broadly, for example, as being fixed or detachable or integral; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through one or more other components or may be in an interactive relationship with one another. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The inter-frame difference method (difference frame method for short) obtains the contour of a moving target by performing a difference operation on two consecutive frames of a video image sequence. When a moving target appears in the monitored scene, an obvious difference appears between adjacent frames: the two frames are subtracted, the absolute value of the pixel-value difference at each corresponding position is taken, and whether it exceeds a threshold is judged; from this, the motion characteristics of the object in the video or image sequence are analysed.
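As a minimal illustration of the difference-and-threshold step just described (a sketch, not the patent's implementation; the threshold value of 25 is an arbitrary assumption):

```python
import numpy as np

def difference_frame(prev_frame, curr_frame, threshold=25):
    """Binary difference frame: 1 where the absolute pixel difference
    between two consecutive grayscale frames exceeds a threshold."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# two otherwise identical 8-bit frames with one moving bright pixel
prev = np.zeros((5, 5), dtype=np.uint8)
curr = prev.copy()
curr[2, 3] = 200
mask = difference_frame(prev, curr)   # 1 only at the pixel that changed
```

The cast to a signed type before subtracting avoids unsigned-integer wrap-around when the later frame is darker than the earlier one.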
According to the invention, inter-frame difference calculation is performed on the rope-skipping video to obtain binary difference frame images. Superimposing the several difference frame images (binary images) associated with one rope-skipping video into a single image enhances the motion-trail feature of the skipping rope and effectively filters out the background environment, and using the superimposed images associated with the rope-skipping videos as training samples of the rope skipping true and false judgment model strengthens the model's anti-interference capability, thereby solving the problem that the rope-detection accuracy of existing models such as SSD and YOLO is not ideal.
Specifically, as shown in fig. 1 and fig. 3, the method for determining the true or false skipping rope based on the difference frame method according to the embodiment of the present invention includes:
step S1, performing inter-frame difference calculation on a plurality of rope skipping videos to obtain a difference frame image sequence corresponding to each rope skipping video in time sequence, and recording the difference frame image sequence as Q, Q being p1、p2、…、pi、…、pn,piRepresents the ith difference frame image in the difference frame time-series images, i is 1, 2, …, n is the number of difference frame images in the difference frame image sequence Q; n is preferably 20, namely the skipping rope video preferably comprises 21 frames of images, and the difference between the second frame and the first frame is calculated to obtain a difference frame image p1(ii) a The difference between the third frame and the second frame is calculated to obtain a difference frame image p2And by analogy, performing interframe difference calculation on the 21 st frame and the 20 th frame to obtain a difference frame image p20
Step S2, superposing all the difference frame images in the difference frame image sequence Q into one image, and performing model training by taking each superposed image associated with each section of rope skipping video as a sample for rope skipping true and false judgment model training;
the superposition mode of the difference frame image is as follows:
difference frame image p1As a first graph to be superimposed; difference frame image p2Obtaining a second graph after the second graph is overlapped with the first graph; difference (D)Frame image p3Overlapping the second image to obtain a third image, and repeating the steps until the difference frame image p20And the twenty-fourth graph is obtained after being overlapped with the nineteenth graph (the overlapped twenty-fourth graph is shown in the attached figure 5). And taking the finally superposed twentieth graph as a sample for training a rope skipping true and false judgment model. The sample includes positive sample and negative sample, has the rope in the image be the positive sample, does not have the rope be the negative sample.
The neural network structure of the training rope skipping true and false judgment model of the invention is shown in the following table 1, and comprises a first convolution layer, a first normalization layer, a first maximum pooling layer, a second convolution layer, a second normalization layer, a second maximum pooling layer, a third convolution layer, a third normalization layer, a third maximum pooling layer, a fourth convolution layer, a fourth normalization layer, a fourth maximum pooling layer, a fifth convolution layer, a fifth normalization layer, a flatten layer, a dropout layer and a dense layer which are sequentially cascaded,
the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer and the fifth convolution layer are used for extracting image convolution characteristics of an overlay graph serving as a model training sample;
the first normalization layer, the second normalization layer, the third normalization layer, the fourth normalization layer and the fifth normalization layer are respectively used for performing pixel batch normalization processing on feature images output by the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer and the fifth convolution layer so as to accelerate convergence accelerated training;
the first maximum pooling layer, the second maximum pooling layer, the third maximum pooling layer and the fourth maximum pooling layer are respectively used for reducing the dimension of the feature by two-dimensional maximum pooling on the feature graphs output by the first normalization layer, the second normalization layer, the third normalization layer and the fourth normalization layer so as to compress data and parameter quantity;
the flatten layer is used for unidimensionalizing the multidimensional data output by the fifth normalization layer;
dropout layers are used to reduce model overfitting;
and the dense layer is used as a network output layer and is used for mapping the extracted image features onto an output space.
The size of the overlay received by the network input layer is 384 × 384 × 1 ("384" is the length or width of the overlay and "1" is the number of channels). After convolution feature extraction by the first convolution layer, the input overlay yields a feature map of size 384 × 384 × 32 ("384" is the length or width of the feature map and "32" is the number of channels, i.e. 32 convolution kernels of size 3 × 3 extract the image features). The "320" of the first convolution layer is its parameter count, calculated as 3 × 3 × 1 × 32 + 32 (the trailing "32" is the number of bias parameters). Similarly, the shape variables of the other convolution layers, normalization layers and maximum pooling layers below represent (n, h, w, c), where n = None is the number of pictures the layer processes at a time (the batch size), h is the picture height, w is the picture width, and c is the number of channels. An ordinary RGBA picture has 4 channels, but the input to this network is a binary picture, so the channel count at the network input layer is "1". In Table 1 below, "Layer" indicates the operation performed (e.g. convolution, normalization or max pooling), "Output Shape" indicates the shape of the layer's output data, and "Param" indicates the layer's parameter count.
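The quoted shapes and the "320" parameter count can be checked with the standard Conv2D formulas; the 2 × 2 pooling size below is an assumption, since Table 1 is reproduced only as an image:

```python
def conv2d_params(kh, kw, c_in, c_out):
    """Weights (kh*kw*c_in per filter, c_out filters) plus one bias per filter."""
    return kh * kw * c_in * c_out + c_out

def same_conv_shape(h, w, c_out):
    """'same' padding preserves the spatial size of the feature map."""
    return (h, w, c_out)

def maxpool_shape(h, w, c, pool=2):
    """2D max pooling divides height and width by the pool size."""
    return (h // pool, w // pool, c)

first_conv_params = conv2d_params(3, 3, 1, 32)     # 3*3*1*32 + 32 = 320
first_conv_shape = same_conv_shape(384, 384, 32)   # matches the text
after_pool = maxpool_shape(*first_conv_shape)      # assuming 2x2 pooling
```

Running these confirms that a 3 × 3 convolution over a 1-channel 384 × 384 overlay with 32 filters has exactly 320 parameters, consistent with the description above.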
[Table 1 is reproduced as an image in the original publication; it lists each layer of the network with its Output Shape and Param count.]
TABLE 1
Fig. 2 and 4 show a method for training a rope skipping true and false judgment model, which includes:
step S21, performing data enhancement on each overlay, wherein the purpose of the data enhancement is to increase the sample size, and the data enhancement mode comprises any one or more of zooming, translation, rotation, random clipping, left turning and right turning on the overlays;
step S22, inputting the overlay image after data enhancement as a model training sample into the neural network for training to form a rope skipping true and false judgment model;
step S23, obtaining a skipping rope video frame image and inputting the skipping rope video frame image into a skipping rope true and false judgment model for skipping rope true and false judgment; if the model identifies that a rope exists in the rope skipping video frame image, judging that the rope skipping is true, otherwise, judging that the rope skipping is false;
and step S24, calculating the judgment error of the skipping rope true and false judgment model, adjusting the model training parameters according to the error calculation result, returning to step S21 to perform updating training on the skipping rope true and false judgment model until the termination condition of iterative updating is met, and obtaining the final skipping rope true and false judgment model.
In order to eliminate the influence of features of different magnitudes on the training effect, each data-enhanced overlay is preferably pixel-normalized before being used as a training sample for the rope skipping true and false judgment model; normalization makes subsequent data processing more convenient and accelerates the convergence of model training.
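A minimal sketch of the enhancement and pixel-normalization steps (the flip-only augmentation and the division by 255 are illustrative assumptions; the patent lists several other enhancement modes):

```python
import numpy as np

def augment(overlay, rng):
    """One of the listed enhancement modes: random left/right flip of the overlay."""
    return np.fliplr(overlay) if rng.random() < 0.5 else overlay

def normalize(overlay):
    """Pixel normalization: scale 8-bit values into [0, 1] for training/inference."""
    return overlay.astype(np.float32) / 255.0

rng = np.random.default_rng(1)
img = np.array([[0, 255], [128, 64]], dtype=np.uint8)
x = normalize(augment(img, rng))   # values now lie in [0.0, 1.0]
```

The same `normalize` step would also be applied to the difference frame overlay at inference time, so training and judgment see inputs on the same scale.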
Referring to fig. 1, the method for determining whether a skipping rope is true or false according to this embodiment further includes:
and step S3, performing interframe difference calculation on the obtained real-time rope skipping video to obtain a difference frame sequence of the real-time rope skipping video, merging difference frames in the difference frame sequence into a difference frame overlay, inputting the difference frame overlay into the rope skipping true and false judgment model obtained by training in the step S2, judging whether the rope skipping is false or not by the model according to the input difference frame overlay, and outputting a judgment result.
In order to increase the speed of model judgment, the difference frame overlay image is preferably subjected to pixel normalization and then input into the rope skipping true and false judgment model for rope skipping true and false judgment.
In conclusion, by performing inter-frame difference calculation on a plurality of rope-skipping videos, superimposing the several difference frame images associated with the same rope-skipping video into one image, and using the superimposed image associated with each rope-skipping video as a training sample of the rope skipping true and false judgment model, the accuracy of the model in judging whether the rope skipping is true or false is greatly improved.
The present invention further provides a system for determining the true or false of rope skipping based on the difference frame method, which can implement the method for determining the true or false of rope skipping based on the difference frame method, as shown in fig. 6, the system includes:
the inter-frame difference calculation module is used for performing inter-frame difference calculation on the rope skipping videos to obtain a difference frame image sequence which is arranged according to a time sequence and corresponds to each section of rope skipping video;
the difference frame image overlapping module is connected with the inter-frame difference calculating module and is used for overlapping all difference frame images in the difference frame image sequence into one image and outputting the image;
the model training module is connected with the difference frame image superposition module and used for performing model training by taking each superposition image associated with each section of rope skipping video as a sample of rope skipping true and false judgment model training to obtain a rope skipping true and false judgment model;
the real-time rope skipping video acquisition module is used for acquiring a real-time rope skipping video and inputting the real-time rope skipping video to the interframe difference module;
the inter-frame difference module is also connected with the real-time rope skipping video acquisition module and is used for performing inter-frame difference calculation on the acquired real-time rope skipping video to obtain a difference frame sequence of the real-time rope skipping video;
the difference frame image superposition module is also used for combining all the difference frames in the difference frame sequence into a difference frame superposition image and outputting the difference frame superposition image;
and the skipping rope true and false judging module is respectively connected with the difference frame image superposition module and the model training module and is used for judging whether skipping ropes are false or not by utilizing the skipping rope true and false judging model according to the input difference frame superposition image and outputting a judging result.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and the technical principles applied thereto. It will be understood by those skilled in the art that various modifications, equivalents, changes, and the like can be made to the present invention. However, such variations are within the scope of the invention as long as they do not depart from the spirit of the invention. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.

Claims (9)

1. A rope skipping true and false judgment method based on a difference frame method is characterized by comprising the following steps:
S1, performing inter-frame difference calculation on a plurality of rope skipping video segments to obtain, for each video segment, a difference frame image sequence arranged in time order, recorded as Q = p1, p2, …, pi, …, pn, where pi denotes the i-th difference frame image in the sequence, i = 1, 2, …, n, and n is the number of difference frame images in the difference frame image sequence Q;
S2, superposing all the difference frame images in the difference frame image sequence Q into a single image, and performing model training using the superposed image associated with each rope skipping video segment as a sample for training the rope skipping true and false judgment model;
and S3, performing inter-frame difference calculation on an acquired real-time rope skipping video to obtain its difference frame sequence, merging all the difference frames in that sequence into a difference frame overlay, inputting the overlay into the rope skipping true and false judgment model trained in step S2 for rope skipping true and false judgment, and outputting the judgment result.
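As an illustration only (not part of the claim), steps S1 and S2 can be sketched in Python with NumPy; the binarization threshold and the OR-style superposition are assumptions, since the claims specify only that binary difference frames are computed and merged into a single image.

```python
import numpy as np

def difference_frames(frames, threshold=30):
    """Step S1 sketch: binarized inter-frame differences for a list of
    grayscale frames. `threshold` is an assumed binarization level; the
    patent does not specify one."""
    diffs = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        d = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        diffs.append((d > threshold).astype(np.uint8))  # binary difference frame
    return diffs

def superpose(diffs):
    """Step S2/S3 sketch: merge all difference frames into one overlay.
    A pixel is set if it changed in any frame pair (logical OR)."""
    overlay = np.zeros_like(diffs[0])
    for d in diffs:
        overlay |= d
    return overlay * 255  # scale back to 0/255 for display or model input
```

The same `superpose` output serves both as a training sample in S2 and, for a real-time video, as the model input of S3.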
2. The rope skipping true and false judgment method based on the difference frame method according to claim 1, wherein n is 20.
3. The rope skipping true and false judgment method based on the difference frame method as claimed in claim 1, wherein the difference frame image is a binary image.
4. The rope skipping true and false judgment method based on the difference frame method according to any one of claims 1 to 3, wherein the neural network used to train the rope skipping true and false judgment model comprises a first convolution layer, a first normalization layer, a first maximum pooling layer, a second convolution layer, a second normalization layer, a second maximum pooling layer, a third convolution layer, a third normalization layer, a third maximum pooling layer, a fourth convolution layer, a fourth normalization layer, a fourth maximum pooling layer, a fifth convolution layer, a fifth normalization layer, a flatten layer, a dropout layer and a dense layer, which are sequentially cascaded, wherein
the first, second, third, fourth and fifth convolution layers are used to extract image features from the overlay images serving as model training samples;
the first normalization layer, the second normalization layer, the third normalization layer, the fourth normalization layer and the fifth normalization layer are respectively used for performing batch normalization processing on the pixels of the feature images output by the first convolution layer, the second convolution layer, the third convolution layer, the fourth convolution layer and the fifth convolution layer;
the first, second, third and fourth maximum pooling layers are respectively used to apply two-dimensional max pooling to the feature maps output by the first, second, third and fourth normalization layers, reducing the feature dimensions so as to compress the amount of data and the number of parameters;
the flatten layer is used to flatten the multidimensional data output by the fifth normalization layer into one dimension;
the dropout layer is used for reducing model overfitting;
and the dense layer serves as the network output layer, mapping the extracted image features to the output space.
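For illustration, the layer stack of claim 4 can be sketched with the Keras functional API (the claim's flatten/dropout/dense vocabulary matches that API). The filter counts, kernel sizes, 0.5 dropout rate, input size and single sigmoid output are assumptions; the claim fixes only the layer order.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(224, 224, 1)):
    """Sketch of the claimed cascade: (conv -> batch-norm -> max-pool) x 4,
    then conv5 -> batch-norm5 -> flatten -> dropout -> dense."""
    inp = keras.Input(shape=input_shape)
    x = inp
    for filters in (32, 64, 128, 256):              # assumed filter counts
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)           # batch normalization
        x = layers.MaxPooling2D(2)(x)                # 2D max pooling
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)  # fifth conv
    x = layers.BatchNormalization()(x)               # fifth normalization layer
    x = layers.Flatten()(x)                          # flatten to one dimension
    x = layers.Dropout(0.5)(x)                       # reduce overfitting
    out = layers.Dense(1, activation="sigmoid")(x)   # true/false output (assumed)
    return keras.Model(inp, out)
```

Note there is no pooling layer after the fifth convolution: the fifth normalization output feeds the flatten layer directly, as the claim describes.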
5. The rope skipping true and false judgment method based on the difference frame method according to claim 4, wherein the method for training the rope skipping true and false judgment model comprises the following steps:
S21, performing data enhancement on each overlay;
S22, inputting the overlays after data enhancement into the neural network as model training samples for training, so as to form the rope skipping true and false judgment model;
S23, acquiring a difference frame overlay and inputting it into the rope skipping true and false judgment model for rope skipping true and false judgment;
and S24, calculating the judgment error of the rope skipping true and false judgment model, adjusting the model training parameters according to the error calculation result, returning to the step S21 to update and train the rope skipping true and false judgment model until the termination condition of iterative updating is met, and obtaining the final rope skipping true and false judgment model.
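The iterative loop S21 to S24 corresponds closely to a standard supervised training loop. A minimal sketch with synthetic stand-in data follows; the tiny network, optimizer, loss and epoch count are all assumptions, and the S21 augmentation step is omitted here for brevity.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data: 16 synthetic 32x32 overlay images with true/false labels.
x = np.random.rand(16, 32, 32, 1).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))

# Deliberately tiny stand-in network, not the claimed architecture.
model = keras.Sequential([
    keras.Input((32, 32, 1)),
    layers.Conv2D(8, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# S22/S24: each epoch of fit() runs a forward pass, computes the judgment
# error (loss), and updates the parameters, repeating until the iteration
# budget (the termination condition of the claim) is met.
history = model.fit(x, y, epochs=2, batch_size=8, verbose=0)
```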
6. The rope skipping true and false judgment method based on the difference frame method according to claim 5, wherein the data enhancement of the overlays comprises any one or more of scaling, translation, rotation, random cropping, and left or right flipping of the overlay.
7. The rope skipping true and false judgment method based on the difference frame method according to claim 5, wherein the data-enhanced overlay is subjected to pixel normalization before being used as a sample for training the rope skipping true and false judgment model.
8. The rope skipping true and false judgment method based on the difference frame method according to claim 1, wherein the difference frame overlay is pixel-normalized before being input into the rope skipping true and false judgment model for rope skipping true and false judgment.
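A minimal sketch of the augmentation (claim 6) and pixel normalization (claims 7 and 8) steps, assuming grayscale overlay images. Only the flip and random-crop augmentations are shown, and scaling pixel values to [0, 1] is an assumed form of the otherwise unspecified "pixel normalization".

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """One random augmentation pass (illustrative subset of claim 6:
    left/right flip, plus a random crop padded back to the original size)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                       # left/right flip
    h, w = img.shape
    # random crop: drop up to ~10% of rows/columns, then zero-pad back
    dh = rng.integers(0, h // 10 + 1)
    dw = rng.integers(0, w // 10 + 1)
    return np.pad(img[dh:, dw:], ((dh, 0), (dw, 0)))

def normalize(img):
    """Assumed pixel normalization: scale 0-255 values into [0, 1]."""
    return img.astype(np.float32) / 255.0
```

Per claim 7, `normalize(augment(img))` would be applied to training samples; per claim 8, `normalize(img)` alone to the real-time overlay at inference time.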
9. A rope skipping true and false judgment system based on the difference frame method, capable of implementing the rope skipping true and false judgment method based on the difference frame method according to any one of claims 1-7, characterized by comprising:
the inter-frame difference calculation module is used for performing inter-frame difference calculation on the rope skipping videos to obtain a difference frame image sequence which is arranged according to a time sequence and corresponds to each section of rope skipping video;
the difference frame image overlapping module is connected with the inter-frame difference calculating module and is used for overlapping all difference frame images in the difference frame image sequence into one image and outputting the image;
the model training module is connected with the difference frame image superposition module and is used to train the rope skipping true and false judgment model, taking the superposed image associated with each rope skipping video segment as a training sample, so as to obtain the rope skipping true and false judgment model;
the real-time rope skipping video acquisition module is used for acquiring a real-time rope skipping video and inputting the real-time rope skipping video to the inter-frame difference calculation module;
the inter-frame difference calculation module is also connected with the real-time rope skipping video acquisition module and is used for performing inter-frame difference calculation on the acquired real-time rope skipping video to obtain a difference frame sequence of the real-time rope skipping video;
the difference frame image superposition module is further used for merging the difference frames in the difference frame sequence into a difference frame superposition image and outputting the difference frame superposition image;
and the rope skipping true and false judgment module is connected with the difference frame image superposition module and the model training module respectively, and is used to judge, according to the input difference frame overlay and using the rope skipping true and false judgment model, whether the rope skipping is genuine or faked, and to output the judgment result.
CN202111381904.XA 2021-11-22 2021-11-22 Rope skipping true and false judgment method and system based on difference frame method Active CN113893517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111381904.XA CN113893517B (en) 2021-11-22 2021-11-22 Rope skipping true and false judgment method and system based on difference frame method


Publications (2)

Publication Number Publication Date
CN113893517A CN113893517A (en) 2022-01-07
CN113893517B true CN113893517B (en) 2022-06-17

Family

ID=79194865

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111381904.XA Active CN113893517B (en) 2021-11-22 2021-11-22 Rope skipping true and false judgment method and system based on difference frame method

Country Status (1)

Country Link
CN (1) CN113893517B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116718791B (en) * 2023-04-13 2024-04-26 东莞市杜氏诚发精密弹簧有限公司 Method, device, system and storage medium for detecting rotation speed of torque spring

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001346901A (en) * 2000-06-07 2001-12-18 Matsushita Electric Works Ltd Rope skipping exercise device
WO2002007826A1 (en) * 2000-07-22 2002-01-31 Min Keon Dong Jump rope having function of calory and fat exhaustion amount measurement
US6406407B1 (en) * 2000-10-30 2002-06-18 Pamela Dean Wiedmann Jump rope device
CN103235928A (en) * 2013-01-08 2013-08-07 沈阳理工大学 Gait recognition method with monitoring mechanism
KR20190142166A (en) * 2018-06-16 2019-12-26 모젼스랩(주) Virtual content providing system for rope skipping based on motion analysis
CN111292353A (en) * 2020-01-21 2020-06-16 成都恒创新星科技有限公司 Parking state change identification method
CN112464808A (en) * 2020-11-26 2021-03-09 成都睿码科技有限责任公司 Rope skipping posture and number identification method based on computer vision
CN112883930A (en) * 2021-03-29 2021-06-01 动者科技(杭州)有限责任公司 Real-time true and false motion judgment method based on full-connection network
CN112883931A (en) * 2021-03-29 2021-06-01 动者科技(杭州)有限责任公司 Real-time true and false motion judgment method based on long and short term memory network
CN113111842A (en) * 2021-04-26 2021-07-13 浙江商汤科技开发有限公司 Action recognition method, device, equipment and computer readable storage medium
CN113537110A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 False video detection method fusing intra-frame and inter-frame differences
CN113627340A (en) * 2021-08-11 2021-11-09 广东沃莱科技有限公司 Method and equipment capable of identifying rope skipping mode

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101842808A (en) * 2007-11-16 2010-09-22 电子地图有限公司 Method of and apparatus for producing lane information


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis of the influence of freestyle rope skipping on the physical health of college students majoring in physical education; Guo Shuling; Journal of Jiujiang University (Natural Science Edition); 2019-12-20 (No. 04); full text *


Similar Documents

Publication Publication Date Title
CN107578418B (en) Indoor scene contour detection method fusing color and depth information
WO2020108362A1 (en) Body posture detection method, apparatus and device, and storage medium
CN111784633B (en) Insulator defect automatic detection algorithm for electric power inspection video
CN110381268B (en) Method, device, storage medium and electronic equipment for generating video
CN110689482A (en) Face super-resolution method based on supervised pixel-by-pixel generation countermeasure network
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN112614136B (en) Infrared small target real-time instance segmentation method and device
CN113591968A (en) Infrared weak and small target detection method based on asymmetric attention feature fusion
CN114742799B (en) Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network
CN111402237A (en) Video image anomaly detection method and system based on space-time cascade self-encoder
CN111079539A (en) Video abnormal behavior detection method based on abnormal tracking
CN111369548A (en) No-reference video quality evaluation method and device based on generation countermeasure network
CN113893517B (en) Rope skipping true and false judgment method and system based on difference frame method
CN113516126A (en) Adaptive threshold scene text detection method based on attention feature fusion
CN111401368B (en) News video title extraction method based on deep learning
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
CN114359333A (en) Moving object extraction method and device, computer equipment and storage medium
Zhang et al. Multi-prior driven network for rgb-d salient object detection
CN112700409A (en) Automatic retinal microaneurysm detection method and imaging method
CN112749671A (en) Human behavior recognition method based on video
CN114120076B (en) Cross-view video gait recognition method based on gait motion estimation
CN107766838B (en) Video scene switching detection method
CN116091793A (en) Light field significance detection method based on optical flow fusion
CN116311423A (en) Cross-attention mechanism-based multi-mode emotion recognition method
CN112532999B (en) Digital video frame deletion tampering detection method based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: The Method and System of True and False Judgment of Rope Skipping Based on Difference Frame Method

Effective date of registration: 20221129

Granted publication date: 20220617

Pledgee: Hangzhou branch of Zhejiang Tailong Commercial Bank Co.,Ltd.

Pledgor: Movers Technology (Hangzhou) Co.,Ltd.

Registration number: Y2022330003332