WO2022116860A1 - Swimmer performance analysis system - Google Patents
Swimmer performance analysis system
- Publication number
- WO2022116860A1 (PCT/CN2021/132091)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- swimmer
- analysis system
- performance analysis
- swimming
- analyzer
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Abstract
A swimmer performance analysis system is based on computer vision, including image processing, object detection and pose estimation. It captures images of the swimmer, recognizes the swimmer's swimming style and analyzes the swimmer's posture while calculating the speed and angles of the swimmer's body parts. The analyzed information is then fed back to a coach or the swimmer.
Description
This international patent application claims the benefit of U.S. Provisional Patent Application No. 63/120,526, filed on December 02, 2020, the entire content of which is incorporated by reference for all purposes.
The present invention relates generally to systems for analyzing the performance of swimmers, and more particularly to a swimmer performance analysis system based on computer vision.
In many prior swimming analysis systems, the swimmers are required to wear some form of customized device, such as a signal generator or sensor, which is not only uncomfortable but also expensive. Further, the accuracy of the system depends heavily on the device; generally, the more expensive the device, the more accurate it is, so not everyone can afford an accurate one. In addition, wearable devices are easily lost, and are prone to damage when worn incorrectly.
Another limitation of the prior art is that the customized devices can analyze only one swimmer at a time. If there is more than one swimmer in the swimming pool, the system must analyze them one by one.
The pose or posture of a swimmer in the water can have a significant effect on the swimmer's swimming efficiency. A swimming pose is the position of the swimmer's body, head, arms, hands and legs in the water during the various phases of a swimming stroke. Thus, many different criteria make up a swimming pose, and the customized devices cannot recognize and classify them automatically. As a result, there are presently few swimming pose analysis systems on the market.
The prior pose analysis systems also require hardware support. In particular, swimmers need to align worn equipment, such as hand rings, foot rings or waist belts, with the analysis system. See, for example, Korean Patent 101860132 B1, China Patent 108452504 B and US Patent No. 9,216,341. These pose analysis arrangements have several disadvantages. For example, if the swimmer forgets to put on the equipment, loses the equipment, or if the equipment is damaged, the swimmer is prevented from performing self-swimming performance analysis normally. In addition, the weight of the worn equipment may affect the swimmer's normal performance.
An optimal exercise program determination system is disclosed in Japan Patent 2014078145A. It relies on calculating the area covered by the human body present in a photo, and then compares it with a standard value to determine the pose performance of the user. The shortcoming of this system is that it lacks precision in determining the pose position.
SUMMARY OF THE INVENTION
The present invention overcomes the problems of the prior art with a swimmer performance analysis system that includes posture analysis and is based purely on software analysis of camera images. Swimmers do not need to wear any special equipment, and there is no need for prior photographs. Instead, the system relies only on ordinary cameras above and beneath the water. Human pose is believed to be essential in this scenario: by judging the pose of swimmers, a distinction can easily be made between different strokes such as the breaststroke and butterfly stroke. With the present invention, the swimmer's posture is analyzed in different stages, including single-swimmer analysis and multiple-swimmer analysis.
The system includes three (3) sub-systems, i.e., the camera system, posture analysis system, and feedback system.
Multiple cameras, e.g., four (4), are installed on the ceiling of the pool area and are arranged to monitor an entire swimming lane. When a participant begins to swim, the detection system captures the swimmer's position, extracts the swimmer's body skeleton from the image and continues tracking the participant for the rest of the swim. Then, based on the swimmer's position and the position of the swimmer's skeleton, the analysis system performs calculations to meet the swimmer's requirements, such as speed, angle or even calorie cost.
In analyzing swimming performance, human pose is believed to be essential. By judging a swimmer's pose, different strokes such as the breaststroke and butterfly stroke can be distinguished. The present system analyzes the swimmer's posture in different stages: single-swimmer analysis and multiple-swimmer analysis.
Users can be assigned their own IDs so they can log in to the computer system to get their swimming style analysis, including the speeds of their hands and feet, the distance between the hands, etc., as well as their overall swimming speed and distance. Apart from the computer, the user can employ a mobile app to capture the swimmers' postures and view the swimmer's performance analysis.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing (s) will be provided by the Office upon request and payment of the necessary fee.
The foregoing and other objects and advantages of the present invention will become more apparent when considered in connection with the following detailed description and appended drawings in which like designations denote like elements in the various views, and wherein:
FIG. 1 is a view of a swimming pool with a swimming lane covered by ceiling mounted cameras according to the present invention;
FIG. 2 is a block diagram of the swimmer performance analysis system of the present invention;
FIG. 3 is a graph and overhead lane view of speed distribution analysis according to the present invention;
FIG. 4 is a set of graphs showing stroke training according to the present invention, wherein FIG. 4A shows a slow move with a small extended stroke, FIG. 4B shows a fast move with a large extended stroke, FIG. 4C shows a slow move but with a large extended stroke and FIG. 4D shows a fast move but with a small extended stroke;
FIG. 5 illustrates the images for a slow-motion tracing playback of a moving swimmer according to the present invention, wherein FIG. 5A shows a runner on a treadmill, illustrating motion at a fixed point, and FIG. 5B shows the screen of a display with the swimmer at a fixed point in the center of the screen;
FIG. 6 is a flow chart for object detection according to the present invention;
FIG. 7 illustrates concepts of different object detection processes;
FIG. 8 is a block diagram of the inception module that is the core component of GoogLeNet, wherein FIG. 8A shows the naive version and FIG. 8B shows the version with dimension reductions; and
FIG. 9 shows the structure of MobileNets.
The present invention provides a swimmer performance analysis system that includes posture analysis. The system consists of three (3) sub-systems: the camera system, the posture analysis system, and the feedback system.
Camera System
As shown in FIG. 1, multiple cameras, e.g., four (4), monitor an entire swimming lane, which is called the "smart lane." Swimmers swimming in this lane are captured and analyzed. The whole system includes several basic training parts, such as speed analysis, stroke training and slow-motion tracing.
As shown in FIG. 2, part of the speed distribution analysis utilizes the cameras, which may be CCTV cameras 10, to generate images under the control of a CCTV controller 12. The images may be displayed on a monitor 14 and saved to the storage or memory of a computer 16. As shown in FIG. 2, the images are of an overhead view of the smart lane. Based on the stored images, the computer, running appropriate software, reconstructs the images of the lane to measure the speed distribution of a moving swimmer, not just the average speed. As a result, during a swimming drill not only is the average speed determined but also the acceleration during different parts of the swim, which is part of swim strategy. Typical acceleration strategies are shown in the graphs of FIG. 3.
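The speed distribution idea above can be sketched in a few lines. This is an illustrative sketch, not the patent's actual implementation: it assumes the tracker already yields the swimmer's centroid in pool coordinates (metres) at a known frame rate.

```python
# Sketch: per-frame speed distribution of a swimmer from tracked
# centroid positions (assumed to be in metres) at a fixed frame rate.

def speed_profile(positions, fps):
    """positions: list of (x, y) centroids, one per frame.
    Returns instantaneous speeds (m/s) between consecutive frames."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append(dist * fps)  # metres per frame -> metres per second
    return speeds

def average_speed(speeds):
    """Overall average, the single number prior systems report."""
    return sum(speeds) / len(speeds) if speeds else 0.0
```

The full list returned by `speed_profile` is the "speed distribution" the text refers to; changes between its segments indicate acceleration during different parts of the swim.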
For stroke training, each of the ceiling cameras 10 monitors the leg and arm performance of a swimmer as though at a fixed point. The system calculates the stroke frequency and shows the strokes of the swimmer. As shown in FIG. 4, the stroke training can be broken down into a slow move with a small extended stroke (FIG. 4A), a fast move with a large extended stroke (FIG. 4B), a slow move but with a large extended stroke (FIG. 4C) and a fast move but with a small extended stroke (FIG. 4D).
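One plausible way to compute the stroke frequency mentioned above is to count local maxima in a tracked wrist keypoint's vertical trajectory. The peak-counting heuristic below is an assumption for illustration, not the patent's stated method.

```python
# Illustrative sketch: stroke frequency from a wrist keypoint's
# per-frame vertical position. Peak counting is an assumed heuristic.

def count_strokes(wrist_y):
    """Count strict local maxima in the wrist's vertical trajectory."""
    peaks = 0
    for prev, cur, nxt in zip(wrist_y, wrist_y[1:], wrist_y[2:]):
        if cur > prev and cur > nxt:  # strict local maximum
            peaks += 1
    return peaks

def stroke_frequency(wrist_y, fps):
    """Strokes per second over the observed window."""
    duration = (len(wrist_y) - 1) / fps
    return count_strokes(wrist_y) / duration if duration > 0 else 0.0
```

In practice the trajectory would need smoothing before peak counting; this sketch omits that step for brevity.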
For slow-motion tracing, the images of a moving swimmer stored in computer 16 can be played back. A "you only look once" (YOLO) object detection system can be used to trace the swimmer and the swimmer's stroke during slow-motion playback. The computer keeps the image of the swimmer in the middle of the screen, as if tracking and tracing the swimmer with a moving camera. It is equivalent to a runner on a treadmill, as shown in FIG. 5.
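The "fixed point" playback can be emulated by cropping each frame so the detected swimmer stays centred. The sketch below assumes the detector supplies the swimmer's centre (cx, cy) per frame; the clamping detail is an implementation assumption.

```python
# Sketch: choose a crop window that keeps the detected swimmer centred,
# emulating a camera that follows the swimmer along the lane.

def centered_crop_window(frame_w, frame_h, cx, cy, crop_w, crop_h):
    """Return (left, top) of a crop_w x crop_h window centred on the
    swimmer at (cx, cy), clamped so the window stays inside the frame."""
    left = min(max(int(cx - crop_w / 2), 0), frame_w - crop_w)
    top = min(max(int(cy - crop_h / 2), 0), frame_h - crop_h)
    return left, top
```

Applying this window to every playback frame produces the treadmill-like view of FIG. 5B.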
Apart from the computer, the swimmer or coach can also be provided with a mobile app that can be used to capture the swimmers' postures and to perform analysis.
Posture Analysis System
In implementing the posture analysis part of the system, the images collected by the cameras must be analyzed, and objects in the images, e.g., parts of the swimmer's body, must be detected and analyzed. As shown in FIG. 6, objects in the image 20 are detected by an object detection device in step 22. Each object is tracked through several images and compared to various models in step 24 to identify it. The identified object yields the pose estimation at step 26. The pose estimation image data is next passed to a convolutional neural network (CNN) classifier in step 28. A CNN is a class of deep learning neural network commonly used to analyze visual imagery. It classifies the images into freestyle 23, breaststroke 25, backstroke 27, etc. Each, e.g., breaststroke 25, can be analyzed to determine performance, i.e., how close the user's swimming is to an ideal breaststroke.
Object Detection
As illustrated in FIG. 7, there are a number of different object detection methods. Classic deep learning models can be separated into 2-stage models, such as R-CNN, Fast R-CNN and Faster R-CNN, and 1-stage models, such as YOLO and SSD. Other categories of object detection work include benchmarks, "bells and whistles" refinements and new trends.
The R-CNN method abstracts detection into two processes. In the first process, a number of regions that may contain objects are proposed based on local croppings of the picture, called Region Proposals; the Selective Search algorithm is used to generate these proposals. The second process runs the best-performing classification network (AlexNet) on each region to obtain the category of the objects in that region.
In using R-CNN, the first step is data preparation. Before entering the CNN, each proposed Region Proposal must be labeled according to the Ground Truth. The indicator used is IoU (Intersection over Union): IoU is the ratio of the area of the intersection of two regions to the area of their union, and it describes the degree of overlap of the two regions.
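The IoU measure just described can be computed directly for two axis-aligned boxes. A minimal sketch, using the common (x1, y1, x2, y2) corner convention:

```python
# IoU: intersection area over union area of two axis-aligned boxes
# given as (x1, y1, x2, y2) with x2 > x1 and y2 > y1.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty, hence the max(0, ...)).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A proposal is typically labeled positive when its IoU with some Ground Truth box exceeds a threshold (e.g. 0.5) and negative below a lower threshold; the exact thresholds vary by method.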
Another point is Bounding-Box Regression. This process adjusts the Region Proposal toward the Ground Truth. A log/exp transformation is applied to keep the loss at a reasonable level, which can be regarded as a kind of standardization or normalization operation.
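The log/exp transformation mentioned above is the standard R-CNN parameterization: centre offsets are normalised by the proposal's size, and width/height ratios are log-transformed, so the four regression targets stay on a comparable scale.

```python
import math

# R-CNN bounding-box regression targets and their inverse.
# Boxes are in (cx, cy, w, h) centre format.

def bbox_targets(proposal, ground_truth):
    """Targets (tx, ty, tw, th) that map the proposal to the truth."""
    px, py, pw, ph = proposal
    gx, gy, gw, gh = ground_truth
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))

def apply_targets(proposal, t):
    """Inverse (exp) transform: recover the predicted box."""
    px, py, pw, ph = proposal
    tx, ty, tw, th = t
    return (px + tx * pw, py + ty * ph,
            pw * math.exp(tw), ph * math.exp(th))
```

During training the network regresses (tx, ty, tw, th); at inference `apply_targets` turns predictions back into boxes, and the round trip is exact.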
The Fast R-CNN method can also be used. With this method, a feature map is obtained from the image by a feature extractor; the Selective Search algorithm is run on the original image, and each RoI (Region of Interest, in effect a coordinate group corresponding to a Region Proposal) is mapped onto the feature map. RoI Pooling is then performed on each RoI to obtain feature vectors of equal length. The obtained feature vectors are sorted into positive and negative samples (maintaining a certain ratio of positive to negative samples), divided into batches and passed to a parallel R-CNN sub-network. Next, classification and regression are performed at the same time, and the two losses are unified.
Another technique is Faster R-CNN. The first step is to generate anchor boxes of different sizes and aspect ratios on a sliding window, set an IoU threshold, and label these anchor boxes as positive or negative according to the Ground Truth. The sample data passed into the RPN network therefore consists of anchor boxes (coordinates) and whether there is an object in each anchor box (a binary classification label). The RPN network maps each sample to a probability value and four coordinate values: the probability value reflects the probability that the anchor box contains an object, and the four coordinate values are regressed to define the position of the object. Finally, the binary classification loss and the coordinate regression loss are unified as the training target of the RPN network.
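Anchor generation at one sliding-window position can be sketched as follows. The specific scales and aspect ratios below are illustrative defaults, not values stated in the patent.

```python
# Sketch of RPN-style anchor generation: at each window position, emit
# one box per (scale, aspect-ratio) pair. Scale s fixes the area (s*s);
# ratio r fixes h/w.

def generate_anchors(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return (cx, cy, w, h) anchors centred at one window position."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s / r ** 0.5
            h = s * r ** 0.5  # so w * h == s * s and h / w == r
            anchors.append((cx, cy, w, h))
    return anchors
```

With three scales and three ratios, each position contributes nine anchors, which is the configuration commonly used with Faster R-CNN.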
The Region Proposals obtained from the RPN undergo a similar labeling process after being screened according to their probability values, and are then passed into the R-CNN sub-network for multi-class classification and coordinate regression. A multi-task loss is again used to combine the two losses.
As noted above, YOLO can also be used. First, the data is prepared by scaling the picture and dividing it into an equally spaced grid. Each grid cell is assigned the sample to be predicted according to its IoU with the Ground Truth. Using a network modified from GoogLeNet, each grid cell predicts a conditional probability value for each category and generates B boxes. Each box predicts five regression values: four characterize the position, and the fifth characterizes the box's confidence, i.e., the probability that an object exists in the box (not an object of a particular type) combined with the accuracy of the position (represented by IoU). At test time, the class-specific score is calculated as the product of the cell's conditional class probability and the box's confidence.
Finally, NMS (Non-Maximum Suppression) is used to filter the boxes and obtain the final prediction box.
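The NMS filtering step can be sketched compactly: keep the highest-scoring box, discard boxes that overlap it beyond a threshold, and repeat. The IoU helper is repeated here (corner-format boxes) so the sketch is self-contained.

```python
# Greedy non-maximum suppression over scored (x1, y1, x2, y2) boxes.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, threshold=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Drop remaining boxes that overlap the kept one too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < threshold]
    return keep
```

The threshold trades duplicate suppression against recall: a lower value merges more overlapping detections into one.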
Pose Estimation
The recognition of human movements is generally divided into two approaches: top-down and bottom-up. The top-down method first uses an object recognition algorithm to find the position of each possible person in the input image and marks that region with a rectangular frame, generally called a bounding box; the image inside the bounding box is then sent to a human-body motion analysis network to find the person's joint points and limbs. In contrast, a bottom-up method first tries to find the position of every possible joint point in the image. The bottom-up network produces feature maps for n sets of key points; then, according to the relationships between the key points and their positions, the parts are assembled to represent a person's movement.
The pose recognition model exploits the strong relationship between joint-point positions and the input image. The collected swimmer data is fed into a human-body pose recognition network for training. First, the data passes through the target detection layer; after each person is framed, the framed region is passed to an STN (spatial transformer network) to correct inaccurate frames. Finally, the data is fed into the network for skeletal key-point prediction.
Image Classification
The Inception Module is the core component of GoogLeNet. Its structure is shown in FIG. 8A for the naive version and FIG. 8B for the version with dimension reductions. The basic structure of the Inception Module has four components: 1×1 convolution, 3×3 convolution, 5×5 convolution and 3×3 maximum pooling. Finally, the four component outputs are concatenated along the channel axis. This is the core idea of the Inception Module: information at different scales of the image is extracted through multiple convolution kernels and finally fused to obtain a better representation of the image.
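The effect of the 1×1 dimension-reduction convolutions (FIG. 8B) versus the naive version (FIG. 8A) can be seen with simple parameter arithmetic. The channel numbers below are illustrative, chosen to resemble a typical GoogLeNet inception block, not values from the patent.

```python
# Parameter-count arithmetic for an Inception module (weights only,
# biases ignored). Channel numbers are illustrative.

def conv_params(in_ch, out_ch, k):
    """Weights of a k x k convolution: k*k*in_ch per output channel."""
    return k * k * in_ch * out_ch

in_ch = 192
# Naive 5x5 branch: convolve the full 192-channel input directly.
naive = conv_params(in_ch, 32, 5)
# Reduced branch: 1x1 conv down to 16 channels, then 5x5 up to 32.
reduced = conv_params(in_ch, 16, 1) + conv_params(16, 32, 5)
# The module's output channels are the sum over the four branches,
# since the branch outputs are concatenated along the channel axis.
out_channels = 64 + 128 + 32 + 32  # 1x1, 3x3, 5x5 and pool-projection
```

Here the reduced 5×5 branch needs roughly a tenth of the weights of the naive one, which is the point of the dimension-reduction design.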
MobileNets
MobileNets is a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks. By introducing two simple global hyperparameters, an effective balance between speed and accuracy can be achieved; these hyperparameters allow the model builder to choose an appropriately sized model for an application according to the constraints of the problem. MobileNets are used in a wide range of scenarios, including object detection, fine-grained classification and face attribute recognition.
The basic unit of MobileNets is a depth-wise convolution followed by a pointwise (1×1) convolution, together forming a depth-wise separable convolution. The structure of MobileNets is shown in FIG. 9.
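The saving from depth-wise separable convolution can be quantified with the multiply-accumulate counts from the MobileNets paper's notation (DK = kernel size, M and N = input and output channels, DF = feature-map size); the example layer sizes are illustrative.

```python
# Multiply-accumulate cost of a standard convolution versus a
# depth-wise separable one (MobileNets' core trick).

def standard_conv_cost(dk, m, n, df):
    return dk * dk * m * n * df * df

def separable_conv_cost(dk, m, n, df):
    depthwise = dk * dk * m * df * df   # one DKxDK filter per channel
    pointwise = m * n * df * df         # 1x1 conv mixes the channels
    return depthwise + pointwise

# Example: a 3x3 layer with 64 -> 128 channels on a 56x56 feature map.
ratio = separable_conv_cost(3, 64, 128, 56) / standard_conv_cost(3, 64, 128, 56)
# ratio == 1/N + 1/DK^2 == 1/128 + 1/9, roughly an 8-9x reduction
```

The closed form 1/N + 1/DK² shows why 3×3 separable layers are about eight to nine times cheaper than standard ones, which is the "effective balance between speed and accuracy" referred to above.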
Performance Analysis
Generally, when determining whether a posture is good enough, the evaluation is based on the positions of different body parts. In swimming posture analysis, three kinds of variables are important: the angle of each leg, the angle of each arm, and distance.
i) Angle. After obtaining the coordinates of three body parts in an image, the target angle is determined by the law of cosines. For example, the coordinates of the shoulder, elbow and wrist can be used to work out the angle of an arm; for a leg, the coordinates of the hip, knee and ankle can be used to calculate the angle.
ii) Distance. Distance is also important when evaluating human posture. In the system of the present invention, the distance between two body parts is determined by the distance formula in 2D rectangular coordinates. The distance information is interpreted by comparison with a standard distance. For example, to determine whether the user is rotating his body enough, the distance between the legs can be compared with the distance between the shoulders.
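Both variables above reduce to elementary 2D geometry. A minimal sketch, assuming keypoint coordinates in image space (the normalisation against a "standard distance" is an assumed scheme):

```python
import math

# i) Joint angle at the middle point b of three keypoints a-b-c,
#    via the law of cosines (e.g. shoulder-elbow-wrist for the arm).
def joint_angle(a, b, c):
    ab = math.dist(a, b)
    cb = math.dist(c, b)
    ac = math.dist(a, c)
    # Law of cosines: ac^2 = ab^2 + cb^2 - 2*ab*cb*cos(angle_b)
    cos_b = (ab ** 2 + cb ** 2 - ac ** 2) / (2 * ab * cb)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_b))))

# ii) Distance between two body parts in 2D rectangular coordinates,
#     and its ratio to a reference ("standard") distance.
def part_distance(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def distance_ratio(p, q, standard):
    """Near 1.0 means the posture matches the reference spacing."""
    return part_distance(p, q) / standard
```

The clamp on `cos_b` guards against floating-point values just outside [-1, 1] when the three points are nearly collinear.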
After determining the important variables, the swimming performance score can be based on the standard posture, together with a calculation of the swimmer's speed and the frequency of motion of the legs and arms. See Fok et al., "Intelligent Sports Performance Scoring and Analysis System Based on Deep Learning Network," 3rd International Conference on Artificial Intelligence and Big Data (2020) for further details.
While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention, and that the embodiments are merely illustrative of the invention, which is limited only by the appended claims. In particular, the foregoing detailed description illustrates the invention by way of example and not by way of limitation. The description enables one skilled in the art to make and use the present invention, and describes several embodiments, adaptations, variations and methods of use of the present invention.
Claims (18)
- A swimmer performance analysis system comprising: a plurality of cameras spaced along a swimming lane of a swimming pool so as to monitor the entire swimming lane and capture images thereof; an image capture processor that at least temporarily stores the captured images; an image processor that detects a swimmer in the captured images; and a swimming analyzer that analyzes the location of the swimmer in the captured images at various times and calculates the swimmer’s speed and acceleration, said swimming analyzer further outputting analyzed information including the swimmer’s speed and acceleration.
- The swimmer performance analysis system according to claim 1 wherein said image processor further detects in the captured images the parts of the body of the swimmer including the swimmer’s arms and legs; and wherein the swimming analyzer calculates the sequences of poses of the swimmer while swimming in the swimming lane so as to recognize the swimmer’s swimming style and analyze the swimmer’s posture while calculating speed and the angle of the swimmer’s body parts, said swimming analyzer further outputting the swimmer’s posture and the angle of the swimmer’s body parts in the analyzed information.
- The swimmer performance analysis system according to claim 1 wherein the analyzed information is fed back to a display with a screen that is accessible to a user of the system.
- The swimmer performance analysis system according to claim 2 wherein the analyzed information is fed back to a display with a screen that is accessible to a user of the system.
- The swimmer performance analysis system according to claim 3 wherein the user is at least one of a coach or the swimmer.
- The swimmer performance analysis system according to claim 4 wherein the user is at least one of a coach or the swimmer.
- The swimmer performance analysis system according to claim 1 wherein the cameras are suspended from a ceiling vertically above the swimming lane.
- The swimmer performance analysis system according to claim 2 wherein said image processor and swimming analyzer simulate the leg and arm performance of the swimmer at a fixed point and the swimming analyzer calculates the frequency and extent of the swimmer’s stroke.
- The swimmer performance analysis system according to claim 4 wherein the image capture processor and image processor use You Only Look Once (YOLO) processing to trace the swimmer in the captured images and to achieve slow motion playback of the swimmer’s stroke, keeping the swimmer in the middle of the display screen like tracking and tracing the swimmer with a moving camera.
- The swimmer performance analysis system according to claim 2 wherein the swimming analyzer classifies the swimmer’s stroke, and further including a performance analyzer that compares the swimmer’s classified stroke to an idealized version of the stroke.
- The swimmer performance analysis system according to claim 10 wherein the swimmer’s stroke can be classified into one of Freestyle, Breaststroke and Backstroke.
- The swimmer performance analysis system according to claim 1 wherein the image processor utilizes a type of convolutional neural network (CNN) processing.
- The swimmer performance analysis system according to claim 12 wherein the CNN processing is at least one of R-CNN, Fast R-CNN and Faster R-CNN.
- The swimmer performance analysis system according to claim 1 wherein the image processor utilizes a type of You Only Look Once (YOLO) processing.
- The swimmer performance analysis system according to claim 10 wherein the image classification is based on GoogLeNet with an inception module as the core component.
- The swimmer performance analysis system according to claim 10 wherein the image classification is based on MobileNets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180064734.4A CN116261479A (en) | 2020-12-02 | 2021-11-22 | Swimmer performance analysis system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202063120526P | 2020-12-02 | 2020-12-02 | |
US63/120,526 | 2020-12-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022116860A1 true WO2022116860A1 (en) | 2022-06-09 |
Family
ID=81853819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/132091 WO2022116860A1 (en) | 2020-12-02 | 2021-11-22 | Swimmer performance analysis system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116261479A (en) |
WO (1) | WO2022116860A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014128739A2 (en) * | 2013-02-19 | 2014-08-28 | Safety Bed Srl | A high-precision and high-reliability automatic system for real-time chronometric monitoring and analysis of a plurality of data regarding the individual performance of swimming athletes |
US20140277628A1 (en) * | 2013-03-15 | 2014-09-18 | Suunto Oy | Device and method for monitoring swimming performance |
CN206081550U (en) * | 2016-08-29 | 2017-04-12 | 李明卿 | A training aiding system for monitoring of swim motion parameter |
AU2017100348A4 (en) * | 2017-03-25 | 2017-04-27 | Wu, Yinnan Mr | Swimming Pool with Display Screen |
CN110180151A (en) * | 2019-05-06 | 2019-08-30 | 南昌嘉研科技有限公司 | A kind of swimming instruction auxiliary system |
CN111346358A (en) * | 2020-03-11 | 2020-06-30 | 嘉兴技师学院 | Swimming training evaluation system and method based on convolutional neural network |
2021
- 2021-11-22 CN CN202180064734.4A patent/CN116261479A/en active Pending
- 2021-11-22 WO PCT/CN2021/132091 patent/WO2022116860A1/en active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014128739A2 (en) * | 2013-02-19 | 2014-08-28 | Safety Bed Srl | A high-precision and high-reliability automatic system for real-time chronometric monitoring and analysis of a plurality of data regarding the individual performance of swimming athletes |
US20140277628A1 (en) * | 2013-03-15 | 2014-09-18 | Suunto Oy | Device and method for monitoring swimming performance |
CN206081550U (en) * | 2016-08-29 | 2017-04-12 | 李明卿 | A training aiding system for monitoring of swim motion parameter |
AU2017100348A4 (en) * | 2017-03-25 | 2017-04-27 | Wu, Yinnan Mr | Swimming Pool with Display Screen |
CN110180151A (en) * | 2019-05-06 | 2019-08-30 | 南昌嘉研科技有限公司 | A kind of swimming instruction auxiliary system |
CN111346358A (en) * | 2020-03-11 | 2020-06-30 | 嘉兴技师学院 | Swimming training evaluation system and method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN116261479A (en) | 2023-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Datta et al. | Person-on-person violence detection in video data | |
Chaudhari et al. | Yog-guru: Real-time yoga pose correction system using deep learning methods | |
US20060269145A1 (en) | Method and system for determining object pose from images | |
CN109325456A (en) | Target identification method, device, target identification equipment and storage medium | |
CN109684919B (en) | Badminton service violation distinguishing method based on machine vision | |
Papic et al. | Improving data acquisition speed and accuracy in sport using neural networks | |
CN111783702A (en) | Efficient pedestrian tumble detection method based on image enhancement algorithm and human body key point positioning | |
US11790652B2 (en) | Detection of contacts among event participants | |
CN113706579A (en) | Prawn multi-target tracking system and method based on industrial culture | |
CN115100744A (en) | Badminton game human body posture estimation and ball path tracking method | |
WO2022116860A1 (en) | Swimmer performance analysis system | |
Kishore et al. | Spatial Joint features for 3D human skeletal action recognition system using spatial graph kernels | |
Faujdar et al. | Human Pose Estimation using Artificial Intelligence with Virtual Gym Tracker | |
Yu | Evaluation of training efficiency of table tennis players based on computer video processing technology | |
CN116030533A (en) | High-speed motion capturing and identifying method and system for motion scene | |
Zeng et al. | Deep learning approach to automated data collection and processing of video surveillance in sports activity prediction | |
CN113408435B (en) | Security monitoring method, device, equipment and storage medium | |
CN114092863A (en) | Human body motion evaluation method for multi-view video image | |
Jian et al. | Deep learning used to recognition swimmers drowning | |
Cheng et al. | Body part connection, categorization and occlusion based tracking with correction by temporal positions for volleyball spike height analysis | |
CN111160179A (en) | Tumble detection method based on head segmentation and convolutional neural network | |
CN115273243B (en) | Fall detection method, device, electronic equipment and computer readable storage medium | |
Neskorodieva et al. | Real-time Classification, Localization and Tracking System (Based on Rhythmic Gymnastics) | |
US20220343649A1 (en) | Machine learning for basketball rule violations and other actions | |
CN114359328B (en) | Motion parameter measuring method utilizing single-depth camera and human body constraint |
Legal Events
Code | Title | Description
---|---|---
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21899894; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.08.2023)
122 | Ep: pct application non-entry in european phase | Ref document number: 21899894; Country of ref document: EP; Kind code of ref document: A1