CN114743125B - Barbell identification and tracking control method based on YOLO and improved template matching - Google Patents
- Publication number: CN114743125B (application CN202210198719.5A)
- Authority: CN (China)
- Prior art keywords: barbell, image, hamming distance, template matching, video
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 — Pattern recognition; analysing; design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Pattern recognition; analysing; matching criteria, e.g. proximity measures
- G06N3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
Abstract
The invention relates to a barbell identification and tracking control method based on YOLO and improved template matching, comprising the following steps: S1, acquiring side views of various barbells as training data, labeling the training data, and dividing it into a training set and a test set; S2, normalizing the pictures in the training set and training a barbell identification model with the YOLO algorithm while tuning hyperparameters; S3, acquiring an original barbell video, detecting the first frame of the video with the barbell identification model, and locating the barbell; S4, inputting the barbell position obtained from the first frame and the original barbell video into the improved template matching algorithm, which computes a barbell tracking video with the barbell trajectory drawn on it. Compared with the prior art, the invention improves the accuracy of barbell identification, can identify barbells of different styles, and raises both the frame rate and the accuracy.
Description
Technical Field
The invention relates to the technical field of fitness-equipment safety protection, and in particular to a barbell identification and tracking control method based on YOLO and improved template matching.
Background
With the continuous development of the economy, people pay more and more attention to health. A correct and reasonable mode of exercise achieves the goal of strengthening the body. Fitness exercise develops muscles, increases strength, and improves physical fitness and mental state through bare-handed training or training with various apparatuses, using special action patterns and methods. However, incorrect form during exercise can injure the body.
Many scholars have applied computer technology to fitness exercise. Zhang Z et al. propose a novel fine-grained gym motion recognition system that provides detailed motion information for monitoring exercise, assessing nonstandard motions, and avoiding muscle injury. Hsiao C et al. designed and developed a virtual fitness coach information system for the barbell bench press based on a deep learning long short-term memory (LSTM) network and a wearable device. Ricciardi L et al. built a system that acquires the kinematic data of the barbell wirelessly and in real time during squat, deadlift, and bench press movements. Mallakzadeh et al. use genetic algorithm optimization to determine an optimal objective function for evaluating the grip and the barbell trajectory. In summary, computer technology plays an important role in helping people exercise correctly and reasonably, and identifying the barbell, extracting and tracking its motion trajectory, and evaluating the correctness of that trajectory are of significant practical value for avoiding sports injuries.
For the barbell identification task, Wang Jinli et al. propose improving the accuracy of locating the barbell in weight-lifting images through wavelet denoising, but the complexity of the background information and the uncertainty of the barbell radius keep the method from generalizing. Deng Yu et al. propose building a circular template in advance and matching it against the filtered image by Hausdorff distance to identify the barbell, but because disc styles differ, this method struggles with barbells whose side view is not circular. Wang Xiangdong identifies the barbell by locating its center point with boundary-feature and gray-level-feature methods. Hsu C.T. et al. propose a barbell trajectory extraction algorithm based on spatio-temporal video information, as well as an efficient weight-lifting barbell tracking algorithm based on a diamond search strategy, whose large amount of computation, however, keeps the frame rate low. Wu Wen proposes obtaining feature points on the barbell disc with Harris corner detection, establishing the relationship between the center point and the feature points, tracking the feature points with the LK optical flow method, and estimating the barbell center in subsequent frames by statistical optimization via the correspondence between the feature points and the center; but some barbell discs have too few corner features for the center position to be recovered well.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a barbell identification and tracking control method based on YOLO and improved template matching. The method improves the accuracy of barbell identification and can identify barbells of different styles; by limiting the search area, dynamically updating the template, and enhancing template matching with an image hash, it tracks the barbell in video with an effectively improved frame rate and accuracy.
The aim of the invention can be achieved by the following technical scheme:
a barbell identification and tracking control method based on YOLO and improved template matching specifically comprises the following steps:
s1, acquiring side views of various barbells as training data, labeling the training data, and dividing it into a training set and a test set;
s2, normalizing the pictures in the training set, and training a barbell identification model with the YOLO algorithm while tuning hyperparameters;
s3, acquiring an original barbell video, detecting the first frame of the video with the barbell identification model, and locating the position of the barbell;
s4, inputting the barbell position obtained from the first frame and the original barbell video into the improved template matching algorithm, and calculating the barbell tracking video with the barbell trajectory drawn on it.
The barbell types corresponding to the side views include an occluded barbell, a barbell in use by an exerciser, and a stationary barbell.
The YOLO algorithm is one of the deep learning based target detection methods. A convolutional neural network learns the target features itself, replacing the traditional process of manually selecting and extracting features, and the region proposal or regression approach greatly improves detection accuracy and real-time performance. With the invention, barbells of various styles can be identified effectively; barbells not viewed from the side, barbells offset by a certain angle, and barbells with occluding objects can all be detected well. The method has high accuracy, strong robustness, and strong generalization ability.
The matching area of the improved template matching algorithm in step S4 is the neighborhood of the target's position in the current image, which both speeds up matching and improves its accuracy.
Further, when the template matching algorithm searches the matching area for the target position in the next frame, the required number of comparisons is given by:
S=[(w+Δw)-w+1]*[(h+Δh)-h+1]=(Δw+1)*(Δh+1)
where S is the number of comparisons, w×h is the size of the template, and (w+Δw)×(h+Δh) is the size of the matching region.
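As an illustrative sketch of this cost model (the `search_window` helper, its border clipping, and the Δw, Δh margins are assumptions for the example, not details fixed by the patent):

```python
import numpy as np

def comparisons(w, h, dw, dh):
    """S = [(w+dw)-w+1] * [(h+dh)-h+1] = (dw+1)*(dh+1): the number of
    placements of a (w x h) template inside a (w+dw) x (h+dh) window."""
    return (dw + 1) * (dh + 1)

def search_window(frame, box, dw, dh):
    """Crop the matching region around the previous box (x, y, w, h),
    clipped to the frame borders; returns the crop and its top-left offset."""
    x, y, w, h = box
    H, W = frame.shape[:2]
    x0, y0 = max(0, x - dw // 2), max(0, y - dh // 2)
    x1, y1 = min(W, x + w + dw // 2), min(H, y + h + dh // 2)
    return frame[y0:y1, x0:x1], (x0, y0)
```

With a 64×64 template and Δw = Δh = 16, only (16+1)·(16+1) = 289 positions are compared, independent of the frame size, which is where the claimed speed-up comes from.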
The process of performing target detection by the improved template matching algorithm in the step S4 specifically includes the following steps:
s11, performing target detection on a target in a first frame image in an original barbell video, and taking the detected barbell image as a temporary template image;
s12, performing template matching on the next frame of image in the original barbell video according to the temporary template image, and taking the image matched in the second frame as a new temporary template image;
s13, judging whether the end of the original barbell video has been reached; if not, return to step S12; otherwise, end.
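A minimal sketch of the S11–S13 loop, assuming grayscale frames and substituting a plain sum-of-squared-differences matcher for the hash-enhanced matching described later; the window margins `dw`, `dh` are illustrative values:

```python
import numpy as np

def match_ssd(region, templ):
    """Exhaustive SSD template match; returns the top-left (dx, dy) of
    the best match inside the search region."""
    rh, rw = region.shape[:2]
    th, tw = templ.shape[:2]
    best, best_pos = None, (0, 0)
    for dy in range(rh - th + 1):
        for dx in range(rw - tw + 1):
            patch = region[dy:dy + th, dx:dx + tw]
            score = np.sum((patch.astype(np.int64) - templ.astype(np.int64)) ** 2)
            if best is None or score < best:
                best, best_pos = score, (dx, dy)
    return best_pos

def track(frames, first_box, dw=20, dh=20):
    """Steps S11–S13: take the detected box in frame 1 as the template,
    match it in a local window of each next frame, and refresh the
    template from every matched frame (dynamic template update)."""
    x, y, w, h = first_box
    template = frames[0][y:y + h, x:x + w]
    trajectory = [(x + w // 2, y + h // 2)]
    for frame in frames[1:]:
        H, W = frame.shape[:2]
        x0, y0 = max(0, x - dw), max(0, y - dh)
        region = frame[y0:min(H, y + h + dh), x0:min(W, x + w + dw)]
        dx, dy = match_ssd(region, template)
        x, y = x0 + dx, y0 + dy
        template = frame[y:y + h, x:x + w]   # S12: matched image becomes the new template
        trajectory.append((x + w // 2, y + h // 2))
    return trajectory
```

The returned center points are what the trajectory drawing of step S4 would consume.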
The improved template matching algorithm in step S4 adopts an image hashing technique as the criterion by which template matching measures whether two images are similar.
The improved template matching algorithm adopts an image hash enhancement method to calculate the similarity degree of two images, and specifically comprises the following steps:
s21, acquiring a first barbell image and a second barbell image, and extracting structural information and edge information of the first barbell image and the second barbell image to obtain a first barbell structural image, a second barbell structural image, a first barbell edge image and a second barbell edge image;
s22, respectively calculating the structural Hamming distance between the first barbell structural image and the second barbell structural image and the edge Hamming distance between the first barbell edge image and the second barbell edge image;
s23, calculating the whole Hamming distance between the first barbell image and the second barbell image according to the structure Hamming distance and the edge Hamming distance;
s24, comparing the whole Hamming distance with a preset range threshold value to obtain a similarity degree detection result between the first barbell image and the second barbell image.
Further, in step S21, the structural information is extracted through the Fourier transform, and the edge information is extracted through the Sobel operator.
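A hedged sketch of one way to realize the two extractions: the hash size `k`, the median binarization, and the use of `np.gradient` in place of a true Sobel kernel are all illustrative assumptions, since the patent names only "Fourier transform" and "Sobel operator":

```python
import numpy as np

def structure_hash(img, k=8):
    """Binary hash from the k x k low-frequency corner of the 2-D Fourier
    magnitude spectrum -- one reading of 'structural information'."""
    spec = np.abs(np.fft.fft2(img.astype(float)))[:k, :k]
    return (spec > np.median(spec)).ravel()

def edge_hash(img, k=8):
    """Binary hash from gradient magnitude pooled into k x k blocks;
    np.gradient stands in for the Sobel operator of the patent."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    mag = mag[:h - h % k, :w - w % k]
    blocks = mag.reshape(k, mag.shape[0] // k, k, mag.shape[1] // k).mean(axis=(1, 3))
    return (blocks > np.median(blocks)).ravel()
```

Each function maps a barbell image to a fixed-length bit vector, so the two images of step S21 can be compared bitwise in step S22.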
Further, in step S22, the structural Hamming distance and the edge Hamming distance adopt the same calculation formula, as follows:
where D_h is the Hamming distance, (x_i, y_i) are the coordinates of a pixel in the image, and N is the total number of pixels;
the calculation formula of the overall Hamming distance is specifically as follows:
D_h(A,B) = D_h(A1,B1) * W_1 + D_h(A2,B2) * W_2
where D_h(A,B) is the overall Hamming distance, D_h(A1,B1) is the structural Hamming distance with weight W_1, and D_h(A2,B2) is the edge Hamming distance with weight W_2.
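The weighted combination can be sketched as follows, operating on any pair of equal-length binary hash vectors; the equal weights W_1 = W_2 = 0.5 are an illustrative assumption, since the patent leaves the weights unspecified:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length binary hash vectors:
    the number of positions where they differ."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return int(np.count_nonzero(a != b))

def overall_distance(struct_a, struct_b, edge_a, edge_b, w1=0.5, w2=0.5):
    """D_h(A,B) = D_h(A1,B1)*W1 + D_h(A2,B2)*W2 on the structural and
    edge hashes of images A and B."""
    return w1 * hamming(struct_a, struct_b) + w2 * hamming(edge_a, edge_b)
```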
Further, the range threshold preset in step S24 consists of T_1 and T_2 with T_1 < T_2. When D_h(A,B) = 0, the similarity detection result is completely similar; when 0 < D_h(A,B) < T_1, the images are almost the same; when T_1 < D_h(A,B) < T_2, the images are somewhat different but relatively similar; and when T_2 < D_h(A,B), the images are completely different.
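The four-way comparison of step S24 can be sketched as below; the category strings are paraphrases of the patent's wording, and concrete values of T_1 and T_2 are left to the implementer:

```python
def similarity_category(d, t1, t2):
    """Map the overall Hamming distance D_h(A,B) onto the four similarity
    categories of step S24, given preset thresholds t1 < t2."""
    if d == 0:
        return "completely similar"
    if d < t1:
        return "almost the same"
    if d < t2:
        return "somewhat different but relatively similar"
    return "completely different"
```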
Compared with the prior art, the invention has the following beneficial effects:
1. The YOLO algorithm used by the invention greatly improves the precision of barbell identification; it can detect barbells of different styles and at different angles, and is highly robust.
2. Plain template matching must traverse the entire image, which is inefficient, time-consuming, and prone to false matches. By limiting the search area of template matching, the invention reduces the computation spent traversing the whole image, speeds up the search for the target, and avoids comparisons against background regions, thereby improving the computation speed.
3. A template matching algorithm can only handle translation: if the matching target in the original image rotates or changes size, the algorithm cannot match the target well and produces erroneous results. Since the background and illumination of the target area change very little between adjacent frames of a video, the invention uses a template updating method: the target in the first frame of the video is detected and used as the template image; template matching is performed on the second frame with that template, and the image matched in the second frame becomes the new template image for the next frame; this process repeats until the video ends. Continuously updating the template keeps the similarity between each template image and the target in the next frame high, which reduces drift during tracking.
4. Addressing the defect that the difference hash focuses only on the structure of an image and discards its details, the invention proposes enhancing the image hash by fusing structural information with edge information, which keeps the Hamming distance between images discriminative while reducing the time consumed by the image comparison algorithm.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of template matching limiting search areas in accordance with the present invention;
FIG. 3 is a diagram of the network structure of the YOLO algorithm of the present invention;
fig. 4 shows the practical effect of barbell tracking and trajectory drawing according to the present invention; figs. 4(a) to 4(c) show the specific process of barbell tracking and trajectory drawing.
Detailed Description
The invention will now be described in detail with reference to the drawings and a specific embodiment. The embodiment is implemented on the premise of the technical scheme of the invention, and a detailed implementation and a specific operation process are given, but the protection scope of the invention is not limited to the following example.
Examples
As shown in fig. 1, a barbell identification and tracking control method based on YOLO and improved template matching specifically includes the following steps:
s1, acquiring side views of various barbells as training data, labeling the training data, and dividing it into a training set and a test set;
s2, normalizing the pictures in the training set, and training a barbell identification model with the YOLO algorithm while tuning hyperparameters;
s3, acquiring an original barbell video, detecting the first frame of the video with the barbell identification model, and locating the position of the barbell;
s4, inputting the barbell position obtained from the first frame and the original barbell video into the improved template matching algorithm, and calculating the barbell tracking video with the barbell trajectory drawn on it, as shown in fig. 4.
The barbell types corresponding to the side views include an occluded barbell, a barbell in use by an exerciser, and a stationary barbell.
As shown in fig. 3, the YOLO algorithm is one of the deep learning based target detection methods. A convolutional neural network learns the target features itself, replacing the traditional process of manually selecting and extracting features, and the region proposal or regression approach greatly improves detection accuracy and real-time performance. With the invention, barbells of various styles can be identified effectively; barbells not viewed from the side, barbells offset by a certain angle, and barbells with occluding objects can all be detected well. The method has high accuracy, strong robustness, and strong generalization ability.
As shown in fig. 2, the matching area of the improved template matching algorithm in step S4 is the neighborhood of the target's position in the current image, which both speeds up matching and improves its accuracy. A plain template matching algorithm traverses the entire image to search for the region most similar to the template image, but traversing the entire image is inefficient, time-consuming, and prone to false matches.
When the template matching algorithm searches the matching area for the target position in the next frame, the required number of comparisons is given by:
S=[(w+Δw)-w+1]*[(h+Δh)-h+1]=(Δw+1)*(Δh+1)
where S is the number of comparisons, w×h is the size of the template, and (w+Δw)×(h+Δh) is the size of the matching region.
The process of performing target detection by the improved template matching algorithm in step S4 specifically includes the following steps:
s11, performing target detection on a target in a first frame image in an original barbell video, and taking the detected barbell image as a temporary template image;
s12, performing template matching on the next frame of image in the original barbell video according to the temporary template image, and taking the image matched in the second frame as a new temporary template image;
s13, judging whether the end of the original barbell video has been reached; if not, return to step S12; otherwise, end.
The improved template matching algorithm in step S4 adopts an image hashing technique as the criterion by which template matching measures whether two images are similar.
The improved template matching algorithm adopts an image hash enhancement method to calculate the similarity degree of two images, and specifically comprises the following steps:
s21, acquiring a first barbell image and a second barbell image, and extracting structural information and edge information of the first barbell image and the second barbell image to obtain a first barbell structural image, a second barbell structural image, a first barbell edge image and a second barbell edge image;
s22, respectively calculating the structural Hamming distance between the first barbell structural image and the second barbell structural image and the edge Hamming distance between the first barbell edge image and the second barbell edge image;
s23, calculating the whole Hamming distance between the first barbell image and the second barbell image according to the structure Hamming distance and the edge Hamming distance;
s24, comparing the whole Hamming distance with a preset range threshold value to obtain a similarity degree detection result between the first barbell image and the second barbell image.
In step S21, the structural information is extracted by the Fourier transform, and the edge information is extracted by the Sobel operator.
In step S22, the structural Hamming distance and the edge Hamming distance adopt the same calculation formula, as follows:
where D_h is the Hamming distance, (x_i, y_i) are the coordinates of a pixel in the image, and N is the total number of pixels;
the calculation formula of the overall Hamming distance is specifically as follows:
D_h(A,B) = D_h(A1,B1) * W_1 + D_h(A2,B2) * W_2
where D_h(A,B) is the overall Hamming distance, D_h(A1,B1) is the structural Hamming distance with weight W_1, and D_h(A2,B2) is the edge Hamming distance with weight W_2.
The range threshold preset in step S24 consists of T_1 and T_2 with T_1 < T_2. When D_h(A,B) = 0, the similarity detection result is completely similar; when 0 < D_h(A,B) < T_1, the images are almost the same; when T_1 < D_h(A,B) < T_2, the images are somewhat different but relatively similar; and when T_2 < D_h(A,B), the images are completely different.
Furthermore, the particular embodiment described here may be varied, and the above description merely illustrates the structure of the invention. Equivalent or simple changes to the structure, characteristics, and principle of the invention fall within its protection scope. Those skilled in the art may make various modifications or additions to the described embodiment, or substitute similar methods, without departing from the structure of the invention or exceeding the scope defined in the appended claims.
Claims (5)
1. A barbell identification and tracking control method based on YOLO and improved template matching is characterized by comprising the following steps:
s1, acquiring side views of various barbells as training data, labeling the training data, and dividing it into a training set and a test set;
s2, normalizing the pictures in the training set, and training a barbell identification model with the YOLO algorithm while tuning hyperparameters;
s3, acquiring an original barbell video, detecting the first frame of the video with the barbell identification model, and locating the position of the barbell;
s4, inputting the barbell position obtained from the first frame and the original barbell video into the improved template matching algorithm, and calculating the barbell tracking video with the barbell trajectory drawn on it;
the matching area of the improved template matching algorithm in the step S4 is a nearby area of the position of the current image;
the formula of the comparison times required by the template matching algorithm when searching the position of the next frame of image through the matching area is as follows:
S=[(w+Δw)-w+1]*[(h+Δh)-h+1]=(Δw+1)*(Δh+1)
wherein S is the number of comparisons, w×h is the size of the template, and (w+Δw)×(h+Δh) is the size of the matching region;
the process of performing target detection by the improved template matching algorithm in the step S4 specifically includes the following steps:
s11, performing target detection on a target in a first frame image in an original barbell video, and taking the detected barbell image as a temporary template image;
s12, performing template matching on the next frame of image in the original barbell video according to the temporary template image, and taking the image matched in the second frame as a new temporary template image;
s13, judging whether the end of the original barbell video has been reached; if not, return to step S12; otherwise, end;
the improved template matching algorithm in step S4 adopts an image hashing technique as the criterion by which template matching measures whether two images are similar;
the improved template matching algorithm adopts an image hash enhancement method to calculate the similarity degree of two images, and specifically comprises the following steps:
s21, acquiring a first barbell image and a second barbell image, and extracting structural information and edge information of the first barbell image and the second barbell image to obtain a first barbell structural image, a second barbell structural image, a first barbell edge image and a second barbell edge image;
s22, respectively calculating the structural Hamming distance between the first barbell structural image and the second barbell structural image and the edge Hamming distance between the first barbell edge image and the second barbell edge image;
s23, calculating the whole Hamming distance between the first barbell image and the second barbell image according to the structure Hamming distance and the edge Hamming distance;
s24, comparing the whole Hamming distance with a preset range threshold value to obtain a similarity degree detection result between the first barbell image and the second barbell image.
2. The barbell identification and tracking control method based on YOLO and improved template matching according to claim 1, wherein the barbell types corresponding to the side views include an occluded barbell, a barbell in use by an exerciser, and a stationary barbell.
3. The barbell identification and tracking control method based on YOLO and improved template matching according to claim 1, wherein the structure information is extracted through fourier transform in step S21, and the edge information is extracted through Sobel operator.
4. The barbell identification and tracking control method based on YOLO and improved template matching according to claim 1, wherein the structural Hamming distance and the edge Hamming distance in step S22 adopt the same calculation formula, as follows:
where D_h is the Hamming distance, (x_i, y_i) are the coordinates of a pixel in the image, and N is the total number of pixels;
the calculation formula of the overall Hamming distance is specifically as follows:
D_h(A,B) = D_h(A1,B1) * W_1 + D_h(A2,B2) * W_2
where D_h(A,B) is the overall Hamming distance, D_h(A1,B1) is the structural Hamming distance with weight W_1, and D_h(A2,B2) is the edge Hamming distance with weight W_2.
5. The barbell identification and tracking control method based on YOLO and improved template matching according to claim 4, wherein the range threshold preset in step S24 consists of T_1 and T_2 with T_1 < T_2; when D_h(A,B) = 0, the similarity detection result is completely similar; when 0 < D_h(A,B) < T_1, the images are almost the same; when T_1 < D_h(A,B) < T_2, the images are somewhat different but relatively similar; and when T_2 < D_h(A,B), the images are completely different.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202210198719.5A | 2022-03-02 | 2022-03-02 | Barbell identification and tracking control method based on YOLO and improved template matching |
Publications (2)
| Publication Number | Publication Date |
| --- | --- |
| CN114743125A | 2022-07-12 |
| CN114743125B | 2024-02-27 |
Family
- ID=82275915

Family Applications (1)
| Application Number | Priority Date | Filing Date |
| --- | --- | --- |
| CN202210198719.5A (granted as CN114743125B, active) | 2022-03-02 | 2022-03-02 |

Country Status (1)
| Country | Link |
| --- | --- |
| CN | CN114743125B |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN118474551B * | 2024-03-26 | 2024-10-29 | 国家体育总局体育科学研究所 | Image interference suppression system and method for complex training environment of weightlifting site |
| CN118469430A | 2024-05-06 | 2024-08-09 | 中航材利顿(北京)航空科技有限公司 | Internet of things-based aircraft supply chain tracking method and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017000466A1 (en) * | 2015-07-01 | 2017-01-05 | 中国矿业大学 | Method and system for tracking moving target based on optical flow method |
CN111241931A (en) * | 2019-12-30 | 2020-06-05 | 沈阳理工大学 | Aerial unmanned aerial vehicle target identification and tracking method based on YOLOv3 |
WO2021012484A1 (en) * | 2019-07-19 | 2021-01-28 | 平安科技(深圳)有限公司 | Deep learning-based target tracking method and apparatus, and computer readable storage medium |
CN112884742A (en) * | 2021-02-22 | 2021-06-01 | 山西讯龙科技有限公司 | Multi-algorithm fusion-based multi-target real-time detection, identification and tracking method |
CN113763424A (en) * | 2021-08-13 | 2021-12-07 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Real-time intelligent target detection method and system based on embedded platform |
CN114120037A (en) * | 2021-11-25 | 2022-03-01 | 中国农业科学院农业信息研究所 | Germinated potato image recognition method based on improved yolov5 model |
Non-Patent Citations (1)
Title |
---|
Deng Yu; Liu Guoyi; Li Hua. A video-based barbell trajectory tracking and analysis system. Journal of Image and Graphics. 2006, (No. 12), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN114743125A (en) | 2022-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114743125B (en) | Barbell identification and tracking control method based on YOLO and improved template matching | |
JP6733738B2 (en) | MOTION RECOGNITION DEVICE, MOTION RECOGNITION PROGRAM, AND MOTION RECOGNITION METHOD | |
US8634638B2 (en) | Real-time action detection and classification | |
CN111368791B (en) | Pull-up test counting method and system based on Quick-OpenPose model | |
CN110674785A (en) | Multi-person posture analysis method based on human body key point tracking | |
Deepa et al. | Comparison of yolo, ssd, faster rcnn for real time tennis ball tracking for action decision networks | |
CN109376663A (en) | A kind of human posture recognition method and relevant apparatus | |
CN107067413A (en) | A kind of moving target detecting method of time-space domain statistical match local feature | |
CN110232308A (en) | Robot gesture track recognizing method is followed based on what hand speed and track were distributed | |
CN103902992B (en) | Human face recognition method | |
CN111259716A (en) | Human body running posture identification and analysis method and device based on computer vision | |
CN104408461B (en) | A kind of action identification method based on sliding window local matching window | |
CN105512630B (en) | Human eye detection and localization method | |
CN111144165B (en) | Gait information identification method, system and storage medium | |
CN114973401A (en) | Standardized pull-up assessment method based on motion detection and multi-mode learning | |
CN106446911A (en) | Hand recognition method based on image edge line curvature and distance features | |
CN111105443A (en) | Video group figure motion trajectory tracking method based on feature association | |
CN112101315A (en) | Deep learning-based exercise judgment guidance method and system | |
CN115035037A (en) | Limb rehabilitation training method and system based on image processing and multi-feature fusion | |
CN117133057A (en) | Physical exercise counting and illegal action distinguishing method based on human body gesture recognition | |
CN115937967A (en) | Body-building action recognition and correction method | |
WO2023279531A1 (en) | Method for counting drilling pipe withdrawals in a drilling video on basis of human body pose recognition | |
CN115105821A (en) | Gymnastics training auxiliary system based on OpenPose | |
Lim et al. | SwATrack: A Swarm Intelligence-based Abrupt Motion Tracker. | |
US11944870B2 (en) | Movement determination method, movement determination device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||