CN112488047A - Piano fingering intelligent identification method - Google Patents
Piano fingering intelligent identification method
- Publication number
- CN112488047A (application CN202011482561.1A)
- Authority
- CN
- China
- Prior art keywords
- piano
- key
- keyboard
- hand
- key points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B15/00—Teaching music
- G09B15/06—Devices for exercising or strengthening fingers or arms; Devices for holding fingers or arms in a proper position for playing
Abstract
The invention provides an intelligent piano fingering identification method comprising the following steps: camera installation and debugging; segmenting the piano keyboard with a deep-learning-based salient object detection algorithm; piano keyboard calibration; detecting the playing hand; detecting the coordinates of the playing fingers' key points with a hand key-point detection algorithm; and jointly matching the playing fingers with the piano keyboard information. Methods based on deep convolutional neural networks design a neural network model to mine deeper, more abstract image features; they need no manual feature engineering, are less affected by illumination, pose, and the like, and adapt well to complex scenes.
Description
Technical Field
The invention relates to an intelligent piano fingering identification method.
Background
When a piano score is played, whether the playing technique is correct must currently be confirmed manually: an observer has to watch the player's finger movements while remaining familiar with the score throughout. As a result, one observer cannot watch several players simultaneously and give prompt fingering feedback, which is especially limiting for self-taught learners with no tutor.
Learning to play the piano chiefly means learning to finger according to the score, and in piano teaching, playing correctness is measured by whether the fingering matches the score. In practice it is nearly impossible for one teacher to personally check, within the same period, whether several students' fingering is correct on the same or different scores. With the development of computer vision and machine learning, automatically recognizing events from video — limb-movement recognition, gesture recognition, face recognition, and so on — has become increasingly feasible. These recognition methods generally extract features of the object and then detect, classify, and identify it from those features. Feature-extraction methods fall into two main categories: traditional hand-crafted feature design and methods based on deep convolutional neural networks:
1. Traditional hand-crafted features include HOG, LBP, SIFT, and the like. They are relatively simple, require no learning or training, and need only simple computation and statistics. However, hand-crafted features are easily affected by external factors, so their performance in practice is not robust.
2. Methods based on deep convolutional neural networks design a neural network model to mine deeper, more abstract image features. They need no manual feature engineering, are less affected by illumination, pose, and the like, and adapt better to complex scenes.
Disclosure of Invention
The invention aims to provide an intelligent piano fingering identification method for independent study of piano scores, capable of simultaneously evaluating the fingering errors of several trainees as they play and flagging errors promptly, so that players receive timely prompts and evaluation even when no instructor is nearby to guide them.
The invention solves the following problem: when a pianist plays a piece, intelligent recognition of the playing fingering allows incorrect fingering to be flagged promptly, so the player can correct mistakes during practice and learn piano playing unaccompanied.
The specific technical scheme of the invention is as follows:
an intelligent piano fingering identification method comprises the following steps:
Step one, camera installation and debugging: install a camera whose field of view fully covers the piano keyboard, with image quality as clear as possible; the camera is linked to the piano's display screen so that the keyboard picture is shown on the piano's screen in real time.
Step two, segmenting the piano keyboard: the piano keyboard is divided by using a significance target detection algorithm SOD100K [1] based on deep learning, and the lightweight network provided by the algorithm mainly comprises a feature extractor and a cross-stage fusion part and can simultaneously process features of multiple scales. The feature extractor is stacked with the intra-layer multi-scale blocks proposed by SOD100K and is divided into 4 stages, each stage having 3, 4, 6, and 4 intra-layer multi-scale blocks, respectively, according to the resolution of the feature map. The cross-stage fusion part, which is a flexible convolution module (goctcnvs) proposed by SOD100K, processes features from stages of the feature extractor to obtain high-resolution output.
The algorithm uses a novel dynamic weight-decay scheme to reduce redundancy in the feature representation; the decay strength adapts to the features of individual channels. In particular, during back-propagation the decay term changes dynamically with the features of those channels. The weight update with dynamic weight decay can be expressed as:

w_i ← w_i − η(∂L/∂w_i + λ_d · S(x_i) · w_i)

where λ_d is the dynamic weight-decay coefficient, x_i is the feature computed with w_i, S(x_i) is a measure of that feature (which may be defined in several ways depending on the task), w_i is the weight of the i-th layer, and ∂L/∂w_i is the gradient to be updated. In this algorithm the goal is to assign decay according to the features of stable channels, using global average pooling as the per-channel indicator:

S(x_i) = GAP(x_i) = (1/(H·W)) Σ_{h=1}^{H} Σ_{w=1}^{W} x_i(h, w)

where x_i denotes the feature map, and H and W denote its height and width.
Step three, piano keyboard calibration: the frame coordinates of the segmented keyboard obtained in the second step need to be sorted from left to right and the keyboard frame needs to be calibrated, such as X1, X2.
Step four, detecting the playing hand: finger videos of piano playing are collected with the camera and supplemented with hand pictures collected from the Internet; hand boxes and left/right class labels are annotated, and a hand-detector model is trained with the FaceBoxes [2] detection algorithm. That algorithm proposes a new anchor densification strategy aimed at raising the recall of small-scale faces. Anchor boxes are tiled over different feature maps to detect the target object, but under crowding the small anchors tiled on the bottom layers of the network are clearly too sparse; the densification operation therefore offsets the bottom-layer small anchors around the center of each receptive field. The anchor density can be expressed as:

A_density = A_scale / A_interval

where A_scale denotes the scale of the anchor and A_interval denotes its tiling interval.
Step five, detecting the playing fingers' key points: the coordinates of the playing fingers' key points are detected with the OpenPose [3] hand key-point detection algorithm, and the detected key points near the fingertips are labeled h1, h2, h3, …. The network comprises 6 stages; the loss of each stage t is the L2 norm between the predicted confidence maps of the parts and their ground truth, plus the same term for the part affinity fields:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j^*(p)||_2^2
f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c^*(p)||_2^2

where S_j^t(p) and S_j^*(p) are the predicted and ground-truth confidence maps of part j at image location p, L_c^t(p) and L_c^*(p) are the predicted and ground-truth part affinity fields of limb c, and W(p) is 0 or 1: W(p) = 0 marks a missing annotation at p, and such points contribute no loss.

The overall loss is the sum of the losses of the individual stages:

f = Σ_{t=1}^{6} (f_S^t + f_L^t)
Step six, joint matching of the playing fingers and the piano keyboard information: when the player presses piano key X1, the piano side emits a signal f1 corresponding to the pressed key, and f1 is associated with key box X1. Step four detects whether a hand is present over the keyboard; if so, step five detects the hand's key points, and each key point near a fingertip is tested against the calibrated key box X1. If a fingertip key point falls inside the box, the fingertip key points identify which finger the player used, so whether the fingering is correct can be determined; if it is incorrect, an error prompt is issued.
Intelligent identification of fingering during piano playing can promptly remind a player of wrong fingering while playing a score, allows timely correction during solo practice, assists a teacher in guiding several piano students at once, improves teaching efficiency, and gives a player timely error prompts and evaluation when no instructor is nearby. The models built with deep learning for keyboard segmentation, hand detection, finger key-point detection, and so on are strongly resistant to interference from the external environment, and the method is robust.
Drawings
Fig. 1 is a schematic diagram of the intelligent piano fingering identification method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
An intelligent piano fingering identification method comprises the following overall steps.
Step one, camera installation and debugging: install a camera whose field of view fully covers the piano keyboard, with image quality as clear as possible; the camera is linked to the piano's display screen so that the keyboard picture is shown on the piano's screen in real time.
Step two, segmenting the piano keyboard: the piano keyboard segmentation mainly uses a saliency target Detection algorithm SOD100K [1] (high Efficient Object Detection with100K Parameters) based on deep learning, and a lightweight network proposed by the algorithm mainly comprises a feature extractor and a cross-stage fusion part, and can simultaneously process features of multiple scales. The feature extractor is stacked with the intra-layer multi-scale blocks proposed by SOD100K and is divided into 4 stages, each stage having 3, 4, 6, and 4 intra-layer multi-scale blocks, respectively, according to the resolution of the feature map. The cross-stage fusion part, which is a flexible convolution module (goctcnvs) proposed by SOD100K, processes features from stages of the feature extractor to obtain high-resolution output.
The algorithm uses a novel dynamic weight-decay scheme to reduce redundancy in the feature representation; the decay strength adapts to the features of individual channels. In particular, during back-propagation the decay term changes dynamically with the features of those channels. The weight update with dynamic weight decay can be expressed as:

w_i ← w_i − η(∂L/∂w_i + λ_d · S(x_i) · w_i)

where λ_d is the dynamic weight-decay coefficient, x_i is the feature computed with w_i, S(x_i) is a measure of that feature (which may be defined in several ways depending on the task), w_i is the weight of the i-th layer, and ∂L/∂w_i is the gradient to be updated. In this algorithm the goal is to assign decay according to the features of stable channels, using global average pooling as the per-channel indicator:

S(x_i) = GAP(x_i) = (1/(H·W)) Σ_{h=1}^{H} Σ_{w=1}^{W} x_i(h, w)

where x_i denotes the feature map, and H and W denote its height and width.
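As a rough illustration of the update rule above, the dynamic decay step can be sketched in plain NumPy; the learning rate, λ_d, and tensor shapes here are illustrative assumptions, not values taken from SOD100K.

```python
import numpy as np

def gap(feature_map):
    """Global average pooling S(x_i): mean over the H and W axes
    of a (C, H, W) feature map, giving one indicator per channel."""
    return feature_map.mean(axis=(1, 2))

def dynamic_decay_step(w, grad, feature_map, lr=0.01, lambda_d=1e-4):
    """One update with dynamic weight decay:
    w <- w - lr * (grad + lambda_d * S(x) * w),
    so the decay strength scales with each channel's mean response."""
    s = gap(feature_map)                # shape (C,), one value per channel
    decay = lambda_d * s[:, None] * w   # per-channel decay term (assumed broadcast)
    return w - lr * (grad + decay)

# Toy example: 2 channels, weights of shape (C, K)
w = np.ones((2, 3))
grad = np.zeros((2, 3))
x = np.stack([np.full((4, 4), 2.0), np.zeros((4, 4))])  # channel 0 active, channel 1 silent
w_new = dynamic_decay_step(w, grad, x)
# The active channel is decayed; the silent channel is left untouched
```

The point of the scheme is visible in the toy example: with a zero task gradient, only the channel with a large average response has its weights shrunk.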
Step three, piano keyboard calibration: the frame coordinates of the segmented keyboard obtained in the second step need to be sorted from left to right and the keyboard frame needs to be calibrated, such as X1, X2.
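The calibration in step three reduces to sorting boxes by their left edge. A minimal sketch, assuming the segmentation stage yields axis-aligned key boxes as (x_min, y_min, x_max, y_max) tuples; the box values are made up for illustration, and only the X1, X2, … naming comes from the text.

```python
def calibrate_keyboard(key_boxes):
    """Sort segmented key boxes left to right by their left edge and
    assign the calibration labels X1, X2, ... used in the matching step."""
    ordered = sorted(key_boxes, key=lambda box: box[0])  # box = (x_min, y_min, x_max, y_max)
    return {f"X{i}": box for i, box in enumerate(ordered, start=1)}

# Three keys detected out of order
boxes = [(120, 40, 160, 200), (40, 40, 80, 200), (80, 40, 120, 200)]
calibrated = calibrate_keyboard(boxes)
# calibrated["X1"] is the leftmost key box
```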
Step four, hand detection: finger videos of piano playing are collected with the camera and supplemented with hand pictures collected from the Internet; hand boxes and left/right class labels are annotated, and a hand-detector model is trained with the FaceBoxes [2] (FaceBoxes: A CPU Real-time Face Detector with High Accuracy) detection algorithm. That algorithm proposes a new anchor densification strategy aimed at raising the recall of small-scale faces. Anchor boxes are tiled over different feature maps to detect the target object, but under crowding the small anchors tiled on the bottom layers of the network are clearly too sparse; the densification operation therefore offsets the bottom-layer small anchors around the center of each receptive field. The anchor density can be expressed as:

A_density = A_scale / A_interval

where A_scale denotes the scale of the anchor and A_interval denotes its tiling interval.
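The density formula and the densification step can be sketched as follows; the scale and interval numbers are illustrative, and the even offsetting of an n × n grid of centers inside the receptive field follows the FaceBoxes idea rather than its exact implementation.

```python
def anchor_density(a_scale, a_interval):
    """A_density = A_scale / A_interval: how densely one anchor scale
    tiles the image relative to its sampling interval."""
    return a_scale / a_interval

def densify(center_x, center_y, a_scale, n):
    """Replace one anchor center with an n x n grid of centers evenly
    offset inside the receptive field, multiplying the density by n."""
    step = a_scale / n
    start = -a_scale / 2 + step / 2
    return [(center_x + start + i * step, center_y + start + j * step)
            for j in range(n) for i in range(n)]

# A 32-px anchor tiled every 32 px has density 1; densifying 4x yields 16 centers
centers = densify(64.0, 64.0, 32.0, 4)
```

Note that the offsets are symmetric about the original center, so densification raises recall for small objects without shifting the anchor distribution.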
Step five, detecting key points of the fingers: the coordinates of the playing fingers' key points are detected with the OpenPose [3] (OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields) hand key-point detection algorithm, and the detected key points near the fingertips are labeled h1, h2, h3, …. The network comprises 6 stages; the loss of each stage t is the L2 norm between the predicted confidence maps of the parts and their ground truth, plus the same term for the part affinity fields:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j^*(p)||_2^2
f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c^*(p)||_2^2

where S_j^t(p) and S_j^*(p) are the predicted and ground-truth confidence maps of part j at image location p, L_c^t(p) and L_c^*(p) are the predicted and ground-truth part affinity fields of limb c, and W(p) is 0 or 1: W(p) = 0 marks a missing annotation at p, and such points contribute no loss.

The overall loss is the sum of the losses of the individual stages:

f = Σ_{t=1}^{6} (f_S^t + f_L^t)
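The masked per-stage L2 loss and its sum over stages can be sketched as below; the array shapes and toy values are assumptions, and only the confidence-map half of the loss is shown for brevity.

```python
import numpy as np

def stage_loss(pred, gt, mask):
    """Masked L2 loss of one stage: W(p) * ||pred(p) - gt(p)||^2 summed
    over all locations p; mask entries of 0 drop unannotated points."""
    return float(np.sum(mask * (pred - gt) ** 2))

def total_loss(stage_preds, gt, mask):
    """Overall loss: the sum of the per-stage losses, as in the text."""
    return sum(stage_loss(p, gt, mask) for p in stage_preds)

gt = np.array([[1.0, 0.0], [0.0, 1.0]])
mask = np.array([[1.0, 1.0], [1.0, 0.0]])      # bottom-right annotation missing
preds = [np.zeros((2, 2)) for _ in range(6)]   # 6 stages, all predicting zero
loss = total_loss(preds, gt, mask)
# Only annotated points contribute: 6 stages * 1.0 each = 6.0
```

Supervising every stage with the same masked loss is what gives the multi-stage network its intermediate supervision: early stages receive gradient directly rather than only through later stages.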
Step six, information matching: when the player presses piano key X1, the piano side emits a signal f1 corresponding to the pressed key, and f1 is associated with key box X1. Step four detects whether a hand is present over the keyboard; if so, step five detects the hand's key points, and each key point near a fingertip is tested against the calibrated key box X1. If a fingertip key point falls inside the box, the fingertip key points identify which finger the player used, so whether the fingering is correct can be determined; if it is incorrect, an error prompt is issued.
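Step six reduces to a point-in-box test plus a lookup of the fingering the score expects. In this sketch the `expected_finger` mapping, the finger names, and the coordinates are illustrative assumptions; only the X1/h1-style labels come from the text.

```python
def point_in_box(point, box):
    """True if (x, y) lies inside an axis-aligned box (x_min, y_min, x_max, y_max)."""
    x, y = point
    x_min, y_min, x_max, y_max = box
    return x_min <= x <= x_max and y_min <= y <= y_max

def check_fingering(pressed_key, key_boxes, fingertip_points, expected_finger):
    """Find which fingertip key point falls inside the pressed key's box and
    compare it with the finger the score expects; returns (finger, is_correct)."""
    box = key_boxes[pressed_key]
    for finger, tip in fingertip_points.items():
        if point_in_box(tip, box):
            return finger, finger == expected_finger[pressed_key]
    return None, False  # no fingertip found over the pressed key

key_boxes = {"X1": (40, 40, 80, 200)}
tips = {"h1": (60, 120), "h2": (140, 120)}   # h1 over X1, h2 elsewhere
finger, ok = check_fingering("X1", key_boxes, tips, expected_finger={"X1": "h1"})
# -> ("h1", True): the expected finger pressed X1
```

When the returned flag is False, the system would raise the error prompt described in the text.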
Claims (1)
1. An intelligent piano fingering identification method is characterized by comprising the following steps:
step one, camera installation and debugging: a camera is installed whose field of view fully covers the piano keyboard, with image quality as clear as possible, and the camera is linked to the piano's display screen to show the keyboard picture on the piano's screen in real time;
step two, segmenting the piano keyboard: the keyboard is segmented mainly with the deep-learning-based salient object detection algorithm SOD100K [1], whose lightweight network consists mainly of a feature extractor and a cross-stage fusion part and can process features at multiple scales simultaneously; the feature extractor stacks the intra-layer multi-scale blocks proposed by SOD100K and is divided into 4 stages containing 3, 4, 6, and 4 such blocks respectively, according to the resolution of the feature map; the cross-stage fusion part, built from the flexible convolution module (gOctConv) proposed by SOD100K, processes features from the extractor's stages to obtain high-resolution output;
step three, piano keyboard calibration: sorting the bounding-box coordinates of the segmented keys obtained in step two from left to right and calibrating the key boxes as X1, X2, …;
step four, detecting the playing hand: collecting finger videos of piano playing with the camera and hand pictures from the Internet, annotating hand boxes and left/right class labels, and training a hand-detector model with the FaceBoxes [2] detection algorithm;
step five, detecting the playing fingers' key points: detecting the coordinates of the playing fingers' key points with the OpenPose [3] hand key-point detection algorithm and labeling the detected key points near the fingertips as h1, h2, h3, …;
step six, joint matching of the playing fingers and the piano keyboard information: when the player presses piano key X1, the piano side emits a signal f1 corresponding to the pressed key, and f1 is associated with key box X1; whether a hand is present over the keyboard is detected by step four; if so, the hand's key points are detected by step five, and each key point near a fingertip is compared with the calibrated key box X1; if a fingertip key point falls inside the box, the fingertip key points identify which finger the player used, so whether the fingering is correct can be determined, and if it is incorrect, an error prompt is issued.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011482561.1A CN112488047A (en) | 2020-12-16 | 2020-12-16 | Piano fingering intelligent identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011482561.1A CN112488047A (en) | 2020-12-16 | 2020-12-16 | Piano fingering intelligent identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112488047A true CN112488047A (en) | 2021-03-12 |
Family
ID=74917952
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011482561.1A Pending CN112488047A (en) | 2020-12-16 | 2020-12-16 | Piano fingering intelligent identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112488047A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113160656A (en) * | 2021-04-19 | 2021-07-23 | 中国美术学院 | Guzheng playing prompting method and Guzheng teaching auxiliary system |
CN113657185A (en) * | 2021-07-26 | 2021-11-16 | 广东科学技术职业学院 | Intelligent auxiliary method, device and medium for piano practice |
CN113723264A (en) * | 2021-08-25 | 2021-11-30 | 桂林智神信息技术股份有限公司 | Method and system for intelligently identifying playing errors for assisting piano teaching |
CN115870980A (en) * | 2022-12-09 | 2023-03-31 | 北部湾大学 | Vision-based piano playing robot control method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112488047A (en) | Piano fingering intelligent identification method | |
CN113762133A (en) | Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition | |
Hu et al. | Recognising human-object interaction via exemplar based modelling | |
Chen et al. | Using real-time acceleration data for exercise movement training with a decision tree approach | |
WO2022052941A1 (en) | Intelligent identification method and system for giving assistance with piano teaching, and intelligent piano training method and system | |
CN109508661A (en) | A kind of person's of raising one's hand detection method based on object detection and Attitude estimation | |
Lee et al. | Observing pianist accuracy and form with computer vision | |
CN103455826B (en) | Efficient matching kernel body detection method based on rapid robustness characteristics | |
CN113052138A (en) | Intelligent contrast correction method for dance and movement actions | |
Wei et al. | Performance monitoring and evaluation in dance teaching with mobile sensing technology | |
CN106951834A (en) | It is a kind of that motion detection method is fallen down based on endowment robot platform | |
CN109460724A (en) | The separation method and system of trapping event based on object detection | |
CN108304806A (en) | A kind of gesture identification method integrating feature and convolutional neural networks based on log path | |
CN117542121B (en) | Computer vision-based intelligent training and checking system and method | |
Guo et al. | PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training | |
Shi et al. | Design of optical sensors based on computer vision in basketball visual simulation system | |
Kerdvibulvech et al. | Real-time guitar chord estimation by stereo cameras for supporting guitarists | |
CN116503954A (en) | Physical education testing method and system based on human body posture estimation | |
CN116386424A (en) | Method, device and computer readable storage medium for music teaching | |
CN116271757A (en) | Auxiliary system and method for basketball practice based on AI technology | |
CN109447863A (en) | A kind of 4MAT real-time analysis method and system | |
CN115731608A (en) | Physical exercise training method and system based on human body posture estimation | |
TWI837038B (en) | Method for learning and recognizing individual behaviors for maker education | |
CN111652316A (en) | AR Chinese character recognition system based on multimedia application scene | |
CN109977819A (en) | A kind of Weakly supervised individual part localization method of application template matching process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||