CN112364785A - Exercise training guidance method, device, equipment and computer storage medium - Google Patents
- Publication number: CN112364785A
- Application number: CN202011271174.3A
- Authority
- CN
- China
- Prior art keywords
- muscle
- athlete
- joint
- key point
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Social Psychology (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Physical Education & Sports Medicine (AREA)
- Psychiatry (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
An embodiment of the present application provides an exercise training guidance method, apparatus, device, and computer storage medium. The method comprises the following steps: acquiring a video image of an athlete; identifying the athlete's joint key points in the video image to obtain joint key point data during movement; obtaining the athlete's corresponding muscle point force data from the joint key point data according to a muscle exertion adjacency matrix model; and comparing the athlete's muscle point force data with the muscle point force data of a standard action to obtain a comparison result, and providing exercise training guidance according to that result. In the provided embodiments, muscle exertion suggestions and exercise suggestions are obtained by identifying the athlete's joint key points, deriving the athlete's muscle exertion data from changes in the skeletal key points via the muscle exertion adjacency matrix model, and comparing that data with the muscle exertion of standard actions.
Description
Technical Field
The present application belongs to the field of motion recognition technology and, in particular, relates to an exercise training guidance method, apparatus, device, and computer storage medium.
Background
With advances in science and technology, motion recognition has developed rapidly and has great application value in assisted training. It can be used in sports, dance, and similar fields to analyze, evaluate, and assist the training of professional technical actions.
In prior-art implementations, wearable devices such as wristbands and patches are fixed at the ends of the limbs to detect movement direction and speed, and a Kinect depth camera is combined to judge the differences in joint position and movement sequence relative to a standard action, providing guidance by comparison.
However, because the prior art is based on simple comparison of joint position differences, it can only provide a yes/no judgment relative to the standard action and cannot provide force-application suggestions or differences, so training efficiency is low.
Disclosure of Invention
The embodiments of the present application provide an exercise training guidance method, apparatus, device, and computer storage medium, which can solve the prior-art problems that muscle exertion differences and guidance suggestions cannot be provided and that training guidance efficiency is low.
In a first aspect, an embodiment of the present application provides an exercise training guidance method, where the method includes:
acquiring a video image of an athlete;
identifying joint key points of the athlete in the video image, and obtaining the joint key point data of the athlete during movement;
obtaining corresponding athlete muscle point force data from the joint key point data according to a muscle exertion adjacency matrix model;
and comparing the muscle point force data of the athlete with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
In one embodiment, constructing the muscle exertion adjacency matrix model comprises:
acquiring position data of a key point of a standard motion action joint and corresponding muscle point force data;
establishing a neural network training set based on the key point position data of the standard motion joint and the corresponding muscle point force data;
establishing a bipartite graph network of corresponding relations between joint key points and muscle points;
and optimizing the bipartite graph network of the corresponding relation of the joint key points and the muscle points according to the neural network training set to obtain the muscle force-exerting adjacency matrix model.
In one embodiment, identifying the joint key points of the athlete in the video image and obtaining the joint key point data when the athlete moves comprises:
setting an initial space coordinate of a camera, and establishing a space coordinate system based on the initial space coordinate;
identifying joint key points of athletes in the video images, and obtaining key point space coordinates based on the space coordinate system;
establishing a first coordinate system based on any joint key point, and converting the key point space coordinate into a joint key point coordinate based on the first coordinate system;
and obtaining the joint key point data according to the change of the joint key point when the athlete moves.
In one embodiment, identifying the joint key points of the athlete in the video image and obtaining the spatial coordinates of the key points based on the spatial coordinate system comprises:
establishing a space coordinate system based on the relative positions of the three cameras;
converting, in the spatial coordinate system, each frame of pixels of the video image of the athlete's joint key point into a projection line based on each of the three cameras;
calculating the feet of the common perpendiculars between the projection lines;
and taking the mean of these perpendicular feet as the joint key point's spatial coordinates in the spatial coordinate system.
In one embodiment, establishing the first coordinate system based on arbitrary joint keypoints comprises:
selecting a neck joint key point as an origin, taking a left shoulder and a right shoulder as an x-axis, taking the vertical direction as a z-axis, and taking the x-axis, the y-axis, the z-axis and the origin as a y-axis which are positioned on a horizontal plane and are vertical to the x-axis and the z-axis, and establishing a first coordinate system by using the x-axis, the y-axis and the z-axis and the.
In one embodiment, the identifying of the athlete's joint keypoints in the video image comprises:
and detecting each frame of image of the video image by utilizing a stacked hourglass network algorithm, and identifying the joint key points of the athlete.
In one embodiment, the change data of the athlete's joint key points during movement are compared with the joint key point change data of the standard movement action;
and exercise training guidance is provided according to the comparison result of the joint key point positions.
In a second aspect, an embodiment of the present application provides an exercise training guidance device, where the device includes: a camera and a central processing unit;
the camera is used for acquiring a video image of the athlete;
the central processing unit includes: the system comprises a joint key point identification module, a muscle exerting processing module and a motion training guidance module;
the joint key point identification module is used for identifying the joint key points of the athletes in the video images and obtaining the joint key point data of the athletes during movement;
the muscle exertion processing module is used for obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle exertion adjacency matrix model;
the exercise training guidance module is used for comparing the muscle point force data of the athlete with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
In a third aspect, an embodiment of the present application provides an exercise training guidance apparatus, including: a camera, a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the athletic training guidance method described above.
In a fourth aspect, embodiments of the present application provide a computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the exercise training guidance method described above.
The exercise training guidance method, the exercise training guidance device, the exercise training guidance equipment and the computer storage medium can detect the movement of an athlete, extract the position information of the joint key points of the athlete, calculate the muscle force exertion condition of the athlete according to the position information of the joint key points by using the muscle force exertion adjacency matrix model, compare the muscle force exertion data with the muscle force exertion of standard movement to obtain the difference between the muscle force exertion of a user and the standard muscle force exertion, and give an exercise training guidance suggestion.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a method for guiding exercise training provided in an embodiment of the present application;
fig. 2 is a bipartite network diagram of a corresponding relationship between a key point and a muscle point in a training guidance method according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a method for establishing a spatial coordinate system in a training guidance method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a stacked hourglass network algorithm in a method for athletic training guidance according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an athletic training guidance device provided in an embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an athletic training guidance device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative only and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by illustrating examples thereof.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiment provided by the application can be used in the fields of sports, dancing and the like, and can be used for training, guiding, analyzing and evaluating technical actions.
The existing technology is based on simple comparison of joint position differences: it can only provide a yes/no judgment relative to the standard action and cannot provide force-application differences or suggestions, so training efficiency is low.
To solve the problems of the prior art, embodiments of the present application provide an exercise training guidance method, apparatus, device, and computer storage medium. Joint key points are identified in a video image of an athlete, the user's muscle exertion is calculated from the joint key point position information using a muscle exertion adjacency matrix model, and that exertion is compared with the muscle exertion of a standard action to obtain the difference between the user's and the standard muscle exertion and to give exertion suggestions and muscle-node exercise suggestions.
First, the exercise training guidance method provided in the embodiment of the present application is described below.
Referring to fig. 1, a flow chart of a method for guiding exercise training according to an embodiment of the present application is shown.
In this embodiment, the following steps may be included:
s1: a video image of the athlete is acquired.
The technical solution provided by the embodiments of the present application processes video images captured by a camera, so no wearable information-extraction device is required. In wearable solutions, the device is fixed at the end of a limb, and the positions of non-terminal joints must be obtained by inverse kinematics; inverse kinematics admits multiple solutions, becomes exceptionally difficult under the high degrees of freedom of multiple joints, and cannot predict the position of each internal joint point. Moreover, such equipment is cumbersome to wear, and calibrating the sensing devices requires professional personnel, increasing the difficulty of use. The solution provided here therefore eliminates the wearing and calibration steps of wearable devices and improves the user experience.
S2: and identifying the joint key points of the athlete in the video image, and obtaining the joint key point data when the athlete moves.
In this embodiment, the camera used may be a depth camera; capturing video images with a depth camera makes information extraction, joint key point recognition, motion tracking, and similar operations more convenient and effective.
The video images acquired by the depth camera are processed to identify the athlete's joint key points, track the motion trajectories of those key points, and so on.
S3: obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle exertion adjacency matrix model.
because the strength of the surface electromyographic signals and the strength of the muscle force are in a linear proportional relationship at the same part for the same action, the surface electromyographic signals can be identical to the strength of the muscle force.
In this embodiment, a standard neural network training data set is established, the muscle electrical signal strength of relevant movements performed by professional athletes is collected, and the data is used as standard movement data to optimize parameters of the adjacency relation matrix model.
The adjacency relation matrix model yields the correspondence between joint key points and muscle exertion, so muscle exertion data can be calculated from the position information and motion state of the skeletal (joint) key points.
S4: and comparing the muscle point force data of the athlete with the muscle point force data of the standard action to obtain a comparison result, and providing exercise training guidance according to the comparison result.
Through this embodiment, the difference between the user's muscle exertion and the standard muscle exertion can be obtained, yielding exertion suggestions and muscle-node exercise suggestions. This solves the prior-art problem that, based on simple comparison of joint position differences, only a yes/no judgment of the action can be provided and no improvement suggestions on muscle exertion can be given, resulting in low training efficiency. This embodiment can provide not only exertion guidance but also action guidance suggestions based on the identified joint key points, and it combines action guidance with muscle exertion guidance to provide professional training guidance, effectively improving training efficiency.
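The patent does not give a formula for the comparison in step S4; the sketch below is one plausible reading (an illustrative assumption, not the patent's method): per-muscle force values are compared against the standard action, and muscles whose relative deviation exceeds a tolerance are flagged for guidance. The muscle names and the 15% tolerance are hypothetical.

```python
import numpy as np

def compare_muscle_forces(athlete, standard, names, tol=0.15):
    """Flag muscles whose relative force deviation exceeds `tol`.

    athlete, standard: per-muscle-point force values for the user
    and for the standard action. Returns (muscle name, deviation)
    pairs; a negative deviation means the muscle is under-exerted.
    """
    athlete = np.asarray(athlete, dtype=float)
    standard = np.asarray(standard, dtype=float)
    # Relative deviation against the standard action's exertion.
    dev = (athlete - standard) / np.maximum(np.abs(standard), 1e-9)
    return [(names[i], float(dev[i])) for i in np.flatnonzero(np.abs(dev) > tol)]

# Hypothetical three-muscle example: the deltoid is under-exerted by 40%,
# the other two muscles are within tolerance.
hints = compare_muscle_forces([0.6, 1.0, 1.02], [1.0, 1.0, 1.0],
                              ["deltoid", "biceps brachii", "triceps brachii"])
```

A real system would map each flagged muscle to a textual exertion suggestion; here only the deviation itself is returned.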
Please refer to fig. 2, a bipartite graph of the correspondence between key points and muscle points in the training guidance method provided in an embodiment of the present application, in which constructing the muscle exertion adjacency matrix model comprises:
acquiring position data of a key point of a standard motion action joint and corresponding muscle point force data;
establishing a neural network training set based on the key point position data of the standard motion joint and the corresponding muscle point force data; establishing a bipartite graph network of corresponding relations between joint key points and muscle points; and optimizing the bipartite graph network of the corresponding relation of the joint key points and the muscle points according to the neural network training set to obtain the muscle force-exerting adjacency matrix model.
In the above embodiments, the selected (skeletal joint) key points comprise 33 human body key points: the head top, left ear, right ear, left eye, right eye, nose, left mouth corner, right mouth corner, head, neck, right index finger, right thumb, right palm, right wrist, right elbow, right shoulder, shoulder center, left elbow, left wrist, left palm, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot, and left foot;
the selected muscle points comprise 17 muscle points: the trapezius, pectoralis major, deltoid, trapezius, latissimus dorsi, biceps brachii, triceps brachii, extensor digitorum, serratus anterior, rectus abdominis, extrafemoral muscle, rectus femoris, vastus femoris, biceps femoris, gluteus maximus, gastrocnemius, and soleus.
An adjacency matrix with dimensions 17x25x6 is set according to the bipartite graph network, the relation matrix is randomly initialized from a normal distribution over the interval 0-1, a Huber loss function is set, and a graph structure is established with the following architecture:
graph convolutional layer - fully connected layer - graph convolutional layer - fully connected layer;
the convolutional layer uses the graph convolutional formula:
H^{l+1} = σ(A·H^l·W^l)    (1)
where W^l is the weight parameter matrix of layer l, and σ(·) is the ReLU activation function.
A standard graph neural network training data set is established: professional athletes are invited to wear electromyographic sensing devices attached to the 17 corresponding muscle points, and the muscle electrical signal strength is collected. Because, for the same action at the same body site, surface electromyographic signal strength is linearly proportional to muscle force, the surface electromyographic signal can be treated as equivalent to muscle exertion strength.
Meanwhile, video equipment collects the skeletal key point positions; the muscle electrical signal strengths of professional athletes performing standard sports actions, together with the skeletal key point positions parsed from the images, are recorded as training data for the neural network. The adjacency relation matrix is trained on this data, optimizing its parameters to obtain a description of the relationship between muscle exertion and skeletal key point movement. With this adjacency matrix, muscle exertion data can be calculated (inferred) from the position information and motion state of the skeletal key points.
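The conv - fc - conv - fc stack and Equation (1) above can be sketched as a NumPy forward pass. This is a minimal illustration under stated assumptions: the hidden width, the clipped-normal initialization, the identity muscle-side adjacency for the second graph convolution, and the zero training target are all invented here; the patent itself specifies only the 17x25x6 adjacency, the 0-1 normal initialization, the Huber loss, and the layer order.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)     # sigma(.) in Equation (1)

def graph_conv(A, H, W):
    """One graph convolution H' = relu(A @ H @ W), per Equation (1)."""
    return relu(A @ H @ W)

def huber(pred, target, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails."""
    err = np.abs(pred - target)
    quad = np.minimum(err, delta)
    return float(np.mean(0.5 * quad**2 + delta * (err - quad)))

n_joint, n_muscle, feat, hidden = 25, 17, 6, 8
# Learnable joint -> muscle adjacency, randomly initialized on (0, 1)
# from a (clipped) normal distribution, as the text describes.
A_jm = np.clip(rng.normal(0.5, 0.2, (n_muscle, n_joint)), 0.0, 1.0)
A_mm = np.eye(n_muscle)                  # assumed muscle-side adjacency

W1 = rng.normal(0, 0.1, (feat, hidden))     # graph conv 1
F1 = rng.normal(0, 0.1, (hidden, hidden))   # fully connected 1
W2 = rng.normal(0, 0.1, (hidden, hidden))   # graph conv 2
F2 = rng.normal(0, 0.1, (hidden, 1))        # fully connected 2

H0 = rng.normal(0, 1.0, (n_joint, feat))    # per-joint key point features
H1 = graph_conv(A_jm, H0, W1)               # graph convolutional layer
H2 = relu(H1 @ F1)                          # fully connected layer
H3 = graph_conv(A_mm, H2, W2)               # graph convolutional layer
forces = H3 @ F2                            # fully connected layer -> 17 x 1
loss = huber(forces, np.zeros((n_muscle, 1)))
```

Training would backpropagate the Huber loss into A_jm and the weight matrices; only the forward pass is shown here.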
In another embodiment of the present application, identifying joint key points of an athlete in a video image and obtaining the joint key point data of the athlete while exercising comprises: setting an initial space coordinate of a camera, and establishing a space coordinate system based on the initial space coordinate; identifying joint key points of athletes in the video images, and obtaining key point space coordinates based on the space coordinate system; establishing a first coordinate system based on any joint key point, and converting the key point space coordinate into a joint key point coordinate based on the first coordinate system; and obtaining the joint key point data according to the change of the joint key point when the athlete moves.
In the prior-art scheme using a single depth camera, the camera's viewing range is a 60-degree sector from 0.8 to 3 meters, with a pitch angle of only 60 degrees. When the athlete performs large-range movements, jumps, deep squats, and similar actions, the depth camera therefore cannot capture all body parts; by its nature it can observe position information from only one side of the body, and when joints are occluded it cannot capture the joint information, so observation efficiency is low and insufficient information is available for later correction. Meanwhile, owing to the recognition mechanism of the Kinect depth camera, there are regions of differing accuracy within the camera's field, and accuracy degrades severely beyond the optimal recognition area. This makes the limb extremities more likely to produce erroneous data, causing the motion correction process to fail.
Because the spatial correlation among the skeletal key points is weak, they need to be unified into a coordinate system referenced to the subject; this reduces the influence of absolute displacement on the skeletal key points and increases the influence, on action judgment, of the relative displacement between the skeletal key points and the body position.
Please refer to fig. 3, which is a schematic diagram illustrating a method for establishing a spatial coordinate system in a training guidance method according to an embodiment of the present application. In this embodiment:
establishing a space coordinate system based on the relative positions of the three cameras;
converting, in the spatial coordinate system, each frame of pixels of the video image of the athlete's joint key point into a projection line based on each of the three cameras; calculating the feet of the common perpendiculars between the projection lines; and taking the mean of these perpendicular feet as the joint key point's spatial coordinates in the spatial coordinate system.
The true spatial location to which each key point maps is then computed. For each of the three cameras, the real-space conversion relation in Equation (3) gives, from the key point's projection point and the origin of the imaging model, a projection line L running from the key point in space to the camera origin:
L = λZ + R^T·t    (3)
where R^T·t is the spatial coordinate of the camera. From the lines L1, L2, and L3, the mutual perpendicular feet m12, m21, m13, m31, m23, and m32 are obtained; the mean of these six perpendicular feet gives the coordinates of point M, which is taken as the joint key point's spatial coordinates in the spatial coordinate system.
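The triangulation described above — six mutual perpendicular feet from three projection lines, averaged — can be sketched as follows. The closed-form closest-point solution between two skew lines is a standard construction assumed here; variable names are hypothetical.

```python
import numpy as np

def closest_points(p1, d1, p2, d2):
    """Feet of the common perpendicular between two (skew) 3-D lines.

    Line i is p_i + t * d_i. Returns (m12, m21): the point on line 1
    closest to line 2 and the point on line 2 closest to line 1.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b             # zero only for parallel lines
    t = (b * (d2 @ r) - c * (d1 @ r)) / denom
    s = (a * (d2 @ r) - b * (d1 @ r)) / denom
    return p1 + t * d1, p2 + s * d2

def triangulate(origins, dirs):
    """Average the six perpendicular feet of three projection lines
    (m12, m21, m13, m31, m23, m32) to get the point M."""
    feet = []
    for i in range(3):
        for j in range(i + 1, 3):
            m_ij, m_ji = closest_points(origins[i], dirs[i],
                                        origins[j], dirs[j])
            feet.extend([m_ij, m_ji])
    return np.mean(feet, axis=0)
```

With noise-free lines that all pass through the true key point, every perpendicular foot coincides with that point, so the mean recovers it exactly; with measurement noise the mean acts as a least-effort consensus estimate.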
In one embodiment of the application, the neck joint key point is selected as the origin, the line through the left and right shoulders as the x-axis, the vertical direction as the z-axis, and the direction lying in the horizontal plane perpendicular to both the x-axis and the z-axis as the y-axis; the first coordinate system is established from the x-axis, y-axis, z-axis, and origin.
Through Equations (2) and (3), the 25 spatial coordinate points relative to the camera are converted into the neck xyz coordinate system, generating a 5-dimensional posture matrix with dimensions (n, t, x, y, z), where n is the key point index, t is the time-sequence index, and x, y, z are the key point's position in the neck coordinate system. Comparing each frame with the next then generates a new key point description matrix (n, u, v, l, x, y, z).
This embodiment solves the prior-art problem that, owing to the recognition mechanism of a single Kinect depth camera, the camera has regions of differing accuracy, and accuracy degrades severely beyond the optimal recognition area.
It should be noted that although the neck joint key point is selected as the origin in the above embodiment, in practical application the coordinate system can in principle be established with any key point as the origin, provided the accuracy and convenience of data acquisition are ensured; the present application is not limited in this respect.
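The conversion into the neck-centered coordinate system can be sketched as below. The orthogonalization of the vertical axis against the shoulder axis is an assumption made here so the basis is exactly orthonormal; the patent only names the three axes.

```python
import numpy as np

def to_neck_frame(points, neck, l_shoulder, r_shoulder):
    """Convert world-space key points into the neck-centered frame.

    x-axis: left shoulder -> right shoulder; z-axis: world vertical,
    orthogonalized against x; y-axis: their cross product (horizontal,
    perpendicular to both). `points` is an (N, 3) array in camera/world
    space; the result is (N, 3) in the neck coordinate system.
    """
    x = np.asarray(r_shoulder, float) - np.asarray(l_shoulder, float)
    x /= np.linalg.norm(x)
    z = np.array([0.0, 0.0, 1.0])
    z = z - (z @ x) * x               # make z orthogonal to x
    z /= np.linalg.norm(z)            # fails only if shoulders are vertical
    y = np.cross(z, x)                # completes the orthonormal frame
    R = np.stack([x, y, z])           # rows are the new basis vectors
    return (np.asarray(points, float) - np.asarray(neck, float)) @ R.T

# Hypothetical pose: neck at (1,1,0), shoulders along the world x-axis.
pts = to_neck_frame([[2, 1, 0], [1, 1, 5]],
                    neck=[1, 1, 0], l_shoulder=[0, 1, 0], r_shoulder=[2, 1, 0])
```

Because the frame travels with the subject, the same limb configuration yields the same coordinates regardless of where the athlete stands, which is exactly the absolute-displacement invariance the text asks for.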
Please refer to fig. 4, which is a schematic structural diagram of a stacked hourglass network algorithm in a training guidance method according to an embodiment of the present application.
In this embodiment, each frame of the video image is detected by using a stacked hourglass network algorithm, and joint key points of the athlete are identified.
In this embodiment, each camera captures video images of the user's motion, and whole-body key point detection is performed on the target in each video frame using a stacked hourglass network. The stacked hourglass network has two parts: the front part consists of ordinary multi-layer ResNet convolutional residual blocks and finally generates an image feature map; the second part applies deconvolution to the feature map to obtain the target points of interest, thereby yielding the key point positions.
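A full stacked hourglass network is beyond a short sketch, but the final step described above — reading key point positions off the deconvolved feature maps — can be illustrated as follows, taking each heatmap's hottest pixel as that joint's image position (a common convention assumed here, not stated in the patent).

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Extract (row, col, score) per joint from a (J, H, W) heatmap
    stack, such as the deconvolved output of a stacked-hourglass-style
    network. The hottest pixel of each map is taken as that joint's
    position; its value is kept as a confidence score.
    """
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    idx = flat.argmax(axis=1)                 # hottest pixel per joint
    rows, cols = np.unravel_index(idx, (H, W))
    scores = flat[np.arange(J), idx]
    return np.stack([rows, cols, scores], axis=1)

# Hypothetical 2-joint, 4x4 example with peaks at (1, 2) and (3, 0).
hm = np.zeros((2, 4, 4))
hm[0, 1, 2] = 0.9
hm[1, 3, 0] = 0.7
kps = keypoints_from_heatmaps(hm)
```

In practice the low confidence scores would be thresholded to reject occluded joints before the coordinates feed the triangulation step.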
In another embodiment provided by the application, the change data of the athlete's joint key points during movement are compared with the joint key point change data of the standard movement action, and exercise training guidance is provided according to the comparison result of the joint key point positions.
This embodiment extracts the position information of the user's skeletal key points, uses the graph neural network adjacency matrix to infer the user's muscle exertion from that position information, and compares the exertion with the muscle exertion of the standard action in the database to obtain the difference between the user's and the standard muscle exertion, yielding exertion suggestions and muscle-node exercise suggestions. Combined with the guidance method based on comparison of joint key point positions, comprehensive training guidance on both actions and muscle exertion can be provided to the user. Compared with the prior art, which can only provide a simple comparison of joint positions, the scheme of this embodiment is more scientific and effective: it can give comprehensive training guidance suggestions and improve training efficiency.
Fig. 5 is a schematic structural diagram of an exercise training guidance device provided in an embodiment of the present application. As shown in fig. 5, the apparatus may include a camera 200 and a central processing unit 210.
The camera 200 is used for acquiring video images of athletes;
the central processing unit may include: a joint key point identification module 211, a muscle force application processing module 212 and a motion training guidance module 213;
the joint key point identification module 211 is configured to identify joint key points of the athlete in the video image, and obtain joint key point data of the athlete during movement;
a muscle force application processing module 212, configured to obtain corresponding athlete muscle point force data from the joint key point data according to the muscle force application adjacency matrix model;
and the exercise training guidance module 213 is configured to compare the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and provide exercise training guidance according to the comparison result.
The exercise training guidance device provided by the embodiment realizes the following training guidance process:
the camera 200 acquires a video image of the athlete and sends the video image information to the central processing unit 210, and the joint key point identification module 211 in the central processing unit analyzes and processes the video image to identify the joint key points of the athlete in the video image; meanwhile, joint key point data of the athlete during movement are obtained according to the movement of the joints in the video image; the joint key point identification module 211 sends the joint key point data to the muscle force application processing module 212, and the muscle force application processing module 212 obtains corresponding muscle point force data of the athlete according to the muscle force application adjacent matrix model and sends the muscle point force data to the exercise training guidance module 213; finally, the exercise training guidance module 213 compares the muscle point force data of the athlete with the muscle point force data of the standard action to obtain a comparison result, and provides exercise training guidance according to the comparison result.
In this embodiment, the video recorded by the camera is obtained, the athlete's action is detected, and the position information of the athlete's joint key points is extracted. The muscle exertion of the athlete is then calculated from the joint key point position information using the muscle force application adjacency matrix model, the muscle exertion data is compared with the muscle exertion of the standard action to obtain the difference between the user's exertion and the standard exertion, and an exercise training guidance suggestion is given.
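As an illustrative sketch of this step, assuming for simplicity that the learned adjacency matrix reduces to fixed joint-to-muscle weights (the application's actual model is a trained graph neural network, and all names and numbers below are hypothetical):

```python
def muscle_forces(adjacency, joint_features):
    """Weighted sum of joint features for each muscle point, i.e. A @ x."""
    return {m: sum(w * joint_features[j] for j, w in weights.items())
            for m, weights in adjacency.items()}

def exertion_guidance(athlete_forces, standard_forces, tol=0.1):
    """Muscles whose exertion differs from the standard action by more than tol."""
    return sorted(m for m in standard_forces
                  if abs(athlete_forces[m] - standard_forces[m]) > tol)

# Hypothetical joint-to-muscle adjacency weights and joint motion features:
adjacency = {"biceps": {"elbow": 0.8, "shoulder": 0.2},
             "quadriceps": {"knee": 0.9, "hip": 0.1}}
joint_motion = {"elbow": 1.0, "shoulder": 0.5, "knee": 0.2, "hip": 0.4}

forces = muscle_forces(adjacency, joint_motion)   # biceps ≈ 0.9, quadriceps ≈ 0.22
standard = {"biceps": 0.9, "quadriceps": 0.6}
print(exertion_guidance(forces, standard))        # ['quadriceps']
```

The comparison output marks the muscle points whose exertion departs from the standard action, which is the input to the guidance step.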
The existing technical scheme is based on simple comparison of joint position differences: it can only judge the action itself, cannot give improvement suggestions for muscle exertion, and cannot combine the action with the muscle exertion condition to provide comprehensive training guidance. The technical scheme provided by the embodiment of the application therefore effectively improves training efficiency.
Meanwhile, because images are captured instead of wearable devices being used, the equipment wearing and calibration process is removed, the user experience is improved, and errors between devices are eliminated. Using multi-view cameras solves the problem of blind areas produced by binocular cameras on the same side. The defect that the athlete can only view the data without receiving guidance is overcome. A graph relation network of action and muscle exertion is established, giving the user professional exercise and exertion guidance.
Fig. 6 shows a hardware structure diagram of a sports training guidance device provided in an embodiment of the present application.
The exercise training guidance device may include a camera 300, a processor 301, and a memory 302 having computer program instructions stored thereon.
Specifically, the processor 301 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
In one example, the memory 302 may be a read-only memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The memory 302 may include read-only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, or electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions, and when the software is executed (e.g., by one or more processors), it is operable to perform the operations described with reference to the methods according to an aspect of the present disclosure.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement the methods/steps S1 to S4 in the embodiment shown in fig. 1, and achieve the corresponding technical effects achieved by the embodiment shown in fig. 1 executing the methods/steps, which are not described herein again for brevity.
In one example, the exercise training guidance device may also include a communication interface 303 and a bus 310. As shown in fig. 6, the processor 301, the memory 302, and the communication interface 303 are connected via the bus 310 to communicate with one another.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present application.
It is to be understood that the present application is not limited to the particular arrangements and instrumentality described above and shown in the attached drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions or change the order between the steps after comprehending the spirit of the present application.
The functional blocks shown in the above structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, a block may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in a different order, or simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As described above, only the specific embodiments of the present application are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered within the scope of the present application.
Claims (10)
1. An exercise training guidance method, comprising:
acquiring a video image of an athlete;
identifying joint key points of the athlete in the video image, and obtaining the joint key point data of the athlete during movement;
obtaining corresponding athlete muscle point force data from the joint key point data according to a muscle exertion adjacency matrix model;
and comparing the muscle point force data of the athlete with the muscle point force data of the standard action to obtain a comparison result, and providing exercise training guidance according to the comparison result.
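A minimal sketch of how the comparison result recited above might be rendered as training guidance; the suggestion wording, muscle names, and tolerance are hypothetical, not part of the claim:

```python
def training_guidance(athlete_forces, standard_forces, tol=0.1):
    """Turn per-muscle exertion differences into raise/lower suggestions."""
    tips = []
    for m, target in sorted(standard_forces.items()):
        diff = athlete_forces[m] - target
        if diff > tol:
            tips.append(f"{m}: reduce exertion (excess {diff:.2f})")
        elif diff < -tol:
            tips.append(f"{m}: increase exertion (deficit {-diff:.2f})")
    return tips

print(training_guidance({"biceps": 0.4, "triceps": 0.9},
                        {"biceps": 0.7, "triceps": 0.85}))
# ['biceps: increase exertion (deficit 0.30)']
```

Muscles within the tolerance produce no suggestion, so the guidance stays focused on genuine exertion differences.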
2. The exercise training guidance method of claim 1, wherein establishing the muscle exertion adjacency matrix model comprises:
acquiring position data of a key point of a standard motion action joint and corresponding muscle point force data;
establishing a neural network training set based on the key point position data of the standard motion joint and the corresponding muscle point force data;
establishing a bipartite graph network of corresponding relations between joint key points and muscle points;
and optimizing the bipartite graph network of the corresponding relation of the joint key points and the muscle points according to the neural network training set to obtain the muscle force-exerting adjacency matrix model.
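The optimisation recited in claim 2 could be sketched, under strong simplifying assumptions, as fitting the joint-to-muscle bipartite adjacency weights by plain least-squares gradient descent; this stands in for the full graph-neural-network training, and all sample values are hypothetical:

```python
def fit_adjacency(samples, joints, muscles, lr=0.1, epochs=500):
    """Fit bipartite weights A[muscle][joint] so that A @ joint_features ≈ muscle_forces.

    samples: list of (joint_feature_dict, muscle_force_dict) training pairs.
    """
    A = {m: {j: 0.0 for j in joints} for m in muscles}
    for _ in range(epochs):
        for x, y in samples:
            for m in muscles:
                pred = sum(A[m][j] * x[j] for j in joints)
                err = pred - y[m]
                for j in joints:
                    A[m][j] -= lr * err * x[j]  # gradient step on squared error
    return A

# Hypothetical training set: standard-action joint features with known exertion.
samples = [({"elbow": 1.0, "knee": 0.0}, {"biceps": 0.8}),
           ({"elbow": 0.0, "knee": 1.0}, {"biceps": 0.1})]
A = fit_adjacency(samples, ["elbow", "knee"], ["biceps"])
# A["biceps"] converges to roughly {"elbow": 0.8, "knee": 0.1}
```

Once fitted, the weights play the role of the muscle exertion adjacency matrix model used in claim 1.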
3. The exercise training guidance method of claim 1, wherein identifying the joint key points of the athlete in the video image and obtaining the joint key point data of the athlete during movement comprises:
setting an initial space coordinate of a camera, and establishing a space coordinate system based on the initial space coordinate;
identifying joint key points of athletes in the video images, and obtaining key point space coordinates based on the space coordinate system;
establishing a first coordinate system based on any joint key point, and converting the key point space coordinate into a joint key point coordinate based on the first coordinate system;
and obtaining the joint key point data according to the change of the joint key point when the athlete moves.
4. The exercise training guidance method of claim 3, wherein identifying the joint key points of the athlete in the video image and obtaining the key point spatial coordinates based on the spatial coordinate system comprises:
establishing a space coordinate system based on the relative positions of the three cameras;
converting, in the spatial coordinate system, the pixel of the athlete's joint key point in each frame of the video image of each of the three cameras into a projection line;
calculating the feet of the common perpendiculars between the projection lines;
and taking the mean value of the feet of the perpendiculars as the key point spatial coordinates of the joint key point in the spatial coordinate system.
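The geometry of claim 4 can be sketched as follows: each pair of projection lines contributes the two feet of their common perpendicular, and the mean of all feet estimates the key point. The camera rays below are illustrative (chosen to intersect exactly at one point):

```python
def perpendicular_feet(p1, d1, p2, d2):
    """Feet of the common perpendicular between lines p1 + t*d1 and p2 + s*d2."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                      # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return (tuple(p + t * v for p, v in zip(p1, d1)),
            tuple(p + s * v for p, v in zip(p2, d2)))

def triangulate(lines):
    """Mean of all pairwise perpendicular feet of the projection lines."""
    feet = [f for i in range(len(lines)) for j in range(i + 1, len(lines))
            for f in perpendicular_feet(*lines[i], *lines[j])]
    return tuple(sum(c) / len(feet) for c in zip(*feet))

# Three camera rays (origin, direction) all passing through the point (1, 1, 1):
rays = [((0, 0, 0), (1, 1, 1)),
        ((2, 0, 0), (-1, 1, 1)),
        ((0, 2, 0), (1, -1, 1))]
print(triangulate(rays))  # ≈ (1.0, 1.0, 1.0)
```

With noisy real rays the lines are skew rather than intersecting, and averaging the perpendicular feet gives a robust least-squares-style estimate of the key point.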
5. The exercise training guidance method of claim 3, wherein establishing the first coordinate system based on any joint key point comprises:
selecting the neck joint key point as an origin, taking the line through the left and right shoulders as an x-axis, taking the vertical direction as a z-axis, taking the direction lying in the horizontal plane and perpendicular to both the x-axis and the z-axis as a y-axis, and establishing the first coordinate system from the x-axis, the y-axis, the z-axis and the origin.
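A sketch of the body-centred first coordinate system of claim 5, assuming level shoulders for simplicity; all coordinates are illustrative:

```python
import math

def body_frame(neck, left_shoulder, right_shoulder):
    """Origin at the neck; x along the shoulders, z vertical, y completing the frame."""
    sx = tuple(r - l for r, l in zip(right_shoulder, left_shoulder))
    n = math.sqrt(sum(c * c for c in sx))
    x = tuple(c / n for c in sx)               # x-axis: left shoulder to right shoulder
    z = (0.0, 0.0, 1.0)                        # z-axis: vertical direction
    y = (z[1]*x[2] - z[2]*x[1],                # y-axis: z × x, horizontal and
         z[2]*x[0] - z[0]*x[2],                # perpendicular to both x and z
         z[0]*x[1] - z[1]*x[0])
    return neck, (x, y, z)

def to_body_coords(point, frame):
    """Convert a spatial coordinate into the body-centred first coordinate system."""
    origin, axes = frame
    rel = tuple(p - o for p, o in zip(point, origin))
    return tuple(sum(a * b for a, b in zip(axis, rel)) for axis in axes)

frame = body_frame(neck=(0, 0, 1.5),
                   left_shoulder=(-0.2, 0, 1.45),
                   right_shoulder=(0.2, 0, 1.45))
print(to_body_coords((0.3, 0.1, 1.0), frame))  # ≈ (0.3, 0.1, -0.5)
```

Expressing key points in this frame makes the comparison with the standard action invariant to where the athlete stands relative to the cameras.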
6. The exercise training guidance method of claim 1, wherein identifying the joint key points of the athlete in the video images comprises:
and detecting each frame of image of the video image by utilizing a stacked hourglass network algorithm, and identifying the joint key points of the athlete.
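The stacked hourglass network of claim 6 is a 2-D convolutional architecture; the framework-free 1-D toy below only illustrates its recursive downsample/upsample structure with skip connections, and the feature values and operations are stand-ins:

```python
def hourglass(features, depth):
    """One hourglass: pool down, process at the bottleneck, upsample, merge skips."""
    if depth == 0:
        return [f * 2 for f in features]          # stand-in for bottleneck convolutions
    skip = features                               # skip branch kept at this scale
    down = [max(features[i], features[i + 1])     # 2x max-pool downsampling
            for i in range(0, len(features) - 1, 2)]
    inner = hourglass(down, depth - 1)
    up = [v for v in inner for _ in (0, 1)]       # nearest-neighbour upsampling
    return [s + u for s, u in zip(skip, up)]      # merge with the skip connection

def stacked_hourglass(features, stacks=2, depth=2):
    """Stacking hourglasses lets each stage refine the previous prediction."""
    for _ in range(stacks):
        features = hourglass(features, depth)
    return features

print(stacked_hourglass([1, 2, 3, 4], stacks=1, depth=1))  # [5, 6, 11, 12]
```

In the real detector the output at the finest scale is a heatmap per joint, and the key point is read off as the heatmap maximum in each frame.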
7. The exercise training guidance method of claim 1, further comprising:
comparing the joint key point change data of the athlete during movement with the joint key point change data of the standard motion action;
and providing exercise training guidance according to the comparison result of the positions of the key points of the joints.
8. An exercise training guidance device, comprising: a camera and a central processing unit;
the camera is used for acquiring a video image of the athlete;
the central processing unit includes: the system comprises a joint key point identification module, a muscle exerting processing module and a motion training guidance module;
the joint key point identification module is used for identifying the joint key points of the athletes in the video images and obtaining the joint key point data of the athletes during movement;
the muscle force application processing module is used for obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle exertion adjacency matrix model;
the exercise training guidance module is used for comparing the muscle point force data of the athlete with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
9. An exercise training guidance apparatus, characterized in that the apparatus comprises: a camera, a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the exercise training guidance method of any one of claims 1-7.
10. A computer storage medium having computer program instructions stored thereon that, when executed by a processor, implement the exercise training guidance method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011271174.3A CN112364785B (en) | 2020-11-13 | 2020-11-13 | Exercise training guiding method, device, equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112364785A true CN112364785A (en) | 2021-02-12 |
CN112364785B CN112364785B (en) | 2023-07-25 |
Family
ID=74515571
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011271174.3A Active CN112364785B (en) | 2020-11-13 | 2020-11-13 | Exercise training guiding method, device, equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112364785B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112966593A (en) * | 2021-03-03 | 2021-06-15 | 河南鑫安利安全科技股份有限公司 | Enterprise safety standardized operation method and system based on artificial intelligence and big data |
CN113486798A (en) * | 2021-07-07 | 2021-10-08 | 首都体育学院 | Training plan making processing method and device based on causal relationship |
CN113762214A (en) * | 2021-09-29 | 2021-12-07 | 宁波大学 | AI artificial intelligence based whole body movement assessment system |
CN113842622A (en) * | 2021-09-23 | 2021-12-28 | 京东方科技集团股份有限公司 | Motion teaching method, device, system, electronic equipment and storage medium |
CN115019395A (en) * | 2022-06-10 | 2022-09-06 | 杭州电子科技大学 | Group action consistency detection method and system based on stacked hourglass network |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105596021A (en) * | 2014-11-19 | 2016-05-25 | 株式会社东芝 | Image analyzing device and image analyzing method |
CN106137504A (en) * | 2016-08-17 | 2016-11-23 | 杨如山 | A kind of complex rehabilitation system |
CN106175802A (en) * | 2016-08-29 | 2016-12-07 | 吉林大学 | A kind of in body osteoarthrosis stress distribution detection method |
CN106202739A (en) * | 2016-07-14 | 2016-12-07 | 哈尔滨理工大学 | A kind of skeletal muscle mechanical behavior multi-scale Modeling method |
CN107735797A (en) * | 2015-06-30 | 2018-02-23 | 三菱电机株式会社 | Method for determining the motion between the first coordinate system and the second coordinate system |
CN108446442A (en) * | 2018-02-12 | 2018-08-24 | 中国科学院自动化研究所 | The simplification method of class neuromuscular bone robot upper limb model |
CN109448815A (en) * | 2018-11-28 | 2019-03-08 | 平安科技(深圳)有限公司 | Self-service body building method, device, computer equipment and storage medium |
CN109753891A (en) * | 2018-12-19 | 2019-05-14 | 山东师范大学 | Football player's orientation calibration method and system based on human body critical point detection |
CN110147743A (en) * | 2019-05-08 | 2019-08-20 | 中国石油大学(华东) | Real-time online pedestrian analysis and number system and method under a kind of complex scene |
CN110355761A (en) * | 2019-07-15 | 2019-10-22 | 武汉理工大学 | A kind of healing robot control method based on joint stiffness and muscular fatigue |
CN110660017A (en) * | 2019-09-02 | 2020-01-07 | 北京航空航天大学 | Dance music recording and demonstrating method based on three-dimensional gesture recognition |
CN111046715A (en) * | 2019-08-29 | 2020-04-21 | 郑州大学 | Human body action comparison analysis method based on image retrieval |
CN111062356A (en) * | 2019-12-26 | 2020-04-24 | 沈阳理工大学 | Method for automatically identifying human body action abnormity from monitoring video |
Also Published As
Publication number | Publication date |
---|---|
CN112364785B (en) | 2023-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112364785B (en) | Exercise training guiding method, device, equipment and computer storage medium | |
CN107301370B (en) | Kinect three-dimensional skeleton model-based limb action identification method | |
CN107103298B (en) | Pull-up counting system and method based on image processing | |
AU2004237876C1 (en) | Golf swing diagnosis system | |
CN111144217A (en) | Motion evaluation method based on human body three-dimensional joint point detection | |
CN110738154A (en) | pedestrian falling detection method based on human body posture estimation | |
CN109325466A (en) | A kind of smart motion based on action recognition technology instructs system and method | |
CN112464915B (en) | Push-up counting method based on human skeleton point detection | |
Park et al. | Accurate and efficient 3d human pose estimation algorithm using single depth images for pose analysis in golf | |
Yang et al. | Human exercise posture analysis based on pose estimation | |
CN111883229A (en) | Intelligent movement guidance method and system based on visual AI | |
CN114973401A (en) | Standardized pull-up assessment method based on motion detection and multi-mode learning | |
CN112568898A (en) | Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image | |
CN113255623B (en) | System and method for intelligently identifying push-up action posture completion condition | |
CN117292288A (en) | Sports test method, system, electronic device, chip and storage medium | |
CN117133057A (en) | Physical exercise counting and illegal action distinguishing method based on human body gesture recognition | |
CN111539364A (en) | Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting | |
CN114241602B (en) | Deep learning-based multi-objective moment of inertia measurement and calculation method | |
Mangin et al. | An instrumented glove for swimming performance monitoring | |
CN115937969A (en) | Method, device, equipment and medium for determining target person in sit-up examination | |
CN113361333B (en) | Non-contact type riding motion state monitoring method and system | |
CN211878611U (en) | Ski athlete gesture recognition system based on multi-feature value fusion | |
Tang | [Retracted] Detection Algorithm of Tennis Serve Mistakes Based on Feature Point Trajectory | |
CN113536917A (en) | Dressing identification method, dressing identification system, electronic device and storage medium | |
JP2009095631A (en) | Golf swing measuring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |