CN112364785B - Exercise training guiding method, device, equipment and computer storage medium - Google Patents


Info

Publication number
CN112364785B
CN112364785B (application CN202011271174.3A)
Authority
CN
China
Prior art keywords
muscle
athlete
joint
data
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011271174.3A
Other languages
Chinese (zh)
Other versions
CN112364785A (en)
Inventor
唐博恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Xiongan ICT Co Ltd
China Mobile System Integration Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Xiongan ICT Co Ltd
China Mobile System Integration Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Xiongan ICT Co Ltd, China Mobile System Integration Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011271174.3A priority Critical patent/CN112364785B/en
Publication of CN112364785A publication Critical patent/CN112364785A/en
Application granted granted Critical
Publication of CN112364785B publication Critical patent/CN112364785B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63BAPPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06Indicating or scoring devices for games or players, or for other sports activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The embodiment of the application provides an exercise training guiding method, apparatus, device and computer storage medium, wherein the method comprises the following steps: acquiring a video image of an athlete; identifying joint key points of the athlete in the video image, and obtaining joint key point data of the athlete during movement; obtaining corresponding athlete muscle point force data from the joint key point data according to a muscle force adjacency matrix model; and comparing the athlete muscle point force data with standard-action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result. In the embodiment provided by the application, the joint key points of the athlete are identified, the athlete's muscle force data are derived from the changes of the skeletal key points by means of the muscle force adjacency matrix model, and the data are compared with the muscle forces of the standard action to obtain force-exertion suggestions and exercise suggestions.

Description

Exercise training guiding method, device, equipment and computer storage medium
Technical Field
The application belongs to the technical field of motion recognition, and particularly relates to an exercise training guiding method, apparatus, device and computer storage medium.
Background
With the development of science and technology, motion recognition has also advanced greatly and shows great application value in the field of training assistance. It can be used in sports, dance and similar fields to analyze, evaluate and assist the training of professional technical actions.
In prior-art implementations, wearable devices such as bracelets and patches are fixed at the extremities of the limbs to detect the direction and speed of movement; combined with a Kinect depth camera, guidance is provided by comparing the joint positions and movement sequence against standard movements and determining the differences.
However, the prior art is based on simple joint-position difference comparison: it can only judge whether an action matches the standard action, and cannot provide force-exertion differences and suggestions, so training efficiency is low.
Disclosure of Invention
The embodiment of the application provides an exercise training guiding method, apparatus, device and computer storage medium, which can solve the problems in the prior art that muscle force differences and guiding suggestions cannot be provided and that training guidance efficiency is low.
In a first aspect, embodiments of the present application provide an exercise training guiding method, the method including:
acquiring a video image of an athlete;
identifying joint key points of the athlete in the video image, and obtaining joint key point data of the athlete during the movement;
obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle force adjacency matrix model;
and comparing the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
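As an illustrative sketch only, the four claimed steps can be arranged as a minimal pipeline. Every function name, data shape (33 key points and 17 muscle points per frame) and stub body below is a hypothetical placeholder, not an implementation taken from the disclosure:

```python
# Hypothetical skeleton of the claimed four-step method; the stubs return
# placeholder data and stand in for the real detector and model.

def identify_joint_keypoints(video_frames):
    # Stand-in for joint key point identification: one (x, y, z) triple
    # per key point per frame (33 key points assumed, as listed later).
    return [[(0.0, 0.0, 0.0)] * 33 for _ in video_frames]

def muscle_force_from_keypoints(keypoint_data, adjacency_model):
    # Stand-in for the muscle force adjacency matrix model: one force
    # value per muscle point per frame (17 muscle points assumed).
    return [[0.0] * 17 for _ in keypoint_data]

def compare_with_standard(athlete_forces, standard_forces):
    # Per-muscle, per-frame difference against the standard action.
    return [[a - s for a, s in zip(fa, fs)]
            for fa, fs in zip(athlete_forces, standard_forces)]

frames = [object()] * 4                       # placeholder "video image"
keypoints = identify_joint_keypoints(frames)
forces = muscle_force_from_keypoints(keypoints, adjacency_model=None)
diff = compare_with_standard(forces, forces)  # against itself: all zeros
print(len(diff), len(diff[0]))
```

A real implementation would replace each stub with the components the embodiments describe below (the stacked hourglass detector, the adjacency matrix model, and the standard-action database).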
In one embodiment, establishing the muscle force adjacency matrix model includes:
acquiring position data of key points of a standard sport action joint and corresponding muscle point force data;
establishing a neural network training set based on the standard athletic movement joint key point position data and the corresponding muscle point force data;
establishing a bipartite graph network of the corresponding relation between the key points of the joints and the muscle points;
and optimizing a bipartite graph network of the corresponding relation between the joint key points and the muscle points according to the neural network training set to obtain the muscle stress adjacency matrix model.
In one embodiment, identifying joint keypoints for the athlete in the video image and obtaining joint keypoint data for the athlete while the athlete is in motion comprises:
setting initial space coordinates of a camera, and establishing a space coordinate system based on the initial space coordinates;
identifying joint key points of athletes in the video image, and obtaining key point space coordinates based on the space coordinate system;
establishing a first coordinate system based on any joint key point, and converting the space coordinates of the key point into joint key point coordinates based on the first coordinate system;
and obtaining the joint key point data according to the joint key point change of the athlete during the exercise.
In one embodiment, identifying joint key points of the athlete in the video image and obtaining the key point spatial coordinates in the spatial coordinate system comprises:
establishing a space coordinate system based on the relative positions of the three cameras;
converting each frame of pixels of the video image of the joint key points of the athlete into projection lines based on three cameras respectively under the space coordinate system;
calculating mutual foot points of the projection lines;
and taking the mean value of the mutual perpendicular foot points as the spatial coordinates of the joint key point in the spatial coordinate system.
In one embodiment, establishing a first coordinate system based on any joint key point comprises:
selecting the neck joint key point as the origin, taking the line through the left and right shoulders as the x axis, the vertical direction as the z axis, and the direction lying in the horizontal plane and perpendicular to both the x axis and the z axis as the y axis, and establishing the first coordinate system from the x, y and z axes and the origin.
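This neck-centered frame can be sketched with plain vector algebra. The axis conventions follow the text; everything else (the assumption that the world z axis is vertical, the example coordinates) is illustrative:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def norm(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def neck_frame(l_shoulder, r_shoulder):
    # x axis: left shoulder toward right shoulder, kept horizontal
    x = sub(r_shoulder, l_shoulder)
    x[2] = 0.0               # assumption: world z is the vertical direction
    x = norm(x)
    z = [0.0, 0.0, 1.0]      # vertical axis
    y = cross(z, x)          # horizontal, perpendicular to both x and z
    return x, y, z

def to_neck_coords(p, neck, basis):
    # Express point p relative to the neck origin in the (x, y, z) basis.
    d = sub(p, neck)
    return [dot(d, axis) for axis in basis]

neck = [0.0, 0.0, 1.5]
basis = neck_frame([-0.2, 0.0, 1.5], [0.2, 0.0, 1.5])
result = to_neck_coords([0.2, 0.0, 1.5], neck, basis)
print(result)  # right shoulder expressed in the neck frame
```

Expressing every key point in this body-relative frame removes the athlete's absolute displacement, as the embodiment intends.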
In one embodiment, the identifying joint keypoints for the athlete in the video image comprises:
and detecting each frame of image of the video image by using a stacked hourglass network algorithm, and identifying joint key points of the athlete.
In one embodiment, the method further comprises: comparing the joint key point change data of the athlete during the exercise with the joint key point change data of the standard exercise action;
and providing exercise training guidance according to the result of the joint key point position comparison.
In a second aspect, embodiments of the present application provide an exercise training guiding device, the device including: a camera and a central processing unit;
the camera is used for acquiring a video image of the athlete;
the central processing unit includes: a joint key point identification module, a muscle force processing module and an exercise training guidance module;
the joint key point identification module is used for identifying joint key points of the athlete in the video image and obtaining joint key point data of the athlete during the movement;
the muscle force processing module is used for obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle force adjacency matrix model;
the exercise training guidance module is used for comparing the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
In a third aspect, embodiments of the present application provide an exercise training guiding device, the device comprising: a camera, a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the exercise training guiding method described above.
In a fourth aspect, embodiments of the present application provide a computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the exercise training guiding method described above.
The exercise training guiding method, apparatus, device and computer storage medium provided by the embodiments of the application can detect the athlete's actions, extract the position information of the athlete's joint key points, calculate the athlete's muscle force conditions from the joint key point positions using the muscle force adjacency matrix model, and compare the muscle force data with the muscle forces of the standard action, obtaining the difference between the user's muscle exertion and the standard exertion and providing exercise training guidance suggestions.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below; a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a training instruction method for exercise according to an embodiment of the present application;
FIG. 2 is a bipartite graph of the correspondence between joint key points and muscle points in an exercise training guiding method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for establishing a spatial coordinate system in a training guidance method for exercise according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a stacked hourglass network algorithm in a training coaching method according to one embodiment of the present application;
FIG. 5 is a schematic structural view of a training instruction device according to an embodiment of the present application;
fig. 6 is a schematic hardware structure of an exercise training guidance device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises that element.
The embodiment provided by the application can be used in the fields of sports, dance and the like, and can be used for training guidance and analysis and evaluation of technical actions.
The prior art is based on simple joint-position difference comparison: it can only judge whether an action matches the standard action, and cannot provide force-exertion differences and suggestions, so training efficiency is low.
In order to solve the problems in the prior art, the embodiments of the present application provide an exercise training guiding method, apparatus, device and computer storage medium. By identifying joint key points in a video image of an athlete, calculating the user's muscle exertion from the joint key point positions using the muscle force adjacency matrix model, and comparing it with the muscle exertion of a standard action, the difference between the user's muscle exertion and the standard exertion is obtained, and force-exertion suggestions and muscle-node exercise suggestions are provided.
The following first describes the exercise training guidance method provided in the embodiments of the present application.
Referring to fig. 1, a flow chart of a training instruction method according to an embodiment of the present application is shown.
In this embodiment, the following steps may be included:
s1: and obtaining a video image of the athlete.
The technical scheme provided by the embodiments of the application processes video images captured by cameras and requires no wearable information-extraction devices. When such devices are fixed at the extremities of the limbs, an inverse kinematics solution is required to obtain the non-terminal joint positions; the inverse kinematics problem can have multiple solutions, solving becomes exceptionally difficult with the high degrees of freedom of multiple joints, and the positions of the internal joint points cannot be predicted. Moreover, the devices are cumbersome to wear and the sensing devices must be calibrated by professionals, which increases the difficulty of use. The technical scheme of the embodiments therefore dispenses with the wearing and calibration of wearable devices and improves the user experience.
S2: and identifying the joint key points of the athlete in the video image, and obtaining the joint key point data of the athlete during the movement.
In this embodiment, the camera used may be a depth camera; capturing video with a depth camera makes operations such as information extraction, joint key point identification and motion tracking more convenient and effective.
The video images acquired by the depth camera are processed to identify the athlete's joint key points, track the motion trajectories of the joint key points, and so on.
S3: obtaining corresponding athlete muscle point force data from the joint key point data according to a muscle force adjacency matrix model;
the surface electromyographic signal intensity and the muscle force intensity of the same part doing the same action are in a linear proportional relation, so that the surface electromyographic signal can be equal to the muscle force intensity.
In this embodiment, a standard graph neural network training data set is established: the EMG signal intensities of a professional athlete performing the relevant actions are collected, and these data are used as standard-action data to optimize the parameters of the adjacency matrix model.
The correspondence between the joint key points and the muscle exertion conditions can be obtained from the adjacency matrix model, so the muscle force data can be calculated from the position information and motion state of the skeletal (joint) key points.
S4: and comparing the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result.
Through the above embodiments, the difference between the user's muscle exertion and the standard muscle exertion can be obtained, and force-exertion suggestions and muscle-node exercise suggestions can be given. This solves the problem in the prior art that, being based only on simple joint-position difference comparison, only a yes/no judgment of the action can be given and no suggestions for improving muscle exertion can be provided, resulting in low training efficiency. According to this embodiment, not only force-exertion guidance but also action guidance based on the identified joint key points can be provided; combining action guidance with muscle force guidance yields professional training guidance and effectively improves training efficiency.
Please refer to fig. 2, a bipartite graph of the correspondence between joint key points and muscle points in the exercise training guiding method provided by an embodiment of the present application. In this embodiment, building the muscle force adjacency matrix model includes:
acquiring position data of key points of a standard sport action joint and corresponding muscle point force data;
establishing a neural network training set based on the standard athletic movement joint key point position data and the corresponding muscle point force data; establishing a bipartite graph network of the corresponding relation between the key points of the joints and the muscle points; and optimizing a bipartite graph network of the corresponding relation between the joint key points and the muscle points according to the neural network training set to obtain the muscle stress adjacency matrix model.
In the above embodiments, the selected (skeletal joint) key points comprise 33 key points in total: top of the head, left ear, right ear, left eye, right eye, nose, left mouth corner, right mouth corner, head, neck, right index finger, right thumb, right palm center, right wrist, right elbow, right shoulder, shoulder center, left shoulder, left elbow, left wrist, left palm center, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot;
the selected muscle points comprise 17 in total: the trapezius, pectoralis major, deltoid, trapezius, latissimus dorsi, biceps brachii, triceps brachii, extensor digitorum, serratus anterior, rectus abdominis, external oblique, rectus femoris, internal rectus femoris, biceps femoris, gluteus maximus, gastrocnemius and soleus muscles.
According to the bipartite graph network, an adjacency matrix of dimension 17x25x6 is set and randomly initialized from a normal distribution over the interval 0 to 1; a Huber loss function is set; and the graph network is built with the following structure:
convolution layer - fully connected layer - convolution layer - fully connected layer;
the convolution layer uses the graph convolution formula:
H_{l+1} = σ(A H_l W_l)    (1)
where W_l is the weight parameter matrix of layer l and σ(·) is the ReLU activation function.
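Equation (1) can be illustrated with a toy forward pass in plain Python. The sizes (3 muscle nodes, 4 key-point nodes, 2 features) and all values are arbitrary illustrations, not the 17x25x6 configuration of the embodiment:

```python
def matmul(A, B):
    # Plain list-of-lists matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def relu(M):
    return [[max(0.0, v) for v in row] for row in M]

def gcn_layer(A, H, W):
    # H_{l+1} = sigma(A H_l W_l), equation (1)
    return relu(matmul(matmul(A, H), W))

# Adjacency: which key-point nodes contribute to which muscle nodes.
A = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0, 0.0]]
H = [[0.5, -1.0], [1.0, 2.0], [0.0, 0.5], [-0.5, 1.0]]  # key-point features
W = [[1.0, 0.0], [0.0, 1.0]]                            # identity weights
out = gcn_layer(A, H, W)
print(out)
```

Each output row aggregates exactly the key-point features that the adjacency row connects to, which is what lets the trained matrix encode the key-point-to-muscle relation.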
A standard graph neural network training data set is established: a professional athlete is invited to attach EMG sensing devices to the 17 corresponding muscle points, and the muscle electrical signal intensities are collected. For the same body part performing the same action, the surface EMG signal intensity is linearly proportional to the muscle force intensity, so the surface EMG signal can be used as a measure of muscle force intensity.
At the same time, video equipment collects the skeletal key point positions, so the muscle electrical signal intensities of the professional athlete performing standard movements and the image-derived skeletal key point positions are recorded together. These serve as the training data for the neural network. The adjacency matrix is trained with this data, its parameters are optimized, and a description of the relation between muscle force and skeletal key point movement is obtained. From this adjacency matrix, muscle force data can be calculated (inferred) from the positions and motion states of the skeletal key points.
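The parameter optimization described above, fitting adjacency weights so that predicted muscle forces match EMG-derived targets under a Huber loss, can be sketched for a single muscle point. The features, target value and learning rate are all made up for illustration:

```python
def huber_grad(pred, target, delta=1.0):
    # Gradient of the Huber loss w.r.t. pred: linear residual near zero,
    # clipped to +/- delta for large residuals.
    r = pred - target
    return r if abs(r) <= delta else delta * (1.0 if r > 0 else -1.0)

features = [0.5, 1.0, -0.5]   # illustrative key-point motion features
target = 2.0                  # hypothetical EMG-derived force value
a = [0.1, 0.1, 0.1]           # adjacency weights for this muscle point
lr = 0.1
for _ in range(500):
    pred = sum(ai * fi for ai, fi in zip(a, features))
    g = huber_grad(pred, target)
    # Gradient step on each adjacency weight.
    a = [ai - lr * g * fi for ai, fi in zip(a, features)]
pred = sum(ai * fi for ai, fi in zip(a, features))
print(round(pred, 3))
```

After training, the weights predict the target force from the features; the embodiment does the analogous fit jointly over all 17 muscle points and all key points.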
In another embodiment of the present application, identifying joint keypoints of a player in a video image and obtaining joint keypoint data for the player during play includes: setting initial space coordinates of a camera, and establishing a space coordinate system based on the initial space coordinates; identifying joint key points of athletes in the video image, and obtaining key point space coordinates based on the space coordinate system; establishing a first coordinate system based on any joint key point, and converting the space coordinates of the key point into joint key point coordinates based on the first coordinate system; and obtaining the joint key point data according to the joint key point change of the athlete during the exercise.
In the prior-art scheme using a single depth camera, the viewing range of the camera is a 60-degree sector extending from 0.8 m to 3 m, and the pitch angle is only 60 degrees. Therefore, when an athlete performs wide-ranging movements, jumps or deep squats, the depth camera cannot capture all body parts. Due to its characteristics, the camera can only observe position information on one side of the body, and when joints are occluded it cannot capture joint information at all, so observation efficiency is low and insufficient information is available for subsequent correction. Meanwhile, the recognition mechanism of the Kinect depth camera produces regions of differing precision, and precision degrades severely outside the optimal recognition region. This makes erroneous data more likely at the limb extremities, causing the action-correction process to fail.
Because the skeletal key points in space are only weakly correlated with one another, they need to be unified into a coordinate system that takes the tester as reference, so that the influence of absolute displacement on the skeletal key points is reduced and the influence of the displacement of the skeletal key points relative to the body on action judgment is increased.
Fig. 3 is a schematic diagram of a method for establishing a spatial coordinate system in a training guidance method according to an embodiment of the present application. In this embodiment:
establishing a space coordinate system based on the relative positions of the three cameras;
converting each frame of pixels of the video image of the athlete's joint key points into projection lines based on the three cameras respectively in the spatial coordinate system; calculating the mutual perpendicular foot points of the projection lines; and taking the mean value of the mutual foot points as the spatial coordinates of the joint key points in the spatial coordinate system.
The real spatial location to which each key point maps is calculated. The key points observed by the three cameras are paired with one another through formula (2).
From the projection point q and the origin Q of the imaging model, the projection line L from the key point in space to the camera origin can be obtained:
L = λZ + R^T t    (3)
where R^T t is the spatial coordinate of the camera origin. From L1, L2 and L3, the mutual perpendicular foot points m12, m21, m13, m31, m23 and m32 are obtained; the mean of these 6 foot points gives the coordinates of point M, which is taken as the spatial coordinate of the joint key point in the spatial coordinate system.
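The foot-point averaging can be sketched with standard 3-D line geometry. The closest-point formula below is textbook material, and the three example lines are constructed, purely for illustration, to pass through a known point so that the averaged M recovers it; camera calibration and formula (2) are not reproduced:

```python
def closest_points(p1, d1, p2, d2):
    # Feet of the common perpendicular between lines p1 + s*d1 and p2 + t*d2.
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    den = a * c - b * b            # zero only for parallel lines
    s = (b * e - c * d) / den
    t = (a * e - b * d) / den
    foot1 = [p + s * v for p, v in zip(p1, d1)]
    foot2 = [p + t * v for p, v in zip(p2, d2)]
    return foot1, foot2

# Three projection lines that all pass through the point (1, 1, 1).
lines = [([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]),
         ([2.0, 0.0, 0.0], [-1.0, 1.0, 1.0]),
         ([0.0, 2.0, 0.0], [1.0, -1.0, 1.0])]
feet = []
for i in range(3):
    for j in range(i + 1, 3):
        f1, f2 = closest_points(*lines[i], *lines[j])
        feet += [f1, f2]
M = [sum(f[k] for f in feet) / len(feet) for k in range(3)]
print([round(v, 6) for v in M])
```

With noisy real projection lines the six feet no longer coincide, and the mean M is the triangulated key point position.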
In one embodiment of the present application, the neck joint key point is selected as the origin, the line through the left and right shoulders as the x axis, the vertical direction as the z axis, and the direction lying in the horizontal plane and perpendicular to the x and z axes as the y axis; the first coordinate system is established from the x, y and z axes and the origin.
Through formulas (2) and (3), the 25 spatial coordinate points relative to the cameras are converted into the neck xyz coordinate system to generate a 5-dimensional pose difference matrix. The dimensions of the matrix are (n, t, x, y, z), where n is the key point index, t is the time-sequence index, and x, y, z are the position of the key point in the neck coordinate system. Then, by comparing each frame with the preceding frame, a new key point description matrix (n, u, v, l, x, y, z) is generated.
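The text does not spell out how the (u, v, l) components are computed, only that they come from comparing consecutive frames. As a hedged sketch, one plausible reading is that they are the per-axis displacements between frame t-1 and frame t; this interpretation is an assumption, not a statement of the disclosure:

```python
def frame_differences(track):
    # track: list of (x, y, z) neck-frame positions of one key point over time.
    # Assumed reading: (u, v, l) = displacement per axis since the last frame.
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:]):
        u, v, l = x1 - x0, y1 - y0, z1 - z0
        out.append(tuple(round(c, 6) for c in (u, v, l, x1, y1, z1)))
    return out

track = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.2), (0.3, 0.1, 0.2)]
out = frame_differences(track)
print(out)
```

Whatever the exact definition, the resulting (n, u, v, l, x, y, z) matrix augments each key point position with its motion state, which is what the adjacency matrix model consumes.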
This embodiment addresses the problem that the existing single-Kinect recognition mechanism produces regions of differing precision in the camera, with precision degrading severely outside the optimal recognition region.
It should be noted that, although the neck joint key point is selected as the origin in the above embodiment, in practice the coordinate system can in theory be established with any key point as the origin, so long as accuracy and convenience of data acquisition are ensured; the application is not limited in this respect.
Referring to fig. 4, a schematic diagram of a stacked hourglass network algorithm in a training guidance method is provided in an embodiment of the present application.
In this embodiment, a stacked hourglass network algorithm is used to detect each frame of the video image and identify the athlete's joint key points.
Each camera captures video images of the user's movements, and a stacked hourglass network performs whole-body key point detection on the target in every frame. The stacked hourglass network consists of two parts: the front part is composed of ordinary multi-layer ResNet convolutional residual blocks and finally generates an image feature map; the second part performs deconvolution operations on the feature map to obtain the target points of interest, thereby obtaining the key point positions.
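The characteristic shape of an hourglass module (downsample, process at low resolution, upsample, add a high-resolution skip branch) can be shown on a toy 1-D signal. This is only an analogy for the architecture; real stacked hourglass networks operate on 2-D feature maps with convolutional residual blocks:

```python
# Toy 1-D "hourglass": illustrates only the encode/decode-with-skip shape.

def max_pool(xs):
    # Encoder step: halve the resolution.
    return [max(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]

def upsample(xs):
    # Decoder step: nearest-neighbour upsampling back to full resolution.
    return [x for x in xs for _ in range(2)]

def hourglass(xs):
    skip = xs                      # high-resolution skip branch
    low = max_pool(xs)             # downsample
    low = [x * 0.5 for x in low]   # stand-in for low-resolution processing
    up = upsample(low)             # restore resolution
    return [a + b for a, b in zip(skip, up)]

signal = [1.0, 3.0, 2.0, 0.0]
result = hourglass(signal)
print(result)
```

Stacking several such modules, each refining the previous heatmaps, is what gives the detector both global pose context and precise key point localization.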
In another embodiment provided in the present application, the joint key point change data of the athlete during the exercise is compared with the joint key point change data of the standard exercise action; and providing exercise training guidance according to the result of the joint key point position comparison.
This embodiment extracts the position information of the user's skeletal key points, infers the user's muscle exertion from those positions using the graph neural network adjacency matrix, and compares it with the muscle exertion of the standard action in the database to obtain the difference between the user's and the standard muscle exertion, yielding force-exertion suggestions and muscle-node exercise suggestions. Combined with the guidance method based on joint key point position comparison, comprehensive training guidance covering both actions and muscle force can be provided to the user. Compared with the prior art, which offers only simple joint-position comparison without further judgment, the scheme of the embodiments of the application is more scientific and effective, gives comprehensive training guidance suggestions, and improves training efficiency.
Fig. 5 is a schematic structural diagram of a training guidance device for exercise according to an embodiment of the present application. As shown in fig. 5, the apparatus may include a camera 200 and a central processing unit 210.
The camera 200 is used for acquiring a video image of an athlete;
the central processing unit may include: the device comprises a joint key point identification module 211, a muscle strength processing module 212 and a motion training guidance module 213;
the joint key point identification module 211 is configured to identify joint key points of the athlete in the video image, and obtain joint key point data of the athlete during the exercise;
the muscle force processing module 212 is configured to obtain the corresponding athlete muscle point force data from the joint key point data according to the muscle force adjacency matrix model;
and the exercise training guidance module 213 is configured to compare the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and provide exercise training guidance according to the comparison result.
The exercise training guidance device provided in this embodiment implements training guidance as follows:
the camera 200 acquires the athlete video image, sends the video image information to the central processing unit 210, and the joint key point identification module 211 in the central processing unit analyzes and processes the video image to identify the joint key point of the athlete in the video image; meanwhile, according to the movement of the joints in the video image, obtaining joint key point data of the athlete during the movement; the joint key point identification module 211 sends the joint key point data to the muscle strength processing module 212, and the muscle strength processing module 212 obtains corresponding athlete muscle point strength data from the joint key point data according to the muscle strength adjacency matrix model and sends the corresponding athlete muscle point strength data to the exercise training guidance module 213; finally, the exercise training guidance module 213 compares the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and provides exercise training guidance according to the comparison result.
In this embodiment, the camera's video recording is acquired, the athlete's actions are detected, and the position information of the athlete's joint key points is extracted. The muscle force adjacency matrix model then computes the athlete's muscle exertion from the joint key point positions; the muscle exertion data is compared with that of the standard action to obtain the difference between the user's muscle exertion and the standard, and an exercise training guidance suggestion is given.
The prior-art scheme, based only on simple joint-position difference comparison, can neither provide suggestions for improving muscle exertion nor combine actions with muscle exertion conditions to give comprehensive training guidance.
Meanwhile, by using image capture, the scheme removes the device-wearing and calibration process, improves the user experience, and eliminates inter-device errors. Using multi-view cameras solves the blind-spot problem of same-side binocular cameras. It also avoids the drawback that athletes could only view raw data without receiving guidance: by establishing a graph relation network linking actions and muscle exertion, professional exercise and exertion guidance is given to the user.
Fig. 6 shows a schematic hardware structure of the exercise training guidance device provided in the embodiment of the present application.
The exercise training guidance device may include a camera 300, a processor 301, and a memory 302 storing computer program instructions.
In particular, the processor 301 may include a central processing unit (Central Processing Unit, CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may comprise a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (Universal Serial Bus, USB) drive, or a combination of two or more of these. In one example, memory 302 may include removable or non-removable (or fixed) media, or memory 302 may be a non-volatile solid-state memory. Memory 302 may be internal or external to the exercise training guidance device.
In one example, memory 302 may be Read Only Memory (ROM). In one example, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
Memory 302 may include Read Only Memory (ROM), random Access Memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory includes one or more tangible (non-transitory) computer-readable storage media (e.g., memory devices) encoded with software comprising computer-executable instructions and when the software is executed (e.g., by one or more processors) it is operable to perform the operations described with reference to methods in accordance with aspects of the present disclosure.
The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement the methods/steps S1 to S4 in the embodiment shown in fig. 1, and achieve the corresponding technical effects achieved by executing the methods/steps in the embodiment shown in fig. 1, which are not described herein for brevity.
In one example, the exercise training guidance device may further include a communication interface 303 and a bus 310. As shown in fig. 6, the processor 301, the memory 302, and the communication interface 303 are connected to each other by the bus 310 and communicate with each other.
The communication interface 303 is mainly used to implement communication between each module, device, unit and/or apparatus in the embodiments of the present application.
Bus 310 includes hardware, software, or both that couple the components of the exercise training guidance device to each other. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (9)

1. A method of athletic training coaching, comprising:
acquiring a video image of an athlete based on a camera, wherein the athlete does not need to wear an information extraction device;
identifying joint key points of the athlete in the video image, and obtaining joint key point data of the athlete during the movement;
obtaining corresponding athlete muscle point force data from the joint key point data according to a muscle force adjacency matrix model;
comparing the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result;
the muscle development adjacency matrix model comprises:
acquiring position data of key points of a standard sport action joint and corresponding muscle point force data;
establishing a neural network training set based on the standard athletic movement joint key point position data and the corresponding muscle point force data;
establishing a bipartite graph network of the corresponding relation between the key points of the joints and the muscle points;
and optimizing the bipartite graph network of the correspondence between the joint key points and the muscle points according to the neural network training set to obtain the muscle force adjacency matrix model.
2. The athletic training coaching method of claim 1, wherein identifying joint keypoints for the athlete in the video image and obtaining joint keypoint data for the athlete during the athletic activity comprises:
setting initial space coordinates of a camera, and establishing a space coordinate system based on the initial space coordinates;
identifying joint key points of athletes in the video image, and obtaining key point space coordinates based on the space coordinate system;
establishing a first coordinate system based on any joint key point, and converting the space coordinates of the key point into joint key point coordinates based on the first coordinate system;
and obtaining the joint key point data according to the joint key point change of the athlete during the exercise.
3. The method of claim 2, wherein identifying joint keypoints for the athlete in the video image and deriving spatial coordinates of the keypoints based on the spatial coordinate system comprises:
establishing a space coordinate system based on the relative positions of the three cameras;
under the space coordinate system, converting, for each of the three cameras, the pixel of the athlete's joint key point in each frame of the video image into a projection line;
calculating the mutual perpendicular foot points of the projection lines;
and taking the mean value of the mutual foot points as the key point space coordinates of the joint key point in the space coordinate system.
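The triangulation in this claim can be sketched as follows. The camera positions, the target point, and the helper function are hypothetical illustrations of computing the mutual perpendicular foot points of pairwise projection lines and averaging them into one joint key point:

```python
import numpy as np

def line_foot_points(p1, d1, p2, d2):
    """Mutual perpendicular foot points of two 3-D lines p + t*d (standard
    closest-points formula for skew lines)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w, d2 @ w
    denom = 1.0 - b * b          # directions are unit vectors, so a = c = 1
    t = (b * e - d) / denom
    s = (e - b * d) / denom
    return p1 + t * d1, p2 + s * d2

# Hypothetical setup: three cameras each cast a projection line toward the
# same joint key point (here noiselessly through the true point).
target = np.array([1.0, 2.0, 3.0])
cams = [np.array([0.0, 0.0, 0.0]),
        np.array([5.0, 0.0, 0.0]),
        np.array([0.0, 5.0, 1.0])]
lines = [(c, target - c) for c in cams]

feet = []
for i in range(3):
    for j in range(i + 1, 3):
        f1, f2 = line_foot_points(*lines[i], *lines[j])
        feet.extend([f1, f2])

keypoint = np.mean(feet, axis=0)   # mean of mutual foot points = joint key point
```

With noisy real projection lines the foot points no longer coincide, and the mean serves as the reconstructed key point space coordinate.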
4. The athletic training guidance method of claim 2, wherein establishing the first coordinate system based on any joint keypoints comprises:
and selecting the neck joint key point as the origin, taking the line of the left and right shoulders as the x-axis, the vertical direction as the z-axis, and the direction lying in the horizontal plane and perpendicular to both the x-axis and the z-axis as the y-axis, and establishing the first coordinate system from the x-axis, the y-axis, the z-axis, and the origin.
5. The athletic training coaching method of claim 1, wherein the identifying joint keypoints of the athlete in the video image comprises:
and detecting each frame of image of the video image by using a stacked hourglass network algorithm, and identifying joint key points of the athlete.
6. The athletic training coaching method of claim 1, further comprising:
the joint key point change data of the athlete during the exercise is compared with the joint key point change data of the standard exercise action;
and providing exercise training guidance according to the result of the joint key point position comparison.
7. An athletic training coaching device, comprising: a camera and a central processing unit;
the camera is used for acquiring a video image of the athlete, and the athlete does not need to wear the information extraction equipment;
the central processing unit comprises: a joint key point identification module, a muscle force processing module, and a motion training guidance module;
the joint key point identification module is used for identifying joint key points of the athlete in the video image and obtaining joint key point data of the athlete during the movement;
the muscle force processing module is used for obtaining corresponding athlete muscle point force data from the joint key point data according to the muscle force adjacency matrix model;
the exercise training guidance module is used for comparing the athlete muscle point force data with the standard action muscle point force data to obtain a comparison result, and providing exercise training guidance according to the comparison result;
the muscle development adjacency matrix model comprises:
acquiring position data of key points of a standard sport action joint and corresponding muscle point force data;
establishing a neural network training set based on the standard athletic movement joint key point position data and the corresponding muscle point force data;
establishing a bipartite graph network of the corresponding relation between the key points of the joints and the muscle points;
and optimizing the bipartite graph network of the correspondence between the joint key points and the muscle points according to the neural network training set to obtain the muscle force adjacency matrix model.
8. An athletic training coaching device, the device comprising: a camera, a processor, and a memory storing computer program instructions; the processor reads and executes the computer program instructions to implement the athletic training coaching method according to any one of claims 1-6.
9. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the exercise training coaching method of any of claims 1-6.
CN202011271174.3A 2020-11-13 2020-11-13 Exercise training guiding method, device, equipment and computer storage medium Active CN112364785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011271174.3A CN112364785B (en) 2020-11-13 2020-11-13 Exercise training guiding method, device, equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN112364785A CN112364785A (en) 2021-02-12
CN112364785B true CN112364785B (en) 2023-07-25

Family

ID=74515571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011271174.3A Active CN112364785B (en) 2020-11-13 2020-11-13 Exercise training guiding method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112364785B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112966593B (en) * 2021-03-03 2024-03-15 河南鑫安利安全科技股份有限公司 Enterprise safety standardized operation method and system based on artificial intelligence and big data
CN113486798A (en) * 2021-07-07 2021-10-08 首都体育学院 Training plan making processing method and device based on causal relationship
CN113842622B (en) * 2021-09-23 2023-05-30 京东方科技集团股份有限公司 Motion teaching method, device, system, electronic equipment and storage medium
CN113762214A (en) * 2021-09-29 2021-12-07 宁波大学 AI artificial intelligence based whole body movement assessment system
CN115019395B (en) * 2022-06-10 2022-12-06 杭州电子科技大学 Group action consistency detection method and system based on stacked hourglass network

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105596021A (en) * 2014-11-19 2016-05-25 株式会社东芝 Image analyzing device and image analyzing method
CN106137504A (en) * 2016-08-17 2016-11-23 杨如山 A kind of complex rehabilitation system
CN106202739A (en) * 2016-07-14 2016-12-07 哈尔滨理工大学 A kind of skeletal muscle mechanical behavior multi-scale Modeling method
CN106175802A (en) * 2016-08-29 2016-12-07 吉林大学 A kind of in body osteoarthrosis stress distribution detection method
CN107735797A (en) * 2015-06-30 2018-02-23 三菱电机株式会社 Method for determining the motion between the first coordinate system and the second coordinate system
CN108446442A (en) * 2018-02-12 2018-08-24 中国科学院自动化研究所 The simplification method of class neuromuscular bone robot upper limb model
CN109448815A (en) * 2018-11-28 2019-03-08 平安科技(深圳)有限公司 Self-service body building method, device, computer equipment and storage medium
CN109753891A (en) * 2018-12-19 2019-05-14 山东师范大学 Football player's orientation calibration method and system based on human body critical point detection
CN110147743A (en) * 2019-05-08 2019-08-20 中国石油大学(华东) Real-time online pedestrian analysis and number system and method under a kind of complex scene
CN110355761A (en) * 2019-07-15 2019-10-22 武汉理工大学 A kind of healing robot control method based on joint stiffness and muscular fatigue
CN110660017A (en) * 2019-09-02 2020-01-07 北京航空航天大学 Dance music recording and demonstrating method based on three-dimensional gesture recognition
CN111046715A (en) * 2019-08-29 2020-04-21 郑州大学 Human body action comparison analysis method based on image retrieval
CN111062356A (en) * 2019-12-26 2020-04-24 沈阳理工大学 Method for automatically identifying human body action abnormity from monitoring video



Similar Documents

Publication Publication Date Title
CN112364785B (en) Exercise training guiding method, device, equipment and computer storage medium
CN107301370B (en) Kinect three-dimensional skeleton model-based limb action identification method
CN111368810A (en) Sit-up detection system and method based on human body and skeleton key point identification
CN111144217A (en) Motion evaluation method based on human body three-dimensional joint point detection
CN110969114A (en) Human body action function detection system, detection method and detector
CN107174255A (en) Three-dimensional gait information gathering and analysis method based on Kinect somatosensory technology
CN110448870B (en) Human body posture training method
Chaudhari et al. Yog-guru: Real-time yoga pose correction system using deep learning methods
CN107392939A (en) Indoor sport observation device, method and storage medium based on body-sensing technology
CN109325466A (en) A kind of smart motion based on action recognition technology instructs system and method
Malawski Depth versus inertial sensors in real-time sports analysis: A case study on fencing
Park et al. Accurate and efficient 3d human pose estimation algorithm using single depth images for pose analysis in golf
Yang et al. Human exercise posture analysis based on pose estimation
CN112568898A (en) Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image
CN112464915A (en) Push-up counting method based on human body bone point detection
CN111883229A (en) Intelligent movement guidance method and system based on visual AI
CN113255623B (en) System and method for intelligently identifying push-up action posture completion condition
Ingwersen et al. SportsPose-A Dynamic 3D sports pose dataset
CN116650922A (en) Deep learning-based teenager fitness comprehensive test method and system
Abd Shattar et al. Experimental Setup for Markerless Motion Capture and Landmarks Detection using OpenPose During Dynamic Gait Index Measurement
CN211878611U (en) Ski athlete gesture recognition system based on multi-feature value fusion
Tang Detection algorithm of tennis serve mistakes based on feature point trajectory
Zhang et al. Wrist MEMS sensor for movements recognition in ball games
CN114241602A (en) Multi-purpose rotational inertia measuring and calculating method based on deep learning
Nakamura et al. Tankendo motion estimation system with robustness against differences in color and size between users' clothes using 4-color markers with elastic belts

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant