CN113707271B - Fitness scheme generation method and system based on artificial intelligence and big data - Google Patents


Info

Publication number
CN113707271B
CN113707271B (application CN202111260470.8A)
Authority
CN
China
Prior art keywords
fitness
action
building
training data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111260470.8A
Other languages
Chinese (zh)
Other versions
CN113707271A (en)
Inventor
别叶芹
孙娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haimen Sande Sporting Goods Co ltd
Original Assignee
Haimen Sande Sporting Goods Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haimen Sande Sporting Goods Co ltd filed Critical Haimen Sande Sporting Goods Co ltd
Priority to CN202111260470.8A
Publication of CN113707271A
Application granted
Publication of CN113707271B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 — ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 — ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of intelligent fitness, and in particular to a fitness scheme generation method and system based on artificial intelligence and big data. The method comprises the following steps: training a first neural network according to first training data corresponding to fitness personnel and the label data corresponding to that training data; inputting second training data corresponding to fitness personnel into the first neural network to obtain second-class labels for the second training data and the fitness action visual sensitivity of each action category, and then training a fitness action view-angle invariance measurement network using the second training data and the fitness action visual sensitivities; and finally, using the trained fitness action view-angle invariance measurement network to obtain directed graph data corresponding to different fitness purposes, from which the fitness scheme corresponding to each fitness purpose is derived. The method and system generate a fitness scheme for a user from the fitness schemes, found in big data, of fitness personnel who share the user's fitness purpose, which can improve the user's fitness results.

Description

Fitness scheme generation method and system based on artificial intelligence and big data
Technical Field
The invention relates to the technical field of intelligent fitness, in particular to a fitness scheme generation method and system based on artificial intelligence and big data.
Background
With the improvement of living standards and growing public interest in fitness, more and more people have begun to pay attention to physical health and to visit gymnasiums to take part in fitness activities. Fitness methods vary widely, and different methods exercise different parts of the body to different effect. People who have just entered the field, lacking fitness experience, can only exercise blindly and at random; they cannot select a fitness scheme suited to their own fitness purpose, so their fitness results are poor.
Disclosure of Invention
In order to solve the problem of poor fitness results among fitness personnel, the invention aims to provide a fitness scheme generation method and system based on artificial intelligence and big data. The technical scheme adopted is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating a fitness plan based on artificial intelligence and big data, including the following steps:
acquiring gymnasium RGB images of consecutive frames within a preset time; obtaining, from the consecutive-frame gymnasium RGB images, an image sequence corresponding to each first fitness person, and recording it as first training data;
training a first neural network according to the first training data and the label data corresponding to the first training data, wherein the label data comprise a view-angle category, an action category and human body key points;
acquiring an image sequence corresponding to each second fitness person and recording it as second training data, wherein the first-class label data corresponding to the second training data comprise a view-angle category, an action category and human body key points; grouping second training data with the same action category into the training set for that action category, and inputting the second training data in each training set into the trained first neural network to obtain a second-class label for each second training datum, the second-class label comprising the action classification probability vector and the view-angle classification probability vector output by the first neural network;
obtaining the fitness action visual sensitivity of each action category according to the second-class labels of the second training data;
training a fitness action view-angle invariance measurement network according to the second training data and the second-class labels, first-class labels and fitness action visual sensitivities corresponding to the second training data, wherein the fitness action view-angle invariance measurement network is used for classifying the fitness actions of fitness personnel;
classifying the fitness actions of each fitness person in a big-data database using the trained fitness action view-angle invariance measurement network to obtain the action classification result for each fitness person; obtaining directed graph data corresponding to different fitness purposes according to the action classification result of each fitness person;
and matching the directed graph data corresponding to the user's fitness purpose to obtain the corresponding fitness scheme.
In a second aspect, another embodiment of the present invention provides a fitness scheme generation system based on artificial intelligence and big data, which includes a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the fitness scheme generation method based on artificial intelligence and big data described above.
Preferably, obtaining the directed graph data corresponding to different fitness purposes according to the action classification result of each fitness person includes:
according to the action classification result of each fitness person, taking each fitness action of that person as a node, and taking the duration of the fitness action as the signal value of the corresponding node;
when the fitness action of the fitness person changes, connecting the nodes in the order of the changes to obtain the directed graph data corresponding to that fitness person;
fusing the directed graphs of all fitness personnel who share a fitness purpose to obtain the directed graph data corresponding to that fitness purpose;
wherein the signal value of each node in the directed graph data of a fitness purpose is the mean of that node's signal values across the directed graph data of all fitness personnel with that fitness purpose, and the weight of each edge is the probability of that edge appearing in those directed graph data.
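The graph construction and fusion described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the tuple-based action log, function names, and data layout are assumptions.

```python
from collections import defaultdict

def build_person_graph(actions):
    """Build one fitness person's directed graph from a time-ordered list of
    (action, duration_seconds) entries: each action becomes a node whose signal
    value is its total duration, and each change of action adds a directed edge."""
    node_signal = defaultdict(float)
    edges = set()
    prev = None
    for action, duration in actions:
        node_signal[action] += duration
        if prev is not None and prev != action:
            edges.add((prev, action))
        prev = action
    return dict(node_signal), edges

def fuse_graphs(person_graphs):
    """Fuse the graphs of all personnel sharing one fitness purpose:
    node signal = mean of that node's signal over the graphs containing it,
    edge weight = fraction of personal graphs in which the edge appears."""
    n = len(person_graphs)
    sums, counts = defaultdict(float), defaultdict(int)
    edge_counts = defaultdict(int)
    for node_signal, edges in person_graphs:
        for node, s in node_signal.items():
            sums[node] += s
            counts[node] += 1
        for e in edges:
            edge_counts[e] += 1
    fused_signal = {k: sums[k] / counts[k] for k in sums}
    edge_weight = {e: c / n for e, c in edge_counts.items()}
    return fused_signal, edge_weight
```

For example, a person who squats for 60 s, planks for 30 s, then squats for another 20 s yields the nodes {squat: 80, plank: 30} and the edges squat→plank and plank→squat; fusing with a second person's graph averages the node signals and turns edge counts into occurrence probabilities.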
Preferably, matching the directed graph data corresponding to the user's fitness purpose to obtain the corresponding fitness scheme includes:
determining the initial node of the fitness scheme according to the probability that each node in the directed graph data occurs first;
starting from the initial node, walking along the edge with the maximum weight to obtain a plurality of walk paths;
and selecting the walk path with the maximum sum of edge weights as the optimal walk path, and taking the optimal walk path as the fitness scheme for that fitness purpose.
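A minimal sketch of the walk-based matching, under the simplifying assumption that a single greedy walk along maximum-weight edges approximates the optimal path (the claim enumerates several walk paths and keeps the one with the maximum weight sum; function names and the length cap are illustrative):

```python
def best_walk(start_probs, edge_weight, max_len=10):
    """Greedy walk: start at the node most likely to occur first, then
    repeatedly follow the maximum-weight outgoing edge to an unvisited node.
    Returns the path (the candidate fitness scheme) and its weight sum."""
    node = max(start_probs, key=start_probs.get)
    path, visited, total = [node], {node}, 0.0
    for _ in range(max_len - 1):
        # outgoing edges from the current node to unvisited nodes
        out = {v: w for (u, v), w in edge_weight.items()
               if u == node and v not in visited}
        if not out:
            break
        node = max(out, key=out.get)
        total += out[node]
        path.append(node)
        visited.add(node)
    return path, total
```

With start probabilities {warmup: 0.7, squat: 0.3} and edges warmup→squat (0.9), warmup→plank (0.4), squat→plank (0.8), the walk is warmup → squat → plank with weight sum 1.7.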
Preferably, the fitness action visual sensitivity is calculated by the following formula:

$$S_k = \frac{1}{N_Q}\sum_{(i,j)\in Q}\left(\arg\max(P_i)\oplus\arg\max(P_j)\right)$$

wherein $S_k$ is the fitness action visual sensitivity of the $k$-th action category; $P_i$ and $P_j$ are the action classification probability vectors of the $i$-th and $j$-th training data; $Q$ is the set of training data pairs from different view angles within the training set corresponding to the $k$-th action category; $N_Q$ is the number of training data pairs in $Q$; and $\oplus$ is the exclusive-or operation, equal to 1 when the two predicted action categories differ and to 0 when they are the same.
Preferably, the view-angle loss function adopted by the fitness action view-angle invariance measurement network is:

$$L_{view}=\max\left(d\left(x,x^{+}\right)-d\left(x,x^{-}\right)+m,\;0\right)$$

wherein $L_{view}$ is the view-angle loss function value; $x$ is the second training datum of an arbitrary action; $x^{+}$ is a positive sample with the same action category as $x$; $x^{-}$ is a negative sample with a different action category from $x$; $m$ is the discrimination threshold; $d(x,x^{+})$ is the L2 distance between $x$ and $x^{+}$; and $d(x,x^{-})$ is the L2 distance between $x$ and $x^{-}$.
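As reconstructed above, the view-angle loss has the form of a triplet loss over L2 distances between embeddings. A pure-Python sketch follows; the function names, example vectors, and margin value are illustrative assumptions, not taken from the patent:

```python
import math

def l2(u, v):
    """Euclidean (L2) distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def view_angle_loss(anchor, positive, negative, margin=0.2):
    """Triplet-style loss: pull embeddings of the same action category
    (seen from different views) together, and push embeddings of different
    action categories apart by at least the discrimination threshold."""
    return max(l2(anchor, positive) - l2(anchor, negative) + margin, 0.0)
```

When the positive sample is already much closer than the negative, the hinge clamps the loss to zero; otherwise the loss grows with the margin violation.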
Preferably, the classification loss function adopted by the fitness action view-angle invariance measurement network is:

$$L_{cls}=CE\left(v',v\right)+CE\left(a',a\right)+S_{a}\left[KL\left(\tilde{V}\,\middle\|\,v'\right)+KL\left(\tilde{A}\,\middle\|\,a'\right)\right]$$

wherein $L_{cls}$ is the classification loss function value; $v'$ is the actual view-angle category distribution output by the fitness action view-angle invariance measurement network; $v$ is the view-angle category in the original label corresponding to the second training datum; $\tilde{V}$ is the view-angle classification probability vector in the second-class label corresponding to the second training datum; $a'$ is the actual action category distribution output by the network; $a$ is the action category in the original label corresponding to the second training datum; $\tilde{A}$ is the action classification probability vector in the second-class label corresponding to the second training datum; $KL(\cdot\|\cdot)$ is the KL divergence; $CE(\cdot,\cdot)$ is the cross-entropy loss function; and $S_{a}$ is the fitness action visual sensitivity of action category $a$.
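The exact weighting in the original (image-rendered) formula is not recoverable, but a distillation-style combination of the listed terms can be sketched as follows. The scaling of the soft-label KL terms by the visual sensitivity is an assumption, as are all function names:

```python
import math

def cross_entropy(pred, label_idx, eps=1e-12):
    """Cross-entropy between a predicted probability vector and a hard label index."""
    return -math.log(pred[label_idx] + eps)

def kl_divergence(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two probability vectors."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def classification_loss(view_pred, view_label, view_soft,
                        act_pred, act_label, act_soft, sensitivity):
    """Hard-label cross-entropy for view and action, plus soft-label KL terms
    against the teacher's probability vectors, scaled by the action category's
    view sensitivity (a sketch, not the patent's exact weighting)."""
    hard = cross_entropy(view_pred, view_label) + cross_entropy(act_pred, act_label)
    soft = kl_divergence(view_soft, view_pred) + kl_divergence(act_soft, act_pred)
    return hard + sensitivity * soft
```

When the student's outputs already match the teacher's soft labels, the KL terms vanish and only the hard-label cross-entropy remains.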
Preferably, the method for obtaining the image sequence corresponding to the first fitness person comprises:
performing target detection on the gymnasium RGB images of consecutive frames to obtain the bounding box of each first fitness person in each image;
and cropping the gymnasium RGB images of consecutive frames according to the bounding box of each person in each image to obtain per-person RGB image sequences across the consecutive frames.
The embodiment of the invention has the following beneficial effects:
the method trains the body-building action visual angle invariance measurement network by acquiring the training data of the first body-building person and the second body-building person, and the trained body-building action visual angle invariance measurement network can be used for classifying the body-building action of the body-building persons; classifying the body-building actions of each body-building person in the large database by using the trained body-building action visual angle invariance measurement network, so as to obtain action classification results corresponding to each body-building person; the directed graph data corresponding to different fitness purposes can be obtained by combining the fitness purposes of each fitness worker; therefore, the directed graph data suitable for the user can be matched according to the fitness purpose of the user, and the fitness scheme suitable for the user is further obtained. The invention generates the body-building scheme for the user according to the body-building scheme of the body-building personnel with the same body-building purpose as the user in the big data, and can improve the body-building effect of the user.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for generating a fitness plan based on artificial intelligence and big data according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and functional effects of the present invention adopted to achieve the predetermined invention purpose, the following describes in detail a method and system for generating a fitness plan based on artificial intelligence and big data according to the present invention with reference to the accompanying drawings and preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of a fitness scheme generation method and system based on artificial intelligence and big data in detail with reference to the accompanying drawings.
Embodiment of the fitness scheme generation method based on artificial intelligence and big data:
as shown in fig. 1, the method for generating a fitness program based on artificial intelligence and big data of the present embodiment includes the following steps:
step S1, acquiring gymnasium RGB images of continuous frames within preset time; and obtaining an image sequence corresponding to the first fitness personnel according to the gymnasium RGB images of the continuous frames, and recording first training data.
In order to obtain images required in the subsequent network training, in the embodiment, the camera deployed in the gymnasium is used for acquiring the RGB images of the gymnasium, wherein the RGB images of the gymnasium comprise the gymnastic processes of each gymnasium; the specific acquisition process is as follows:
the method comprises the steps of collecting continuous frames of the RGB images of the gymnasium through a camera in the gymnasium, inputting each collected frame of RGB image into a target detection network to obtain a surrounding frame of each gymnasium person in each frame of image, and cutting the RGB images of the gymnasium of the corresponding frame by using the surrounding frame of each gymnasium person to obtain the RGB images of each gymnasium person in the current frame of image. And processing the acquired RGB images of each gymnasium frame according to the same method to obtain the RGB images of each gymnasium person in the RGB images of all the gymnasiums.
Since a fitness person's position changes little over a short time while exercising, this embodiment matches bounding boxes belonging to the same person across adjacent frames by their IoU (intersection over union), and from this matching obtains each person's fitness action image sequence over the consecutive frames. The size of the image sequence is $H \times W \times T$, wherein $H$ and $W$ are the height and width of each fitness person's RGB image and $T$ is the time length of the image sequence. Because a single fitness action has a short duration, to ensure that the fitness actions in one image sequence all belong to the same action type, this embodiment sets $T$ to a correspondingly small value, which can be set as required in practice.
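The IoU-based association of bounding boxes across adjacent frames can be sketched as follows. This is a greedy matcher under the stated assumption that fitness personnel barely move between frames; the threshold, box format, and function names are illustrative, not taken from the patent:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_tracks(prev_boxes, cur_boxes, threshold=0.5):
    """Greedily link each current-frame box to the unused previous-frame box of
    highest IoU above the threshold, chaining boxes of the same (nearly
    stationary) fitness person into one image sequence."""
    matches, used = {}, set()
    for ci, cbox in enumerate(cur_boxes):
        best, best_iou = None, threshold
        for pi, pbox in enumerate(prev_boxes):
            if pi in used:
                continue
            v = iou(pbox, cbox)
            if v > best_iou:
                best, best_iou = pi, v
        if best is not None:
            matches[ci] = best
            used.add(best)
    return matches  # current index -> previous index
```

Running the matcher frame by frame and concatenating each person's crops yields the per-person image sequences described above.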
In this embodiment, a fitness person identified from the gymnasium RGB images is recorded as a first fitness person, and the image sequence corresponding to the first fitness person is recorded as first training data; "the first fitness person" is not a single person but comprises a plurality of fitness personnel.
Step S2: train a first neural network according to the first training data and the label data corresponding to the first training data, wherein the label data comprise a view-angle category, an action category and human body key points.
To classify the fitness actions of each fitness person, this embodiment constructs a fitness action view-angle invariance measurement network. Because a network built directly for this task would need a large number of parameters, this embodiment first constructs a first neural network and uses it as a teacher network that provides dark knowledge to supervise the training of the fitness action view-angle invariance measurement network. The resulting measurement network has few parameters and high accuracy, can be embedded in a camera, and improves the efficiency of computation and data acquisition.
In this embodiment, step S2 is divided into the following two sub-steps:
Step S2-1: obtain the label data corresponding to the first training data.
In this embodiment, the first training data are used to train the first neural network, a thoroughly trained, high-accuracy network that, because of its large parameter count, cannot be embedded in a mobile device such as a camera or mobile phone. Before training the first neural network, the first training data are manually labeled. The label data in this embodiment are a view-angle category label, an action category label and the label information of the 18 human key points used in the CPN network. The 18 human key points are explained in the prior art and are not described in detail here.
In this embodiment, the view-angle category of each first training datum is the view-angle category of the first frame image in its image sequence. The view-angle categories are divided into 8 classes: front view, back view, left view, right view, left-front view, right-front view, left-back view and right-back view. The action categories of the first training data cover all common fitness actions (such as sit-ups and push-ups). Each first training datum corresponds to one action category label and one view-angle category label.
Step S2-2: train the first neural network according to the first training data and the corresponding label data.
In this embodiment, each first training datum and its label data are used to train the first neural network, which consists of two branches, specifically:
The first branch obtains the action category corresponding to the image sequence and has an encoder–decoder–classifier ($E_1$–$D_1$–$c_1$) structure. The first training data are input into the encoder $E_1$ for feature extraction; the extracted features are sent to the decoder $D_1$ to obtain human skeleton information, which is sent to the classifier $c_1$ to obtain the action category. In this embodiment, the $E_1$–$D_1$ part may adopt a CPN network.
The second branch obtains the view-angle category corresponding to the image sequence and has an $E_1$–$E_2$–$c_2$ structure; in this embodiment the two branches are executed in parallel. The features extracted by $E_1$ are fed into $E_2$ for further feature extraction, and the further-extracted features are sent to the classifier $c_2$ to obtain the view-angle category.
The loss function of the first neural network is:

$$L_{1}=L_{D_{1}}+CE_{c_{1}}+CE_{c_{2}}$$

wherein $L_{D_{1}}$ is the loss function of the $E_1$–$D_1$ human key point network; $CE_{c_{1}}$ is the cross-entropy loss function of the classifier $c_1$; and $CE_{c_{2}}$ is the cross-entropy loss function of the classifier $c_2$, which supervises the classification accuracy of the first neural network. This embodiment continuously updates the network parameters by gradient descent.
Step S3: acquire the image sequence corresponding to each second fitness person and record it as second training data, wherein the first-class label data corresponding to the second training data comprise a view-angle category, an action category and human body key points; group second training data with the same action category into the training set for that action category, and input the second training data in each training set into the trained first neural network to obtain a second-class label for each second training datum, the second-class label comprising the action classification probability vector and the view-angle classification probability vector output by the first neural network.
After the trained first neural network is obtained, this embodiment uses it to obtain second-class labels for the second training data corresponding to the second fitness personnel, specifically:
First, second training data corresponding to the second fitness personnel are obtained by a method similar to step S1. The second fitness personnel in this embodiment are a plurality of fitness personnel different from the first fitness personnel. The first-class label data corresponding to the second training data comprise a view-angle category, an action category and human body key points, obtained in the same way as the label data of the first training data.
Then, the second training data are grouped; the purpose of the grouping is to allow the view-angle sensitivity of each action category to be calculated. Specifically, second training data with the same action category label are grouped into one set, i.e., the training set corresponding to that action category.
Finally, the second training data in each action's training set are sent into the first neural network to obtain the second-class label of each second training datum, which comprises: the action classification probability vector, i.e., the probability that the corresponding second training datum belongs to each action category, with all values in the vector summing to 1; and the view-angle classification probability vector, i.e., the probability that the corresponding second training datum belongs to each view-angle category, with all values in the vector summing to 1.
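The probability vectors in the second-class label are ordinary softmax outputs of the teacher network's classifier heads. A small illustration (the logit values are made up):

```python
import math

def softmax(logits):
    """Convert a list of classifier logits into a probability vector summing to 1.
    Subtracting the max logit keeps the exponentials numerically stable."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

For example, action logits [2.0, 1.0, 0.1] become a soft action classification probability vector whose entries sum to 1 and whose largest entry marks the predicted action category; the view-angle vector is obtained the same way from the view classifier's logits.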
Step S4: obtain the fitness action visual sensitivity of each action category according to the second-class labels of the second training data.
Actions performed during fitness exercise are often bilaterally symmetric, such as left and right bow-step stretches. When the action type is judged from images, a change of view angle strongly affects the accurate recognition of such actions, i.e., they are sensitive to view-angle changes; for other fitness actions such as push-ups, however, a view-angle change has little influence on judging the action type. Therefore, to ensure the accuracy of action-type judgment, this embodiment obtains the view-angle sensitivity of each fitness action.
Specifically, obtaining the
Figure 631724DEST_PATH_IMAGE003
The classification result of each second training data in the training set corresponding to each action category is to make two training data with different visual angles in the training set serve as a training data pair, a set Q is formed by a plurality of training data pairs, in this embodiment, an exclusive or operation is performed on the visual angle classification probability vectors corresponding to the two second training data of each training data pair in the set Q, the greater the number of training data pairs subjected to the exclusive or operation, the greater the visual sensitivity of the exercise action of the action category, and the smaller the number of training data pairs 1, the smaller the visual sensitivity of the exercise action of the action category, therefore, the visual sensitivity of the exercise action reflects the probability that the same action category is classified incorrectly at different visual angles, and the larger the visual sensitivity value of the exercise action indicates that the current action category is more sensitive to the change of the visual angle. Then it is first
Figure 216289DEST_PATH_IMAGE003
The calculation formula of the fitness movement visual sensitivity of each movement category is as follows:
$$S_j = \frac{1}{N_j} \sum_{(a,b) \in Q_j} c_a \oplus c_b$$

wherein $S_j$ is the fitness-action visual sensitivity of the $j$-th action category; $P_a$ is the action classification probability vector of the $a$-th training data; $V_a$ is the view-angle classification probability vector of the $a$-th training data; $Q_j$ is the set of training data pairs with different view angles in the training set corresponding to the $j$-th action category; $N_j$ is the number of training data pairs in the set $Q_j$; $\oplus$ is the exclusive-or operation; the $a$-th training data and the $b$-th training data form one training data pair in the set $Q_j$ (their view-angle classification probability vectors $V_a$ and $V_b$ indicate different view angles); and $c_a$ and $c_b$ are the action categories predicted for the $a$-th and $b$-th training data, i.e., the indices of the largest components of $P_a$ and $P_b$.
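The sensitivity computation can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the XOR compares the predicted (argmax) action classes of the two views of each pair, and all function names are invented for the sketch.

```python
def argmax(vec):
    # Index of the largest probability in a classification vector.
    return max(range(len(vec)), key=lambda i: vec[i])

def visual_sensitivity(pairs):
    """Fraction of different-view pairs whose predicted action
    categories disagree (XOR result 1). `pairs` is a list of
    (P_a, P_b) action-probability-vector pairs from the set Q_j."""
    if not pairs:
        return 0.0
    mismatches = sum(1 for p_a, p_b in pairs
                     if argmax(p_a) != argmax(p_b))
    return mismatches / len(pairs)

# Two pairs: one classified consistently across views, one not.
pairs = [([0.9, 0.1], [0.8, 0.2]),   # both predict class 0 -> XOR 0
         ([0.7, 0.3], [0.2, 0.8])]   # classes 0 vs 1      -> XOR 1
print(visual_sensitivity(pairs))     # 0.5
```

A sensitivity of 0.5 here means half the different-view pairs of this category were classified inconsistently.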
Step S5: train a body-building action visual angle invariance measurement network according to the second training data and their corresponding second class labels, first class labels and body-building action visual sensitivities; the network is used to classify the body-building actions of body-building personnel.
In this embodiment, step S3 uses the action classification probability vector and the view-angle classification probability vector of each second training data as the second class label of that training data. Compared with the first class label, the second class label is smoother and has a larger distribution entropy; a larger distribution entropy better reflects the similarity between different action categories and therefore provides more supervision information.
In this embodiment, the second training data and the corresponding second class labels, first class labels and body-building action visual sensitivities are used to train the body-building action visual angle invariance measurement network. The network adopts a two-stage structure of encoder $E_1$, classifier $C_1$, encoder $E_2$ and classifier $C_2$. Because the second class labels obtained from the first neural network contain more supervision information, the encoders $E_1$ and $E_2$ in the body-building action visual angle invariance measurement network need only a small number of parameters to meet the classification requirement.
In the training process, each second training data is first fed into encoder $E_1$, which performs feature extraction on the image sequence; the obtained features are input to classifier $C_1$ to obtain a view-angle classification result. The view-angle classification result is then combined (concatenated) with the feature output of $E_1$, and the combined vector is sent into encoder $E_2$ for further feature extraction to obtain the final feature $F$. Finally, $F$ is input to classifier $C_2$ to obtain the final action classification result.
In this embodiment, every $B$ second training data form one batch, and for each batch a $B \times B$ matrix is constructed. When the network is trained, each second training data yields a corresponding final feature $F$; the L2 distance between the features $F$ of every pair of training data in the batch is calculated, and the resulting $B \times B$ distance values are written into the $B \times B$ matrix, denoted the distance metric matrix. The distance metric matrix is used to construct the loss function of the body-building action visual angle invariance measurement network.
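The distance metric matrix can be built as below (a straightforward sketch; the toy 2-D feature vectors are illustrative only):

```python
import math

def l2(u, v):
    # Euclidean (L2) distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def distance_matrix(features):
    """B x B matrix of pairwise L2 distances between the final
    features F of one batch of B training samples."""
    b = len(features)
    return [[l2(features[i], features[j]) for j in range(b)]
            for i in range(b)]

feats = [[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]]
d = distance_matrix(feats)
print(d[0][1])  # 5.0
```

The matrix is symmetric with a zero diagonal, which is all the triplet-based losses below need from it.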
In this embodiment, the loss function of the body-building action view invariance measurement network is divided into two parts, namely a view angle loss function and a classification loss function, which specifically include:
First, a view-angle loss function is set to reduce the distance between the final features $F$ of image sequences of the same exercise action category under different view angles, while enlarging the distance between the features $F$ of image sequences of different action categories, thereby enhancing the distinguishability between features. Specifically, in this embodiment, a number of triples are obtained according to the body-building action category labels of the second training data: each triple comprises a second training data $x$ of an arbitrary action, recorded as the base sample; a positive sample $x^{+}$ of the same action category as $x$; and a negative sample $x^{-}$ of a different action category from $x$. The view-angle loss function is calculated as:
$$L_1 = \max\big(d(x, x^{+}) - d(x, x^{-}) + \alpha,\ 0\big)$$

wherein $L_1$ is the view-angle loss function value; $x$ is the second training data of an arbitrary action; $x^{+}$ is a positive sample with the same action category as $x$; $x^{-}$ is a negative sample with a different action category from $x$; $\alpha$ is the discrimination threshold; $d(x, x^{+})$ is the L2 distance between the features of $x$ and $x^{+}$; and $d(x, x^{-})$ is the L2 distance between the features of $x$ and $x^{-}$. The discrimination threshold $\alpha$ is a hyperparameter used to distinguish image sequences of different action categories; its size is related to the action category of the samples and to the view-angle relationship between the samples. Whether or not the view angles are the same, $d(x, x^{-})$ should be made as large as possible, so that the discrimination between different categories remains large.
In this embodiment, the value of $\alpha$ is determined as follows: according to the labels of the triple's samples, the corresponding view-angle category labels are obtained, and a view-angle relation index $\beta$ of the triple is derived from them: when the view-angle category labels of the triple are consistent, $\beta = 0$; when the view-angle categories are inconsistent, $\beta = 1$. Then $\alpha$ is calculated as:

$$\alpha = \alpha_0 (1 + S_j) + \beta\,\gamma$$

wherein $\alpha_0$ is the baseline discrimination; $S_j$ is the visual sensitivity of the $j$-th action category, and a larger value indicates that the $j$-th action is more affected by view-angle changes, i.e., $d(x, x^{+})$ also becomes larger at different view angles, so the discrimination threshold is made correspondingly larger to preserve the classification separation between the $j$-th action category and the other actions; and $\gamma$ is the adjustment value of the discrimination threshold for the view-angle relationship.
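Under the reconstruction above, the adaptive margin can be written as a one-liner (a sketch; the numeric values are illustrative):

```python
def margin(alpha0, sensitivity, views_differ, gamma):
    """Adaptive discrimination threshold: the baseline margin alpha0
    grows with the action category's visual sensitivity S_j, plus an
    adjustment gamma when the triple mixes different view angles."""
    beta = 1.0 if views_differ else 0.0
    return alpha0 * (1.0 + sensitivity) + beta * gamma

print(margin(0.5, 1.0, False, 0.25))  # 1.0
print(margin(0.5, 1.0, True, 0.25))   # 1.25
```

Highly view-sensitive categories thus get a larger margin, and mixed-view triples a further fixed bump.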
The view-angle loss function $L_1$ ensures the distinguishability between body-building action category features, and also ensures the distinguishability between the features extracted under different view angles.
Second, a classification loss function is set to ensure classification accuracy. The classification loss function uses the first class label and the second class label corresponding to each second training data to jointly supervise the body-building action visual angle invariance measurement network, so that the difference between the network's output and both the first and second class labels is minimized, yielding a better network. The classification loss function is calculated as:
$$L_2 = \mathrm{CE}(\hat{v}, v) + \mathrm{KL}(V \,\|\, \hat{v}) + (1 + S_c)\big[\mathrm{CE}(\hat{c}, c) + \mathrm{KL}(P \,\|\, \hat{c})\big]$$

wherein $L_2$ is the classification loss function value; $\hat{v}$ is the actual view-angle category output by the body-building action visual angle invariance measurement network; $v$ is the view-angle category in the original label corresponding to the second training data; $V$ is the view-angle classification probability vector in the second class label corresponding to the second training data; $\hat{c}$ is the actual action category output by the network; $c$ is the action category in the original label corresponding to the second training data; $P$ is the action classification probability vector in the second class label corresponding to the second training data; $\mathrm{KL}(\cdot\|\cdot)$ is the KL divergence; $\mathrm{CE}(\cdot,\cdot)$ is the cross-entropy loss function; and $S_c$ is the fitness-action visual sensitivity of the $c$-th action category. The larger $S_c$ is, the larger the attention allocated to the corresponding body-building action, so as to reduce the influence of view-angle change on the judgment of the action category.
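A sketch of this joint hard-label/soft-label loss, under the reconstruction above (function names and the exact combination of terms are assumptions, not the patent's verbatim formula):

```python
import math

def cross_entropy(pred, target_idx):
    # CE against a hard (one-hot) label: -log of the target probability.
    return -math.log(pred[target_idx])

def kl_div(p, q):
    # KL(p || q) between two discrete probability vectors.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def classification_loss(view_out, view_idx, view_soft,
                        act_out, act_idx, act_soft, sensitivity):
    """Hard labels (first class label) supervise via cross-entropy,
    soft labels (second class label) via KL divergence; the action
    terms are re-weighted by the category's visual sensitivity."""
    view_term = cross_entropy(view_out, view_idx) + kl_div(view_soft, view_out)
    act_term = cross_entropy(act_out, act_idx) + kl_div(act_soft, act_out)
    return view_term + (1.0 + sensitivity) * act_term

loss = classification_loss([0.7, 0.3], 0, [0.6, 0.4],
                           [0.8, 0.2], 0, [0.9, 0.1], 0.5)
print(loss)
```

Raising `sensitivity` scales up only the action terms, which is the "larger attention" behaviour described above.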
In this embodiment, the final loss function of the body-building action visual angle invariance measurement network is $L = L_1 + L_2$, and the parameters of the network are continuously and iteratively updated with gradient descent. Because of its small parameter count and high computational efficiency, the trained body-building action visual angle invariance measurement network can be embedded in devices such as cameras.
Step S6, classifying the body-building actions of each body-building person in the big database by using the trained body-building action visual angle invariance measurement network to obtain action classification results corresponding to each body-building person; and obtaining directed graph data corresponding to different fitness purposes according to the action classification result corresponding to each fitness person.
To generate fitness schemes that achieve the corresponding fitness purpose for users with different fitness purposes, this embodiment takes, from historical data in the large database, the RGB image sequences collected in real time of each fitness person exercising in each gym, and feeds them into the trained body-building action visual angle invariance measurement network to obtain accurate action classification results for each image sequence. The embodiment determines the identity information of each fitness person by face recognition to obtain each person's fitness purpose, such as fat reduction or muscle gain. To make the generated fitness schemes more reliable, only the RGB image sequences of professional or experienced fitness personnel, selected according to identity information, are used in this embodiment as reference data for obtaining the directed graph data corresponding to the different fitness purposes.
In this embodiment, the directed graph data corresponding to each fitness person is obtained from that person's action classification results; the process of generating directed graph data for one fitness person is as follows:
In this embodiment, each body-building action of the fitness person is represented by a node, and the duration of that body-building action is used as the node's signal value. Whenever the body-building action of the target fitness person changes, the nodes before and after the change are connected in the order of change, until the target person's exercise session ends; the directed graph data corresponding to each fitness person is obtained in this way.
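The per-person graph construction can be sketched as follows (an illustrative sketch; the `(action, duration)` log format and action names are invented):

```python
def build_digraph(action_log):
    """Build directed-graph data from one person's workout: nodes are
    actions with total duration as the signal value; an edge connects
    consecutive different actions in the order they occurred.
    `action_log` is a list of (action, duration) records."""
    signal = {}
    edges = []
    prev = None
    for action, duration in action_log:
        signal[action] = signal.get(action, 0.0) + duration
        if prev is not None and prev != action:
            edges.append((prev, action))
        prev = action
    return signal, edges

log = [("squat", 10), ("plank", 5), ("squat", 8)]
signal, edges = build_digraph(log)
print(signal)  # {'squat': 18.0, 'plank': 5.0}
print(edges)   # [('squat', 'plank'), ('plank', 'squat')]
```

Repeated occurrences of an action map to one node whose signal value accumulates its total duration.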
In this embodiment, a large amount of directed graph data corresponding to fitness personnel is collected, and the directed graph data with the same fitness purpose are fused; during fusion, the mean of the signal values of the same node across graphs is taken as the signal value of the fused node. The edge weights in the fused directed graph data are updated according to the occurrence frequency of each directed edge in the directed graph data with the same fitness purpose. Specifically: let the number of directed graphs be $M$; if the edge from node $v_a$ to node $v_b$ occurs in $n_{ab}$ of these graphs, the weight of the edge $(v_a, v_b)$ is $n_{ab} / M$, i.e., the probability that the directed edge appears in a directed graph. In addition, the probability that each node occurs first in the directed graph corresponding to a fitness purpose is obtained from how often that node occurs first in the individual directed graph data.
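The fusion step can be sketched like so (illustrative only; the `(signal, edges)` graph representation matches the per-person sketch format assumed earlier, and each edge is counted at most once per graph):

```python
def fuse_digraphs(graphs):
    """Fuse M directed graphs that share one fitness purpose: node
    signal values are averaged over the graphs containing the node,
    and the weight of edge (a, b) is the count of graphs containing
    it divided by M (its occurrence probability)."""
    m = len(graphs)
    totals, counts, edge_counts = {}, {}, {}
    for signal, edges in graphs:
        for node, value in signal.items():
            totals[node] = totals.get(node, 0.0) + value
            counts[node] = counts.get(node, 0) + 1
        for edge in set(edges):  # count each edge once per graph
            edge_counts[edge] = edge_counts.get(edge, 0) + 1
    fused_signal = {n: totals[n] / counts[n] for n in totals}
    edge_weights = {e: c / m for e, c in edge_counts.items()}
    return fused_signal, edge_weights

g1 = ({"squat": 10.0}, [("squat", "plank")])
g2 = ({"squat": 20.0, "plank": 5.0}, [("squat", "plank"), ("plank", "squat")])
fused, weights = fuse_digraphs([g1, g2])
print(fused["squat"])               # 15.0
print(weights[("squat", "plank")])  # 1.0
```

Edges seen in every graph get weight 1.0; rarer transitions get proportionally smaller weights.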
Corresponding directed graph data is generated for the different fitness purposes according to the above method, and the information contained in each node of the directed graph data comprises its first-occurrence probability and its signal value. The fitness purposes in this embodiment may be leg slimming, abdominal training, improving cardiopulmonary capacity, and so on; the specific classification can be set according to the actual situation.
And step S7, matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain a corresponding fitness scheme.
In this embodiment, step S6 obtains the directed graph data corresponding to the different fitness purposes. In practical application, the directed graph data corresponding to the user's fitness purpose is therefore matched according to that purpose, and a corresponding fitness scheme is generated for the user from that directed graph data; here the user may be an exerciser with insufficient exercise experience, or one who wants to set a new fitness purpose. Specifically:
First, the initial node of the fitness scheme is determined according to the first-occurrence probability of each node in the directed graph data corresponding to the fitness purpose, giving the initial fitness action. Then, taking the initial node as the starting point, a walk is performed along the edge with the maximum weight at each step. Because each node may have several outgoing edges and their weights may be equal, this embodiment may obtain multiple walk paths; the path with the maximum sum of edge weights is taken as the optimal walk path. The optimal walk path is the final fitness scheme corresponding to the fitness purpose, and the signal value of each node on the optimal walk path is the recommended exercise duration of that fitness action.
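A simplified sketch of the plan generation follows. It is a greedy walk that follows the heaviest outgoing edge at each step rather than enumerating all paths and comparing weight sums, and all data structures and names are illustrative:

```python
def generate_plan(start_probs, edge_weights, signal, max_steps=10):
    """Pick the most likely starting action, then repeatedly follow
    the unvisited outgoing edge with the largest weight (ties broken
    by node name for determinism); returns (action, duration) pairs."""
    node = max(start_probs, key=lambda n: (start_probs[n], n))
    plan, visited = [], set()
    for _ in range(max_steps):
        plan.append((node, signal.get(node, 0.0)))
        visited.add(node)
        nxt = [(w, b) for (a, b), w in edge_weights.items()
               if a == node and b not in visited]
        if not nxt:
            break
        node = max(nxt)[1]
    return plan

start = {"squat": 0.7, "plank": 0.3}
weights = {("squat", "plank"): 0.9, ("plank", "lunge"): 0.6}
signal = {"squat": 15.0, "plank": 5.0, "lunge": 8.0}
print(generate_plan(start, weights, signal))
# [('squat', 15.0), ('plank', 5.0), ('lunge', 8.0)]
```

The signal value carried along with each node is what becomes the recommended exercise duration in the output scheme.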
In this embodiment, the body-building action visual angle invariance measurement network is trained with the training data of the first and second fitness personnel, and the trained network can classify the body-building actions of personnel. Using the trained network to classify the body-building actions of every fitness person in the large database yields the action classification results for each person; combined with each person's fitness purpose, the directed graph data corresponding to the different fitness purposes is obtained. The directed graph data suitable for a user can then be matched according to the user's fitness purpose, and the fitness scheme suitable for that user obtained from it. Since most fitness personnel in the large database are experienced or professional, generating a user's fitness scheme from the schemes of database personnel who share the user's fitness purpose can improve the user's fitness results.
An embodiment of the fitness scheme generation system based on artificial intelligence and big data is as follows:
the exercise scheme generation system based on artificial intelligence and big data comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the exercise scheme generation method based on artificial intelligence and big data.
Since the fitness scheme generation method based on artificial intelligence and big data has already been described in the method embodiment above, it is not described again here.
It should be noted that: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A fitness scheme generation method based on artificial intelligence and big data is characterized by comprising the following steps:
acquiring gymnasium RGB images of continuous frames within a preset time; obtaining an image sequence corresponding to a first fitness person according to the gymnasium RGB images of the continuous frames, recorded as first training data;
training a first neural network according to first training data and label data corresponding to the first training data, wherein the label data corresponding to the first training data comprises a visual angle category, an action category and a human body key point;
acquiring an image sequence corresponding to a second fitness person and recording the image sequence as second training data, wherein first class label data corresponding to the second training data comprises a visual angle class, an action class and a human body key point; classifying second training data with the same action type into a training set corresponding to the action type, and inputting the second training data in the training set into a trained first neural network to obtain a second type label corresponding to the second training data, wherein the second type label comprises an action classification probability vector and a view classification probability vector output by the first neural network;
obtaining body-building action visual sensitivities corresponding to various action types according to second type labels corresponding to the second training data;
training a body-building action visual angle invariance measurement network according to second training data and a second class label, a first class label and a body-building action visual sensitivity corresponding to the second training data, wherein the body-building action visual angle invariance measurement network is used for classifying body-building actions of people;
classifying the body-building actions of each body-building person in the large database by using the trained body-building action visual angle invariance measurement network to obtain action classification results corresponding to each body-building person; obtaining directed graph data corresponding to different fitness purposes according to the action classification result corresponding to each fitness worker;
and matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain a corresponding fitness scheme.
2. The method for generating a fitness scheme based on artificial intelligence and big data according to claim 1, wherein the obtaining of the directed graph data corresponding to different fitness objectives according to the action classification result corresponding to each fitness worker comprises:
according to the action classification result corresponding to each body-building person, taking each body-building action of the body-building person as a node, and taking the duration time of the body-building action as a signal value of the node corresponding to the body-building action;
when the body-building action of the body-building personnel changes, the nodes are connected according to the changing sequence to obtain directed graph data corresponding to the body-building personnel;
fusing directed graphs corresponding to the same fitness personnel with the fitness purpose to obtain directed graph data corresponding to the fitness purpose;
the signal value of each node in the directed graph data corresponding to the fitness purpose is the average value of the signal values of the same node in the directed graph data corresponding to the same fitness personnel with the same fitness purpose, and the weight of the edge is the probability of the edge appearing in the directed graph data corresponding to the same fitness personnel with the same fitness purpose.
3. The method for generating a fitness scheme based on artificial intelligence and big data according to claim 2, wherein the step of matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain the corresponding fitness scheme comprises the following steps:
determining an initial node of a fitness scheme according to the probability of the first occurrence of each node in the directed graph data corresponding to the fitness purpose;
taking the initial node as a starting point, and performing wandering along the edge with the maximum weight to obtain a plurality of wandering paths;
and selecting the walking path with the maximum sum of the side weights as an optimal walking path, and taking the optimal walking path as a corresponding fitness scheme of the fitness objective.
4. The artificial intelligence and big data based fitness scheme generating method of claim 1, wherein the visual sensitivity of the fitness activity is calculated by the formula:
$$S_j = \frac{1}{N_j} \sum_{(a,b) \in Q_j} c_a \oplus c_b$$

wherein $S_j$ is the fitness-action visual sensitivity of the $j$-th action category; $P_a$ is the action classification probability vector of the $a$-th training data; $V_a$ is the view-angle classification probability vector of the $a$-th training data; $c_a$ and $c_b$ are the action categories predicted for the two training data of a pair; $\oplus$ is the exclusive-or operation; $Q_j$ is the set of training data pairs with different view angles in the training set corresponding to the $j$-th action category; and $N_j$ is the number of training data pairs in the set $Q_j$.
5. The artificial intelligence and big data based fitness scheme generating method of claim 1, wherein the fitness action perspective invariance measure network employs a perspective loss function:
$$L_1 = \max\big(d(x, x^{+}) - d(x, x^{-}) + \alpha,\ 0\big)$$

wherein $L_1$ is the view-angle loss function value; $x$ is the second training data of an arbitrary action; $x^{+}$ is a positive sample with the same action category as $x$; $x^{-}$ is a negative sample with a different action category from $x$; $\alpha$ is the discrimination threshold; $d(x, x^{+})$ is the L2 distance between $x$ and $x^{+}$; and $d(x, x^{-})$ is the L2 distance between $x$ and $x^{-}$.
6. The method for generating a fitness program based on artificial intelligence and big data according to claim 1, wherein the network for measuring the invariance of the angle of the fitness activities uses a classification loss function as follows:
$$L_2 = \mathrm{CE}(\hat{v}, v) + \mathrm{KL}(V \,\|\, \hat{v}) + (1 + S_c)\big[\mathrm{CE}(\hat{c}, c) + \mathrm{KL}(P \,\|\, \hat{c})\big]$$

wherein $L_2$ is the classification loss function value; $\hat{v}$ is the actual view-angle category output by the body-building action visual angle invariance measurement network; $v$ is the view-angle category in the original label corresponding to the second training data; $V$ is the view-angle classification probability vector in the second class label corresponding to the second training data; $\hat{c}$ is the actual action category output by the network; $c$ is the action category in the original label corresponding to the second training data; $P$ is the action classification probability vector in the second class label corresponding to the second training data; $\mathrm{KL}(\cdot\|\cdot)$ is the KL divergence; $\mathrm{CE}(\cdot,\cdot)$ is the cross-entropy loss function; and $S_c$ is the fitness-action visual sensitivity of the $c$-th action category.
7. The method for generating a fitness program based on artificial intelligence and big data according to claim 1, wherein the method for obtaining the image sequence corresponding to the first fitness person comprises:
carrying out target detection on the continuous frames of gymnasium RGB images to obtain a bounding box corresponding to the first fitness person in each image;
cropping the continuous frames of gymnasium RGB images according to the bounding box corresponding to each person in each image to obtain the RGB image sequence of continuous frames.
8. An artificial intelligence and big data based fitness program generation system comprising a memory and a processor, wherein the processor executes a computer program stored by the memory to implement the artificial intelligence and big data based fitness program generation method of any one of claims 1-7.
CN202111260470.8A 2021-10-28 2021-10-28 Fitness scheme generation method and system based on artificial intelligence and big data Active CN113707271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111260470.8A CN113707271B (en) 2021-10-28 2021-10-28 Fitness scheme generation method and system based on artificial intelligence and big data


Publications (2)

Publication Number Publication Date
CN113707271A CN113707271A (en) 2021-11-26
CN113707271B true CN113707271B (en) 2022-02-25


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114242204B (en) * 2021-12-24 2024-06-18 珠海格力电器股份有限公司 Motion strategy determination method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295139A (en) * 2016-07-29 2017-01-04 A tongue self-diagnosis health cloud service system based on deep convolutional neural networks
CN108984618A (en) * 2018-06-13 2018-12-11 深圳市商汤科技有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN110727718A (en) * 2019-10-14 2020-01-24 成都乐动信息技术有限公司 Intelligent generation method and system for fitness course
CN111383735A (en) * 2020-03-24 2020-07-07 杭州大数云智科技有限公司 Unmanned body-building analysis method based on artificial intelligence
CN112233770A (en) * 2020-10-15 2021-01-15 郑州师范学院 Intelligent gymnasium management decision-making system based on visual perception
CN112418200A (en) * 2021-01-25 2021-02-26 成都点泽智能科技有限公司 Object detection method and device based on thermal imaging and server


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hermsen S. et al., "Using feedback through digital technology to disrupt and change habitual behavior: A critical review of current literature", Computers in Human Behavior, 2016-12-31. *
Ye Qiang, "Research on the application of flexible force-sensitive sensing in human motion information acquisition and feedback training", China Doctoral Dissertations Full-text Database, Information Science and Technology, 2017-09-15. *


Similar Documents

Publication Publication Date Title
Wang et al. Human action recognition by learning spatio-temporal features with deep neural networks
CN107742107B (en) Facial image classification method, device and server
Garg et al. Yoga pose classification: a CNN and MediaPipe inspired deep learning approach for real-world application
CN107944431B (en) Intelligent identification method based on motion change
Díaz-Pereira et al. Automatic recognition and scoring of olympic rhythmic gymnastic movements
Parmar et al. Measuring the quality of exercises
CN103827891A (en) Systems and methods of detecting body movements using globally generated multi-dimensional gesture data
CN110575663A (en) physical education auxiliary training method based on artificial intelligence
CN110490109A (en) Online human-body rehabilitation action recognition method based on monocular vision
US20230149774A1 (en) Handle Motion Counting Method and Terminal
CN113707271B (en) Fitness scheme generation method and system based on artificial intelligence and big data
CN113663312A (en) Quality evaluation method for equipment-free fitness actions based on micro-inertial sensing
Beily et al. A sensor based on recognition activities using smartphone
Li et al. Personrank: Detecting important people in images
Yang et al. Research on face recognition sports intelligence training platform based on artificial intelligence
Akhter Automated posture analysis of gait event detection via a hierarchical optimization algorithm and pseudo 2D stick-model
US20240042281A1 (en) User experience platform for connected fitness systems
Cheng et al. Periodic physical activity information segmentation, counting and recognition from video
Li et al. What and how well you exercised? An efficient analysis framework for fitness actions
Zeng et al. Deep learning approach to automated data collection and processing of video surveillance in sports activity prediction
CN115311745A (en) Pattern skating layered action recognition method
Karunaratne et al. Objectively measure player performance on Olympic weightlifting
Torres et al. Detection of proper form on upper limb strength training using extremely randomized trees for joint positions
Malawski et al. Automatic analysis of techniques and body motion patterns in sport
CN115294660B (en) Body-building action recognition model, training method of model and body-building action recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant