CN111626137A - Video-based motion evaluation method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111626137A
CN111626137A (application CN202010358879.2A)
Authority
CN
China
Prior art keywords
motion
video
frame image
coordinate point
key frame
Prior art date
Legal status
Pending
Application number
CN202010358879.2A
Other languages
Chinese (zh)
Inventor
赵宇
Current Assignee
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010358879.2A
Priority to PCT/CN2020/104967 (WO2021217927A1)
Publication of CN111626137A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of artificial intelligence and provides a video-based motion evaluation method, device, computer equipment, and storage medium. The method comprises: acquiring a first motion video of a user and a second motion video of a coach; identifying the user's motion type from the first motion video; extracting first key frame images from the first motion video and second key frame images from the second motion video according to the motion type; detecting first human body key points in the first key frame images and second human body key points in the second key frame images; calculating the degree of difference between the first and second human body key points; and evaluating the user's motion score from the degree of difference and the motion type. Because the key frame images are extracted adaptively according to the motion type and the degree of difference is computed from them, evaluating the user's motion by combining the motion type and the degree of difference achieves higher accuracy. The invention further relates to blockchain technology: the motion scores may be stored in blockchain nodes.

Description

Video-based motion evaluation method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of video processing in artificial intelligence, in particular to a motion evaluation method and device based on video, computer equipment and a storage medium.
Background
With social progress, people's fitness awareness has gradually strengthened, but access to gyms is limited by venue and time, so the demand for exercising at home during fragments of spare time keeps increasing, and sharing platforms provide more and more body-motion teaching videos.
To evaluate a home user's limb actions so that the user can objectively understand his or her exercise effect, the prior art generally extracts key frames from the user's motion video and from a standard motion video, then compares the key frames with a body recognition model to calculate the user's motion score.
However, the key frames are extracted in the same or a random manner regardless of the motion type, so the calculated motion score cannot effectively evaluate whether the user's body motion is standard, and the evaluation accuracy is low.
Disclosure of Invention
In view of the foregoing, there is a need for a video-based motion evaluation method, apparatus, computer device, and storage medium that can adaptively extract key frame images according to the motion type, calculate a degree of difference, and evaluate the user's motion by combining the motion type and the degree of difference with higher accuracy.
A first aspect of the present invention provides a video-based motion estimation method, including:
acquiring a first motion video of a user and a second motion video of a coach;
identifying the motion type of the user according to the first motion video;
extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type;
detecting a plurality of first human body key points in each first key frame image and a plurality of second human body key points in each second key frame image;
calculating a degree of dissimilarity between the plurality of first human keypoints and the plurality of second human keypoints;
evaluating a sports score of the user according to the degree of difference and the sports type.
According to an alternative embodiment of the present invention, the identifying the motion type of the user according to the first motion video comprises:
continuously extracting a first frame image, a second frame image and a third frame image in the first motion video;
calculating a first pixel difference value between the first frame image and the second frame image and calculating a second pixel difference value between the second frame image and the third frame image;
comparing an average difference between the first pixel difference and the second pixel difference to a plurality of preset difference ranges;
matching a target preset difference value range corresponding to the average difference value;
and determining the motion type corresponding to the target preset difference range as the motion type of the user.
According to an optional embodiment of the present invention, after the extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type, the video-based motion estimation method further comprises:
graying each first key frame image to obtain a first gray image, and graying each second key frame image to obtain a second gray image;
and compressing the first gray scale image to obtain a first compressed image, and compressing the second gray scale image to obtain a second compressed image.
According to an alternative embodiment of the present invention, said calculating the degree of difference between the plurality of first human keypoints and the plurality of second human keypoints comprises:
acquiring a first coordinate point of each first human body key point relative to the first key frame image and acquiring a second coordinate point of each second human body key point relative to the second key frame image;
acquiring the resolution of a display screen of the computer equipment;
converting each first coordinate point according to the resolution ratio to obtain a first standard coordinate point and converting each second coordinate point to obtain a second standard coordinate point;
and calculating the difference degree according to the first standard coordinate point and the second standard coordinate point.
According to an alternative embodiment of the present invention, the resolution comprises a first coefficient and a second coefficient, and the converting each first coordinate point according to the resolution to obtain a first standard coordinate point and converting each second coordinate point to obtain a second standard coordinate point includes:
calculating a first ratio of the length of the first key frame image to the first coefficient and calculating a second ratio of the width of the first key frame image to the second coefficient;
calculating a third ratio of the length of the second key frame image to the first coefficient and calculating a fourth ratio of the width of the second key frame image to the second coefficient;
scaling the first coordinate point by the first proportion in the vertical direction and scaling the first coordinate point by the second proportion in the horizontal direction to obtain a first standard coordinate point;
and scaling the second coordinate point by the third proportion in the vertical direction and scaling the second coordinate point by the fourth proportion in the horizontal direction to obtain a second standard coordinate point.
According to an alternative embodiment of the present invention, the calculating the degree of difference from the first standard coordinate point and the second standard coordinate point includes:
associating the first standard coordinate points and the corresponding second standard coordinate points according to the sequence of the time axis;
and calculating the distance value between each pair of associated coordinate points to obtain the degree of difference.
According to an alternative embodiment of the present invention, said evaluating the exercise score of the user according to the degree of difference and the exercise type comprises:
calculating the variance of all the difference degrees;
obtaining a score corresponding to the variance;
obtaining score weight corresponding to the motion type;
and obtaining the movement score of the user according to the product of the score and the score weight, wherein the movement score is stored in a block chain node.
A second aspect of the present invention provides a video-based motion estimation apparatus, comprising:
the video acquisition module is used for acquiring a first motion video of a user and a second motion video of a coach;
the type identification module is used for identifying the motion type of the user according to the first motion video;
a key frame extraction module, configured to extract a plurality of first key frame images in the first motion video and a plurality of second key frame images in the second motion video according to the motion type;
the key point detection module is used for detecting a plurality of first human body key points in each first key frame image and detecting a plurality of second human body key points in each second key frame image;
a disparity calculation module for calculating the disparity between the plurality of first human body key points and the plurality of second human body key points;
and the motion evaluation module is used for evaluating the motion score of the user according to the difference degree and the motion type.
A third aspect of the invention provides a computer device comprising a processor for implementing the video-based motion estimation method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video-based motion estimation method.
In summary, the video-based motion evaluation method, apparatus, computer equipment, and storage medium identify the user's motion type from the user's first motion video and extract key frames from the first and second motion videos according to that type. Because the key frame images are extracted adaptively according to the user's motion type, they reflect the user's body actions more effectively. The human body key points in the key frame images are then detected, the degree of difference between them is calculated, and finally the user's motion score is evaluated by combining the degree of difference with the motion type, so the result is more accurate and more practical.
Drawings
Fig. 1 is a flowchart of a video-based motion estimation method according to an embodiment of the present invention.
Fig. 2 is a block diagram of a video-based motion estimation apparatus according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Fig. 1 is a flowchart of a video-based motion estimation method according to an embodiment of the present invention. The video-based motion estimation method specifically comprises the following steps, and the sequence of the steps in the flowchart can be changed and some steps can be omitted according to different requirements.
And S11, acquiring the first motion video of the user and the second motion video of the coach.
The computer equipment can be provided with image acquisition equipment, through which the first motion video of the user and the second motion video of the coach can be captured.
The computer device can be provided with a communication module, and the communication module is used for receiving the first motion video of the user and the second motion video of the coach sent by other electronic devices.
The manner in which the computer device obtains the first motion video of the user and the second motion video of the trainer is not limited to the above list.
S12, identifying the motion type of the user according to the first motion video.
Because each user's learning situation differs, the user's motion type is identified from the first motion video and key frame images are extracted according to that type, which facilitates more accurate evaluation of the user's motion.
In an optional embodiment, the identifying the type of motion of the user from the first motion video comprises:
continuously extracting a first frame image, a second frame image and a third frame image in the first motion video;
calculating a first pixel difference value between the first frame image and the second frame image and calculating a second pixel difference value between the second frame image and the third frame image;
comparing an average difference between the first pixel difference and the second pixel difference to a plurality of preset difference ranges;
matching a target preset difference value range corresponding to the average difference value;
and determining the motion type corresponding to the target preset difference range as the motion type of the user.
In this optional embodiment, the pixel difference between two successive frame images effectively reflects the type of the user's motion: a larger pixel difference indicates larger or more frequent action changes, i.e. a vigorous motion or one typical of younger users; a smaller pixel difference indicates smaller or fewer action changes, i.e. a relaxing motion or one typical of elderly users.
A plurality of difference ranges are prestored in the computer equipment, each corresponding to one motion type. Three frames of images can be extracted consecutively from the middle of the first motion video, the pixel differences between adjacent frames are calculated, their average is computed, and the range into which the average falls is determined; the user's motion type then follows from the correspondence between difference ranges and motion types.
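For illustration, the identification step above can be sketched as follows; the difference ranges, motion-type labels, and nested-list frame representation are hypothetical examples, not values prescribed by the invention:

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two grayscale frames."""
    flat_a = [p for row in frame_a for p in row]
    flat_b = [p for row in frame_b for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_a)

# Hypothetical difference ranges, each corresponding to one motion type.
DIFF_RANGES = [
    ((0.0, 10.0), "relaxing"),   # small changes: gentle / senior exercise
    ((10.0, 256.0), "violent"),  # large changes: vigorous exercise
]

def identify_motion_type(frame1, frame2, frame3):
    """S12: classify the motion type from three consecutive frames."""
    d1 = mean_abs_diff(frame1, frame2)  # first pixel difference value
    d2 = mean_abs_diff(frame2, frame3)  # second pixel difference value
    avg = (d1 + d2) / 2                 # average difference value
    for (low, high), motion_type in DIFF_RANGES:
        if low <= avg < high:
            return motion_type          # matched target difference range
    return "unknown"
```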
S13, extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type.
Exemplarily, if the user's motion type is vigorous, the user's actions change quickly and the information expressed by two successive frames differs greatly, so key frame images can be extracted from the motion video every preset first number of frames; if the motion type is slow, the actions change slowly and successive frames differ little, so key frame images can be extracted every preset second number of frames.
The preset first frame number is smaller than the preset second frame number, for example 1 and 4 respectively. Extracting key frames at different intervals according to the motion type means that when the motion changes quickly, as many key frame images as possible are extracted for motion evaluation, improving accuracy; when the motion changes slowly, fewer key frame images are extracted, reducing the amount of calculation.
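This adaptive extraction can be sketched as follows, using the example frame numbers 1 and 4 above; the function name, the skip-based interpretation of "every preset frame number", and the frame representation are assumptions:

```python
def extract_key_frames(frames, motion_type):
    """S13: sample key frames at an interval chosen by the motion type.

    Keeps one frame, skips `skip` frames, and repeats; 1 and 4 are the
    example first and second preset frame numbers.
    """
    skip = 1 if motion_type == "violent" else 4
    return frames[::skip + 1]
```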
In an optional embodiment, after the extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type, the video-based motion estimation method further comprises:
graying each first key frame image to obtain a first gray image, and graying each second key frame image to obtain a second gray image;
and compressing the first gray scale image to obtain a first compressed image, and compressing the second gray scale image to obtain a second compressed image.
In this optional embodiment, graying the key frame images removes their color and texture information, which improves the efficiency with which the human body key point detection model detects key points in them; compressing the gray images further improves that efficiency.
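A minimal sketch of this preprocessing, using plain nested lists instead of a real image library; the BT.601 luma weights and the naive row/column downsampling are illustrative choices, not mandated by the invention:

```python
def to_gray(rgb_frame):
    """Grayscale conversion using BT.601 luma weights (illustrative)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_frame]

def compress(gray_frame, factor=2):
    """Naive compression: keep every `factor`-th row and column."""
    return [row[::factor] for row in gray_frame[::factor]]
```

In practice an image library would typically perform both steps; the sketch only shows the data flow.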
S14, detecting a plurality of first human key points in each first key frame image and detecting a plurality of second human key points in each second key frame image.
An open-source TensorFlow model can be downloaded and a human body key point detection model trained on it; the key points in the key frame images are then detected with this model. Training such a model is prior art and is not elaborated here.
The human body key points comprise 18 points: 0 nose, 1 clavicle midpoint, 2 right shoulder, 3 right elbow, 4 right wrist, 5 left shoulder, 6 left elbow, 7 left wrist, 8 right hip, 9 right knee, 10 right ankle, 11 left hip, 12 left knee, 13 left ankle, 14 right eye, 15 left eye, 16 right ear, and 17 left ear.
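The enumeration above can be captured as an index-to-name mapping (the dictionary name is an assumption):

```python
# The 18 human body key point indices as enumerated above.
KEYPOINT_NAMES = {
    0: "nose", 1: "clavicle midpoint", 2: "right shoulder", 3: "right elbow",
    4: "right wrist", 5: "left shoulder", 6: "left elbow", 7: "left wrist",
    8: "right hip", 9: "right knee", 10: "right ankle", 11: "left hip",
    12: "left knee", 13: "left ankle", 14: "right eye", 15: "left eye",
    16: "right ear", 17: "left ear",
}
```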
S15, calculating the difference degree between the plurality of first human key points and the plurality of second human key points.
After detecting the plurality of first human body key points in the first motion video of the user and the plurality of second human body key points in the second motion video of the coach, the computer device can calculate the difference degree according to the plurality of first human body key points and the plurality of second human body key points so as to evaluate the difference condition between the motion of the user and the motion of the coach.
In an optional embodiment, the calculating the degree of difference between the plurality of first human keypoints and the plurality of second human keypoints comprises:
acquiring a first coordinate point of each first human body key point relative to the first key frame image and acquiring a second coordinate point of each second human body key point relative to the second key frame image;
acquiring the resolution of a display screen of the computer equipment;
converting the first coordinate point according to the resolution ratio to obtain a first standard coordinate point and converting the second coordinate point to obtain a second standard coordinate point;
and calculating the difference degree according to the first standard coordinate point and the second standard coordinate point.
In this optional embodiment, because the user and the coach differ in height, build, and so on, the first and second coordinate points are relative positions. To simplify the subsequent calculation of the degree of difference, both sets of relative coordinates are converted onto the same display screen, so the resulting first and second standard coordinate points share the same scale and are comparable, making the calculated degree of difference more accurate.
In an alternative embodiment, the resolution is a first coefficient and a second coefficient.
In an optional embodiment, the converting the first coordinate point to obtain a first standard coordinate point and the converting the second coordinate point to obtain a second standard coordinate point according to the resolution includes:
calculating a first ratio of the length of the first key frame image to the first coefficient and calculating a second ratio of the width of the first key frame image to the second coefficient;
calculating a third ratio of the length of the second key frame image to the first coefficient and calculating a fourth ratio of the width of the second key frame image to the second coefficient;
scaling the first coordinate point by the first proportion in the vertical direction and scaling the first coordinate point by the second proportion in the horizontal direction to obtain a first standard coordinate point;
and scaling the second coordinate point by the third proportion in the vertical direction and scaling the second coordinate point by the fourth proportion in the horizontal direction to obtain a second standard coordinate point.
In this alternative embodiment, the coordinate points in the key frame image can be mapped onto the display screen by calculating the ratios between the key frame image dimensions and the resolution coefficients and scaling the coordinate points accordingly.
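A sketch of this mapping, under the assumption that "scaling by the ratio" means dividing the coordinate by the ratio of image size to resolution coefficient; the function and parameter names are hypothetical:

```python
def to_screen(point, image_size, resolution):
    """Map a key point from frame coordinates onto the display screen."""
    x, y = point                   # horizontal, vertical position in the frame
    img_len, img_wid = image_size  # length (height) and width of the frame
    coef1, coef2 = resolution      # first and second resolution coefficients
    ratio_v = img_len / coef1      # first ratio: vertical scaling
    ratio_h = img_wid / coef2      # second ratio: horizontal scaling
    return (x / ratio_h, y / ratio_v)
```

Applying the same mapping to both videos places the user's and the coach's key points in one shared coordinate system.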
In an optional embodiment, the calculating the degree of difference from the first standard coordinate point and the second standard coordinate point includes:
associating the first standard coordinate points and the corresponding second standard coordinate points according to the sequence of the time axis;
and calculating the distance value between each associated coordinate point to obtain the difference.
In this alternative embodiment, the first and second standard coordinate points are paired in time-axis order, and the Euclidean distance between each pair is calculated to obtain the degree of difference between the user's and the coach's actions.
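The pairing and distance computation can be sketched as follows (the function name is an assumption):

```python
import math

def difference_degrees(user_points, coach_points):
    """S15: pair standard coordinate points in time-axis order and
    compute the Euclidean distance of each pair."""
    return [math.dist(u, c) for u, c in zip(user_points, coach_points)]
```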
S16, evaluating the exercise score of the user according to the difference degree and the exercise type.
The invention relates to artificial intelligence: the user's motion type is obtained by identifying the first motion video, and key frames are extracted from the first and second motion videos according to that type, so the key frame images are extracted adaptively and reflect the user's body actions more effectively. The human body key points in the key frame images are detected with the key point detection model, the degree of difference between them is calculated, and the user's motion score is finally evaluated by combining the degree of difference with the motion type, which is more accurate and more practical.
In an optional embodiment, said evaluating the exercise score of the user according to the degree of difference and the exercise type comprises:
calculating the variance of all the difference degrees;
obtaining a score corresponding to the variance;
obtaining score weight corresponding to the motion type;
and obtaining the movement score of the user according to the product of the score and the score weight.
In this alternative embodiment, a first correspondence between variance and score may be preset, for example a variance of 0.1 corresponding to a score of 95 and a variance of 0.2 to a score of 85. A second correspondence between motion type and score weight may also be preset, for example a weight of 1.1 for senior-oriented exercise and 0.9 for youth-oriented exercise. The score is obtained from the variance via the first correspondence, the score weight from the motion type via the second correspondence, and the user's motion score is calculated from their product. Because the motion score considers not only the degree of difference between the user's and the coach's limb actions but also the motion type (for senior-oriented exercise the score can be enlarged, since elderly users' limbs bend with age), the calculated score is more accurate and better matches reality.
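A sketch of this scoring step using the example correspondences above; the single-threshold score rule and the type labels are illustrative simplifications of the preset lookup tables:

```python
def motion_score(diffs, motion_type):
    """S16: variance of the difference degrees -> score -> weighted score."""
    mean = sum(diffs) / len(diffs)
    variance = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    # Hypothetical first correspondence: variance -> score.
    score = 95 if variance <= 0.1 else 85
    # Hypothetical second correspondence: motion type -> score weight.
    weight = {"senior": 1.1, "young": 0.9}.get(motion_type, 1.0)
    return score * weight
```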
In an optional embodiment, the video-based motion estimation method further comprises:
acquiring a first target difference degree of which the difference degree is greater than a preset first difference degree threshold value;
acquiring a second target difference degree of which the difference degree is smaller than a preset second difference degree threshold value;
playing the first motion video and the second motion video on a display screen;
prompting the human body key points corresponding to the first target difference degree in the first motion video according to a preset first prompt; and/or
And prompting the human key points corresponding to the second target difference degree in the first motion video according to a preset second prompt.
The coach's motion video and the user's motion video are displayed simultaneously on the same screen, for example the user's first motion video on the right side of the computer device and the coach's second motion video on the left, so that the user can watch the standard limb actions as well as his or her own.
The preset prompt can comprise a text prompt, a voice prompt or an animation prompt and the like. The preset first difference threshold and the preset second difference threshold may be the same or different.
The preset first prompt indicates the human body key points where the action differs and points out the correct direction of movement for those key points (for example with a red arrow, optionally accompanied by a synchronous voice reminder). The preset second prompt highlights the human body key points whose action is standard, for example by changing their color or rendering them as the limbs of a cartoon character. While the motion videos play on the same screen, the preset prompts not only flag incorrect body movements but also give voice praise or rewards for correct ones, so the user receives both visual and auditory feedback.
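The selection of key points to prompt can be sketched as follows; the threshold values and names are hypothetical:

```python
def select_feedback_points(diffs, high_threshold=50.0, low_threshold=5.0):
    """Pick key point indices to prompt: large difference degrees are
    flagged as errors, small ones praised as standard (illustrative
    thresholds; in the invention both thresholds are preset values)."""
    errors = [i for i, d in enumerate(diffs) if d > high_threshold]
    standard = [i for i, d in enumerate(diffs) if d < low_threshold]
    return errors, standard
```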
In an alternative embodiment, the second motion video of the coach in this embodiment can be stored on the blockchain node, taking advantage of the distributed data storage and reading on the blockchain.
It is emphasized that the sports score may also be stored in a blockchain node to further ensure privacy and security of the sports score.
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. It is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Fig. 2 is a block diagram of a video-based motion evaluation apparatus according to a second embodiment of the present invention.
In some embodiments, the video-based motion evaluation apparatus 20 may include a plurality of functional modules composed of program code segments. The program code of the various program segments in the video-based motion evaluation apparatus 20 may be stored in a memory of a computer device and executed by the at least one processor to perform the functions of video-based motion evaluation (described in detail with respect to fig. 1).
In this embodiment, the video-based motion evaluation apparatus 20 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: a video acquisition module 201, a type identification module 202, a key frame extraction module 203, an image processing module 204, a key point detection module 205, a difference degree calculation module 206, a motion evaluation module 207, and a video playing module 208. A module referred to herein is a series of computer program segments that can be executed by at least one processor to perform a fixed function and that is stored in the memory. The functions of these modules will be described in detail in the following embodiments.
The video obtaining module 201 is configured to obtain a first motion video of a user and a second motion video of a coach.
The computer device can be provided with an image acquisition device, through which the first motion video of the user and the second motion video of the coach can be captured.
The computer device can be provided with a communication module, and the communication module is used for receiving the first motion video of the user and the second motion video of the coach sent by other electronic devices.
The manner in which the computer device obtains the first motion video of the user and the second motion video of the trainer is not limited to the above list.
The type identification module 202 is configured to identify a motion type of the user according to the first motion video.
Because each user's learning condition differs, the motion type of the user is identified from the first motion video and the key frame images are extracted according to that motion type, which facilitates improving the accuracy of evaluating the user's motion.
In an optional embodiment, the identifying, by the type identifying module 202, the motion type of the user according to the first motion video includes:
continuously extracting a first frame image, a second frame image and a third frame image in the first motion video;
calculating a first pixel difference value between the first frame image and the second frame image and calculating a second pixel difference value between the second frame image and the third frame image;
comparing an average difference between the first pixel difference and the second pixel difference to a plurality of preset difference ranges;
matching a target preset difference value range corresponding to the average difference value;
and determining the motion type corresponding to the target preset difference range as the motion type of the user.
In this optional embodiment, the pixel difference between two successive frames effectively reflects the user's motion type: a larger pixel difference means the user's movements have larger amplitude or change more often, indicating a vigorous motion or one suited to young people; a smaller pixel difference means the movements have smaller amplitude or change less often, indicating a gentle motion or one suited to elderly people.
A plurality of difference ranges are prestored in the computer device, and each difference range corresponds to one motion type. Three frames of images can be extracted consecutively from the middle part of the first motion video, the pixel difference between each pair of adjacent frames is calculated, the average of these two differences is computed, and the range into which the average falls is determined; the motion type of the user can then be determined from the correspondence between difference ranges and motion types.
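As a minimal sketch of the steps above, assuming hypothetical difference ranges and type names (the embodiment does not fix concrete values):

```python
import numpy as np

# Hypothetical difference ranges; each range corresponds to one motion type.
DIFF_RANGES = [
    (0.0, 10.0, "gentle"),             # small action amplitude / few changes
    (10.0, 30.0, "moderate"),
    (30.0, float("inf"), "vigorous"),  # large action amplitude / many changes
]

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute per-pixel difference between two frames."""
    return float(np.mean(np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))))

def identify_motion_type(frame1, frame2, frame3):
    """Average the two adjacent-frame differences and match a preset range."""
    avg = (mean_abs_diff(frame1, frame2) + mean_abs_diff(frame2, frame3)) / 2.0
    for low, high, motion_type in DIFF_RANGES:
        if low <= avg < high:
            return motion_type
    return "unknown"
```

The `int16` cast avoids `uint8` wrap-around when subtracting pixel values.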
The key frame extracting module 203 is configured to extract a plurality of first key frame images in the first motion video and a plurality of second key frame images in the second motion video according to the motion type.
Illustratively, if the motion type of the user is a vigorous motion, the user's movements change quickly and the information expressed by two successive frames differs greatly, so the key frame images in the motion video can be extracted every preset first number of frames; if the motion type of the user is a slow motion, the movements change slowly and successive frames differ little, so the key frame images can be extracted every preset second number of frames.
The preset first frame number is smaller than the preset second frame number; for example, the preset first frame number is 1 and the preset second frame number is 4. Extracting key frame images at an interval that depends on the motion type means that, when the motion changes quickly, as many key frame images as possible are extracted for motion evaluation, improving evaluation accuracy, and when the motion changes slowly, only a small number of key frame images are extracted, reducing the amount of computation.
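A sketch of this interval-based extraction; interpreting "extract every N frames" as keeping one frame and skipping N before the next key frame (a stride of N + 1) is an assumption:

```python
def extract_key_frames(frames, motion_type, fast_gap=1, slow_gap=4):
    """Sample key frames at a motion-type-dependent interval.

    fast_gap/slow_gap correspond to the preset first and second frame
    numbers in the text (1 and 4 in the example).
    """
    gap = fast_gap if motion_type == "vigorous" else slow_gap
    return frames[::gap + 1]
```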
The image processing module 204 is configured to process the plurality of first key frame images and process the plurality of second key frame images.
In an alternative embodiment, the image processing module 204 processes the plurality of first key frame images and processes the plurality of second key frame images includes:
graying each first key frame image to obtain a first gray image, and graying each second key frame image to obtain a second gray image;
and compressing the first gray scale image to obtain a first compressed image, and compressing the second gray scale image to obtain a second compressed image.
In this optional embodiment, graying the key frame images removes their color and texture information, which improves the efficiency with which the human key point detection model detects the human key points in the key frame images; compressing the gray images improves that efficiency further.
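A numpy-only sketch of the graying and compression steps; a real pipeline would more likely use OpenCV's `cvtColor` and `resize`, and the RGB channel order assumed here is an assumption:

```python
import numpy as np

def preprocess(frame, scale=2):
    """Gray a key frame, then 'compress' it by naive downsampling.

    Assumes frame is an RGB uint8 array of shape (H, W, 3).
    """
    gray = (0.299 * frame[..., 0]      # R
            + 0.587 * frame[..., 1]    # G
            + 0.114 * frame[..., 2])   # B, ITU-R BT.601 luma weights
    small = gray[::scale, ::scale]     # keep every `scale`-th row and column
    return np.round(small).astype(np.uint8)
```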
The key point detecting module 205 is configured to detect a plurality of first human body key points in each first key frame image and a plurality of second human body key points in each second key frame image.
A human body key point detection model can be trained based on the open-source TensorFlow framework, and the plurality of human key points in each key frame image are detected by this model. The training process of the human body key point detection model belongs to the prior art and is not elaborated here.
The human body key points include 18 points: 0 nose, 1 clavicle midpoint, 2 right shoulder, 3 right elbow, 4 right wrist, 5 left shoulder, 6 left elbow, 7 left wrist, 8 right hip, 9 right knee, 10 right ankle, 11 left hip, 12 left knee, 13 left ankle, 14 right eye, 15 left eye, 16 right ear, and 17 left ear.
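The 18-point layout above can be captured as an index-to-name mapping:

```python
# Index-to-name mapping for the 18 human body key points listed in the text.
KEYPOINTS = {
    0: "nose", 1: "clavicle midpoint", 2: "right shoulder", 3: "right elbow",
    4: "right wrist", 5: "left shoulder", 6: "left elbow", 7: "left wrist",
    8: "right hip", 9: "right knee", 10: "right ankle", 11: "left hip",
    12: "left knee", 13: "left ankle", 14: "right eye", 15: "left eye",
    16: "right ear", 17: "left ear",
}
```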
The difference calculating module 206 is configured to calculate differences between the plurality of first human body key points and the plurality of second human body key points.
After detecting the plurality of first human body key points in the first motion video of the user and the plurality of second human body key points in the second motion video of the coach, the computer device can calculate the difference degree according to the plurality of first human body key points and the plurality of second human body key points so as to evaluate the difference condition between the motion of the user and the motion of the coach.
In an alternative embodiment, the calculating the difference degree between the plurality of first human body key points and the plurality of second human body key points by the difference degree calculating module 206 comprises:
acquiring a first coordinate point of each first human body key point relative to the first key frame image and acquiring a second coordinate point of each second human body key point relative to the second key frame image;
acquiring the resolution of a display screen of the computer equipment;
converting the first coordinate point according to the resolution to obtain a first standard coordinate point and converting the second coordinate point to obtain a second standard coordinate point;
and calculating the difference degree according to the first standard coordinate point and the second standard coordinate point.
In this optional embodiment, because the user and the coach differ in height, build, and so on, the first coordinate point and the second coordinate point are relative positions. To facilitate the subsequent calculation of the difference degree, the relative first and second coordinate points are converted onto the same display screen; the first standard coordinate point and the second standard coordinate point obtained after conversion then share the same scale and are comparable, so the calculated difference degree is more accurate.
In an alternative embodiment, the resolution consists of a first coefficient and a second coefficient.
In an optional embodiment, the converting the first coordinate point to obtain a first standard coordinate point and the converting the second coordinate point to obtain a second standard coordinate point according to the resolution includes:
calculating a first ratio of the length of the first key frame image to the first coefficient and calculating a second ratio of the width of the first key frame image to the second coefficient;
calculating a third ratio of the length of the second key frame image to the first coefficient and calculating a fourth ratio of the width of the second key frame image to the second coefficient;
scaling the first coordinate point by the first proportion in the vertical direction and scaling the first coordinate point by the second proportion in the horizontal direction to obtain a first standard coordinate point;
and scaling the second coordinate point by the third proportion in the vertical direction and scaling the second coordinate point by the fourth proportion in the horizontal direction to obtain a second standard coordinate point.
In this alternative embodiment, the coordinate points in the key frame image may be mapped to the display screen by calculating the ratio between the key frame image and the resolution and scaling the coordinate points according to the ratio.
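A sketch of this mapping, under the assumption that the first and second coefficients are the screen's horizontal and vertical pixel counts, and that "scaling by the proportion" means dividing the coordinate by the image-to-screen ratio (equivalently, multiplying by screen / image):

```python
def to_screen(point, image_size, resolution):
    """Map an (x, y) key point from key-frame-image coordinates onto the
    display screen.

    image_size: (width, height) of the key frame image.
    resolution: (first_coefficient, second_coefficient), taken here as the
                screen's horizontal and vertical pixel counts.
    """
    x, y = point
    img_w, img_h = image_size
    screen_w, screen_h = resolution
    # Dividing by (image / screen) is the same as multiplying by (screen / image).
    return (x * screen_w / img_w, y * screen_h / img_h)
```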
In an optional embodiment, the calculating the degree of difference from the first standard coordinate point and the second standard coordinate point includes:
associating the first standard coordinate points and the corresponding second standard coordinate points according to the sequence of the time axis;
and calculating the distance value between each associated coordinate point to obtain the difference.
In this alternative embodiment, the first standard coordinate point and the second standard coordinate point are mapped in the order of the time axis, and then the euclidean distance between every two coordinate points is calculated to obtain the degree of difference between the user and the coach.
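The association and distance step can be sketched as:

```python
import math

def difference_degrees(user_points, coach_points):
    """Pair user and coach standard coordinate points in time-axis order
    and return the Euclidean distance of each associated pair."""
    return [math.dist(u, c) for u, c in zip(user_points, coach_points)]
```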
The motion evaluation module 207 is configured to evaluate a motion score of the user according to the difference degree and the motion type.
The motion type of the user is obtained by identifying the first motion video, and key frames are extracted from the first motion video and the second motion video according to that motion type. Because the key frame images are extracted adaptively according to the user's motion type, the extracted key frame images reflect the user's limb movements more effectively. The human key points in the key frame images are then detected, the difference degrees between the human key points are calculated, and the user's motion score is evaluated by combining the difference degrees with the motion type, which yields higher accuracy and better fits reality.
In an optional embodiment, the motion evaluation module 207 evaluating the motion score of the user according to the difference degree and the motion type includes:
calculating the variance of all the difference degrees;
obtaining a score corresponding to the variance;
obtaining score weight corresponding to the motion type;
and obtaining the motion score of the user according to the product of the score and the score weight.
In this alternative embodiment, a first correspondence between variance and score may be preset; for example, a variance of 0.1 corresponds to a score of 95, and a variance of 0.2 corresponds to a score of 85. A second correspondence between motion type and score weight may also be preset; for example, an elderly-type motion has a score weight of 1.1 and a youth-type motion a score weight of 0.9. The score is obtained from the variance according to the first correspondence, the score weight is obtained from the motion type according to the second correspondence, and finally the user's motion score is calculated from the score and the score weight. Because the calculated motion score considers not only the degree of difference in limb movements between the user and the coach but also the type of the motion, the score can be scaled up for elderly-type exercise, since elderly people's limbs bend increasingly with age; the calculated motion score is therefore more accurate and better reflects the user's actual performance.
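A sketch of the scoring step, using the example values from the text as hypothetical lookup tables; nearest-variance matching is an assumption, since the embodiment only states that a score corresponds to each variance:

```python
# Hypothetical lookup tables built from the examples in the text.
VARIANCE_SCORES = [(0.1, 95), (0.2, 85)]        # variance -> score
TYPE_WEIGHTS = {"elderly": 1.1, "young": 0.9}   # motion type -> score weight

def motion_score(variance, motion_type):
    """Look up the score for the nearest preset variance, then apply the
    score weight for the motion type (weight 1.0 if the type is unknown)."""
    _, score = min(VARIANCE_SCORES, key=lambda entry: abs(entry[0] - variance))
    return score * TYPE_WEIGHTS.get(motion_type, 1.0)
```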
The video playing module 208 is configured to play the first motion video and the second motion video according to a preset prompt.
In an optional embodiment, the video playing module 208 plays the first motion video and the second motion video according to a preset prompt, including:
acquiring a first target difference degree of which the difference degree is greater than a preset first difference degree threshold value;
acquiring a second target difference degree of which the difference degree is smaller than a preset second difference degree threshold value;
playing the first motion video and the second motion video on a display screen;
prompting the human body key points corresponding to the first target difference degree in the first motion video according to a preset first prompt; and/or
And prompting the human key points corresponding to the second target difference degree in the first motion video according to a preset second prompt.
The exercise video of the coach and the exercise video of the user are displayed simultaneously on the same display screen; for example, the first motion video of the user is displayed on the right side of the computer device and the second motion video of the coach on the left side, so that the user can watch the standard limb movements as well as his or her own.
The preset prompt can comprise a text prompt, a voice prompt or an animation prompt and the like. The preset first difference threshold and the preset second difference threshold may be the same or different.
The preset first prompt is used for notifying the user of the human body key points whose actions differ from the coach's, and for indicating the correct moving direction of those key points based on their positions (for example, with red arrows, optionally accompanied by a synchronous voice reminder). The preset second prompt is used for indicating the human body key points whose actions meet the standard; the positions of these key points are highlighted by changing their color or by rendering them as the limbs of a cartoon character. When the two motion videos are played on the same screen, prompts are given according to the preset prompt modes not only when a body movement is wrong but also, by voice praise or reward, when a body movement is correct, so that the user receives both visual and auditory feedback.
In an alternative embodiment, the second motion video of the coach can be stored on a blockchain node, taking advantage of the distributed data storage and reading capability of the blockchain.
It is emphasized that the sports score may also be stored in a blockchain node to further ensure privacy and security of the sports score.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, where each data block contains information on a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Fig. 3 is a schematic structural diagram of a computer device according to a third embodiment of the present invention. In the preferred embodiment of the present invention, the computer device 3 includes a memory 31, at least one processor 32, at least one communication bus 33, and a transceiver 34.
It will be appreciated by those skilled in the art that the configuration shown in fig. 3 does not constitute a limitation of the embodiments of the present invention; it may be a bus-type or a star-type configuration, and the computer device 3 may include more or fewer hardware or software components than shown, or a different arrangement of components.
In some embodiments, the computer device 3 is a computer device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and the hardware thereof includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The computer device 3 may also include a client device, which includes, but is not limited to, any electronic product capable of interacting with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, etc.
It should be noted that the computer device 3 is only an example, and other electronic products that are currently available or may come into existence in the future, such as electronic products that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
In some embodiments, program code is stored in the memory 31, and the at least one processor 32 may call the program code stored in the memory 31 to perform related functions. For example, the respective modules described in the above embodiments are program codes stored in the memory 31 and executed by the at least one processor 32, thereby realizing the functions of the respective modules. The memory 31 may include a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
In some embodiments, the at least one processor 32 is the control unit of the computer device 3: it connects the various components of the entire computer device 3 by using various interfaces and lines, and executes various functions and processes data of the computer device 3 by running or executing the programs or modules stored in the memory 31 and calling the data stored in the memory 31. For example, when executing the program code stored in the memory, the at least one processor 32 implements all or a portion of the steps of the video-based motion evaluation method described in the embodiments of the present invention. The at least one processor 32 may be composed of integrated circuits, for example a single packaged integrated circuit, or of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 33 is arranged to enable connection communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the computer device 3 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 32 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 3 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a computer device, or a network device) or a processor to execute parts of the video-based motion evaluation method according to the embodiments of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A video-based motion evaluation method, comprising:
acquiring a first motion video of a user and a second motion video of a coach;
identifying the motion type of the user according to the first motion video;
extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type;
detecting a plurality of first human body key points in each first key frame image and a plurality of second human body key points in each second key frame image;
calculating a degree of difference between the plurality of first human body key points and the plurality of second human body key points;
evaluating a motion score of the user according to the degree of difference and the motion type.
2. The video-based motion evaluation method of claim 1, wherein said identifying the motion type of the user from the first motion video comprises:
continuously extracting a first frame image, a second frame image and a third frame image in the first motion video;
calculating a first pixel difference value between the first frame image and the second frame image and calculating a second pixel difference value between the second frame image and the third frame image;
comparing an average difference between the first pixel difference and the second pixel difference to a plurality of preset difference ranges;
matching a target preset difference value range corresponding to the average difference value;
and determining the motion type corresponding to the target preset difference range as the motion type of the user.
3. The video-based motion evaluation method of claim 1, wherein after said extracting a plurality of first key frame images in the first motion video and extracting a plurality of second key frame images in the second motion video according to the motion type, the video-based motion evaluation method further comprises:
graying each first key frame image to obtain a first gray image, and graying each second key frame image to obtain a second gray image;
and compressing the first gray scale image to obtain a first compressed image, and compressing the second gray scale image to obtain a second compressed image.
4. The video-based motion evaluation method of claim 1, wherein said calculating the degree of difference between the plurality of first human body key points and the plurality of second human body key points comprises:
acquiring a first coordinate point of each first human body key point relative to the first key frame image and acquiring a second coordinate point of each second human body key point relative to the second key frame image;
acquiring the resolution of a display screen of the computer equipment;
converting each first coordinate point according to the resolution to obtain a first standard coordinate point and converting each second coordinate point to obtain a second standard coordinate point;
and calculating the difference degree according to the first standard coordinate point and the second standard coordinate point.
5. The video-based motion evaluation method of claim 4, wherein the resolution consists of a first coefficient and a second coefficient, and wherein converting the first coordinate point to obtain a first standard coordinate point and converting the second coordinate point to obtain a second standard coordinate point according to the resolution comprises:
calculating a first ratio of the length of the first key frame image to the first coefficient and calculating a second ratio of the width of the first key frame image to the second coefficient;
calculating a third ratio of the length of the second key frame image to the first coefficient and calculating a fourth ratio of the width of the second key frame image to the second coefficient;
scaling the first coordinate point by the first proportion in the vertical direction and scaling the first coordinate point by the second proportion in the horizontal direction to obtain a first standard coordinate point;
and scaling the second coordinate point by the third proportion in the vertical direction and scaling the second coordinate point by the fourth proportion in the horizontal direction to obtain a second standard coordinate point.
6. The video-based motion evaluation method of claim 5, wherein the calculating a degree of difference from the first standard coordinate point and the second standard coordinate point comprises:
associating the first standard coordinate points and the corresponding second standard coordinate points according to the sequence of the time axis;
and calculating the distance value between each associated coordinate point to obtain the difference.
7. The video-based motion evaluation method of claim 6, wherein evaluating the motion score of the user based on the degree of difference and the motion type comprises:
calculating the variance of all the difference degrees;
obtaining a score corresponding to the variance;
obtaining score weight corresponding to the motion type;
and obtaining the motion score of the user according to the product of the score and the score weight, wherein the motion score is stored in a blockchain node.
8. A video-based motion evaluation apparatus, comprising:
the video acquisition module is used for acquiring a first motion video of a user and a second motion video of a coach;
the type identification module is used for identifying the motion type of the user according to the first motion video;
a key frame extraction module, configured to extract a plurality of first key frame images in the first motion video and a plurality of second key frame images in the second motion video according to the motion type;
the key point detection module is used for detecting a plurality of first human body key points in each first key frame image and detecting a plurality of second human body key points in each second key frame image;
a disparity calculation module for calculating the disparity between the plurality of first human body key points and the plurality of second human body key points;
and the motion evaluation module is used for evaluating the motion score of the user according to the difference degree and the motion type.
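(Illustrative only, not part of the claims.) The six modules of claim 8 compose a straightforward pipeline. A sketch with the modules injected as callables; all names are assumptions and the detectors/extractors themselves are stubs to be supplied:

```python
class VideoMotionEvaluator:
    """Composes the six claimed modules into an evaluation pipeline."""

    def __init__(self, identify_type, extract_keyframes, detect_keypoints,
                 calc_difference, evaluate_score):
        self.identify_type = identify_type          # type identification module
        self.extract_keyframes = extract_keyframes  # key frame extraction module
        self.detect_keypoints = detect_keypoints    # key point detection module
        self.calc_difference = calc_difference      # difference calculation module
        self.evaluate_score = evaluate_score        # motion evaluation module

    def evaluate(self, user_video, coach_video):
        # Video acquisition is assumed done by the caller (video acquisition module)
        motion_type = self.identify_type(user_video)
        user_frames = self.extract_keyframes(user_video, motion_type)
        coach_frames = self.extract_keyframes(coach_video, motion_type)
        user_points = [self.detect_keypoints(f) for f in user_frames]
        coach_points = [self.detect_keypoints(f) for f in coach_frames]
        diffs = self.calc_difference(user_points, coach_points)
        return self.evaluate_score(diffs, motion_type)
```

Wiring in trivial stubs shows the data flow: `VideoMotionEvaluator(lambda v: "squat", lambda v, t: [v], lambda f: [(0, 0)], lambda a, b: [0.0], lambda d, t: 100.0).evaluate("user.mp4", "coach.mp4")` returns the stubbed score.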
9. A computer device, characterized in that the computer device comprises a processor for implementing the video-based motion evaluation method according to any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the video-based motion evaluation method according to any one of claims 1 to 7.
CN202010358879.2A 2020-04-29 2020-04-29 Video-based motion evaluation method and device, computer equipment and storage medium Pending CN111626137A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010358879.2A CN111626137A (en) 2020-04-29 2020-04-29 Video-based motion evaluation method and device, computer equipment and storage medium
PCT/CN2020/104967 WO2021217927A1 (en) 2020-04-29 2020-07-27 Video-based exercise evaluation method and apparatus, and computer device and storage medium

Publications (1)

Publication Number Publication Date
CN111626137A true CN111626137A (en) 2020-09-04

Family

ID=72258894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010358879.2A Pending CN111626137A (en) 2020-04-29 2020-04-29 Video-based motion evaluation method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111626137A (en)
WO (1) WO2021217927A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627559B (en) * 2022-05-11 2022-08-30 深圳前海运动保网络科技有限公司 Exercise plan planning method, device, equipment and medium based on big data analysis

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012048362A (en) * 2010-08-25 2012-03-08 Kddi Corp Device and method for human body pose estimation, and computer program
CN108446583A (en) * 2018-01-26 2018-08-24 西安电子科技大学昆山创新研究院 Human behavior recognition method based on pose estimation
CN108615055A (en) * 2018-04-19 2018-10-02 咪咕动漫有限公司 Similarity calculation method, device and computer-readable storage medium
US20180315215A1 (en) * 2015-10-30 2018-11-01 Agfa Healthcare Compressing and uncompressing method for high bit-depth medical gray scale images
CN110445951A (en) * 2018-05-02 2019-11-12 腾讯科技(深圳)有限公司 Filtering method and device, storage medium, the electronic device of video
CN110471529A (en) * 2019-08-07 2019-11-19 北京卡路里信息技术有限公司 Action scoring method and device
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6326701B2 (en) * 2014-11-04 2018-05-23 国立大学法人宇都宮大学 Cooperative movement evaluation device
KR102081099B1 (en) * 2018-04-24 2020-04-23 오창휘 A system for practicing the motion displayed by display device in real time
CN110347877B (en) * 2019-06-27 2022-02-11 北京奇艺世纪科技有限公司 Video processing method and device, electronic equipment and storage medium
CN111062239A (en) * 2019-10-15 2020-04-24 平安科技(深圳)有限公司 Human body target detection method and device, computer equipment and storage medium

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446313A (en) * 2020-11-20 2021-03-05 山东大学 Volleyball action recognition method based on improved dynamic time warping algorithm
CN112418153A (en) * 2020-12-04 2021-02-26 上海商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and computer storage medium
CN112418153B (en) * 2020-12-04 2024-06-11 上海商汤科技开发有限公司 Image processing method, device, electronic equipment and computer storage medium
CN112766638A (en) * 2020-12-28 2021-05-07 惠州学院 Method and system for analyzing working efficiency of pipeline operators based on video images
CN112926440A (en) * 2021-02-22 2021-06-08 北京市商汤科技开发有限公司 Action comparison method and device, electronic equipment and storage medium
WO2022174544A1 (en) * 2021-02-22 2022-08-25 北京市商汤科技开发有限公司 Action comparison method, apparatus, electronic device, storage medium, computer program product and computer program
CN114819474A (en) * 2022-03-07 2022-07-29 新瑞鹏宠物医疗集团有限公司 Physician evaluation method and device, electronic equipment and storage medium
CN114373549B (en) * 2022-03-22 2022-06-10 北京大学 Self-adaptive exercise prescription health intervention method and system for old people
CN114373549A (en) * 2022-03-22 2022-04-19 北京大学 Self-adaptive exercise prescription health intervention method and system for old people
CN115243101A (en) * 2022-06-20 2022-10-25 上海众源网络有限公司 Video dynamic and static rate identification method and device, electronic equipment and storage medium
CN115243101B (en) * 2022-06-20 2024-04-12 上海众源网络有限公司 Video dynamic and static ratio identification method and device, electronic equipment and storage medium
CN115115822A (en) * 2022-06-30 2022-09-27 小米汽车科技有限公司 Vehicle-end image processing method and device, vehicle, storage medium and chip
CN115115822B (en) * 2022-06-30 2023-10-31 小米汽车科技有限公司 Vehicle-end image processing method and device, vehicle, storage medium and chip
CN117216313A (en) * 2023-09-13 2023-12-12 中关村科学城城市大脑股份有限公司 Attitude evaluation audio output method, attitude evaluation audio output device, electronic equipment and readable medium

Also Published As

Publication number Publication date
WO2021217927A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN111626137A (en) Video-based motion evaluation method and device, computer equipment and storage medium
CN111563487B (en) Dance scoring method based on gesture recognition model and related equipment
CN105426827B (en) Living body verification method, device and system
US20190066327A1 (en) Non-transitory computer-readable recording medium for storing skeleton estimation program, skeleton estimation device, and skeleton estimation method
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
KR102377561B1 (en) Apparatus and method for providing taekwondo movement coaching service using mirror display
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
CN112543936B (en) Motion-structure self-attention graph convolutional network model for action recognition
CN114998934B (en) Clothes-changing pedestrian re-identification and retrieval method based on multi-mode intelligent perception and fusion
CN110210194A (en) Electronic contract display methods, device, electronic equipment and storage medium
CN116311539B (en) Sleep motion capturing method, device, equipment and storage medium based on millimeter waves
US20220207921A1 (en) Motion recognition method, storage medium, and information processing device
CN114066534A (en) Elevator advertisement delivery method, device, equipment and medium based on artificial intelligence
CN111507301B (en) Video processing method, video processing device, computer equipment and storage medium
CN113887408A (en) Method, device and equipment for detecting activated face video and storage medium
US20220222975A1 (en) Motion recognition method, non-transitory computer-readable recording medium and information processing apparatus
Weitz et al. InfiniteForm: A synthetic, minimal bias dataset for fitness applications
CN113705469A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN112686232A (en) Teaching evaluation method and device based on micro expression recognition, electronic equipment and medium
CN112070662B (en) Evaluation method and device of face changing model, electronic equipment and storage medium
CN111860357B (en) Attendance rate calculating method and device based on living body identification, terminal and storage medium
CN113255456A (en) Non-active living body detection method, device, electronic equipment and storage medium
CN113392744A (en) Dance motion aesthetic feeling confirmation method and device, electronic equipment and storage medium
CN117423166B (en) Motion recognition method and system according to human body posture image data
CN117392760B (en) Health guidance method and system based on halved cross network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination