CN113743237B - Method and device for judging accuracy of follow-up action, electronic equipment and storage medium


Info

Publication number
CN113743237B
Authority
CN
China
Prior art keywords
image
gesture
dimensional
user
matching degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110921137.0A
Other languages
Chinese (zh)
Other versions
CN113743237A (en)
Inventor
Su Tongle (苏同乐)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202110921137.0A
Publication of CN113743237A
Application granted
Publication of CN113743237B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance
    • G06Q 50/2057 Career enhancement or continuing education service

Abstract

The embodiment of the invention provides a method, an apparatus, an electronic device and a storage medium for judging the accuracy of a follow-up action. The method comprises the following steps: capturing the plane image data and space data corresponding to each gesture of a user as the user moves along with a preset standard video; for each gesture of the user, generating a three-dimensional gesture image from the plane image data and space data corresponding to the current gesture; and performing matching degree calculation between each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, then obtaining and outputting the matching degree calculation result. The preset standard video comprises a plurality of standard three-dimensional gesture images, and each three-dimensional gesture image and its corresponding standard image correspond to the same gesture. The invention can convert the real gesture of the user into a corresponding three-dimensional gesture image so that the user's gesture is accurately identified from the image, correct the user's gesture through matching degree calculation, reduce the difficulty of matching degree detection, and ensure detection accuracy.

Description

Method and device for judging accuracy of follow-up action, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and apparatus for determining accuracy of a following action, an electronic device, and a storage medium.
Background
In the prior art, for scenes suited to human-body behavior detection, such as online live action teaching, follow-along training with fitness videos, and self posture correction, motion capture is generally performed by combining camera images of the person with artificial intelligence (AI) technology.
In such a camera-plus-AI motion capture scheme, when the electronic device performs motion recognition it relies on the camera and on AI computation (the purpose of the AI computation is to recognize features), and the acquired image is a planar image, so the motion amplitude cannot be accurately recognized. Motion capture by camera therefore suffers from the problem that the motion amplitude cannot be accurately identified.
Because the motion amplitude cannot be accurately identified when motion is captured by the camera, motion matching degree detection scenes suffer from high detection difficulty and low accuracy.
Disclosure of Invention
The embodiment of the invention provides a method, an apparatus, an electronic device and a storage medium for judging the accuracy of a follow-up action, so as to solve the prior-art problems that the motion amplitude cannot be accurately identified when motion is captured by a camera, and that matching degree detection in motion matching scenes is difficult and inaccurate.
In a first aspect of an embodiment of the present invention, there is provided a method for determining accuracy of a following action, including:
capturing plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of space distances between measuring equipment and the user and included angles between two line segments corresponding to any two space distances;
for each gesture of the user, generating a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture;
performing matching degree calculation on each three-dimensional posture image of the user and a corresponding standard three-dimensional posture image, and obtaining and outputting a matching degree calculation result;
the preset standard video comprises a plurality of standard three-dimensional posture images, and the three-dimensional posture images and the corresponding standard three-dimensional posture images correspond to the same posture.
In a second aspect of the embodiments of the present invention, there is provided an accuracy determination apparatus of a follow-up action, including:
a capturing module, configured to capture the plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of space distances between measuring equipment and the user and included angles between the two line segments corresponding to any two space distances;
a generating module, configured to generate, for each gesture of the user, a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture;
a processing module, configured to perform matching degree calculation on each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, and to obtain and output the matching degree calculation result;
the preset standard video comprises a plurality of standard three-dimensional posture images, and the three-dimensional posture images and the corresponding standard three-dimensional posture images correspond to the same posture.
In a third aspect of the embodiment of the present invention, there is also provided an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the following action accuracy judging method when executing the program stored in the memory.
In a fourth aspect of the present invention, there is also provided a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the above-described following action accuracy determination method.
In a fifth aspect of the invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the above-described method of determining accuracy of a follow-up action.
The embodiment of the invention at least comprises the following technical effects:
according to the technical scheme, the plane image data and the space data corresponding to each gesture of the user when the user moves along with the preset standard video are captured, the corresponding three-dimensional gesture image is generated according to the plane image data and the space data corresponding to the current gesture, the three-dimensional stereo data can be determined based on the plane image data and the space data, the real gesture of the user is converted into the corresponding three-dimensional gesture image, and the gesture of the user is accurately identified; the matching degree calculation is carried out on the three-dimensional gesture image and the corresponding standard three-dimensional gesture image, so that a matching degree calculation result is obtained and output, an auxiliary mechanism can be provided, better auxiliary analysis is carried out on the gesture of the user, and the gesture of the user is corrected; and through carrying out accurate action range discernment, can reduce the detection degree of difficulty, guarantee to detect the rate of accuracy when calculating the matching degree in order to carry out the matching degree detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
FIG. 1 is a schematic diagram of a method for determining accuracy of a follow-up action according to an embodiment of the present invention;
FIG. 2 is a flowchart of a specific example of the method for determining accuracy of a follow-up action according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for determining accuracy of a follow-up action according to an embodiment of the present invention;
FIG. 4 is another schematic diagram of the apparatus for determining accuracy of a follow-up action according to an embodiment of the present invention;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The core concept of the embodiment of the invention is to combine spatial distance and included-angle detection with image capture, so as to realize three-dimensional data analysis and thereby accurately identify the motion amplitude. Accurate motion amplitude identification reduces detection difficulty and ensures detection accuracy in motion matching degree detection scenes. The method for determining accuracy of a follow-up action according to the embodiment of the present invention is described below. Referring to fig. 1, the method is applied to an electronic device and includes the following steps:
step 101, capturing plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of space distances between measuring equipment and the user and included angles between two line segments corresponding to any two space distances.
In scenes suited to human-body behavior detection, such as online live action teaching, follow-along training with fitness videos, and self posture correction, a user can imitate or follow the actions in a preset standard video, thereby forming the user's own actions. The preset standard video can be an action teaching video, a standard fitness video, a standard posture correction video, or another type of standard video for human-body behavior detection.
The electronic device can capture the plane image data and space data corresponding to each gesture of the user when the user moves along with the preset standard video. There may be one or more users and, correspondingly, one or more persons in the preset standard video. When there are multiple users, they can form a user group that moves along with the preset standard video as a group; in this case the preset standard video corresponds to a target group formed by multiple people. For each user, the electronic device captures data for each gesture of that user and obtains the corresponding plane image data and space data, where the space data can include a plurality of spatial distances between the measuring device and the user and the included angles between the two line segments corresponding to any two of the spatial distances.
Capturing the plane image data and space data corresponding to each gesture of the user as the user moves along with the preset standard video includes:
for each gesture of the user when moving along with the preset standard video, acquiring the corresponding plane image data through an image acquisition device and acquiring the corresponding space data through a measuring device;
wherein the image acquisition device captures a plane image of the user, and the measuring device measures a plurality of spatial distances between the image acquisition device and the user and obtains the included angle between the two line segments corresponding to any two of the spatial distances.
When the user moves along with the preset standard video and data is captured for each of the user's gestures to obtain the corresponding plane image data and space data, the image acquisition device can be used to photograph the user to obtain the plane image data, and the measuring device can be used to obtain the corresponding space data.
The image acquisition device may be integrated in the electronic device, in which case the electronic device captures plane image data through its own integrated image acquisition device. Alternatively, the image acquisition device may be separate from the electronic device, i.e., the two are independent devices; in that case the electronic device is communicatively connected to the image acquisition device, so that it can obtain the plane image data collected by the image acquisition device and realize image capture through interaction with it.
Likewise, the measuring device may be integrated in the electronic device, in which case the electronic device obtains, through the measuring device, the plurality of spatial distances between the measuring device and the user and a certain number of included angles. The measuring device may instead be separate from the electronic device, i.e., the two are independent devices; in that case the electronic device is communicatively connected to the measuring device and can obtain the spatial distances and included angles measured by it, thereby acquiring the user's space data through interaction with the measuring device. The measuring device can measure the spatial distances to different body parts of the user and obtain the included angle between the two line segments corresponding to any two spatial distances, thereby obtaining a plurality of spatial distances and a certain number of included angles.
The image acquisition device can be a camera installed on the electronic device or a camera independent of the electronic device. The measuring device may be a range finder, such as a lidar or another device, and may likewise be integrated in or separate from the electronic device. Integrating the image acquisition device and the measuring device in the electronic device unifies the hardware, so the electronic device can obtain plane image data and space data without interacting with other devices. Keeping the image acquisition device, the measuring device and the electronic device independent of one another avoids an integrated design, saving design cost while still acquiring the plane image data and space data.
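As a minimal, non-authoritative sketch of this acquisition step, the following Python fragment pairs one camera frame with the rangefinder measurements; `camera.read()`, `rangefinder.distance_to()` and `rangefinder.angle_between()` are hypothetical interfaces standing in for whatever SDK the actual devices expose:

```python
from itertools import combinations

def capture_pose(camera, rangefinder, feature_points):
    """One capture cycle for the current gesture: a plane image plus the
    space data (distances and pairwise included angles) described above.
    All device methods used here are assumptions, not from the patent."""
    frame = camera.read()  # plane image data for the current gesture
    # One spatial distance per tracked feature point of the user.
    distances = {p: rangefinder.distance_to(p) for p in feature_points}
    # Included angle between the two rays (device -> point) for every pair
    # of spatial distances: up to C(N, 2) angles for N distances.
    angles = {(a, b): rangefinder.angle_between(a, b)
              for a, b in combinations(feature_points, 2)}
    return frame, distances, angles
```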
Step 102, generating a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture aiming at each gesture of the user.
A user may correspond to a plurality of gestures, and for each gesture of the current user a corresponding three-dimensional gesture image can be generated from the plane image data and space data corresponding to the current gesture. When generating the three-dimensional gesture image, shape recognition can be performed on the plane image data based on AI technology. By generating the three-dimensional gesture image, the real gesture of the user can be converted into a corresponding three-dimensional gesture image, so that the user's gesture can be displayed by presenting that image.
It should be noted that when there are multiple users (two or more), the plane image data and space data respectively corresponding to those users moving along with the preset standard video can be captured. The preset standard video may correspond to multiple persons, and the gestures of those persons at the same moment may be identical (e.g., the persons keep consistent gestures in a group calisthenics scene) or different (e.g., each person has their own gesture in a group dance scene). With multiple users, when generating the corresponding three-dimensional gesture image from the plane image data and space data of the current gesture, a three-dimensional gesture image corresponding to the multiple users (i.e., one containing gesture information of all of them) can be generated from the plane image data and space data corresponding to their gestures at the same moment. Alternatively, at a given moment, a stereoscopic image corresponding to each user can be generated from the plane image data and space data of that user's gesture at that moment, and the stereoscopic images of the multiple users can then be stitched together to produce the three-dimensional gesture image corresponding to the multiple users.
And 103, carrying out matching degree calculation on each three-dimensional posture image of the user and the corresponding standard three-dimensional posture image, obtaining a matching degree calculation result and outputting the matching degree calculation result, wherein the preset standard video comprises a plurality of standard three-dimensional posture images, and the three-dimensional posture images and the corresponding standard three-dimensional posture images correspond to the same posture.
After the user's three-dimensional gesture images are generated, matching degree calculation may be performed between each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, and the matching degree calculation result obtained and output. When multiple users form a user group, each three-dimensional gesture image can be acquired for the user group, and the matching degree calculation performed between each three-dimensional gesture image and the corresponding standard three-dimensional gesture image (the image corresponding to the target group), the result being obtained and output. Where the preset standard video corresponds to a target group, each standard three-dimensional gesture image corresponds to that target group.
The three-dimensional pose image and the corresponding standard three-dimensional pose image correspond to the same pose, and for the group form, it is understood that the three-dimensional pose image and the corresponding standard three-dimensional pose image correspond to the same group pose.
The matching degree calculation process is described by way of example. In a follow-along training scene with a fitness video, a fitness user moves by following the movements of a fitness trainer, and for each action of the fitness user, the electronic device captures the corresponding plane image data and space data to generate a corresponding three-dimensional gesture image. For each action of the fitness user, matching degree calculation is performed between the three-dimensional gesture image corresponding to the current action and the standard three-dimensional gesture image corresponding to the trainer's standard action (which corresponds to the current action) to obtain a matching degree calculation result, which the electronic device then outputs. Performing the matching degree calculation for each individual action ensures the real-time performance and granularity of the matching degree check.
Through the above implementation process, the plane image data and space data corresponding to each gesture of the user when moving along with the preset standard video are captured, and a corresponding three-dimensional gesture image is generated from the plane image data and space data of the current gesture; three-dimensional stereo data can thus be determined from the plane image data and space data, converting the real gesture of the user into a corresponding three-dimensional gesture image so that the user's gesture is accurately identified. Matching degree calculation between the three-dimensional gesture image and the corresponding standard three-dimensional gesture image, with the result obtained and output, provides an auxiliary mechanism for better analyzing and correcting the user's gesture. Moreover, accurate motion amplitude identification reduces the detection difficulty and ensures detection accuracy when the matching degree is calculated for matching degree detection.
The space distance is the distance between the measuring equipment and the characteristic points of the user, and each characteristic point corresponds to a space distance; step 102, generating a corresponding three-dimensional gesture image according to the planar image data and the spatial data corresponding to the current gesture, including:
constructing a three-dimensional model corresponding to a user in the current gesture according to a plurality of spatial distances corresponding to the current gesture and a preset number of included angles, wherein the preset number is determined according to the number of the spatial distances;
and carrying out equal proportion matching on the plane image data corresponding to the current gesture and the three-dimensional model to obtain a three-dimensional gesture image corresponding to the current gesture.
The spatial distances included in the space data are the distances between the measuring device and a plurality of feature points of the user; each feature point corresponds to one spatial distance, and all spatial distances share the same starting point (the measuring device) but have different end points (different feature points of the user). When generating the corresponding three-dimensional gesture image from the plane image data and space data of the current gesture, a three-dimensional model of the user in the current gesture can be constructed from the plurality of spatial distances and a preset number of included angles corresponding to the current gesture. The preset number is determined by the number of spatial distances: combining any two of the spatial distances yields the number of pairs, and the number of included angles can be less than or equal to that number. That is, if the number of spatial distances is N, the number of pairs is C(N, 2), the combination count of choosing 2 out of N. When the number of included angles equals C(N, 2), this means that for every two of the N spatial distances, the included angle between the two corresponding line segments is obtained.
When generating the corresponding three-dimensional gesture image from the plane image data and space data of the current gesture, the method specifically comprises: determining a plurality of feature points corresponding to the user according to the plurality of spatial distances and the preset number of included angles corresponding to the current gesture, and constructing the three-dimensional model of the user in the current gesture from the plurality of feature points.
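The description leaves the construction of the three-dimensional model abstract. One way it could work, shown here purely as an assumption, is a classical-MDS-style reconstruction: with the measuring device at the origin, the inner product of feature points i and j is d_i · d_j · cos(theta_ij), so the points can be recovered (up to a rotation or reflection) from the eigendecomposition of that Gram matrix:

```python
import numpy as np

def reconstruct_feature_points(distances, angles):
    """Recover 3D feature-point coordinates (measuring device at the origin,
    up to rotation/reflection) from the measured distances and pairwise
    included angles. This is a sketch of one plausible construction, not
    the patented method.

    distances: (N,) distance from the measuring device to each feature point.
    angles:    (N, N) symmetric matrix of included angles in radians
               (with N spatial distances there are at most C(N, 2) of them;
               the diagonal is zero).
    """
    d = np.asarray(distances, dtype=float)
    gram = np.outer(d, d) * np.cos(angles)   # <p_i, p_j> = d_i d_j cos(theta)
    eigvals, eigvecs = np.linalg.eigh(gram)  # eigenvalues in ascending order
    top = np.clip(eigvals[-3:], 0.0, None)   # keep the 3 spatial dimensions
    return eigvecs[:, -3:] * np.sqrt(top)    # (N, 3) feature-point coordinates
```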
After the three-dimensional model of the user in the current gesture is constructed, the corresponding three-dimensional gesture image can be generated from the plane image data of the current gesture and the three-dimensional model. Because the first scale between the three-dimensional model and the real user may differ from the second scale between the plane image data and the real user, the scales of the plane image data and the three-dimensional model must be unified. That is, the scale of the three-dimensional model relative to the real user is adjusted to equal that of the plane image data relative to the real user, so that both have the same proportion to the real user; the plane image data and the three-dimensional model are then matched to generate the three-dimensional gesture image.
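A minimal sketch of this scale unification, assuming both the plane image and the model expose named keypoints and that a torso segment ("neck" to "pelvis", names illustrative only) serves as the reference length:

```python
import numpy as np

def unify_scale(model_points, image_keypoints, ref=("neck", "pelvis")):
    """Rescale the 3D model so that its reference length equals the same
    length measured in the plane image data, giving both the same
    proportion relative to the real user. Keypoint names are assumptions."""
    a, b = ref
    image_len = np.linalg.norm(np.asarray(image_keypoints[a], float)
                               - np.asarray(image_keypoints[b], float))
    model_len = np.linalg.norm(model_points[a] - model_points[b])
    scale = image_len / model_len
    return {name: pt * scale for name, pt in model_points.items()}
```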
It should be noted that, the ratio between the three-dimensional gesture image and the user is the same as the ratio between the corresponding standard three-dimensional gesture image and the standard object (the reference object of the user, such as a fitness trainer), so that a guarantee can be provided for detecting the matching degree.
According to the implementation process, the three-dimensional model corresponding to the user in the current gesture is constructed based on the plurality of space distances and the preset number of included angles, and the three-dimensional model and the plane image data corresponding to the user in the current gesture are combined after being subjected to proportional adjustment, so that the three-dimensional gesture image is generated, and the real gesture of the user can be converted into the corresponding three-dimensional gesture image based on image capturing and space data measurement.
In an optional embodiment of the present invention, performing matching degree calculation on each three-dimensional pose image of the user and a corresponding standard three-dimensional pose image, obtaining and outputting a matching degree calculation result, including:
calculating, for each three-dimensional gesture image, the matching degree between the three-dimensional gesture image and the corresponding standard three-dimensional gesture image, and obtaining a matching degree calculation result comprising at least one of a matching degree score and matching details;
and playing a prompt voice comprising the matching degree calculation result and/or displaying prompt information comprising the matching degree calculation result.
When the matching degree calculation result is obtained and output, a result comprising the matching degree score and/or the matching details can be obtained by calculating the matching degree between the three-dimensional gesture image and the standard three-dimensional gesture image. A higher matching degree yields a higher matching degree score and a lower matching degree a lower score; this scoring mechanism can encourage the user to correct their gesture.
The matching details may include the specific gesture matching situation. For example, the standard angle between the arm and the body is 45 degrees while the real angle between the user's arm and body is 30 degrees; or the standard state of the feet is that the right foot is perpendicular to the left foot, while in the user's real state the angle between the right foot and the left foot is smaller than 90 degrees.
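The angle comparisons in such matching details reduce to the included angle between two limb vectors; a small helper, given as a sketch (extraction of the limb vectors from the pose is assumed):

```python
import numpy as np

def included_angle_deg(u, v):
    """Included angle in degrees between two limb vectors,
    e.g., an arm vector against a torso vector."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g., included_angle_deg(arm, body) -> 30.0 against the 45-degree standard
```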
After the matching degree calculation result is obtained, it can be output. The result can be played as a prompt voice, displayed as prompt information on the graphical user interface, or both at the same time.
According to the implementation process, the user can be encouraged to correct the gesture based on the scoring mechanism by outputting the matching degree score, the user can be guided to correct the gesture conveniently by outputting the matching details, and the prompting modes can be enriched by playing prompting voice and/or displaying prompting information.
The method for calculating the matching degree of the three-dimensional posture image and the corresponding standard three-dimensional posture image comprises the following steps:
acquiring at least one of a gesture skeleton feature corresponding to the three-dimensional gesture image and an image feature distance information set corresponding to the three-dimensional gesture image, wherein the image feature distance information set comprises distance information respectively corresponding to a plurality of image feature region combinations, each image feature region combination comprises two image feature regions, and the image feature regions corresponding to different image feature region combinations are at least partially different;
Generating image feature data corresponding to the three-dimensional posture image according to at least one of posture skeleton features corresponding to the three-dimensional posture image and an image feature distance information set corresponding to the three-dimensional posture image;
calculating the matching degree of the three-dimensional posture image and the standard three-dimensional posture image according to the image characteristic data corresponding to the three-dimensional posture image and the image characteristic data corresponding to the standard three-dimensional posture image;
the image feature data corresponding to the standard three-dimensional posture image comprises at least one of posture skeleton features corresponding to the standard three-dimensional posture image and image feature distance information sets corresponding to the standard three-dimensional posture image.
When the matching degree is calculated for the three-dimensional posture image and the corresponding standard three-dimensional posture image, at least one of the corresponding posture skeleton feature and the image feature distance information set can be acquired for the three-dimensional posture image.
For the situation that only the gesture skeleton feature corresponding to the three-dimensional gesture image is obtained, image feature data corresponding to the three-dimensional gesture image can be generated according to the gesture skeleton feature corresponding to the three-dimensional gesture image, then matching degree calculation is carried out according to the image feature data corresponding to the three-dimensional gesture image and the image feature data corresponding to the standard three-dimensional gesture image, and matching degree of the three-dimensional gesture image and the standard three-dimensional gesture image is obtained. At this time, the image feature data corresponding to the standard three-dimensional posture image includes posture skeleton features corresponding to the standard three-dimensional posture image.
The image feature distance information set comprises distance information respectively corresponding to a plurality of image feature region combinations, each image feature region combination comprises two image feature regions, and the image feature regions corresponding to the image feature region combinations are at least partially distinguished. For example, the image feature distance information set includes distance information corresponding to 3 image feature region combinations, respectively, the 3 image feature region combinations being an image feature region combination a, an image feature region combination B, and an image feature region combination C, respectively. The image characteristic region combination A comprises an image characteristic region 1 and an image characteristic region 2, and the distance information corresponding to the image characteristic region combination A is the distance between the image characteristic region 1 and the image characteristic region 2; the image characteristic region combination B comprises an image characteristic region 2 and an image characteristic region 3, and the distance information corresponding to the image characteristic region combination B is the distance between the image characteristic region 2 and the image characteristic region 3; the image feature region combination C includes an image feature region 4 and an image feature region 5, and the distance information corresponding to the image feature region combination C is the distance between the image feature region 4 and the image feature region 5.
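Under the assumption that each image feature region can be reduced to a representative point (its centroid; the patent leaves "distance information" abstract), the distance information set for all region combinations could be built as follows:

```python
from itertools import combinations
import numpy as np

def image_feature_distance_set(regions):
    """regions: mapping region name -> (N, 3) array of points belonging to
    that image feature region. Returns the centroid distance for every pair
    of regions; using centroids is an assumption made for this sketch."""
    centroids = {name: np.asarray(pts, float).mean(axis=0)
                 for name, pts in regions.items()}
    return {frozenset((a, b)): float(np.linalg.norm(centroids[a] - centroids[b]))
            for a, b in combinations(centroids, 2)}
```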
For the situation that only the image feature distance information set corresponding to the three-dimensional posture image is obtained, image feature data corresponding to the three-dimensional posture image can be generated according to the image feature distance information set corresponding to the three-dimensional posture image, then matching degree calculation is carried out according to the image feature data corresponding to the three-dimensional posture image and the image feature data corresponding to the standard three-dimensional posture image, and matching degree of the three-dimensional posture image and the standard three-dimensional posture image is obtained. At this time, the image feature data corresponding to the standard three-dimensional pose image includes the image feature distance information set corresponding to the standard three-dimensional pose image.
Aiming at the condition of acquiring the gesture skeleton feature and the image feature distance information set corresponding to the three-dimensional gesture image, image feature data corresponding to the three-dimensional gesture image can be generated according to the image feature distance information set corresponding to the three-dimensional gesture image and the gesture skeleton feature, then matching degree calculation is carried out according to the image feature data corresponding to the three-dimensional gesture image and the image feature data corresponding to the standard three-dimensional gesture image, and the matching degree of the three-dimensional gesture image and the standard three-dimensional gesture image is acquired. At this time, the image feature data corresponding to the standard three-dimensional posture image includes the image feature distance information set corresponding to the standard three-dimensional posture image and the posture skeleton feature.
The proportion between the three-dimensional posture image and the user is the same as the proportion between the corresponding standard three-dimensional posture image and a standard object (a reference object of the user, such as a body-building coach), so that guarantee can be provided for matching degree detection.
Through the above implementation process, the image feature data corresponding to the three-dimensional gesture image can be generated from at least one of the gesture skeleton feature and the image feature distance information set corresponding to that image, and the matching degree calculation is performed on the image feature data of the three-dimensional gesture image and that of the standard three-dimensional gesture image. Since the calculation can be based on at least one type of data, both the diversity and the accuracy of the matching degree calculation are ensured.
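As one plausible instantiation (the description does not fix a metric), the matching degree between the two sets of image feature data could be computed as a cosine similarity mapped onto a 0-100 score:

```python
import numpy as np

def matching_score(user_features, standard_features, eps=1e-8):
    """Map the cosine similarity of two flattened feature vectors to a
    0-100 matching degree score. Cosine similarity is an assumption;
    the patent leaves the matching metric unspecified."""
    u = np.ravel(user_features).astype(float)
    s = np.ravel(standard_features).astype(float)
    cos = float(u @ s / (np.linalg.norm(u) * np.linalg.norm(s) + eps))
    return round(50.0 * (cos + 1.0), 1)  # [-1, 1] -> [0, 100]
```

A threshold on such a score could then drive the matching degree score and matching details that are output to the user.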
In an optional embodiment of the invention, after outputting the matching degree calculation result, the method further includes:
responding to the first input of the correction control on the graphical user interface, and displaying a correction page corresponding to the matching degree calculation result on the graphical user interface;
at least one of the matching degree score and the matching details is revised in response to a second input at the revised page.
After the electronic device outputs the matching degree calculation result, the user can correct it if it is found to be inaccurate. For example, the first gesture of a fitness user may actually match the first standard gesture of the fitness trainer, but because of the height difference between the two, the electronic device judges the gestures as not matching; the user (the fitness user) can then correct the matching degree calculation result so that the same scene does not produce an inaccurate result again.
A correction control is displayed on the graphical user interface of the electronic device, and a first input to the correction control by the user is received. The first input may be any input meeting preset input characteristics, for example a press input whose duration exceeds a preset time period and/or whose pressure exceeds a preset force, or a continuous click input; it is not limited to the cases listed here. After the first input is received, a correction page corresponding to the matching degree calculation result is displayed on the graphical user interface in response to the first input, i.e., the page displaying the correction control is switched to the correction page.
After displaying the correction page corresponding to the matching degree calculation result, a second input executed by the user on the correction page may be received, and the matching degree score and/or the matching details may be corrected in response to the second input. When the matching degree score is corrected, the matching degree score corresponding to the current gesture can be corrected, and the matching degree scores corresponding to at least two gestures can be corrected respectively, so that at least two matching degree scores can be corrected at a time. And when the matching degree score is corrected, the matching degree score smaller than the preset score threshold value can be adjusted to be larger than the preset score threshold value, and the method is not limited to the correction mode. Accordingly, when the matching details are corrected, the matching details corresponding to the current gesture can be corrected, and the matching details corresponding to at least two gestures can be corrected respectively, so that at least two matching details can be corrected at a time. And when the matching details are corrected, the specific gesture matching condition can be corrected.
According to the implementation process, the display of the correction page is triggered by executing the first input on the correction control, so that the matching degree calculation result can be corrected, and inaccurate matching degree calculation result can be prevented from being output.
In an optional embodiment of the invention, after generating the three-dimensional pose image corresponding to the user in the target pose, the method further comprises:
displaying a three-dimensional gesture image corresponding to the target gesture in a first area of the graphical user interface and displaying a standard three-dimensional gesture image corresponding to the target gesture in a second area of the graphical user interface;
wherein the target gesture is one of a plurality of gestures of the user, and the target gesture is at least one.
After the three-dimensional gesture image of the user in the target gesture is obtained, the three-dimensional gesture image and its corresponding standard three-dimensional gesture image can be displayed in different areas of the graphical user interface, so that the user can see how well they are following in real time. For example, when the user moves along with a fitness video, the three-dimensional gesture image and the standard three-dimensional gesture image are displayed for each action of the fitness user, allowing intuitive comparison and providing a novel online interaction mode. There is at least one target gesture: one, several, at least some, or optionally all of the user's gestures may serve as target gestures.
The following describes, by way of a specific example, an implementation flow of the following action accuracy determination method provided in the embodiment of the present invention, as shown in fig. 2, including:
step 201, capturing plane image data and space data corresponding to each gesture of the exercise user when the exercise user moves along with the exercise video of the exercise coach.
Step 202, generating a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture aiming at each gesture of the body-building user.
Step 203, after generating a corresponding three-dimensional gesture image for each gesture of the fitness user, displaying the three-dimensional gesture image and a corresponding standard three-dimensional gesture image in the fitness video on the graphical user interface.
Step 204, for each gesture of the fitness user, performing matching degree calculation on the three-dimensional gesture image corresponding to the fitness user and the standard three-dimensional gesture image corresponding to the fitness trainer, and obtaining a matching degree calculation result.
Step 205, playing a prompt voice including a matching degree calculation result, and/or displaying prompt information including the matching degree calculation result, where the matching degree calculation result includes at least one of a matching degree score and matching details.
And 206, in the case that the matching degree calculation result is inaccurate, displaying a correction page corresponding to the matching degree calculation result on the graphical user interface in response to the first input of the correction control on the graphical user interface, and correcting at least one of the matching degree score and the matching detail in response to the second input of the correction page.
According to the implementation process, the three-dimensional posture image can be generated according to the plane image data and the space data, so that the posture of the user can be displayed in a mode of presenting the three-dimensional posture image, an auxiliary mechanism can be provided by carrying out matching degree calculation on the three-dimensional posture image and the corresponding standard three-dimensional posture image, better auxiliary analysis can be carried out on the posture of the user, the posture of the user is corrected, and inaccurate matching degree calculation results can be prevented from being output by correcting the matching degree calculation results.
In the overall implementation flow of the method for determining accuracy of a follow-up action provided by the embodiment of the invention, the plane image data and space data corresponding to each gesture of the user when moving along with the preset standard video are captured, and a corresponding three-dimensional gesture image is generated from the plane image data and space data of the current gesture; three-dimensional stereo data can thus be determined from the plane image data and space data, converting the real gesture of the user into a corresponding three-dimensional gesture image so that the user's gesture is accurately identified. Matching degree calculation between the three-dimensional gesture image and the corresponding standard three-dimensional gesture image, with the result obtained and output, provides an auxiliary mechanism for better analyzing and correcting the user's gesture. Moreover, accurate motion amplitude identification reduces the detection difficulty and ensures detection accuracy when the matching degree is calculated for matching degree detection.
Furthermore, the matching degree calculation result is output by playing the prompt voice and/or displaying the prompt information, so that the prompt mode can be enriched; by correcting the matching degree calculation result, inaccurate matching degree calculation result can be prevented from being output; by displaying the three-dimensional gesture image and the standard three-dimensional gesture image on the graphical user interface, a user can conveniently know the gesture following condition in real time.
The embodiment of the invention also provides an accuracy judging device for following actions, which is applied to electronic equipment, as shown in fig. 3, and comprises:
the capturing module 31 is configured to capture plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, where the space data includes a plurality of spatial distances between a measurement device and the user and an included angle between two line segments corresponding to any two of the spatial distances;
a generating module 32, configured to generate, for each gesture of the user, a corresponding three-dimensional gesture image according to the planar image data and the spatial data corresponding to the current gesture;
the processing module 33 is configured to perform matching degree calculation on each three-dimensional gesture image of the user and a corresponding standard three-dimensional gesture image, obtain a matching degree calculation result, and output the matching degree calculation result;
The preset standard video comprises a plurality of standard three-dimensional posture images, and the three-dimensional posture images and the corresponding standard three-dimensional posture images correspond to the same posture.
Optionally, as shown in fig. 4, the processing module 33 includes:
the first processing sub-module 331 is configured to calculate, for each three-dimensional pose image, a matching degree between the three-dimensional pose image and the corresponding standard three-dimensional pose image, and obtain the matching degree calculation result including at least one of a matching degree score and matching details;
the second processing sub-module 332 is configured to play a prompt voice including the matching degree calculation result and/or display prompt information including the matching degree calculation result.
Optionally, the first processing sub-module 331 includes:
an obtaining unit 3311, configured to obtain at least one of a gesture skeleton feature corresponding to the three-dimensional gesture image and an image feature distance information set corresponding to the three-dimensional gesture image, where the image feature distance information set includes distance information respectively corresponding to a plurality of image feature region combinations, each of the image feature region combinations includes two image feature regions, and the image feature regions corresponding to the image feature region combinations are at least partially different;
A generating unit 3312, configured to generate image feature data corresponding to the three-dimensional pose image according to at least one of a pose skeleton feature corresponding to the three-dimensional pose image and an image feature distance information set corresponding to the three-dimensional pose image;
a calculating unit 3313 for calculating a degree of matching of the three-dimensional posture image and the standard three-dimensional posture image based on the image feature data corresponding to the three-dimensional posture image and the image feature data corresponding to the standard three-dimensional posture image;
the image feature data corresponding to the standard three-dimensional posture image comprises at least one of posture skeleton features corresponding to the standard three-dimensional posture image and an image feature distance information set corresponding to the standard three-dimensional posture image.
Optionally, the apparatus further comprises:
a first display module 34, configured to, after the processing module 33 outputs the matching degree calculation result, display, on the graphical user interface, a correction page corresponding to the matching degree calculation result in response to a first input to a correction control on the graphical user interface;
a correction module 35, configured to correct at least one of the matching degree score and the matching details in response to a second input on the correction page.
Optionally, the apparatus further comprises:
a second display module 36, configured to display, after the generating module 32 generates the three-dimensional pose image corresponding to the user in a target pose, the three-dimensional pose image corresponding to the target pose in a first area of a graphical user interface, and display, in a second area of the graphical user interface, the standard three-dimensional pose image corresponding to the target pose;
wherein the target gesture is one of a plurality of gestures of the user, and the target gesture is at least one.
Optionally, the capturing module 31 is further configured to:
for each gesture of the user when moving along with the preset standard video, acquiring corresponding plane image data through an image acquisition device and acquiring corresponding space data through the measurement device;
the image acquisition equipment captures a plane image of the user, and the measurement equipment measures a plurality of space distances between the image acquisition equipment and the user and acquires an included angle between two line segments corresponding to any two space distances.
Optionally, the spatial distance is a distance between the measurement device and a feature point of the user, and each feature point corresponds to one of the spatial distances;
The generating module 32 includes:
a construction sub-module 321, configured to construct a three-dimensional model corresponding to the user in the current gesture according to a plurality of spatial distances corresponding to the current gesture and a preset number of included angles, where the preset number is determined according to the number of spatial distances;
and the obtaining sub-module 322 is configured to perform an equal proportion matching on the planar image data corresponding to the current pose and the three-dimensional model, so as to obtain the three-dimensional pose image corresponding to the current pose.
Since the embodiment of the apparatus for determining accuracy of a follow-up action is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
The embodiment of the invention also provides an electronic device. As shown in fig. 5, the electronic device comprises a processor 51, a communication interface 52, a memory 53 and a communication bus 55, where the processor 51, the communication interface 52 and the memory 53 communicate with one another through the communication bus 55. The memory 53 is used for storing a computer program, and the processor 51 is configured to execute the program stored in the memory 53, implementing the following steps: capturing the plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of spatial distances between the measuring device and the user and the included angles between the two line segments corresponding to any two spatial distances; for each gesture of the user, generating a corresponding three-dimensional gesture image according to the plane image data and space data corresponding to the current gesture; and performing matching degree calculation on each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, then obtaining and outputting the matching degree calculation result. The preset standard video comprises a plurality of standard three-dimensional gesture images, and each three-dimensional gesture image and its corresponding standard image correspond to the same gesture. The processor 51 may also implement other implementations of the method for determining accuracy of a follow-up action, which are not repeated here.
The communication bus mentioned for the above electronic device may be a peripheral component interconnect standard (Peripheral Component Interconnect, PCI for short) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA for short) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The memory may include a random access memory (Random Access Memory, RAM) or a non-volatile memory, such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), and the like; it may also be a digital signal processor (Digital Signal Processing, DSP for short), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC for short), a field-programmable gate array (Field-Programmable Gate Array, FPGA for short) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, there is also provided a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method for determining accuracy of a follow-up action described in the above embodiments.
In yet another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the method for determining accuracy of a follow-up action described in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), and the like.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In this specification, the embodiments are described in a related manner, identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly since they are substantially similar to the method embodiments; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The foregoing description is merely of preferred embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (9)

1. A method for determining accuracy of a follow-up action, comprising:
capturing plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of space distances between measuring equipment and the user and included angles between two line segments corresponding to any two space distances, the space distances are distances between the measuring equipment and characteristic points of the user, and each characteristic point corresponds to one space distance;
for each gesture of the user, generating a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture;
performing matching degree calculation on each three-dimensional gesture image of the user and a corresponding standard three-dimensional gesture image, and obtaining and outputting a matching degree calculation result;
the preset standard video comprises a plurality of standard three-dimensional gesture images, and the three-dimensional gesture images and the corresponding standard three-dimensional gesture images correspond to the same gesture;
wherein the generating a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture comprises:
constructing a three-dimensional model corresponding to the user in the current gesture according to a plurality of space distances corresponding to the current gesture and a preset number of included angles, wherein the preset number is determined according to the number of the space distances;
and carrying out equal proportion matching on the plane image data corresponding to the current gesture and the three-dimensional model to obtain the three-dimensional gesture image corresponding to the current gesture.
2. The method according to claim 1, wherein the performing matching degree calculation on each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, and obtaining and outputting a matching degree calculation result, comprises:
calculating, for each three-dimensional gesture image, the matching degree of the three-dimensional gesture image and the corresponding standard three-dimensional gesture image, and obtaining the matching degree calculation result comprising at least one of a matching degree score and matching details;
and playing a prompt voice comprising the matching degree calculation result and/or displaying prompt information comprising the matching degree calculation result.
3. The method according to claim 2, wherein the calculating the matching degree of the three-dimensional gesture image and the corresponding standard three-dimensional gesture image comprises:
acquiring at least one of a gesture skeleton feature corresponding to the three-dimensional gesture image and an image feature distance information set corresponding to the three-dimensional gesture image, wherein the image feature distance information set comprises distance information respectively corresponding to a plurality of image feature region combinations, each image feature region combination comprises two image feature regions, and the image feature regions corresponding to different image feature region combinations are at least partially different;
generating image feature data corresponding to the three-dimensional gesture image according to the at least one of the gesture skeleton feature corresponding to the three-dimensional gesture image and the image feature distance information set corresponding to the three-dimensional gesture image;
and calculating the matching degree of the three-dimensional gesture image and the standard three-dimensional gesture image according to the image feature data corresponding to the three-dimensional gesture image and the image feature data corresponding to the standard three-dimensional gesture image;
wherein the image feature data corresponding to the standard three-dimensional gesture image comprises at least one of a gesture skeleton feature corresponding to the standard three-dimensional gesture image and an image feature distance information set corresponding to the standard three-dimensional gesture image.
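As a hedged reading of this claim, the sketch below builds an image feature distance information set from pairwise distances between feature-region centers and scores the user's feature data against the standard's with cosine similarity; the metric and all identifiers are assumptions, since the claim leaves the exact formula open:

```python
import numpy as np
from itertools import combinations

def feature_distance_set(region_centers):
    """Image feature distance information set: one distance per
    combination of two (at least partially different) feature regions."""
    return {pair: float(np.linalg.norm(region_centers[pair[0]] -
                                       region_centers[pair[1]]))
            for pair in combinations(sorted(region_centers), 2)}

def matching_degree(user_set, standard_set):
    """Score the user's distance set against the standard one with cosine
    similarity over their shared region pairs (one plausible metric)."""
    pairs = sorted(set(user_set) & set(standard_set))
    u = np.array([user_set[p] for p in pairs])
    s = np.array([standard_set[p] for p in pairs])
    return float(u @ s / (np.linalg.norm(u) * np.linalg.norm(s)))

# Hypothetical region centres in image coordinates.
user = {"head": np.array([0.0, 0.0]), "l_hand": np.array([-1.0, 1.2]),
        "r_hand": np.array([1.0, 1.1])}
std = {"head": np.array([0.0, 0.0]), "l_hand": np.array([-1.1, 1.3]),
       "r_hand": np.array([1.0, 1.2])}
score = matching_degree(feature_distance_set(user), feature_distance_set(std))
```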
4. The method according to claim 2, further comprising, after outputting the matching degree calculation result:
responding to a first input of a correction control on a graphical user interface, and displaying a correction page corresponding to the matching degree calculation result on the graphical user interface;
and correcting at least one of the matching degree score and the matching details in response to a second input on the correction page.
5. The method according to claim 1 or 2, characterized by further comprising, after generating the three-dimensional pose image corresponding to the user in a target pose:
displaying the three-dimensional gesture image corresponding to the target gesture in a first area of a graphical user interface, and displaying the standard three-dimensional gesture image corresponding to the target gesture in a second area of the graphical user interface;
wherein the target gesture is one of the plurality of gestures of the user, and there is at least one target gesture.
6. The method according to claim 1, wherein capturing the planar image data and the spatial data corresponding to each gesture of the user while the user moves following the preset standard video comprises:
For each gesture of the user when moving along with the preset standard video, acquiring corresponding plane image data through an image acquisition device and acquiring corresponding space data through the measurement device;
wherein the image acquisition device captures a plane image of the user, and the measurement device measures a plurality of space distances between the image acquisition device and the user and acquires an included angle between two line segments corresponding to any two space distances.
7. An accuracy determination device for a follow-up action, comprising:
a capturing module, configured to capture plane image data and space data corresponding to each gesture of a user when the user moves along with a preset standard video, wherein the space data comprises a plurality of space distances between measuring equipment and the user and included angles between two line segments corresponding to any two space distances, the space distances are distances between the measuring equipment and characteristic points of the user, and each characteristic point corresponds to one space distance;
a generating module, configured to generate, for each gesture of the user, a corresponding three-dimensional gesture image according to the plane image data and the space data corresponding to the current gesture;
a processing module, configured to perform matching degree calculation on each three-dimensional gesture image of the user and the corresponding standard three-dimensional gesture image, and to obtain and output a matching degree calculation result;
the preset standard video comprises a plurality of standard three-dimensional gesture images, and the three-dimensional gesture images and the corresponding standard three-dimensional gesture images correspond to the same gesture;
wherein the generating module comprises:
a construction submodule, configured to construct a three-dimensional model corresponding to the user in the current gesture according to a plurality of space distances corresponding to the current gesture and a preset number of included angles, wherein the preset number is determined according to the number of the space distances;
and an acquisition submodule, configured to perform equal-proportion matching on the plane image data corresponding to the current gesture and the three-dimensional model, so as to obtain the three-dimensional gesture image corresponding to the current gesture.
8. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the method for determining accuracy of a follow-up action according to any one of claims 1 to 6 when executing the program stored in the memory.
9. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the steps of the method for determining accuracy of a follow-up action according to any one of claims 1 to 6.
CN202110921137.0A 2021-08-11 2021-08-11 Method and device for judging accuracy of follow-up action, electronic equipment and storage medium Active CN113743237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921137.0A CN113743237B (en) 2021-08-11 2021-08-11 Method and device for judging accuracy of follow-up action, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113743237A CN113743237A (en) 2021-12-03
CN113743237B true CN113743237B (en) 2023-06-02

Family

ID=78730716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921137.0A Active CN113743237B (en) 2021-08-11 2021-08-11 Method and device for judging accuracy of follow-up action, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113743237B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8577085B2 (en) * 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
CN109214980B (en) * 2017-07-04 2023-06-23 阿波罗智能技术(北京)有限公司 Three-dimensional attitude estimation method, three-dimensional attitude estimation device, three-dimensional attitude estimation equipment and computer storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018107679A1 (en) * 2016-12-12 2018-06-21 华为技术有限公司 Method and device for acquiring dynamic three-dimensional image
CN108921907A (en) * 2018-07-26 2018-11-30 上海慧子视听科技有限公司 A kind of method, apparatus, equipment and the storage medium of exercise test scoring
CN109325437A (en) * 2018-09-17 2019-02-12 北京旷视科技有限公司 Image processing method, device and system
CN109635644A (en) * 2018-11-01 2019-04-16 北京健康有益科技有限公司 A kind of evaluation method of user action, device and readable medium
KR20200143228A (en) * 2019-06-14 2020-12-23 고려대학교 산학협력단 Method and Apparatus for localization in real space using 3D virtual space model
CN110225400A (en) * 2019-07-08 2019-09-10 北京字节跳动网络技术有限公司 A kind of motion capture method, device, mobile terminal and storage medium
CN111238368A (en) * 2020-01-15 2020-06-05 中山大学 Three-dimensional scanning method and device
CN111898519A (en) * 2020-07-28 2020-11-06 武汉大学 Portable auxiliary visual servo robot system for motion training in specific area and posture evaluation method
CN111881887A (en) * 2020-08-21 2020-11-03 董秀园 Multi-camera-based motion attitude monitoring and guiding method and device
CN112464918A (en) * 2021-01-27 2021-03-09 昆山恒巨电子有限公司 Body-building action correcting method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"基于单目视觉的人体三维姿态估计";冯韬;《中国优秀硕士学位论文全文数据库(电子期刊) 信息科技辑》(第2期);全文 *
Alexander Toshev.et al."DeepPose: Human Pose Estimation via Deep Neural Networks".《IEEE》.2014,全文. *

Also Published As

Publication number Publication date
CN113743237A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN113850248B (en) Motion attitude evaluation method and device, edge calculation server and storage medium
US10803762B2 (en) Body-motion assessment device, dance assessment device, karaoke device, and game device
WO2021000708A1 (en) Fitness teaching method and apparatus, electronic device and storage medium
US9183431B2 (en) Apparatus and method for providing activity recognition based application service
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
WO2021098616A1 (en) Motion posture recognition method, motion posture recognition apparatus, terminal device and medium
CN114097248B (en) Video stream processing method, device, equipment and medium
KR102365431B1 (en) Electronic device for providing target video in sports play video and operating method thereof
US20230290003A1 (en) Model training method and apparatus, device, medium, and program product
CN115569344A (en) Standing long jump score evaluation method and device, electronic equipment and storage medium
CN110213605B (en) Image correction method, device and equipment
CN114926762A (en) Motion scoring method, system, terminal and storage medium
CN113409651A (en) Live broadcast fitness method and system, electronic equipment and storage medium
CN113743237B (en) Method and device for judging accuracy of follow-up action, electronic equipment and storage medium
CN106370883B (en) Speed measurement method and terminal
CN110148072A (en) Sport course methods of marking and system
US20170193668A1 (en) Intelligent Equipment-Based Motion Sensing Control Method, Electronic Device and Intelligent Equipment
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
JP6655114B2 (en) Image analysis device, image analysis method, and computer program
US11423647B2 (en) Identification system, model re-learning method and program
CN116012417A (en) Track determination method and device of target object and electronic equipment
CN116386136A (en) Action scoring method, equipment and medium based on human skeleton key points
CN111860206B (en) Image acquisition method and device, storage medium and intelligent equipment
CN114694256A (en) Real-time tennis action identification method, device, equipment and medium
KR102363435B1 (en) Apparatus and method for providing feedback on golf swing motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant