CN110428486B - Virtual interaction fitness method, electronic equipment and storage medium - Google Patents

Info

Publication number
CN110428486B
Authority
CN
China
Prior art keywords
vector
joint point
action
video data
type
Prior art date
Legal status
Active
Application number
CN201810399778.2A
Other languages
Chinese (zh)
Other versions
CN110428486A (en)
Inventor
Feng Wei (冯伟)
Current Assignee
Shanghai Shibeisi Fitness Management Co., Ltd.
Original Assignee
Shanghai Shibeisi Fitness Management Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Shibeisi Fitness Management Co., Ltd.
Priority to CN201810399778.2A
Publication of CN110428486A
Application granted
Publication of CN110428486B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual interactive fitness method, an electronic device and a storage medium. The fitness method comprises the following steps: collecting audio data and video data of a first-type user through a first terminal, converting the video data into a three-dimensional skeleton action model, and obtaining a target action; playing the audio data and video data at the second terminals; collecting three-dimensional video data of second-type users in real time through each second terminal to obtain the actions to be detected; and judging whether the action to be detected at each second terminal matches the target action. If not, the body images of the first-type user and the second-type user are obtained, the second-type user's body image is superimposed onto the first-type user's video data and played at the first terminal, and the first-type user's body image is superimposed onto the second-type user's video data and played at the second terminal. The invention avoids the space occupied by physical fitness equipment and greatly lowers the threshold of fitness.

Description

Virtual interaction fitness method, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computer applications, and in particular to a virtual interactive fitness method, an electronic device and a storage medium.
Background
At present, fitness is one of the important needs of modern people: more and more people go to a gym or buy fitness equipment to exercise at home. However, exercise guidance in a gym is expensive and fitness equipment is hard to store, which raises the threshold of fitness. In places with limited space, such as hotels and offices, it is difficult for users to exercise effectively.
Human motion capture and recognition methods are widely used in today's society, for example in intelligent monitoring, human-computer interaction, motion-sensing games and video retrieval.
Human motion detection and recognition has evolved from traditional RGB video sequences to today's popular RGB-D video sequences. Traditional motion-trajectory capture is usually based on feature-point detection, and different feature-point detection methods can yield completely different motion trajectories. Moreover, because retrieving feature points across frames is very unstable, and the points are often discontinuous over the whole video sequence, feature-point trajectory methods mostly adopt histogram-based statistics: the whole video sequence is computed and accumulated first, and classifiers such as support vector machines then perform the classification.
Such matching of whole video sequences is computationally heavy, cannot respond immediately, and is unsuitable for consumer-grade human-computer interaction, especially for recognizing, comparing and correcting actions in a live scene. The prior art therefore struggles to meet the requirement of real-time feedback on whether an action is wrong in live fitness guidance.
Disclosure of Invention
To overcome the above defects in the prior art, the invention provides a virtual interactive fitness method, an electronic device and a storage medium. The invention avoids the space occupied by physical fitness equipment, can turn an ordinary display device into interactive fitness equipment, creates a novel experience of one-to-one online coaching, helps a fitness coach remotely correct students' actions, and greatly lowers the threshold of fitness.
According to one aspect of the present invention, there is provided a virtual interactive fitness method, comprising the steps of:
s110, acquiring audio data and video data of a first type of user through a first terminal, converting the video data into a three-dimensional bone action model, acquiring coordinates of a plurality of bone points of a human body in the video data and vectors formed among the plurality of bone points, and identifying the action of the human body in the video according to the coordinates of the plurality of bone points of the human body in the video data and the vectors formed among the plurality of bone points to determine a target action;
s120, playing audio data and video data collected through the first terminal at one or more second terminals;
s130, acquiring three-dimensional video data of a second type of user in real time through each second terminal, generating a three-dimensional skeleton action model to be detected in real time, taking the current three-dimensional skeleton action model to be detected as an action to be detected according to the target action of the video data played at the second terminal at present, and forming a matching group by the action to be detected and the target action; and
s140, judging whether the motion to be detected of each second terminal is matched with the target motion, if so, returning to the step S130, and if not, executing the step S150;
s150, matting and acquiring the body image of the first type user from the video data of the first type user, matting and acquiring the body image of the second type user from the video data of the unmatched second type user, overlaying the unmatched body image of the second type user to the video data of the first type user, playing the body image of the first type user to the unmatched video data of the second type user at the first terminal, overlaying the body image of the first type user to the unmatched video data of the second type user at the second terminal, playing the body image at the second terminal and returning to the step S140.
Preferably, in step S150, the unmatched second-type user's body image is superimposed as a foreground onto the first-type user's video data, and the first-type user's body image is superimposed as a foreground onto the unmatched second-type user's video data.
Preferably, in step S150, a bidirectional voice channel is established between the first terminal and the unmatched second terminal.
Preferably, step S110 includes: when an action is recognized as continuously repeated, taking that action as the target action;
and step S120 includes: playing the audio data and video data collected from the first terminal at the second terminal with a delay equal to the duration of one target action of the video data currently played at the second terminal.
Preferably, the step S110 includes: determining the duration of the target action;
step S130 includes: once the current to-be-detected three-dimensional skeleton action model is determined to have started the target action of the video data played at the second terminal, taking the to-be-detected model over the duration of that target action as the action to be detected.
Preferably, the step S110 further includes:
decomposing the target action into five body parts according to the three-dimensional skeleton action model: left arm, right arm, left leg, right leg and trunk, each body part including three skeletal points, three vectors formed by the three skeletal points, and an included angle between two of the three vectors,
generating one or more process-oriented or displacement-oriented recognition items for the part action of at least one body part, each recognition item comprising a recognition object, recognition parameters and a recognition rule, the recognition object comprising one or more of: at least one of the three skeletal points; at least one of the three vectors; and an included angle between two of the three vectors,
wherein a process-oriented recognition item further comprises a standard process vector library, which stores at least one vector of the part action in time order,
the step S130 further includes:
dividing the action to be detected into corresponding to-be-detected part actions according to the target part actions of the target action, and forming a body-part matching group from each to-be-detected part action and the corresponding target part action;
and obtaining the recognition item of the target part action for each body-part matching group; obtaining, from the three-dimensional skeleton action model, the coordinates of the skeletal points of the recognition object in the to-be-detected part action and the vectors formed by those skeletal points and/or the included angle between the vectors; if the recognition item is process-oriented, matching the vector of the to-be-detected part action against the corresponding vector in the standard process vector library and comparing the result with the vector threshold set by the recognition parameters; and, if the recognition rule is not met, feeding back that the to-be-detected part action is wrong.
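The recognition-item structure described above can be sketched roughly as the following Python data model; the class and field names are illustrative assumptions, not terms from the patent.

```python
from dataclasses import dataclass, field
from enum import Enum

class Orientation(Enum):
    PROCESS = "process-oriented"          # matched against a standard process vector library
    DISPLACEMENT = "displacement-oriented"

@dataclass
class RecognitionItem:
    # recognition object: any mix of the part's three skeletal points,
    # its three vectors, and the included angle between two of the vectors
    objects: list
    # recognition parameters, e.g. vector and included-angle thresholds
    parameters: dict
    # achievement rule and optional error rules
    rules: dict
    orientation: Orientation = Orientation.PROCESS
    # process-oriented items only: vectors of the part action in time order
    standard_process_vectors: list = field(default_factory=list)
```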
Preferably, the left arm comprises: the left wrist joint point, the left elbow joint point, the left shoulder joint point, a first vector formed from the left shoulder joint point to the left elbow joint point, a second vector formed from the left elbow joint point to the left wrist joint point, a third vector formed from the left shoulder joint point to the left wrist joint point and an included angle between the first vector and the second vector;
the right arm includes: a right wrist joint point, a right elbow joint point, a right shoulder joint point, a first vector formed from the right shoulder joint point to the right elbow joint point, a second vector formed from the right elbow joint point to the right wrist joint point, a third vector formed from the right shoulder joint point to the right wrist joint point, and an included angle between the first vector and the second vector;
the trunk includes: the head center, the spine center of the neck, the spine center of the trunk, a first vector formed from the head center to the spine center of the neck, a second vector formed from the spine center of the neck to the spine center of the trunk, a third vector formed from the head center to the spine center of the trunk, and an included angle between the first vector and the second vector;
the left leg includes: the left ankle joint point, the left knee joint point, the left hip joint point, a first vector formed from the left hip joint point to the left knee joint point, a second vector formed from the left knee joint point to the left ankle joint point, a third vector formed from the left hip joint point to the left ankle joint point, and an included angle between the first vector and the second vector;
the right leg includes: the right ankle joint point, the right knee joint point, the right hip joint point, a first vector formed from the right hip joint point to the right knee joint point, a second vector formed from the right knee joint point to the right ankle joint point, a third vector formed from the right hip joint point to the right ankle joint point, and an included angle between the first vector and the second vector.
Preferably, the matching calculation between a vector of the to-be-detected part action and the corresponding vector in the standard process vector library, for comparison with the vector threshold set by the recognition parameters, comprises:

computing the cosine of the included angle θ between a vector $\vec{a}$ in the standard process vector library and the corresponding vector $\vec{b}$ of the to-be-detected part action:

$$\cos\theta = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}|\,|\vec{b}|}$$

and the cosine of the included angle θ between vector $\vec{a}$ and vector $\vec{b}$ is used for comparison with the vector threshold set by the recognition parameters.
Preferably, the process-oriented recognition items comprise trajectory recognition, negative trajectory recognition and hold recognition, and the displacement-oriented recognition items comprise displacement recognition and negative displacement recognition.
Preferably, in step S150 of the present invention, a first 3D character model may instead be generated from the first-type user's video data, with the first-type user's body image covered by the first 3D character in the video data; likewise, the second-type user's body image is taken from the second-type user's video data to generate a second 3D character model, which covers the second-type user's body image in the video data. The unmatched second-type user's second 3D character image is then superimposed onto the first-type user's video data and played at the first terminal, and the first-type user's first 3D character image is superimposed onto the unmatched second-type user's video data and played at the second terminal. In this way neither user's real appearance ever appears, protecting personal privacy, and the process returns to step S140.
Preferably, the virtual interactive fitness method of the present invention is applicable to scenarios such as gymnastics, yoga, tai chi, street dance and rehabilitation training, but is not limited thereto. A first-type user may perform improvised actions, whose video and three-dimensional skeleton action model are sent through the first terminal to the second terminals so that the second-type users can follow the actions. A first-type user may also complete a preset action flow, for example broadcast gymnastics, dance or rehabilitation exercises, likewise sending the video and three-dimensional skeleton action model through the first terminal to the second terminals for the second-type users to follow.
Preferably, a first-type user may move freely while the first terminal collects the audio and video data and the three-dimensional skeleton model, or may record audio and video following a three-dimensional skeleton model given by the system; the second terminal then plays the audio data and video data.
According to still another aspect of the present invention, there is also provided an electronic apparatus, including:
a processor;
a storage medium having stored thereon a computer program which, when executed by the processor, performs the steps as described above.
According to yet another aspect of the present invention, there is also provided a storage medium having stored thereon a computer program which, when executed by a processor, performs the steps as described above.
Compared with the prior art, the invention avoids the space occupied by physical fitness equipment, can turn an ordinary display device into interactive fitness equipment, creates a novel experience of one-to-one online coaching, helps the fitness coach remotely correct students' actions, and greatly lowers the threshold of fitness.
Drawings
The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 shows a flow diagram of a virtual interactive fitness method according to an embodiment of the invention.
Fig. 2 to 6 are schematic diagrams illustrating the operation process of the virtual interactive fitness method according to the embodiment of the invention.
Fig. 7 shows a schematic view of a bone model according to an embodiment of the invention.
Fig. 8 to 12 show schematic views of 5 body parts according to an embodiment of the invention.
FIG. 13 shows a comparison of vectors in a standard process vector library with vectors acquired in real time, according to an embodiment of the present invention; and
FIGS. 14 and 15 show the included angle between vectors in the standard process vector library and the included angle between vectors acquired in real time, respectively, according to an embodiment of the present invention.
FIG. 16 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure; and
FIG. 17 schematically illustrates an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
FIG. 1 shows a flow diagram of a virtual interactive fitness method according to an embodiment of the invention, and figs. 2 to 6 are schematic diagrams illustrating its operation. As shown in figs. 1 to 6, the first terminal and the second terminal in the virtual interactive fitness method of the present invention may be a mobile phone, a notebook computer or another device with a three-dimensional video capture function, but are not limited thereto. The method of the invention comprises the following steps:
s110, audio data and video data of a first type of user 6 (such as a fitness coach) are collected through a first terminal, the video data are converted into a three-dimensional skeleton action model, coordinates of a plurality of skeleton points of a human body in the video data and vectors formed among the skeleton points are obtained, and action recognition is carried out on the human body in the video according to the coordinates of the skeleton points of the human body in the video data and the vectors formed among the skeleton points to determine target action. The step S110 includes: the duration of the target action is determined. When recognizing that a motion is continuously repeated, the motion is taken as a target motion. The first terminal comprises a three-dimensional video collector 2 and a display 1.
S120, the first terminal is connected through the network server 3 to a plurality of second terminals, and the audio data and video data collected through the first terminal are played at one or more second terminals. Step S120 includes: playing the audio data and video data collected from the first terminal at the second terminal with a delay equal to the duration of one target action of the video data currently played at the second terminal. The second terminal comprises a three-dimensional video collector 5 and a display 4.
S130, three-dimensional video data of a second-type user 7 (such as a trainee) are collected in real time through each second terminal, a to-be-detected three-dimensional skeleton action model is generated in real time, the current to-be-detected model is taken as the action to be detected according to the target action of the video data currently played at the second terminal, and the action to be detected and the target action form a matching group. Step S130 includes: once the current to-be-detected three-dimensional skeleton action model is determined to have started the target action of the video data played at the second terminal, taking the to-be-detected model over the duration of that target action as the action to be detected.
S140, whether the action to be detected at each second terminal matches the target action is judged; if so, the process returns to step S130, and if not, step S150 is executed.
S150, the body image of the first-type user 6 is matted out of the first-type user's video data, the body image of the second-type user 7 is matted out of the unmatched second-type user's video data, the unmatched second-type user's body image is superimposed onto the first-type user's video data and played at the first terminal, the first-type user's body image is superimposed onto the unmatched second-type user's video data and played at the second terminal, and the process returns to step S140.
In a preferred embodiment, in step S150, the body image of the unmatched second-type user 7 is superimposed as a foreground onto the video data of the first-type user 6, and the body image of the first-type user 6 is superimposed as a foreground onto the video data of the unmatched second-type user 7.
In a preferred embodiment, in step S150 of the present invention, a first 3D character model may be generated from the first-type user's video data, with the first-type user's body image covered by the first 3D character in the video data; likewise, the second-type user's body image is taken from the second-type user's video data to generate a second 3D character model, which covers the second-type user's body image in the video data. The unmatched second-type user's second 3D character image is then superimposed onto the first-type user's video data and played at the first terminal, and the first-type user's first 3D character image is superimposed onto the unmatched second-type user's video data and played at the second terminal. Neither user's real appearance thus ever appears, protecting personal privacy, and the process returns to step S140.
In a preferred embodiment, the virtual interactive fitness method of the present invention is applicable to scenarios such as gymnastics, yoga, tai chi, street dance and rehabilitation training, but is not limited thereto. A first-type user may perform improvised actions, whose video and three-dimensional skeleton action model are sent through the first terminal to the second terminals so that the second-type users can follow the actions. A first-type user may also complete a preset action flow, for example broadcast gymnastics, dance or rehabilitation exercises, likewise sending the video and three-dimensional skeleton action model through the first terminal to the second terminals for the second-type users to follow.
In a preferred scheme, a first-type user may move freely while the first terminal collects the audio and video data and the three-dimensional skeleton model, or may record audio and video following a three-dimensional skeleton model given by the system; the second terminal then plays the audio data and video data.
In a preferred embodiment, in step S150, a bidirectional voice channel is established between the first terminal and the unmatched second terminal.
Specifically, the judgment of action matching in the present invention is realized as follows: for each human body, 15 skeletal points are set (see fig. 7): head center 211, neck center 212 (the spine center of the neck), torso center 213 (the spine center of the torso), left shoulder joint point 221, left elbow joint point 222, left wrist joint point 223, right shoulder joint point 231, right elbow joint point 232, right wrist joint point 233, left hip joint point 241, left knee joint point 242, left ankle joint point 243, right hip joint point 251, right knee joint point 252 and right ankle joint point 253.
In the present invention, the 15 skeletal points are divided, in groups of 3, into five body parts: the torso (see fig. 8), the left arm (see fig. 9), the right arm (see fig. 10), the left leg (see fig. 11) and the right leg (see fig. 12). Vectors are formed between the skeletal points within each body part, and included angles are formed between the vectors.
Specifically, the torso (see fig. 8) includes a head center 211, a spine center 212 of the neck, a spine center 213 of the torso, a first vector 214 formed from the head center 211 to the spine center 212 of the neck, a second vector 215 formed from the spine center 212 of the neck to the spine center 213 of the torso, a third vector 216 formed from the head center 211 to the spine center 213 of the torso, and an angle 217 formed by the first vector 214 and the second vector 215.
The left arm (see fig. 9) includes a left wrist joint point 223, a left elbow joint point 222, a left shoulder joint point 221, a first vector 224 formed from the left shoulder joint point 221 to the left elbow joint point 222, a second vector 225 formed from the left elbow joint point 222 to the left wrist joint point 223, a third vector 226 formed from the left shoulder joint point 221 to the left wrist joint point 223, and an angle 227 between the first vector 224 and the second vector 225.
The right arm (see fig. 10) includes a right wrist joint point 233, a right elbow joint point 232, a right shoulder joint point 231, a first vector 234 formed from the right shoulder joint point 231 to the right elbow joint point 232, a second vector 235 formed from the right elbow joint point 232 to the right wrist joint point 233, a third vector 236 formed from the right shoulder joint point 231 to the right wrist joint point 233, and an angle 237 between the first vector 234 and the second vector 235.
The left leg comprises (see fig. 11) a left ankle joint point 243, a left knee joint point 242, a left hip joint point 241, a first vector 244 formed from left hip joint point 241 to left knee joint point 242, a second vector 245 formed from left knee joint point 242 to left ankle joint point 243, a third vector 246 formed from left hip joint point 241 to left ankle joint point 243, and an angle 247 between the first vector 244 and the second vector 245.
The right leg includes (see fig. 12) a right ankle joint point 253, a right knee joint point 252, a right hip joint point 251, a first vector 254 formed from right hip joint point 251 to right knee joint point 252, a second vector 255 formed from right knee joint point 252 to right ankle joint point 253, a third vector 256 formed from right hip joint point 251 to right ankle joint point 253, and an angle between the first vector 254 and the second vector 255.
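The five-part decomposition and the per-part vectors and included angle can be sketched as follows; the grouping uses the reference numerals above, while the function itself is an illustrative assumption.

```python
import numpy as np

# Grouping of the 15 skeletal points into the five body parts of figs. 8-12.
BODY_PARTS = {
    "torso":     (211, 212, 213),  # head center, neck spine center, torso spine center
    "left_arm":  (221, 222, 223),  # shoulder, elbow, wrist
    "right_arm": (231, 232, 233),
    "left_leg":  (241, 242, 243),  # hip, knee, ankle
    "right_leg": (251, 252, 253),
}

def part_vectors(coords, part):
    """coords maps a skeletal-point id to its (x, y, z) position.
    Returns the part's first, second and third vectors and the included
    angle (radians) between the first two, as described above."""
    a, b, c = (np.asarray(coords[p], dtype=float) for p in BODY_PARTS[part])
    v1, v2, v3 = b - a, c - b, c - a  # e.g. shoulder->elbow, elbow->wrist, shoulder->wrist
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return v1, v2, v3, float(np.arccos(np.clip(cos, -1.0, 1.0)))
```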
The target action is decomposed into five body parts: left arm, right arm, left leg, right leg and torso. Each body part comprises three skeletal points as shown in figs. 8 to 12, three vectors formed by the three skeletal points, and an included angle between two of the three vectors. The target part action of at least one body part corresponds to one or more process-oriented or displacement-oriented recognition items. Each recognition item comprises a recognition object, recognition parameters and a recognition rule, where the recognition object comprises one or more of: at least one of the three skeletal points of the part action; at least one of the three vectors; and an included angle between two of the three vectors.
Whether the recognition item of a part action is process-oriented or displacement-oriented can be determined by a preset action library with corresponding classifications. In other embodiments, a machine learning model may be trained on a test set of part actions labelled as process-oriented or displacement-oriented, and the part actions into which a target action is divided can then be classified during live broadcast.
A process-oriented recognition item is matched against vectors collected in real time through a standard process vector library to judge whether the recognition item is satisfied. The standard process vector library stores at least one vector of the part action in time order at a given sampling frequency. For example, for the left arm action of a push-up, at least the first vector 224 and the second vector 225 (and the included angle 227) of the left arm are stored in time order at a sampling frequency of 5 samples per second as the standard process vector library.
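Building such a library can be sketched as below, reusing the part_vectors helper from the earlier sketch; the 5 Hz sampling follows the push-up example, while the frame format and the 30 fps source rate are assumptions.

```python
SAMPLING_HZ = 5  # sampling frequency from the push-up example

def build_standard_library(frames, part, fps=30):
    """frames: time-ordered skeletal coordinates of the reference performance.
    Stores the part's first and second vectors and included angle in time
    order at the library's sampling frequency."""
    step = max(1, fps // SAMPLING_HZ)
    library = []
    for coords in frames[::step]:
        v1, v2, _, angle = part_vectors(coords, part)
        library.append({"v1": v1, "v2": v2, "angle": angle})
    return library
```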
Specifically, the process-oriented recognition items comprise trajectory recognition, negative trajectory recognition and hold recognition; the displacement-oriented recognition items comprise displacement recognition and negative displacement recognition.
Trajectory recognition identifies whether the part moves along a preset trajectory, and prompts an error if it does not. Its recognition object comprises at least one of the three vectors and/or an included angle between two of the three vectors. The recognition parameters set one or more thresholds corresponding to the recognition object: a vector threshold for the three vectors and an included-angle threshold for the included angle, with the vector threshold and/or included-angle threshold chosen according to the recognition object.
Specifically, the vector threshold and the included-angle threshold are used to judge whether the vectors (and included angles) collected in real time match the vectors (and included angles) in the standard process vector library. For example, referring to fig. 13, for the vector threshold, the vector $\vec{b}$ from bone point 292 to bone point 293 of a part action is collected in real time, and the corresponding vector $\vec{a}$ from bone point 222 to bone point 223 at the same time instant is found in the standard process vector library. The cosine of the included angle θ between the library vector $\vec{a}$ and the real-time vector $\vec{b}$ is then computed:

$$\cos\theta = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}|\,|\vec{b}|}$$

The cosine of the included angle θ between vector $\vec{a}$ and vector $\vec{b}$ (a value from -1 to 1) is compared with the vector threshold set by the recognition parameters. The vector threshold may be set to 0.1, corresponding to a threshold range of -1 to 0.1; the vector threshold may also be set directly to the range -0.1 to 0.1. Comparing the computed cosine with the vector threshold determines whether the vector $\vec{b}$ is within the vector threshold.
For an embodiment that sets the included-angle threshold, the standard process vector library stores at least the first vector and the second vector of the part action in time order; the included angle between them can be computed from the two vectors or stored directly in the library. Referring to figs. 14 and 15, the included-angle threshold is used to compare the ratio α/β between the angle 297 (α) formed by the first vector 294 (bone point 292 to bone point 291) and the second vector 295 (bone point 292 to bone point 293) of the real-time part action, and the angle 227 (β) formed at the corresponding time by the first vector 224 (bone point 222 to bone point 221) and the second vector 225 (bone point 222 to bone point 223) in the standard process vector library, so as to judge whether the included angle of the part action collected in real time is within the included-angle threshold. The included-angle threshold may be set to 0.8, corresponding to a range of 0.8 to 1; it may also be set directly to the range 0.8 to 1. Comparing the threshold with the computed angle ratio determines whether the included angle between the first vector and the second vector is within the included-angle threshold.
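Both comparisons can be sketched as follows. The 0.1 vector threshold and the 0.8 to 1 angle-ratio range follow the examples in the text; treating a cosine near 1 (rather than within -1 to 0.1) as a match is our interpretation of the pass/fail orientation.

```python
import numpy as np

def cosine(u, v):
    # cos(theta) between two vectors; 1 means identical direction
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def vector_matches(standard_v, measured_v, vector_threshold=0.1):
    # interpretation: the real-time vector matches when its cosine against
    # the library vector lies above the threshold region (-1 to 0.1)
    return cosine(standard_v, measured_v) > vector_threshold

def angle_matches(standard_angle, measured_angle, angle_threshold=0.8):
    # ratio alpha/beta of the real-time included angle to the library angle;
    # per the text, a ratio within [0.8, 1] counts as within threshold
    ratio = measured_angle / standard_angle
    return angle_threshold <= ratio <= 1.0
```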
Furthermore, the recognition parameters of trajectory recognition also comprise a start-amplitude threshold and an achievement-amplitude threshold: the start-amplitude threshold judges whether the part action has started, and the achievement-amplitude threshold judges whether the part action has finished and reached its full amplitude. Specifically, a start maximum may be set for the included angle of a part action, with the start-amplitude threshold set to 0.8 (or another value between 0 and 1); when the included angle reaches 80% of the start maximum, the part action is judged to have started. For the achievement-amplitude threshold, an achievement maximum may be set for the included angle, with the achievement-amplitude threshold set to 0.2 (or another value between 0 and 1); when the included angle reaches (1 - 20%), i.e. 80%, of the achievement maximum, the part action is judged to be complete. In some variations, the vectors and/or coordinates of the skeletal points may also be used to compute the start- and achievement-amplitude thresholds. The start maximum and achievement maximum may be taken as the first and last entries of the standard process vector library. Alternatively, in other embodiments, both thresholds are computed from the last entry of the standard process vector library, in which case the start-amplitude threshold may be set to, for example, 0.2 and the achievement-amplitude threshold to, for example, 0.2.
The recognition rules of trajectory recognition comprise an achievement rule and, optionally, different error rules corresponding to the set recognition object and recognition parameters. The achievement rule of trajectory recognition is that the recognition objects of the part action stay within the set vector threshold and/or included-angle threshold starting from the position represented by the start-amplitude threshold, remain within those thresholds while moving from the start-amplitude position to the achievement-amplitude position, and are still within the thresholds when they reach the position represented by the achievement-amplitude threshold. The error rules of trajectory recognition include: exceeding a vector threshold (for example, the upper arm or thigh represented by the first vector goes out of threshold); exceeding an included-angle threshold (for example, the elbow or knee angle represented by the included angle goes out of threshold); and insufficient amplitude. The insufficient-amplitude rule is that the recognition objects stay within the set vector and/or included-angle thresholds from the start-amplitude position onward but never reach the position represented by the achievement-amplitude threshold.
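Under the assumptions above, trajectory recognition can be sketched as one pass over the real-time samples, reusing vector_matches and angle_matches from the earlier sketch; aligning samples to library entries by index is a simplification.

```python
def recognize_trajectory(samples, library, start_max, achieve_max,
                         start_thresh=0.8, achieve_thresh=0.2):
    """samples/library: time-ordered dicts with keys 'v1' and 'angle'.
    Returns 'achieved' or an error string per the rules above."""
    started = achieved = False
    for i, s in enumerate(samples):
        if not started:
            # the part action starts once the angle reaches 80% of the start maximum
            started = s["angle"] >= start_thresh * start_max
            continue
        ref = library[min(i, len(library) - 1)]
        if not (vector_matches(ref["v1"], s["v1"]) and
                angle_matches(ref["angle"], s["angle"])):
            return "error: vector or included-angle threshold exceeded"
        # the action completes once the angle reaches (1 - 20%) of the achievement maximum
        achieved = achieved or s["angle"] >= (1 - achieve_thresh) * achieve_max
    return "achieved" if achieved else "error: insufficient amplitude"
```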
Negative trajectory recognition identifies whether the part moves along a preset trajectory, and prompts an error if it does. Like trajectory recognition, its recognition object comprises at least one of the three vectors and/or an included angle between two of them (preferably the included angle between the first vector and the second vector), and its recognition parameters set one or more thresholds: vector thresholds for the three vectors and an included-angle threshold for the included angle, chosen according to the recognition object. Negative trajectory recognition differs from trajectory recognition in its achievement rule: the recognition objects stay within the set vector and/or included-angle thresholds from the start-amplitude position, through the movement, to the achievement-amplitude position, and some recognition other than a negative recognition or a hold recognition is currently in progress (in other words, a trajectory or displacement amplitude is growing). When this rule is met, a trajectory error is prompted. Conversely, no error is prompted if, during the movement of the body part, the recognition object does not stay within the thresholds set by the recognition parameters while the part action it represents generates a trajectory and/or displacement.
Hold recognition identifies whether the part action stays in a given state during the movement (for example, staying upright or holding a bend angle), and prompts an error if it does not. Its recognition object comprises at least one of the three vectors and/or an included angle between two of them. The recognition parameters set one or more thresholds: vector thresholds for the three vectors and an included-angle threshold for the included angle, chosen according to the recognition object. The achievement rule of hold recognition is that the recognition object stays within the set vector and/or included-angle threshold at all times; if this rule is not met, the error corresponding to the hold recognition is prompted.
Although displacement recognition and negative displacement recognition are described as displacement-oriented rather than process-oriented recognition items, they actually need to recognize whether the part action is in continuous motion; if it is not, the recognition is interrupted and an error is prompted directly, or recognition restarts from the current position.
Displacement recognition judges whether the recognition object reaches a predetermined displacement direction and displacement distance, and prompts an error if it does not. Its recognition object comprises one or more of the three skeletal points; preferably, one skeletal point of the part action is specified. The recognition parameters set a displacement distance, a displacement direction (which can be mapped to the positive or negative X, Y or Z axis of the three-dimensional coordinate system, so that the exact displacement direction need not be computed) and a start-amplitude threshold. The start-amplitude threshold of the displacement is a value between 0 and 1; for example, it may be set to 0.2, meaning that the part action, i.e. the displacement recognition, begins when the displacement of the specified skeletal point exceeds 20% of the set displacement distance. The recognition rules of displacement recognition comprise an achievement rule and, optionally, different error rules. The achievement rule is that the specified skeletal point moves in the displacement direction set in the recognition parameters and the displacement distance of one continuous motion is greater than or equal to the set displacement distance. The error rules include: insufficient starting amplitude, when the displacement of the specified skeletal point does not exceed the start-amplitude threshold; and insufficient achievement amplitude, when the displacement exceeds the start-amplitude threshold and the moving direction matches the set displacement direction, but the displacement distance of one continuous motion is less than the set displacement distance.
Negative displacement recognition judges whether the recognition object reaches a predetermined displacement direction and displacement distance, and prompts an error if it does. As with displacement recognition, the recognition object comprises one or more of the three skeletal points, preferably one specified skeletal point of the part action, and the recognition parameters set a displacement distance, a displacement direction (mapped to the positive or negative X, Y or Z axis of the three-dimensional coordinate system) and a start-amplitude threshold. The achievement rule of negative displacement recognition is that the specified skeletal point moves in the set displacement direction, the displacement distance of one continuous motion is greater than or equal to the set displacement distance, and some recognition other than a negative recognition or a hold recognition is currently in progress (in other words, a trajectory or displacement amplitude is growing). When this rule is met, an error is prompted. Conversely, no error is prompted if, during the movement of the body part, the recognition object does not move in the displacement direction set by the recognition parameters, or its movement distance stays below the set displacement distance.
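Displacement and negative-displacement recognition can be sketched as below; the axis mapping and the 0.2 start-amplitude threshold follow the text, and using the net travel between the first and last positions of one continuous motion is an assumption.

```python
def travel(track, axis, sign):
    # signed net travel of the tracked skeletal point along the mapped axis
    # (axis: 0=X, 1=Y, 2=Z; sign: +1 or -1 for the positive/negative direction)
    return sign * (track[-1][axis] - track[0][axis])

def recognize_displacement(track, axis, sign, distance, start_thresh=0.2):
    d = travel(track, axis, sign)
    if d < start_thresh * distance:
        return "error: insufficient starting amplitude"
    if d < distance:
        return "error: insufficient achievement amplitude"
    return "achieved"

def recognize_negative_displacement(track, axis, sign, distance):
    # error precisely when the forbidden displacement is reached
    if travel(track, axis, sign) >= distance:
        return "error: forbidden displacement reached"
    return "ok"
```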
A recognition item is set for at least one part action of an action; the at least one part action and its recognition items form the action file of that action, and the action file is stored in the standard action database in association with an action number.
In one embodiment, recognition items are set for the torso, left leg and right leg of a squat action. The recognition items of the torso comprise a hold recognition and a displacement recognition. In the torso's hold recognition, the recognition object is only the first vector, from the head center to the spine center of the neck; the parameters of the first vector are set accordingly, and a standard process vector library of the torso's first vector during the squat is stored for subsequent matching. If the torso's first vector collected in real time exceeds the first-vector threshold, the body is not being kept upright and an error is prompted. Here, because of the characteristics of the torso, when the first vector from the head center to the spine center of the neck stays upright, the second vector from the spine center of the neck to the spine center of the torso can generally be assumed upright as well; only one vector threshold is therefore set, reducing the subsequent computation and improving real-time error-correction efficiency.
In the torso's displacement recognition, the recognition object is the skeletal point at the spine center of the torso, and the corresponding recognition parameters are a predetermined displacement distance and displacement direction (the negative Y axis). When the spine center of the torso moves more than the predetermined distance in the negative Y direction, this recognition of the part action is achieved; if it does not, the amplitude of the part action is insufficient.
The left leg is given a negative displacement recognition to warn that the knee must not pass beyond the toes during the squat. Its recognition object is the left knee joint point, and its recognition parameters are a predetermined displacement distance, a displacement direction (the positive Z axis) and a start-amplitude threshold. When the left knee moves more than the predetermined displacement distance in the positive Z direction, a part-action error is prompted; when it does not, this recognition of the part action is achieved. The recognition items of the right leg are the same as those of the left leg and are not repeated here.
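The squat example above could be serialized into an action file roughly as follows; the schema, key names and distances are illustrative assumptions, and only the recognition content mirrors the text.

```python
SQUAT_ACTION = {
    "action_id": "squat",
    "parts": {
        "torso": [
            {"type": "hold",                    # body stays upright
             "object": "v1",                    # head center -> neck spine center
             "params": {"vector_threshold": 0.1}},
            {"type": "displacement",            # torso spine center moves down
             "object": 213,
             "params": {"direction": "-Y", "distance": 0.30,  # metres, assumed
                        "start_amplitude": 0.2}},
        ],
        "left_leg": [
            {"type": "negative_displacement",   # knee must not pass the toes
             "object": 242,                     # left knee joint point
             "params": {"direction": "+Z", "distance": 0.05}},  # assumed
        ],
        # the right leg mirrors the left leg
    },
}
```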
In some embodiments, for actions with a back-and-forth course such as squats and push-ups, recognition items may be set, and recognition performed, for only one of the two phases: for example, recognition items and error correction may be applied only to the downward phase of the squat, or only to the pressing-down phase of the push-up. This further reduces the computation of action recognition and increases the real-time performance of error correction.
The using process of the invention is as follows:
referring to fig. 2, a first type of user 6 (fitness trainer) can give fitness classes to a plurality of second type of users 7 (trainees) in different positions through a network at his home.
Referring to fig. 3, after the action of the first-type user 6 (the fitness coach) is captured by the three-dimensional video collector 2, it is played on the second terminals of the second-type users 7 (the trainees). The three-dimensional video collector 5 of each second terminal also captures the action of its second-type user 7 and compares in real time whether it matches the action of the first-type user 6, so as to evaluate the effect of the fitness action.
Referring to figs. 4 to 7, when the action of a second-type user 7 does not match that of the first-type user 6 (that is, the action is not standard and needs guidance), the body image of the first-type user 6 is matted out of the first-type user's video data, the body image of the second-type user 7 is matted out of the unmatched second-type user's video data, the unmatched second-type user's body image is superimposed onto the first-type user's video data and played at the first terminal, and the first-type user's body image is superimposed onto the unmatched second-type user's video data and played at the second terminal. In this way the first-type user 6 (the coach) and the second-type user 7 (the trainee) each see, on display 1 and display 4 respectively, an actual image of their limbs interacting, and the coach can intuitively teach the trainee the correct action, as if adjusting the trainee's movements hands-on in the same gym. The user experience is therefore very good: the time of travelling to and from a gym is saved, and high gym fees are avoided.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium on which a computer program is stored; when executed by, for example, a processor, the program can implement the steps of the virtual interactive fitness method described in any of the above embodiments. In some possible embodiments, aspects of the present invention may also be implemented as a program product comprising program code which, when the program product runs on a terminal device, causes the terminal device to perform the steps according to the various exemplary embodiments of the invention described in the virtual interactive fitness method section above.
Referring to fig. 16, a program product 300 for implementing the above method according to an embodiment of the present invention is described. It may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer-readable signal medium may include a propagated data signal with readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including but not limited to electromagnetic or optical forms, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable or RF, or any suitable combination thereof.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Programs for performing the operations of the present invention may also be built in any combination of one or more integrated development environments (IDEs) or game development engines, such as Visual Studio, Unity3D, Unreal, and the like.
In an exemplary embodiment of the present disclosure, there is also provided an electronic device that may include a processor, and a memory for storing executable instructions of the processor, wherein the processor is configured to execute, via execution of the executable instructions, the steps of the virtual interactive fitness method in any one of the above embodiments.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 17. The electronic device 600 shown in fig. 17 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 17, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 that connects the various system components (including the storage unit 620 and the processing unit 610), a display unit 640, and the like.
The storage unit stores program code executable by the processing unit 610 to cause the processing unit 610 to perform the steps according to the various exemplary embodiments of the present invention described in the virtual interactive fitness method section of this specification. For example, the processing unit 610 may perform the steps shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory (RAM) unit 6201 and/or a cache memory unit 6202, and may further include a read-only memory (ROM) unit 6203.
The storage unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processing unit or local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
Compared with the prior art, the virtual interactive fitness method, electronic device, and storage medium of the present invention avoid the space occupied by physical fitness equipment, can turn an ordinary display device into interactive fitness equipment, create a novel experience of one-to-one online coaching, help a fitness coach remotely correct trainees' actions, and greatly lower the barrier to fitness.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.

Claims (12)

1. A virtual interactive fitness method, comprising the steps of:
s110, acquiring audio data and video data of a first type of user through a first terminal, converting the video data into a three-dimensional bone action model, acquiring coordinates of a plurality of bone points of a human body in the video data and vectors formed among the plurality of bone points, and identifying the action of the human body in the video according to the coordinates of the plurality of bone points of the human body in the video data and the vectors formed among the plurality of bone points to determine a target action;
s120, playing audio data and video data collected through the first terminal at one or more second terminals;
s130, acquiring three-dimensional video data of a second class of users in real time through each second terminal, generating a three-dimensional skeleton action model to be detected in real time, taking the three-dimensional skeleton action model to be detected as an action to be detected according to the target action of the video data played at the second terminal at present, and forming a matching group by the action to be detected and the target action; and
s140, judging whether the motion to be detected of each second terminal is matched with the target motion, if so, returning to the step S130, and if not, executing the step S150;
s150, matting and acquiring body images of the first type of users from the video data of the first type of users, matting and acquiring body images of the second type of users from the video data of the unmatched second type of users, overlaying the body images of the unmatched second type of users to the video data of the first type of users, playing the body images at the first terminal, overlaying the body images of the first type of users to the video data of the unmatched second type of users, playing the body images at the second terminal, and returning to the step S140.
2. The virtual interactive fitness method according to claim 1, wherein in step S150, the body image of the unmatched second type of user is superimposed as a foreground onto the video data of the first type of user,
and the body image of the first type of user is superimposed as a foreground onto the video data of the unmatched second type of user.
3. The virtual interactive fitness method according to claim 1, wherein in step S150, a bidirectional voice channel is established between the first terminal and the unmatched second terminal.
4. The virtual interactive fitness method according to claim 1, wherein the step S110 comprises: when an action is recognized as being continuously repeated, taking the action as the target action;
the step S120 comprises: playing, at the second terminal, the audio data and the video data collected from the first terminal delayed by the duration of the target action currently played at the second terminal.
5. The virtual interactive fitness method of claim 4,
the step S110 includes: determining a duration of the target action;
the step S130 includes: and taking the current three-dimensional skeleton action model to be detected in the duration as a to-be-detected action from the determination of the duration from the start of the action of the current three-dimensional skeleton action model to the target action of the video data played at the second terminal.
6. The virtual interactive fitness method of claim 1,
the step S110 further includes:
and decomposing the target action into five body parts according to the three-dimensional skeleton action model: a left arm, a right arm, a left leg, a right leg, and a trunk, each body part comprising: three skeletal points, three vectors formed by the three skeletal points, and an included angle between two of the three vectors,
generating one or more process-oriented or displacement-oriented identification items for the part action of at least one body part, each identification item comprising an identification object, an identification parameter, and an identification rule, the identification object comprising one or more of: at least one of the three skeletal points, at least one of the three vectors, and the included angle between two of the three vectors,
wherein each process-oriented identification item further comprises a standard process vector library, which stores at least one vector of the part action in time sequence;
the step S130 further includes:
dividing the action to be detected into corresponding part actions to be detected according to the target part actions of the target action, and forming a body part matching group from each part action to be detected and the corresponding target part action; and
for each body part matching group, acquiring the identification item of the target part action, and acquiring, according to the three-dimensional skeleton action model, the coordinates of the skeletal points of the identification object in the part action to be detected and the vectors formed by those skeletal points and/or the included angles between the vectors; if the identification item is process-oriented, performing a matching calculation between the vector of the part action to be detected and the corresponding vector in the standard process vector library for comparison with the vector threshold set by the identification parameter, and if the identification rule is not met, feeding back that the part action to be detected is incorrect.
7. The virtual interactive fitness method of claim 6,
the left arm includes: the left wrist joint point, the left elbow joint point, the left shoulder joint point, a first vector formed from the left shoulder joint point to the left elbow joint point, a second vector formed from the left elbow joint point to the left wrist joint point, a third vector formed from the left shoulder joint point to the left wrist joint point and an included angle between the first vector and the second vector;
the right arm includes: a right wrist joint point, a right elbow joint point, a right shoulder joint point, a first vector formed from the right shoulder joint point to the right elbow joint point, a second vector formed from the right elbow joint point to the right wrist joint point, a third vector formed from the right shoulder joint point to the right wrist joint point, and an included angle between the first vector and the second vector;
the trunk includes: a head center point, a neck spine point, a trunk spine center point, a first vector formed from the head center point to the neck spine point, a second vector formed from the neck spine point to the trunk spine center point, a third vector formed from the head center point to the trunk spine center point, and an included angle between the first vector and the second vector;
the left leg includes: the left ankle joint point, the left knee joint point, the left hip joint point, a first vector formed from the left hip joint point to the left knee joint point, a second vector formed from the left knee joint point to the left ankle joint point, a third vector formed from the left hip joint point to the left ankle joint point, and an included angle between the first vector and the second vector;
the right leg includes: the right ankle joint point, the right knee joint point, the right hip joint point, a first vector formed from the right hip joint point to the right knee joint point, a second vector formed from the right knee joint point to the right ankle joint point, a third vector formed from the right hip joint point to the right ankle joint point, and an included angle between the first vector and the second vector.
8. The virtual interactive fitness method of claim 7, wherein the matching calculation between the vector of the part action to be detected and the corresponding vector in the standard process vector library, for comparison with the vector threshold set by the identification parameter, comprises:
computing the cosine of the included angle θ between a vector a in the standard process vector library and the corresponding vector b of the part action to be detected:
cos θ = (a · b) / (|a| |b|)
and comparing the cosine value of the included angle θ with the vector threshold set by the identification parameter.
9. The virtual interactive fitness method of claim 6, wherein the process-oriented identification items comprise track identification, negative track identification, and hold identification;
the displacement-oriented identification items comprise displacement identification and negative displacement identification.
10. The virtual interactive fitness method according to claim 1, wherein step S150 is replaced by: obtaining the body image of the first type of user from the video data of the first type of user to generate a first 3D character model, and covering the body image of the first type of user in the video data with the first 3D character model; obtaining the body image of the second type of user from the video data of the second type of user to generate a second 3D character model, and covering the body image of the second type of user in the video data with the second 3D character model; and subsequently superimposing the second 3D character model of the unmatched second type of user onto the video data of the first type of user and playing it at the first terminal, superimposing the first 3D character model of the first type of user onto the video data of the unmatched second type of user and playing it at the second terminal, and returning to step S140.
11. An electronic device, characterized in that the electronic device comprises:
a processor;
a storage medium having stored thereon a computer program which, when executed by the processor, implements the steps of the method of any one of claims 1 to 10.
12. A storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 10.
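As an illustration of the vector matching recited in claims 6 to 8 above, the following is a minimal sketch: it builds the three vectors of one body part from three joint coordinates and applies the cosine comparison of claim 8 against a standard process vector library. The joint values, the 0.95 threshold, and all function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def cosine(a, b):
    """cos(theta) = (a . b) / (|a| |b|), the quantity compared in claim 8."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def part_vectors(p1, p2, p3):
    """The three vectors of one body part (claim 7); for the left arm,
    p1=shoulder, p2=elbow, p3=wrist give shoulder->elbow, elbow->wrist,
    and shoulder->wrist."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    return p2 - p1, p3 - p2, p3 - p1

def process_match(measured_seq, standard_seq, threshold=0.95):
    """Process-oriented check: each timed vector of the part action must stay,
    in time order, within the cosine threshold of its counterpart in the
    standard process vector library."""
    return all(cosine(m, s) >= threshold
               for m, s in zip(measured_seq, standard_seq))
```

The included angle of claim 7 is then simply the arccosine of the cosine of the first two vectors, and a False result from process_match corresponds to the feedback of an incorrect part action in claim 6.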
CN201810399778.2A 2018-04-28 2018-04-28 Virtual interaction fitness method, electronic equipment and storage medium Active CN110428486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810399778.2A CN110428486B (en) 2018-04-28 2018-04-28 Virtual interaction fitness method, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810399778.2A CN110428486B (en) 2018-04-28 2018-04-28 Virtual interaction fitness method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110428486A CN110428486A (en) 2019-11-08
CN110428486B (en) 2022-09-27

Family

ID=68407141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810399778.2A Active CN110428486B (en) 2018-04-28 2018-04-28 Virtual interaction fitness method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110428486B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929641A (en) * 2019-11-21 2020-03-27 三星电子(中国)研发中心 Action demonstration method and system
CN113409651B (en) * 2020-03-16 2024-04-16 上海史贝斯健身管理有限公司 Live broadcast body building method, system, electronic equipment and storage medium
CN112090053A (en) * 2020-09-14 2020-12-18 成都拟合未来科技有限公司 3D interactive fitness training method, device, equipment and medium
CN112348942B (en) * 2020-09-18 2024-03-19 当趣网络科技(杭州)有限公司 Body-building interaction method and system
CN112418046B (en) * 2020-11-17 2023-06-23 武汉云极智能科技有限公司 Exercise guiding method, storage medium and system based on cloud robot
CN112364818A (en) * 2020-11-27 2021-02-12 Oppo广东移动通信有限公司 Action correcting method and device, electronic equipment and storage medium
CN113241148B (en) * 2021-04-28 2023-04-11 厦门艾地运动科技有限公司 Fitness scheme generation method and device, terminal equipment and storage medium
CN113569688A (en) * 2021-07-21 2021-10-29 上海健指树健康管理有限公司 Body fitness testing method and device based on limb recognition technology and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10134296B2 (en) * 2013-10-03 2018-11-20 Autodesk, Inc. Enhancing movement training with an augmented reality mirror
KR101711488B1 (en) * 2015-01-28 2017-03-03 한국전자통신연구원 Method and System for Motion Based Interactive Service

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000353252A (en) * 1999-06-14 2000-12-19 Nippon Telegr & Teleph Corp <Ntt> Video superimposing method, device therefor and recording medium for recording video superimpose program
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN105903157A (en) * 2016-04-19 2016-08-31 深圳泰山体育科技股份有限公司 Electronic coach realization method and system
CN106022213A (en) * 2016-05-04 2016-10-12 北方工业大学 Human body motion recognition method based on three-dimensional bone information
CN106485055A (en) * 2016-09-22 2017-03-08 吉林大学 A kind of old type ii diabetes patient moving training system based on Kinect sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Human body action recognition based on skeletal information from a Kinect sensor; Zhu Guogang et al.; Computer Simulation; 2014-12-31; Vol. 31, No. 12; pp. 329-345 *

Also Published As

Publication number Publication date
CN110428486A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
CN110298220B (en) Action video live broadcast method, system, electronic equipment and storage medium
CN109308438B (en) Method for establishing action recognition library, electronic equipment and storage medium
CN110298218B (en) Interactive fitness device and interactive fitness system
CN110298221B (en) Self-help fitness method and system, electronic equipment and storage medium
CN109308437B (en) Motion recognition error correction method, electronic device, and storage medium
Anilkumar et al. Pose estimated yoga monitoring system
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
US11954869B2 (en) Motion recognition-based interaction method and recording medium
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN115131879B (en) Action evaluation method and device
Yang et al. Human exercise posture analysis based on pose estimation
KR102356685B1 (en) Home training providing system based on online group and method thereof
EP3786971A1 (en) Advancement manager in a handheld user device
CN113743237A (en) Follow-up action accuracy determination method and device, electronic device and storage medium
CN115331314A (en) Exercise effect evaluation method and system based on APP screening function
CN118380096A (en) Rehabilitation training interaction method and device based on algorithm tracking and virtual reality
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
Chariar et al. AI trainer: Autoencoder based approach for squat analysis and correction
Kishore et al. Smart yoga instructor for guiding and correcting yoga postures in real time
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
Rozaliev et al. Methods and applications for controlling the correctness of physical exercises performance
CN116386136A (en) Action scoring method, equipment and medium based on human skeleton key points
US20240181295A1 (en) User experience platform for connected fitness systems
CN113842622B (en) Motion teaching method, device, system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210819

Address after: 200125 room 328, floor 3, unit 2, No. 231, Expo Village Road, pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Shanghai shibeisi Fitness Management Co.,Ltd.

Address before: 200233 room 136, building 20, tianlinfang, 130 Tianlin Road, Xuhui District, Shanghai

Applicant before: SHANGHAI MYSHAPE INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant