CN107943291B - Human body action recognition method and device and electronic equipment - Google Patents

Human body action recognition method and device and electronic equipment

Info

Publication number
CN107943291B
CN107943291B (application CN201711182909.3A)
Authority
CN
China
Prior art keywords
action
human body
connecting line
standard
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711182909.3A
Other languages
Chinese (zh)
Other versions
CN107943291A (en)
Inventor
严程
李震
方醒
郭宏财
张迎春
李红成
叶进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuomi Private Ltd
Original Assignee
Zhuomi Private Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuomi Private Ltd
Priority to CN201711182909.3A
Publication of CN107943291A
Priority to PCT/CN2018/098598
Application granted
Publication of CN107943291B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a human body action recognition method, a human body action recognition device and electronic equipment. The method comprises: while a standard action is displayed, collecting a video picture frame of the human body action; identifying each joint of the human body in the video picture frame; connecting each pair of adjacent joints to obtain the connecting line between them; calculating the actual included angle between each connecting line and a preset reference direction; and determining whether the human body action matches the standard action according to the difference between the actual included angle and the standard angle. By identifying adjacent joints of the human body in the video picture frame, obtaining the connecting lines between adjacent joints, calculating the actual included angles between those connecting lines and the preset reference direction, and comparing them with the standard angles, the method recognizes actions accurately and solves the technical problem of inaccurate action recognition in the prior art.

Description

Human body action recognition method and device and electronic equipment
Technical Field
The invention relates to the technical field of mobile terminals, in particular to a human body action identification method and device and electronic equipment.
Background
A motion-sensing game performs human-computer interaction through an Internet operation platform: a player holds a dedicated game handle, and the actions of characters in the game are controlled by recognizing the player's body movements, so that the player's whole body is involved in the game and the player can enjoy a new motion-sensing interactive experience.
In the related art, motion-sensing game technology is mainly applied to computers and game consoles, which are poorly portable. Moreover, because whether a body action is correct is judged and calculated from the position of the user's handheld controller, the judgment of the user's body action is inaccurate.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, a first objective of the present invention is to provide a human body action recognition method. The method obtains the connecting line between adjacent joints by recognizing adjacent joints of the human body in a video picture frame, calculates the actual included angle between that connecting line and a preset reference direction, and determines whether the human body action matches a standard action according to the difference between the actual included angle and the standard angle, thereby recognizing actions accurately and solving the technical problem of inaccurate action recognition in the prior art.
A second object of the present invention is to provide a human body motion recognition apparatus.
A third object of the invention is to propose an electronic device.
A fourth object of the invention is to propose a non-transitory computer-readable storage medium.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a method for recognizing human body actions, including:
when standard actions are displayed, video picture frames of human body actions are collected;
identifying each joint of the human body in the video picture frame;
connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints;
calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction;
determining whether the human body action is matched with the standard action or not according to the difference value between the actual included angle and the standard angle; and the standard angle is an angle between a connecting line between every two adjacent joints and the reference direction when the standard action is executed.
Optionally, as a first possible implementation manner of the first aspect, the determining, according to a difference between the actual included angle and a standard angle, whether the human body motion matches a standard motion includes:
calculating, for the connecting line between each two adjacent joints, the difference value between the corresponding standard angle and the actual included angle;
if the difference calculated by the connecting line between each two adjacent joints is within the error range, determining that the human body action is matched with the standard action;
and if the difference calculated for the connecting line between at least one pair of adjacent joints is not within the error range, determining that the human body action is not matched with the standard action.
Optionally, as a second possible implementation manner of the first aspect, after determining that the human body action matches the standard action, the method further includes:
aiming at the connecting line between each two adjacent joints, determining a scoring coefficient of the connecting line according to the corresponding difference and the error range;
generating evaluation information of the connecting line according to the grading coefficient of the connecting line and the score corresponding to the connecting line; the evaluation information of the connecting line comprises a decomposition action score which is the product of a score coefficient of the connecting line and a score corresponding to the connecting line;
generating evaluation information of the human body action according to the evaluation information of the connecting line between each two adjacent joints; the evaluation information of the human body actions comprises human body action scores, and the human body action scores are the sum of all decomposition action scores.
Optionally, as a third possible implementation manner of the first aspect, the determining a scoring coefficient of the connection line according to the corresponding difference and the error range includes:
calculating a scoring coefficient p of the connecting line by using the formula p = 1 − [2Δ/(a − b)]; wherein b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
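As an illustrative sketch only (the function names, and the assumption that each connecting line carries a preset score, are not part of the claims), the scoring coefficient and the decomposition/total scores described above can be computed as:

```python
def score_coefficient(delta, a, b):
    """Scoring coefficient p = 1 - [2*delta / (a - b)], where b is the lower
    limit of the error range, a its upper limit, and delta the difference
    between the actual and standard angle for one connecting line."""
    return 1 - (2 * delta) / (a - b)

def action_score(deltas, line_scores, a, b):
    """Human-action score: the sum of the decomposition scores, each of which
    is the product of a line's scoring coefficient and its assigned score."""
    return sum(score_coefficient(d, a, b) * s
               for d, s in zip(deltas, line_scores))
```

For example, with an error range of (−10°, 10°), a perfectly matched line (Δ = 0) gets coefficient 1, while a line at half the tolerance gets 0.5.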
Optionally, as a fourth possible implementation manner of the first aspect, after determining that the human body action does not match the standard action, the method further includes:
and determining that the human motion score in the evaluation information of the human motion is zero.
Optionally, as a fifth possible implementation manner of the first aspect, before the capturing a video frame of a human body motion when the standard motion is displayed, the method further includes:
acquiring a selected audio and standard actions corresponding to time nodes in the audio;
playing the audio;
and displaying the corresponding standard action when the audio is played to each time node.
Optionally, as a sixth possible implementation manner of the first aspect, the method further includes:
when the audio playing is finished, obtaining evaluation information of each human body action; the evaluation information of the human body action is used for indicating the difference degree between the human body action and the corresponding standard action;
and generating a target video according to the audio, each video frame and the action evaluation information of each human action.
With the human body action recognition method above, when a standard action is displayed, a video picture frame of the human body action is collected; each joint of the human body is identified in the video picture frame; each pair of adjacent joints is connected to obtain the connecting line between them; the actual included angle between each connecting line and a preset reference direction is calculated; and whether the human body action matches the standard action is determined according to the difference between the actual included angle and the standard angle. By identifying adjacent joints of the human body in the video picture frame, obtaining the connecting lines between adjacent joints, calculating the actual included angles between those connecting lines and the preset reference direction, and comparing them with the standard angles, actions are recognized accurately, solving the technical problem of inaccurate action recognition in the prior art.
In order to achieve the above object, a second aspect of the present invention provides a human body motion recognition apparatus, including:
the acquisition module is used for acquiring video picture frames of human body actions when the standard actions are displayed;
the identification module is used for identifying each joint of the human body in the video picture frame;
the connecting module is used for connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints;
the calculation module is used for calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction;
the determining module is used for determining whether the human body action is matched with the standard action according to the difference value between the actual included angle and the standard angle; and the standard angle is an angle between a connecting line between every two adjacent joints and the reference direction when the standard action is executed.
Optionally, as a first possible implementation manner of the second aspect, the determining module includes:
the calculating unit is used for calculating the difference value between the corresponding standard angle and the corresponding actual angle according to the connecting line between each two adjacent joints;
the determining unit is used for determining that the human body action is matched with the standard action if the difference calculated by the connecting line between each two adjacent joints is within an error range; and if the difference calculated by the connecting line between at least one adjacent two joints is not within the error range, determining that the human body action is not matched with the standard action.
Optionally, as a second possible implementation manner of the second aspect, the determining module further includes:
the first scoring unit is used for determining a scoring coefficient of each connecting line between two adjacent joints according to the corresponding difference and the error range; generating evaluation information of the connecting line according to the grading coefficient of the connecting line and the score corresponding to the connecting line; the evaluation information of the connecting line comprises a decomposition action score which is the product of a score coefficient of the connecting line and a score corresponding to the connecting line; generating evaluation information of the human body action according to the evaluation information of the connecting line between each two adjacent joints; the evaluation information of the human body actions comprises human body action scores, and the human body action scores are the sum of all decomposition action scores.
Optionally, as a third possible implementation manner of the second aspect, the first scoring unit is specifically configured to:
calculating a scoring coefficient p of the connecting line by using the formula p = 1 − [2Δ/(a − b)]; wherein b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, as a fourth possible implementation manner of the second aspect, the determining module further includes:
and the second scoring unit is used for determining that the human action score in the evaluation information of the human action is zero.
Optionally, as a fifth possible implementation manner of the second aspect, the apparatus further includes:
the selection module is used for acquiring the selected audio and the standard action corresponding to each time node in the audio;
the playing module is used for playing the audio;
and the display module is used for displaying the corresponding standard action when the audio is played to each time node.
Optionally, as a sixth possible implementation manner of the second aspect, the apparatus further includes:
the generating module is used for acquiring evaluation information of each human body action when the audio playing is finished; the evaluation information of the human body action is used for indicating the difference degree between the human body action and the corresponding standard action; and generating a target video according to the audio, each video frame and the action evaluation information of each human action.
In the human body action recognition device, the acquisition module collects a video picture frame of the human body action when a standard action is displayed; the recognition module identifies each joint of the human body in the video picture frame; the connection module connects each pair of adjacent joints to obtain the connecting line between them; the calculation module calculates the actual included angle between each connecting line and a preset reference direction; and the determination module determines whether the human body action matches the standard action according to the difference between the actual included angle and the standard angle. By identifying adjacent joints of the human body in the video picture frame, obtaining the connecting lines between adjacent joints, calculating the actual included angles between those connecting lines and the preset reference direction, and comparing them with the standard angles, the device recognizes actions accurately and solves the problem of inaccurate action recognition in the prior art.
To achieve the above object, a third aspect of the present invention provides an electronic device, including: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, and is used for executing the human body motion recognition method of the first aspect.
To achieve the above object, a fourth aspect of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the human body motion recognition method according to the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a method for recognizing human body actions according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the ratio of limbs to height in the human anatomy provided in this embodiment;
fig. 3 is a schematic flow chart of another human body motion recognition method according to an embodiment of the present invention;
FIG. 4A is a schematic diagram of a standard action according to an embodiment of the present invention;
FIG. 4B is a schematic diagram of an actual action according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a human body motion recognition device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another human body motion recognition device according to an embodiment of the present invention; and
fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a human body motion recognition method, a human body motion recognition device, and an electronic apparatus according to an embodiment of the present invention with reference to the drawings.
The electronic device in this embodiment may specifically be a mobile phone; as those skilled in the art will appreciate, the electronic device may also be another mobile terminal, all of which can recognize human body actions with reference to the scheme provided in this embodiment.
In the following embodiments, an electronic device is taken as an example of a mobile phone, and a method for recognizing human body actions is explained.
Fig. 1 is a schematic flow chart of a method for recognizing human body actions according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step 101, collecting video frame of human body action when displaying standard action.
Specifically, the mobile phone application program is opened and the video acquisition interface is entered. As one possible implementation, an audio selection interface is entered before the video acquisition interface: the user selects a favorite audio from a pull-down menu, each time node in the audio having a corresponding standard action, and confirms the selection through a confirmation button to enter the video acquisition interface and start collecting video picture frames. While the mobile phone plays the audio, the corresponding standard action is displayed at each corresponding time node; when a standard action is displayed, the user synchronously performs the same action, and at the same time the camera device collects video picture frames of the human body action performed by the user.
When a standard action is displayed, the synchronously collected video contains multiple frames of the human body action. As one possible implementation, taking the time point at which the standard action is displayed as the time reference, N frames of pictures containing the human body action are collected from that point onward; the value of N can be determined by a person skilled in the art according to the practical application.
As another possible implementation, video frame frames containing human body motion may be continuously collected during the whole audio playing process.
Step 102, identifying each joint of the human body in the video picture frame.
As one possible implementation, when the video picture frame carries depth information, the human body and the background in each frame can be separated, and the joints of the human body can then be identified. To make the video picture frame carry depth information, the camera device that collects the human body video picture frames may be one capable of acquiring depth information, and the human body part in the image is identified through the acquired depth information. For example, a dual camera or an RGBD (Red-Green-Blue-Depth) camera acquires depth information while imaging; depth information may also be acquired through a structured-light or TOF lens, which are not listed here one by one.
Specifically, according to the acquired depth information and in combination with face recognition technology, the face region and its position information in the image are identified, so that the pixel points contained in the face region and their corresponding depth information are obtained, and the average of the depth information of the face pixel points is calculated. Furthermore, because the human body and the face lie on essentially the same imaging plane, pixel points whose depth differs from this average by less than a threshold are identified as the human body; the human body and its contour can then be recognized, determining the depth and position information of each pixel point in the body and contour, so that the human body is separated from the background. Further, to facilitate identifying joints in the human body and to eliminate background interference, the image may be binarized so that the pixel value of the background is 0 and the pixel value of the human body is 1.
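A minimal sketch of this depth-based separation, assuming the depth map and face region are given as NumPy arrays (the function name and the threshold value are illustrative, not from the patent):

```python
import numpy as np

def segment_body(depth, face_mask, threshold=200):
    """Binarize a frame using depth: pixels whose depth is within `threshold`
    of the mean face depth are marked 1 (human body), the rest 0 (background).

    depth: 2-D array of per-pixel depth values.
    face_mask: boolean 2-D array marking the recognized face region.
    """
    face_mean = depth[face_mask].mean()          # average depth of face pixels
    return (np.abs(depth - face_mean) < threshold).astype(np.uint8)
```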
Further, according to the recognized position information of the face and the body and the proportional relationship between limbs and height in human anatomy, the position information of each joint of the human body can be calculated. For example, fig. 2 is a schematic diagram of the proportion of limbs to height in human anatomy according to this embodiment, listing the proportional relationship of each joint in a limb. The position of the neck joint in a video picture frame can be determined from the position information of the face and the body, giving the two-dimensional coordinates (x, y) of the neck joint. As shown in fig. 2, the height difference between the shoulder joints and the neck joint is fixed, so the row of the shoulder joints can be determined from the coordinates of the neck joint and this difference. Since the pixel value of the background is 0 and that of the human body is 1, the outermost left and right points of that row whose pixel value is 1 are the shoulder joints, which determines the two-dimensional coordinates (x1, y1) of the left shoulder joint and (x2, y2) of the right shoulder joint.
Based on the determined position of the left shoulder joint, a circle is drawn with the standard distance between the left shoulder joint and the left elbow joint in fig. 2 as its diameter. Since the pixel value of the background is 0, when the boundary pixel points whose value is 1 are recognized, the two-dimensional coordinates (x3, y3) of the left elbow joint can be determined.
Similarly, the two-dimensional coordinates of the other joints of the human body can be identified and determined. The joints of the human body at least include the neck joint, left shoulder joint, right shoulder joint, left elbow joint, right elbow joint, left wrist joint, right wrist joint, left knee joint, left ankle joint, right knee joint, right ankle joint, and so on; because there are many joints, they are not all listed here. The principle for identifying and determining the two-dimensional coordinates of the other joints is the same and is not repeated here.
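The shoulder-row search described above can be sketched as follows; this is a simplified illustration on the binarized mask, and the names and the fixed row offset are assumptions made for the example:

```python
import numpy as np

def find_shoulders(body_mask, neck_xy, shoulder_drop):
    """Locate the shoulder joints in a binarized mask (background 0, body 1).

    The shoulder row lies a fixed offset below the neck joint; the outermost
    columns of that row whose pixel value is 1 are the shoulder points.
    """
    x, y = neck_xy
    row = y + shoulder_drop
    cols = np.flatnonzero(body_mask[row])            # columns with body pixels
    return (int(cols[0]), row), (int(cols[-1]), row)  # left, right shoulder
```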
And 103, connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints.
For example, the left shoulder joint and the left elbow joint are two adjacent joints, and the left shoulder joint and the left elbow joint corresponding to the left shoulder joint and the left elbow joint when the human body acts are connected to obtain a connecting line between the left shoulder joint and the left elbow joint.
And 104, calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction.
Specifically, if the preset reference direction is the horizontal direction, the included angle between the connecting line between two adjacent joints and the horizontal direction may be calculated from the acquired position information of the two joints. For example, define the included angle as θ, and let the two-dimensional coordinates of the left shoulder joint be (x1, y1) and those of the left elbow joint be (x3, y3); then the actual included angle θ between the connecting line from the left shoulder joint to the left elbow joint and the horizontal direction can be calculated from the formula tan(θ) = (y3 − y1)/(x3 − x1). The actual included angle between any other connecting line between two adjacent joints and the horizontal direction can be calculated similarly.
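The angle computation can be written directly from the formula above; using `atan2` (rather than a bare tangent) is one way to also handle vertical lines, where x3 = x1:

```python
import math

def line_angle(joint1, joint2):
    """Actual included angle, in degrees, between the connecting line of two
    adjacent joints and the horizontal reference direction, from
    tan(theta) = (y2 - y1) / (x2 - x1)."""
    (x1, y1), (x2, y2) = joint1, joint2
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```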
And 105, determining whether the human body action is matched with the standard action or not according to the difference value between the actual included angle and the standard angle.
Specifically, the standard angle is the angle between the connecting line between each two adjacent joints and the reference direction when the standard action is executed. For the connecting line between each two adjacent joints, the difference between the user's actual included angle and the corresponding standard angle is calculated. If the difference calculated for every connecting line is within the error range, the human body action is determined to match the standard action; if the difference calculated for the connecting line between at least one pair of adjacent joints is not within the error range, the human body action is determined not to match the standard action.
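This matching rule, under which every connecting line must be within tolerance and a single out-of-range line fails the whole action, can be sketched as (angle lists and range bounds are illustrative):

```python
def matches_standard(actual_angles, standard_angles, lower, upper):
    """The action matches only if, for every connecting line, the difference
    between the actual and standard angle lies within the error range
    [lower, upper]; one out-of-range line fails the whole action."""
    return all(lower <= actual - standard <= upper
               for actual, standard in zip(actual_angles, standard_angles))
```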
It should be noted that the human body actions in all of the collected frames containing the human body action are matched against the standard action, and the smaller the difference within the error range, the higher the degree of matching between the human body action and the standard action, that is, the more closely the user imitates the standard action.
With the human body action recognition method above, when a standard action is displayed, a video picture frame of the human body action is collected; each joint of the human body is identified in the video picture frame; each pair of adjacent joints is connected to obtain the connecting line between them; the actual included angle between each connecting line and a preset reference direction is calculated; and whether the human body action matches the standard action is determined according to the difference between the actual included angle and the standard angle. By identifying adjacent joints of the human body in the video picture frame, obtaining the connecting lines between adjacent joints, calculating the actual included angles between those connecting lines and the preset reference direction, and comparing them with the standard angles, actions are recognized accurately, solving the technical problem of inaccurate action recognition in the prior art.
On the basis of the previous embodiment, this embodiment provides another human body motion recognition method, and fig. 3 is a schematic flow chart of the another human body motion recognition method provided by the embodiment of the present invention, as shown in fig. 3, the method may include:
step 301, obtaining the selected audio and the standard action corresponding to each time node in the audio, and playing the audio.
Specifically, the mobile phone presets a plurality of audios, and each time node in each audio has a corresponding standard action. The user selects an audio according to preference and plays it, and video frames containing the user are synchronously collected while the audio is playing, until the audio playback ends.
Step 302, when the audio is played to each time node, the corresponding standard action is displayed.
Specifically, when playback reaches a corresponding time node, the corresponding standard action is displayed on the video capture interface of the camera device. As one possible implementation, the corresponding standard action may be displayed in the form of a floating frame in the video capture interface; as another possible implementation, it may be scrolled across the video capture interface in the form of a bullet screen.
For example, fig. 4A is a schematic structural diagram of a standard motion provided by an embodiment of the present invention, in which the standard motion is shown at a certain time node, and the relevant joints involved in the standard motion are shown, each joint includes: the left wrist joint, the right wrist joint, the left elbow joint, the right elbow joint, the left shoulder joint and the right shoulder joint are 6 joints in total.
Step 303, collecting video picture frames of the human body action while the standard action is displayed.
Specifically, when the audio is played to a corresponding time node and the corresponding standard action is displayed, the camera device synchronously collects video picture frames of the human body action made by the user imitating the standard action. The collected video picture frames are multi-frame pictures in which the human body action corresponding to the standard action is recorded. For example, fig. 4B is a schematic structural diagram of an actual action provided by the embodiment of the present invention, and shows the actual action performed by a user when the standard action in fig. 4A is displayed.
It should be noted that the collected video frames of the human body motion are multiple frames of images, each frame of image has a corresponding human body motion, in this embodiment, one frame of image is used for illustration, and the processing methods of other frames of images are the same.
And 304, identifying each joint of the human body in the video picture frame, and obtaining a connecting line between two adjacent joints.
In the collected video frame including the motion of the human body, each joint of the human body is identified, which may specifically refer to step 102 in fig. 1, and details are not described in this embodiment.
Further, after each joint of the human body is identified from the collected human body motion, connecting lines between adjacent joints are obtained: in fig. 4B, a connecting line 1 between the right wrist joint and the right elbow joint, a connecting line 2 between the right elbow joint and the right shoulder joint, a connecting line 3 between the right shoulder joint and the left shoulder joint, a connecting line 4 between the left shoulder joint and the left elbow joint, and a connecting line 5 between the left elbow joint and the left wrist joint. For convenience of description, the motion corresponding to each connecting line is called a decomposition motion of the actual motion made by the user, and all the decomposition motions together constitute the actual motion.
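The pairing of adjacent joints into connecting lines described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the joint names and the coordinate representation are assumptions.

```python
# Hypothetical adjacency list for the 6 joints of figs. 4A/4B; the pairings
# mirror connecting lines 1-5 described in the text.
ADJACENT_PAIRS = [
    ("right_wrist", "right_elbow"),       # connecting line 1
    ("right_elbow", "right_shoulder"),    # connecting line 2
    ("right_shoulder", "left_shoulder"),  # connecting line 3
    ("left_shoulder", "left_elbow"),      # connecting line 4
    ("left_elbow", "left_wrist"),         # connecting line 5
]

def build_connecting_lines(joints):
    """joints: dict mapping joint name -> (x, y) pixel coordinate.
    Returns one (start_point, end_point) tuple per connecting line."""
    return [(joints[a], joints[b]) for a, b in ADJACENT_PAIRS]
```

Each returned point pair corresponds to one decomposition motion of the actual action.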
And step 305, calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction.
Specifically, as shown in fig. 4B, the preset reference direction is the horizontal direction of the screen, and the angle between the connection line No. 1 and the horizontal direction of the screen is calculated to be 35 degrees, the angle between the connection line No. 2 and the horizontal direction of the screen is 0 degree, the angle between the connection line No. 3 and the horizontal direction of the screen is 0 degree, the angle between the connection line No. 4 and the horizontal direction of the screen is 0 degree, and the angle between the connection line No. 5 and the horizontal direction of the screen is 130 degrees.
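The angle between a connecting line and the horizontal direction of the screen can be computed from the two joint coordinates with `atan2`. A minimal sketch, assuming image (x, y) coordinates; folding the result into [0, 180) treats a line and its reverse as the same orientation, which is one plausible convention the text does not specify.

```python
import math

def line_angle_deg(p1, p2):
    """Angle in degrees between the line p1->p2 and the screen's
    horizontal direction, folded into the range [0, 180)."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    angle = math.degrees(math.atan2(dy, dx))
    return angle % 180.0
```

For example, a perfectly horizontal connecting line yields 0 degrees, matching connecting lines 2, 3 and 4 in fig. 4B.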
Step 306, calculating the difference between the corresponding standard angle and the actual angle for the connecting line between each two adjacent joints, determining whether the human body motion is matched with the standard motion, if not, executing step 307, and if so, executing step 308.
Specifically, the standard angle is the angle between the connecting line between two adjacent joints and the reference direction when the standard motion is performed. Taking the connecting line 1 between the right wrist joint and the right elbow joint in fig. 4B as an example: the standard angle corresponding to connecting line 1 in fig. 4A is 45 degrees, the actual angle measured from the actual motion in fig. 4B is 35 degrees, and the difference is 10 degrees. Given a preset difference threshold of, for example, 15 degrees, the difference of 10 degrees is less than 15 degrees, so it can be determined that the decomposition motion corresponding to connecting line 1 matches the corresponding decomposition motion in the standard motion. Further, it is determined whether the decomposition motions corresponding to connecting line 2, connecting line 3, connecting line 4 and connecting line 5 match the decomposition motions in the standard motion. If all the decomposition motions match, the actual human motion matches the standard motion; if any decomposition motion does not match the corresponding decomposition motion in the standard motion, the actual human motion does not match the standard motion.
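The matching rule of step 306 reduces to: every decomposition motion must be within the threshold for the whole action to match. A minimal sketch under that reading; the angle lists and the 15-degree threshold come from the example above, and the function name is an assumption.

```python
def action_matches(actual_angles, standard_angles, threshold=15.0):
    """True only if every connecting line's angle difference is within
    the threshold; a single out-of-range decomposition motion fails
    the whole action."""
    return all(abs(a - s) <= threshold
               for a, s in zip(actual_angles, standard_angles))
```

With the fig. 4B angles (35, 0, 0, 0, 130) against standard angles of (45, 0, 0, 0, 130), every per-line difference is at most 10 degrees, so the action matches.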
And 307, determining that the human motion score in the evaluation information of the human motion is zero.
Specifically, if the actual human body motion and the standard motion are not matched, the score obtained by the user doing the human body motion is set to be 0.
And 308, determining a grading coefficient of the connecting line according to the corresponding difference and the error range aiming at the connecting line between each two adjacent joints.
Specifically, a score coefficient p of the connecting line is calculated according to the formula p = 1 - 2Δ/(a - b), where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference. Taking the connecting line 1 between the right wrist joint and the right elbow joint in fig. 4B as an example, the corresponding difference is 10 degrees. If, for example, the upper limit of the error range is positive 50 degrees and the lower limit is negative 50 degrees, then p = 1 - 2 x 10/(50 - (-50)) = 0.8 according to the formula, i.e., the score coefficient of connecting line 1 is 0.8.
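The score-coefficient formula can be checked numerically. A minimal sketch; the default bounds of +50/-50 degrees are the example's error range, not fixed by the method.

```python
def score_coefficient(delta, a=50.0, b=-50.0):
    """p = 1 - 2*delta/(a - b), per the formula above.
    delta: angle difference in degrees; a, b: upper and lower limits
    of the error range (example values, assumed defaults)."""
    return 1.0 - 2.0 * delta / (a - b)
```

Plugging in the example values (delta = 10, a = 50, b = -50) reproduces the coefficient 0.8 for connecting line 1.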
Further, in the same way, the scoring coefficient of the connecting line 2 is 1, the scoring coefficient of the connecting line 3 is 1, the scoring coefficient of the connecting line 4 is 1, and the scoring coefficient of the connecting line 5 is 0.9.
Step 309, generating evaluation information of the connection line according to the grading coefficient of the connection line and the score corresponding to the connection line, and further generating evaluation information of the human body action.
Specifically, the evaluation information of a connecting line includes the score of its decomposition action, which is the product of the score coefficient of the connecting line and the full score corresponding to the connecting line. As shown in fig. 4B, the total score of the action is 100 and there are 5 decomposition actions, so the full score of each decomposition action is 20 points. The full score of the decomposition action corresponding to connecting line 1 is 20 points and its score coefficient is 0.8, so its score is 16 points, from which the evaluation information of connecting line 1 is generated. Similarly, the score of the decomposition action corresponding to connecting line 2 contained in the evaluation information of connecting line 2 is 20 points, that of connecting line 3 is 20 points, that of connecting line 4 is 20 points, and that of connecting line 5 is 18 points. Summing the scores of the decomposition actions corresponding to the connecting lines gives a human body action score of 94 points, thereby obtaining the evaluation information of the human body action.
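The aggregation into a total human-action score can be sketched as follows, assuming (as in the example) that the total score is split equally across the connecting lines.

```python
def human_action_score(coefficients, total=100.0):
    """Each connecting line gets an equal share of the total score;
    a decomposition action's score is its share times its coefficient,
    and the human action score is the sum of all of them."""
    per_line = total / len(coefficients)
    return sum(per_line * p for p in coefficients)
```

With the example coefficients (0.8, 1, 1, 1, 0.9) and a total of 100, each line is worth 20 points and the sum 16 + 20 + 20 + 20 + 18 reproduces the 94-point score above.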
Further, the video picture frames of the other human body actions are processed according to the above method to obtain evaluation information of the human body actions in different video picture frames. As one possible implementation, when the action score in the evaluation information of a human body action exceeds a threshold score, such as 60 points, the corresponding multiple video picture frames of that human body action are used as the video frames that display that single action score in the generated video; that is, the score information of the corresponding action is added to the multiple video frames, so that the score is displayed long enough for the user to see the specific score information.
And step 310, when the audio playing is finished, obtaining evaluation information of each human body action and generating a target video.
Specifically, when the audio playback is finished, evaluation information is obtained for each human body action corresponding to the standard actions displayed at the different time nodes. The evaluation information of a human body action indicates the degree of difference between the human body action and the corresponding standard action: the higher the action score in the evaluation information, the smaller the difference between the human body action and the corresponding standard action, and conversely, the larger the difference.
Furthermore, a target video is generated from the audio, the obtained video picture frames and the corresponding action evaluation information of the human body actions. When the target video is played back, each human body action is shown with its corresponding score, so that the user can learn how each action was scored; this helps the user improve the actions and also provides a good user experience.
In the method for identifying the human body action, when the standard action is displayed, a video picture frame of the human body action is collected, a connecting line of adjacent joints is obtained by identifying adjacent joints of the human body in the video picture frame of the human body, an actual included angle between the connecting line of the adjacent joints and a preset reference direction is calculated, and whether the human body action is matched with the standard action is determined according to a difference value between the actual included angle and the standard angle, so that the accurate identification of the action is realized, and the technical problem of inaccurate action identification in the prior art is solved. Meanwhile, the human body actions in the collected video frame can be scored to obtain action evaluation information, the difference degree between the human body actions and the standard actions is indicated, and the user can know and correct the human body actions during playback by generating the target video, so that the actions during next video recording are more standard.
In order to realize the embodiment, the invention further provides a human body action recognition device.
Fig. 5 is a schematic structural diagram of a human body motion recognition device according to an embodiment of the present invention.
As shown in fig. 5, the apparatus includes: an acquisition module 51, an identification module 52, a connection module 53, a calculation module 54 and a determination module 55.
And the acquisition module 51 is used for acquiring video picture frames of the human body action when the standard action is displayed.
And the identification module 52 is configured to identify each joint of the human body in the video frame.
The connecting module 53 is used for connecting two adjacent joints of each joint of the human body to obtain a connecting line between the two adjacent joints.
And the calculating module 54 is configured to calculate an actual included angle between a connecting line between two adjacent joints and the preset reference direction.
And the determining module 55 is configured to determine whether the human body motion matches a standard motion according to a difference between the actual included angle and the standard angle, where the standard angle is an angle between a connection line between each two adjacent joints and the reference direction when the standard motion is executed.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
In the human body motion recognition device, the acquisition module is used for acquiring a video picture frame of human body motion when standard motion is displayed, the recognition module is used for recognizing each joint of a human body in the video picture frame, the connection module is used for connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints, the calculation module is used for calculating an actual included angle between the connecting line between the two adjacent joints and a preset reference direction, and the determination module is used for determining whether the human body motion is matched with the standard motion according to a difference value between the actual included angle and the standard angle. The method comprises the steps of obtaining a connecting line of adjacent joints by identifying the adjacent joints of a human body in a human body video picture frame, calculating an actual included angle between the connecting line of the adjacent joints and a preset reference direction, and determining whether human body actions are matched with standard actions according to a difference value between the actual included angle and a standard angle, so as to realize accurate identification of the actions and solve the technical problem of inaccurate action identification in the prior art.
Based on the foregoing embodiment, the embodiment of the present invention further provides a possible implementation manner of another human body motion recognition apparatus, fig. 6 is a schematic structural diagram of another human body motion recognition apparatus provided in the embodiment of the present invention, and on the basis of the foregoing embodiment, the determining module 55 may further include: a calculation unit 551, a determination unit 552, a first scoring unit 553, and a second scoring unit 554.
And the calculating unit 551 is used for calculating the difference value between the corresponding standard angle and the actual angle for the connecting line between each two adjacent joints.
The determining unit 552 is configured to determine that the human body motion matches the standard motion if the difference calculated by the connection line between each two adjacent joints is within the error range; and if the difference calculated by the connecting line between at least one adjacent two joints is not within the error range, determining that the human body action is not matched with the standard action.
As a possible implementation manner of the embodiment of the present invention, if the determining unit 552 determines that the human body action matches the standard action, the first scoring unit 553 is specifically configured to:
and determining a grading coefficient of the connecting line according to the corresponding difference and the error range of the connecting line for each connecting line between two adjacent joints, generating evaluation information of the connecting line according to the grading coefficient of the connecting line and the value corresponding to the connecting line, wherein the evaluation information of the connecting line comprises a decomposition action value which is the product of the grading coefficient of the connecting line and the value corresponding to the connecting line, and generating the evaluation information of the human body action according to the evaluation information of the connecting line between two adjacent joints, wherein the evaluation information of the human body action comprises the human body action value, and the human body action value is the sum of the decomposition action values.
As another possible implementation manner of the embodiment of the present invention, if the determining unit 552 determines that the human body action does not match the standard action, the second scoring unit 554 is specifically configured to:
and determining that the human motion score in the evaluation information of the human motion is zero.
As a possible implementation manner of this embodiment, the apparatus may further include: a selection module 56, a playing module 57, a presentation module 58 and a generation module 59.
And the selecting module 56 is used for acquiring the selected audio and the standard action corresponding to each time node in the audio.
And a playing module 57, configured to play audio.
The presentation module 58 is configured to present the corresponding standard action when the audio is played to each time node.
And the generating module 59 is used for acquiring evaluation information of each human body action when the audio playing is finished, wherein the evaluation information of the human body action is used for indicating the difference degree between the human body action and the corresponding standard action, and generating the target video according to the audio, each video picture frame and the action evaluation information of each human body action.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
In the human body action recognition device provided by the embodiment of the invention, when standard actions are displayed, video picture frames of the human body actions are collected, the connecting line of adjacent joints is obtained by recognizing adjacent joints of the human body in the human body video picture frames, the actual included angle between the connecting line of the adjacent joints and a preset reference direction is calculated, and whether the human body actions are matched with the standard actions is determined according to the difference value between the actual included angle and the standard angle, so that the accurate recognition of the actions is realized, and the technical problem of inaccurate action recognition in the prior art is solved. Meanwhile, the human body actions in the collected video frame can be scored to obtain action evaluation information, the difference degree between the human body actions and the standard actions is indicated, and the user can know and correct the human body actions during playback by generating the target video, so that the actions during next video recording are more standard.
To achieve the foregoing embodiments, an embodiment of the present invention further provides an electronic device, and fig. 7 is a schematic structural diagram of an embodiment of the electronic device of the present invention, as shown in fig. 7, the electronic device includes: the device comprises a shell 71, a processor 72, a memory 73, a circuit board 74 and a power circuit 75, wherein the circuit board 74 is arranged inside a space enclosed by the shell 71, and the processor 72 and the memory 73 are arranged on the circuit board 74; a power supply circuit 75 for supplying power to each circuit or device of the electronic apparatus; the memory 73 is used to store executable program code; the processor 72 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 73, for executing the human body motion recognition method described in the foregoing method embodiments.
For the specific execution process of the above steps by the processor 72 and the steps further executed by the processor 72 by running the executable program code, reference may be made to the description of the embodiment shown in fig. 1 to 3 of the present invention, which is not described herein again.
The electronic device exists in a variety of forms, including but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communications capabilities and are primarily targeted at providing voice and data communications. Such terminals include: smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones, among others.
(2) Ultra mobile personal computer device: this equipment belongs to the category of personal computers, has computing and processing functions, and generally has mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as iPads.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(5) And other electronic equipment with data interaction function.
In order to implement the above embodiments, the embodiment of the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the human body motion recognition method described in the above method embodiments.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A human body action recognition method is characterized by comprising the following steps:
acquiring a selected audio and standard actions corresponding to time nodes in the audio;
playing the audio, and displaying corresponding standard actions when the audio is played to each time node;
when standard actions are displayed, video picture frames of human body actions are collected;
identifying each joint of the human body in the video picture frame;
connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints;
calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction;
determining whether the human body action is matched with the standard action according to the difference value between the actual included angle and the standard angle, wherein the determining step comprises the following steps: calculating the difference value between the corresponding standard angle and the actual angle according to the connecting line between each two adjacent joints;
if the difference calculated by the connecting line between each two adjacent joints is within the error range, determining that the human body action is matched with the standard action;
aiming at the connecting line between each two adjacent joints, determining a scoring coefficient of the connecting line according to the corresponding difference and the error range;
generating evaluation information of the connecting line according to the grading coefficient of the connecting line and the score corresponding to the connecting line; the evaluation information of the connecting line comprises a decomposition action score which is the product of a score coefficient of the connecting line and a score corresponding to the connecting line;
generating evaluation information of the human body action according to the evaluation information of the connecting line between each two adjacent joints; the evaluation information of the human body actions comprises human body action scores, and the human body action scores are the sum of all decomposition action scores;
the standard angle is an angle between a connecting line between every two adjacent joints and the reference direction when the standard action is executed;
when the audio playing is finished, acquiring evaluation information of each human body action corresponding to the standard action displayed at different time nodes;
and generating a target video according to the audio, each video picture frame and the action evaluation information of each human action, wherein the corresponding human action scores of the human actions are added in the video picture frames, and the video picture frames of the human actions of which the human action scores exceed threshold scores in the action evaluation information of the human actions are taken as video frames for displaying single human action scores in the generated target video.
2. The identification method according to claim 1, wherein the determining whether the human body motion matches a standard motion according to the difference between the actual included angle and a standard angle further comprises:
and if the difference calculated by the connecting line between at least one adjacent two joints is not within the error range, determining that the human body action is not matched with the standard action.
3. The method of claim 1, wherein determining the scoring coefficients for the connected line based on the corresponding difference and the error range comprises:
calculating a grading coefficient p of the connecting line by adopting the formula p = 1 - 2Δ/(a - b); wherein b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
4. The identification method according to claim 2, wherein after determining that the human body action does not match the standard action, further comprising:
and determining that the human motion score in the evaluation information of the human motion is zero.
5. An apparatus for recognizing a human body motion, the apparatus comprising:
the selection module is used for acquiring the selected audio and the standard action corresponding to each time node in the audio;
the display module is used for playing the audio and displaying the corresponding standard action when the audio is played to each time node;
the acquisition module is used for acquiring video picture frames of human body actions when the standard actions are displayed;
the identification module is used for identifying each joint of the human body in the video picture frame;
the connecting module is used for connecting two adjacent joints in each joint of the human body to obtain a connecting line between the two adjacent joints;
the calculation module is used for calculating an actual included angle between a connecting line between two adjacent joints and a preset reference direction;
the determining module is used for determining whether the human body action is matched with the standard action according to the difference value between the actual included angle and the standard angle; the standard angle is an angle between a connecting line between every two adjacent joints and the reference direction when the standard action is executed;
the determining module includes:
the calculating unit is used for calculating the difference value between the corresponding standard angle and the actual angle according to the connecting line between each two adjacent joints;
the determining unit is used for determining that the human body action is matched with the standard action if the difference calculated by the connecting line between each two adjacent joints is within an error range;
the first scoring unit is used for determining a scoring coefficient of each connecting line between two adjacent joints according to the corresponding difference and the error range; generating evaluation information of the connecting line according to the scoring coefficient of the connecting line and the score corresponding to the connecting line; the evaluation information of the connecting line comprises a decomposed action score, which is the product of the scoring coefficient of the connecting line and the score corresponding to the connecting line; generating evaluation information of the human body action according to the evaluation information of the connecting line between each pair of adjacent joints; the evaluation information of the human body action comprises a human body action score, which is the sum of all the decomposed action scores;
the generating module is used for acquiring evaluation information of each human body action when the audio playing is finished; the evaluation information of the human body action is used for indicating the degree of difference between the human body action and the corresponding standard action; and generating a target video according to the audio, each video picture frame and the evaluation information of each human body action, wherein the corresponding human body action score is added to the video picture frames of each human body action, and the video picture frames of human body actions whose human body action score exceeds a threshold score are taken, in the generated target video, as video frames for displaying a single human body action score.
6. The identification device according to claim 5, wherein the determination unit is further configured to determine that the human body motion does not match the standard motion if the difference calculated for the connecting line between at least one pair of adjacent joints is not within the error range.
7. The identification device of claim 5, wherein the first scoring unit is specifically configured to:
calculating the scoring coefficient p of the connecting line using the formula p = 1 − [2δ/(a − b)]; wherein b is the lower limit of the error range, a is the upper limit of the error range, and δ is the difference.
8. The identification device of claim 6, wherein the determining module further comprises:
and the second scoring unit is used for determining that the human action score in the evaluation information of the human action is zero.
9. An electronic device, comprising: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for performing the human body motion recognition method of any one of claims 1 to 4.
10. A non-transitory computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method for recognizing human body motion according to any one of claims 1 to 4.
CN201711182909.3A 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment Active CN107943291B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711182909.3A CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment
PCT/CN2018/098598 WO2019100754A1 (en) 2017-11-23 2018-08-03 Human body movement identification method and device, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711182909.3A CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107943291A CN107943291A (en) 2018-04-20
CN107943291B true CN107943291B (en) 2021-06-08

Family

ID=61930056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711182909.3A Active CN107943291B (en) 2017-11-23 2017-11-23 Human body action recognition method and device and electronic equipment

Country Status (2)

Country Link
CN (1) CN107943291B (en)
WO (1) WO2019100754A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107943291B (en) * 2017-11-23 2021-06-08 卓米私人有限公司 Human body action recognition method and device and electronic equipment
CN108875687A (en) * 2018-06-28 2018-11-23 泰康保险集团股份有限公司 A kind of appraisal procedure and device of nursing quality
CN109432753B (en) * 2018-09-26 2020-12-29 Oppo广东移动通信有限公司 Action correcting method, device, storage medium and electronic equipment
CN111107279B (en) * 2018-10-26 2021-06-29 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111105345B (en) * 2018-10-26 2021-11-09 北京微播视界科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109462776B (en) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 Video special effect adding method and device, terminal equipment and storage medium
CN109621332A (en) * 2018-12-29 2019-04-16 北京卡路里信息技术有限公司 A kind of attribute determining method, device, equipment and the storage medium of body-building movement
WO2021032092A1 (en) 2019-08-18 2021-02-25 聚好看科技股份有限公司 Display device
CN112399234B (en) * 2019-08-18 2022-12-16 聚好看科技股份有限公司 Interface display method and display equipment
CN110728181B (en) * 2019-09-04 2022-07-12 北京奇艺世纪科技有限公司 Behavior evaluation method and apparatus, computer device, and storage medium
CN111158486B (en) * 2019-12-31 2023-12-05 恒信东方文化股份有限公司 Method and system for identifying singing jump program action
CN112487940B (en) * 2020-11-26 2023-02-28 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and device
CN112998700B (en) * 2021-05-26 2021-09-24 北京欧应信息技术有限公司 Apparatus, system and method for assisting assessment of a motor function of an object

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP5294983B2 (en) * 2009-05-21 2013-09-18 Kddi株式会社 Portable terminal, program and method for determining direction of travel of pedestrian using acceleration sensor and geomagnetic sensor
CN105278685B (en) * 2015-09-30 2018-12-21 陕西科技大学 A kind of assisted teaching system and teaching method based on EON
CN107943291B (en) * 2017-11-23 2021-06-08 卓米私人有限公司 Human body action recognition method and device and electronic equipment

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN105307017A (en) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Method and device for correcting posture of smart television user

Also Published As

Publication number Publication date
CN107943291A (en) 2018-04-20
WO2019100754A1 (en) 2019-05-31

Similar Documents

Publication Publication Date Title
CN107943291B (en) Human body action recognition method and device and electronic equipment
CN107968921B (en) Video generation method and device and electronic equipment
CN107952238B (en) Video generation method and device and electronic equipment
CN109068053B (en) Image special effect display method and device and electronic equipment
WO2019100757A1 (en) Video generation method and device, and electronic apparatus
US9495800B2 (en) Storage medium having stored thereon image processing program, image processing apparatus, image processing system, and image processing method
EP2839428B1 (en) Method of displaying multimedia exercise content based on exercise amount and multimedia apparatus applying the same
WO2019100756A1 (en) Image acquisition method and apparatus, and electronic device
US9071808B2 (en) Storage medium having stored information processing program therein, information processing apparatus, information processing method, and information processing system
US8414393B2 (en) Game device, control method for a game device, and a non-transitory information storage medium
US7828659B2 (en) Game device, control method of computer, and information storage medium
US9064335B2 (en) System, method, device and computer-readable medium recording information processing program for superimposing information
US20110305398A1 (en) Image generation system, shape recognition method, and information storage medium
CN104243951A (en) Image processing device, image processing system and image processing method
US8520901B2 (en) Image generation system, image generation method, and information storage medium
US9776088B2 (en) Apparatus and method of user interaction
JP2012115539A (en) Game device, control method therefor, and program
US20130215112A1 (en) Stereoscopic Image Processor, Stereoscopic Image Interaction System, and Stereoscopic Image Displaying Method thereof
JP2010220857A (en) Program, information storage medium, and game device
US9724613B2 (en) Game device, control method of game device, program, and information storage medium
US10086283B2 (en) Motion scoring method and apparatus
KR20170078176A (en) Apparatus for presenting game based on action recognition, method thereof and computer recordable medium storing the method
US20130176302A1 (en) Virtual space moving apparatus and method
CN106390454A (en) Reality scene virtual game system
EP2513842B1 (en) Locating camera relative to a display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190626

Address after: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China

Applicant after: Hong Kong Lemi Co., Ltd.

Address before: Cayman Islands, Greater Cayman Island, Kamana Bay, Casia District, Seitus Chamber of Commerce, 2547

Applicant before: Happy honey Company Limited

TA01 Transfer of patent application right

Effective date of registration: 20210524

Address after: 25, 5th floor, shuangjingfang office building, 3 frisha street, Singapore

Applicant after: Zhuomi Private Ltd.

Address before: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China

Applicant before: HONG KONG LIVE.ME Corp., Ltd.

GR01 Patent grant