WO2019100754A1 - Method, apparatus and electronic device for recognizing human body motion - Google Patents


Info

Publication number
WO2019100754A1
Authority
WO
WIPO (PCT)
Prior art keywords
human body
motion
standard
connection
action
Prior art date
Application number
PCT/CN2018/098598
Other languages
English (en)
French (fr)
Inventor
叶进
严程
李震
方醒
郭宏财
张迎春
李红成
Original Assignee
乐蜜有限公司
叶进
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐蜜有限公司, 叶进
Publication of WO2019100754A1 publication Critical patent/WO2019100754A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present application relates to the field of mobile terminal technologies, and in particular, to a method, an apparatus, and an electronic device for recognizing a human body motion.
  • Somatosensory games achieve human-computer interaction through an Internet operation platform: the player holds a dedicated game controller, and the movements of the characters in the game are controlled by recognizing the movements of the player's body, allowing the player to put the "whole body" into the game and enjoy a new somatosensory interaction experience.
  • At present, somatosensory game technology is mainly applied to computers and game consoles, which are poorly portable; moreover, the user's body motion is judged by determining the position of the hand-held controller, which makes the judgment of the correctness of the body motion inaccurate.
  • the present application aims to solve at least one of the technical problems in the related art to some extent.
  • The first object of the present application is to propose a method for recognizing a human body motion: adjacent joints of the human body are identified in a video frame, the connection lines between adjacent joints are obtained, the actual angle between each connection line and a preset reference direction is calculated, and whether the human body motion matches the standard motion is determined according to the difference between the actual angle and the standard angle, so as to achieve accurate recognition of the motion and solve the technical problem of inaccurate motion recognition in the prior art.
  • a second object of the present application is to provide an apparatus for recognizing human motion.
  • a third object of the present application is to propose an electronic device.
  • a fourth object of the present application is to propose a non-transitory computer readable storage medium.
  • a fifth object of the present application is to propose a computer program product.
  • The first aspect of the present application provides a method for recognizing a human body motion, including:
  • the determining, according to the difference between the actual angle and the standard angle, whether the human motion is matched with a standard motion includes:
  • the method further includes:
  • The evaluation information of a connection includes a decomposition action score, where the decomposition action score is the product of the connection's scoring coefficient and the score corresponding to the connection; the evaluation information of the human body motion includes a human action score, which is the sum of the decomposition action scores.
  • the determining, according to the corresponding difference value and the error range, the scoring coefficient of the connection including:
  • The scoring coefficient p of the connection is calculated, where b is the lower limit of the error range, a is the upper limit of the error range, and δ is the difference.
  • the method further includes:
  • the human action score is determined to be zero in the evaluation information of the human body motion.
  • Before the video frame of the human body motion is displayed, the method further includes:
  • the corresponding standard action is displayed when the audio is played to each time node.
  • the method further includes:
  • A target video is generated based on the audio, each video frame, and the motion evaluation information of each human body motion.
  • In the embodiments of the present application, a video frame of a human body motion is collected while a standard action is displayed; in the video frame, each joint of the human body is recognized, and every two adjacent joints are connected to obtain the connection line between them. The actual angle between each connection line and the preset reference direction is calculated, and whether the human body motion matches the standard motion is determined according to the difference between the actual angle and the standard angle.
  • By obtaining the connection lines of adjacent joints, calculating the actual angles against the preset reference direction, and judging the match according to the difference between the actual angle and the standard angle, accurate recognition of the motion is achieved, solving the technical problem of inaccurate motion recognition in the prior art.
  • the second aspect of the present application provides an apparatus for recognizing a human body motion, including:
  • an acquisition module configured to collect a video frame of a human body motion when a standard action is displayed;
  • an identification module configured to identify each joint of the human body in the video frame;
  • a connecting module configured to connect every two adjacent joints of the human body to obtain the connection line between them;
  • a calculation module configured to calculate the actual angle between each connection line and a preset reference direction; and
  • a determining module configured to determine, according to the difference between the actual angle and a standard angle, whether the human body motion matches the standard action, where the standard angle is the angle between each connection line of adjacent joints and the reference direction when the standard action is performed.
  • the determining module includes:
  • a calculating unit configured to calculate, according to a connection between each adjacent two joints, a difference between the corresponding standard angle and the actual angle
  • a determining unit configured to determine that the human body motion matches the standard motion if the difference calculated for every connection between two adjacent joints is within the error range, and to determine that the human body motion does not match the standard motion if the difference calculated for at least one connection is not within the error range.
  • the determining module further includes:
  • a first scoring unit configured to determine, for each connection between two adjacent joints, the scoring coefficient of the connection according to the corresponding difference and the error range, and to generate the evaluation information of the connection according to the scoring coefficient and the score corresponding to the connection;
  • the evaluation information of the connection includes a decomposition action score, which is the product of the connection's scoring coefficient and the score corresponding to the connection;
  • the evaluation information of the human body motion is generated according to the evaluation information of the connections between adjacent joints, where the evaluation information of the human body motion includes a human action score, which is the sum of the decomposition action scores.
  • the first scoring unit is specifically configured to:
  • The scoring coefficient p of the connection is calculated, where b is the lower limit of the error range, a is the upper limit of the error range, and δ is the difference.
  • the determining module further includes:
  • the second scoring unit is configured to determine that the human action score in the evaluation information of the human body motion is zero.
  • the device further includes:
  • a selection module configured to acquire selected audio, and standard actions corresponding to each time node in the audio
  • a playing module configured to play the audio
  • a display module is configured to display a corresponding standard action when the audio is played to each time node.
  • the device further includes:
  • a generating module configured to acquire, when the audio playback ends, the evaluation information of each human body motion, where the evaluation information of a human body motion indicates the degree of difference between the human body motion and the corresponding standard action, and to generate a target video according to the audio, the video frames, and the motion evaluation information of each human body motion.
  • In the apparatus, the acquisition module collects a video frame of the human body motion when the standard action is displayed; the identification module identifies each joint of the human body in the video frame; the connection module connects every two adjacent joints to obtain the connection line between them; the calculation module calculates the actual angle between each connection line and the preset reference direction; and the determining module determines whether the human body motion matches the standard action according to the difference between the actual angle and the standard angle.
  • By obtaining the connection lines of adjacent joints, calculating the actual angles against the preset reference direction, and judging the match according to the difference between the actual angle and the standard angle, accurate recognition of the motion is achieved, solving the problem that motion recognition in the prior art is inaccurate.
  • An embodiment of the third aspect of the present application provides an electronic device including a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit supplies power to each circuit or device of the electronic device; the memory stores executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the method of recognizing human body motion described in the first aspect.
  • The fourth aspect of the present application provides a non-transitory computer readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the method of recognizing human body motion described in the first aspect.
  • The fifth aspect of the present application further provides a computer program product; when the instructions in the computer program product are executed by a processor, the method of recognizing human body motion described in the first aspect is implemented.
  • FIG. 1 is a schematic flow chart of a method for recognizing a human body motion according to an embodiment of the present application
  • FIG. 2 is a schematic view showing the ratio of limb to height in human anatomy according to the embodiment
  • FIG. 3 is a schematic flowchart diagram of another method for identifying a human body motion according to an embodiment of the present application
  • FIG. 4A is a schematic structural diagram of a standard action provided by an embodiment of the present application.
  • FIG. 4B is a schematic structural diagram of an actual action provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a human body motion recognition device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another apparatus for recognizing human body motion according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
  • The electronic device in this embodiment may be a mobile phone; those skilled in the art will appreciate that the electronic device may also be another mobile terminal, which may recognize human body motion with reference to the solution provided in this embodiment.
  • Below, a mobile phone is taken as an example of the electronic device to explain the method of recognizing a human body motion.
  • FIG. 1 is a schematic flowchart of a method for recognizing a human body motion according to an embodiment of the present application. As shown in FIG. 1 , the method includes the following steps:
  • Step 101 Collect a video frame of the human motion when displaying the standard action.
  • the mobile application is opened, and the video capture interface is entered.
  • An audio selection interface can be accessed before entering the video capture interface, where the user can pick audio from a drop-down menu.
  • Each time node has a corresponding standard action.
  • After the audio is confirmed via the confirmation button, the video capture interface is entered and collection of video frames begins.
  • The corresponding standard action is displayed at the corresponding time node; while the standard action is displayed, the user performs the same action synchronously, and the camera device collects the video frames in which the user performs that human body motion.
  • The synchronously collected video containing the human motion spans multiple frames.
  • The time point at which the standard action is displayed can be used as the time reference, and N frames are collected from that point onward to capture the human body motion.
  • the value of N can be determined by a person skilled in the art according to the actual application.
  • Video frames containing human motion can be continuously acquired during the entire audio playback.
  • Step 102: Identify the joints of the human body in the video frame.
  • The image capturing device used to collect the body video frames may be one capable of collecting depth information, with the human body in the image recognized from the acquired depth information, such as a dual camera or an RGBD (Red-Green-Blue Depth) camera that acquires depth information while imaging; depth information can also be acquired through structured-light or TOF lenses, which are not listed one by one here.
  • The face region and its position information in the image are recognized by face recognition technology, thereby obtaining the pixels of the face region and their corresponding depth information, and the average of the depth information of the face pixels is calculated.
  • Since the human body and the human face lie substantially on the same imaging plane, pixels whose depth differs from that average by less than a threshold are recognized as the human body, so the contour of the human body can be recognized.
  • The depth information and position information of each pixel within the human body and its contour are thereby determined, separating the human body from the background.
  • The image may then be binarized so that background pixels have the value 0 and human body pixels have the value 1.
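As an illustrative sketch only (not taken from the patent), the depth-based segmentation and binarization described above could look like the following, assuming a per-pixel depth map and a face bounding box from a detector are available; `segment_body` and the `depth_tol` threshold are hypothetical names and parameters.

```python
import numpy as np

def segment_body(depth, face_box, depth_tol=300):
    """Binarize a depth map into body (1) / background (0).

    depth:     HxW array of per-pixel depth values (e.g. millimetres)
    face_box:  (top, bottom, left, right) rows/cols from a face detector
    depth_tol: hypothetical threshold on the depth difference, in the
               same units as `depth`
    """
    t, b, l, r = face_box
    # Average depth over the face region; the body is assumed to lie
    # roughly on the same imaging plane as the face.
    face_depth = depth[t:b, l:r].mean()
    # Pixels whose depth is close to the face depth are classified as
    # body (value 1); everything else becomes background (value 0).
    return (np.abs(depth - face_depth) < depth_tol).astype(np.uint8)
```

For example, with a synthetic scene where the background sits at 4000 mm and the person at 1500 mm, the returned mask is 1 exactly over the person's pixels.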
  • FIG. 2 is a schematic diagram of the ratio of limbs to height in human anatomy provided in this embodiment; it lists the proportional relationship of each joint in the limbs. From the position information of the face and the human body in the video frame,
  • the position information of the neck joint can be determined, giving the two-dimensional coordinates (x, y) of the neck joint. As shown in FIG. 2, the difference between the height of the shoulder joints and the height of the neck joint is fixed,
  • so the row on which the shoulder joints lie can be determined.
  • Since background pixels have the value 0 and human body pixels have the value 1, the points where the pixel value changes at the left and right edges of that row correspond to the shoulder joints, which determines the two-dimensional coordinates of the left shoulder joint (x1, y1) and of the right shoulder joint (x2, y2).
  • Taking the standard distance between the left shoulder joint and the left elbow joint in FIG. 2 as the diameter, a circle is drawn; since background pixels have the value 0, the two-dimensional coordinates (x3, y3) of the left elbow joint can be determined by locating the body pixels on that circle.
  • The two-dimensional coordinates of the other joints of the human body can be identified and determined in the same way. The joints of the human body include at least: the neck joint, left shoulder joint, right shoulder joint, left elbow joint, right elbow joint, left wrist joint, right wrist joint, left knee joint, left ankle joint, right knee joint, right ankle joint, and so on. Since there are many joints, they are not listed one by one here; the method of identifying and determining their two-dimensional coordinates is the same in principle and is not repeated.
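A minimal sketch of one such step, locating the shoulder joints on the binarized mask, might look like this; it assumes the neck row and the fixed neck-to-shoulder row offset (from the FIG. 2 proportions) are already known, and the function name and parameters are illustrative.

```python
import numpy as np

def locate_shoulders(mask, neck_y, shoulder_offset):
    """Find left/right shoulder joints on a binary body mask.

    mask:            HxW array, body pixels 1, background 0
    neck_y:          row index of the neck joint
    shoulder_offset: fixed row distance from neck to the shoulder line,
                     taken from anatomical proportions (FIG. 2)
    """
    row = neck_y + shoulder_offset
    xs = np.flatnonzero(mask[row])  # column indices of body pixels on that row
    if xs.size == 0:
        return None
    # The left and right edges of the body on the shoulder row are taken
    # as the two shoulder joints, as (x, y) coordinates.
    return (int(xs[0]), row), (int(xs[-1]), row)
```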
  • Step 103 Connect two adjacent joints in each joint of the human body to obtain a connection between two adjacent joints.
  • For example, the left shoulder joint and the left elbow joint are two adjacent joints; when the human body moves, the left shoulder joint and the left elbow joint are connected, obtaining the connection line between the left shoulder joint and the left elbow joint.
  • Step 104 Calculate the actual angle between the connection between the adjacent two joints and the preset reference direction.
  • As a possible implementation, the angle between the connection line of two adjacent joints and the horizontal direction can be calculated. For example, define the angle as α, let the two-dimensional coordinates of the left shoulder joint be (x1, y1) and those of the left elbow joint be (x3, y3); the actual angle α between the line connecting the left shoulder joint and the left elbow joint and the horizontal direction can then be calculated from the coordinates, and likewise the actual angles between the lines connecting the other adjacent joints and the horizontal direction.
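The angle computation in this step reduces to an arctangent of the slope between the two joint coordinates. A sketch (illustrative only, with the screen's horizontal direction as the reference):

```python
import math

def line_angle(joint_a, joint_b):
    """Angle, in degrees within [0, 180), between the line through two
    joints and the horizontal reference direction.

    Coordinates are image pixels; note that whether the image y-axis
    points up or down only flips the sign, which the modulo absorbs.
    """
    (x1, y1), (x2, y2) = joint_a, joint_b
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return angle % 180.0  # a line's direction is unsigned
```

For instance, with the left shoulder at (x1, y1) and the left elbow at (x3, y3), `line_angle((x1, y1), (x3, y3))` yields the actual angle α; swapping the endpoints gives the same result.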
  • Step 105 Determine whether the human body motion matches the standard motion according to the difference between the actual angle and the standard angle.
  • The standard angle is the angle between the line connecting each pair of adjacent joints and the reference direction when the standard action is performed. For the connection between each pair of adjacent joints, the difference between the actual angle and the corresponding standard angle is calculated when the user performs the action.
  • If the difference calculated for every connection between two adjacent joints is within the error range, it is determined that the human body action matches the standard action; if the difference calculated for at least one connection is not within the error range, it is determined that the human body action does not match the standard action.
  • The human body motion in each of the collected video frames containing human motion is matched against the standard action in this way; the smaller the difference within the error range, the higher the degree of matching between the human motion and the standard action, that is, the more faithfully the user imitates the standard action.
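The all-connections-within-range rule can be sketched as follows; the 15-degree tolerance is a hypothetical value consistent with the worked example later in the text, not a parameter the patent fixes.

```python
def matches_standard(actual_angles, standard_angles, tolerance=15.0):
    """A human body action matches the standard action only if the
    per-connection angle difference is within the error range for
    EVERY connection between adjacent joints."""
    return all(abs(actual - standard) <= tolerance
               for actual, standard in zip(actual_angles, standard_angles))
```

With the five connection angles of the later example (actual 35, 0, 0, 0, 130 degrees against standard 45, 0, 0, 0, 135 degrees), every difference is at most 10 degrees, so the action matches; a single out-of-range connection would make the whole action a non-match.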
  • In this embodiment, a video frame of a human body motion is collected; in the video frame, each joint of the human body is recognized, and every two adjacent joints are connected to obtain the connection line between them. The actual angle between each connection line and the preset reference direction is calculated, and whether the human body action matches the standard action is determined according to the difference between the actual angle and the standard angle.
  • Accurate recognition of the action is thereby achieved, solving the technical problem of inaccurate motion recognition in the prior art.
  • FIG. 3 is a schematic flowchart of another human body motion recognition method according to an embodiment of the present application. As shown in FIG. 3, the method can include:
  • Step 301 Acquire selected audio, and standard actions corresponding to each time node in the audio, and play the audio.
  • The mobile phone presets a plurality of audio tracks, and each time node in each audio track has a corresponding standard action. The user selects an audio track according to preference and plays it; while the audio is playing, the video frames containing the user are collected simultaneously until the audio playback ends.
  • Step 302: When the audio is played to each time node, display the corresponding standard action.
  • When playback reaches a time node, the corresponding standard action is displayed on the video capture interface of the camera. As a possible implementation, the standard action can be displayed in the form of a floating frame in the video capture interface; as another possible implementation, it can be scrolled across the video capture interface in the form of a barrage (bullet comment).
  • FIG. 4A is a schematic structural diagram of a standard action provided by an embodiment of the present application.
  • The figure shows a standard action displayed at a certain time node and the related joints involved in the standard action, which include the left wrist joint, right wrist joint, left elbow joint, right elbow joint, left shoulder joint, and right shoulder joint, six joints in total.
  • Step 303 Collect a video frame of the human motion when the standard action is displayed.
  • FIG. 4B is a schematic structural diagram of an actual action provided by an embodiment of the present application, and FIG. 4B shows an actual action made by a user when the standard action in FIG. 4A is displayed.
  • The captured video of the human body motion is a multi-frame image, and each frame has a corresponding human body motion.
  • One frame is used here for illustration; the other frames are processed by the same method.
  • Step 304: In the video frame, identify each joint of the human body, and obtain the connection line between every two adjacent joints.
  • The joints of the human body are identified; for details, refer to step 102 in the embodiment of FIG. 1, which is not repeated in this embodiment.
  • After the joints of the human body are recognized, the connection lines between adjacent joints are obtained: in FIG. 4B, connection 1 between the right wrist joint and the right elbow joint, connection 2 between the right elbow joint and the right shoulder joint, connection 3 between the right shoulder joint and the left shoulder joint, connection 4 between the left shoulder joint and the left elbow joint, and connection 5 between the left elbow joint and the left wrist joint.
  • For convenience of explanation, the action corresponding to each connection is called a decomposition action of the actual action made by the user; all the decomposition actions together constitute the actual action.
  • Step 305 Calculate the actual angle between the connection between the adjacent two joints and the preset reference direction.
  • Assuming the preset reference direction is the horizontal direction of the screen, the angle between connection 1 and the horizontal direction of the screen is calculated to be 35 degrees, the angle for connection 2 is 0 degrees, the angle for connection 3 is 0 degrees, the angle for connection 4 is 0 degrees, and the angle for connection 5 is 130 degrees.
  • Step 306: For each connection between two adjacent joints, calculate the difference between the corresponding standard angle and the actual angle, and determine whether the human body action matches the standard action. If it does not match, perform step 307; if it matches, perform step 308.
  • the standard angle is an angle between the connecting line between the adjacent two joints and the reference direction when the standard motion is performed, and the connection 1 between the right wrist joint and the right elbow joint in FIG. 4B is taken as an example for explanation.
  • the connection line 1 corresponds to the standard angle of 45 degrees in FIG. 4A, and the actual angle measured by the actual action in FIG. 4B is 35 degrees, and the difference is 10 degrees.
  • Since the difference of 10 degrees is less than 15 degrees, the decomposition action corresponding to connection 1 matches the corresponding decomposition action in the standard action. It is then determined in the same way whether the decomposition actions corresponding to connections 2, 3, 4, and 5 match their decomposition actions in the standard action. If all the decomposition actions match, the actual human action matches the standard action; if any decomposition action does not match the corresponding decomposition action in the standard action, the actual human action does not match the standard action.
  • Step 307: The human action score in the evaluation information of the human body motion is determined to be zero.
  • the score obtained by the user for the human motion is set to zero.
  • Step 308 for each connection between two adjacent joints, determining a score coefficient of the connection according to the corresponding difference and the error range.
  • The scoring coefficient p of the connection is calculated, where b is the lower limit of the error range, a is the upper limit of the error range, and δ is the difference.
  • For connection 1, the corresponding difference is 10 degrees, the upper limit of the error range is +50 degrees, and the lower limit of the error range is -50 degrees.
  • The scoring coefficient of connection 2 is 1, that of connection 3 is 1, that of connection 4 is 1, and that of connection 5 is 0.9.
  • Step 309 Generate evaluation information of the connection according to the score coefficient of the connection and the score corresponding to the connection, and further generate evaluation information of the human motion.
  • The evaluation information of a connection includes a decomposition action score, which is the product of the connection's scoring coefficient and the score corresponding to the connection.
  • In this example the whole action is worth 100 points and there are 5 connections, so each decomposition action is worth 20 points. Multiplying the 20 points of connection 1 by its scoring coefficient of 0.8 gives a decomposition action score of 16 points for connection 1, thereby generating the evaluation information of connection 1.
  • Likewise, the decomposition action score in the evaluation information of connection 2 is 20 points, that of connection 3 is 20 points, that of connection 4 is 20 points, and that of connection 5 is 18 points.
  • Summing the decomposition action scores of all the connections gives a human body action score of 94 points, which is the evaluation information of the human body action.
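The exact formula for the scoring coefficient p is not reproduced in this translation; one hypothetical form consistent with the worked numbers (δ = 10 giving 0.8 and δ = 5 giving 0.9 with an error range of -50 to +50 degrees) is p = 1 - |δ|/a. The sketch below uses that assumed formula to reproduce the 94-point example.

```python
def scoring_coefficient(delta, a=50.0, b=-50.0):
    """Hypothetical scoring coefficient: p = 1 - |delta| / a, with a
    coefficient of zero outside the error range [b, a].  This formula
    is an assumption that merely reproduces the worked numbers
    (delta=10 -> 0.8, delta=5 -> 0.9); the patent's own formula is not
    shown in this translation."""
    if not b <= delta <= a:
        return 0.0
    return 1.0 - abs(delta) / a

def action_score(deltas, total=100.0):
    """Human body action score: each connection is allotted an equal
    share of the total, scaled by its scoring coefficient, and the
    decomposition action scores are summed."""
    per_connection = total / len(deltas)
    return sum(scoring_coefficient(d) * per_connection for d in deltas)
```

With the five differences of the example (10, 0, 0, 0, 5 degrees), the decomposition scores are 16, 20, 20, 20, and 18 points, summing to the 94 points given above.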
  • The video frames of the other human body motions are processed in the same way, and the evaluation information of the human body motion in each video frame can be obtained separately.
  • If an action score in the human body motion evaluation information exceeds a threshold, such as 60 points, the corresponding video frames of the motion are used as frames for displaying the single-action score in the generated video; that is, the score information of the action is added to those video frames and retained long enough for the user to see the specific score.
  • Step 310 When the audio playback ends, the evaluation information of each human body motion is acquired, and the target video is generated.
  • The evaluation information of each human body motion corresponding to the standard actions displayed at the different time nodes indicates the degree of difference between the human body motion and the corresponding standard action: the higher the score in the evaluation information, the smaller the difference between the human body motion and the standard action, and vice versa.
  • The target video is generated from the audio, the acquired video frames, and the motion evaluation information of the corresponding human body motions; when the target video is played back, each action displays its corresponding score, so the user knows the score of each action, which helps the user improve the actions and makes for a good user experience.
  • In the human body motion recognition method of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; the adjacent joints of the human body are identified in the frames to obtain the connection lines between adjacent joints; the actual angle between each connection line and a preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle, achieving accurate motion recognition and solving the prior-art technical problem of inaccurate motion recognition.
  • To implement the above embodiments, the present application also proposes a human body motion recognition apparatus.
  • FIG. 5 is a schematic structural diagram of a human body motion recognition device according to an embodiment of the present application.
  • the device includes an acquisition module 51, an identification module 52, a connection module 53, a calculation module 54, and a determination module 55.
  • The acquisition module 51 is configured to capture video frames of the human body motion while a standard motion is displayed.
  • The identification module 52 is configured to identify the joints of the human body in the video frames.
  • The connection module 53 is configured to connect each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints.
  • The calculation module 54 is configured to calculate the actual angle between the connection line between two adjacent joints and the preset reference direction.
  • The determination module 55 is configured to determine, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion, where the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
  • In the human body motion recognition apparatus of the embodiments of the present application, the acquisition module captures video frames of the human body motion while a standard motion is displayed; the identification module identifies the joints of the human body in the video frames; the connection module connects each pair of adjacent joints to obtain the connection lines between adjacent joints; the calculation module calculates the actual angle between each connection line and the preset reference direction; and the determination module determines, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion.
  • By identifying the adjacent joints of the human body in the video frames, obtaining the connection lines between adjacent joints, calculating the actual angle between each connection line and the preset reference direction, and comparing that angle with the standard angle, the apparatus recognizes motions accurately and solves the prior-art technical problem of inaccurate motion recognition.
  • FIG. 6 is a schematic structural diagram of another human motion recognition device according to an embodiment of the present application.
  • the determining module 55 may further include: a calculating unit 551, a determining unit 552, a first scoring unit 553, and a second scoring unit 554.
  • the calculating unit 551 is configured to calculate a difference between the corresponding standard angle and the actual angle for the connection between each adjacent two joints.
  • The determining unit 552 is configured to determine that the human body motion matches the standard motion if the difference calculated for every connection line between two adjacent joints is within the error range, and to determine that the human body motion does not match the standard motion if the difference calculated for at least one connection line is not within the error range.
  • The first scoring unit 553 is specifically configured to: for each connection line between two adjacent joints, determine the scoring coefficient of the connection line from the corresponding difference and the error range; generate the evaluation information of the connection line from its scoring coefficient and the score assigned to the connection line, where the evaluation information of the connection line includes a decomposition-action score equal to the product of the scoring coefficient and the assigned score; and generate the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints, where the evaluation information of the human body motion includes a human body action score equal to the sum of the decomposition-action scores.
  • The second scoring unit 554 is specifically configured to:
  • set the human body action score in the evaluation information of the human body motion to zero.
  • the apparatus may further include: a selecting module 56, a playing module 57, a displaying module 58, and a generating module 59.
  • the selection module 56 is configured to obtain selected audio and standard actions corresponding to each time node in the audio.
  • the playing module 57 is configured to play audio.
  • the display module 58 is configured to display corresponding standard actions when the audio is played to each time node.
  • The generating module 59 is configured to acquire, when the audio playback ends, the evaluation information of each human body motion, where the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion, and to generate the target video from the audio, the video frames, and the motion evaluation information of each human body motion.
  • In the human body motion recognition apparatus of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; the adjacent joints of the human body are identified to obtain the connection lines between them; the actual angle between each connection line and the preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle, achieving accurate motion recognition.
  • FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
  • The electronic device includes a housing 71, a processor 72, a memory 73, a circuit board 74, and a power supply circuit 75. The circuit board 74 is disposed inside the space enclosed by the housing 71; the processor 72 and the memory 73 are disposed on the circuit board 74; the power supply circuit 75 supplies power to the circuits and devices of the electronic device; the memory 73 stores executable program code; and the processor 72, by reading the executable program code stored in the memory 73, runs the program corresponding to that code so as to perform the human body motion recognition method described in the foregoing method embodiments.
  • the electronic device exists in a variety of forms including, but not limited to:
  • (1) Mobile communication devices: these devices feature mobile communication functions and aim mainly at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
  • (2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also offer mobile Internet access. Such terminals include PDAs, MIDs, and UMPC devices, such as the iPad.
  • (3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable in-car navigation devices.
  • (4) Servers: devices that provide computing services. A server consists of a processor, hard disk, memory, system bus, and so on; it is similar in architecture to a general-purpose computer, but because it must provide highly reliable services, it has higher requirements for processing power, stability, reliability, security, scalability, and manageability.
  • (5) Other electronic devices having data interaction functions.
  • An embodiment of the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the human body motion recognition method described in the foregoing method embodiments is implemented.
  • An embodiment of the present application further provides a computer program product; when the instructions in the computer program product are executed by a processor, the human body motion recognition method described in the foregoing method embodiments is implemented.
  • The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • In the description of the present application, "a plurality" means at least two, such as two or three, unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
  • Portions of the present application can be implemented in hardware, software, firmware, or a combination thereof.
  • In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
  • The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.

Abstract

Embodiments of the present application provide a human body motion recognition method, apparatus, and electronic device. The method includes: while a standard motion is displayed, capturing video frames of a human body motion; identifying the joints of the human body in the video frames; connecting each pair of adjacent joints of the human body to obtain the connection lines between adjacent joints; calculating the actual angle between each connection line and a preset reference direction; and determining, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion. By identifying the adjacent joints of the body in the video frames, obtaining the connection lines between adjacent joints, calculating the actual angle between each connection line and the preset reference direction, and comparing that angle with the standard angle, the method recognizes motions accurately and solves the prior-art technical problem of inaccurate motion recognition.

Description

Human Body Motion Recognition Method, Apparatus, and Electronic Device
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 201711182909.3, entitled "Human body motion recognition method, apparatus, and electronic device", filed by 乐蜜有限公司 on November 23, 2017.
TECHNICAL FIELD
The present application relates to the field of mobile terminal technologies, and in particular to a human body motion recognition method, apparatus, and electronic device.
BACKGROUND
Somatosensory games realize human-computer interaction through an Internet operation platform: the player holds a dedicated game controller, and the motions of the player's body are recognized to control the motions of the characters in the game, allowing the player to put the whole body into the game and enjoy a new experience of somatosensory interaction.
In the related art, somatosensory gaming technology is mainly applied on personal computers and game consoles, which offer poor portability; moreover, the user's body motion is judged by determining the position of the hand-held controller and computing from it whether the motion is correct, which leads to inaccurate judgments.
SUMMARY
The present application aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present application is to propose a human body motion recognition method that identifies the adjacent joints of the human body in video frames, obtains the connection lines between adjacent joints, calculates the actual angle between each connection line and a preset reference direction, and determines, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion, so as to recognize motions accurately and solve the prior-art technical problem of inaccurate motion recognition.
A second object of the present application is to propose a human body motion recognition apparatus.
A third object of the present application is to propose an electronic device.
A fourth object of the present application is to propose a non-transitory computer-readable storage medium.
A fifth object of the present application is to propose a computer program product.
To achieve the above objects, an embodiment of the first aspect of the present application provides a human body motion recognition method, including:
while a standard motion is displayed, capturing video frames of a human body motion;
identifying the joints of the human body in the video frames;
connecting each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints;
calculating the actual angle between the connection line between two adjacent joints and a preset reference direction;
determining, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion, where the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
Optionally, as a first possible implementation of the first aspect, determining from the difference between the actual angle and the standard angle whether the human body motion matches the standard motion includes:
for each connection line between two adjacent joints, calculating the difference between the corresponding standard angle and the actual angle;
if the difference calculated for every connection line between two adjacent joints is within an error range, determining that the human body motion matches the standard motion;
if the difference calculated for at least one connection line between two adjacent joints is not within the error range, determining that the human body motion does not match the standard motion.
Optionally, as a second possible implementation of the first aspect, after determining that the human body motion matches the standard motion, the method further includes:
for each connection line between two adjacent joints, determining the scoring coefficient of the connection line from the corresponding difference and the error range;
generating the evaluation information of the connection line from its scoring coefficient and the score assigned to the connection line, the evaluation information of the connection line including a decomposition-action score equal to the product of the scoring coefficient of the connection line and the score assigned to it;
generating the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints, the evaluation information of the human body motion including a human body action score equal to the sum of the decomposition-action scores.
Optionally, as a third possible implementation of the first aspect, determining the scoring coefficient of the connection line from the corresponding difference and the error range includes:
calculating the scoring coefficient p of the connection line using the formula p = 1 - [2Δ/(a - b)], where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, as a fourth possible implementation of the first aspect, after determining that the human body motion does not match the standard motion, the method further includes:
setting the human body action score in the evaluation information of the human body motion to zero.
Optionally, as a fifth possible implementation of the first aspect, before capturing the video frames of the human body motion while the standard motion is displayed, the method further includes:
acquiring the selected audio and the standard motion corresponding to each time node in the audio;
playing the audio;
when the audio plays to each time node, displaying the corresponding standard motion.
Optionally, as a sixth possible implementation of the first aspect, the method further includes:
when the audio playback ends, acquiring the evaluation information of each human body motion, where the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion;
generating a target video from the audio, the video frames, and the motion evaluation information of each human body motion.
In the human body motion recognition method of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; the joints of the human body are identified in the video frames; each pair of adjacent joints is connected to obtain the connection lines between adjacent joints; the actual angle between each connection line and the preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle. By identifying the adjacent joints of the body in the video frames, obtaining the connection lines between them, calculating the actual angle between each connection line and the preset reference direction, and comparing that angle with the standard angle, the method recognizes motions accurately and solves the prior-art technical problem of inaccurate motion recognition.
To achieve the above objects, an embodiment of the second aspect of the present application provides a human body motion recognition apparatus, including:
an acquisition module, configured to capture video frames of a human body motion while a standard motion is displayed;
an identification module, configured to identify the joints of the human body in the video frames;
a connection module, configured to connect each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints;
a calculation module, configured to calculate the actual angle between the connection line between two adjacent joints and a preset reference direction;
a determination module, configured to determine, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion, where the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
Optionally, as a first possible implementation of the second aspect, the determination module includes:
a calculation unit, configured to calculate, for each connection line between two adjacent joints, the difference between the corresponding standard angle and the actual angle;
a determination unit, configured to determine that the human body motion matches the standard motion if the difference calculated for every connection line between two adjacent joints is within an error range, and to determine that the human body motion does not match the standard motion if the difference calculated for at least one connection line is not within the error range.
Optionally, as a second possible implementation of the second aspect, the determination module further includes:
a first scoring unit, configured to: for each connection line between two adjacent joints, determine the scoring coefficient of the connection line from the corresponding difference and the error range; generate the evaluation information of the connection line from its scoring coefficient and the score assigned to it, the evaluation information of the connection line including a decomposition-action score equal to the product of the scoring coefficient and the assigned score; and generate the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints, the evaluation information of the human body motion including a human body action score equal to the sum of the decomposition-action scores.
Optionally, as a third possible implementation of the second aspect, the first scoring unit is specifically configured to:
calculate the scoring coefficient p of the connection line using the formula p = 1 - [2Δ/(a - b)], where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
Optionally, as a fourth possible implementation of the second aspect, the determination module further includes:
a second scoring unit, configured to set the human body action score in the evaluation information of the human body motion to zero.
Optionally, as a fifth possible implementation of the second aspect, the apparatus further includes:
a selection module, configured to acquire the selected audio and the standard motion corresponding to each time node in the audio;
a playback module, configured to play the audio;
a display module, configured to display the corresponding standard motion when the audio plays to each time node.
Optionally, as a sixth possible implementation of the second aspect, the apparatus further includes:
a generation module, configured to acquire, when the audio playback ends, the evaluation information of each human body motion, where the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion, and to generate a target video from the audio, the video frames, and the motion evaluation information of each human body motion.
In the human body motion recognition apparatus of the embodiments of the present application, the acquisition module captures video frames of the human body motion while a standard motion is displayed; the identification module identifies the joints of the human body in the video frames; the connection module connects each pair of adjacent joints to obtain the connection lines between adjacent joints; the calculation module calculates the actual angle between each connection line and the preset reference direction; and the determination module determines, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion. By identifying the adjacent joints of the body in the video frames, obtaining the connection lines between them, calculating the actual angle between each connection line and the preset reference direction, and comparing that angle with the standard angle, the apparatus recognizes motions accurately and solves the prior-art problem of inaccurate motion recognition.
To achieve the above objects, an embodiment of the third aspect of the present application provides an electronic device, including a housing, a processor, a memory, a circuit board, and a power supply circuit, where the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit supplies power to the circuits and devices of the electronic device; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs the program corresponding to that code so as to perform the human body motion recognition method described in the first aspect.
To achieve the above objects, an embodiment of the fourth aspect of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the human body motion recognition method described in the embodiments of the first aspect is implemented.
To achieve the above objects, an embodiment of the fifth aspect of the present application further provides a computer program product; when the instructions in the computer program product are executed by a processor, the human body motion recognition method described in the embodiments of the first aspect is implemented.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a human body motion recognition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the proportions of the limbs to the height of the body in human anatomy according to this embodiment;
FIG. 3 is a schematic flowchart of another human body motion recognition method according to an embodiment of the present application;
FIG. 4A is a schematic structural diagram of a standard motion according to an embodiment of the present application;
FIG. 4B is a schematic structural diagram of an actual motion according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a human body motion recognition apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another human body motion recognition apparatus according to an embodiment of the present application; and
FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
DETAILED DESCRIPTION
Embodiments of the present application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present application, and are not to be construed as limiting the present application.
The human body motion recognition method, apparatus, and electronic device of the embodiments of the present application are described below with reference to the drawings.
The electronic device in this embodiment may specifically be a mobile phone. Those skilled in the art will appreciate that the electronic device may also be another mobile terminal, and human body motion recognition on such terminals can likewise follow the solution provided in this embodiment.
In the following embodiments, the human body motion recognition method is explained taking a mobile phone as the electronic device.
FIG. 1 is a schematic flowchart of a human body motion recognition method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step 101: while a standard motion is displayed, capture video frames of a human body motion.
Specifically, the user opens the mobile phone application and enters the video capture interface. As a possible implementation, an audio selection interface may be entered first, in which the user picks a preferred audio track from a drop-down menu; each time node in the audio has a corresponding standard motion. The user confirms the selected audio with a confirmation button and enters the video capture interface, where video frame capture begins. While the phone plays the audio, it displays the corresponding standard motion at each time node; when a standard motion is displayed, the user performs the same motion in synchronization, and the camera simultaneously captures video frames of the user performing that motion.
While a standard motion is displayed, multiple video frames containing the human body motion are captured synchronously. As a possible implementation, taking the moment the standard motion is displayed as the time reference, N subsequent frames containing the human body motion may be captured; the value of N can be chosen by those skilled in the art according to the actual application.
As another possible implementation, video frames containing the human body motion may be captured continuously throughout the audio playback.
Step 102: identify the joints of the human body in the video frames.
As a possible implementation, when the video frames carry depth information, the human body can be separated from the background in each frame, and the joints of the human body can then be identified. So that the video frames carry depth information, the camera used to capture them may be one capable of acquiring depth information, from which the human body parts in the image are identified: for example, a dual camera, or an RGBD (Red-Green-Blue Depth) camera that obtains depth information while imaging; depth information may also be acquired with a structured-light or TOF lens, which are not enumerated one by one here.
Specifically, based on the acquired depth information and face recognition technology, the face region and its position in the image are identified, yielding the pixels of the face region and their corresponding depth information, from which the average depth of the face pixels is calculated. Further, since the human body and the face lie essentially in the same imaging plane, pixels whose depth differs from the average depth of the face pixels by less than a threshold are identified as belonging to the human body; the body and its contour can thus be identified, and the depth and position of each pixel of the body and contour determined, separating the human body from the background. Furthermore, to make it easier to identify the joints and to eliminate background interference, the image may be binarized so that background pixels have value 0 and body pixels have value 1.
Further, from the identified positions of the face and body, and the proportional relationship between the limbs and the height in human anatomy, the position of each joint of the body can be calculated. For example, FIG. 2 is a schematic diagram of the limb-to-height proportions in human anatomy provided in this embodiment and lists the proportional position of each joint in the body. From the position information of the face and body, the position of the neck joint of the body in the video frame can be determined, giving the two-dimensional coordinates (x, y) of the neck joint. As shown in FIG. 2, the difference between the height of the shoulder joints and the height of the neck joint is fixed, so the row containing the shoulder joints can be determined from the coordinates of the neck joint and that difference. Since background pixels have value 0 and body pixels have value 1, the leftmost and rightmost points in that row with pixel value 1 correspond to the shoulder joints, giving the two-dimensional coordinates (x1, y1) of the left shoulder joint and (x2, y2) of the right shoulder joint.
From the determined position of the left shoulder joint, and the standard distance between the left shoulder joint and the left elbow joint in FIG. 2, a circle is drawn with that standard distance as the diameter; since background pixels have value 0, once the left and right pixel positions with value 1 are identified, the two-dimensional coordinates (x3, y3) of the left elbow joint can be determined.
Similarly, the two-dimensional coordinates of the other joints of the body can be identified and determined. The joints of the human body include at least the neck joint, the left and right shoulder joints, the left and right elbow joints, the left and right wrist joints, the left and right knee joints, and the left and right ankle joints; as there are many joints, they are not enumerated one by one here. The methods for identifying and determining the two-dimensional coordinates of the other joints follow the same principle and are not repeated here.
Step 103: connect each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints.
For example, the left shoulder joint and the left elbow joint are two adjacent joints; connecting the left shoulder joint and the left elbow joint as positioned during the human body motion yields the connection line between them.
Step 104: calculate the actual angle between the connection line between two adjacent joints and a preset reference direction.
Specifically, if the preset reference direction is the horizontal direction, the angle between the connection line of two adjacent joints and the horizontal can be calculated from their acquired positions. For example, denote the angle by θ, the two-dimensional coordinates of the left shoulder joint by (x1, y1), and those of the left elbow joint by (x3, y3); from the formula tan(θ) = (y3 - y1)/(x3 - x1), the actual angle θ between the connection line of the adjacent left shoulder and left elbow joints and the horizontal direction is obtained. The actual angles between the connection lines of the other pairs of adjacent joints and the horizontal are calculated in the same way.
Step 105: determine, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion.
Specifically, the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed. For each connection line between two adjacent joints, the difference between the actual angle measured while the user performs the motion and the corresponding standard angle is calculated. If the difference calculated for every connection line is within the error range, the human body motion is determined to match the standard motion; if the difference calculated for at least one connection line is not within the error range, the human body motion is determined not to match the standard motion.
It should be noted that the human body motion in each of the captured frames containing the motion is matched against the standard motion; the smaller the difference within the error range, the higher the degree of match between the human body motion and the standard motion, i.e., the more faithfully the user imitates the standard motion.
In the human body motion recognition method of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; the joints of the human body are identified in the video frames; each pair of adjacent joints is connected to obtain the connection lines between adjacent joints; the actual angle between each connection line and the preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle, achieving accurate motion recognition and solving the prior-art technical problem of inaccurate motion recognition.
On the basis of the previous embodiment, this embodiment provides another human body motion recognition method. FIG. 3 is a schematic flowchart of another human body motion recognition method according to an embodiment of the present application. As shown in FIG. 3, the method may include:
Step 301: acquire the selected audio and the standard motion corresponding to each time node in the audio, and play the audio.
Specifically, the mobile phone is preset with multiple audio tracks, and each time node in each track has a corresponding standard motion. The user selects a track according to preference and plays it; while the audio plays, video frames containing the user are captured synchronously until the audio playback ends.
Step 302: when the audio plays to each time node, display the corresponding standard motion.
Specifically, when playback reaches a time node, the corresponding standard motion is displayed in the camera's video capture interface. As a possible implementation, the standard motion may be shown in a floating window in the capture interface; as another possible implementation, it may be scrolled across the capture interface in the form of a bullet-screen comment.
For example, FIG. 4A is a schematic structural diagram of a standard motion according to an embodiment of the present application. It shows the standard motion displayed at a certain time node and the joints involved in that motion: the left wrist joint, right wrist joint, left elbow joint, right elbow joint, left shoulder joint, and right shoulder joint, six joints in total.
Step 303: while the standard motion is displayed, capture video frames of the human body motion.
Specifically, when the audio plays to a given time node and the corresponding standard motion is displayed, the camera synchronously captures video frames of the human body motion the user performs in imitation of that standard motion. The captured video frames are multiple frames, each of which records the human body motion corresponding to the standard motion. For example, FIG. 4B is a schematic structural diagram of an actual motion according to an embodiment of the present application and shows the actual motion the user performs when the standard motion of FIG. 4A is displayed.
It should be noted that the captured video frames of the human body motion are multiple frames, each containing the corresponding human body motion; in this embodiment one of these frames is used as an example, and the other frames are processed in the same way.
Step 304: identify the joints of the human body in the video frame and obtain the connection lines between adjacent joints.
In the captured video frame containing the human body motion, the joints of the human body are identified; for details, refer to step 102 of the embodiment of FIG. 1, which is not repeated in this embodiment.
Further, the joints of the body are identified from the captured human body motion, and the connection lines between adjacent joints are obtained: in FIG. 4B, connection 1 between the right wrist joint and the right elbow joint, connection 2 between the right elbow joint and the right shoulder joint, connection 3 between the right shoulder joint and the left shoulder joint, connection 4 between the left shoulder joint and the left elbow joint, and connection 5 between the left elbow joint and the left wrist joint. For ease of description, the motion corresponding to each connection line is called a decomposition action of the actual motion made by the user; all the decomposition actions together constitute the actual motion.
Step 305: calculate the actual angle between the connection line between two adjacent joints and the preset reference direction.
Specifically, as shown in FIG. 4B, the preset reference direction is the horizontal direction of the screen. The calculated angles are: 35 degrees between connection 1 and the horizontal, 0 degrees between connection 2 and the horizontal, 0 degrees between connection 3 and the horizontal, 0 degrees between connection 4 and the horizontal, and 130 degrees between connection 5 and the horizontal.
Step 306: for each connection line between two adjacent joints, calculate the difference between the corresponding standard angle and the actual angle, and determine whether the human body motion matches the standard motion; if they do not match, execute step 307; if they match, execute step 308.
Specifically, the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed. Taking connection 1 between the right wrist joint and the right elbow joint in FIG. 4B as an example: the standard angle for connection 1 in FIG. 4A is 45 degrees, the actual angle measured from the actual motion in FIG. 4B is 35 degrees, and the difference is 10 degrees. Given a preset difference threshold of, for example, 15 degrees, the 10-degree difference is less than 15 degrees, so the decomposition action corresponding to connection 1 is determined to match the corresponding decomposition action of the standard motion. Likewise, it is determined whether the decomposition actions corresponding to connections 2, 3, 4, and 5 match the corresponding decomposition actions of the standard motion. If all the decomposition actions match the standard motion, the actual human body motion matches the standard motion; if any decomposition action fails to match its counterpart in the standard motion, the actual human body motion does not match the standard motion.
Step 307: set the human body action score in the evaluation information of the human body motion to zero.
Specifically, if the actual human body motion does not match the standard motion, the score the user receives for that motion is set to 0.
Step 308: for each connection line between two adjacent joints, determine the scoring coefficient of the connection line from the corresponding difference and the error range.
Specifically, the scoring coefficient p of the connection line is calculated from the formula p = 1 - [2Δ/(a - b)], where b is the lower limit of the error range, a is the upper limit, and Δ is the difference. Taking connection 1 between the right wrist joint and the right elbow joint in FIG. 4B as an example, the corresponding difference is 10 degrees; with an upper error limit of positive 50 degrees and a lower limit of negative 50 degrees, p = 1 - [2 × 10/(50 - (-50))] = 0.8, i.e., the scoring coefficient of connection 1 is 0.8.
Further, the scoring coefficients of connection 2, connection 3, and connection 4 are likewise calculated to be 1, 1, and 1 respectively, and the scoring coefficient of connection 5 to be 0.9.
Step 309: generate the evaluation information of each connection line from its scoring coefficient and the score assigned to the connection line, and then generate the evaluation information of the human body motion.
Specifically, the evaluation information of a connection line includes its decomposition-action score, which is the product of the scoring coefficient of the connection line and the score assigned to it. In FIG. 4B, the motion is worth 100 points in total and comprises 5 decomposition actions, so each decomposition action has a full score of 20 points. Multiplying the 20-point full score of the decomposition action of connection 1 by its scoring coefficient of 0.8 gives a decomposition-action score of 16 points, which forms the evaluation information of connection 1. Likewise, the decomposition-action score in the evaluation information of connection 2 is 20 points, that of connection 3 is 20 points, that of connection 4 is 20 points, and that of connection 5 is 18 points. Summing the decomposition-action scores of all the connection lines gives a human body action score of 94 points, which constitutes the evaluation information of the human body motion.
Further, the video frames of the other human body motions are processed in the same way, yielding the evaluation information of the human body motion in each set of frames. As a possible implementation, for human body motions whose action score in the evaluation information exceeds a threshold score, such as 60 points, the corresponding video frames are used in the generated video as the frames that display the score of that single action; that is, the score information of the action is added to those frames so that it remains on screen long enough for the user to see the specific score.
Step 310: when the audio playback ends, acquire the evaluation information of each human body motion and generate the target video.
Specifically, when the audio playback ends, the evaluation information of each human body motion corresponding to the standard motions displayed at the different time nodes is acquired. The evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion: the higher the score in the evaluation information, the smaller the difference between the human body motion and the standard motion, and vice versa.
Further, the target video is generated from the audio, the captured video frames, and the motion evaluation information of the corresponding human body motions. When the target video is played back, each human body motion displays its score, so the user knows how each action was rated; this helps the user improve the actions and provides a good user experience.
In the human body motion recognition method of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; by identifying the adjacent joints of the body in the frames, the connection lines between adjacent joints are obtained; the actual angle between each connection line and the preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle, achieving accurate motion recognition and solving the prior-art technical problem of inaccurate motion recognition. In addition, the human body motions in the captured video frames can be scored to obtain motion evaluation information indicating the degree of difference between the human body motion and the standard motion; by generating the target video, the user can review and correct the motions during playback, so that the motions are more standard the next time a video is recorded.
To implement the above embodiments, the present application also proposes a human body motion recognition apparatus.
FIG. 5 is a schematic structural diagram of a human body motion recognition apparatus according to an embodiment of the present application.
As shown in FIG. 5, the apparatus includes an acquisition module 51, an identification module 52, a connection module 53, a calculation module 54, and a determination module 55.
The acquisition module 51 is configured to capture video frames of a human body motion while a standard motion is displayed.
The identification module 52 is configured to identify the joints of the human body in the video frames.
The connection module 53 is configured to connect each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints.
The calculation module 54 is configured to calculate the actual angle between the connection line between two adjacent joints and a preset reference direction.
The determination module 55 is configured to determine, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion, where the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the human body motion recognition apparatus of the embodiments of the present application, the acquisition module captures video frames of the human body motion while a standard motion is displayed; the identification module identifies the joints of the human body in the video frames; the connection module connects each pair of adjacent joints to obtain the connection lines between adjacent joints; the calculation module calculates the actual angle between each connection line and the preset reference direction; and the determination module determines, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion. By identifying the adjacent joints of the body in the video frames, obtaining the connection lines between them, calculating the actual angle between each connection line and the preset reference direction, and comparing that angle with the standard angle, the apparatus recognizes motions accurately and solves the prior-art technical problem of inaccurate motion recognition.
Based on the above embodiments, an embodiment of the present application further provides another possible implementation of the human body motion recognition apparatus. FIG. 6 is a schematic structural diagram of another human body motion recognition apparatus according to an embodiment of the present application. On the basis of the previous embodiment, the determination module 55 may further include a calculation unit 551, a determination unit 552, a first scoring unit 553, and a second scoring unit 554.
The calculation unit 551 is configured to calculate, for each connection line between two adjacent joints, the difference between the corresponding standard angle and the actual angle.
The determination unit 552 is configured to determine that the human body motion matches the standard motion if the difference calculated for every connection line between two adjacent joints is within the error range, and to determine that the human body motion does not match the standard motion if the difference calculated for at least one connection line is not within the error range.
As a possible implementation of this embodiment of the present application, if the determination unit 552 determines that the human body motion matches the standard motion, the first scoring unit 553 is specifically configured to:
for each connection line between two adjacent joints, determine the scoring coefficient of the connection line from the corresponding difference and the error range; generate the evaluation information of the connection line from its scoring coefficient and the score assigned to it, the evaluation information of the connection line including a decomposition-action score equal to the product of the scoring coefficient and the assigned score; and generate the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints, the evaluation information of the human body motion including a human body action score equal to the sum of the decomposition-action scores.
As another possible implementation of this embodiment of the present application, if the determination unit 552 determines that the human body motion does not match the standard motion, the second scoring unit 554 is specifically configured to:
set the human body action score in the evaluation information of the human body motion to zero.
As a possible implementation of this embodiment, the apparatus may further include a selection module 56, a playback module 57, a display module 58, and a generation module 59.
The selection module 56 is configured to acquire the selected audio and the standard motion corresponding to each time node in the audio.
The playback module 57 is configured to play the audio.
The display module 58 is configured to display the corresponding standard motion when the audio plays to each time node.
The generation module 59 is configured to acquire, when the audio playback ends, the evaluation information of each human body motion, where the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion, and to generate a target video from the audio, the video frames, and the motion evaluation information of each human body motion.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the human body motion recognition apparatus of the embodiments of the present application, while a standard motion is displayed, video frames of the human body motion are captured; by identifying the adjacent joints of the body in the frames, the connection lines between adjacent joints are obtained; the actual angle between each connection line and the preset reference direction is calculated; and whether the human body motion matches the standard motion is determined from the difference between the actual angle and the standard angle, achieving accurate motion recognition and solving the prior-art technical problem of inaccurate motion recognition. In addition, the human body motions in the captured video frames can be scored to obtain motion evaluation information indicating the degree of difference between the human body motion and the standard motion; by generating the target video, the user can review and correct the motions during playback, so that the motions are more standard the next time a video is recorded.
To implement the above embodiments, an embodiment of the present application further provides an electronic device. FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application. As shown in FIG. 7, the electronic device includes a housing 71, a processor 72, a memory 73, a circuit board 74, and a power supply circuit 75, where the circuit board 74 is disposed inside the space enclosed by the housing 71, and the processor 72 and the memory 73 are disposed on the circuit board 74; the power supply circuit 75 supplies power to the circuits and devices of the electronic device; the memory 73 stores executable program code; and the processor 72, by reading the executable program code stored in the memory 73, runs the program corresponding to that code so as to perform the human body motion recognition method described in the foregoing method embodiments.
For the specific execution of the above steps by the processor 72, and the further steps the processor 72 performs by running the executable program code, reference may be made to the description of the embodiments shown in FIGS. 1-3 of the present application, which is not repeated here.
The electronic device exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: these devices feature mobile communication functions and aim mainly at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also offer mobile Internet access. Such terminals include PDAs, MIDs, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable in-car navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, hard disk, memory, system bus, and so on; it is similar in architecture to a general-purpose computer, but because it must provide highly reliable services, it has higher requirements for processing power, stability, reliability, security, scalability, and manageability.
(5) Other electronic devices having data interaction functions.
To implement the above embodiments, an embodiment of the present application further provides a non-transitory computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the human body motion recognition method described in the above method embodiments is implemented.
To implement the above embodiments, an embodiment of the present application further provides a computer program product; when the instructions in the computer program product are executed by a processor, the human body motion recognition method described in the above method embodiments is implemented.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may join and combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means at least two, such as two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing the steps of a custom logic function or process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in the flowcharts, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they can be implemented by any one of, or a combination of, the following techniques well known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments within the scope of the present application.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto; any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

  1. A human body motion recognition method, characterized by comprising the following steps:
    while a standard motion is displayed, capturing video frames of a human body motion;
    identifying the joints of the human body in the video frames;
    connecting each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints;
    calculating the actual angle between the connection line between two adjacent joints and a preset reference direction;
    determining, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion; wherein the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
  2. The recognition method according to claim 1, characterized in that determining, from the difference between the actual angle and the standard angle, whether the human body motion matches the standard motion comprises:
    for each connection line between two adjacent joints, calculating the difference between the corresponding standard angle and the actual angle;
    if the difference calculated for every connection line between two adjacent joints is within an error range, determining that the human body motion matches the standard motion;
    if the difference calculated for at least one connection line between two adjacent joints is not within the error range, determining that the human body motion does not match the standard motion.
  3. The recognition method according to claim 2, characterized in that after determining that the human body motion matches the standard motion, the method further comprises:
    for each connection line between two adjacent joints, determining the scoring coefficient of the connection line from the corresponding difference and the error range;
    generating the evaluation information of the connection line from its scoring coefficient and the score assigned to the connection line; the evaluation information of the connection line comprising a decomposition-action score equal to the product of the scoring coefficient of the connection line and the score assigned to it;
    generating the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints; wherein the evaluation information of the human body motion comprises a human body action score equal to the sum of the decomposition-action scores.
  4. The recognition method according to claim 3, characterized in that determining the scoring coefficient of the connection line from the corresponding difference and the error range comprises:
    calculating the scoring coefficient p of the connection line using the formula p = 1 - [2Δ/(a - b)]; where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
  5. The recognition method according to any one of claims 2-4, characterized in that after determining that the human body motion does not match the standard motion, the method further comprises:
    setting the human body action score in the evaluation information of the human body motion to zero.
  6. The recognition method according to any one of claims 1-5, characterized in that before capturing the video frames of the human body motion while the standard motion is displayed, the method further comprises:
    acquiring the selected audio and the standard motion corresponding to each time node in the audio;
    playing the audio;
    when the audio plays to each time node, displaying the corresponding standard motion.
  7. The recognition method according to claim 6, characterized in that the method further comprises:
    when the audio playback ends, acquiring the evaluation information of each human body motion; wherein the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion;
    generating a target video from the audio, the video frames, and the motion evaluation information of each human body motion.
  8. A human body motion recognition apparatus, characterized in that the apparatus comprises:
    an acquisition module, configured to capture video frames of a human body motion while a standard motion is displayed;
    an identification module, configured to identify the joints of the human body in the video frames;
    a connection module, configured to connect each pair of adjacent joints of the human body to obtain the connection line between the two adjacent joints;
    a calculation module, configured to calculate the actual angle between the connection line between two adjacent joints and a preset reference direction;
    a determination module, configured to determine, from the difference between the actual angle and a standard angle, whether the human body motion matches the standard motion; wherein the standard angle is the angle between the connection line of each pair of adjacent joints and the reference direction when the standard motion is performed.
  9. The recognition apparatus according to claim 8, characterized in that the determination module comprises:
    a calculation unit, configured to calculate, for each connection line between two adjacent joints, the difference between the corresponding standard angle and the actual angle;
    a determination unit, configured to determine that the human body motion matches the standard motion if the difference calculated for every connection line between two adjacent joints is within an error range, and to determine that the human body motion does not match the standard motion if the difference calculated for at least one connection line is not within the error range.
  10. The recognition apparatus according to claim 9, characterized in that the determination module further comprises:
    a first scoring unit, configured to: for each connection line between two adjacent joints, determine the scoring coefficient of the connection line from the corresponding difference and the error range; generate the evaluation information of the connection line from its scoring coefficient and the score assigned to it, the evaluation information of the connection line comprising a decomposition-action score equal to the product of the scoring coefficient and the assigned score; and generate the evaluation information of the human body motion from the evaluation information of all the connection lines between adjacent joints, the evaluation information of the human body motion comprising a human body action score equal to the sum of the decomposition-action scores.
  11. The recognition apparatus according to claim 10, characterized in that the first scoring unit is specifically configured to:
    calculate the scoring coefficient p of the connection line using the formula p = 1 - [2Δ/(a - b)]; where b is the lower limit of the error range, a is the upper limit of the error range, and Δ is the difference.
  12. The recognition apparatus according to any one of claims 9-11, characterized in that the determination module further comprises:
    a second scoring unit, configured to set the human body action score in the evaluation information of the human body motion to zero.
  13. The recognition apparatus according to any one of claims 8-12, characterized in that the apparatus further comprises:
    a selection module, configured to acquire the selected audio and the standard motion corresponding to each time node in the audio;
    a playback module, configured to play the audio;
    a display module, configured to display the corresponding standard motion when the audio plays to each time node.
  14. The recognition apparatus according to claim 13, characterized in that the apparatus further comprises:
    a generation module, configured to acquire, when the audio playback ends, the evaluation information of each human body motion, wherein the evaluation information of a human body motion indicates the degree of difference between that motion and the corresponding standard motion, and to generate a target video from the audio, the video frames, and the motion evaluation information of each human body motion.
  15. An electronic device, characterized by comprising a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit supplies power to the circuits and devices of the electronic device; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs the program corresponding to that code so as to perform the human body motion recognition method according to any one of claims 1-7.
  16. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the human body motion recognition method according to any one of claims 1-7 is implemented.
  17. A computer program product, characterized in that, when the instructions in the computer program product are executed by a processor, the human body motion recognition method according to any one of claims 1-7 is performed.
PCT/CN2018/098598 2017-11-23 2018-08-03 Human body motion recognition method, apparatus and electronic device WO2019100754A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711182909.3A CN107943291B (zh) 2017-11-23 2017-11-23 Human body motion recognition method, apparatus and electronic device
CN201711182909.3 2017-11-23

Publications (1)

Publication Number Publication Date
WO2019100754A1 true WO2019100754A1 (zh) 2019-05-31

Family

ID=61930056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098598 WO2019100754A1 (zh) 2017-11-23 2018-08-03 人体动作的识别方法、装置和电子设备

Country Status (2)

Country Link
CN (1) CN107943291B (zh)
WO (1) WO2019100754A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112998700A (zh) * 2021-05-26 2021-06-22 北京欧应信息技术有限公司 Device, system and method for assisting assessment of a subject's motor function

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107943291B (zh) * 2017-11-23 2021-06-08 卓米私人有限公司 人体动作的识别方法、装置和电子设备
CN108875687A (zh) * 2018-06-28 2018-11-23 泰康保险集团股份有限公司 一种护理质量的评估方法及装置
CN109432753B (zh) * 2018-09-26 2020-12-29 Oppo广东移动通信有限公司 动作矫正方法、装置、存储介质及电子设备
CN111105345B (zh) * 2018-10-26 2021-11-09 北京微播视界科技有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN111107279B (zh) * 2018-10-26 2021-06-29 北京微播视界科技有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN109462776B (zh) * 2018-11-29 2021-08-20 北京字节跳动网络技术有限公司 一种视频特效添加方法、装置、终端设备及存储介质
CN109621332A (zh) * 2018-12-29 2019-04-16 北京卡路里信息技术有限公司 一种健身动作的属性确定方法、装置、设备和存储介质
CN116074564A (zh) * 2019-08-18 2023-05-05 聚好看科技股份有限公司 一种界面显示方法及显示设备
WO2021032092A1 (zh) 2019-08-18 2021-02-25 聚好看科技股份有限公司 显示设备
CN110728181B (zh) * 2019-09-04 2022-07-12 北京奇艺世纪科技有限公司 行为评价方法、装置、计算机设备和存储介质
CN111158486B (zh) * 2019-12-31 2023-12-05 恒信东方文化股份有限公司 一种识别唱跳节目动作的方法及识别系统
CN112487940B (zh) * 2020-11-26 2023-02-28 腾讯音乐娱乐科技(深圳)有限公司 视频的分类方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010271167A (ja) * 2009-05-21 2010-12-02 Kddi Corp Portable terminal, program, and method for determining a pedestrian's heading using an acceleration sensor and a geomagnetic sensor
CN105278685A (zh) * 2015-09-30 2016-01-27 陕西科技大学 EON-based auxiliary teaching system and teaching method
CN105307017A (zh) * 2015-11-03 2016-02-03 Tcl集团股份有限公司 Posture correction method and device for smart-TV users
CN107943291A (zh) * 2017-11-23 2018-04-20 乐蜜有限公司 Human body motion recognition method, apparatus and electronic device



Also Published As

Publication number Publication date
CN107943291A (zh) 2018-04-20
CN107943291B (zh) 2021-06-08


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18881053; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 18881053; Country of ref document: EP; Kind code of ref document: A1)