CN110866417A - Image processing method and device and electronic equipment


Info

Publication number: CN110866417A
Authority: CN (China)
Prior art keywords: line segment, user, key points, image, evaluation result
Legal status: Pending
Application number: CN201810982473.4A
Application filed by: Alibaba Group Holding Ltd
Current assignee: Alibaba Group Holding Ltd
Original assignee: Alibaba Group Holding Ltd
Other languages: Chinese (zh)
Inventors: 宋子奇, 陈志文, 李晓波
Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide an image processing method, an image processing apparatus, and an electronic device. The method includes: acquiring position information of at least two key points of a user's body in an image to be processed; determining an evaluation result of the user's posture according to the position information of the at least two key points; and outputting the evaluation result. The method, apparatus, and device can evaluate the user's performance objectively, so that the user can learn the effect of his or her practice promptly and accurately, grasp his or her own level, and improve the efficiency and accuracy of learning movements.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
Dance learning is a process in which a learner imitates the movements of others. In the prior art, judging whether an imitation is faithful depends on subjective evaluation of the visual effect, either by others or by the learner. For example, the learner may obtain feedback on the learning effect from others' evaluations, such as whether a movement is in place or the rhythm is accurate, or by observing himself or herself, such as practicing in front of a mirror.
However, subjective evaluation suffers from inconsistent and unstable standards, which makes it harder for learners to understand the problems in their practice and to assess their own level accurately. Because people differ in attention, aesthetics, reaction speed, and so on, evaluations from others cannot guarantee a consistent standard; different evaluators may even give opposite judgments, leaving the learner unable to decide whom to follow, which results in low learning efficiency and poor learning outcomes.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, an image processing apparatus, and an electronic device, to improve the efficiency with which learners learn movements.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring position information of at least two key points of a user body in an image to be processed;
determining an evaluation result of the user posture according to the position information of the at least two key points;
and outputting the evaluation result.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the acquisition module is used for acquiring the position information of at least two key points of the user body in the image to be processed;
the determining module is used for determining an evaluation result of the user posture according to the position information of the at least two key points;
and the output module is used for outputting the evaluation result.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory is used to store one or more computer instructions, and when the one or more computer instructions are executed by the processor, the electronic device implements the image processing method in the first aspect. The electronic device may also include a communication interface for communicating with other devices or a communication network.
An embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to enable a computer to implement the image processing method in the first aspect when executed.
The image processing method, apparatus, and electronic device provided by the embodiments of the invention can capture an image of a user practicing a movement, recognize the position information of at least two key points of the user's body from the image, determine from that information whether the user's posture meets a reference standard or compute the difference between the user's posture and the posture corresponding to the reference standard, and output a corresponding evaluation result. The user's performance is thus evaluated objectively, so that the user can learn the effect of his or her practice promptly and accurately, grasp his or her own level, and improve the efficiency and accuracy of learning movements.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a first embodiment of an image processing method according to the present invention;
Fig. 2 is a schematic diagram of key points according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a second embodiment of an image processing method according to the present invention;
Fig. 4 is a schematic flowchart of a method for calculating angle information according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of a third embodiment of an image processing method according to the present invention;
Fig. 6 is a schematic flowchart of a fourth embodiment of an image processing method according to the present invention;
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a good or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such good or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a good or system that includes the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 1 is a schematic flowchart of an image processing method according to a first embodiment of the present invention. As shown in fig. 1, the image processing method in the present embodiment may include:
step 101, obtaining position information of at least two key points of a user body in an image to be processed.
The method provided by the embodiment of the invention may be executed by any device capable of image processing, such as a mobile phone, a tablet, a computer, or a dancing machine. Such a device may be installed with a corresponding application program or operating system and can then execute the method according to the embodiment of the present invention.
In this step, the acquired image may be subjected to recognition processing to obtain position information of a plurality of key points of the body of the user. The position information may be coordinate information of a keypoint in an image, or a number of rows and columns of the keypoint in the image. The image to be processed may be a frame of image obtained from a video or a single still image.
The identification of key points of the user's body in an image can be realized by existing motion or gesture recognition schemes such as the OpenPose model. OpenPose is an open-source human body pose estimation model that can identify the coordinates of multiple key points of a human pose from an image.
Fig. 2 is a schematic diagram of key points according to an embodiment of the present invention. As shown in Fig. 2, reference numerals 0 to 13 represent 14 key points of the human body, which are, in order: nose 0, shoulder middle 1, right shoulder 2, right elbow 3, right hand 4, left shoulder 5, left elbow 6, left hand 7, right hip 8, right knee 9, right foot 10, left hip 11, left knee 12, left foot 13.
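For illustration, the 14 key points of Fig. 2 can be written as an index-to-name mapping. The following is a minimal Python sketch; the names are informal labels for the numbered points above, not an output format prescribed by any particular detector:

```python
# Index-to-name mapping for the 14 body key points of Fig. 2.
KEYPOINTS = {
    0: "nose",            1: "shoulder_middle",
    2: "right_shoulder",  3: "right_elbow",   4: "right_hand",
    5: "left_shoulder",   6: "left_elbow",    7: "left_hand",
    8: "right_hip",       9: "right_knee",   10: "right_foot",
    11: "left_hip",      12: "left_knee",    13: "left_foot",
}
```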
For convenience of description, the embodiment of the present invention takes Fig. 2 as an example to illustrate the implementation of the image processing method. In practical applications, the key points of the body can be chosen according to specific requirements and are not limited to the points shown in Fig. 2. Of course, key-point extraction is not limited to the OpenPose model either; the key points of the user's body may be obtained by other models or schemes.
And 102, determining an evaluation result of the user posture according to the position information of the at least two key points.
Specifically, after the position information of the at least two key points is obtained, whether the gesture of the user is standard or not may be determined according to the relative position of each key point. Alternatively, the evaluation result of the user posture may be determined according to a difference between the posture of the user and a reference posture. In the embodiment of the present invention, the reference gesture may be a standard gesture stored in advance, or may also be a corresponding gesture in an image or video to be learned or simulated by a user.
For example, if the standard posture requires lifting the hand above the top of the head, the right hand 4 should be above the nose 0; if the right hand 4 is below the nose 0, the user's posture may be considered substandard. For another example, if the standard posture requires the feet to be separated to shoulder width, the distance between the right foot 10 and the left foot 13 should be equal or close to the distance between the right shoulder 2 and the left shoulder 5; if the difference is too large, the user's posture is considered non-standard.
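As a sketch of such rule-based checks (assuming image coordinates with y increasing downward and `pts` mapping a key-point index to an (x, y) tuple; the tolerance value is an illustrative choice, not taken from the text):

```python
def hand_over_head(pts):
    # "Hand lifted over the head": in image coordinates (y grows
    # downward), the right hand (4) must be higher, i.e. have a
    # smaller y, than the nose (0).
    return pts[4][1] < pts[0][1]

def feet_shoulder_width(pts, tolerance=0.2):
    # Feet (10, 13) roughly as far apart as the shoulders (2, 5);
    # the posture is non-standard if the difference is too large.
    feet = abs(pts[10][0] - pts[13][0])
    shoulders = abs(pts[2][0] - pts[5][0])
    return abs(feet - shoulders) <= tolerance * shoulders
```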
The evaluation result of the user gesture can be qualitative or quantitative, for example, the evaluation result can be "qualified" or "unqualified"; alternatively, the evaluation result may be excellent, medium, poor, etc.; or, the evaluation result may be a specific evaluation score, where a higher score indicates a more standard user posture, and conversely indicates a less standard user posture.
And step 103, outputting the evaluation result.
Specifically, the evaluation result may be pushed to the user by display, voice playback, short message, push message, and the like.
The method provided by the embodiment of the invention can be applied to any scene needing to evaluate the posture of the user. For example, the method provided by the embodiment of the invention can be applied to scenes in which a user needs to learn postures, such as dance and martial arts, or entertainment scenes, such as dancing, or the like, or can be used as an auxiliary scoring scheme for dance or martial arts games.
In an optional application scenario, the method in the embodiment of the present invention may be used to evaluate a static image of a user, for example, when the user exercises a certain motion posture, an image of the motion posture of the user may be captured, and the method in the embodiment of the present invention processes the image and outputs a corresponding evaluation result, so that the user can know whether the posture of the user is standard or not in time.
In another optional application scenario, the method in the embodiment of the present invention may be used to evaluate user postures in a video. For example, when a user learns a dance, the user can select a reference video to imitate and dance along with its beat; a camera collects video data of the user's dance, each frame of which can be processed to obtain a corresponding evaluation result that is displayed in real time.
To sum up, the image processing method provided by the embodiment of the present invention can obtain an image of the user during posture practice and recognize the position information of at least two key points of the user's body from the image. According to the position information of the at least two key points, it can judge whether the user's posture reaches a reference standard, or calculate the difference between the user's posture and the posture corresponding to the reference standard, and output the corresponding evaluation result. The user's performance is thus evaluated objectively, so that the user can know the learning effect promptly and accurately, grasp his or her own level, and improve the efficiency and accuracy of learning movements.
Fig. 3 is a flowchart illustrating an image processing method according to a second embodiment of the present invention. The embodiment is based on the technical scheme provided by the first embodiment, and the posture of the user is evaluated through the connection line between the key points. As shown in fig. 3, the image processing method in the present embodiment may include:
step 301, obtaining position information of at least two key points of a user body in an image to be processed.
Step 302, determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points.
Assuming there are M key points and any two key points can be connected, there can be at most M × (M - 1)/2 connecting lines according to the combination formula. In practical application, one or more of these M × (M - 1)/2 connecting lines can be selected to evaluate the user's posture.
As those skilled in the art will understand, selecting more line segments increases the amount of calculation but makes the evaluation more accurate, while selecting fewer line segments reduces the amount of calculation but weakens the evaluation.
Optionally, in the 14 key points shown in fig. 2, the following 20 groups of key points may be selected, where a connection line is established between two key points in each group, and 20 line segments are obtained in total and used as the line segments determined in step 302:
nose 0 and shoulder middle 1, shoulder middle 1 and right shoulder 2, shoulder middle 1 and left shoulder 5, right shoulder 2 and right elbow 3, right elbow 3 and right hand 4, left shoulder 5 and left elbow 6, left elbow 6 and left hand 7, shoulder middle 1 and right hip 8, right hip 8 and right knee 9, right knee 9 and right foot 10, shoulder middle 1 and left hip 11, left hip 11 and left knee 12, left knee 12 and left foot 13, shoulder middle 1 and right hand 4, shoulder middle 1 and left hand 7, shoulder middle 1 and right foot 10, shoulder middle 1 and left foot 13, nose 0 and right hand 4, nose 0 and left hand 7, right foot 10 and left foot 13.
The 20 line segments obtained according to the 20 groups of key points can well realize the evaluation of the user posture, and the practicability and the evaluation accuracy are both considered.
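For reference, the 20 groups can be transcribed as index pairs (a plain restatement of the list above; with M = 14, the full set of M × (M - 1)/2 = 91 possible pairs is reduced to these 20):

```python
# The 20 key-point pairs listed above, as (index, index) tuples.
SEGMENTS = [
    (0, 1), (1, 2), (1, 5), (2, 3), (3, 4),
    (5, 6), (6, 7), (1, 8), (8, 9), (9, 10),
    (1, 11), (11, 12), (12, 13), (1, 4), (1, 7),
    (1, 10), (1, 13), (0, 4), (0, 7), (10, 13),
]
```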
Step 303, determining an evaluation result of the user gesture according to the angle information of the at least one line segment.
And step 304, outputting the evaluation result.
The angle information of the line segment may be included angle information between the line segment and a horizontal line or a vertical line, or the angle information of the line segment may be included angle information between the line segment and another line segment.
Specifically, the angle information of each line segment may be compared with a corresponding reference angle, and an evaluation result of the user gesture may be determined.
Alternatively, the reference angle may be an angle range, and if the angle information of a line segment in the image to be processed is within the angle range, the line segment is considered to be standard, and if the angle information of the line segment is not within the angle range, the line segment is considered to be non-standard.
For example, assuming that the reference angle of the line segment between the nose 0 and the right hand 4 is 30 ° to 40 °, and the angle information of the line segment between the nose 0 and the right hand 4 in the current image to be processed is 35 °, it is considered to be standard, and if it exceeds the range of 30 ° to 40 °, it is considered to be non-standard.
If more than a preset number of the angle values are non-standard, the evaluation result of the user's posture may be "unqualified"; otherwise, it is "qualified".
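A minimal sketch of this range-based check, assuming the reference angles are given as (low, high) ranges per line segment and the "preset number" is a simple count threshold (the default of 3 is illustrative):

```python
def qualify(angles, ranges, max_nonstandard=3):
    # angles: angle information per line segment, in degrees.
    # ranges: matching (low, high) reference ranges per line segment.
    nonstandard = sum(
        1 for angle, (low, high) in zip(angles, ranges)
        if not (low <= angle <= high)
    )
    return "qualified" if nonstandard <= max_nonstandard else "unqualified"
```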
Or, the reference angle may be a specific value, and the evaluation result is determined by comparing the similarity between the angle information of the line segment and the corresponding reference angle, where the higher the similarity is, the higher the evaluation result is, and otherwise, the lower the evaluation result is.
The angle information of the line segment can be calculated according to the position information of any two points on the line segment. Preferably, the calculation may be performed based on position information of two key points at both ends of the line segment.
Fig. 4 is a flowchart illustrating a method for calculating angle information according to an embodiment of the present invention. As shown in fig. 4, calculating the angle information of the line segment may include the steps of:
step 401, for each line segment, determining a position difference of two key points in a horizontal direction and a position difference of the two key points in a vertical direction according to position information of the two key points at two ends of the line segment.
And step 402, determining a trigonometric function value of an included angle between the line segment and the horizontal line or the vertical line according to the position difference of the two key points in the horizontal direction and the position difference of the two key points in the vertical direction.
And step 403, determining the angle information of the line segment according to the trigonometric value.
The position information of the key points may be coordinates of the key points, the position difference of the two key points in the horizontal direction may be a difference between horizontal coordinates of the two key points, and the position difference of the two key points in the vertical direction may be a difference between vertical coordinates of the two key points.
The trigonometric function value of the included angle can be a sine value, a cosine value or a tangent value of the included angle and the like. According to the trigonometric values, corresponding angle information can be determined. To simplify the calculation, the tangent value may be selected to determine the corresponding angle information.
Optionally, for key point A (x_A, y_A) and key point B (x_B, y_B), the angle information of the line segment between them is:

θ = atan[(y_B - y_A)/(x_B - x_A)]    (1)

where atan[·] is the arctangent function, x_A and y_A are the abscissa and ordinate of key point A, and x_B and y_B are the abscissa and ordinate of key point B. The angle information θ is specifically the included angle between the line segment and the horizontal line (i.e., the x-axis).
The position differences of the two key points in the horizontal and vertical directions may be positive or negative, and the angle information takes a value in (-90, 90), in degrees.
When the angle information of a line segment is its included angle with another line segment, formula (1) can be used to calculate the included angle between each of the two line segments and the horizontal line, and the two calculated angles can be subtracted to obtain the included angle between the two line segments.
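A sketch of formula (1) and of the segment-to-segment case; coordinates are assumed to be plain (x, y) tuples, and the vertical segment (zero horizontal difference) is handled explicitly since the division in formula (1) is undefined there:

```python
import math

def segment_angle(a, b):
    # Formula (1): theta = atan((y_B - y_A) / (x_B - x_A)), the angle
    # between segment a-b and the horizontal line, in degrees.
    dx = b[0] - a[0]
    dy = b[1] - a[1]
    if dx == 0:
        return 90.0  # vertical segment: atan would divide by zero
    return math.degrees(math.atan(dy / dx))  # value in (-90, 90)

def angle_between(a, b, c, d):
    # Angle between segments a-b and c-d, as the difference of their
    # angles with the horizontal line (the case described above).
    return segment_angle(a, b) - segment_angle(c, d)
```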
Optionally, before the position information of at least two key points of the user's body in the image to be processed is acquired, horizontal correction may be performed on the image to be processed, to prevent an inaccurate evaluation result caused by the camera not shooting level.
Specifically, the horizontal correction of the image may be achieved by measuring the degree of inclination of the camera, for example, the inclination angle of the camera may be acquired by an inertial measurement unit or the like, and the angle shot by the camera may be horizontally corrected according to the inclination angle.
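A hedged sketch of such leveling with OpenCV, assuming the tilt angle in degrees has already been read from an inertial measurement unit (the sign convention and border handling depend on the sensor and are illustrative choices):

```python
import cv2

def level_image(image, tilt_deg):
    # Rotate the frame by the measured camera tilt so the picture is
    # horizontal before key points are extracted.
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```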
In summary, the image processing method provided by this embodiment determines the angle information of line segments from the position information of key points and evaluates the user's posture from that angle information. The evaluation does not depend on the distance between the user and the camera and is not affected by the lengths of the user's limbs and trunk, so it can accurately reflect whether the user's posture is standard and provide a valuable evaluation result for learning movements.
In another alternative embodiment, the evaluation result of the user gesture may also be determined according to the length information of the line segment, that is, the distance between two key points. Specifically, the length information of the line segment can be compared with the reference length, and the evaluation result of the user posture can be determined according to the comparison result, so that the method is simple in steps and easy to implement.
Fig. 5 is a flowchart illustrating an image processing method according to a third embodiment of the present invention. In this embodiment, based on the technical solution provided in any of the above embodiments, the evaluation result is determined by the difference between the angle information and the reference angle. As shown in fig. 5, the image processing method in the present embodiment may include:
step 501, obtaining position information of at least two key points of a user body in an image to be processed.
Step 502, determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points.
Step 503, calculating the difference between the angle information of each line segment and the corresponding reference angle.
And step 504, determining an evaluation result of the user posture according to the corresponding difference value of each line segment.
Optionally, calculating a score corresponding to each line segment according to the difference corresponding to each line segment; and determining the evaluation result according to the weighted sum of the scores corresponding to the line segments. Alternatively, the evaluation result may be determined according to a mean value of scores corresponding to each line segment.
And step 505, outputting the evaluation result.
In this embodiment, the evaluation result may be a score or an evaluation level. The smaller the absolute value of the difference corresponding to a line segment (i.e., the difference between the line segment's angle information and the corresponding reference angle), the higher the score or evaluation level, and vice versa. The evaluation result can therefore be calculated from the absolute value of the difference using a negative correlation function.
In an alternative embodiment, the absolute value of the difference corresponding to the line segment may be input into a negative correlation function, so as to obtain the score corresponding to the line segment. The negative correlation function may be a function in which the value of the dependent variable decreases as the value of the independent variable increases, and the value of the dependent variable increases as the value of the independent variable decreases.
Optionally, the score corresponding to the i-th line segment may be calculated according to formula (2):

S_i = -a × D_i + b    (2)

In formula (2), S_i is the score corresponding to the i-th line segment, a and b are constants, and D_i is the absolute value of the difference corresponding to the i-th line segment.
a and b can be positive numbers, the values of a and b can be set according to actual needs, and the values of a and b can determine the highest score and the lowest score of the final evaluation result. Therefore, the values of a and b may also be determined based on the highest and lowest scores entered by a manager or pose designer.
After determining the score corresponding to each line segment, a weighted sum of the scores corresponding to the line segments may be used as the evaluation result.
The weight corresponding to each line segment may be set according to actual needs, and optionally, the weight corresponding to each line segment may be set by a configurator, and the configurator may be a designer of an action simulated by a system administrator or a user.
For example, if a dance mainly involves movements of both hands, the line segments along the arms may be given higher weights and the other line segments lower weights, so that the learning effect is reflected better and error interference from less important positions is reduced.
In another optional implementation, a fault-tolerance interval may also be set for the user: as long as the difference between the angle information corresponding to the user's posture and the reference angle is smaller than a certain value, the posture is considered standard and receives a full score. This avoids the user failing to reach a full score because of a small error, and effectively improves the user experience.
Specifically, the absolute value of the difference corresponding to a line segment may be input into the negative correlation function, and the output taken as the score of the line segment; the score is then constrained to a preset interval. For example, if the score corresponding to the line segment is greater than a first threshold, it is reduced to the first threshold, and if it is smaller than a second threshold, it is increased to the second threshold.
The first threshold may be a highest value of the score, the second threshold may be a lowest value of the score, and the first threshold may be greater than the second threshold.
Alternatively, the score corresponding to a line segment may be calculated according to formula (2); assuming that a is 0.05 and b is 6, formula (2) becomes:

S_i = -0.05 × D_i + 6    (3)

Because the angle information and the reference angle both lie in (-90, 90), the absolute value D_i is at most close to 180 and at least 0, so -0.05 × D_i + 6 has a maximum of 6 and a minimum close to -3.

Assume the evaluation result is expected to be at most 100 points and at least 0 points. Since there are 20 line segments, with each line segment weighted 1 the score interval of each line segment should be [0, 5] to ensure that the final evaluation result lies between 0 and 100; that is, each score is at most 5 and at least 0. If the score calculated by formula (3) is between 5 and 6, the score of the line segment is taken as 5, and if it is between -3 and 0, the score is taken as 0.

After the scores of the line segments are constrained to between 0 and 5, the weighted sum of the scores of the individual line segments can be used as the final evaluation result.
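Putting formula (3), the [0, 5] clamp, and the weighted sum together, a minimal sketch of this worked example (unit weights and 20 line segments, so the result lies in [0, 100]):

```python
def segment_score(diff_abs, a=0.05, b=6.0, low=0.0, high=5.0):
    # Formula (3): S_i = -a * D_i + b, then constrained to [low, high]
    # so scores between 5 and 6 count as 5, and negative scores as 0.
    return min(max(-a * diff_abs + b, low), high)

def evaluate(diffs, weights=None):
    # Weighted sum of the clamped per-segment scores; diffs holds the
    # absolute angle differences D_i for the 20 line segments.
    weights = weights or [1.0] * len(diffs)
    return sum(w * segment_score(d) for w, d in zip(weights, diffs))
```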
Further, the first threshold, the second threshold, and an error-tolerance value input by a configurator can be obtained, and the interval in which the score output by the negative correlation function lies is determined according to the first threshold, the second threshold, and the error-tolerance value; the negative correlation function is then determined according to that interval.

Specifically, the interval in which the output score lies may be [second threshold - error-tolerance value, first threshold + error-tolerance value]; or the error-tolerance value may include a high error-tolerance value and a low error-tolerance value, in which case the interval is [second threshold - low error-tolerance value, first threshold + high error-tolerance value].
Alternatively, taking the negative correlation function shown in formula (3) as an example, the values of a and b may be calculated according to formulas (4) and (5):

a = (H - L + W_H + W_L)/180    (4)

b = H + W_H    (5)

where H is the first threshold, i.e., the highest score, L is the second threshold, i.e., the lowest score, W_H is the high error-tolerance value, and W_L is the low error-tolerance value; W_H and W_L may or may not be equal.
Suppose the configurator expects the final evaluation result to be between 0 and 10 points, so the lowest score of each line segment is 0 and the highest is 0.5 (making the sum of the 20 line-segment scores fall in [0, 10]), and the configurator sets the error-tolerance value to 0.2. The configurator then inputs: first threshold 0.5, second threshold 0, error-tolerance value 0.2. From this input and formulas (4) and (5), a = 0.005 and b = 0.7.
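The derivation of a and b from the configured values, per formulas (4) and (5); the last line reproduces the worked example above (H = 0.5, L = 0, both tolerance values 0.2):

```python
def linear_coefficients(high, low, tol_high, tol_low):
    # Formulas (4) and (5): map the score interval, widened by the
    # tolerance values, onto the maximal angle-difference span of 180.
    a = (high - low + tol_high + tol_low) / 180.0
    b = high + tol_high
    return a, b

print(linear_coefficients(0.5, 0.0, 0.2, 0.2))  # -> (0.005, 0.7)
```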
In summary, the image processing method provided by this embodiment obtains the score of each line segment by inputting the absolute value of the corresponding difference into a negative correlation function, and determines the evaluation result from the weighted sum of the line-segment scores. The user's posture can thus be evaluated quickly and accurately, improving evaluation efficiency; moreover, a fault-tolerance interval can be set for the scores, so that the output evaluation result is reasonable and the user experience is improved.
In an alternative embodiment, the corresponding evaluation result may also be determined by a further function of the angle information and the reference angle. For example, the score corresponding to the line segment may be determined by a ratio of the angle information of the line segment and the corresponding reference angle, the closer the ratio is to 1, the more standard the gesture of the user is, the higher the score may be, the larger the difference between the ratio and 1, the less standard the gesture of the user is, and the lower the score may be.
Fig. 6 is a flowchart illustrating a fourth embodiment of an image processing method according to the present invention. The embodiment determines the evaluation result by comparing the similarity between the image to be processed and the reference image on the basis of the technical scheme provided by any embodiment. As shown in fig. 6, the image processing method in the present embodiment may include:
step 601, obtaining dance video data of the user collected by the camera, and determining an image to be processed according to the dance video data.
Specifically, the image to be processed may be a frame of image in a dance video of the user, which is acquired by the camera. Optionally, the method in this embodiment may be applied to a scene in which the dance video of the user collected by the camera is displayed in real time and the dance of the user is scored in real time, and in this case, the image to be processed may be an image to be displayed to the user in a next frame.
Step 602, obtaining position information of at least two key points of a user's body in the image to be processed, and determining angle information of at least one line segment according to the position information of the at least two key points.
Step 603, obtaining a reference image corresponding to the image to be processed in the reference video being simulated by the user, calculating the position information of the at least two key points in the reference image, and determining a reference angle corresponding to the image to be processed according to the position information of the key points in the reference image.
The reference image is the frame in the reference video being imitated by the user that corresponds to the image to be processed. The correspondence may be determined by frame number or by time; for example, if the image to be processed corresponds to 1 minute 30 seconds in the user's dance video, the reference image may be the image at 1 minute 30 seconds in the reference video.
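A sketch of the time-based correspondence, assuming both videos expose a frame rate (the function and variable names are assumptions for illustration):

```python
def reference_frame_index(user_frame_index, user_fps, reference_fps):
    # Map a frame of the user's dance video to the reference frame
    # shown at the same timestamp (e.g. 1 min 30 s in both videos).
    timestamp = user_frame_index / user_fps
    return round(timestamp * reference_fps)
```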
Or, in a scene where the user dance video and the corresponding reference video acquired by the camera are displayed in real time and the user dance is scored in real time, the reference image may be an image to be displayed to the user in the next frame in the reference video.
The reference angles can be calculated from the reference image. Specifically, the position information of at least two key points in the reference image may be obtained first, the key points being consistent with those in the image to be processed: for example, if the key points in the image to be processed are the 14 key points shown in Fig. 2, the key points in the reference image also include those 14 key points. Corresponding line segments are then determined from the 14 key points in the reference image, and the angle information of those line segments is calculated as the reference angles.
It can be understood that the number of the line segments determined in the reference image is the same as that of the line segments determined in the image to be processed, and the line segments are in one-to-one correspondence.
Specifically, assume the position information of the i-th key point in the image to be processed is (x_1i, y_1i), the position information of the i-th key point in the reference image is (x_2i, y_2i), the position information of the j-th key point in the image to be processed is (x_1j, y_1j), and the position information of the j-th key point in the reference image is (x_2j, y_2j). Then the angle information of the line segment between the i-th and j-th key points in the image to be processed is:

θ_1 = atan[(y_1i - y_1j)/(x_1i - x_1j)]    (6)

and the angle information of the line segment between the i-th and j-th key points in the reference image is:

θ_2 = atan[(y_2i - y_2j)/(x_2i - x_2j)]    (7)
and step 604, comparing the angle information of each line segment with the corresponding reference angle, and determining the evaluation result of the user posture.
And the reference angle is angle information corresponding to the line segment in the reference image. More specifically, a reference angle corresponding to a line segment between the ith key point and the jth key point in the image to be processed is angle information of a line segment between the ith key point and the jth key point in the reference image.
According to the difference between the angle information of the line segment in the image to be processed and the corresponding reference angle, the final evaluation result can be calculated, and the specific implementation method can be referred to the above embodiments, which are not described herein again.
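Tying formulas (6) and (7) to the scoring of the previous embodiment, a per-frame sketch reusing the illustrative helpers from earlier (`segment_angle`, `SEGMENTS`, `evaluate`; `user_pts` and `ref_pts` are assumed to map key-point indices to (x, y) coordinates in the image to be processed and the reference image, respectively):

```python
def frame_score(user_pts, ref_pts):
    # For each line segment, compare theta_1 in the image to be
    # processed (formula (6)) with theta_2 for the same segment in the
    # reference image (formula (7)); |theta_1 - theta_2| is the D_i
    # fed into the weighted scoring.
    diffs = [
        abs(segment_angle(user_pts[i], user_pts[j])
            - segment_angle(ref_pts[i], ref_pts[j]))
        for i, j in SEGMENTS
    ]
    return evaluate(diffs)
```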
And step 605, playing the dance video shot by the camera, and displaying the evaluation result corresponding to the current image.
The evaluation result can be displayed in the dance video in the forms of numbers, characters, colors, animation and the like, and the evaluation result may change continuously with the continuous change of the video image.
Optionally, the playing the dance video shot by the camera in step 605 may include: and simultaneously playing the reference video and the dance video shot by the camera, and displaying the positions of the key points in the reference video and the dance video.
Playing the user's dance video and the reference video at the same time and displaying the positions of the key points lets the user see the difference between his or her posture and the reference posture more clearly, which improves learning efficiency and learning results.
In practical application, a user can select a reference video to imitate and then dance along with it in front of the camera. As dance video data of the user is captured, the current image in the video is taken as the image to be processed, the user's posture is evaluated according to the posture similarity between the image to be processed and the corresponding reference image in the reference video, and the resulting evaluation is displayed to the user together with the reference image and the current image. The continuous reference images form the reference video, and the continuous images shot by the camera form the user's dance video. Optionally, the reference video may be displayed on the left half of the screen, the user's dance video on the right half, and the evaluation result in the upper left corner of the dance video.
In other alternative embodiments, after the user finishes dancing, the images in the reference video and the images in the user's dance video may be compared one by one and the evaluation results output, or a single still frame may be selected and compared with its reference image to output an evaluation result.
In summary, in the image processing method provided by this embodiment, dance video data of the user collected by the camera is obtained during dance learning, and the corresponding evaluation result is determined from the user's dance video data and the reference video. The degree of similarity between the imitator's dance and the imitated dance can thus be reflected effectively, solving the problem that existing dance-similarity evaluation standards are not objective and stable enough; by statistically analyzing human postures, the user's performance is scored against a unified standard, so that the user can grasp his or her own level accurately.
An image processing apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these image processing apparatuses can be configured by the steps taught in the present embodiment using commercially available hardware components.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 7, the apparatus may include:
an obtaining module 701, configured to obtain position information of at least two key points of a user's body in an image to be processed;
a determining module 702, configured to determine an evaluation result of the user gesture according to the position information of the at least two key points;
an output module 703 is configured to output the evaluation result.
Optionally, the determining module 702 may be specifically configured to: determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points; determining an evaluation result of the user posture according to the angle information of the at least one line segment; or determining the evaluation result of the user gesture according to the length information of the at least one line segment.
Optionally, the angle information of the line segment is information of an included angle between the line segment and a horizontal line or a vertical line, or the angle information of the line segment is information of an included angle between the line segment and another line segment.
Optionally, the determining module 702 may be further configured to: for each line segment, determining the position difference of two key points in the horizontal direction and the position difference of the two key points in the vertical direction according to the position information of the two key points at the two ends of the line segment; determining a trigonometric function value of an included angle between the line segment and a horizontal line or a vertical line according to the position difference of the two key points in the horizontal direction and the position difference of the two key points in the vertical direction; and determining the angle information of the line segment according to the trigonometric function value.
Optionally, the obtaining module 701 may further be configured to: before acquiring the position information of at least two key points of a user body in an image to be processed, carrying out horizontal correction on the image to be processed.
Optionally, the determining module 702 may be specifically configured to: determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points; comparing the angle information of each line segment with the corresponding reference angle to determine the evaluation result of the user posture; the reference angle is angle information which is stored in advance and corresponds to a standard posture, or the reference angle is corresponding angle information in a reference image which is simulated by the user.
Optionally, the evaluation result is a score or an evaluation grade, and the smaller the difference between the angle information and the reference angle is, the higher the score or the evaluation grade is.
Optionally, the determining module 702 may be specifically configured to: determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points; calculating the difference between the angle information of each line segment and the corresponding reference angle; and determining the evaluation result of the user posture according to the corresponding difference value of each line segment.
Optionally, the determining module 702 may include: the first determining unit is used for determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points; a first calculation unit for calculating a difference between the angle information of each line segment and a corresponding reference angle; the second calculating unit is used for calculating the scores corresponding to the line segments according to the difference values corresponding to the line segments; and the second determining unit is used for determining the evaluation result according to the weighted sum of the scores corresponding to the line segments.
Optionally, the second computing unit may be specifically configured to: for each line segment, inputting the absolute value of the difference value corresponding to the line segment into a negative correlation function to obtain a score corresponding to the line segment; if the score corresponding to the line segment is larger than a first threshold value, reducing the score corresponding to the line segment to the first threshold value; and if the score corresponding to the line segment is smaller than a second threshold value, increasing the score corresponding to the line segment to the second threshold value.
Optionally, the determining module 702 may be further configured to: acquiring a first threshold value, a second threshold value and an error tolerance value input by a configuration worker; determining an interval where the score output by the negative correlation function is located according to the first threshold, the second threshold and the error tolerance value; and determining the negative correlation function according to the interval of the score output by the negative correlation function.
Optionally, the second computing unit may be specifically configured to: calculating the corresponding score of the line segment according to the following formula:
S_i = -a × D_i + b

where S_i is the score of the i-th line segment, D_i is the absolute value of the difference corresponding to the i-th line segment, and a and b are constants.
Optionally, the obtaining module 701 may further be configured to: the method comprises the steps of obtaining dance video data of a user collected by a camera before obtaining position information of at least two key points of the body of the user in an image to be processed, and determining the image to be processed according to the dance video data.
Correspondingly, the output module 703 may be specifically configured to: and playing the dance video shot by the camera, and displaying the evaluation result corresponding to the current image.
Optionally, the obtaining module 701 may further be configured to: acquiring a reference image corresponding to the image to be processed in a reference video which is simulated by the user; calculating position information of the at least two key points in the reference image; and determining a reference angle corresponding to the image to be processed according to the position information of the key point in the reference image.
Optionally, the output module 703 may be specifically configured to: and simultaneously playing the reference video and the dance video shot by the camera, displaying the positions of the key points in the reference video and the dance video, and displaying the evaluation result corresponding to the current image.
The apparatus shown in fig. 7 can execute the image processing method provided in the first to fifth embodiments, and reference may be made to the related description of the foregoing embodiments for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the foregoing embodiments, and are not described herein again.
The internal functions and structures of the image processing apparatus are described above, and in one possible design, the structure of the image processing apparatus may be implemented as an electronic device such as a mobile phone, a tablet, a computer, a dancing machine, and the like. Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 8, the electronic device may include: a processor 801 and a memory 802, wherein the memory 802 is used for storing a program that supports an electronic device to execute the image processing method provided in any one of the first to fifth embodiments, and the processor 801 is configured to execute the program stored in the memory 802.
The program comprises one or more computer instructions which, when executed by the processor 801, enable the following steps to be performed:
acquiring position information of at least two key points of a user body in an image to be processed;
determining an evaluation result of the user posture according to the position information of the at least two key points;
and outputting the evaluation result.
Optionally, the processor 801 is further configured to perform all or part of the steps in the embodiments shown in fig. 1 to 6.
The electronic device may further include a communication interface, which is used for the electronic device to communicate with other devices or a communication network.
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to perform acts comprising:
acquiring position information of at least two key points of a user body in an image to be processed;
determining an evaluation result of the user posture according to the position information of the at least two key points;
and outputting the evaluation result.
In addition, the computer instructions, when executed by a processor, may further cause the processor to perform all or part of the steps involved in the image processing method in the above embodiments.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described aspects and portions of the present technology which contribute substantially or in part to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein, including without limitation disk storage, CD-ROM, optical storage, and the like.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. An image processing method, comprising:
acquiring position information of at least two key points of a user body in an image to be processed;
determining an evaluation result of the user posture according to the position information of the at least two key points;
and outputting the evaluation result.
2. The method of claim 1, wherein determining the evaluation result of the user posture according to the position information of the at least two key points comprises:
determining at least one line segment according to the position information of the at least two key points, wherein the line segment is a connecting line between any two key points;
determining an evaluation result of the user posture according to the angle information of the at least one line segment; or determining the evaluation result of the user posture according to the length information of the at least one line segment.
3. The method according to claim 2, wherein the angle information of the line segment is information of an included angle between the line segment and a horizontal line or a vertical line, or the angle information of the line segment is information of an included angle between the line segment and another line segment.
4. The method of claim 2, further comprising:
for each line segment, determining the position difference of two key points in the horizontal direction and the position difference of the two key points in the vertical direction according to the position information of the two key points at the two ends of the line segment;
determining a trigonometric function value of an included angle between the line segment and a horizontal line or a vertical line according to the position difference of the two key points in the horizontal direction and the position difference of the two key points in the vertical direction;
and determining the angle information of the line segment according to the trigonometric function value.
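By way of illustration and not limitation, the angle determination of claim 4 could proceed as follows; the claim does not fix which trigonometric function is used, so the arctangent below is an assumption:

    import math

    def angle_to_horizontal(p1, p2):
        # The horizontal and vertical position differences of the two endpoint
        # key points give the tangent of the included angle with the horizontal
        # line; the angle is then recovered from that trigonometric value.
        dx = p2[0] - p1[0]   # position difference in the horizontal direction
        dy = p2[1] - p1[1]   # position difference in the vertical direction
        return math.degrees(math.atan2(abs(dy), abs(dx)))

    print(angle_to_horizontal((0.0, 0.0), (3.0, 4.0)))   # about 53.13 degrees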
5. The method of claim 2, further comprising, prior to acquiring the position information of the at least two key points of the user's body in the image to be processed:
and carrying out horizontal correction on the image to be processed.
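One way the horizontal correction of claim 5 could be realized is a rotation about the image center; this OpenCV-based sketch, including how the tilt angle is obtained, is an assumption, as the disclosure names no implementation:

    import cv2

    def correct_horizontal(image, tilt_degrees):
        # Rotate the image about its center by the estimated camera tilt so
        # that the horizon becomes level; how tilt_degrees is estimated is
        # left open here.
        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_degrees, 1.0)
        return cv2.warpAffine(image, m, (w, h))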
6. The method according to any one of claims 2-5, wherein determining the evaluation result of the user posture according to the angle information of the at least one line segment comprises:
comparing the angle information of each line segment with the corresponding reference angle to determine the evaluation result of the user posture;
the reference angle is angle information which is stored in advance and corresponds to a standard posture, or the reference angle is the corresponding angle information in a reference image that the user imitates.
7. The method of claim 6, wherein comparing the angle information of each line segment with a corresponding reference angle to determine the evaluation result of the user posture comprises:
calculating the difference between the angle information of each line segment and the corresponding reference angle;
and determining the evaluation result of the user posture according to the corresponding difference value of each line segment.
8. The method of claim 7, wherein determining the evaluation result of the user posture according to the difference value corresponding to each line segment comprises:
calculating the score corresponding to each line segment according to the difference value corresponding to each line segment;
and determining the evaluation result according to the weighted sum of the scores corresponding to the line segments.
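One possible aggregation matching claims 7 and 8, with illustrative weights and a hypothetical linear per-segment scoring function (neither is specified at this level of the claims):

    def aggregate_score(diffs, weights, tolerance=15.0):
        # diffs:   {segment: absolute angle difference in degrees}
        # weights: {segment: weight}, e.g. torso segments weighted above limbs
        def score(diff):
            # Illustrative negatively correlated score: 100 at zero difference,
            # falling linearly to 0 at the tolerance.
            return max(0.0, 100.0 * (1.0 - abs(diff) / tolerance))
        total_weight = sum(weights.values())
        return sum(weights[s] * score(d) for s, d in diffs.items()) / total_weight

    print(aggregate_score({"left_arm": 3.0, "right_arm": 9.0},
                          {"left_arm": 1.0, "right_arm": 1.0}))   # 60.0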
9. The method of claim 8, wherein calculating the score corresponding to the line segment comprises:
inputting the absolute value of the difference value corresponding to the line segment into a negative correlation function to obtain a score corresponding to the line segment;
if the score corresponding to the line segment is larger than a first threshold value, reducing the score corresponding to the line segment to the first threshold value;
and if the score corresponding to the line segment is smaller than a second threshold value, increasing the score corresponding to the line segment to the second threshold value.
10. The method of claim 9, further comprising:
acquiring a first threshold value, a second threshold value and an error tolerance value input by configuration personnel;
determining the interval in which the score output by the negative correlation function lies according to the first threshold, the second threshold and the error tolerance value;
and determining the negative correlation function according to that interval.
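A sketch of one reading of claims 9 and 10: a linear negative correlation function whose output interval is fixed by the configured first threshold, second threshold and error tolerance, with clamping at both ends; the concrete values are assumptions:

    def make_score_fn(first_threshold, second_threshold, tolerance):
        # The output interval [second_threshold, first_threshold] and the
        # error tolerance determine the (here linear) negative correlation
        # function; scores outside the interval are clamped per claim 9.
        def score(abs_diff):
            raw = first_threshold - (first_threshold - second_threshold) * abs_diff / tolerance
            return min(first_threshold, max(second_threshold, raw))
        return score

    score = make_score_fn(first_threshold=100.0, second_threshold=0.0, tolerance=20.0)
    print(score(0.0), score(10.0), score(40.0))   # 100.0 50.0 0.0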
11. The method of claim 6, further comprising, prior to acquiring the position information of the at least two key points of the user's body in the image to be processed:
obtaining dance video data of the user captured by a camera, and determining the image to be processed according to the dance video data;
correspondingly, outputting the evaluation result comprises:
and playing the dance video captured by the camera, and displaying the evaluation result corresponding to the current image.
12. The method of claim 11, further comprising:
acquiring a reference image corresponding to the image to be processed in a reference video that the user imitates;
calculating position information of the at least two key points in the reference image;
and determining a reference angle corresponding to the image to be processed according to the position information of the key point in the reference image.
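The reference angles of claim 12 follow from applying the same keypoint-to-angle computation to the reference frame; in this sketch the pose estimator is passed in by the caller, since the disclosure does not name a concrete model:

    import math

    def reference_angles_for_frame(reference_frame, segments, detect_keypoints):
        # detect_keypoints: pose estimator returning {keypoint_id: (x, y)}
        # segments:         iterable of (id_a, id_b) key point pairs
        kps = detect_keypoints(reference_frame)
        return {(a, b): math.degrees(math.atan2(kps[b][1] - kps[a][1],
                                                kps[b][0] - kps[a][0]))
                for (a, b) in segments}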
13. The method of claim 12, wherein playing the dance video captured by the camera comprises:
and simultaneously playing the reference video and the dance video captured by the camera, and displaying the positions of the key points in the reference video and the dance video.
14. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring the position information of at least two key points of the user body in the image to be processed;
the determining module is used for determining an evaluation result of the user posture according to the position information of the at least two key points;
and the output module is used for outputting the evaluation result.
15. An electronic device, comprising: a memory and a processor; wherein
the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of any of claims 1 to 13.
CN201810982473.4A 2018-08-27 2018-08-27 Image processing method and device and electronic equipment Pending CN110866417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810982473.4A CN110866417A (en) 2018-08-27 2018-08-27 Image processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN110866417A (en) 2020-03-06

Family

ID=69651258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810982473.4A Pending CN110866417A (en) 2018-08-27 2018-08-27 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110866417A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102000430A (en) * 2009-09-01 2011-04-06 深圳泰山在线科技有限公司 Computer-based dance movement judging method
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN104933734A (en) * 2015-06-26 2015-09-23 西安理工大学 Multi-Kinect-based human body gesture data fusion method
CN106073793A (en) * 2016-06-13 2016-11-09 中南大学 Attitude Tracking based on micro-inertia sensor and recognition methods
CN107194361A (en) * 2017-05-27 2017-09-22 成都通甲优博科技有限责任公司 Two-dimentional pose detection method and device
CN108170281A (en) * 2018-01-19 2018-06-15 吉林大学 A kind of work posture analysis system measuring method
CN108446678A (en) * 2018-05-07 2018-08-24 同济大学 A kind of dangerous driving behavior recognition methods based on skeleton character

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395472A (en) * 2020-03-12 2021-09-14 杭州海康威视数字技术股份有限公司 Video-based scoring method and device, electronic equipment and storage medium
CN111639605A (en) * 2020-06-01 2020-09-08 影子江湖文化(北京)有限公司 Human body action scoring method based on machine vision
CN111639605B (en) * 2020-06-01 2024-04-26 影子江湖文化(北京)有限公司 Human body action scoring method based on machine vision


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200306)