CN111428665A - Information determination method, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111428665A
Authority
CN
China
Prior art keywords
key point
posture
special effect
matching degree
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010241288.7A
Other languages
Chinese (zh)
Other versions
CN111428665B
Inventor
李立锋
白保军
颜忠伟
王科
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN202010241288.7A
Publication of CN111428665A
Application granted
Publication of CN111428665B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features

Abstract

The invention discloses an information determination method, an information determination device and a computer-readable storage medium, relating to the field of video processing technologies, with the aim of making a special effect accurately reflect a user's posture. The method comprises the following steps: acquiring an image of a target object; extracting information of the posture of the target object from the image; determining, according to the posture information, the matching degree between the posture of the target object and a preset posture; and determining the special effect strength of the posture's special effect according to the matching degree. With the embodiments of the invention, the special effect can accurately reflect the user's posture.

Description

Information determination method, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to an information determining method, an information determining apparatus, and a computer-readable storage medium.
Background
A user may imitate gestures (e.g., hand gestures, body motions, etc.) shown in a video, and a special effect is then rendered based on the user's imitation. In the prior art, however, the rendered special effect is the same for every posture, regardless of how well it is performed. The prior-art method therefore cannot make the special effect accurately reflect the user's posture.
Disclosure of Invention
The embodiments of the invention provide an information determination method, an information determination device and a computer-readable storage medium, which are used to make a special effect accurately reflect a user's posture.
In a first aspect, an embodiment of the present invention provides an information determining method, including:
acquiring an image of a target object;
extracting information of the pose of the target object from the image;
according to the information of the postures, determining the matching degree between the posture of the target object and a preset posture;
and determining the special effect strength of the special effect of the posture according to the matching degree.
Wherein the extracting of the information of the pose of the target object from the image comprises:
extracting first pose key points from the image, wherein the number of the first pose key points is at least three;
calculating, for a first key point, a second key point and a third key point among the first pose key points, an included angle between a first connecting line and a second connecting line;
the first key point, the second key point and the third key point are any three sequentially adjacent key points among the first pose key points; the second key point is located between the first key point and the third key point;
the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point.
Determining the matching degree between the posture of the target object and a preset posture according to the information of the posture, wherein the determining comprises the following steps:
determining a second posture key point of the preset posture;
calculating a second matching degree between a first included angle in the included angles and a second included angle in the preset posture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the posture of the target object and the preset posture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second posture key point and respectively correspond to the three key points forming the first included angle in the first posture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
Determining the matching degree between the posture of the target object and a preset posture according to the information of the posture, wherein the determining comprises the following steps:
determining a second posture key point of the preset posture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first posture key points, determining a corresponding eighth key point in the second posture key points, and adjusting the posture of the target object to enable the seventh key point and the eighth key point to be coincident;
for a ninth key point in the first pose key points, determining a corresponding tenth key point in the second pose key points, and adjusting the pose of the target object so that the distance between the ninth key point and the tenth key point is minimum;
respectively calculating the distance between a first target key point among the first posture key points and a second target key point of the preset posture; the first target key point is any one of the first posture key points, and the second target key point is the key point of the preset posture corresponding to the first target key point;
and carrying out normalization processing on the obtained at least one distance to obtain the matching degree between the posture of the target object and the preset posture.
Wherein the determining the special effect strength of the special effect of the gesture according to the matching degree comprises:
for a target special effect parameter corresponding to the special effect, taking the sum of a minimum parameter value corresponding to the target special effect parameter and a first numerical value as the special effect strength of the target special effect parameter;
the first value is a product of a difference between a maximum parameter value and a minimum parameter value corresponding to the target special-effect parameter and the matching degree.
Wherein the information of the gesture comprises information of at least one sub-gesture that is continuous in time;
the determining the matching degree between the posture of the target object and a preset posture comprises:
according to the information of the at least one sub-gesture, respectively determining the matching degree between the at least one sub-gesture and the preset gesture to obtain at least one matching degree;
and processing the at least one matching degree by utilizing a dynamic time warping algorithm (DTW), and taking a processing result as the matching degree between the posture of the target object and a preset posture.
Wherein the method further comprises:
and carrying out normalization processing on the special effect parameters of the special effect.
Wherein after determining the special effect strength of the special effect according to the matching degree, the method further comprises:
displaying the special effect with the special effect strength.
In a second aspect, an embodiment of the present invention further provides an information determining device, including: a memory, a processor, and a program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the information determination method described above.
In a third aspect, the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the information determination method described above.
In the embodiment of the invention, the posture of the target object is matched with the preset posture to obtain the matching degree, and then the special effect strength of the special effect corresponding to the posture of the target object is determined according to the matching degree. Therefore, in the embodiment of the invention, the matching condition between the posture of the target object and the preset posture can be distinguished to determine the special effect strength of the special effect, so that the special effect of the posture of the user can be accurately reflected by utilizing the scheme of the embodiment of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of an information determination method provided by an embodiment of the present invention;
FIG. 2 is one of the schematic diagrams of key points of a human body provided by the embodiment of the invention;
FIG. 3 is a second schematic diagram of key points of a human body according to an embodiment of the present invention;
fig. 4 is a block diagram of an information determining apparatus provided in an embodiment of the present invention;
fig. 5 is a block diagram of an information determining apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an information determining method according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step 101, acquiring an image of a target object.
The target object may be a human being, or may be another object, such as an animal. In the embodiment of the present invention, an image of the target object, such as a 2D image or a 3D image, may be captured by the camera.
And 102, extracting the information of the posture of the target object from the image.
In the image, the target object — taking a person as an example — may perform a certain action and thereby present different postures. The posture information can be embodied by posture key points and the included angles formed between them.
In practical applications, the posture key points of the target object in the image can be detected, according to information such as the type of the action, by a human skeleton key point detection algorithm. Usually, the posture key points are points on joints, such as the wrist, elbow and shoulder joints, and each joint may carry one or more key points.
Specifically, in this step, first pose key points are extracted from the image, wherein the number of the first pose key points is at least three. Wherein the first pose keypoints may be located on different joints.
Then, an included angle formed by any three sequentially adjacent key points is calculated. Here, "sequentially adjacent" means the three key points have a fixed relative order. For example, for the human body, key points A, B and C are located on the shoulder joint, elbow joint and wrist joint respectively, in order from head to foot. Because the shoulder, elbow and wrist joints have a clear relative positional relationship, the three points A, B and C can be considered sequentially adjacent.
Specifically, when calculating the included angle, for a first key point, a second key point and a third key point among the first pose key points, the included angle between the first connecting line and the second connecting line is calculated. The first key point, the second key point and the third key point are any three sequentially adjacent key points among the first pose key points; the first connecting line is the line between the second key point and the first key point, and the second connecting line is the line between the second key point and the third key point. The second key point is located between the first key point and the third key point, that is, the position of the second key point lies between the positions of the first and third key points.
As shown in FIG. 2, key points J (Jx, Jy, Jz), E (Ex, Ey, Ez) and W (Wx, Wy, Wz) are located at the shoulder joint, elbow joint and wrist joint respectively. Points J, E and W correspond to the first, second and third key points (or, equivalently, points W, E and J do). The line between E and J can be taken as the first connecting line and the line between E and W as the second connecting line, or vice versa.
The included angle at key point E is then calculated as follows:

$$\vec{EJ} = (J_x - E_x,\; J_y - E_y,\; J_z - E_z), \qquad \vec{EW} = (W_x - E_x,\; W_y - E_y,\; W_z - E_z)$$

$$\cos\theta = \frac{\vec{EJ} \cdot \vec{EW}}{\lVert \vec{EJ} \rVert \, \lVert \vec{EW} \rVert}$$

where $\vec{EJ}$ denotes the vector of the line between key point E and key point J, $\vec{EW}$ the vector of the line between key point E and key point W, and $\cos\theta$ the cosine of the included angle between the two vectors. The included angle between the connecting lines EJ and EW is obtained from this cosine value.
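The angle computation above can be sketched as follows; the function name and the use of NumPy are illustrative choices, not taken from the patent.

```python
import numpy as np

def joint_angle(j, e, w):
    """Included angle (in degrees) at key point E between lines E-J and E-W.

    j, e, w: 3D coordinates of, e.g., the shoulder (J), elbow (E) and
    wrist (W) key points from the example above.
    """
    ej = np.asarray(j, dtype=float) - np.asarray(e, dtype=float)
    ew = np.asarray(w, dtype=float) - np.asarray(e, dtype=float)
    cos_theta = np.dot(ej, ew) / (np.linalg.norm(ej) * np.linalg.norm(ew))
    # Clamp to [-1, 1] to guard against floating-point drift before arccos.
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# A fully extended arm gives an angle of 180 degrees:
print(joint_angle((0, 2, 0), (0, 1, 0), (0, 0, 0)))  # → 180.0
```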
In this step, the included angle formed by any three sequentially adjacent key points can be calculated.
And 103, determining the matching degree between the posture of the target object and a preset posture according to the information of the posture.
Since the magnitude of the angle may represent the magnitude of the motion amplitude, the degree of match between two motions or poses may be determined based on the match between angles. If the matching degree meets the preset requirement, for example, the matching degree is greater than a certain preset value, a special effect corresponding to the posture of the target object can be triggered.
The preset postures can be regarded as the postures of certain predefined standard actions, which may include body and limb movements, finger movements, facial movements, and so on. The standard action that triggers a given type of special effect can be set according to the effect type. For example, for the sound effect of hitting a ball, the corresponding standard action is the hitting motion; that is, if a hitting motion is detected, the ball-hitting sound effect may be triggered. Images of these preset postures may be stored in advance, together with the key point information of the postures and the included angles formed by those key points.
After the image of the target object is acquired, image recognition can be performed according to information such as the application scene of the target object, so that an image which can be used for matching can be found from the stored images. For example, for a user image obtained in a baseball game, an image for matching may be selected from a library of images related to baseball actions stored in advance.
In the embodiment of the present invention, the matching degree between the posture of the target object and the preset posture can be determined in at least two ways.
In one approach, the following steps may be included:
and step 1031a, determining a second posture key point of the preset posture.
As shown in fig. 3, the skeletal points of the human body are divided into three categories: points 303, 305, 309, 411, 413 etc. on the joints on the left side of the body, points 304, 306, 410, 412, 414 etc. on the joints on the right side of the body, and key points 301, 302 on the head. In general, the key points refer to points on joints on the left and right sides of the human body. Here, the second pose key points in the preset pose may be determined according to the above human skeleton key point detection algorithm, and the number of the second pose key points is at least three.
Wherein the second pose keypoints may be pre-labeled.
Step 1031b, calculating a second matching degree between a first included angle in the included angles and a second included angle in the preset posture.
The second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second posture key point and respectively correspond to the three key points forming the first included angle in the first posture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
If the first included angle is determined based on the first, second and third key points among the first pose key points, then the second included angle is calculated from the fourth, fifth and sixth key points among the second posture key points, which correspond to the first, second and third key points respectively.
For example, if the first, second and third key points are, in turn, the key points of the left shoulder, left elbow and left wrist of the target object, then the fourth, fifth and sixth key points are, in turn, the key points of the left shoulder, left elbow and left wrist of the preset posture.
In the embodiment of the invention, the included angles formed by three sequentially adjacent key points can be calculated with the angle calculation method above, traversing the key points in order from head to foot or from foot to head. Alternatively, the angles of the preset posture may be calculated in advance, in which case the pre-computed second included angle is simply retrieved here.
In this way, the matching degree between the first included angle and the second included angle can be represented by the difference between the two included angles. The smaller the absolute value of the difference, the closer the two angles are.
Step 1031c, normalizing the obtained at least one second matching degree to obtain the matching degree between the posture of the target object and the preset posture.
In the embodiment of the invention, an error range is set for each included angle of the preset posture. If the difference between the first included angle and the second included angle falls within the corresponding error range, the two angles are considered matched; otherwise they are considered unmatched.
In this step, normalization processing is performed on the obtained at least one second matching degree, so as to obtain the matching degree between the posture of the target object and the preset posture. The matching degree is a number between 0 and 1 inclusive.
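One plausible reading of steps 1031b and 1031c — a per-angle tolerance check followed by normalization — can be sketched as below. The fraction-of-matched-angles formula and all names are assumptions; the patent sets per-angle error ranges but leaves the exact normalization open.

```python
def pose_match_by_angles(observed, reference, tolerances):
    """Matching degree in [0, 1] between observed and preset pose angles.

    observed, reference: lists of corresponding included angles (degrees);
    tolerances: the per-angle error ranges described above. An angle pair
    whose absolute difference falls within its tolerance counts as
    matched; the fraction of matched angles serves as the normalized
    matching degree (an assumption, not the patent's fixed formula).
    """
    matched = sum(
        1 for a, b, tol in zip(observed, reference, tolerances)
        if abs(a - b) <= tol
    )
    return matched / len(reference)

# Two of three angles fall within their error ranges:
print(pose_match_by_angles([170, 92, 45], [180, 90, 40], [15, 5, 4]))
```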
In another mode, the method can comprise the following steps:
step 1032a, determining a second posture key point of the preset posture.
The description of this step may refer to the description of step 1031a previously described.
Step 1032b, adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture.
Here, the size of the target object and the size of the preset posture may be normalized, so that the size of the target object in the image of the target object is adjusted to be consistent with the size of the preset posture.
Step 1032c, for a seventh key point in the first pose key points, determining a corresponding eighth key point in the second pose key points, and adjusting the pose of the target object, so that the seventh key point and the eighth key point are overlapped.
Wherein the seventh keypoint may be any keypoint. In general, the seventh keypoint may be a keypoint on the leg or, alternatively, the first keypoint in the direction from the foot towards the head.
In this step, the posture (e.g., 3D posture) of the target object is adjusted with the seventh keypoint as the center so that the seventh keypoint and the eighth keypoint coincide. And the eighth key point is a key point which is positioned at the same position on the preset posture as the seventh key point.
Step 1032d, for a ninth key point in the first pose key points, determining a corresponding tenth key point in the second pose key points, and adjusting the pose of the target object so that the distance between the ninth key point and the tenth key point is minimum.
The ninth keypoint may be a keypoint of a human body part adjacent to the seventh keypoint and located above the position where the seventh keypoint is located in the pose of the target object.
Step 1032e, respectively calculating the distance between a first target key point in the first pose key points and a second target key point in the preset pose. The first target key point is any one of the key points in the first posture, and the second target key point is any one of the key points in the preset posture and corresponds to the first target key point.
That is, for a keypoint of the first pose keypoint and the second pose keypoint, a linear distance between any two corresponding keypoints is calculated.
Step 1032f, normalization processing is carried out on the obtained at least one distance, and matching degree between the posture of the target object and the preset posture is obtained.
Also, a distance range may be set for each keypoint. If the distance between the first target key point and the second target key point is within the corresponding distance range, the first target key point and the second target key point are considered to be matched; otherwise the two may be considered to be mismatched.
In this step, normalization processing is performed on the obtained at least one distance, so as to obtain the matching degree between the posture of the target object and the preset posture. The matching degree is a number between 0 and 1 inclusive.
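The distance-based method of steps 1032a-1032f can be sketched as below. For brevity the rotation of step 1032d is omitted, and the fraction-of-matched-points normalization, the scaling proxy, and all names are assumptions rather than the patent's fixed formulas.

```python
import numpy as np

def pose_match_by_distances(first_kps, preset_kps, anchor=0, ranges=None):
    """Matching degree in [0, 1] via key-point distances (steps 1032a-1032f).

    first_kps, preset_kps: (N, 3) arrays of corresponding key points.
    anchor: index of the pair made to coincide (the "seventh"/"eighth"
    key points, e.g. a leg key point). ranges: per-point distance
    thresholds; a pair whose distance falls within its range counts as
    matched.
    """
    a = np.asarray(first_kps, dtype=float)
    b = np.asarray(preset_kps, dtype=float)
    # Step 1032b: scale the target pose to the size of the preset pose.
    a = a * (np.linalg.norm(b - b.mean(0)) / np.linalg.norm(a - a.mean(0)))
    # Step 1032c: translate so the anchor key points coincide.
    a = a + (b[anchor] - a[anchor])
    # Steps 1032e-f: per-point distances, then normalize to [0, 1].
    dists = np.linalg.norm(a - b, axis=1)
    if ranges is None:
        ranges = np.full(len(b), 0.1)
    return float(np.mean(dists <= np.asarray(ranges)))
```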
And step 104, determining the special effect strength of the special effect of the posture according to the matching degree.
In this step, for a target special effect parameter corresponding to the special effect, a sum of a minimum parameter value corresponding to the target special effect parameter and a first numerical value is used as a special effect strength of the target special effect parameter. The first value is a product of a difference between a maximum parameter value and a minimum parameter value corresponding to the target special-effect parameter and the matching degree. The target special effect parameters may be, for example, brightness, contrast, size, speed, frequency, etc., and the special effect effects may be, for example, sound, light, etc. The special effect strength refers to the size of a certain special effect parameter in the special effect, such as the size of sound of the special effect, the intensity of light, the size of action speed and the like.
In the embodiment of the present invention, in order to make the determined special effect strength more accurate, normalization processing may further be performed on the special effect parameters, for example on brightness, contrast, size, speed and frequency respectively. The normalized strength of a given special effect parameter is then: the weakest value of that parameter plus the product of the matching degree and the difference between the strongest and weakest values of that parameter.
Taking a special effect as an example of flame, the minimum special effect, the maximum special effect and the normalized special effect strength corresponding to the special effect parameters are shown in the following table 1:
TABLE 1
[Table 1 is an image in the original publication; it lists, for each flame-effect parameter, the minimum special effect value, the maximum special effect value and the normalized special effect strength.]
Using Table 1, once the matching degree is obtained, the special effect strength corresponding to a given parameter can be calculated. The better the key points match, the stronger the special effect.
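The strength formula of step 104 — minimum parameter value plus (maximum − minimum) × matching degree — can be sketched as follows; the example parameter values are illustrative and not taken from Table 1.

```python
def effect_strength(p_min, p_max, matching_degree):
    """Special effect strength of one target special effect parameter:
    the minimum parameter value plus the product of the matching degree
    and the difference between the maximum and minimum parameter values."""
    return p_min + (p_max - p_min) * matching_degree

# A hypothetical flame-size parameter ranging from 10 to 50 units:
print(effect_strength(10, 50, 0.5))  # → 30.0
```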
In the embodiment of the invention, the posture of the target object is matched with the preset posture to obtain the matching degree, and then the special effect strength of the special effect corresponding to the posture of the target object is determined according to the matching degree. Therefore, in the embodiment of the invention, the matching condition between the posture of the target object and the preset posture can be distinguished to determine the special effect strength of the special effect, so that the special effect of the posture of the user can be accurately reflected by utilizing the scheme of the embodiment of the invention.
In addition, the effect of the special effect with the strength of the special effect can be displayed, so that a user can know the matching degree of the action conveniently. Or, the special effect with the special effect strength can be displayed under the condition that the obtained matching degree meets the preset requirement. The preset requirement may be, for example, that the matching degree is greater than a certain value, and the value may be set as needed.
In practical applications, the posture of the target object may last for a period of time or be made up of multiple postures that are continuous in time. Correspondingly, the posture information then comprises information of at least one temporally continuous sub-posture. When determining the matching degree in this case, in order to make the resulting special effect strength more accurate, the matching degree between each sub-posture and the preset posture can be determined from the information of that sub-posture, yielding at least one matching degree; the manner of determining each matching degree may follow the foregoing embodiments. The at least one matching degree is then processed with a dynamic time warping (DTW) algorithm, and the processing result is taken as the matching degree between the posture of the target object and the preset posture. In this way, the matching degree of each sub-posture over the continuous change of posture is taken into account, yielding an overall matching degree value.
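The DTW step above can be sketched as below, aligning the sequence of observed sub-postures against the sequence of reference sub-postures. Using cost = 1 − match and normalizing the accumulated DTW cost by max(n, m) are assumptions; the patent names DTW but does not specify the cost or normalization.

```python
import numpy as np

def dtw_matching_degree(pair_match):
    """Overall matching degree between a sequence of observed sub-postures
    and a sequence of reference sub-postures via dynamic time warping.

    pair_match: (n, m) matrix; pair_match[i, j] is the matching degree in
    [0, 1] between observed sub-posture i and reference sub-posture j,
    computed with either per-frame method described in step 103.
    """
    cost = 1.0 - np.asarray(pair_match, dtype=float)
    n, m = cost.shape
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Classic DTW recurrence: extend the cheapest warping path.
            d[i, j] = cost[i - 1, j - 1] + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(max(0.0, 1.0 - d[n, m] / max(n, m)))

# Sub-postures matching perfectly along the diagonal yield a degree of 1.0:
print(dtw_matching_degree(np.eye(3)))  # → 1.0
```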
In the above embodiments, the key points, and the included angles formed between them, can also be configured for each standard posture. The selection of key points determines the strength of the special effect: the effect is triggered once the key-point matching meets a certain condition, and the degree of matching influences the effect strength. For example, when the user imitates Sun Wukong launching a turtle-wave qigong blast, the special effect is triggered if the foot and waist motions reach a certain matching degree, and the closer the palm movement is to the preset movement, the stronger the qigong-wave effect.
The embodiment of the invention also provides an information determining device. Referring to fig. 4, fig. 4 is a block diagram of an information determination apparatus according to an embodiment of the present invention. Because the principle of solving the problem of the information determining device is similar to the information determining method in the embodiment of the present invention, the implementation of the information determining device can refer to the implementation of the method, and repeated details are not repeated.
As shown in fig. 4, the information determining apparatus 400 includes:
a first obtaining module 401, configured to obtain an image of a target object; a first extraction module 402, configured to extract information of a pose of the target object from the image; a first determining module 403, configured to determine, according to the information of the posture, a matching degree between the posture of the target object and a preset posture; a second determining module 404, configured to determine, according to the matching degree, a special effect strength of the special effect of the gesture.
Optionally, the first extraction module 402 may include:
the first extraction submodule, configured to extract first posture key points from the image, wherein the number of the first posture key points is at least three; the first calculation submodule, configured to calculate, for a first key point, a second key point and a third key point in the first posture key points, an included angle between a first connecting line and a second connecting line; the first key point, the second key point and the third key point are any three sequentially adjacent key points in the first posture key points; the second key point is located between the first key point and the third key point; the first connecting line is the connecting line between the second key point and the first key point, and the second connecting line is the connecting line between the second key point and the third key point.
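The included-angle computation described above can be sketched as follows; `joint_angle` is a hypothetical name, and 2-D key points are assumed.

```python
import math

def joint_angle(p1, p2, p3):
    """Angle at p2 (the middle key point) formed by the connecting lines
    p2->p1 and p2->p3, returned in degrees."""
    v1 = (p1[0] - p2[0], p1[1] - p2[1])
    v2 = (p3[0] - p2[0], p3[1] - p2[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # clamp to guard against floating-point drift outside [-1, 1]
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_a))
```

For instance, an elbow angle would use the shoulder, elbow, and wrist key points as `p1`, `p2`, `p3`.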
Optionally, the first determining module 403 may include:
the first determining submodule, configured to determine second posture key points of the preset posture; the first calculation submodule, configured to calculate a second matching degree between a first included angle among the included angles and a second included angle in the preset posture; the first obtaining submodule, configured to normalize the obtained at least one second matching degree to obtain the matching degree between the posture of the target object and the preset posture; the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line being the connecting line between a fourth key point and a fifth key point, and the fourth connecting line being the connecting line between the fifth key point and a sixth key point; the fourth key point, the fifth key point and the sixth key point are three sequentially adjacent key points in the second posture key points that respectively correspond to the three key points forming the first included angle in the first posture key points;
the fifth key point is located between the fourth key point and the sixth key point.
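A minimal sketch of the angle-based matching just described, assuming the per-angle second matching degree is derived from the absolute angle difference (with 180 degrees as the worst case) and the normalization is a simple mean — both choices are assumptions, since the patent does not fix them.

```python
def angle_matching_degree(first_angles, preset_angles):
    """Overall matching degree from corresponding included angles.

    first_angles:  included angles (degrees) from the first posture key points
    preset_angles: corresponding included angles of the preset posture
    Each pair yields a second matching degree in [0, 1]; the overall
    matching degree is their normalized mean."""
    per_angle = [1.0 - abs(a - b) / 180.0
                 for a, b in zip(first_angles, preset_angles)]
    return sum(per_angle) / len(per_angle)
```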
Optionally, the first determining module 403 may include:
the second determining submodule, configured to determine second posture key points of the preset posture; the first adjusting submodule, configured to adjust the size of the target object in the image of the target object to be consistent with the size of the preset posture; the second adjusting submodule, configured to determine, for a seventh key point in the first posture key points, a corresponding eighth key point in the second posture key points, and adjust the posture of the target object so that the seventh key point and the eighth key point coincide; the third adjusting submodule, configured to determine, for a ninth key point in the first posture key points, a corresponding tenth key point in the second posture key points, and adjust the posture of the target object so that the distance between the ninth key point and the tenth key point is minimal; the second calculation submodule, configured to respectively calculate the distance between each first target key point in the first posture key points and the corresponding second target key point in the preset posture, where the first target key point is any one of the first posture key points and the second target key point is the key point in the preset posture that corresponds to the first target key point; and the second obtaining submodule, configured to normalize the obtained at least one distance to obtain the matching degree between the posture of the target object and the preset posture.
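The scale/translate/rotate alignment described by these submodules can be sketched as below; the anchor and alignment key-point indices, and the distance-to-matching-degree normalization, are assumptions for illustration.

```python
import numpy as np

def aligned_distance_matching(first_kps, preset_kps, anchor=0, align=1):
    """Alignment-based matching degree (hypothetical normalization).

    Scales the target key points to the preset posture's size, translates
    so the anchor key points coincide, rotates about the anchor so the
    alignment key points are closest, then maps the mean residual
    key-point distance to a [0, 1] matching degree."""
    a = np.asarray(first_kps, dtype=float)
    b = np.asarray(preset_kps, dtype=float)
    # 1) scale the target to the preset's overall size
    a = a * (np.linalg.norm(b - b.mean(0)) / np.linalg.norm(a - a.mean(0)))
    # 2) translate so the anchor key points coincide
    a = a + (b[anchor] - a[anchor])
    # 3) rotate about the anchor so the alignment key points line up
    va, vb = a[align] - a[anchor], b[align] - b[anchor]
    ang = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
    rot = np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
    a = (a - a[anchor]) @ rot.T + a[anchor]
    # 4) normalize the mean residual distance into a matching degree
    mean_d = np.linalg.norm(a - b, axis=1).mean()
    size = np.linalg.norm(b - b.mean(0))
    return float(1.0 / (1.0 + mean_d / size))
```

A posture that differs from the preset one only by position, scale, and in-plane rotation thus scores a full matching degree of 1.0.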
Optionally, the second determining module 404 is specifically configured to, for a target special effect parameter corresponding to the special effect, take the sum of the minimum parameter value corresponding to the target special effect parameter and a first numerical value as the special effect strength of the target special effect parameter; the first numerical value is the product of the matching degree and the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter.
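The strength rule in this paragraph — minimum parameter value plus the (maximum − minimum) range scaled by the matching degree — can be stated directly; the function name is illustrative.

```python
def effect_strength(min_value, max_value, matching_degree):
    """Special effect strength for one target special effect parameter:
    the minimum parameter value plus the parameter range scaled by the
    matching degree (matching_degree assumed normalized to [0, 1])."""
    return min_value + (max_value - min_value) * matching_degree
```

A matching degree of 0 yields the minimum parameter value, and a perfect match yields the maximum, so the effect scales smoothly with how well the posture is imitated.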
Optionally, the information of the posture includes information of at least one temporally continuous sub-posture; the first determining module 403 may include:
the third determining submodule, configured to determine, according to the information of the at least one sub-posture, the matching degree between each sub-posture and the preset posture, to obtain at least one matching degree; and the fourth determining submodule, configured to process the at least one matching degree using DTW and take the processing result as the matching degree between the posture of the target object and the preset posture.
Optionally, the apparatus may further include:
and the processing module is used for carrying out normalization processing on the special effect parameters of the special effect.
Optionally, the apparatus may further include: a display module, configured to display the special effect with the special effect strength.
The apparatus provided in the embodiment of the present invention may implement the method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
As shown in fig. 5, the information determining apparatus according to the embodiment of the present invention includes: a processor 500, configured to read the program in the memory 520 and execute the following processes:
acquiring an image of a target object;
extracting information of the posture of the target object from the image;
determining, according to the information of the posture, the matching degree between the posture of the target object and a preset posture;
and determining the special effect strength of the special effect of the posture according to the matching degree.
Wherein in fig. 5, the bus architecture may include any number of interconnected buses and bridges, with one or more processors, represented by processor 500, and various circuits, represented by memory 520, being linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface. The processor 500 is responsible for managing the bus architecture and general processing, and the memory 520 may store data used by the processor 500 in performing operations.
The processor 500 is further configured to read the program and execute the following steps:
extracting first posture key points from the image, wherein the number of the first posture key points is at least three;
calculating, for a first key point, a second key point and a third key point in the first posture key points, an included angle between a first connecting line and a second connecting line;
the first key point, the second key point and the third key point are any three sequentially adjacent key points in the first posture key points; the second key point is located between the first key point and the third key point;
the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point.
The processor 500 is further configured to read the program and execute the following steps:
determining a second posture key point of the preset posture;
calculating a second matching degree between a first included angle in the included angles and a second included angle in the preset posture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the posture of the target object and the preset posture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second posture key point and respectively correspond to the three key points forming the first included angle in the first posture key point; the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
The processor 500 is further configured to read the program and execute the following steps:
determining a second posture key point of the preset posture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first posture key points, determining a corresponding eighth key point in the second posture key points, and adjusting the posture of the target object so that the seventh key point and the eighth key point coincide;
for a ninth key point in the first posture key points, determining a corresponding tenth key point in the second posture key points, and adjusting the posture of the target object so that the distance between the ninth key point and the tenth key point is minimal;
respectively calculating the distance between each first target key point in the first posture key points and the corresponding second target key point in the preset posture; the first target key point is any one of the first posture key points, and the second target key point is the key point in the preset posture that corresponds to the first target key point;
and carrying out normalization processing on the obtained at least one distance to obtain the matching degree between the posture of the target object and the preset posture.
The processor 500 is further configured to read the program and execute the following steps:
for a target special effect parameter corresponding to the special effect, taking the sum of a minimum parameter value corresponding to the target special effect parameter and a first numerical value as the special effect strength of the target special effect parameter;
the first numerical value is the product of the matching degree and the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter.
The information of the posture comprises information of at least one sub-posture which is continuous in time; the processor 500 is further configured to read the program and execute the following steps:
determining, according to the information of the at least one sub-posture, the matching degree between each sub-posture and the preset posture, to obtain at least one matching degree;
and processing the at least one matching degree by using a Dynamic Time Warping (DTW) algorithm, and taking the processing result as the matching degree between the posture of the target object and the preset posture.
The processor 500 is further configured to read the program and execute the following steps:
and carrying out normalization processing on the special effect parameters of the special effect.
The processor 500 is further configured to read the program and execute the following steps:
displaying the special effect with the special effect strength.
The device provided by the embodiment of the present invention may implement the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned information determining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. With such an understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An information determination method, comprising:
acquiring an image of a target object;
extracting information of the posture of the target object from the image;
determining, according to the information of the posture, a matching degree between the posture of the target object and a preset posture;
and determining the special effect strength of the special effect of the posture according to the matching degree.
2. The method of claim 1, wherein the extracting information of the posture of the target object from the image comprises:
extracting first posture key points from the image, wherein the number of the first posture key points is at least three;
calculating, for a first key point, a second key point and a third key point in the first posture key points, an included angle between a first connecting line and a second connecting line;
the first key point, the second key point and the third key point are any three sequentially adjacent key points in the first posture key points; the second key point is located between the first key point and the third key point;
the first connecting line is the connecting line between the second key point and the first key point, and the second connecting line is the connecting line between the second key point and the third key point.
3. The method according to claim 2, wherein the determining the matching degree between the posture of the target object and a preset posture according to the information of the posture comprises:
determining a second posture key point of the preset posture;
calculating a second matching degree between a first included angle in the included angles and a second included angle in the preset posture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the posture of the target object and the preset posture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second posture key point and respectively correspond to the three key points forming the first included angle in the first posture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
4. The method according to claim 2, wherein the determining the matching degree between the posture of the target object and a preset posture according to the information of the posture comprises:
determining a second posture key point of the preset posture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first posture key points, determining a corresponding eighth key point in the second posture key points, and adjusting the posture of the target object so that the seventh key point and the eighth key point coincide;
for a ninth key point in the first posture key points, determining a corresponding tenth key point in the second posture key points, and adjusting the posture of the target object so that the distance between the ninth key point and the tenth key point is minimal;
respectively calculating the distance between each first target key point in the first posture key points and the corresponding second target key point in the preset posture; the first target key point is any one of the first posture key points, and the second target key point is the key point in the preset posture that corresponds to the first target key point;
and carrying out normalization processing on the obtained at least one distance to obtain the matching degree between the posture of the target object and the preset posture.
5. The method of claim 1, wherein the determining the special effect strength of the special effect of the posture according to the matching degree comprises:
for a target special effect parameter corresponding to the special effect, taking the sum of a minimum parameter value corresponding to the target special effect parameter and a first numerical value as the special effect strength of the target special effect parameter;
the first numerical value is the product of the matching degree and the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter.
6. The method of claim 1, wherein the information of the posture comprises information of at least one sub-posture that is continuous in time;
the determining the matching degree between the posture of the target object and a preset posture comprises:
determining, according to the information of the at least one sub-posture, the matching degree between each sub-posture and the preset posture, to obtain at least one matching degree;
and processing the at least one matching degree by using a dynamic time warping (DTW) algorithm, and taking the processing result as the matching degree between the posture of the target object and the preset posture.
7. The method of claim 1, further comprising:
and carrying out normalization processing on the special effect parameters of the special effect.
8. The method of claim 1, wherein after the determining the special effect strength of the special effect of the posture according to the matching degree, the method further comprises:
displaying the special effect with the special effect strength.
9. An information determining apparatus comprising: a memory, a processor, and a program stored in the memory and executable on the processor; characterized in that the processor is configured to read the program in the memory to implement the steps in the information determination method according to any one of claims 1 to 8.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the steps in the information determination method according to any one of claims 1 to 8.
CN202010241288.7A 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium Active CN111428665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241288.7A CN111428665B (en) 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010241288.7A CN111428665B (en) 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111428665A true CN111428665A (en) 2020-07-17
CN111428665B CN111428665B (en) 2024-04-12

Family

ID=71551754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241288.7A Active CN111428665B (en) 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111428665B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147023A (en) * 2018-07-27 2019-01-04 北京微播视界科技有限公司 Three-dimensional special efficacy generation method, device and electronic equipment based on face
CN109462776A (en) * 2018-11-29 2019-03-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN110113523A (en) * 2019-03-15 2019-08-09 深圳壹账通智能科技有限公司 Intelligent photographing method, device, computer equipment and storage medium
CN110297929A (en) * 2019-06-14 2019-10-01 北京达佳互联信息技术有限公司 Image matching method, device, electronic equipment and storage medium
US20200082635A1 (en) * 2017-12-13 2020-03-12 Tencent Technology (Shenzhen) Company Limited Augmented reality processing method, object recognition method, and related device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FU Peng; SUN Shijun; WANG Kun; JIN Yuan; CAI Hanhui; SUN Quansen; ZHU Jin: "Simulation Study on the Influence of Image MTF on the Accuracy of Stereo Positioning Measurement" *
LUO Huilan; FENG Yujie; KONG Fansheng: "Action Recognition Fusing Multi-Pose Estimation Features" *

Also Published As

Publication number Publication date
CN111428665B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
US8639020B1 (en) Method and system for modeling subjects from a depth map
US20180321776A1 (en) Method for acting on augmented reality virtual objects
CN108304819B (en) Gesture recognition system and method, and storage medium
CN111191599A (en) Gesture recognition method, device, equipment and storage medium
Maisto et al. An accurate algorithm for the identification of fingertips using an RGB-D camera
JP2019096113A (en) Processing device, method and program relating to keypoint data
Anilkumar et al. Pose estimated yoga monitoring system
CN113658211B (en) User gesture evaluation method and device and processing equipment
Lee et al. Kinect who's coming—applying Kinect to human body height measurement to improve character recognition performance
JP2010113530A (en) Image recognition device and program
CN109740511B (en) Facial expression matching method, device, equipment and storage medium
CN111368787A (en) Video processing method and device, equipment and computer readable storage medium
CN108509924B (en) Human body posture scoring method and device
CN111103981A (en) Control instruction generation method and device
CN111639615B (en) Trigger control method and device for virtual building
Xu et al. A novel method for hand posture recognition based on depth information descriptor
CN111353347B (en) Action recognition error correction method, electronic device, and storage medium
CN111428665B (en) Information determination method, equipment and computer readable storage medium
CN112418153A (en) Image processing method, image processing device, electronic equipment and computer storage medium
Huang et al. A skeleton-occluded repair method from Kinect
Putz-Leszczynska et al. Gait biometrics with a Microsoft Kinect sensor
CN111611941A (en) Special effect processing method and related equipment
CN111722710A (en) Method for starting augmented reality AR interactive learning mode and electronic equipment
CN111462337A (en) Image processing method, device and computer readable storage medium
CN110781857A (en) Motion monitoring method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant