WO2020147797A1 - Image processing method and apparatus, image device and storage medium - Google Patents
Image processing method and apparatus, image device and storage medium
- Publication number
- WO2020147797A1 · PCT/CN2020/072550 (CN2020072550W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rotation information
- node
- nodes
- coordinate system
- child node
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present disclosure relates to the field of information technology, and in particular to an image processing method and device, image equipment and storage medium.
- embodiments of the present disclosure provide an image processing method and device, image equipment, and storage medium.
- the present disclosure provides an image processing method, including:
- acquiring the respective first rotation information of the multiple nodes of the target based on the image includes: acquiring, based on the image, the respective first rotation information of the multiple nodes of the target in the camera coordinate system.
- determining the second rotation information of each child node of the multiple nodes of the target relative to its parent node according to the hierarchical relationship between the nodes and the plurality of pieces of first rotation information includes: taking each node of the plurality of nodes as a child node; and determining, according to the hierarchical relationship and the plurality of pieces of first rotation information, the second rotation information of the local coordinate system of the child node relative to the local coordinate system of the parent node of the child node.
- the method further includes: for each node of the plurality of nodes, determining the local coordinate system in which the node is located according to the hierarchical relationship.
- determining the local coordinate system where the node is located according to the hierarchical relationship includes: taking the node as a child node; taking, according to the hierarchical relationship, the direction in which the child node points to its parent node as the first direction of the local coordinate system; taking the direction perpendicular to the plane in which the child node and its two connected nodes are located as the second direction of the local coordinate system; and taking the orientation of a predetermined part of the target as the third direction of the local coordinate system.
- acquiring the respective first rotation information of the multiple nodes of the target in the camera coordinate system based on the image includes: acquiring, based on the image, the respective first quaternions of the multiple nodes of the target in the camera coordinate system.
- determining the second rotation information of each child node of the multiple nodes of the target relative to its parent node according to the hierarchical relationship between the nodes and the plurality of pieces of first rotation information includes: taking, according to the hierarchical relationship, each node of the multiple nodes as a child node and determining the parent node of the child node; and, for each child node, determining the second quaternion of the child node relative to its parent node based on the first quaternion of the parent node of the child node in the camera coordinate system and the first quaternion of the child node in the camera coordinate system.
- controlling the movement of the nodes corresponding to the plurality of nodes in the controlled model includes: for each piece of second rotation information, determining whether the second rotation information satisfies a constraint condition; and, when the second rotation information satisfies the constraint condition, controlling, according to the second rotation information, the movement of the node in the controlled model corresponding to the child node associated with the second rotation information.
- determining whether the second rotation information satisfies a constraint condition includes: converting the second rotation information into a first angle in a coordinate system of a predetermined type; determining whether the first angle is within a preset angle range; and, if the first angle is within the preset angle range, determining that the second rotation information satisfies the constraint condition.
- the method further includes: in a case where the first angle is outside the preset angle range, determining that the second rotation information does not satisfy the constraint condition.
- the method further includes: in the case that the second rotation information does not satisfy the constraint condition, correcting the second rotation information to obtain third rotation information; and controlling, according to the third rotation information, the movement of the node in the controlled model corresponding to the child node associated with the second rotation information.
- correcting the second rotation information to obtain the third rotation information includes: if the first angle corresponding to the second rotation information is outside the preset angle range, correcting the first angle according to a preset angle to obtain a second angle; and obtaining the third rotation information according to the second angle.
- the predetermined type of coordinate system is an Euler coordinate system.
- controlling the movement of the nodes corresponding to the plurality of nodes in the controlled model includes: performing posture defect correction on a plurality of pieces of the second rotation information, or on a plurality of pieces of third rotation information generated from the plurality of pieces of second rotation information, to obtain a plurality of pieces of corrected fourth rotation information; and using the plurality of pieces of fourth rotation information to control the movement of the nodes in the controlled model corresponding to the respective nodes.
- the posture defect correction includes at least one of the following: a same-side defect of the upper and lower limbs; a bow-legged motion defect; an out-toed (splay-footed) motion defect; a concave-foot motion defect.
- the method further includes: obtaining a posture defect correction parameter according to difference information between the shape of the target and a standard shape; wherein the posture defect correction parameter is used for correction of the second rotation information or the third rotation information.
- the method further includes: obtaining a posture defect correction parameter according to difference information between the shape of the target and a standard shape, wherein the posture defect correction parameter is used to correct a plurality of pieces of the second rotation information or the third rotation information.
- an image processing device including:
- the first acquisition module is used to acquire an image; the second acquisition module is used to acquire respective first rotation information of multiple nodes of the target based on the image; the first determination module is used to determine, according to the hierarchical relationship between the nodes and the plurality of pieces of first rotation information, the second rotation information of each child node of the plurality of nodes of the target relative to its parent node; and the control module is configured to control, based on the plurality of pieces of second rotation information, the movement of the nodes corresponding to the multiple nodes in the controlled model.
- an image device including:
- a memory; and a processor, connected to the memory and configured to implement any one of the above image processing methods by executing computer-executable instructions stored on the memory.
- the present disclosure provides a non-volatile computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed by a processor, any one of the aforementioned image processing methods can be implemented.
- the technical solution provided by the embodiments of the present disclosure obtains the first rotation information and the second rotation information of the multiple nodes of the target from the acquired image, and then uses the plurality of pieces of second rotation information to control the movement of the nodes in the controlled model corresponding to the multiple nodes.
- instead of directly outputting the image containing the target, a controlled model is used to simulate the target's movement, which avoids directly exposing the target to the network during online teaching, game control, and live video broadcasting, thereby protecting the user's privacy.
- the conversion from the first rotation information to the second rotation information is realized based on the hierarchical relationship between parent nodes and child nodes, so that the controlled model can accurately simulate the target's movement and thus achieves a good control effect.
- FIG. 1 is a schematic flowchart of a first image processing method provided by an embodiment of the disclosure.
- FIG. 2 is a schematic diagram of the hierarchical relationship between the first type of nodes provided by an embodiment of the disclosure.
- FIGS. 3A to 3C are schematic diagrams of a controlled model provided by an embodiment of the disclosure simulating changes in the hand movement of a collected user.
- FIGS. 4A to 4C are schematic diagrams of a controlled model provided by an embodiment of the disclosure simulating changes in the torso movement of a collected user.
- FIGS. 5A to 5C are schematic diagrams of a controlled model simulating the movement of the collected user's feet according to an embodiment of the disclosure.
- FIG. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
- FIG. 7A is a schematic diagram of nodes included in a human skeleton provided by an embodiment of the disclosure.
- FIG. 7B is a schematic diagram of nodes included in another human skeleton provided by an embodiment of the disclosure.
- FIG. 8A is a schematic diagram of determining a first direction according to an embodiment of the disclosure.
- FIG. 8B is a schematic diagram of determining a second direction according to an embodiment of the disclosure.
- FIG. 8C is a usage diagram of a local coordinate system where a wrist node is located according to an embodiment of the disclosure.
- FIG. 9 is a schematic diagram of a three-dimensional coordinate system of nodes included in a human body provided by an embodiment of the disclosure.
- FIG. 10 is a schematic structural diagram of an image device provided by an embodiment of the disclosure.
- this embodiment provides an image processing method, which includes the following steps.
- Step S110 Obtain an image.
- Step S120 Based on the image, obtain respective first rotation information of multiple nodes of the target.
- Step S130 Determine the second rotation information of each child node of the multiple nodes of the target relative to its parent node according to the hierarchical relationship between the nodes and the plurality of first rotation information.
- Step S140 Based on the plurality of second rotation information, control the movement of the nodes corresponding to the plurality of nodes in the controlled model.
- the image processing method provided in this embodiment can drive the movement of the controlled model through image processing.
- the image processing method provided in this embodiment can be applied to an image device, which can be any of various electronic devices capable of image processing, for example, an electronic device that performs image collection, image display, and image pixel reorganization to generate images.
- the image device includes but is not limited to various terminal devices, for example, a mobile terminal and/or a fixed terminal; it may also include various image servers capable of providing image services.
- the mobile terminal includes portable devices such as mobile phones or tablet computers that are easy for users to carry, and may also include devices worn by users, such as smart bracelets, smart watches, and smart glasses.
- the fixed terminal includes a fixed desktop computer and the like.
- the image acquired in step S110 may be a two-dimensional (2D) image or a three-dimensional (3D) image.
- the 2D image may include images collected by a single-lens or multi-lens camera, such as red, green, and blue (RGB) images.
- the method for acquiring the image may include: collecting the image using the camera of the image device itself; and/or receiving the image from an external device; and/or reading the image from a local database or local storage.
- the 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates; the 3D image may also be an image collected by a 3D camera.
- the step S120 may include: detecting the image to obtain first rotation information of predetermined nodes included in the target. If the target is a human, the predetermined nodes may be human body nodes, but are not limited thereto.
- the target is a person
- the target's nodes may include at least two of the following: a head node, neck node, shoulder node, elbow node, wrist node, waist node, thigh node, knee node, ankle node, and/or instep node, etc.
- the target is not limited to humans, but may also include various movable living or non-living bodies such as animals.
- the first rotation information of the multiple nodes of the target can be combined to reflect the local and/or overall rotation information of the target.
- the first rotation information includes, but is not limited to: the rotation angle of the node relative to a reference point in the target, the rotation angle relative to a reference position, the position information relative to a reference point in the target, and/or the position information relative to a reference position, etc.
- a deep learning model such as a neural network may be used to detect the image, so as to obtain the first rotation information and/or the second rotation information.
- the controlled model may be a model corresponding to the target.
- the controlled model is a human body model; if the target is an animal, the controlled model may be a body model corresponding to an animal; if the target is a vehicle, the controlled model Can be a model of a vehicle.
- the controlled model is a model for the category of the target.
- the model can be predetermined and can be further divided into multiple styles.
- the style of the controlled model may be determined based on user instructions, and may include a variety of styles, for example, a realistic style that simulates a real person, an anime style, an internet-celebrity style, styles of different temperaments, and game styles. The temperament styles may include, for example, a literary style or a rock style.
- the controlled model can be a role in the game.
- in a scenario such as online teaching, if the teacher's video is output directly, the teacher's face and body shape will be exposed.
- the image of the teacher's movement can be obtained by means of image collection, etc., and then the movement of a virtual controlled model can be controlled by feature extraction and first rotation information acquisition.
- the controlled model can be used to simulate the teacher’s movement to complete the physical exercise teaching through its own limb movement.
- the teacher’s face and body shape do not need to be directly exposed to the teaching.
- the teacher’s privacy is protected.
- if the method of this embodiment is used with a surveillance video, a vehicle model can be used to simulate the movement of the real vehicle, so that the vehicle's license plate information and/or the overall outline of the vehicle can be retained in the surveillance video, while the vehicle's brand, model, color and age can all be hidden to protect user privacy.
- the step S130 may include: determining, according to the hierarchical relationship and the plurality of pieces of first rotation information, the second rotation information of the local coordinate system in which each child node is located relative to the local coordinate system in which its parent node is located.
- multiple nodes of the target are hierarchical, and this hierarchical relationship is determined by the traction relationship between the nodes.
- this traction relationship exists between connected nodes; for example, certain movements of the elbow joint will drive the movement of the wrist joint.
- This hierarchical relationship is reflected in: some nodes are the parent or child nodes of other nodes.
- according to the hierarchical relationship and the plurality of pieces of first rotation information, the amount by which each child node rotates beyond its parent node is obtained, which yields the second rotation information.
- for example, besides following the elbow joint, the wrist joint has its own rotation; the amount by which the wrist rotates beyond the elbow joint is the second rotation information.
- in step S130, based on the hierarchical relationship and the plurality of pieces of first rotation information, the net rotation information (i.e., the second rotation information) of each child node of the multiple nodes of the target relative to its parent node can be obtained.
- the second rotation information is used to control the movement of the node corresponding to the child node in the controlled model.
- the movement of each corresponding node in the controlled model also follows the hierarchical relationship of the nodes of the target in the image, so that the controlled model can accurately simulate the movement of the target in the image.
- for a root node, that is, a node without a parent node, its second rotation information may be consistent with its first rotation information.
- the root node can be the pelvis.
- the method further includes: determining the local coordinate system where each node is located according to the hierarchical relationship.
- different nodes have their own local coordinate system.
- the local coordinate system is determined based on the hierarchical relationship.
- the local coordinate system includes but is not limited to: a coordinate system established by a child node relative to its parent node, a coordinate system established by a parent node relative to its child node, and a coordinate system established by a node relative to a reference node in the target.
- the reference node may be a waist node or a crotch node of the human body.
- determining the local coordinate system in which the child node is located according to the hierarchical relationship may include: taking, according to the hierarchical relationship, the direction in which the child node points to its parent node as the first direction of the local coordinate system; taking the direction perpendicular to the plane in which the child node and its two connected nodes are located as the second direction of the local coordinate system; and taking the orientation of a predetermined part of the target as the third direction of the local coordinate system.
- the local coordinate system of the node is a three-dimensional coordinate system.
- the three-dimensional coordinate system needs to determine the directions of three axes. The calculation of these three axes is carried out in the global coordinate system.
- the directions of these three axes can be regarded as vectors in the global space, and the three directions are orthogonal to each other. By determining the local coordinate system, the rotation information of each node can be calculated more conveniently.
- the direction that the child node points to the parent node is the first direction of the local coordinate system.
- the direction in which the wrist joint points to the elbow joint is the first direction of the three-dimensional coordinate system.
- Such a local coordinate system is established because the relative position relationship between the parent node and the child node is considered at the same time. In this way, the coordinates of the child node in the three-dimensional coordinate system can represent the second rotation information of the child node relative to the parent node.
- the two nodes connected by the child node include the parent node.
- taking the case where the child node is a wrist node as an example: the parent node of the wrist node may be the elbow node, and the hand node is a child node of the wrist node; the wrist node, the elbow node and the hand node define a plane, and the direction perpendicular to this plane is the second direction.
- the specific method for determining the second direction can be as follows: connect the child node to the first of its two connected nodes to form vector one; connect the child node to the second of its two connected nodes to form vector two; cross-multiply vector one and vector two to obtain their normal vector; the direction of the normal vector is the second direction.
- the first direction is the direction x where node A points to node B in FIG. 8A.
- the second direction may be: the vertical direction of the plane where the two bones connected by node A are located.
- the specific determination method can be: connect node A and node C to form one vector; connect node A and node B to form another vector; cross-multiply the two vectors to obtain their normal vector; the direction of the normal vector is the second direction y.
- the direction z in FIG. 8B may be the direction of the human body, that is, the third direction. In some embodiments, the third direction may also be the orientation of the human face.
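- The three-direction construction described above can be sketched in code. This is an illustrative sketch only: the function name and inputs are ours, and, unlike the text (which takes the orientation of a predetermined part such as the torso as the third direction), the third axis here is simply completed by a cross product so that the resulting frame is orthogonal and right-handed.

```python
import numpy as np

def local_frame(child, parent, other):
    """Sketch of a child node's local coordinate system (all inputs are
    3D points in the global/camera coordinate system)."""
    # First direction: the child node points toward its parent node.
    x = parent - child
    x = x / np.linalg.norm(x)
    # Second direction: normal of the plane through the child node and
    # its two connected nodes (vector one cross vector two).
    v1 = parent - child
    v2 = other - child
    y = np.cross(v1, v2)
    y = y / np.linalg.norm(y)
    # Third direction: completed here by a cross product; the text
    # instead uses the orientation of a predetermined part (e.g. torso).
    z = np.cross(x, y)
    return np.stack([x, y, z])
```

For a wrist node, `parent` would be the elbow node and `other` the hand node, matching the example in the text.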
- the method of determining the local coordinate system may be as shown in FIG. 8C.
- in FIG. 8C, node B is the wrist node, and z is the normal vector obtained by cross-multiplying the vector from the wrist to the forearm and the vector from the wrist to the hand.
- FIG. 9 is a schematic diagram of the effect of establishing the three-dimensional coordinate systems of different nodes (joints) of the human body.
- the child nodes may have different rotation directions, and the direction of the maximum rotation angle among the different rotation directions is regarded as the second direction of the aforementioned three-dimensional coordinate system.
- the shoulders, wrists, ankles, waist, or neck can be rotated in multiple directions, but the maximum rotation angles in different directions are different.
- the direction in which the child node has the maximum rotatable angle is directly used as the second direction of the local coordinate system.
- the other direction of the three-dimensional coordinate system is related to the orientation of the target.
- the target may include multiple parts.
- the orientation of one or more parts of the target may be used to represent the orientation of the entire target.
- the description is made by taking the target as a person, and the orientation of the target may be the orientation of the human face or the torso of the human body.
- the orientation of the human torso is selected as the target orientation.
- the step S120 may include: obtaining the first quaternion of each node of the plurality of nodes of the target in the camera coordinate system based on the image.
- the first rotation information may be determined based on the camera coordinate system.
- a quaternion, which is a data form convenient for representing rotation information, is used to represent the first rotation information.
- coordinate values of various coordinate systems may also be used to represent the first rotation information, for example, coordinate values in the Euler coordinate system and/or coordinate values in the Lagrangian coordinate system.
- the quaternion of a node of the target directly extracted from the camera coordinate system corresponding to the image is called the first quaternion. Here, "first" has no specific meaning; it is only for distinction.
- the step S130 may include: for each node of the multiple nodes of the target, determining the parent node when the node is a child node according to the hierarchical relationship; based on the parent node in the camera coordinate system The first quaternion and the first quaternion of the child node in the camera coordinate system determine the second quaternion of the child node relative to the parent node.
- the hierarchical relationship between nodes of different targets may be predetermined.
- the electronic device may know the hierarchical relationship between the nodes of the human body in advance.
- the hierarchical relationship of the nodes of the human body can be as follows: waist node 1 is a child node of pelvis node 0; left thigh node 9 is a child node of waist node 1; left calf node 10 is a child node of left thigh node 9; and left foot node 11 is a child node of left calf node 10.
- the right thigh node 16 is a child node of the waist node 1; the right calf node 17 is a child node of the right thigh node 16; and the right foot node 18 is a child node of the right calf node 17.
- Chest node 2 is a child node of waist node 1; neck node 3 is a child node of chest node 2, and head node 4 is a child node of neck node 3.
- the left upper arm node 6 is a child node of the left clavicle node 5; the left forearm node 7 is a child node of the left upper arm node 6; the left hand node 8 is a child node of the left forearm node 7.
- the right upper arm node 13 is a child node of the right clavicle node 12; the right forearm node 14 is a child node of the right upper arm node 13; and the right hand node 15 is a child node of the right forearm node 14.
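- The parent-child relations enumerated above can be collected into a lookup table; a minimal sketch (node indices follow the text; the parents of clavicle nodes 5 and 12 are not stated in this passage, so they are left without an entry here):

```python
# Parent map for the example human skeleton described above.
# Pelvis node 0 is the root and has no parent; the parents of the
# clavicle nodes 5 and 12 are not specified in this passage.
PARENT = {
    1: 0,                    # waist -> pelvis
    9: 1, 10: 9, 11: 10,     # left thigh, calf, foot
    16: 1, 17: 16, 18: 17,   # right thigh, calf, foot
    2: 1, 3: 2, 4: 3,        # chest, neck, head
    6: 5, 7: 6, 8: 7,        # left upper arm, forearm, hand
    13: 12, 14: 13, 15: 14,  # right upper arm, forearm, hand
}

def ancestors(node):
    """Walk up the hierarchy until a node with no recorded parent."""
    chain = []
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain
```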
- based on the first quaternions, the second rotation information of a child node relative to its parent node can be obtained by conversion.
- the difference between the first quaternion of the child node and the first quaternion of the parent node may be the second rotation information.
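- For unit quaternions, this "difference" can be computed as the product of the parent's conjugate with the child's first quaternion. A minimal sketch (quaternions in (w, x, y, z) order; the helper names are illustrative, not from the disclosure):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    """Conjugate (the inverse, for a unit quaternion)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def second_quaternion(q_parent, q_child):
    """Second rotation information: the child's rotation relative to its
    parent, given both first quaternions in the camera coordinate system."""
    return quat_mul(quat_conj(q_parent), q_child)
```

For example, if the parent has rotated 90 degrees about z and the child 180 degrees about z, the second quaternion comes out as a 90 degree rotation about z, i.e. the child's net rotation beyond its parent.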
- the method further includes: for each second rotation information, determining whether the second rotation information satisfies a constraint condition; the step S140 may include: when the second rotation information satisfies the constraint condition In this case, the movement of the node corresponding to the child node associated with the second rotation information in the controlled model is controlled according to the second rotation information.
- each node has a maximum rotation angle within the target, and this maximum rotation angle may be one of the above constraint conditions.
- for example, the human head can only rotate 90 degrees to the left and/or right relative to the torso, and cannot rotate 180 degrees or 270 degrees; the maximum angle of rotation to the left or right is the boundary angle of the constraint condition.
- determining whether the second rotation information satisfies the constraint condition includes: determining whether the rotation direction indicated by the second rotation information is a direction in which the corresponding child node can rotate; determining whether the rotation angle indicated by the second rotation information exceeds the maximum angle of the child node in the corresponding rotation direction; and, if the rotation direction indicated by the second rotation information is a direction in which the child node can rotate, and the rotation angle indicated by the second rotation information does not exceed the maximum angle of the child node in the corresponding rotation direction, determining that the second rotation information satisfies the constraint condition.
- in step S140, the second rotation information satisfying the constraint condition can be directly used to control the rotation of the corresponding node in the controlled model.
- the method further includes: if the second rotation information indicates that the rotation direction of the corresponding child node is a direction in which the child node cannot rotate, excluding the rotation in that direction from the second rotation information; and/or, if the second rotation information indicates that the rotation angle of the child node exceeds the maximum angle of the child node in the corresponding rotation direction, replacing the rotation angle in that direction in the second rotation information with the maximum angle. In this way, corrected second rotation information is obtained.
- the modified second rotation information satisfies the constraint condition.
- in step S140, the rotation of the corresponding node of the controlled model can be controlled according to the corrected second rotation information that satisfies the constraint condition.
- determining whether the second rotation information satisfies a constraint condition includes: converting the second rotation information into a first angle in a coordinate system of a predetermined type; determining whether the first angle is within a preset angle range; and, if the first angle is within the preset angle range, determining that the second rotation information satisfies the constraint condition.
- the determination of whether the angle is within the preset angle range is a way to determine whether the second rotation information satisfies the constraint condition.
- the specific implementation may be to determine whether the coordinate value in the predetermined coordinate system is within the predetermined range.
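- A minimal sketch of such a check, assuming the second rotation information is a unit quaternion in (w, x, y, z) order and the "first angle" is the Euler rotation about the z axis; the choice of axis and the default ±90 degree range are illustrative assumptions, not fixed by the disclosure:

```python
import math

def first_angle_deg(q):
    """Euler angle about the z axis, in degrees, from a (w, x, y, z)
    unit quaternion."""
    w, x, y, z = q
    return math.degrees(math.atan2(2.0 * (w * z + x * y),
                                   1.0 - 2.0 * (y * y + z * z)))

def satisfies_constraint(q, lo=-90.0, hi=90.0):
    """True if the first angle lies within the preset angle range."""
    return lo <= first_angle_deg(q) <= hi
```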
- the method further includes: when the first angle is outside the preset angle range, determining that the second rotation information does not meet the constraint condition.
- the third rotation information is the aforementioned modified second rotation information that satisfies the constraint condition.
- the step S140 may include: controlling the movement of the nodes corresponding to the plurality of nodes in the controlled model according to a plurality of the third rotation information.
- correcting the second rotation information to obtain third rotation information includes: if the first angle corresponding to the second rotation information is outside the preset angle range, correcting the first angle according to the preset angle to obtain a second angle; and obtaining the third rotation information according to the second angle.
- the second angle may include but is not limited to the maximum angle that can be rotated at the node in the corresponding rotation direction.
- two angles are formed between the orientation of the face and the orientation of the torso, and the sum of the two angles is 180 degrees. Assume that the two included angles are angle 1 and angle 2, respectively.
- the constraint condition for the neck connecting the face and the torso may be: between -90 and 90 degrees; angles exceeding 90 degrees are excluded by the constraint. In this way, abnormal situations in which the controlled model, while simulating the target's movement, rotates more than 90 degrees clockwise or counterclockwise (for example, 120 degrees or 180 degrees) can be reduced.
- This constraint condition corresponds to two extreme angles, one is -90 degrees and the other is 90 degrees.
- the detected rotation angle is modified to the limit angle defined by the constraint condition. For example, if a rotation angle exceeding 90 degrees (for example, the aforementioned 120 degrees or 180 degrees) is detected, the detected rotation angle is modified to the limit angle closer to it, such as 90 degrees.
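- The correction described here amounts to clamping the first angle to the nearer limit of the preset range; a minimal sketch (the ±90 degree defaults mirror the neck example, but the range is a parameter):

```python
def correct_first_angle(angle_deg, lo=-90.0, hi=90.0):
    """If the first angle is outside the preset range, return the nearer
    limit angle (the second angle); otherwise return it unchanged."""
    if angle_deg < lo:
        return lo
    if angle_deg > hi:
        return hi
    return angle_deg
```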
- the step S140 may include: performing posture defect correction on a plurality of pieces of the second rotation information, or on a plurality of pieces of third rotation information generated from the plurality of pieces of second rotation information, to obtain a plurality of pieces of corrected fourth rotation information; and using the plurality of pieces of fourth rotation information to control the movement of the nodes corresponding to the plurality of nodes in the controlled model.
- the target itself may have certain defects in its movement; these are called posture defects.
- posture defect correction can be performed on the second rotation information and/or the third rotation information to obtain corrected fourth rotation information, so that the fourth rotation information, which overcomes the posture defect, is used when controlling the controlled model.
- the posture defect correction includes at least one of the following: a same-side defect of the upper and lower limbs; a bow-legged movement defect; an out-toed (splay-footed) movement defect; an in-toed (pigeon-toed) movement defect.
- the method further includes: obtaining a posture defect correction parameter according to difference information between the shape of the target and a standard shape, wherein the posture defect correction parameter is used to correct the second rotation information or the third rotation information.
- the shape of the target is detected first, and the detected shape is compared with the standard shape to obtain the difference information; posture defect correction is then performed using the difference information.
- a prompt to maintain a predetermined posture can be output on the display interface. After seeing the prompt, the user maintains the predetermined posture, so that the imaging device can capture an image of the user in that posture; image detection then determines whether the user's posture is standard enough, yielding the difference information.
- for example, a person's feet may be splayed outward (out-toed), whereas the normal standard standing posture is with the toes and heels of both feet parallel to each other.
- the method further includes: correcting the proportions of different parts of a standard model according to the proportional relationships of different parts of the target, to obtain the corrected controlled model.
- the proportional relationships between the parts of different targets may differ. For example, taking people as an example, the ratio of leg length to head length is larger for a professional model than for an ordinary person. For another example, a person with fuller buttocks may have a greater hip width than an ordinary person.
- the standard model may be a mean model obtained based on a large amount of human body data.
- the proportions of the different parts of the standard model will be corrected according to the proportional relationships of the different parts of the target, to obtain the corrected controlled model.
- the corrected parts include, but are not limited to, the crotch and/or the legs.
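- scaling a standard model's part lengths by the target's proportions might be sketched as follows (illustrative only; the dictionary-based representation and the part names are assumptions, since the disclosure does not specify a data layout):

```python
def correct_proportions(standard_lengths, target_ratios):
    """Scale each part of the standard model by the target's ratio for that
    part; parts without a measured ratio keep their standard length."""
    return {part: length * target_ratios.get(part, 1.0)
            for part, length in standard_lengths.items()}
```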
- the small image in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- the user's hand is moving; from Figure 3A to Figure 3B and then from Figure 3B to Figure 3C, the hand of the controlled model moves along with it.
- the user's hand gesture changes from a fist, to an open palm, to an extended index finger in FIGS. 3A to 3C, and the controlled model imitates these gestures in the same order.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- the torso of the user is in motion: from Fig. 4A to Fig. 4B and then from Fig. 4B to Fig. 4C, the torso of the controlled model moves along with it. In FIGS. 4A to 4C, the user's hips thrust toward the right of the image, then toward the left, and the user finally stands upright.
- the controlled model also simulates the user's torso movement.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- the user steps toward the right side of the image, steps toward the left side of the image, and finally stands up straight; the controlled model also simulates the user's foot movement.
- the controlled model also simulates changes in the user's expression.
- this embodiment provides an image processing device, which includes the following modules.
- the first acquisition module 110 is used to acquire images.
- the second acquiring module 120 is configured to acquire the first rotation information of each of the multiple nodes of the target based on the image.
- the first determining module 130 is configured to determine the second rotation information of each child node of the multiple nodes of the target relative to its parent node according to the hierarchical relationship between the nodes and the plurality of first rotation information.
- the control module 140 is configured to control the movement of the nodes corresponding to the multiple nodes in the controlled model based on the multiple second rotation information.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may be program modules; the program modules can implement corresponding functions after being executed by the processor.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may be a combination of software and hardware; the combination includes, but is not limited to, a programmable array, such as a complex programmable logic device or a field-programmable gate array.
- the first acquisition module 110, the second acquisition module 120, the first determination module 130, and the control module 140 may be pure hardware modules; the pure hardware modules include but are not limited to application specific integrated circuits.
- the first obtaining module 110 is specifically configured to obtain respective first rotation information of the multiple nodes of the target in the camera coordinate system based on the image.
- the first determining module 130 is specifically configured to: for each of the plurality of nodes, use the node as a child node; and determine, according to the hierarchical relationship and the plurality of pieces of first rotation information, the second rotation information of the local coordinate system of the child node relative to the local coordinate system of the child node's parent node.
- the device further includes: a second determining module, configured to determine, for each node of the plurality of nodes, according to the hierarchical relationship, the local coordinate system in which the node is located.
- the second determining module is specifically configured to: use the node as a child node; according to the hierarchical relationship, take the direction in which the child node points to its parent node as the first direction of the local coordinate system; take the direction perpendicular to the plane in which the child node and its two connected nodes lie as the second direction of the local coordinate system; and take the orientation of a predetermined part of the target as the third direction of the local coordinate system.
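- one way to build such a local coordinate system from node positions is sketched below (illustrative; the disclosure takes the third direction from the orientation of a predetermined part of the target, while this sketch simply completes a right-handed frame with a cross product — all names are assumptions):

```python
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _normalize(v):
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/n, v[1]/n, v[2]/n)

def local_frame(child, parent, neighbor):
    """First axis: direction from the child node to its parent node.
    Second axis: normal of the plane containing the child and its two
    connected nodes (here: parent and one neighbor).
    Third axis: completes the right-handed frame (a simplification)."""
    x = _normalize(_sub(parent, child))
    y = _normalize(_cross(_sub(parent, child), _sub(neighbor, child)))
    z = _cross(x, y)
    return x, y, z
```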
- the second acquiring module 120 is specifically configured to acquire the respective first quaternions of the multiple nodes of the target in the camera coordinate system based on the image.
- the first determining module 130 is specifically configured to: according to the hierarchical relationship, use each of the multiple nodes as a child node and determine the parent node of that child node; and for each child node, determine the second quaternion of the child node relative to its parent node based on the first quaternion of the parent node in the camera coordinate system and the first quaternion of the child node in the camera coordinate system.
- the device further includes: a third determining module, configured to determine whether the second rotation information satisfies a constraint condition for each of the second rotation information; the control module 140 is specifically configured to When the second rotation information satisfies the constraint condition, the movement of the node corresponding to the child node associated with the second rotation information in the controlled model is controlled according to the second rotation information.
- the third determining module is specifically configured to convert the second rotation information into a first angle of a predetermined type of coordinate system; determine whether the first angle is within a preset angle range; When the first angle is within the preset angle range, it is determined that the second rotation information satisfies the constraint condition.
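- the conversion-and-range-check can be sketched as follows (illustrative; the (w, x, y, z) quaternion convention, the roll/pitch/yaw formulas, and the checked axis are assumptions, since the disclosure only says the rotation information is converted to an angle in a coordinate system of a predetermined type):

```python
import math

def quat_to_euler(q):
    """Unit quaternion (w, x, y, z) -> (roll, pitch, yaw) in degrees,
    using one common intrinsic x-y-z convention."""
    w, x, y, z = q
    roll = math.degrees(math.atan2(2*(w*x + y*z), 1 - 2*(x*x + y*y)))
    sinp = max(-1.0, min(1.0, 2*(w*y - z*x)))  # clamp against rounding
    pitch = math.degrees(math.asin(sinp))
    yaw = math.degrees(math.atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z)))
    return roll, pitch, yaw

def satisfies_constraint(q, lo=-90.0, hi=90.0, axis=0):
    """True if the first angle (here: roll) lies in the preset range."""
    return lo <= quat_to_euler(q)[axis] <= hi
```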
- the device further includes: a fourth determining module, configured to determine that the second rotation information does not satisfy the constraint condition when the first angle is outside the preset angle range.
- the device further includes: a first correction module, configured to correct the second rotation information to obtain third rotation information when the second rotation information does not meet the constraint condition; the control module 140 is further configured to control, according to the third rotation information, the movement of the node in the controlled model corresponding to the child node associated with the second rotation information.
- the first correction module is specifically configured to: if the first angle corresponding to the second rotation information is outside the preset angle range, correct the first angle according to the preset angle to obtain a second angle; and obtain the third rotation information according to the second angle.
- the predetermined type of coordinate system is an Euler coordinate system.
- the control module 140 is specifically configured to: perform posture defect correction on multiple pieces of second rotation information, or on multiple pieces of third rotation information generated from them, to obtain multiple pieces of corrected fourth rotation information; and use the multiple pieces of fourth rotation information to control the movement of the nodes in the controlled model corresponding to the multiple nodes.
- the posture defect correction includes at least one of the following: a same-side defect of the upper and lower limbs; a bow-legged movement defect; an out-toed (splay-footed) movement defect; an in-toed (pigeon-toed) movement defect.
- the device further includes: a third obtaining module, configured to obtain posture defect correction parameters according to the difference information between the shape of the target and the standard form, wherein the posture defect correction parameters are For correcting a plurality of the second rotation information or the third rotation information.
- the device further includes: a second correction module, configured to correct the proportions of different parts of the standard model according to the proportions of different parts of the target to obtain the corrected controlled model.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is collected; the image includes a target, and the target includes, but is not limited to, a human body. The face nodes of the human body are detected, where a face node may be a contour node on the surface of the human face. The torso nodes and/or limb nodes of the human body are detected; these may be 3D nodes represented by 3D coordinates, where the 3D coordinates may be obtained by first detecting 2D coordinates from a 2D image and then applying a 2D-to-3D conversion algorithm.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
- the limb nodes here may include upper limb nodes and/or lower limb nodes. Taking the hand as an example, the hand nodes among the upper limb nodes include, but are not limited to, wrist joint nodes, metacarpophalangeal joint nodes, finger joint nodes, and fingertip nodes; the positions of these nodes can reflect the movements of the hand and fingers.
- the mesh information of the face is generated.
- the expression base corresponding to the target's current expression is selected according to the mesh information, and the expression of the controlled model is controlled according to that expression base; the intensity of the controlled model's expression corresponding to each expression base is controlled according to the intensity coefficient reflected by the mesh information.
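- expression control by intensity coefficients is commonly implemented as a blend-shape weighted sum; a minimal sketch (illustrative only; the per-vertex lists and all names are assumptions, not the disclosure's data layout):

```python
def blend_expression(neutral, bases, intensities):
    """neutral: per-vertex values of the neutral face; bases: expression base
    name -> per-vertex offsets; intensities: base name -> coefficient.
    Returns the blended face as neutral + sum(intensity * offset)."""
    out = list(neutral)
    for name, offsets in bases.items():
        k = intensities.get(name, 0.0)
        out = [v + k * d for v, d in zip(out, offsets)]
    return out
```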
- the coordinates are converted into quaternions.
- the torso movement of the controlled model is controlled according to the quaternion corresponding to the trunk node; and/or the limb movement of the controlled model is controlled according to the quaternion corresponding to the limb node.
- the face node may include 106 nodes.
- the trunk node and/or limb node may include: 14 nodes or 17 nodes, which may be specifically shown in FIG. 7A and FIG. 7B.
- Figure 7A shows a schematic diagram containing 14 skeleton nodes;
- Figure 7B shows a schematic diagram containing 17 skeleton nodes.
- FIG. 7B may be a schematic diagram of 17 nodes generated based on the 14 nodes shown in FIG. 7A.
- the 17 nodes in FIG. 7B are the nodes shown in FIG. 7A with node 0, node 7, and node 9 added.
- the 2D coordinates of node 9 can be preliminarily determined based on the 2D coordinates of nodes 8 and 10;
- the 2D coordinates of node 7 can be determined based on the 2D coordinates of node 8 and the 2D coordinates of node 0.
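- deriving the added nodes from existing ones can be as simple as interpolation; a sketch (illustrative; the equal-weight midpoint is an assumption — the disclosure only states which node pairs the coordinates are based on):

```python
def midpoint_2d(a, b):
    """Midpoint of two 2D node coordinates, e.g. node 9 from nodes 8 and 10,
    or node 7 from node 8 and node 0."""
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
```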
- Node 0 may be the reference point provided by the embodiments of the present disclosure, and the reference point may be used as the aforementioned first reference point and/or second reference point.
- the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
- for example, the clothes of the teacher model may be more formal, such as a suit, while in a sports scenario the controlled model may wear sportswear.
- This example provides an image processing method.
- the method steps are as follows.
- An image is acquired, the image includes a target, and the target includes but is not limited to a human body.
- the torso key points and limb key points of the human body are detected.
- the torso key points and/or the limb key points here can be 3D key points, represented by 3D coordinates; the 3D coordinates may be obtained by first detecting 2D coordinates from a 2D image and then applying a 2D-to-3D conversion algorithm.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
- the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
- the hand key points among the upper limb key points include, but are not limited to, wrist joint key points, metacarpophalangeal joint key points, finger joint key points, and fingertip key points.
- the location of these key points can reflect the movement of the hands and fingers.
- Use the trunk quaternion to control the torso movement of the controlled model; use the limb quaternion to control the limb movement of the controlled model.
- the torso node and the limb node may include: 14 nodes or 17 nodes, which may be specifically shown in FIG. 7A and FIG. 7B.
- the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
- for example, the clothes of the teacher model may be more formal, such as a suit, while in a sports scenario the controlled model may wear sportswear.
- This example provides an image processing method.
- the steps of the method are as follows. An image is acquired; the image contains a target, and the target may be a human body. According to the image, a 3D posture of the target in three-dimensional space is obtained; the 3D posture can be represented by the 3D coordinates of the skeleton nodes of the human body. The absolute rotation information of the joints of the human body in the camera coordinate system is obtained; the absolute rotation position can be determined from the coordinates in the camera coordinate system. From these coordinates, the coordinate directions of the joints are obtained. According to the hierarchical relationship, the relative rotation parameters of the joints are determined; determining the relative parameters may specifically include determining the positions of the joint key points relative to the root node of the human body.
- the relative rotation parameter can be represented by a quaternion.
- the hierarchical relationship here can be the traction relationship between joints. For example, the movement of the elbow joint will pull the movement of the wrist joint to a certain extent, and the movement of the shoulder joint will also pull the movement of the elbow joint.
- the hierarchical relationship may also be predetermined according to the joints of the human body. Use this quaternion to control the rotation of the controlled model.
- the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle within the constraint condition, where the constraint condition may be a limit on the first Euler angle; and obtaining a quaternion corresponding to the second Euler angle, which is then used to control the rotation of the controlled model.
- to obtain the quaternion corresponding to the second Euler angle, the second Euler angle can be directly converted into a quaternion.
- the method further includes: performing posture-optimization adjustment on the second Euler angle, for example adjusting some of the second Euler angles to posture-optimized Euler angles based on preset rules, so as to obtain a third Euler angle.
- obtaining the quaternion corresponding to the second Euler angle may then include: converting the third Euler angle into the quaternion used to control the controlled model.
- the method further includes: after converting the second Euler angles into a quaternion, performing posture optimization processing on the converted quaternion data. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
- when adjusting the second Euler angle, or the quaternion obtained by converting it, the adjustment may be based on a preset rule or may be optimized by a deep learning model itself; there are many specific implementations, which are not limited in this application.
- pre-processing may also be included.
- the width of the crotch and/or shoulder of the controlled model is modified to correct the overall posture of the human body.
- the standing posture of the human body can be corrected, for example for upright standing and for abdomen protrusion. Some people push out their abdomen when standing; abdomen correction prevents the controlled model from imitating the user's protruding abdomen. Some people hunch when standing; hunchback correction prevents the controlled model from imitating the user's hunched back.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired, and the image includes a target, and the target may include at least one of a human body, a human upper limb, and a human lower limb.
- the coordinate system of the limb part that pulls the movement of the target joint is obtained.
- the rotation of the target joint relative to that limb part is determined to obtain rotation information; the rotation information includes the spin (twist) parameter of the target joint and the rotation pulled by the limb part.
- a first angle limit is applied to the rotation pulled by the limb part, to obtain the final pulled rotation parameter.
- the local rotation information of the limb part is corrected accordingly.
- the relative rotation information of the limb part is obtained according to the coordinate system of the first limb and the corrected rotation information.
- the limited rotation information is obtained as a quaternion.
- the movement of the target joint of the controlled model is controlled according to the quaternion.
- the coordinate system of the hand in the image coordinate system is obtained, and the coordinate system of the forearm and the coordinate system of the upper arm are obtained.
- the target joint at this time is the wrist joint.
- the rotation of the hand relative to the forearm is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the forearm; specifically, it is assigned to the rotation of the forearm in the corresponding direction, and the first angle limit of the forearm is used to cap the forearm's maximum rotation. The rotation of the hand relative to the corrected forearm is then determined to obtain the relative rotation parameter, and a second angle limit is applied to it to obtain the rotation of the hand relative to the forearm.
- the coordinate system of the foot under the image coordinate system is obtained, and the coordinate system of the lower leg and the coordinate system of the thigh are obtained; the target joint at this time is the ankle joint.
- the rotation of the foot relative to the calf is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the lower leg; specifically, it is assigned to the rotation of the lower leg in the corresponding direction, and the first angle limit of the lower leg is used to cap the lower leg's maximum rotation. The rotation of the foot relative to the corrected calf is then determined to obtain the relative rotation parameter, and a second angle limit is applied to it to obtain the rotation of the foot relative to the calf.
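- the decomposition of a joint rotation into spin (twist about the bone axis) and pulled rotation (swing), used above for both the wrist and the ankle, matches the standard swing-twist decomposition; a sketch in the (w, x, y, z) convention (all names are assumptions):

```python
import math

def _normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def swing_twist(q, axis):
    """Split a unit quaternion q = (w, x, y, z) into a twist (spin about the
    unit vector `axis`) and a swing (the remaining pulled rotation), so that
    q = swing * twist."""
    w, x, y, z = q
    ax, ay, az = axis
    # project the rotation's vector part onto the twist axis
    dot = x*ax + y*ay + z*az
    twist = (w, dot*ax, dot*ay, dot*az)
    if abs(twist[0]) < 1e-9 and abs(dot) < 1e-9:
        twist = (1.0, 0.0, 0.0, 0.0)  # 180-degree pure-swing edge case
    twist = _normalize(twist)
    # swing = q * twist^{-1} (conjugate == inverse for unit quaternions)
    tw, tx, ty, tz = twist[0], -twist[1], -twist[2], -twist[3]
    swing = (w*tw - x*tx - y*ty - z*tz,
             w*tx + x*tw + y*tz - z*ty,
             w*ty - x*tz + y*tw + z*tx,
             w*tz + x*ty - y*tx + z*tw)
    return swing, twist
```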
- This example provides a specific method for calculating the quaternion of the human skeleton.
- the 3D coordinates of the human skeleton carry less information than the quaternions, because each bone can additionally spin about its own axis.
- A and B are two nodes in space, representing the start and end of a bone, but the posture of this bone in space cannot be determined from them alone; therefore a local coordinate system is defined for each bone, as shown in Figure 8B. Once the local coordinate system is determined, the posture of the bone in space is fixed.
- in order to drive a three-dimensional animated character (that is, the aforementioned controlled model), knowing the human skeleton alone is not enough.
- the human skeleton must be migrated to the human skeleton under the 3DMAX standard.
- the human skeleton of the controlled model contains the quaternion of the local rotation information and the local bone length.
- the bone length is expressed by the end-point coordinates in the local coordinate system. Since the bone lengths of the controlled model are generally preset, only the quaternion representing the local rotation information needs to be considered.
- p_i represents the three-dimensional coordinates of node i in its local coordinate system; these are generally fixed values that come with the original model and do not need to be modified or migrated.
- q_i is a quaternion representing the rotation of the bone controlled by node i in the coordinate system of its parent node; it can also be regarded as the rotation between the local coordinate system of the current node and that of its parent node.
- the other nodes can be calculated using the existing human skeleton and human prior knowledge.
- the parent key point parent(i) is the key point one level above the current key point i.
- Q_i is the rotation quaternion of the current key point i relative to the camera coordinate system, and Q_parent(i)⁻¹ is the inverse rotation of the parent key point, so that q_i = Q_parent(i)⁻¹ · Q_i. For example, if the rotation angle of Q_parent(i) is 90 degrees, then the rotation angle of Q_parent(i)⁻¹ is -90 degrees.
- (i - j) represents the vector pointing from node i to node j.
- × represents the cross product.
- (1-7) represents the vector from the first node to the seventh node.
- nodes 8, 15, 11, and 18 are the four nodes of the hands and feet. Since calculating the quaternions of these four nodes requires specific postures, these four key points are not included in the table.
- the numbering of the 19-point skeleton nodes is shown in Fig. 2, and the numbering of the 17-point skeleton is shown in Fig. 7B.
- an embodiment of the present application provides an image device, including: a memory 1002 configured to store information, and a processor 1001 connected to the memory 1002 and configured to execute computer-executable instructions stored on the memory 1002, so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example the image processing method shown in FIG. 1.
- the memory 1002 can be various types of memory, such as random access memory, read-only memory, flash memory, and the like.
- the memory 1002 may be used for information storage, for example, to store computer executable instructions.
- the computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
- the processor 1001 may be any of various types of processors, for example a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
- the processor 1001 may be connected to the memory 1002 through a bus.
- the bus may be an integrated circuit bus or the like.
- the terminal device may further include: a communication interface 1003, and the communication interface 1003 may include: a network interface, for example, a local area network interface, a transceiver antenna, and the like.
- the communication interface is also connected to the processor 1001 and can be used for information transceiving.
- the terminal device further includes a human-computer interaction interface 1005.
- the human-computer interaction interface 1005 may include various input and output devices, such as a keyboard and a touch screen.
- the image device further includes: a display 1004, which can display various prompts, collected facial images, and/or various interfaces.
- the embodiment of the present application provides a non-volatile computer storage medium storing computer-executable code; after the computer-executable code is executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example the image processing method shown in FIG. 1.
- the disclosed device and method can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the couplings, direct couplings, or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
- the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- the functional units in the embodiments of the present disclosure may all be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Claims (36)
- 一种图像处理方法,其特征在于,包括:获取图像;基于所述图像,获取目标的多个节点各自的第一旋转信息;根据节点之间的层级关系及多个所述第一旋转信息,确定所述目标的所述多个节点中各子节点相对于父节点的第二旋转信息;基于多个所述第二旋转信息,控制受控模型中与所述多个节点对应的节点的运动。
- 根据权利要求1所述的方法,其特征在于,基于所述图像,获取所述目标的所述多个节点各自的第一旋转信息,包括:基于所述图像,获取所述目标的所述多个节点在相机坐标系下各自的第一旋转信息。
- 根据权利要求2所述的方法,其特征在于,根据所述节点之间的层级关系及多个所述第一旋转信息,确定所述目标的所述多个节点中各子节点相对于父节点的第二旋转信息,包括:对于所述多个节点中的每个节点,将该节点作为子节点;根据所述层级关系及多个所述第一旋转信息,确定该子节点所在的局部坐标系相对于该子节点的父节点所在局部坐标系内的第二旋转信息。
- 根据权利要求3所述的方法,其特征在于,所述方法还包括:对于所述多个节点中的每个节点,根据所述层级关系,确定该节点所在的局部坐标系。
- 根据权利要求4所述的方法,其特征在于,根据所述层级关系,确定该节点所在的局部坐标系,包括:该节点作为子节点;根据所述层级关系,将该子节点指向该子节点的父节点的方向为所述局部坐标系的第一方向;以该子节点及所连接的两个节点所在的平面的垂直方向为所述局部坐标系的第二方向;以目标预定部位的朝向为所述局部坐标系的第三方向。
- 根据权利要求2至5任一项所述的方法,其特征在于,基于所述图像,获取所述目标的所述多个节点在所述相机坐标系下各自的第一旋转信息,包括:基于所述图像,获取所述目标的所述多个节点在所述相机坐标系内各自的第一四元数。
- 根据权利要求2至6任一项所述的方法,其特征在于,根据节点之间的层级关系及多个所述第一旋转信息,确定所述目标的所述多个节点中各子节点相对于父节点的第二旋转信息,包括:根据所述层级关系,将所述多个节点中的每个节点作为子节点,确定该子节点的父节点;对于每个子节点,基于该子节点的父节点在相机坐标系内的第一四元数和该子节点在所述相机坐标系的第一四元数,确定出该子节点相对于该父节点的第二四元数。
- The method according to any one of claims 1 to 7, wherein controlling, based on the plurality of pieces of second rotation information, the motion of the nodes in the controlled model corresponding to the multiple nodes comprises: for each piece of second rotation information, determining whether the second rotation information satisfies a constraint condition; and in a case where the second rotation information satisfies the constraint condition, controlling, according to the second rotation information, the motion of the node in the controlled model corresponding to the child node associated with the second rotation information.
- The method according to claim 8, wherein, for each piece of second rotation information, determining whether the second rotation information satisfies the constraint condition comprises: converting the second rotation information into a first angle in a coordinate system of a predetermined type; determining whether the first angle is within a preset angle range; and in a case where the first angle is within the preset angle range, determining that the second rotation information satisfies the constraint condition.
- The method according to claim 9, further comprising: in a case where the first angle is outside the preset angle range, determining that the second rotation information does not satisfy the constraint condition.
- The method according to any one of claims 8 to 10, further comprising: in a case where the second rotation information does not satisfy the constraint condition, correcting the second rotation information to obtain third rotation information; and controlling, according to the third rotation information, the motion of the node in the controlled model corresponding to the child node associated with the second rotation information.
- The method according to claim 11, wherein, in the case where the second rotation information does not satisfy the constraint condition, correcting the second rotation information to obtain the third rotation information comprises: if the first angle corresponding to the second rotation information is outside the preset angle range, correcting the first angle according to the preset angle range to obtain a second angle; and obtaining the third rotation information according to the second angle.
- The method according to any one of claims 9 to 12, wherein the coordinate system of the predetermined type is a Euler coordinate system.
- The method according to any one of claims 1 to 3, wherein controlling, based on the plurality of pieces of second rotation information, the motion of the nodes in the controlled model corresponding to the multiple nodes comprises: performing posture defect correction on the plurality of pieces of second rotation information, or on a plurality of pieces of third rotation information generated from the plurality of pieces of second rotation information, to respectively obtain a plurality of pieces of corrected fourth rotation information; and controlling, using the plurality of pieces of fourth rotation information, the motion of the nodes in the controlled model corresponding to the multiple nodes.
- The method according to claim 14, wherein the posture defect correction comprises at least one of the following: a same-side defect of the upper limbs and the lower limbs; a bow-legged motion defect; a splay-footed (out-toed) motion defect; or a pigeon-toed (in-toed) motion defect.
- The method according to claim 14 or 15, further comprising: obtaining a posture defect correction parameter according to difference information between the body shape of the target and a standard body shape, wherein the posture defect correction parameter is used to correct the plurality of pieces of second rotation information or the third rotation information.
- The method according to any one of claims 1 to 16, further comprising: correcting proportions of different parts of a standard model according to proportional relationships among different parts of the target, to obtain the corrected controlled model.
- An image processing apparatus, comprising: a first acquisition module configured to acquire an image; a second acquisition module configured to acquire, based on the image, respective first rotation information of multiple nodes of a target; a first determination module configured to determine, according to a hierarchical relationship between the nodes and the plurality of pieces of first rotation information, second rotation information of each child node among the multiple nodes of the target relative to a parent node; and a control module configured to control, based on the plurality of pieces of second rotation information, the motion of nodes in a controlled model corresponding to the multiple nodes.
- The apparatus according to claim 18, wherein the second acquisition module is configured to acquire, based on the image, the respective first rotation information of the multiple nodes of the target in a camera coordinate system.
- The apparatus according to claim 19, wherein the first determination module is configured to: for each node of the multiple nodes, take the node as a child node; and determine, according to the hierarchical relationship and the plurality of pieces of first rotation information, second rotation information of the local coordinate system in which the child node is located relative to the local coordinate system in which the parent node of the child node is located.
- The apparatus according to claim 20, further comprising: a second determination module configured to, for each node of the multiple nodes, determine, according to the hierarchical relationship, the local coordinate system in which the node is located.
- The apparatus according to claim 21, wherein the second determination module is configured to: take the node as a child node; take, according to the hierarchical relationship, the direction from the child node to the parent node of the child node as a first direction of the local coordinate system; take the direction perpendicular to the plane in which the child node and the two nodes connected to it are located as a second direction of the local coordinate system; and take the orientation of a predetermined part of the target as a third direction of the local coordinate system.
- The apparatus according to any one of claims 19 to 22, wherein the second acquisition module is configured to acquire, based on the image, respective first quaternions of the multiple nodes of the target in the camera coordinate system.
- The apparatus according to any one of claims 19 to 23, wherein the first determination module is configured to: take, according to the hierarchical relationship, each node of the multiple nodes as a child node, and determine the parent node of the child node; and for each child node, determine a second quaternion of the child node relative to the parent node based on the first quaternion of the parent node of the child node in the camera coordinate system and the first quaternion of the child node in the camera coordinate system.
- The apparatus according to any one of claims 18 to 24, further comprising: a third determination module configured to, for each piece of second rotation information, determine whether the second rotation information satisfies a constraint condition; wherein the control module is configured to, in a case where the second rotation information satisfies the constraint condition, control, according to the second rotation information, the motion of the node in the controlled model corresponding to the child node associated with the second rotation information.
- The apparatus according to claim 25, wherein the third determination module is configured to: convert the second rotation information into a first angle in a coordinate system of a predetermined type; determine whether the first angle is within a preset angle range; and in a case where the first angle is within the preset angle range, determine that the second rotation information satisfies the constraint condition.
- The apparatus according to claim 26, further comprising: a fourth determination module configured to, in a case where the first angle is outside the preset angle range, determine that the second rotation information does not satisfy the constraint condition.
- The apparatus according to any one of claims 25 to 27, further comprising: a first correction module configured to, in a case where the second rotation information does not satisfy the constraint condition, correct the second rotation information to obtain third rotation information; wherein the control module is further configured to control, according to the third rotation information, the motion of the node in the controlled model corresponding to the child node associated with the second rotation information.
- The apparatus according to claim 28, wherein the first correction module is configured to: if the first angle corresponding to the second rotation information is outside the preset angle range, correct the first angle according to the preset angle range to obtain a second angle; and obtain the third rotation information according to the second angle.
- The apparatus according to any one of claims 26 to 29, wherein the coordinate system of the predetermined type is a Euler coordinate system.
- The apparatus according to any one of claims 18 to 20, wherein the control module is configured to: perform posture defect correction on the plurality of pieces of second rotation information, or on a plurality of pieces of third rotation information generated from the plurality of pieces of second rotation information, to respectively obtain a plurality of pieces of corrected fourth rotation information; and control, using the plurality of pieces of fourth rotation information, the motion of the nodes in the controlled model corresponding to the multiple nodes.
- The apparatus according to claim 31, wherein the posture defect correction comprises at least one of the following: a same-side defect of the upper limbs and the lower limbs; a bow-legged motion defect; a splay-footed (out-toed) motion defect; or a pigeon-toed (in-toed) motion defect.
- The apparatus according to claim 31 or 32, further comprising: a third acquisition module configured to obtain a posture defect correction parameter according to difference information between the body shape of the target and a standard body shape, wherein the posture defect correction parameter is used to correct the plurality of pieces of second rotation information or the third rotation information.
- The apparatus according to any one of claims 18 to 33, further comprising: a second correction module configured to correct proportions of different parts of a standard model according to proportional relationships among different parts of the target, to obtain the corrected controlled model.
- An image device, comprising: a memory; and a processor connected to the memory and configured to implement the method provided in any one of claims 1 to 17 by executing computer-executable instructions stored on the memory.
- A non-volatile computer storage medium storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, implement the method provided in any one of claims 1 to 17.
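Claims 6, 7, 9 and 12 together recite a concrete pipeline: per-node quaternions in the camera coordinate system, a second quaternion for each child node relative to its parent, conversion to an angle in a Euler-type coordinate system, and a range check with correction. The sketch below is a minimal illustration of that pipeline, not the patented implementation; the (w, x, y, z) quaternion layout, the roll/pitch/yaw convention, and the joint limits are assumptions introduced for the example.

```python
import math

def quat_conjugate(q):
    # For a unit quaternion the inverse equals the conjugate.
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    # Hamilton product of two quaternions in (w, x, y, z) order.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def relative_rotation(q_parent, q_child):
    # "Second quaternion" of claim 7: the child's camera-space rotation
    # expressed relative to its parent, q_rel = q_parent^-1 * q_child.
    return quat_multiply(quat_conjugate(q_parent), q_child)

def quat_to_euler(q):
    # Claim 9: convert the second rotation information into angles of a
    # predetermined (Euler-type) system; here intrinsic roll/pitch/yaw
    # in radians, an assumed convention.
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return (roll, pitch, yaw)

def constrain(angles, limits):
    # Claims 9-12: an angle inside its preset range satisfies the
    # constraint; an angle outside it is clamped to the range boundary,
    # yielding the corrected "second angle".
    return tuple(max(lo, min(hi, a)) for a, (lo, hi) in zip(angles, limits))
```

For an identity parent and a child rotated 90° about the z-axis, `relative_rotation` returns the child's own quaternion, `quat_to_euler` reports a yaw of π/2, and an assumed ±π/3 joint limit clamps it to π/3.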
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202011600QA SG11202011600QA (en) | 2019-01-18 | 2020-01-16 | Image processing method and apparatus, image device, and storage medium |
JP2020559380A JP7001841B2 (ja) | 2019-01-18 | 2020-01-16 | 画像処理方法及び装置、画像デバイス並びに記憶媒体 |
KR1020207036612A KR20210011984A (ko) | 2019-01-18 | 2020-01-16 | 이미지 처리 방법 및 장치, 이미지 디바이스, 및 저장 매체 |
US17/102,373 US11468612B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of a model based on captured images and determined information |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049830.6 | 2019-01-18 | ||
CN201910049830 | 2019-01-18 | ||
CN201910363858.7A CN111460874A (zh) | 2019-01-18 | 2019-04-30 | Image processing method and apparatus, image device and storage medium
CN201910363858.7 | 2019-04-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/102,373 Continuation US11468612B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of a model based on captured images and determined information |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020147797A1 true WO2020147797A1 (zh) | 2020-07-23 |
Family
ID=71614430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/072550 WO2020147797A1 (zh) | 2019-01-18 | 2020-01-16 | Image processing method and apparatus, image device and storage medium
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020147797A1 (zh) |
- 2020-01-16 WO PCT/CN2020/072550 patent/WO2020147797A1/zh active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160163084A1 (en) * | 2012-03-06 | 2016-06-09 | Adobe Systems Incorporated | Systems and methods for creating and distributing modifiable animated video messages |
CN108229332A (zh) * | 2017-12-08 | 2018-06-29 | 华为技术有限公司 | Skeleton posture determination method, apparatus and computer-readable storage medium |
CN109816773A (zh) * | 2018-12-29 | 2019-05-28 | 深圳市瑞立视多媒体科技有限公司 | Method for driving a skeleton model of a virtual character, plug-in and terminal device |
Non-Patent Citations (2)
Title |
---|
LIANG, FENG; ZHANG, ZHILI; LI, XIANGYANG; TANG, ZHIBO; MA, CHAO: "Research on motion control technology of virtual human's lower limb based on optical motion capture data", JOURNAL OF SYSTEM SIMULATION, vol. 27, no. 2, 28 February 2015 (2015-02-28), pages 327 - 335, XP009522211, ISSN: 1004-731X * |
ZHANG, ZUOYUN: "The design and implementation of Kinect-based motion capture system", MASTER THESIS, 2 April 2018 (2018-04-02), CN, pages 1 - 96, XP009522199, ISSN: 1674-0246 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113838177A (zh) * | 2021-09-22 | 2021-12-24 | 上海拾衷信息科技有限公司 | Hand animation production method and system |
CN114519666A (zh) * | 2022-02-18 | 2022-05-20 | 广州方硅信息技术有限公司 | Live-streaming image correction method, apparatus, device and storage medium |
CN114519666B (zh) * | 2022-02-18 | 2023-09-19 | 广州方硅信息技术有限公司 | Live-streaming image correction method, apparatus, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111460875B (zh) | Image processing method and apparatus, image device and storage medium | |
WO2020147796A1 (zh) | Image processing method and apparatus, image device and storage medium | |
US20160048993A1 (en) | Image processing device, image processing method, and program | |
US20210349529A1 (en) | Avatar tracking and rendering in virtual reality | |
WO2020147791A1 (zh) | Image processing method and apparatus, image device and storage medium | |
Gültepe et al. | Real-time virtual fitting with body measurement and motion smoothing | |
JP2019096113A (ja) | Processing apparatus, method and program for keypoint data | |
WO2015054426A1 (en) | Single-camera motion capture system | |
WO2020147797A1 (zh) | Image processing method and apparatus, image device and storage medium | |
JP2022043264A (ja) | Exercise evaluation system | |
Xie et al. | Visual feedback for core training with 3d human shape and pose | |
WO2020147794A1 (zh) | Image processing method and apparatus, image device and storage medium | |
CN111861822A (zh) | Patient model construction method, device and medical education system | |
JP7482471B2 (ja) | Method for generating a learning model | |
CN113842622A (zh) | Motion teaching method, apparatus, system, electronic device and storage medium | |
CN108109197A (zh) | Image processing modeling method | |
Su et al. | Estimating human pose with both physical and physiological constraints | |
Zhang et al. | The Application of Computer-Assisted Teaching in the Scientific Training of Sports Activities | |
Xie et al. | CoreUI: Interactive Core Training System with 3D Human Shape | |
Liu et al. | Application of VR technology in sports training in colleges and universities | |
TW202341071A (zh) | Motion image analysis method | |
Zhu | Efficient and robust photo-based methods for precise shape and pose modeling of human subjects | |
Hendry | Markerless pose tracking of a human subject. | |
Diamanti | Motion Capture in Uncontrolled Environments |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20741004; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2020559380; Country of ref document: JP; Kind code of ref document: A
| ENP | Entry into the national phase | Ref document number: 20207036612; Country of ref document: KR; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20741004; Country of ref document: EP; Kind code of ref document: A1