WO2020147794A1 - Image processing method and apparatus, image device and storage medium


Info

Publication number
WO2020147794A1
Authority
WO
WIPO (PCT)
Prior art keywords
type, image, coordinates, information, target
Application number
PCT/CN2020/072526
Other languages
French (fr)
Chinese (zh)
Inventor
汪旻
谢符宝
刘文韬
钱晨
马利庄
Original Assignee
北京市商汤科技开发有限公司
Priority claimed from CN201910362107.3A (granted as CN111460872B)
Application filed by 北京市商汤科技开发有限公司
Priority to SG11202011596WA
Priority to KR1020207036619A
Priority to JP2020567116A
Publication of WO2020147794A1
Priority to US17/102,331 (published as US20210074004A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Description

  • the present disclosure relates to the field of information technology, and in particular to an image processing method and device, image equipment and storage medium.
  • embodiments of the present disclosure provide an image processing method and device, image equipment, and storage medium.
  • the present disclosure provides an image processing method, including:
  • acquiring the local features of the target based on the image includes: acquiring the first-type feature of the first-type part of the target based on the image; and/or acquiring the second-type feature of the second-type part of the target based on the image.
  • acquiring the first-type feature of the first-type part of the target based on the image includes: acquiring the expression feature of the head and the intensity coefficient of the expression feature based on the image.
  • obtaining the intensity coefficient of the expression feature based on the image includes: obtaining, based on the image, an intensity coefficient representing each sub-part of the first type of part.
  • the determining of the local motion information based on the features includes: determining the motion information of the head based on the expression features and the intensity coefficient; the controlling, according to the motion information, of the movement of the corresponding part of the controlled model includes: controlling the expression change of the head of the controlled model according to the motion information of the head.
  • the acquiring the second-type feature of the second-type part of the target based on the image includes: acquiring the position information of the key points of the second-type part of the target based on the image; the determining the local motion information based on the feature includes: determining the motion information of the second-type part based on the position information.
  • acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring the first coordinates of the support key points of the second-type part of the target based on the image; and obtaining the second coordinates based on the first coordinates.
  • acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring the first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtaining the second coordinates based on the first coordinates includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, the first 3D coordinates corresponding to the first 2D coordinates.
  • acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtaining the second coordinates based on the first coordinates includes: obtaining the third 3D coordinates based on the second 3D coordinates.
  • obtaining the third 3D coordinates based on the second 3D coordinates includes: correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second-type part in the 3D image, thereby obtaining the third 3D coordinates.
  • determining the motion information of the second type of part based on the position information includes: determining the quaternion of the second type of part based on the position information.
  • acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring the first position information of the support key points of the first part in the second-type part; and acquiring the second position information of the support key points of the second part in the second-type part.
  • determining the motion information of the second-type part based on the position information includes: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
  • the controlling the movement of the corresponding part of the controlled model according to the motion information includes: controlling the movement of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the movement of the part of the controlled model corresponding to the second part according to the motion information of the second part.
  • the first part is the trunk; and/or the second part is the upper limbs, the lower limbs, or all four limbs.
  • an image processing device including:
  • the first acquisition module is used to acquire an image; the second acquisition module is used to acquire local features of the target based on the image; the first determination module is configured to determine local motion information based on the features; and the control module is used to control the movement of the corresponding part of the controlled model according to the motion information.
  • the present disclosure provides an image device including: a memory; and a processor connected to the memory and configured to implement any of the above image processing methods by executing computer-executable instructions stored in the memory.
  • the present disclosure provides a non-volatile computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed by a processor, any of the aforementioned image processing methods can be implemented.
  • the technical solutions provided by the embodiments of the present disclosure obtain the local features of the target from the acquired image, then obtain local motion information based on those features, and finally control the movement of the corresponding part of the controlled model according to the motion information.
  • in this way, when the controlled model is used to simulate the movement of the target in a live video broadcast, the movement of the controlled model can be controlled precisely, so that the controlled model accurately simulates the movement of the target. On the one hand, live video is realized; on the other hand, the user's privacy is protected.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the disclosure.
  • FIG. 2 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
  • FIGS. 3A to 3C are schematic diagrams of a controlled model simulating changes in the hand movement of a collected user according to an embodiment of the disclosure.
  • FIGS. 4A to 4C are schematic diagrams of a controlled model simulating changes in the torso movement of a collected user according to an embodiment of the disclosure.
  • FIGS. 5A to 5C are schematic diagrams of a controlled model simulating the movement of a collected user's feet according to an embodiment of the disclosure.
  • FIG. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
  • FIG. 7A is a schematic diagram of key points of a skeleton provided by an embodiment of the disclosure.
  • FIG. 7B is a schematic diagram of a skeleton key point provided by another embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a skeleton provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a local coordinate system of different bones of a human body provided by an embodiment of the disclosure.
  • FIG. 10 is a schematic structural diagram of an image device provided by an embodiment of the disclosure.
  • this embodiment provides an image processing method, which includes the following steps.
  • Step S110: Acquire an image.
  • Step S120: Acquire local features of the target based on the image.
  • Step S130: Determine local motion information based on the features.
  • Step S140: Control the movement of the corresponding part of the controlled model according to the motion information.
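As an illustration only, the four steps above could be organized as in the following Python sketch; all function and class names here are hypothetical placeholders, not APIs from the disclosure.

```python
import numpy as np

# Hypothetical stand-ins for the detector and the controlled model; the
# disclosure does not prescribe concrete APIs.
def extract_local_features(image):
    # S120: run e.g. a keypoint / expression detector on the image.
    return {"head": np.zeros(10), "torso": np.zeros((14, 3))}

def determine_motion_info(features):
    # S130: turn each part's features into motion information
    # (here: identity quaternions in (x, y, z, w) form).
    return {part: np.array([0.0, 0.0, 0.0, 1.0]) for part in features}

class ControlledModel:
    def set_part_motion(self, part, quat):
        # S140: apply the rotation to the corresponding joint of the model.
        print(part, quat)

def process_frame(image, model):
    features = extract_local_features(image)   # S120
    motion = determine_motion_info(features)   # S130
    for part, info in motion.items():          # S140
        model.set_part_motion(part, info)

process_frame(np.zeros((480, 640, 3), np.uint8), ControlledModel())  # S110: acquired image
```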
  • the image processing method provided in this embodiment can drive the movement of the controlled model through image processing.
  • the image processing method provided in this embodiment can be applied to an image device, which can be any of various electronic devices capable of image processing, for example, an electronic device that performs image collection, image display, and image pixel reorganization to generate images.
  • the image device includes but is not limited to various terminal devices, for example, a mobile terminal and/or a fixed terminal; it may also include various image servers capable of providing image services.
  • the mobile terminal includes portable devices such as mobile phones or tablet computers that are easy for users to carry, and may also include devices worn by users, such as smart bracelets, smart watches, or smart glasses.
  • the fixed terminal includes a fixed desktop computer and the like.
  • the image acquired in step S110 may be a 2D (two-dimensional) image or a 3D (three-dimensional) image.
  • the 2D image may include images collected by a single-lens or multi-lens camera, such as red, green, and blue (RGB) images.
  • the 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates; the 3D image may also be an image collected by a 3D camera.
  • the method for acquiring the image may include: collecting the image using a camera of the image device itself; and/or receiving the image from an external device; and/or reading the image from a local database or local storage.
  • the step S120 may include: detecting the image to obtain a feature of a part of the target, and the part may be any part of the target.
  • the step S120 may include: detecting the image and acquiring features of at least two parts of the target, and the two parts may be different parts of the target.
  • the two parts may be continuously distributed on the target, or may be distributed on the target at intervals.
  • any part may include any of the following: head, trunk, limbs, upper limbs, lower limbs, hands, and feet; the at least two parts may include at least two of the following: head, trunk, limbs, upper limbs, lower limbs, hands, and feet.
  • the target is not limited to humans, but can also be various movable living or non-living objects such as animals.
  • one or more local features are acquired, and the features can be features that characterize the local spatial structure information, position information, or motion state in various forms.
  • a deep learning model such as a neural network can be used to detect the image to obtain the feature.
  • the feature may represent the relative positional relationship between joint points in the human skeleton.
  • the feature can characterize the change in position of the corresponding joint points in the human skeleton at adjacent time points, or the feature can characterize the positional relationship between the human skeleton in the current picture and an initial coordinate system (also called the camera coordinate system).
  • the feature may include: the 3D coordinates of each joint point in the human skeleton detected by a deep learning model (such as the neural network used in the OpenPose project) in the world coordinate system.
  • the feature may include: optical flow feature that characterizes the change of the posture of the human body.
  • the acquired image may be one frame of image or multiple frames of image.
  • if a single frame is acquired, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in the camera coordinate system.
  • if multiple frames are acquired, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in previous frames, or relative to the corresponding joint points in the camera coordinate system. This application does not limit the number of images acquired.
  • the motion information characterizes the change of the motion corresponding to the part and/or the change of the expression caused by the change of the motion.
  • for example, in step S140, the part of the controlled model corresponding to the head is controlled to move, and the part of the controlled model corresponding to the torso is controlled to move.
  • the motion information includes, but is not limited to: the coordinates of the key points corresponding to the part, where the coordinates include but are not limited to 2D coordinates and 3D coordinates; the coordinates can characterize the change of the key points of the part relative to a reference position, and thus can characterize the motion state of the corresponding part.
  • the motion information can be expressed in various information forms such as vectors, arrays, one-dimensional numerical values, and matrices.
  • the controlled model may be a model corresponding to the target.
  • if the target is a person, the controlled model is a human body model; if the target is an animal, the controlled model may be a body model of the corresponding animal; if the target is a vehicle, the controlled model may be a vehicle model.
  • the controlled model is a model for the category of the target.
  • the model can be predetermined and can be further divided into multiple styles.
  • the style of the controlled model may be determined based on user instructions, and the style of the controlled model may include a variety of styles, for example, a real-life style that simulates a real person, an anime style, an internet celebrity style, styles of different temperaments, and game styles. Among them, different temperament styles can be literary style or rock style.
  • the controlled model can be a role in the game.
  • for example, in an online teaching scenario, if the teacher's own video were broadcast directly, the teacher's face and body shape would be exposed.
  • the image of the teacher's movement can be obtained through image collection and other methods, and then the movement of a virtual controlled model can be controlled through feature extraction and movement information acquisition.
  • the controlled model can simulate the teacher's movement through its own limb movement to complete the physical education teaching.
  • the teacher’s face and body shape do not need to be directly exposed to the teaching.
  • the teacher’s privacy is protected.
  • if the method of this embodiment is used with a vehicle model to simulate the movement of a real vehicle in a surveillance video, the vehicle's license plate information and/or the overall outline of the vehicle can be retained in the surveillance video, while the vehicle's brand, model, color, and age can all be hidden to protect user privacy.
  • the step S120 may include the following steps.
  • Step S121: Based on the image, obtain the first-type feature of the first-type part of the target.
  • Step S122: Based on the image, obtain the second-type feature of the second-type part of the target.
  • the first type of features and the second type of features are features that characterize the spatial structure information, position information, and/or motion state of the corresponding part.
  • different types of features have different characteristics; applying each type of feature to the matching type of part yields higher accuracy.
  • different features capture the spatial changes caused by movement with different accuracy.
  • in this embodiment, the face and the limbs, which are different types of parts, are each represented by features that match the accuracy required for the face or the limbs.
  • the first-type features of the first-type part and the second-type features of the second-type part are obtained separately.
  • the first-type part and the second-type part are different types of parts; different types of parts can be distinguished by their movable range, or by the fineness of their movement.
  • the first type of part and the second type of part may be two types of parts with a relatively large difference in the maximum amplitude of motion.
  • the first type of part can be the head. All the facial features of the head can move, but their movements are relatively small; the head as a whole can also move, for example by nodding or shaking, but its movement range is small relative to the movement range of the limbs or the trunk.
  • the second type of part can be the upper limbs, the lower limbs, or all four limbs, whose movement range is very large. If the motion states of the two types of parts were represented by the same feature, accuracy might drop or algorithm complexity might increase because of the motion amplitude of one of the parts.
  • different types of features are used to obtain motion information according to the characteristics of different types of parts.
  • in this way, the computation for at least one type of part can be reduced, and the accuracy of the motion information can be improved.
  • the first-type features and the second-type features are acquired by different subjects, for example, acquired by using different deep learning models or deep learning modules.
  • the first type of feature and the second type of feature have different acquisition logic.
  • the step S121 may include: obtaining the facial expression features of the head based on the image.
  • the first type of part is the head
  • the head includes the face
  • the expression features include but are not limited to at least one of the following: eyebrow movement, mouth movement, nose movement, eye movement, and cheek movement.
  • the movement of the eyebrows can include: eyebrow lifting and drooping.
  • Mouth movements can include: opening, closing, flattening, pouting, grinning, and baring teeth.
  • the movement of the nose may include: contraction of the nose produced by inhaling, and the nose-extension movement that accompanies blowing outward.
  • Eye movement may include, but is not limited to: eye socket movement and/or eyeball movement.
  • the movement of the eye socket will change the size and/or shape of the eye socket, for example, the shape and size of the eye socket of squinting, staring, and smiling eyes will change.
  • the movement of the eyeball may include: changes in the position of the eyeball within the eye socket.
  • the change of the user's line of sight may cause the eyeball to be located at different positions of the eye socket, and the movement of the left and right eyeballs together can reflect the different emotional states of the user.
  • as for cheek movement, some users show dimples when they laugh, and the shape of their cheeks changes accordingly.
  • the movement of the head is not limited to the expression movement
  • the first-type feature is not limited to the expression feature; it may also include hair movement features, such as the movement of the hair on the head;
  • the first type of features may also include the overall head movement features such as head shaking and/or head nodding.
  • the step S121 further includes: obtaining the intensity coefficient of the expression feature based on the image.
  • the intensity coefficient may correspond to the facial expression amplitude.
  • the intensity coefficient here can be used to characterize the intensity of the expression action, for example, the intensity can be the magnitude of the expression action.
  • the greater the intensity coefficient, the higher the intensity it characterizes.
  • for example, the higher the intensity coefficient, the larger the amplitude of the mouth-opening expression base, the larger the amplitude of the pouting expression base, and so on.
  • for the eyebrow-raising expression base, the greater the intensity coefficient, the higher the eyebrows are raised.
  • in this way, the controlled model can not only simulate the target's current action, but also accurately simulate the intensity of the target's current expression, realizing accurate expression migration.
  • the controlled object is a game character.
  • the game character can not only be controlled by the user's body movements, but can also accurately simulate the user's facial expressions. In this way, the fidelity of the game scene is improved, and the user's gaming experience is improved.
  • in some embodiments, acquiring the first-type feature includes acquiring mesh information of the first-type part; the mesh information includes but is not limited to: quadrilateral mesh information and/or triangular patch information.
  • the quadrilateral mesh information is the information of the longitude and latitude lines of the mesh; the triangular patch information is the information of triangular patches each connecting three key points.
  • the mesh information is formed from a predetermined number of face key points covering the body surface of the face, and the intersections of the longitude and latitude lines in the grid represented by the quadrilateral mesh information may be the locations of face key points.
  • the change in the position of the intersection of the grid is the change in expression.
  • the expression feature and intensity coefficient obtained based on the quadrilateral grid information can be used for precise control of the facial expression of the controlled model.
  • the vertices of the triangular patches corresponding to the triangular patch information include face key points, and changes in the positions of these key points are expression changes.
  • the expression features and intensity coefficients obtained based on the triangular patch information can be used for precise control of the facial expressions of the controlled model.
  • obtaining the intensity coefficient of the expression feature may include: obtaining an intensity coefficient representing each sub-part in the first type of part based on the image.
  • each sub-part corresponds to at least one expression base, and some sub-parts may correspond to multiple expression bases; one expression base corresponds to one type of expression action of a facial feature.
  • the intensity coefficient characterizes the magnitude of the expression action.
  • the step S130 may include: determining the movement information of the head based on the expression feature and the intensity coefficient; the step S140 may include: controlling the expression change of the corresponding head of the controlled model based on the movement information of the head.
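A common way to realize expression bases weighted by intensity coefficients is a linear blendshape model; the sketch below is an illustrative assumption, not the disclosure's stated implementation.

```python
import numpy as np

def blend_expression(neutral, expression_bases, intensities):
    """neutral: (N, 3) mesh vertices; expression_bases: (K, N, 3); intensities: (K,)."""
    offsets = expression_bases - neutral                 # per-base displacement from neutral
    return neutral + np.tensordot(intensities, offsets, axes=1)

neutral = np.zeros((4, 3))                               # toy 4-vertex face mesh
bases = np.stack([neutral + 0.1, neutral - 0.2])         # e.g. "mouth open", "brow raise"
print(blend_expression(neutral, bases, np.array([0.8, 0.3])))
```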
  • the step S122 may include: acquiring the position information of the key points of the second-type part of the target based on the image.
  • the position information may be represented by the position information of the key points of the target, and the key points may include: support key points and outer-contour key points. Taking a person as an example, the support key points may be the skeleton key points of the human body, and the contour key points may be the key points of the outer contour of the human body surface. This application does not limit the number of key points, but the key points must represent at least part of the skeleton.
  • the position information may be represented by coordinates, for example, 2D coordinates and/or 3D coordinates of a predetermined coordinate system.
  • the predetermined coordinate system includes but is not limited to the image coordinate system where the image is located.
  • the location information can be the coordinates of key points, which is obviously different from the aforementioned mesh information. Since the second type of part is different from the first type of part, the use of position information can more accurately characterize the movement of the second type of part.
  • the step S130 may include: determining the motion information of the second-type part based on the position information.
  • the second type of part includes but is not limited to: trunk and/or limbs; trunk and/or upper limbs, trunk and/or lower limbs.
  • the step S122 may specifically include: obtaining the first coordinates of the support key points of the second-type part of the target based on the image; and obtaining the second coordinates based on the first coordinates.
  • Both the first coordinates and the second coordinates are coordinates that characterize support key points. If the target is a human or an animal, the support key points here are skeleton key points.
  • the first coordinate and the second coordinate may be different types of coordinates.
  • the first coordinate is a 2D coordinate in a 2D coordinate system
  • the second coordinate is a 3D coordinate in a 3D coordinate system.
  • the first coordinate and the second coordinate may also be the same type of coordinate.
  • the second coordinate is the coordinate after the first coordinate is corrected.
  • the first coordinate and the second coordinate are the same type of coordinate.
  • the first coordinate and the second coordinate are both 3D coordinates or both are 2D coordinates.
  • in some embodiments, acquiring the first coordinates of the support key points of the second-type part of the target includes: acquiring the first 2D coordinates of the support key points of the second-type part based on a 2D image; obtaining the second coordinates based on the first coordinates then includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, the first 3D coordinates corresponding to the first 2D coordinates.
  • in other embodiments, acquiring the first coordinates of the support key points of the second-type part of the target includes: acquiring the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; obtaining the second coordinates based on the first coordinates then includes: obtaining the third 3D coordinates based on the second 3D coordinates.
  • the 3D image directly acquired in step S110 includes: a 2D image and a depth image corresponding to the 2D image.
  • the 2D image can provide the coordinate values of the support key points in the xoy plane, and the depth values in the depth image can provide the coordinates of the support key points on the z-axis.
  • the z axis is perpendicular to the xoy plane.
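Combining the xoy coordinates from the 2D image with the z value from the depth image amounts to a back-projection; the sketch below assumes a pinhole camera with intrinsics (fx, fy, cx, cy), which are illustrative values not given in the text.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D support key point (u, v) plus its depth into camera-space 3D."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])   # z-axis perpendicular to the xoy plane

print(backproject(320.0, 240.0, 2.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```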
  • obtaining the third 3D coordinates based on the second 3D coordinates includes: correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second-type part in the 3D image, so as to obtain the third 3D coordinates.
  • in this embodiment, a 3D model is used to first extract the second 3D coordinates from the 3D image; then, taking into account occlusion between different parts of the target, correction is applied so that the correct third 3D coordinates of the different parts of the target in 3D space are obtained, thereby ensuring the control accuracy of the subsequent controlled model.
  • the step S130 may include: determining the quaternion of the second type of part based on the position information.
  • the motion information is not limited to a quaternion representation; it can also be represented by coordinate values relative to a reference point in various coordinate systems, for example Euler angles or Lagrangian coordinates, which may replace the quaternion.
  • the quaternion can accurately describe the spatial position and/or rotation of the second-type part.
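One plausible way (an assumption, not the disclosure's stated algorithm) to derive a quaternion from key-point position information is to compute the rotation that takes a bone's rest direction to its currently detected direction:

```python
import numpy as np

def quat_between(a, b):
    """Unit quaternion (x, y, z, w) rotating direction a onto direction b (a != -b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    xyz = np.cross(a, b)              # rotation axis, scaled by sin(theta)
    w = 1.0 + np.dot(a, b)            # 1 + cos(theta)
    q = np.array([*xyz, w])
    return q / np.linalg.norm(q)

rest = np.array([0.0, 1.0, 0.0])      # forearm direction in the rest pose (illustrative)
now = np.array([1.0, 0.0, 0.0])       # direction computed from detected key points
print(quat_between(rest, now))
```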
  • the step S120 may include: acquiring the first position information of the support key points of the first part in the second-type part; and acquiring the second position information of the support key points of the second part in the second-type part.
  • the second type of parts may include at least two different parts.
  • the controlled model can simultaneously simulate at least two local motions of the target.
  • the step S130 may include: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
  • the step S140 may include: controlling the movement of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the movement of the part of the controlled model corresponding to the second part according to the motion information of the second part.
  • the first part is the trunk; the second part is the upper limbs, lower limbs or limbs.
  • the method further includes: determining the second-type motion information of the connecting part according to the features of the at least two parts and a first motion constraint condition of the connecting part, where the connecting part is used to connect two other parts; and controlling the movement of the connecting part of the controlled model according to the second-type motion information.
  • some local motion information can be separately obtained through a motion information acquisition model, and the motion information obtained in this way may be referred to as the first type of motion information.
  • Some parts are connecting parts connecting other two or more parts, and the motion information of these connecting parts is called the second type of motion information for convenience in this embodiment.
  • the second-type motion information here is also information that characterizes the motion state of a part of the target.
  • the second type of motion information may be determined based on the first type of motion information of the two parts connected by the connecting portion.
  • the difference between the second-type motion information and the first-type motion information is: the second-type motion information is the motion information of a connecting part, while the first-type motion information is the motion information of parts other than connecting parts; the first-type motion information is generated solely from the motion state of the corresponding part, whereas the second-type motion information may depend on the motion information of the other parts connected to the corresponding connecting part.
  • the step S140 may include: determining a control method for controlling the connecting part according to the type of the connecting part; and controlling the movement of the connecting part of the controlled model according to the control method and the second-type motion information.
  • the connecting part can be used to connect the other two parts.
  • the neck, wrist, ankle, and waist are all connecting parts that connect two other parts.
  • the motion information of these connecting parts may be inconvenient to detect directly, or may depend to some extent on adjacent parts. Therefore, in this embodiment, the motion information of the connecting part can be determined based on the first-type motion information of the two or more other parts connected to it, thereby obtaining the second-type motion information corresponding to the connecting part.
  • the corresponding control method will be determined according to the type of the connecting part, so as to achieve precise control of the corresponding connecting part in the controlled model.
  • for the lateral rotation of the wrist, taking the line extending from the upper arm to the hand as the rotation axis, the lateral rotation of the wrist is actually caused by the rotation of the upper arm.
  • similarly, for the lateral rotation of the ankle, taking the extension direction of the calf as the rotation axis, the rotation of the ankle is directly driven by the calf; it is also possible that the thigh drives the calf, and the calf in turn drives the ankle.
  • determining the control method for controlling the connecting part according to the type of the connecting part includes: if the connecting part is a first-type connecting part, determining to adopt the first-type control method, where the first-type control method is used to directly control the movement of the part of the controlled model corresponding to the first-type connecting part.
  • the rotation of the first type of connecting portion is not driven by other parts.
  • the connecting portion further includes a second type of connecting portion other than the first type of connecting portion.
  • here, the movement of the second-type connecting part is not generated by the part itself, but is driven by other parts.
  • determining the control method for controlling the connecting part according to the type of the connecting part includes: if the connecting part is a second-type connecting part, determining to adopt the second-type control method, where the second-type control method is used to indirectly control the movement of the second-type connecting part by controlling parts of the controlled model other than the second-type connecting part.
  • the parts other than the second-type connecting portion include but are not limited to: the part directly connected to the second-type connecting portion, or the part indirectly connected to the second-type connecting portion.
  • for example, when the entire upper limb is moving, the shoulder and the elbow are rotating.
  • the rotation of the wrist can be indirectly driven by controlling the lateral rotation of the shoulder and/or elbow.
  • controlling the movement of the connecting part of the controlled model according to the control method and the second-type motion information includes: in the case of the second-type control method, decomposing the second-type motion information to obtain the first-type rotation information of the connecting part being pulled to rotate by a traction part; adjusting the motion information of the traction part according to the first-type rotation information; and using the adjusted motion information of the traction part to control the movement of the traction part in the controlled model, thereby indirectly controlling the movement of the connecting part.
  • the first-type rotation information is not rotation information generated by the movement of the second-type connecting part itself; rather, it is the motion information produced when the second-type connecting part is pulled by the movement of the other parts connected to it (that is, the traction part), expressed relative to a specific reference point of the target (for example, the center of the human body).
  • the traction part is a part directly connected with the second-type connecting part. Taking the wrist as the second-type connecting part, the traction part is the elbow, or even the shoulder, above the wrist; taking the ankle as the second-type connecting part, the traction part is the knee, or even the thigh root, above the ankle.
  • the lateral rotation of the wrist along the straight line from the shoulder to the elbow to the wrist may be caused by the rotation of the shoulder or the elbow.
  • therefore, the lateral rotation information should be assigned to the elbow or the shoulder. Through this transferred assignment, the motion information of the elbow or shoulder is adjusted, and the adjusted motion information is used to control the movement of the elbow or shoulder in the controlled model; viewed in the rendered image, the lateral rotation corresponding to the elbow or shoulder is then reflected at the wrist of the controlled model, so that the controlled model accurately simulates the movement of the target.
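One way to realize this transfer is a swing-twist decomposition: split the wrist rotation into a twist about the forearm axis (re-assigned to the elbow or shoulder) and a residual swing (kept at the wrist). This is an illustrative assumption; the disclosure does not name a specific decomposition.

```python
import numpy as np

def quat_mul(q1, q2):  # Hamilton product, (x, y, z, w) convention
    x1, y1, z1, w1 = q1; x2, y2, z2, w2 = q2
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2])

def quat_conj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def swing_twist(q, axis):
    """Split unit quaternion q into swing and twist about the (unit) bone axis."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[:3], axis) * axis          # axis-parallel part of the rotation
    twist = np.array([*proj, q[3]])
    twist = twist / np.linalg.norm(twist)      # rotation about the bone axis
    swing = quat_mul(q, quat_conj(twist))      # remaining bend, q = swing * twist
    return swing, twist

bone_axis = np.array([0.0, 1.0, 0.0])          # forearm direction, illustrative
wrist_q = np.array([0.0, np.sin(0.4), 0.0, np.cos(0.4)])  # a pure twist, for illustration
swing, twist = swing_twist(wrist_q, bone_axis)
# 'twist' would be re-assigned to the elbow/shoulder; 'swing' stays at the wrist.
print(swing, twist)
```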
  • the method further includes: decomposing the second-type motion information to obtain the second-type rotation information of the second-type connecting part rotating relative to the traction part; and using the second-type rotation information to control the rotation of the connecting part of the controlled model relative to the traction part.
  • the first-type rotation information is obtained directly by a rotation-information extraction model from the features of the image, while the second-type rotation information is obtained by adjusting the first-type rotation information.
  • the motion information of the second-type connecting part relative to a predetermined posture can be obtained from the features of the second-type connecting part, for example its 2D coordinates or 3D coordinates; this motion information is called the second-type motion information.
  • the second type of motion information includes but is not limited to rotation information.
  • the second-type connecting part includes: a wrist; an ankle.
  • if the second-type connecting part is a wrist, the traction part corresponding to the wrist includes the forearm and/or the upper arm; and/or, if the second-type connecting part is an ankle, the traction part corresponding to the ankle includes the calf and/or the thigh.
  • the first type of connecting portion includes a neck connecting the head and the torso.
  • determining the motion information of the connecting part according to the features of the at least two parts and the first motion constraint condition of the connecting part includes: determining the orientation information of the at least two parts according to their features; determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts; and determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition.
  • determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts includes: determining a first candidate orientation and a second candidate orientation of the connecting part according to the orientation information of the at least two parts.
  • Two included angles may be formed between the orientation information of the two parts, and these two angles correspond to rotation information of different orientations of the connecting part; the orientations corresponding to the two angles are therefore both candidate orientations. Only one candidate orientation satisfies the first motion constraint condition on the movement of the connecting part, so the second-type motion information needs to be determined according to the target orientation that satisfies the first motion constraint condition.
  • the included angle of rotation that satisfies the first motion constraint condition is used as the second type of motion information.
  • for example, the first motion constraint condition of the neck, which connects the face and the torso, is a rotation between -90 and 90 degrees; angles exceeding 90 degrees are excluded according to the first motion constraint condition. In this way, abnormal situations in which the rotation angle exceeds 90 degrees clockwise or counterclockwise (for example, 120 degrees or 180 degrees) can be reduced while the controlled model simulates the movement of the target. If the first motion constraint condition is between -90 and 90 degrees, it corresponds to two extreme angles: one is -90 degrees and the other is 90 degrees.
  • if the detected rotation angle falls outside this range, it is modified to a limit angle defined by the first motion constraint condition. For example, if a rotation angle exceeding 90 degrees is detected, the detected rotation angle is replaced by the limit angle closer to it, namely 90 degrees.
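A minimal sketch of applying such a constraint, clamping a detected angle to the limit range:

```python
def constrain_angle(angle_deg, lo=-90.0, hi=90.0):
    """Clamp a detected rotation angle to the first motion constraint's range."""
    return max(lo, min(hi, angle_deg))

print(constrain_angle(120.0))   # -> 90.0, the limit angle closest to the detection
print(constrain_angle(-30.0))   # -> -30.0, already within the constraint
```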
  • determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition includes: selecting, from the first candidate orientation information and the second candidate orientation information, the target orientation information that lies within the allowed range of orientation change; and determining the motion information of the connecting part according to the target orientation information.
  • for example, the corresponding neck orientation may be 90 degrees to the right or 270 degrees to the left.
  • however, the human neck cannot rotate 270 degrees to the left so that it ends up facing right.
  • thus, 90 degrees to the right and 270 degrees to the left are both candidate orientation information for the neck, and the actual orientation of the neck needs to be determined further according to the aforementioned first motion constraint condition.
  • here, 90 degrees to the right is the target orientation information of the neck, and from it, the second-type motion information of the neck relative to the camera coordinate system is obtained as a rotation of 90 degrees to the right.
  • the target orientation information here is the information that satisfies the first motion constraint condition.
  • determining the orientation information of the at least two parts according to their features includes: acquiring a first key point and a second key point of each of the at least two parts; acquiring a first reference point of each of the at least two parts, where the first reference point is a predetermined key point in the target; generating a first vector based on the first key point and the first reference point, and generating a second vector based on the second key point and the first reference point; and determining the orientation information of each of the at least two parts based on the first vector and the second vector.
  • the first reference point of the first part may be the waist key point of the target or the midpoint between the key points of the two hips. If the second of the two parts is the face, the first reference point of the second part may be the connection point of the neck and the shoulders.
  • determining the orientation information of each of the at least two parts includes: taking the cross product of the first vector and the second vector of a part to obtain the normal vector of the plane in which the part lies; and using the normal vector as the orientation information of that part.
  • once the normal vector is determined, the orientation of the plane in which the part lies is also determined.
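A direct sketch of this computation with illustrative key-point values (the specific points used as the first/second key points and the reference point are assumptions):

```python
import numpy as np

def part_orientation(key1, key2, reference):
    v1 = key1 - reference                   # first vector
    v2 = key2 - reference                   # second vector
    normal = np.cross(v1, v2)               # normal of the plane spanned by the part
    return normal / np.linalg.norm(normal)  # orientation information of the part

left_shoulder = np.array([-0.2, 1.4, 0.0])
right_shoulder = np.array([0.2, 1.4, 0.0])
hip_mid = np.array([0.0, 0.9, 0.0])         # e.g. midpoint of the two hip key points
print(part_orientation(left_shoulder, right_shoulder, hip_mid))
```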
  • determining the motion information of the connecting part based on the motion information of the at least two parts includes: acquiring a fourth 3D coordinate of the connecting part relative to a second reference point; and obtaining the absolute rotation information of the connecting part according to the fourth 3D coordinate; controlling the movement of the corresponding part of the controlled model according to the motion information then includes: controlling the movement of the corresponding connecting part of the controlled model based on the absolute rotation information.
  • the second reference point may be one of the support key points of the target. Taking a person as the target, the second reference point may be a key point of a part connected by the first-type connecting part.
  • the second reference point may be the key point of the shoulder connected to the neck.
  • the second reference point may be the same as the first reference point.
  • both the first reference point and the second reference point may be the root node of the human body, and the root node of the human body may be the human crotch.
  • the root node includes but is not limited to the key point 0 shown in FIG. 7B.
  • Fig. 7B is a schematic diagram of the skeleton of the human body, and Fig. 7B includes 17 skeleton joint points numbered 0-16.
  • controlling the movement of the corresponding connecting part of the controlled model further includes: decomposing the absolute rotation information according to the traction hierarchy among the plurality of connecting parts in the target to obtain relative rotation information; and controlling the movement of the corresponding connecting part in the controlled model based on the relative rotation information.
  • An example hierarchy is as follows. First level: pelvis; second level: waist; third level: thighs (for example, left thigh and right thigh); fourth level: calves (for example, left calf and right calf); fifth level: feet.
  • Another hierarchy is as follows. First level: chest; second level: neck; third level: head.
  • the levels decrease successively, and the movement of a higher-level part affects the movement of a lower-level part. Therefore, the level of the traction part is higher than the level of the connecting part.
  • the relative rotation information can be obtained by the following calculation. First, the rotation quaternions {Q_0, Q_1, ..., Q_18} of each key point relative to the camera coordinate system are detected; then the rotation quaternion q_i of each key point relative to its parent key point is calculated by formula (1):

        q_i = Q_parent(i)^{-1} · Q_i    (1)

  • the parent key point parent(i) is the key point one level above the current key point i; Q_i is the rotation quaternion of the current key point i relative to the camera coordinate system; and Q_parent(i)^{-1} is the reverse rotation parameter of the parent key point. For example, if Q_parent(i) represents a rotation of 90 degrees, then Q_parent(i)^{-1} represents a rotation of -90 degrees.
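Formula (1) in quaternion code; the (x, y, z, w) convention and the two-joint example are illustrative assumptions:

```python
import numpy as np

def quat_mul(q1, q2):  # Hamilton product, (x, y, z, w) convention
    x1, y1, z1, w1 = q1; x2, y2, z2, w2 = q2
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2])

def quat_inv(q):       # inverse of a unit quaternion is its conjugate
    return np.array([-q[0], -q[1], -q[2], q[3]])

def relative_rotation(Q, parent, i):
    """Formula (1): rotation of key point i relative to its parent key point."""
    return quat_mul(quat_inv(Q[parent[i]]), Q[i])

def about_z(deg):      # helper: quaternion for a rotation about the z-axis
    r = np.radians(deg) / 2.0
    return np.array([0.0, 0.0, np.sin(r), np.cos(r)])

# Parent rotated +90 deg about z, child +120 deg: the child's relative
# rotation comes out as +30 deg about z.
Q = {0: about_z(90.0), 1: about_z(120.0)}
parent = {1: 0}
print(relative_rotation(Q, parent, 1))
```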
  • controlling the movement of the corresponding connecting part of the controlled model further includes: correcting the relative rotation information according to a second constraint condition; and controlling the movement of the corresponding connecting part in the controlled model based on the corrected relative rotation information.
  • the second constraint condition includes: a rotatable angle of the connecting portion.
  • the method further includes: performing posture defect correction on the second-type motion information to obtain corrected second-type motion information; controlling the movement of the connecting part of the controlled model according to the second-type motion information then includes: controlling the movement of the connecting part of the controlled model using the corrected second-type motion information.
  • posture defect correction can be performed on the second type of motion information to obtain the corrected second type of motion information.
  • the method further includes: performing posture defect correction on the first-type motion information to obtain corrected first-type motion information; the step S140 may then include: controlling the movement of the corresponding part of the controlled model using the corrected first-type motion information.
  • the posture defect correction includes at least one of the following: a synchronization defect of the upper and lower limbs; a bow-legged movement defect; an out-toed (splay-footed) movement defect; an in-toed (pigeon-toed) movement defect.
  • the method further includes: obtaining a posture defect correction parameter according to the difference information between the form of the target and a standard form, where the posture defect correction parameter is used for the correction of the first-type motion information and/or the second-type motion information.
  • the shape of the target is detected first, and then the detected shape is compared with the standard shape to obtain difference information; posture defect correction is performed through the difference information.
  • for example, a prompt to maintain a predetermined posture is output on the display interface; after seeing the prompt, the user maintains the predetermined posture, so that the image device can collect an image of the user in that posture; then, through image detection, it is determined how closely the user's posture matches the standard, thereby obtaining the difference information.
  • the predetermined posture may include, but is not limited to, the upright posture of the human body.
  • for example, in a standard standing posture, the toes and the heels of the two feet should be parallel to each other; if the target's form deviates from this, the first-type motion information and/or second-type motion information obtained from the target's features is corrected accordingly. This correction of a non-standard form is the posture defect correction.
  • the method further includes: correcting the proportions of different parts of the standard model according to the proportion relations of different parts of the target to obtain the corrected controlled model.
  • the proportional relationships between the various parts of different targets may differ. For example, taking people as an example, the ratio of leg length to head length of a professional model is larger than that of an ordinary person; some people have fuller buttocks, so the distance between their hips may be larger than that of ordinary people.
  • the standard model may be a mean model obtained based on a large amount of human body data.
  • the proportions of the different parts of the standard model are corrected according to the proportional relationships of the different parts of the target to obtain the corrected controlled model.
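A minimal sketch of such a proportion correction, assuming per-bone lengths measured from key points (the bone names and values are illustrative):

```python
import numpy as np

def correct_proportions(standard_lengths, target_lengths):
    """Per-bone scale factors: target length over standard length (1.0 if unmeasured)."""
    return {bone: target_lengths.get(bone, length) / length
            for bone, length in standard_lengths.items()}

standard = {"thigh": 0.45, "calf": 0.42, "torso": 0.60}
target = {"thigh": 0.50, "calf": 0.46}           # measured from detected key points
print(correct_proportions(standard, target))      # scale factors for the standard model
```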
  • the corrected parts include but are not limited to the crotch and/or the legs.
  • in FIGS. 3A to 3C, the small image in the upper left corner is the collected image, and the lower right corner shows the controlled human body model.
  • from FIG. 3A to FIG. 3B, and then from FIG. 3B to FIG. 3C, the user's hand moves, and the hand of the controlled model follows.
  • in FIGS. 3A to 3C, the user's hand changes in sequence from making a fist, to extending the palm, to extending the index finger, while the controlled model imitates these gestures.
  • in FIGS. 4A to 4C, the small picture in the upper left corner is the collected image, and the lower right corner shows the controlled human body model.
  • from FIG. 4A to FIG. 4B, and then from FIG. 4B to FIG. 4C, the user's torso moves, and the torso of the controlled model follows. In FIGS. 4A to 4C, the user thrusts the hips toward the right side of the image, then toward the left side, and finally stands upright.
  • the controlled model also simulates the user's torso movement.
  • in FIGS. 5A to 5C, the small picture in the upper left corner is the collected image, and the lower right corner shows the controlled human body model.
  • the user steps toward the right side of the image, steps toward the left side of the image, and finally stands up straight; the controlled model also simulates the user's foot movement.
  • the controlled model also simulates changes in the user's expression.
  • this embodiment provides an image processing device, which includes the following modules:
  • the first acquisition module 110 is used to acquire images.
  • the second acquisition module 120 is configured to acquire local features of the target based on the image.
  • the first determining module 130 is configured to determine the local motion information based on the characteristic.
  • the control module 140 is configured to control the local movement corresponding to the controlled model according to the movement information.
  • the second acquisition module 120 is specifically configured to: acquire the first-type feature of the first-type part of the target based on the image; and/or acquire the second-type feature of the second-type part of the target based on the image.
  • the second acquisition module 120 is specifically configured to acquire the expression feature of the head and the intensity coefficient of the expression feature based on the image.
  • obtaining the intensity coefficient of the expression feature based on the image includes: obtaining, based on the image, an intensity coefficient representing each sub-part in the first type of part.
  • the first determining module 130 is specifically configured to determine the movement information of the head based on the expression feature and the intensity coefficient; the control module 140 is specifically configured to control the expression change of the head of the controlled model according to the movement information of the head.
  • the second acquisition module 120 is configured to obtain the mesh information of the first-type part based on the image.
  • the second obtaining module 120 is specifically configured to obtain an intensity coefficient representing each sub-part in the first type of part based on the image.
  • the second acquisition module 120 is specifically configured to acquire the position information of the key points of the second-type part of the target based on the image; the first determining module 130 is specifically configured to determine the motion information of the second-type part based on the position information.
  • the second acquisition module 120 is specifically configured to: acquire the first coordinates of the support key points of the second-type part of the target based on the image; and acquire the second coordinates based on the first coordinates.
  • the second acquisition module 120 is specifically configured to: acquire the first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtain, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, the first 3D coordinates corresponding to the first 2D coordinates.
  • the second acquisition module 120 is specifically configured to: acquire the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtain the third 3D coordinates based on the second 3D coordinates.
  • the second acquisition module 120 is specifically configured to correct the 3D coordinates of key points of the bracket corresponding to the occluded part of the second type of part in the 3D image based on the second 3D coordinates, so as to obtain The third 3D coordinate.
  • the first determining module 130 is specifically configured to determine the quaternion of the second type of part based on the location information.
  • the second acquisition module 120 is specifically configured to: acquire first position information of support key points of a first part in the second-type part; and acquire second position information of support key points of a second part in the second-type part.
  • the first determining module 130 is specifically configured to: determine the motion information of the first part according to the first position information; and determine the motion information of the second part according to the second position information.
  • the control module 140 is specifically configured to: control the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and control the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
  • the first part is the trunk; and/or the second part is the upper limbs, the lower limbs, or all four limbs.
  • This example provides an image processing method.
  • the steps of the method are as follows.
  • An image is acquired, the image includes a target, and the target includes but is not limited to a human body.
  • the key points of the face of the human body are detected, where the key points of the face may be the key points of the contour of the face surface.
  • the torso key points and/or limb key points of the human body are detected, where the torso key points and/or limb key points can be 3D key points, which are represented by 3D coordinates.
  • the 3D coordinates may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates.
  • the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
  • the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
  • the hand key points among the upper-limb key points include, but are not limited to, key points of the wrist joint, key points of the palm knuckles, key points of the finger joints, and key points of the fingertips; the positions of these key points can reflect the movements of the hands and fingers.
  • the mesh information of the human face is generated.
  • an expression base corresponding to the current expression of the target is selected according to the mesh information, and the expression of the controlled model is controlled according to the expression base; the intensity of the controlled model's expression under each expression base is controlled according to the intensity coefficients reflected by the mesh information.
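  • A minimal sketch of this kind of expression-base (blendshape) control, assuming the expression bases are stored as per-vertex offsets and the intensity coefficients are normalized weights (all names here are illustrative, not from the disclosure):

```python
import numpy as np

def apply_expression_bases(neutral_verts, expression_bases, intensity):
    """Blend expression bases onto the controlled model's neutral face.

    neutral_verts:    (V, 3) vertices of the neutral face mesh.
    expression_bases: (B, V, 3) per-base vertex offsets, one base per
                      expression action (e.g. mouth-open, brow-raise).
    intensity:        (B,) intensity coefficients derived from the mesh
                      information, typically in [0, 1].
    """
    offsets = np.tensordot(intensity, expression_bases, axes=1)  # (V, 3)
    return neutral_verts + offsets
```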
  • the torso key points and/or the limb key points are converted into quaternions.
  • the torso movement of the controlled model is controlled according to the quaternion corresponding to the torso key point; and/or the limb movement of the controlled model is controlled according to the quaternion corresponding to the limb key point.
  • the key points of the face may include: 106 key points.
  • the torso key points and/or limb key points may include: 14 key points or 17 key points, which may be specifically shown in FIG. 7A and FIG. 7B.
  • FIG. 7A shows a schematic diagram containing 14 key points of the skeleton
  • FIG. 7B shows a schematic diagram containing 17 key points of the skeleton.
  • FIG. 7B may be a schematic diagram of 17 key points generated based on the 14 key points shown in FIG. 7A.
  • the 17 key points in Fig. 7B are equivalent to the key points shown in Fig. 7A, with key point 0, key point 7 and key point 9 added.
  • the 2D coordinates of key point 9 can be preliminarily determined based on the 2D coordinates of key point 8 and key point 10; the 2D coordinates of key point 7 can be determined according to the 2D coordinates of key point 8 and the 2D coordinates of key point 0.
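  • One plausible reading of this derivation, assuming the two extra key points are taken as midpoints (the exact interpolation rule and index correspondence are not specified in the text), is:

```python
import numpy as np

def extend_14_to_17(kp14_2d, idx_map):
    """Sketch of deriving the extra 2D key points of Fig. 7B.

    kp14_2d: (14, 2) detected 2D key points of the 14-point skeleton.
    idx_map: dict mapping the 17-point indices 0, 8 and 10 to rows of
             kp14_2d (the exact correspondence is an assumption here).
    """
    kp0 = kp14_2d[idx_map[0]]    # reference point (root)
    kp8 = kp14_2d[idx_map[8]]
    kp10 = kp14_2d[idx_map[10]]
    kp9 = 0.5 * (kp8 + kp10)     # key point 9 from key points 8 and 10
    kp7 = 0.5 * (kp0 + kp8)      # key point 7 from key points 0 and 8
    return kp7, kp9
```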
  • the key point 0 may be the reference point provided by the embodiments of the disclosure, and the reference point may be used as the aforementioned first reference point and/or the second reference point.
  • the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
  • the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
  • the clothes of the teacher model may be more formal, such as a suit.
  • the controlled model may wear sports clothing.
  • This example provides an image processing method.
  • the steps of the method are as follows.
  • An image is acquired, the image includes a target, and the target includes but is not limited to a human body.
  • the torso key points and limb key points of the human body are detected.
  • the torso key points and/or the limb key points here can be 3D key points, which are represented by 3D coordinates.
  • the 3D coordinates may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates.
  • the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
  • the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
  • the hand key points of the upper limb key points include but are not limited to the key points of the wrist joints, the key points of the finger joints, the key points of the knuckles, and the key points of the fingertips.
  • the location of these key points can reflect the movement of the hands and fingers.
  • the key points of the torso are converted into a quaternion that characterizes the movement of the torso.
  • the quaternion can be called a trunk quaternion.
  • the key points of the limbs are converted into quaternions representing the movement of the limbs, and the quaternion data can be called limb quaternions.
  • the torso quaternion is used to control the torso movement of the controlled model.
  • the torso key points and the limb key points may include: 14 key points or 17 key points, which may be specifically shown in FIG. 7A or FIG. 7B.
  • the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
  • the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
  • the clothes of the teacher model may be more formal, such as a suit.
  • the controlled model may wear sports clothing.
  • This example provides an image processing method.
  • the steps of the method are as follows.
  • an image is acquired; the image contains a target, and the target may be a human body.
  • a 3D posture of the target in a three-dimensional space is obtained, and the 3D posture can be represented by the 3D coordinates of the key points of the skeleton of the human body.
  • the absolute rotation parameters of the joints of the human body in the camera coordinate system can be determined by the coordinates in the camera coordinate system.
  • the coordinate direction of the joint is obtained.
  • the relative rotation parameters of the joints are determined; determining the relative parameters may specifically include: determining the positions of the joint key points relative to the root node of the human body, wherein the relative rotation parameters can be represented by quaternions.
  • the hierarchical relationship here can be the traction relationship between joints. For example, the movement of the elbow joint will pull the movement of the wrist joint to a certain extent, and the movement of the shoulder joint will also pull the movement of the elbow joint.
  • the hierarchical relationship may also be predetermined according to the joints of the human body.
  • the first level: pelvis; the second level: waist; the third level: thighs (for example, left thigh, right thigh); the fourth level: calves (for example, left calf, right calf); the fifth level: feet.
  • the first level: chest; the second level: neck; the third level: head.
  • the first level: clavicles, corresponding to the shoulders; the second level: upper arms; the third level: forearms; the fourth level: hands.
  • the levels decrease successively, and the motion of a higher-level part affects the motion of a lower-level part; therefore, the level of a traction part is higher than the level of the connected part it pulls.
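  • These levels can be encoded as a child-to-parent table. The sketch below follows the chains listed above and the bone names of Fig. 8; where the text does not state an attachment (for example, where the chest chain joins the lower-body chain), the entry is an assumption:

```python
# One possible encoding of the hierarchy as a child -> parent table.
# Names follow the text and Fig. 8; treat the exact topology as illustrative.
PARENT = {
    "waist": "pelvis",
    "left_thigh": "waist", "right_thigh": "waist",
    "left_calf": "left_thigh", "right_calf": "right_thigh",
    "left_foot": "left_calf", "right_foot": "right_calf",
    "chest": "waist",                       # attachment assumed
    "neck": "chest", "head": "neck",
    "left_clavicle": "chest", "right_clavicle": "chest",  # assumed
    "left_upper_arm": "left_clavicle", "right_upper_arm": "right_clavicle",
    "left_forearm": "left_upper_arm", "right_forearm": "right_upper_arm",
    "left_hand": "left_forearm", "right_hand": "right_forearm",
}
```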
  • when determining the second type of motion information, the motion information of the key points of each level is obtained first, and then the motion information of the lower-level key points relative to the higher-level key points (that is, the relative rotation information) is determined based on the hierarchical relationship.
  • the relative rotation information can be computed as follows: first determine the rotation quaternions {Q_0, Q_1, ..., Q_18} of the key points relative to the camera coordinate system, and then calculate the rotation quaternion q_i of each key point relative to its parent key point according to formula (1).
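  • Formula (1) itself is not reproduced in the text; from the definitions it is presumably q_i = Q_parent(i)^(-1) * Q_i. A self-contained sketch of that computation (quaternions in (w, x, y, z) order; names illustrative):

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z), i.e. its inverse."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def relative_quaternions(Q, parent):
    """q_i = Q_parent(i)^-1 * Q_i for every key point that has a parent.

    Q:      list of camera-space rotation quaternions {Q_0 ... Q_18}.
    parent: parent[i] is the parent key point index of i (None for root).
    """
    return {i: quat_mul(quat_conj(Q[parent[i]]), Q[i])
            for i in range(len(Q)) if parent[i] is not None}
```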
  • the aforementioned use of quaternions to control the motion of each joint of the controlled model may include: using q i to control the motion of each joint of the controlled model.
  • the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle within a constraint condition, where the constraint condition may limit the first Euler angle; and obtaining a quaternion corresponding to the second Euler angle, which is then used to control the rotation of the controlled model.
  • when obtaining the quaternion corresponding to the second Euler angle, the second Euler angle can be converted into a quaternion directly.
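  • A minimal sketch of this constraint step, assuming the constraint condition is a per-axis clamp of the Euler angles and using SciPy's rotation utilities (the 'xyz' sequence is an assumption):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def constrain_rotation(quat_xyzw, lo_deg, hi_deg):
    """Clamp a joint rotation by bounding its Euler angles.

    quat_xyzw:      quaternion in (x, y, z, w) order (SciPy convention).
    lo_deg, hi_deg: (3,) per-axis bounds of the constraint condition.
    Returns the quaternion corresponding to the second (clamped) Euler angle.
    """
    first_euler = Rotation.from_quat(quat_xyzw).as_euler('xyz', degrees=True)
    second_euler = np.clip(first_euler, lo_deg, hi_deg)
    return Rotation.from_euler('xyz', second_euler, degrees=True).as_quat()
```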
  • Fig. 7B is a skeleton diagram of 17 key points.
  • Figure 8 is a skeleton diagram of 19 key points.
  • the bones shown in Figure 8 correspond to the 19 key points and are as follows: pelvis, waist, left thigh, left calf, left foot, right thigh, right calf, right foot, chest, neck, head, left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, left hand.
  • (x i , y i , z i ) can be the coordinates of the i-th key point, and the value of i ranges from 0 to 16.
  • p i represents the three-dimensional coordinates in the local coordinate system of node i, which are generally fixed values that come with the original model and do not need to be modified or migrated.
  • q_i is a quaternion that represents the rotation of the bone controlled by node i in the coordinate system of its parent node; it can also be regarded as the rotation between the local coordinate system of the current node and that of its parent node.
  • the process of calculating the quaternions of the key points corresponding to the joints can be as follows: determine the coordinate axis directions of the local coordinate system of each node. For each bone, the direction from the child node to the parent node is taken as the x-axis; the rotation axis about which the bone can rotate through the maximum angle is taken as the z-axis; if the rotation axis cannot be determined, the direction the human body faces is taken as the y-axis; details are shown in Figure 9.
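  • A sketch of constructing such a local frame (vector names are illustrative; the handedness follows whatever convention the implementation adopts, per the note below on left- versus right-handed systems):

```python
import numpy as np

def bone_local_frame(child_pos, parent_pos, rot_axis):
    """Build a bone's local coordinate axes as described above.

    x-axis: direction from the child node to the parent node.
    z-axis: the supplied rotation axis (or a default facing direction
            when no rotation axis can be determined), re-orthogonalized.
    y-axis: completes the frame via a cross product.
    """
    child_pos, parent_pos, rot_axis = map(
        np.asarray, (child_pos, parent_pos, rot_axis))
    x = parent_pos - child_pos
    x = x / np.linalg.norm(x)
    z = rot_axis - np.dot(rot_axis, x) * x   # make z orthogonal to x
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)                       # sign depends on handedness
    return np.stack([x, y, z])
```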
  • This example uses the left-handed coordinate system for illustration, and the right-handed coordinate system can also be used in specific implementation.
  • (i-j) represents the vector pointing from key point i to key point j, and × represents the cross product.
  • for example, (1-7) represents the vector from the first key point to the seventh key point.
  • nodes 8, 15, 11, and 18 are the four nodes of the hands and feet; since calculating the quaternions of these four nodes requires specific postures to be determined, these four nodes are not included in the table.
  • the number of the 19-point skeleton node can be referred to as shown in Fig. 8, and the key point number of the 17-point skeleton can be referred to as Fig. 7B.
  • Y = asin(2*(q1*q3 + q0*q2)), where the value of 2*(q1*q3 + q0*q2) is limited to between -1 and 1  (3)
  • X is the Euler angle in the first direction
  • Y is the Euler angle in the second direction
  • Z is the Euler angle in the third direction. Any two of the first direction, the second direction, and the third direction are perpendicular.
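  • Only formula (3) for Y is given in the text; the sketch below completes X and Z with the matching X-Y-Z rotation sequence (this completion is an assumption) and clamps the asin argument as required:

```python
import numpy as np

def quat_to_euler(q0, q1, q2, q3):
    """Quaternion (q0=w, q1=x, q2=y, q3=z) -> Euler angles (X, Y, Z).

    Y follows formula (3) in the text; X and Z are derived from the
    corresponding X-Y-Z rotation sequence, which is assumed here.
    """
    s = np.clip(2.0 * (q1 * q3 + q0 * q2), -1.0, 1.0)  # keep asin in range
    Y = np.arcsin(s)                                    # formula (3)
    X = np.arctan2(2.0 * (q0 * q1 - q2 * q3), 1.0 - 2.0 * (q1**2 + q2**2))
    Z = np.arctan2(2.0 * (q0 * q3 - q1 * q2), 1.0 - 2.0 * (q2**2 + q3**2))
    return X, Y, Z
```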
  • the method further includes: performing posture-optimization adjustment on the second Euler angle. For example, some of the second Euler angles may be adjusted, based on preset rules, to posture-optimized Euler angles, so as to obtain a third Euler angle.
  • Obtaining the quaternion corresponding to the second Euler angle may include: converting the third Euler angle into a quaternion for controlling the controlled model.
  • the method further includes: after converting the second Euler angles into a quaternion, performing posture optimization processing on the converted quaternion data. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
  • when adjusting the second Euler angle, or the quaternion obtained by converting the second Euler angle, the adjustment may be based on a preset rule, or may be optimized by a deep learning model itself; there are many specific implementations, which are not limited in this application.
  • pre-processing may also be included.
  • for example, the crotch width and/or shoulder width of the controlled model may be modified to correct the overall posture of the human body.
  • corrections such as an upright-standing correction and a protruding-abdomen correction may be applied to the standing posture of the human body. Some people push out the abdomen when standing, and the abdomen correction keeps the controlled model from imitating the user's abdomen-protruding action; some people hunch when standing, and the hunchback correction keeps the controlled model from imitating the user's hunched posture.
  • This example provides an image processing method.
  • the steps of the method are as follows.
  • An image is acquired, and the image includes a target, and the target may include at least one of a human body, a human upper limb, and a human lower limb.
  • the coordinate system of the target joint is obtained according to the position information of the target joint in the image coordinate system, and the coordinate system of the limb part that pulls the target joint to move is obtained according to the position information of that limb part in the image coordinate system.
  • the rotation of the target joint relative to the limb part is determined to obtain rotation parameters; the rotation parameters include a self-rotation (spin) parameter of the target joint and a rotation parameter caused by traction of the limb part.
  • a first angle restriction is applied to the rotation parameter caused by traction of the limb part, to obtain a final traction rotation parameter, and the rotation parameters of the limb part are corrected according to the final traction rotation parameter. Then, according to the coordinate system of the limb part and the corrected relative rotation parameter, a second angle restriction is applied to the relative rotation parameter to obtain a restricted relative rotation parameter.
  • the restricted rotation parameters are converted into a quaternion.
  • the movement of the target joint of the controlled model is controlled according to the quaternion.
  • the coordinate system of the hand in the image coordinate system is obtained, and the coordinate system of the forearm and the coordinate system of the upper arm are obtained.
  • the target joint at this time is the wrist joint.
  • the rotation of the hand relative to the forearm is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the forearm; specifically, the pulled rotation is assigned to the rotation of the forearm in the corresponding direction, and the first angle restriction of the forearm is used to limit the forearm's maximum rotation. The rotation of the hand relative to the corrected forearm is then determined to obtain the relative rotation parameter, and a second angle restriction is applied to this parameter to obtain the rotation of the hand relative to the forearm.
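  • The decomposition into a spin and a pulled rotation corresponds to a swing-twist decomposition about the forearm axis. A sketch under that reading (quaternions in (w, x, y, z) order; names illustrative):

```python
import numpy as np

def _qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def swing_twist(q, axis):
    """Split rotation q into swing * twist, where twist is the spin about
    `axis` (e.g. the forearm axis) and swing is the pulled rotation that
    is transferred to the forearm.

    q: unit quaternion (w, x, y, z); axis: unit 3-vector.
    """
    proj = np.dot(q[1:], axis) * axis        # axis-aligned vector part
    twist = np.array([q[0], *proj])
    n = np.linalg.norm(twist)
    if n < 1e-8:                             # pure 180-degree swing
        return q.copy(), np.array([1.0, 0.0, 0.0, 0.0])
    twist = twist / n
    conj = twist * np.array([1.0, -1.0, -1.0, -1.0])
    swing = _qmul(q, conj)                   # swing = q * twist^-1
    return swing, twist
```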
  • the coordinate system of the foot under the image coordinate system is obtained, and the coordinate system of the lower leg and the coordinate system of the thigh are obtained; the target joint at this time is the ankle joint.
  • the rotation of the foot relative to the calf is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the calf; specifically, the pulled rotation is assigned to the rotation of the calf in the corresponding direction, and the first angle restriction of the calf is used to limit the calf's maximum rotation. The rotation of the foot relative to the corrected calf is then determined to obtain the relative rotation parameter, and a second angle restriction is applied to this parameter to obtain the rotation of the foot relative to the calf.
  • the neck controls the direction of the head.
  • the face, the human body, and the hands are separate parts that are ultimately integrated.
  • the rotation of the neck is very important.
  • the orientation of a human body can be calculated.
  • the orientation of a face can be calculated, and the relative position of the two orientations is the rotation angle of the neck.
  • the angle of this connection part is calculated relatively. For example, suppose the body faces 0 degrees and the face faces 90 degrees; to control the controlled model, only the relative (local) angle matters: the change between the head angle and the body angle is used to calculate the neck angle of the controlled model, which is then used to control the head of the controlled model.
  • to obtain the rotation angle of the neck, the current orientation of the user's face is first determined based on the image, and the rotation angle of the neck is then calculated. Since the rotation of the neck has a limited range (for example, suppose the neck can rotate at most 90 degrees), if the calculated rotation angle exceeds this range (-90 degrees to 90 degrees), the boundary of the range is used as the rotation angle of the neck (for example, -90 degrees or 90 degrees).
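  • A minimal sketch of this clamping, assuming the face and body orientations are given as angles in degrees about the vertical axis (the function name and the 90-degree default are illustrative):

```python
def clamp_neck_angle(face_deg, body_deg, max_deg=90.0):
    """Neck rotation = face orientation relative to body orientation,
    clamped to the neck's assumed range of [-max_deg, max_deg]."""
    neck = (face_deg - body_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return max(-max_deg, min(max_deg, neck))
```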
  • 3D key points can be used to calculate the orientation of the body or face.
  • a specific orientation can be calculated as follows: two non-collinear vectors in the plane of the face or the body are cross-multiplied to obtain the normal vector of that plane, which is taken as the orientation of the face or the body; this orientation can then be used as the orientation of the connecting part (the neck) between the body and the face.
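  • A sketch of this orientation computation (which three key points span the plane is an assumption; for the body these might be, say, the two shoulders and the pelvis):

```python
import numpy as np

def plane_orientation(p0, p1, p2):
    """Orientation of the face/body as the unit normal of the plane
    through three keypoints: two non-collinear in-plane vectors are
    cross-multiplied to obtain the plane's normal vector."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(p0, dtype=float)
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)
```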
  • an embodiment of the present application provides an image device, including: a memory 1002 configured to store information, and a processor 1001 connected to the memory 1002 and configured to execute computer-executable instructions stored on the memory 1002, so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example, the image processing method shown in FIG. 1 and/or FIG. 2.
  • the memory 1002 can be various types of memory, such as random access memory, read-only memory, flash memory, and the like.
  • the memory 1002 may be used for information storage, for example, to store computer executable instructions.
  • the computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
  • the processor 1001 may be any of various types of processors, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
  • the processor 1001 may be connected to the memory 1002 through a bus.
  • the bus may be an integrated circuit bus or the like.
  • the terminal device may further include: a communication interface 1003, and the communication interface 1003 may include: a network interface, for example, a local area network interface, a transceiver antenna, and the like.
  • the communication interface is also connected to the processor 1001 and can be used for information transceiving.
  • the terminal device further includes a human-computer interaction interface 1005.
  • the human-computer interaction interface 1005 may include various input and output devices, such as a keyboard and a touch screen.
  • the image device further includes: a display 1004, which can display various prompts, collected facial images, and/or various interfaces.
  • an embodiment of the present application provides a non-volatile computer storage medium storing computer-executable code; after the computer-executable code is executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example, the image processing method shown in FIG. 1 and/or FIG. 2.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the functional units in the embodiments of the present disclosure may all be integrated into one processing module, or each unit may serve individually as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.

Abstract

Disclosed are an image processing method and apparatus, an image device, and a storage medium. The image processing method comprises: acquiring an image; based on the image, acquiring features of a local part of a target; based on the features, determining motion information of the local part; and according to the motion information, controlling the motion of the local part corresponding to a controlled model.

Description

Image processing method and device, image equipment and storage medium
Cross-reference to related applications
This patent application claims priority to the Chinese patent application filed on January 18, 2019 with application number 201910049830.6 and entitled "Image processing method and device, image equipment and storage medium", and to the Chinese patent application filed on April 30, 2019 with application number 201910362107.3 and the same title; the full text of these applications is incorporated herein by reference.
Technical field
The present disclosure relates to the field of information technology, and in particular to an image processing method and device, an image device, and a storage medium.
Background
With the development of information technology, it has become possible for users to conduct online lectures and online live streaming through video recording, and to play somatosensory games. In some cases, however, somatosensory games require the user to wear special somatosensory equipment to detect activities such as limb movements in order to control the game character; and when conducting online lectures or live streaming, the user's face, body, and the like are completely exposed on the network, which may involve the user's privacy on the one hand and information security on the other. To solve such privacy or security problems, the facial image may be covered by a mosaic or other means, but this affects the video effect.
Summary of the invention
In view of this, embodiments of the present disclosure provide an image processing method and device, an image device, and a storage medium.
In a first aspect, the present disclosure provides an image processing method, including:
acquiring an image; acquiring local features of a target based on the image; determining motion information of the local part based on the features; and controlling the motion of the corresponding part of a controlled model according to the motion information.
Based on the above solution, acquiring the local features of the target based on the image includes: acquiring first-type features of a first-type part of the target based on the image; and/or acquiring second-type features of a second-type part of the target based on the image.
Based on the above solution, acquiring the first-type features of the first-type part of the target based on the image includes: acquiring expression features of the head and intensity coefficients of the expression features based on the image.
Based on the above solution, obtaining the intensity coefficients of the expression features based on the image includes: obtaining, based on the image, an intensity coefficient representing each sub-part of the first-type part.
Based on the above solution, determining the local motion information based on the features includes: determining the motion information of the head based on the expression features and the intensity coefficients; and controlling the motion of the corresponding part of the controlled model according to the motion information includes: controlling the expression change of the head of the controlled model according to the motion information of the head.
Based on the above solution, acquiring the second-type features of the second-type part of the target based on the image includes: acquiring position information of key points of the second-type part of the target based on the image; and determining the local motion information based on the features includes: determining the motion information of the second-type part based on the position information.
Based on the above solution, acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring first coordinates of support key points of the second-type part of the target based on the image; and obtaining second coordinates based on the first coordinates.
Based on the above solution, acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtaining the second coordinates based on the first coordinates includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, first 3D coordinates corresponding to the first 2D coordinates.
Based on the above solution, acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtaining the second coordinates based on the first coordinates includes: obtaining third 3D coordinates based on the second 3D coordinates.
Based on the above solution, obtaining the third 3D coordinates based on the second 3D coordinates includes: correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second-type part in the 3D image, so as to obtain the third 3D coordinates.
Based on the above solution, determining the motion information of the second-type part based on the position information includes: determining a quaternion of the second-type part based on the position information.
Based on the above solution, acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring first position information of support key points of a first part in the second-type part; and acquiring second position information of support key points of a second part in the second-type part.
Based on the above solution, determining the motion information of the second-type part based on the position information includes: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
Based on the above solution, controlling the motion of the corresponding part of the controlled model according to the motion information includes: controlling the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
Based on the above solution, the first part is the trunk; and/or the second part is the upper limbs, the lower limbs, or all four limbs.
In a second aspect, the present disclosure provides an image processing device, including:
a first acquisition module, configured to acquire an image; a second acquisition module, configured to acquire local features of a target based on the image; a first determining module, configured to determine motion information of the local part based on the features; and a control module, configured to control the motion of the corresponding part of a controlled model according to the motion information.
In a third aspect, the present disclosure provides an image device, including: a memory; and a processor connected to the memory and configured to implement any one of the above image processing methods by executing computer-executable instructions located on the memory.
In a fourth aspect, the present disclosure provides a non-volatile computer storage medium storing computer-executable instructions; after the computer-executable instructions are executed by a processor, any one of the above image processing methods can be implemented.
According to the technical solutions provided by the embodiments of the present disclosure, local features of a target can be obtained from an acquired image, local motion information can then be obtained based on the local features, and finally the motion of the corresponding part of a controlled model can be controlled according to the motion information. In this way, when the controlled model is used to simulate the motion of the target during a live video broadcast, the motion of the controlled model can be accurately controlled so that the controlled model accurately simulates the motion of the target; this realizes the live video broadcast on the one hand and protects user privacy on the other.
Brief description of the drawings
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the disclosure.
FIG. 2 is a schematic flowchart of an image processing method provided by another embodiment of the disclosure.
FIG. 3A to FIG. 3C are schematic diagrams of a controlled model simulating changes of the collected user's hand movements according to this embodiment.
FIG. 4A to FIG. 4C are schematic diagrams of a controlled model simulating changes of the collected user's torso movements according to an embodiment of the disclosure.
FIG. 5A to FIG. 5C are schematic diagrams of a controlled model simulating the collected user's foot movements according to an embodiment of the disclosure.
FIG. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
FIG. 7A is a schematic diagram of skeleton key points provided by an embodiment of the disclosure.
FIG. 7B is a schematic diagram of skeleton key points provided by another embodiment of the disclosure.
FIG. 8 is a schematic diagram of a skeleton provided by an embodiment of the disclosure.
FIG. 9 is a schematic diagram of local coordinate systems of different bones of a human body provided by an embodiment of the disclosure.
FIG. 10 is a schematic structural diagram of an image device provided by an embodiment of the disclosure.
Detailed description
The technical solutions of the present disclosure are further elaborated below with reference to the drawings and specific embodiments of the specification.
As shown in FIG. 1, this embodiment provides an image processing method, which includes the following steps.
Step S110: acquire an image.
Step S120: acquire local features of a target based on the image.
Step S130: determine motion information of the local part based on the features.
Step S140: control the motion of the corresponding part of a controlled model according to the motion information.
The image processing method provided in this embodiment can drive the motion of a controlled model through image processing.
The image processing method provided in this embodiment can be applied to an image device, which may be any of various electronic devices capable of image processing, for example, electronic devices that perform image collection, image display, and image pixel reorganization to generate images. The image device includes, but is not limited to, various terminal devices, for example, mobile terminals and/or fixed terminals, and may also include various image servers capable of providing image services. The mobile terminal includes portable devices such as mobile phones or tablet computers that are easy for users to carry, and may also include devices worn by users, such as smart bracelets, smart watches, or smart glasses. The fixed terminal includes fixed desktop computers and the like.
In this embodiment, the image acquired in step S110 may be a 2D (two-dimensional) image or a 3D (three-dimensional) image. The 2D image may include images collected by a monocular or multi-camera device, such as red-green-blue (RGB) images.
The 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates; the 3D image may also be an image collected by a 3D camera.
The manner of acquiring the image may include: collecting the image with the image device's own camera; and/or receiving the image from an external device; and/or reading the image from a local database or local storage.
In one example, step S120 may include: detecting the image to acquire features of one part of the target, where the part may be any part of the target.
In another example, step S120 may include: detecting the image to acquire features of at least two parts of the target, where the two parts may be different parts of the target; the two parts may be distributed continuously on the target, or distributed on the target at intervals.
For example, if the target is a person, any such part may include any of the following: head, trunk, limbs, upper limbs, lower limbs, hands, feet, and so on; the at least two parts may include at least two of: head, trunk, limbs, upper limbs, lower limbs, hands, feet, and so on. In other embodiments, the target is not limited to humans and may also be any of various movable living or non-living bodies such as animals.
In this embodiment, features of one or more parts are acquired; the features may, in various forms, characterize the spatial structure information, position information, or motion state of the part. In this example, a deep learning model such as a neural network may be used to detect the image to obtain the features.
In one example, the features may characterize the relative positional relationships between joint points in the human skeleton. In another example, the features may characterize the positional changes of corresponding joint points in the human skeleton at adjacent time points; alternatively, the features may characterize the positional changes of corresponding joint points between the current picture and the initial coordinate system (also called the camera coordinate system). More specifically, the features may include the 3D coordinates, in the world coordinate system, of the joint points of the human skeleton detected by a deep learning model (such as the neural network used in the OpenPose project). In yet another example, the features may include optical flow features that characterize changes in the posture of the human body.
In step S110, the acquired image may be one frame of image or multiple frames of images. For example, when the acquired image is one frame, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in the camera coordinate system. For another example, when multiple frames are acquired, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in the preceding frames, or may reflect the motion of the joint points in the current image relative to the corresponding joint points in the camera coordinate system. This application does not limit the number of acquired images.
After the features are obtained, local motion information is obtained; the motion information characterizes the action changes of the corresponding part and/or the expression changes caused by the action changes, and so on.
In one example, assuming the two parts involved in S120 are the head and the torso, in step S140, the part of the controlled model corresponding to the head is controlled to move, and the part of the controlled model corresponding to the torso is controlled to move.
The motion information includes, but is not limited to, the coordinates of the key points corresponding to the part, including but not limited to 2D coordinates and 3D coordinates; the coordinates can characterize the changes of the key points of the part relative to a reference position, and can therefore characterize the motion state of the corresponding part. The motion information can be expressed in various forms such as vectors, arrays, one-dimensional values, and matrices.
The controlled model may be a model corresponding to the target. For example, if the target is a person, the controlled model is a human body model; if the target is an animal, the controlled model may be a body model of the corresponding animal; if the target is a vehicle, the controlled model may be a vehicle model.
In this embodiment, the controlled model is a model for the category to which the target belongs. The model may be predetermined and may be further divided into multiple styles. The style of the controlled model may be determined based on user instructions and may include a variety of styles, for example, a real-person style that simulates a real person, an anime style, an internet-celebrity style, styles of different temperaments, and a game style, where styles of different temperaments may be a literary style or a rock style; in the game style, the controlled model may be a character in the game.
For example, in the process of online teaching, some teachers are unwilling to expose their face and figure, considering this private. Directly recording a video would inevitably expose the teacher's face and figure. In this embodiment, an image of the teacher's movements can be obtained through image collection or the like, and then, through feature extraction and acquisition of motion information, the movement of a virtual controlled model can be controlled. In this way, on the one hand, the controlled model can simulate the teacher's movements to complete physical-movement teaching driven by the teacher's own limb movements; on the other hand, since the movement of the controlled model is used for teaching, the teacher's face and figure need not be directly exposed in the teaching video, which protects the teacher's privacy.
For another example, in road surveillance video, if a video of vehicles is collected directly and is exposed on the network, all the vehicle information of some specific users would be exposed; yet without monitoring, responsibility might not be determined in the event of a traffic accident. With the method of this embodiment, a vehicle model is used to simulate the real vehicle movement to obtain a surveillance video; in the surveillance video, the vehicle's license plate information and/or the overall outline of the vehicle can be retained, while the brand, model, color, age, and so on of the vehicle can all be hidden, thereby protecting user privacy.
In some embodiments, as shown in FIG. 2, step S120 may include the following steps.
Step S121: based on the image, acquire first-type features of a first-type part of the target.
Step S122: based on the image, acquire second-type features of a second-type part of the target.
In this embodiment, the first-type features and the second-type features are features that characterize the spatial structure information, position information, and/or motion state of the corresponding part.
Different types of features have different characteristics, and applying them to the appropriate types of parts yields higher accuracy. For example, for the muscle movements of the human face as opposed to the movements of the limbs, different features capture the spatial changes caused by motion with different accuracy; in this embodiment, therefore, the face and limbs can be represented by different types of features whose accuracy matches the face or the limbs respectively.
In some embodiments, for example, based on the image, the first-type features of the first-type part and the second-type features of the second-type part are obtained separately.
The first-type part and the second-type part are different types of parts; different types of parts can be distinguished by their movable amplitude, or by the fineness of their movements.
In this embodiment, the first-type part and the second-type part may be two types of parts whose maximum motion amplitudes differ considerably. For example, the first-type part may be the head: the five sense organs of the head can all move, but their movements are relatively small; the head as a whole can also move, for example, nodding or shaking, but the motion amplitude is small relative to that of the limbs or the torso.
The second-type part may be the upper limbs, the lower limbs, or all four limbs, whose motion amplitudes are large. If the motion states of these two types of parts were represented by the same kind of features, problems such as reduced accuracy or increased algorithm complexity could arise from accommodating the motion amplitude of one of the parts.
Here, motion information is obtained with different types of features according to the characteristics of the different types of parts; compared with a scheme that uses the same kind of features for all parts, this avoids degrading the information accuracy of at least one type of part and improves the accuracy of the motion information.
In some embodiments, the first-type features and the second-type features are acquired by different means, for example, by using different deep learning models or deep learning modules; the acquisition logic of the first-type features differs from that of the second-type features.
In some embodiments, step S121 may include: acquiring expression features of the head based on the image.
In this embodiment, the first-type part is the head, the head includes the face, and the expression features include, but are not limited to, at least one of: eyebrow movement, mouth movement, nose movement, eye movement, and cheek movement. Eyebrow movements may include raising or drooping the eyebrows. Mouth movements may include opening the mouth, closing the mouth, flattening the mouth, pouting, grinning, baring the teeth, and so on. Nose movements may include the contraction of the nose produced by inhaling through the nose and the accompanying nose extension when blowing outward. Eye movements may include, but are not limited to, eye-socket movements and/or eyeball movements; eye-socket movements change the size and/or shape of the eye socket, for example, the eye-socket shape and size change when squinting, glaring, or smiling with the eyes. Eyeball movements may include the position of the eyeball within the socket; for example, changes in the user's line of sight place the eyeball at different positions within the socket, and the joint movement of the left and right eyeballs can reflect the user's different emotional states. As for cheek movements, some users form dimples or pear-shaped dimples when they smile, and the shape of the cheeks changes accordingly.
In some embodiments, the movements of the head are not limited to expression movements, so the first-type features are not limited to expression features and also include hair movement features such as the movement of the hair on the head; the first-type features may also include overall head movement features such as shaking and/or nodding the head.
In some embodiments, step S121 further includes: acquiring intensity coefficients of the expression features based on the image.
In this embodiment, the intensity coefficient may correspond to the amplitude of the facial expression. For example, multiple expression bases are set on the face, each corresponding to one expression action; the intensity coefficient here can characterize the intensity of the expression action, for example, the amplitude of the expression action.
In some embodiments, the greater the intensity coefficient, the higher the intensity it characterizes. For example, a higher intensity coefficient indicates a larger amplitude of the mouth-opening expression base, a larger amplitude of the pouting expression base, and so on. For another example, the greater the intensity coefficient, the higher the eyebrow-raising height for the eyebrow-raising expression base.
With the introduction of the intensity coefficient, the controlled model can not only simulate the target's current action but also accurately simulate the intensity of the target's current expression, realizing precise expression migration. Thus, if the method is applied to a somatosensory game scene, the controlled object is a game character; with this method, the game character can not only be controlled by the user's body movements but can also accurately simulate the user's expression features. In the game scene, this improves the fidelity of the game scene and enhances the user's gaming experience.
在本实施例中,针对目标为人时,通过网格检测等,获取了表征头部的表情变化的网格(mesh)信息,基于mesh信息控制受控模型的变化。该mesh信息包括但不限于:四边形网格信息和/或三角面片信息。四边形网格信息指示经纬线的信息;三角面片信息是由三个关键点连接成的三角面片的信息。In this embodiment, when the target is a person, mesh information representing changes in the expression of the head is obtained through mesh detection, etc., and the change of the controlled model is controlled based on the mesh information. The mesh information includes but is not limited to: quadrilateral mesh information and/or triangular patch information. The quadrilateral grid information indicates the information of the latitude and longitude lines; the triangular patch information is the information of the triangular patch connected by three key points.
例如,mesh信息是由包括脸部体表的预定个数的脸部关键点形成的,四边形网格信息所代表的网格中经纬线的交叉点可为所述脸部关键点的所在位置,网格的交叉点的位置变化即为表情变化,如此,基于四边形网格信息得到的表情特征和强度系数,可以用于受控模型的脸部的表情精准控制。再例如,三角面片信息所对应的三角面片的顶点包含脸部关键点,关键点的位置上的变化即为表情变化。基于三角面片信息得到的表情特征和强度系数,可以用于受控模型的脸部的表情精准控制。For example, the mesh information is formed by a predetermined number of face key points including the body surface of the face, and the intersection of the longitude and latitude lines in the grid represented by the quadrilateral grid information may be the location of the face key point. The change in the position of the intersection of the grid is the change in expression. Thus, the expression feature and intensity coefficient obtained based on the quadrilateral grid information can be used for precise control of the facial expression of the controlled model. For another example, the vertices of the triangle face piece corresponding to the triangle face piece information include key points of the face, and the change in the position of the key point is the expression change. The expression features and intensity coefficients obtained based on the triangular face information can be used for precise control of the facial expressions of the controlled model.
In some embodiments, obtaining the intensity coefficient of the expression feature may include: obtaining, based on the image, an intensity coefficient representing each sub-part of the first type of part.
For example, the five sense organs of the face, i.e., the eyes, eyebrows, nose, mouth, and ears, each correspond to at least one expression base, and some may correspond to multiple expression bases; one expression base corresponds to one type of expression action of one sense organ, and the intensity coefficient characterizes the amplitude of that expression action.
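A minimal sketch of how such per-sub-part intensity coefficients might drive a controlled model's face, assuming a linear blend-shape model; the base names, vertex count, and offsets are illustrative assumptions, not the patent's data.

```python
import numpy as np

# Assumed linear blend-shape model: neutral face plus weighted expression bases.
neutral = np.zeros((106, 3))                      # assumed 106-vertex neutral face
bases = {                                         # per-base vertex offsets (assumed)
    "mouth_open": np.random.rand(106, 3) * 0.01,
    "brow_raise": np.random.rand(106, 3) * 0.01,
}

def apply_expression(intensity: dict) -> np.ndarray:
    """Blend expression bases by their intensity coefficients (clipped to 0..1)."""
    face = neutral.copy()
    for name, coeff in intensity.items():
        face += np.clip(coeff, 0.0, 1.0) * bases[name]
    return face

# Intensity coefficients estimated from the image drive the controlled face.
controlled_face = apply_expression({"mouth_open": 0.8, "brow_raise": 0.2})
```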
In some embodiments, step S130 may include: determining the movement information of the head based on the expression features and the intensity coefficients; step S140 may include: controlling the expression change of the corresponding head of the controlled model based on the movement information of the head.
In some implementations, step S122 may include: acquiring, based on the image, position information of key points of the second type of part of the target.
The position information may be represented by the position information of key points of the target, and the key points may include support key points and outer-contour key points. Taking a person as an example, the support key points may include skeleton key points of the human body, and the contour key points may be key points of the outer contour of the body surface. The present application does not limit the number of key points, but the key points must at least be able to represent a part of the skeleton.
The position information may be represented by coordinates, for example, 2D coordinates and/or 3D coordinates in a predetermined coordinate system. The predetermined coordinate system includes, but is not limited to, the image coordinate system in which the image is located. The position information may be the coordinates of key points, which is clearly different from the aforementioned mesh information. Since the second type of part differs from the first type of part, position information can more accurately characterize the movement of the second type of part.
In some embodiments, step S130 may include: determining the movement information of the second type of part based on the position information.
If the target is a person, the second type of part includes, but is not limited to: the trunk and/or the four limbs; the trunk and/or the upper limbs; or the trunk and/or the lower limbs.
Furthermore, step S122 may specifically include: acquiring, based on the image, first coordinates of the support key points of the second type of part of the target; and obtaining second coordinates based on the first coordinates.
Both the first coordinates and the second coordinates are coordinates characterizing the support key points. If the target is a human or an animal, the support key points here are skeleton key points.
The first coordinates and the second coordinates may be different types of coordinates; for example, the first coordinates are 2D coordinates in a 2D coordinate system, and the second coordinates are 3D coordinates in a 3D coordinate system. The first coordinates and the second coordinates may also be the same type of coordinates; for example, the second coordinates may be coordinates obtained by correcting the first coordinates, in which case the first and second coordinates are of the same type, e.g., both 3D coordinates or both 2D coordinates.
In some embodiments, acquiring, based on the image, the first coordinates of the support key points of the second type of part of the target includes: acquiring, based on a 2D image, first 2D coordinates of the support key points of the second type of part; and obtaining the second coordinates based on the first coordinates includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, first 3D coordinates corresponding to the first 2D coordinates.
In some embodiments, acquiring, based on the image, the first coordinates of the support key points of the second type of part of the target includes: acquiring, based on a 3D image, second 3D coordinates of the support key points of the second type of part of the target; and obtaining the second coordinates based on the first coordinates includes: obtaining third 3D coordinates based on the second 3D coordinates.
For example, a 3D image is directly acquired in step S110, the 3D image including a 2D image and a depth image corresponding to the 2D image. The 2D image can provide the coordinate values of the support key points in the xoy plane, and the depth values in the depth image can provide the coordinates of the support key points on the z axis, where the z axis is perpendicular to the xoy plane.
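A minimal sketch of how a 2D key point plus a depth value might be lifted into a 3D camera-space coordinate, assuming a standard pinhole camera model; the intrinsic parameters below are illustrative assumptions.

```python
import numpy as np

# Assumed pinhole intrinsics: focal lengths fx, fy and principal point cx, cy.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def backproject(u: float, v: float, depth: float) -> np.ndarray:
    """Lift a 2D key point (u, v) with its depth value to camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])   # z axis perpendicular to the xoy plane

# Depth read from the depth image at the key point's pixel location.
keypoint_3d = backproject(350.0, 260.0, depth=2.1)
```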
In some embodiments, obtaining the third 3D coordinates based on the second 3D coordinates includes: correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second type of part in the 3D image, thereby obtaining the third 3D coordinates.
In this embodiment, a 3D model is used to first extract the second 3D coordinates from the 3D image, and occlusion of different parts of the target is then taken into account; through correction, the correct third 3D coordinates of the different parts of the target in 3D space can be obtained, thereby ensuring the control accuracy of the subsequent controlled model.
In some embodiments, step S130 may include: determining a quaternion of the second type of part based on the position information.
For a specific method of determining the quaternion based on the position information, refer to the description in Example 3 below.
In some implementations, the motion information is not limited to being represented by quaternions; it may also be represented by coordinate values in different coordinate systems, for example, coordinate values in a Euler coordinate system or a Lagrangian coordinate system. Quaternions can accurately describe the spatial position of the second type of part and/or its rotation in each direction.
In some embodiments, quaternions are used as the motion information. In specific implementations, the representation is not limited to quaternions; the motion information may also be indicated by coordinate values relative to a reference point in various coordinate systems; for example, Euler coordinates or Lagrangian coordinates may be used in place of the quaternions.
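As an illustrative sketch, a quaternion describing the rotation of a limb segment between two support key points can be derived from its direction before and after movement; the helper below uses only basic vector algebra and is one assumed realization, not the patent's stated algorithm.

```python
import numpy as np

def quat_between(v_from: np.ndarray, v_to: np.ndarray) -> np.ndarray:
    """Quaternion (w, x, y, z) rotating direction v_from onto v_to.

    Antiparallel inputs are a degenerate case not handled in this sketch.
    """
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    q = np.array([1.0 + float(np.dot(a, b)), *np.cross(a, b)])
    return q / np.linalg.norm(q)

# Bone direction between two support key points in two consecutive frames.
bone_prev = np.array([0.0, 1.0, 0.0])   # e.g., upper arm pointing up
bone_curr = np.array([1.0, 0.0, 0.0])   # raised sideways
rotation_quat = quat_between(bone_prev, bone_curr)
```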
In some embodiments, step S120 may include: acquiring first position information of the support key points of a first part of the second type of part; and acquiring second position information of the support key points of a second part of the second type of part.
The second type of part may include at least two different parts. In this way, the controlled model can simultaneously simulate the motion of at least two parts of the target.
In some embodiments, step S130 may include: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
In some embodiments, step S140 may include: controlling the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
In other embodiments, the first part is the trunk, and the second part is the upper limbs, the lower limbs, or the four limbs.
In some embodiments, the method further includes: determining second-type motion information of a connecting part according to the features of the at least two parts and a first motion constraint condition of the connecting part, where the connecting part is used to connect two parts; and controlling the motion of the connecting part of the controlled model according to the second-type motion information.
In some embodiments, the motion information of some parts can be obtained individually through a motion-information acquisition model; motion information obtained in this way may be called first-type motion information. Some parts, however, are connecting parts that connect two or more other parts; for convenience, the motion information of these connecting parts is called second-type motion information in this embodiment. The second-type motion information here is also one kind of information characterizing the local motion state of the target. In some embodiments, the second-type motion information may be determined based on the first-type motion information of the two parts connected by the connecting part.
Hence, the second-type motion information differs from the first-type motion information in the following respects: the second-type motion information is the motion information of a connecting part, while the first-type motion information is the motion information of parts other than connecting parts; and the first-type motion information is generated solely from the motion state of the corresponding part, while the second-type motion information may be related to the motion information of other parts connected to the corresponding connecting part.
In some embodiments, step S140 may include: determining, according to the type of the connecting part, a control manner for controlling the connecting part; and controlling the motion of the connecting part of the controlled model according to the control manner and the second-type motion information.
The connecting part may be used to connect two other parts. Taking a person as an example, the neck, the wrists, the ankles, and the waist are all connecting parts that connect two parts.
The motion information of these connecting parts may be inconvenient to detect, or may depend to some extent on adjacent parts. Therefore, in this embodiment, the motion information of a connecting part can be determined from the first-type motion information of the two or more other parts connected to it, thereby obtaining the second-type motion information of the corresponding connecting part.
In this embodiment, considering special information about the connecting part, such as the manner in which its motion information is acquired and its constraint conditions, the corresponding control manner is determined according to the type of the connecting part, so as to achieve precise control of the corresponding connecting part of the controlled model.
For example, the lateral rotation of the wrist, that is, rotation about the axis extending from the upper arm to the hand, is caused by the rotation of the upper arm.
For another example, the lateral rotation of the ankle, that is, rotation about the extension direction of the lower leg as the axis, is also driven directly by the lower leg; of course, it is also possible that the thigh drives the lower leg, and the lower leg in turn drives the ankle.
As for a connecting part such as the neck, its rotation determines the orientation of the face and the orientation of the trunk.
In other embodiments, determining, according to the type of the connecting part, the control manner for controlling the connecting part includes: if the connecting part is a first-type connecting part, determining to adopt a first-type control manner, where the first-type control manner is used to directly control the motion of the connecting part of the controlled model corresponding to the first-type connecting part.
In some embodiments, a first-type connecting part is one whose own rotation is not driven by other parts.
In some other embodiments, the connecting parts further include second-type connecting parts other than the first-type connecting parts. The motion of a second-type connecting part here may not be produced by the connecting part itself, but is instead driven by other parts.
In some embodiments, determining, according to the type of the connecting part, the control manner for controlling the connecting part includes: if the connecting part is a second-type connecting part, determining to adopt a second-type control manner, where the second-type control manner is used to indirectly control the motion of the second-type connecting part by controlling parts of the controlled model other than the second-type connecting part.
The parts other than the second-type connecting part include, but are not limited to: parts directly connected to the second-type connecting part, or parts indirectly connected to it.
For example, when the wrist rotates laterally, the entire upper limb may be moving, with both the shoulder and the elbow rotating; in this case, the rotation of the wrist can be indirectly driven by controlling the lateral rotation of the shoulder and/or the elbow.
In some embodiments, controlling the motion of the connecting part of the controlled model according to the control manner and the second-type motion information includes: if the control manner is the second-type control manner, decomposing the second-type motion information to obtain first-type rotation information of the connecting part being rotated under traction by a traction part; adjusting the motion information of the traction part according to the first-type rotation information; and controlling the motion of the traction part of the controlled model using the adjusted motion information of the traction part, so as to indirectly control the motion of the connecting part.
In this embodiment, the first-type rotation information is not rotation information produced by the motion of the second-type connecting part itself; rather, it is motion information of the second-type connecting part, relative to a specific reference point of the target (for example, the center of the human body), produced when the motion of another part connected to the second-type connecting part (i.e., the traction part) pulls the second-type connecting part.
In this embodiment, the traction part is a part directly connected to the second-type connecting part. Taking the wrist as the second-type connecting part, the traction part is the elbow, or even the shoulder, above the wrist. Taking the ankle as the second-type connecting part, the traction part is the knee, or even the thigh root, above the ankle.
The lateral rotation of the wrist about the straight line running from the shoulder through the elbow to the wrist may be driven by the shoulder or by the elbow, yet when the motion information is detected, it is observed through the motion of the wrist. The lateral-rotation information of the wrist should therefore essentially be assigned to the elbow or the shoulder; through this transfer assignment, the motion information of the elbow or shoulder is adjusted. The adjusted motion information is used to control the motion of the elbow or shoulder of the controlled model; in this way, the lateral rotation corresponding to the elbow or shoulder will, in terms of the visual effect of the image, be reflected by the wrist of the controlled model, thereby achieving accurate simulation of the target's motion by the controlled model. A sketch of one possible decomposition appears below.
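One common way to realize such a decomposition is a swing-twist split: the rotation is separated into a twist about the limb axis (the component that can be reassigned to the traction part) and a residual swing. This is an assumed illustration of the idea, not the algorithm claimed by the present disclosure; all names are hypothetical.

```python
import numpy as np

def quat_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def swing_twist(q: np.ndarray, axis: np.ndarray):
    """Split unit quaternion q into q = swing * twist about a unit axis.

    The twist is the rotation about `axis` (e.g., the shoulder-to-wrist line);
    it is the component reassigned to the traction part (elbow/shoulder),
    while the swing remains with the connecting part itself.
    """
    w, v = q[0], q[1:]
    proj = np.dot(v, axis) * axis            # rotation-axis component along the limb
    twist = np.array([w, *proj])
    n = np.linalg.norm(twist)
    if n < 1e-9:                             # degenerate 180-degree swing case
        return q.copy(), np.array([1.0, 0.0, 0.0, 0.0])
    twist /= n
    swing = quat_mul(q, twist * np.array([1.0, -1.0, -1.0, -1.0]))  # q * twist^-1
    return swing, twist
```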
In some embodiments, the method further includes: decomposing the second-type motion information to obtain second-type rotation information of the second-type connecting part rotating relative to the traction part; and controlling, using the second-type rotation information, the rotation of the connecting part of the controlled model relative to the traction part.
The first-type rotation information is information obtained by an information model for extracting rotation information directly from the features of the image, while the second-type rotation information is rotation information obtained by adjusting the first-type rotation information. In this embodiment, the motion information of the second-type connecting part relative to a predetermined posture is first obtained from the features of the second-type connecting part, for example, its 2D or 3D coordinates; this motion information is called the second-type motion information. The second-type motion information includes, but is not limited to, rotation information.
In some embodiments, the second-type connecting parts include the wrists and the ankles.
In other embodiments, if the second-type connecting part is a wrist, the traction part corresponding to the wrist includes the forearm and/or the upper arm; and/or, if the second-type connecting part is an ankle, the traction part corresponding to the ankle includes the lower leg and/or the thigh.
In some embodiments, the first-type connecting part includes the neck, which connects the head and the trunk.
In still other embodiments, determining the motion information of the connecting part according to the features of the at least two parts and the first motion constraint condition of the connecting part includes: determining orientation information of the at least two parts according to their features; determining candidate orientation information of the connecting part according to the orientation information of the at least two parts; and determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition.
In some embodiments, determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts includes: determining a first candidate orientation and a second candidate orientation of the connecting part according to the orientation information of the at least two parts.
Two included angles may be formed between the orientation information of the two parts, and these two angles correspond to rotation information of different orientations of the connecting part; the orientations corresponding to these two angles are therefore both candidate orientations. Only one of the two candidate orientations satisfies the first motion constraint condition on the motion of the connecting part, so the second-type motion information needs to be determined from the target orientation according to the first motion constraint condition. In this embodiment, the rotation angle that satisfies the first motion constraint condition is taken as the second-type motion information.
For example, two included angles, summing to 180 degrees, are formed between the orientation of the face and the orientation of the trunk; suppose these are a first angle and a second angle. The first motion constraint condition of the neck, which connects the face and the trunk, is: between -90 and 90 degrees; angles exceeding 90 degrees are excluded by this constraint. This reduces abnormal situations, while the controlled model simulates the target's motion, in which the rotation angle exceeds 90 degrees clockwise or counterclockwise, for example 120 or 180 degrees. If the first motion constraint condition is between -90 and 90 degrees, it corresponds to two limit angles: one is -90 degrees, and the other is 90 degrees.
However, when the detected rotation angle falls outside the range of -90 to 90 degrees, it is modified to the limit angle defined by the first motion constraint condition. For example, if a rotation angle exceeding 90 degrees is detected, the detected rotation angle is modified to the limit angle closer to it, i.e., 90 degrees.
In some embodiments, determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition includes: selecting, from the first candidate orientation information and the second candidate orientation information, target orientation information within the orientation-change constraint range; and determining the motion information of the connecting part according to the target orientation information.
For example, take the neck: if the face is oriented to the right, the corresponding neck orientation may be 90 degrees to the right or 270 degrees to the left. However, given the physiological structure of the human body, the neck cannot be made to face right by rotating 270 degrees to the left. At this point, both 90 degrees to the right and 270 degrees to the left are candidate orientation information for the neck, and the neck's orientation information needs to be further determined according to the aforementioned first motion constraint condition. In this example, 90 degrees to the right is the target orientation information of the neck, and from this the second-type motion information of the neck relative to the camera coordinate system is obtained as a rotation of 90 degrees to the right.
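A minimal sketch of this candidate selection and clamping, assuming yaw angles measured in degrees relative to the trunk and the -90..90 constraint from the neck example above; the function name and angle convention are assumptions.

```python
def neck_rotation(face_yaw_deg: float, limit: float = 90.0) -> float:
    """Pick the candidate neck rotation satisfying the [-limit, limit] constraint.

    A yaw of +90 (right) also has the candidate -270 (left); only the one
    inside the constraint range is physiologically valid.
    """
    candidates = (
        face_yaw_deg,
        face_yaw_deg - 360.0 if face_yaw_deg > 0 else face_yaw_deg + 360.0,
    )
    valid = [c for c in candidates if -limit <= c <= limit]
    if valid:
        return valid[0]
    # Neither candidate satisfies the constraint: clamp to the nearer limit angle.
    nearest = min(candidates, key=abs)
    return max(-limit, min(limit, nearest))

assert neck_rotation(90.0) == 90.0    # face right -> neck rotates 90 deg right
assert neck_rotation(120.0) == 90.0   # out of range -> clamped to the limit angle
```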
The target orientation information here is the information that satisfies the first motion constraint condition.
In some embodiments, determining the orientation information of the at least two parts according to their features includes: acquiring a first key point and a second key point of each of the at least two parts; acquiring a first reference point of each of the at least two parts, where the first reference point is a predetermined key point within the target; generating a first vector based on the first key point and the first reference point, and a second vector based on the second key point and the first reference point; and determining the orientation information of each of the at least two parts based on the first vector and the second vector.
If the first of the two parts is the shoulders of a human body, the first reference point of the first part may be the waist key point of the target or the midpoint of the key points of the two hips. If the second of the two parts is the face, the first reference point of the second part may be the connection point between the neck, to which the face is connected, and the shoulders.
Connecting the first reference point to the two corresponding key points forms two vectors, and the cross product of these two vectors yields their normal vector, whose direction can be regarded as the orientation of the corresponding part. Therefore, in some embodiments, determining the orientation information of each of the at least two parts based on the two vectors includes: taking the cross product of a part's first vector and second vector to obtain the normal vector of the plane in which the part lies; and using that normal vector as the orientation information of the part.
Once the normal vector is determined, the orientation of the plane in which the part lies is determined as well.
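A brief sketch of the cross-product construction just described, using assumed shoulder and mid-hip key-point coordinates.

```python
import numpy as np

# Assumed 3D key points: two shoulder key points and the mid-hip reference point.
left_shoulder = np.array([-0.2, 1.4, 0.0])
right_shoulder = np.array([0.2, 1.4, 0.1])
mid_hip = np.array([0.0, 0.9, 0.0])            # first reference point (assumed)

v1 = left_shoulder - mid_hip                   # first vector
v2 = right_shoulder - mid_hip                  # second vector
normal = np.cross(v1, v2)                      # normal of the plane of the part
orientation = normal / np.linalg.norm(normal)  # unit orientation of the part
```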
In some embodiments, determining the motion information of the connecting part based on the motion information of the at least two parts includes: acquiring fourth 3D coordinates of the connecting part relative to a second reference point; and obtaining absolute rotation information of the connecting part from the fourth 3D coordinates. Controlling the motion of the corresponding part of the controlled model according to the motion information then includes: controlling the motion of the corresponding connecting part of the controlled model based on the absolute rotation information.
In some embodiments, the second reference point may be one of the support key points of the target. Taking a person as the target, the second reference point may be a key point of the part to which the first-type connecting part is connected. For example, for the neck, the second reference point may be a key point of the shoulders to which the neck is connected.
In other embodiments, the second reference point may be the same as the first reference point; for example, both may be the root node of the human body, which may be the midpoint of the line connecting the two crotch key points of the human body. The root node includes, but is not limited to, key point 0 shown in FIG. 7B. FIG. 7B is a schematic diagram of the human skeleton and contains 17 skeleton joint points numbered 0 to 16.
In other embodiments, controlling the motion of the corresponding connecting part of the controlled model based on the absolute rotation information further includes: decomposing the absolute rotation information according to the traction hierarchy among the plurality of connecting parts within the target to obtain relative rotation information; and controlling the motion of the corresponding connecting part of the controlled model based on the relative rotation information.
For example, the following is one example of a hierarchy: first level: the pelvis; second level: the waist; third level: the thighs (e.g., left thigh, right thigh); fourth level: the lower legs (e.g., left lower leg, right lower leg); fifth level: the feet.
For another example, the following is another hierarchy: first level: the chest; second level: the neck; third level: the head.
Further, for example, the following is yet another hierarchy: first level: the clavicle, corresponding to the shoulder; second level: the upper arm; third level: the forearm; fourth level: the hand.
From the first level to the fifth level, the hierarchical rank decreases in turn, and the motion of a higher-level part affects the motion of a lower-level part; hence the level of the traction part is higher than that of the connecting part.
When determining the second-type motion information, first the motion information of the key points corresponding to the parts at each level is acquired; then, based on the hierarchy, the motion information of the key points of lower-level parts relative to the key points of higher-level parts (i.e., the relative rotation information) is determined.
For example, with quaternions used to represent motion information, the relative rotation information can be expressed by the following formula (1). First, the rotation quaternion {Q_0, Q_1, ..., Q_18} of each key point relative to the camera coordinate system is obtained; then the rotation quaternion q_i of each key point relative to its parent key point is computed as:

q_i = Q_parent(i)^(-1) * Q_i    (1)

where the parent key point parent(i) is the key point one level above the current key point i; Q_i is the rotation quaternion of the current key point i relative to the camera coordinate system; and Q_parent(i)^(-1) is the inverse rotation parameter of the key point of the upper level. For example, if Q_parent(i) is the rotation parameter of the key point of the upper level with a rotation angle of 90 degrees, then Q_parent(i)^(-1) has a rotation angle of -90 degrees.
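A minimal sketch of formula (1) applied over an assumed parent table; for unit quaternions the inverse is the conjugate. The hierarchy table and the quaternion helpers are illustrative assumptions.

```python
import numpy as np

def quat_conj(q: np.ndarray) -> np.ndarray:
    """Conjugate of a (w, x, y, z) quaternion; the inverse for unit quaternions."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def quat_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Q[i]: absolute rotation of key point i in the camera coordinate system.
# parent[i]: index of the key point one level above i (assumed table; -1 = root).
def relative_rotations(Q, parent):
    q = []
    for i, qi in enumerate(Q):
        if parent[i] < 0:
            q.append(qi)                                     # root keeps absolute rotation
        else:
            q.append(quat_mul(quat_conj(Q[parent[i]]), qi))  # formula (1)
    return q
```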
In some embodiments, controlling the motion of the corresponding connecting part of the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition; and controlling the motion of the corresponding connecting part of the controlled model based on the relative rotation information includes: controlling the motion of the corresponding connecting part of the controlled model based on the corrected relative rotation information.
In some embodiments, the second constraint condition includes the rotatable angle of the connecting part.
In some embodiments, the method further includes: performing posture defect correction on the second-type motion information to obtain corrected second-type motion information. Controlling the motion of the connecting part of the controlled model according to the second-type motion information then includes: controlling the motion of the connecting part of the controlled model using the corrected second-type motion information.
For example, some users have somewhat non-standard body shapes or walk in an uncoordinated manner. To reduce phenomena such as the controlled model directly imitating these rather odd movements, in this embodiment posture defect correction may be performed on the second-type motion information to obtain corrected second-type motion information.
In some embodiments, the method further includes: performing posture defect correction on the first-type motion information to obtain corrected first-type motion information; step S140 may then include: controlling the motion of the corresponding part of the controlled model using the corrected first-type motion information.
In some embodiments, the posture defects corrected include at least one of the following: a synchronization defect of the upper and lower limbs; a bow-legged movement defect; a splay-footed (out-toed) movement defect; and a pigeon-toed (in-toed) movement defect.
In some embodiments, the method further includes: obtaining posture defect correction parameters according to difference information between the body shape of the target and a standard body shape, where the posture defect correction parameters are used for correcting the first-type motion information and/or the second-type motion information.
For example, before the controlled model is controlled using the image containing the target, the body shape of the target is first detected and compared with the standard body shape to obtain difference information; posture defect correction is then performed using the difference information.
A prompt to maintain a predetermined posture is output on the display interface. After seeing the prompt, the user holds the predetermined posture, so that the image device can capture an image of the user maintaining it; image detection then determines whether the user's predetermined posture is sufficiently standard, thereby obtaining the difference information. The predetermined posture may include, but is not limited to, an upright posture of the human body.
For example, some people stand with their feet splayed outward, whereas in a normal standard standing posture the lines connecting the toes and heel of each foot should be parallel to each other. When the first-type motion information and/or second-type motion information corresponding to the features of the target is acquired, this non-standard body form is taken into account when controlling the controlled model (i.e., the posture defect correction).
In some other embodiments, the method further includes: correcting the proportions of different parts of a standard model according to the proportional relationships of the different parts of the target, to obtain the corrected controlled model.
The proportional relationships between the various parts of different targets may differ. For example, taking people as an example, the leg-to-head length ratio of a professional model is larger than that of an ordinary person. Some people have fuller hips, so the spacing between their hips may be larger than that of ordinary people.
The standard model may be a mean model obtained from a large amount of human body data. To enable the controlled model to imitate the target's motion more accurately, in this embodiment the proportions of different parts of the standard model are corrected according to the proportional relationships of the different parts of the target, yielding the corrected controlled model. Taking a person as the target, the corrected parts include, but are not limited to, the crotch and/or the legs.
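A brief sketch of one possible proportion correction, under the assumption that both models are parameterized by per-part lengths; the part names and measurements are illustrative.

```python
# Assumed per-part lengths (meters): a standard (mean) model and the target's
# proportions measured from its key points.
standard = {"head": 0.24, "torso": 0.60, "leg": 0.85}
target = {"head": 0.22, "torso": 0.58, "leg": 0.95}

std_height = sum(standard.values())
tgt_height = sum(target.values())

# Each part of the controlled model takes the target's relative proportion,
# rescaled so the controlled model keeps its original overall height.
corrected = {part: (target[part] / tgt_height) * std_height for part in standard}
```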
As shown in FIG. 3A, FIG. 3B, and FIG. 3C, the small image in the upper left corner of each frame is the captured image, and the lower right corner shows the controlled model of the human body. The user's hand is moving: from FIG. 3A to FIG. 3B and from FIG. 3B to FIG. 3C, as the user's hand moves, the hand of the controlled model follows. In FIGS. 3A to 3C the user's hand changes in sequence from a fist, to an extended palm, to an extended index finger, and the controlled model imitates these gestures through the same sequence.
As shown in FIG. 4A, FIG. 4B, and FIG. 4C, the small image in the upper left corner of each frame is the captured image, and the lower right corner shows the controlled model of the human body. The user's trunk is moving: from FIG. 4A to FIG. 4B and from FIG. 4B to FIG. 4C, the user's trunk moves and the trunk of the controlled model follows. From FIG. 4A to FIG. 4C, the user changes from thrusting the hips toward the right of the image, to thrusting the hips toward the left, and finally standing upright; the controlled model likewise simulates the user's trunk movement.
As shown in FIG. 5A, FIG. 5B, and FIG. 5C, the small image in the upper left corner of each frame is the captured image, and the lower right corner shows the controlled model of the human body. From FIG. 5A to FIG. 5C, the user steps toward the right of the image, steps toward the left, and finally stands up straight; the controlled model also simulates the user's foot movements.
In addition, in FIGS. 4A to 4C, the controlled model also simulates the changes in the user's expression.
As shown in FIG. 6, this embodiment provides an image processing apparatus, including the following modules:
a first acquisition module 110, configured to acquire an image;
a second acquisition module 120, configured to acquire features of a part of a target based on the image;
a first determination module 130, configured to determine motion information of the part based on the features; and
a control module 140, configured to control the motion of the corresponding part of a controlled model according to the motion information.
In some embodiments, the second acquisition module 120 is specifically configured to: acquire first-type features of a first type of part of the target based on the image; and/or acquire second-type features of a second type of part of the target based on the image.
In some embodiments, the second acquisition module 120 is specifically configured to acquire, based on the image, expression features of the head and intensity coefficients of the expression features.
In some embodiments, acquiring the intensity coefficients of the expression features based on the image includes: obtaining, based on the image, intensity coefficients characterizing each sub-part of the first type of part.
In some embodiments, the first determination module 130 is specifically configured to determine the motion information of the head based on the expression features and the intensity coefficients; the control module 140 is specifically configured to control the expression change of the head of the controlled model according to the motion information of the head.
In some embodiments, the second acquisition module 120 is configured to obtain mesh information of the first type of part based on the image.
In some embodiments, the second acquisition module 120 is specifically configured to obtain, based on the image, intensity coefficients characterizing each sub-part of the first type of part.
In some embodiments, the second acquisition module 120 is specifically configured to acquire, based on the image, position information of key points of the second type of part of the target; the first determination module 130 is specifically configured to determine the motion information of the second type of part based on the position information.
In some embodiments, the second acquisition module 120 is specifically configured to: acquire, based on the image, first coordinates of the support key points of the second type of part of the target; and obtain second coordinates based on the first coordinates.
In some embodiments, the second acquisition module 120 is specifically configured to acquire first 2D coordinates of the support key points of the second type of part based on a 2D image, and obtain, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, first 3D coordinates corresponding to the first 2D coordinates.
In some embodiments, the second acquisition module 120 is specifically configured to acquire second 3D coordinates of the support key points of the second type of part of the target based on a 3D image, and obtain third 3D coordinates based on the second 3D coordinates.
In some embodiments, the second acquisition module 120 is specifically configured to correct, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second type of part in the 3D image, thereby obtaining the third 3D coordinates.
In some embodiments, the first determination module 130 is specifically configured to determine a quaternion of the second type of part based on the position information.
In some embodiments, the second acquisition module 120 is specifically configured to acquire first position information of the support key points of a first part of the second type of part, and acquire second position information of the support key points of a second part of the second type of part.
In some embodiments, the first determination module 130 is specifically configured to determine the motion information of the first part according to the first position information, and determine the motion information of the second part according to the second position information.
In some embodiments, the control module 140 is specifically configured to: control the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and control the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
In some embodiments, the first part is the trunk, and the second part is the upper limbs, the lower limbs, or the four limbs.
Several specific examples are provided below in conjunction with any of the foregoing embodiments.
Example 1
This example provides an image processing method with the following steps.
An image is captured, the image including a target, and the target including but not limited to a human body.
Face key points of the human body are detected, where the face key points may be contour key points of the face surface.
Trunk key points and/or limb key points of the human body are detected, where the trunk key points and/or limb key points may all be 3D key points represented by 3D coordinates. The 3D coordinates may include 2D coordinates detected from a 2D image and then converted into 3D coordinates using a 2D-to-3D coordinate conversion algorithm, or 3D coordinates extracted from a 3D image captured by a 3D camera. The limb key points here may include upper-limb key points and/or lower-limb key points. Taking the hand as an example, the hand key points among the upper-limb key points include, but are not limited to, key points of the wrist joint, key points of the metacarpophalangeal joints, key points of the knuckles, and fingertip key points; the positions of these key points can reflect the movements of the hand and fingers.
Face mesh information is generated according to the face key points. An expression base corresponding to the current expression of the target is selected according to the mesh information, and the expression of the controlled model is controlled according to that expression base; the expression intensity of the controlled model corresponding to each expression base is controlled according to the intensity coefficients reflected by the mesh information.
Quaternions are converted from the trunk key points and/or limb key points. The trunk movement of the controlled model is controlled according to the quaternions corresponding to the trunk key points; and/or the limb movement of the controlled model is controlled according to the quaternions corresponding to the limb key points.
For example, the face key points may include 106 key points. The trunk key points and/or limb key points may include 14 key points or 17 key points, as specifically shown in FIG. 7A and FIG. 7B. FIG. 7A is a schematic diagram containing 14 skeleton key points; FIG. 7B is a schematic diagram containing 17 skeleton key points.
FIG. 7B may be a schematic diagram of 17 key points generated based on the 14 key points shown in FIG. 7A. The 17 key points in FIG. 7B correspond to the key points shown in FIG. 7A with key point 0, key point 7, and key point 9 added. The 2D coordinates of key point 9 may be preliminarily determined based on the 2D coordinates of key point 8 and key point 10; the 2D coordinates of key point 7 may be determined from the 2D coordinates of key point 8 and the 2D coordinates of key point 0. Key point 0 may serve as the reference point provided by the embodiments of the present disclosure, i.e., as the aforementioned first reference point and/or second reference point.
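A short sketch of deriving the added key points as midpoints of existing ones, which is one plausible reading of the construction above; whether exact midpoints are used is an assumption, and the coordinates are placeholders.

```python
import numpy as np

# Assumed 2D coordinates of detected key points (indices follow FIG. 7B).
kp8 = np.array([320.0, 200.0])    # e.g., upper chest
kp10 = np.array([320.0, 140.0])   # e.g., head
kp0 = np.array([320.0, 380.0])    # root: midpoint of the two crotch key points

kp9 = (kp8 + kp10) / 2.0          # derived from key points 8 and 10
kp7 = (kp8 + kp0) / 2.0           # derived from key points 8 and 0
```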
The controlled model in this example may be a game character in a game scene, a teacher model in an online-education video in an online teaching scene, or a virtual anchor in a virtual-anchor scene. In short, the controlled model is determined according to the application scene; different application scenes lead to different models and/or appearances of the controlled model.
For example, in a traditional lecture scene for subjects such as mathematics or physics, the clothing of the teacher model may be relatively formal, such as a suit. For another example, in a sports teaching scene such as yoga or gymnastics, the controlled model may wear sportswear.
Example 2
This example provides an image processing method with the following steps.
An image is captured, the image including a target, and the target including but not limited to a human body.
Trunk key points and limb key points of the human body are detected, where the trunk key points and/or limb key points may all be 3D key points represented by 3D coordinates. The 3D coordinates may include 2D coordinates detected from a 2D image and then converted into 3D coordinates using a 2D-to-3D coordinate conversion algorithm, or 3D coordinates extracted from a 3D image captured by a 3D camera. The limb key points here may include upper-limb key points and/or lower-limb key points. Taking the hand as an example, the hand key points among the upper-limb key points include, but are not limited to, key points of the wrist joint, key points of the metacarpophalangeal joints, key points of the knuckles, and fingertip key points. The positions of these key points can reflect the movements of the hand and fingers.
The trunk key points are converted into quaternions characterizing the trunk movement, which may be called trunk quaternions. The limb key points are converted into quaternions characterizing the limb movement, which may be called limb quaternions.
The trunk quaternions are used to control the trunk movement of the controlled model. The limb quaternions are used to control the limb movement of the controlled model.
The trunk key points and limb key points may include 14 key points or 17 key points, as specifically shown in FIG. 7A or FIG. 7B.
The controlled model in this example may be a game character in a game scene, a teacher model in an online-education video in an online teaching scene, or a virtual anchor in a virtual-anchor scene. In short, the controlled model is determined according to the application scene; different application scenes lead to different models and/or appearances of the controlled model.
For example, in a traditional lecture scene for subjects such as mathematics or physics, the clothing of the teacher model may be relatively formal, such as a suit. For another example, in a sports teaching scene such as yoga or gymnastics, the controlled model may wear sportswear.
Example 3
This example provides an image processing method, the steps of which are as follows.
An image is acquired; the image contains a target, which may be a human body.
From the image, the 3D posture of the target in three-dimensional space is obtained; this 3D posture can be represented by the 3D coordinates of the skeleton key points of the human body.
The absolute rotation parameters of the joints of the human body in the camera coordinate system are acquired; these absolute rotation positions can be determined from the coordinates in the camera coordinate system.
From these coordinates, the coordinate directions of the joints are obtained, and the relative rotation parameters of the joints are determined according to the hierarchical relationship. Determining the relative parameters may specifically include determining the positions of the joint key points relative to the root node of the human body; the relative rotation parameters can be represented by quaternions. The hierarchical relationship here may be the traction relationship between joints: for example, movement of the elbow joint pulls the wrist joint to some extent, and movement of the shoulder joint likewise pulls the elbow joint. The hierarchical relationship may also be predetermined according to the joints of the human body.
The quaternions are used to control the rotation of the controlled model.
For example, one hierarchical relationship is: first level, pelvis; second level, waist; third level, thighs (e.g., left thigh, right thigh); fourth level, calves (e.g., left calf, right calf); fifth level, feet.
Another hierarchical relationship is: first level, chest; second level, neck; third level, head.
A further hierarchical relationship is: first level, clavicle, corresponding to the shoulder; second level, upper arm; third level, forearm (also called the lower arm); fourth level, hand.
From the first level to the fifth level, the level decreases successively, and the movement of a higher-level part affects the movement of the lower-level parts; hence the level of the traction part is higher than that of the connected part.
When determining the second type of motion information, the motion information of the local key points at each level is obtained first; then, based on the hierarchical relationship, the motion information of the lower-level local key points relative to the higher-level key points (i.e., the relative rotation information) is determined.
For example, when quaternions are used to characterize the motion information, the relative rotation information can be expressed as follows: the rotation quaternions of the key points relative to the camera coordinate system are {Q_0, Q_1, ..., Q_18}, and the rotation quaternion q_i of each key point relative to its parent key point is then computed according to formula (1).
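Formula (1) itself appears in an earlier part of the document; on the usual reading of relative joint rotations, q_i composes the inverse of the parent's camera-frame rotation with the child's. A minimal sketch under that assumption, with quaternions stored as (w, x, y, z) and a hypothetical per-node parent lookup:

```python
import numpy as np

def quat_conjugate(q):
    """Conjugate of a unit quaternion (w, x, y, z), i.e. its inverse."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    """Hamilton product a * b for quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def relative_quaternions(Q, parent):
    """Given camera-frame rotations Q = [Q_0, ..., Q_18] and a parent index
    per node (parent[i] < 0 for the root), return q_i = Q_parent^{-1} * Q_i,
    the assumed reading of formula (1)."""
    q = []
    for i, Qi in enumerate(Q):
        if parent[i] < 0:
            q.append(np.array(Qi))   # root keeps its camera-frame rotation
        else:
            q.append(quat_multiply(quat_conjugate(Q[parent[i]]), Qi))
    return q
```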
The aforementioned use of quaternions to control the motion of each joint of the controlled model may include: using q_i to control the motion of each joint of the controlled model.
In a further image processing method, the method also includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle lying within a constraint condition, where the constraint condition may be an angular limit on the first Euler angle; obtaining the quaternion corresponding to the second Euler angle; and using that quaternion to control the rotation of the controlled model. Obtaining the quaternion corresponding to the second Euler angle may be done by directly converting the second Euler angle into a quaternion.
Taking the human body as an example, 17 joint key points can be detected through human-body detection; in addition, two key points are set for the left and right hands, giving 19 key points in total. FIG. 7B is a skeleton diagram of the 17 key points, and FIG. 8 is a skeleton diagram of the 19 key points. The bones shown in FIG. 8 correspond to the 19 key points, namely: pelvis, waist, left thigh, left calf, left foot, right thigh, right calf, right foot, chest, neck, head, left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, and left hand.
First, by detecting the key points of the human joints in the image, the coordinates of the 17 key points in the image coordinate system are obtained: S = {(x_0, y_0, z_0), ..., (x_16, y_16, z_16)}, where (x_i, y_i, z_i) is the coordinate of the i-th key point and i ranges from 0 to 16.
The coordinates of the 19 joint key points in their respective local coordinate systems can be defined as: A = {(p_0, q_0), ..., (p_18, q_18)}, where p_i is the three-dimensional coordinate of node i in its local coordinate system, generally a fixed value supplied with the original model that requires no modification or migration, and q_i is a quaternion representing the rotation of the bone controlled by node i in the coordinate system of its parent node, which can also be regarded as the rotation between the local coordinate system of the current node and that of its parent.
The procedure for computing the quaternion of the key point corresponding to each joint can be as follows. Determine the coordinate-axis directions of each node's local coordinate system: for each bone, the direction from the child node to the parent node is the x-axis; the rotation axis about which the bone can rotate through the largest angle is the z-axis; if the rotation axis cannot be determined, the direction the human body faces is taken as the y-axis. See FIG. 9 for details.
This example is described using a left-handed coordinate system; a right-handed coordinate system may also be used in a specific implementation.
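A hedged sketch of the axis construction just described: the child-to-parent direction gives x, a per-bone rotation-axis (or facing-direction) hint is made orthogonal to x to give z, and y completes the frame. The `rotation_axis` parameter is an assumed stand-in for the per-bone hint in the table below, and the sketch uses right-handed numpy cross products; signs would need flipping for the left-handed convention used in this example.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def bone_local_frame(child_pos, parent_pos, rotation_axis):
    """x runs from the child node toward the parent node; z is the bone's main
    rotation axis (or the body-facing direction when no axis can be decided),
    projected to be orthogonal to x; y completes the frame via a cross product."""
    x = normalize(np.asarray(parent_pos, float) - np.asarray(child_pos, float))
    a = np.asarray(rotation_axis, float)
    z = normalize(a - np.dot(a, x) * x)   # Gram-Schmidt: remove the x component
    y = np.cross(z, x)                    # right-handed; flip sign for left-handed
    return np.stack([x, y, z])            # rows are the local axis directions
```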
[Table: per-node definitions of the local coordinate axes for the 19-point skeleton; rendered in the original as images PCTCN2020072526-appb-000004 and PCTCN2020072526-appb-000005.]
In the above table, (i-j) denotes the vector pointing from key point i to key point j, and × denotes the cross product; for example, (1-7) denotes the vector from key point 1 to key point 7.
In the above table, nodes 8, 15, 11, and 18 are the four nodes of the hands and feet; since computing the quaternions of these four nodes requires specific postures, they are not included in the table. In addition, the 19-point skeleton node numbering can be seen in FIG. 8, and the 17-point skeleton key-point numbering in FIG. 7B.
The process of solving for the first Euler angle is as follows.
After the local rotation quaternion q_i of a joint point is computed, it is first converted into Euler angles, using the x-y-z order by default.
Let q_i = (q0, q1, q2, q3), where q0 is the real part and q1, q2, q3 are the imaginary parts. The Euler angles are then computed by formulas (2)-(4):
X = atan2(2*(q0*q1 - q2*q3), 1 - 2*(q1*q1 + q2*q2))      (2)
Y = asin(2*(q1*q3 + q0*q2)), where the argument of asin is clamped to the range -1 to 1      (3)
Z = atan2(2*(q0*q3 - q1*q2), 1 - 2*(q2*q2 + q3*q3))      (4)
Here X is the Euler angle in the first direction, Y in the second direction, and Z in the third direction; any two of the first, second, and third directions are perpendicular.
The three angles (X, Y, Z) can then be limited: any angle exceeding its permitted range is clamped to the boundary value, yielding the corrected second Euler angle (X', Y', Z'), which is then restored to a new local-coordinate-system rotation quaternion q_i'.
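A sketch of the full limit-and-restore step: extract (X, Y, Z) with equations (2)-(4), clamp each angle to a per-axis joint limit, and recompose q_i'. The recomposition shown uses a common Euler-to-quaternion pairing; the text does not fully pin down its sign conventions, so this is illustrative rather than exact, and the joint limits are hypothetical.

```python
import math

def quat_to_euler(q0, q1, q2, q3):
    """Equations (2)-(4): quaternion (q0 = real part) to Euler angles X, Y, Z,
    clamping the asin argument to [-1, 1] as the text requires."""
    X = math.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    Y = math.asin(max(-1.0, min(1.0, 2 * (q1 * q3 + q0 * q2))))
    Z = math.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return X, Y, Z

def clamp_euler(angles, limits):
    """Clamp each angle to its [lo, hi] range, yielding (X', Y', Z')."""
    return tuple(min(max(a, lo), hi) for a, (lo, hi) in zip(angles, limits))

def euler_to_quat(X, Y, Z):
    """Recompose corrected angles into the new local rotation q_i'
    (a common pairing; exact convention is an assumption of this sketch)."""
    cx, sx = math.cos(X / 2), math.sin(X / 2)
    cy, sy = math.cos(Y / 2), math.sin(Y / 2)
    cz, sz = math.cos(Z / 2), math.sin(Z / 2)
    return (cx * cy * cz + sx * sy * sz,
            sx * cy * cz - cx * sy * sz,
            cx * sy * cz + sx * cy * sz,
            cx * cy * sz - sx * sy * cz)

# Usage: restrict a joint to hypothetical +/-90 degree limits on each axis.
limits = [(-math.pi / 2, math.pi / 2)] * 3
q = euler_to_quat(1.2, 0.4, -0.3)            # some pose (values illustrative)
q_corrected = euler_to_quat(*clamp_euler(quat_to_euler(*q), limits))
```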
In another further image processing method, the method also includes performing posture-optimization adjustment on the second Euler angle. For example, some of the angles in the second Euler angle may be adjusted, based on preset rules, to posture-optimized Euler angles, yielding a third Euler angle; obtaining the quaternion corresponding to the second Euler angle may then include converting the third Euler angle into the quaternion that controls the controlled model.
In yet another further image processing method, the method also includes: after converting the second Euler angle into a quaternion, performing posture-optimization processing on the converted quaternion data, for example adjusting it based on preset rules to obtain an adjusted quaternion, and controlling the controlled model according to the finally adjusted quaternion.
In some embodiments, the adjustment of the second Euler angle, or of the quaternion obtained from it, may be based on preset rules or may be optimized by a deep learning model itself; there are many specific implementations, which are not limited in this application.
In addition, yet another image processing method may include pre-processing. For example, the crotch and/or shoulder width of the controlled model is modified according to the size of the captured human body, thereby correcting its overall posture. The standing posture may be corrected for standing upright and for a protruding belly: some people push out their belly when standing, and belly correction keeps the controlled model from imitating this movement; some people hunch when standing, and hunchback correction likewise keeps the controlled model from imitating the hunched posture.
Example 4
This example provides an image processing method, the steps of which are as follows.
An image is acquired; the image contains a target, which may include at least one of a human body, a human upper limb, and a human lower limb.
The coordinate system of the target joint is obtained from its position information in the image coordinate system, and the coordinate system of the limb part that pulls the target joint is obtained from that part's position information in the image coordinate system.
Based on the coordinate system of the target joint and that of the limb part, the rotation of the target joint relative to the limb part is determined, yielding rotation parameters that include the spin parameter of the target joint and the rotation parameter pulled by the limb part.
A first angle limit is applied to the rotation parameter pulled by the limb part, giving the final pulled rotation parameter, and the rotation parameter of the limb part is corrected according to this final pulled rotation parameter. The relative rotation parameter of the target joint with respect to the limb part with the corrected rotation parameter is then determined, and a second angle limit is applied to it, giving the limited relative rotation parameter.
The limited rotation parameters are converted into a quaternion, and the movement of the target joint of the controlled model is controlled according to this quaternion.
For example, when the upper limb of the human body is processed, the coordinate system of the hand in the image coordinate system is obtained, along with the coordinate systems of the forearm and the upper arm; the target joint here is the wrist joint. The rotation of the hand relative to the forearm is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the forearm, specifically by assigning it to the forearm's rotation in the corresponding direction, and the forearm's maximum rotation is limited using the forearm's first angle limit. The rotation of the hand relative to the corrected forearm is then determined, yielding the relative rotation parameter, to which a second angle limit is applied to obtain the rotation of the hand relative to the forearm.
When the lower limb of the human body is processed, the coordinate system of the foot in the image coordinate system is obtained, along with the coordinate systems of the calf and the thigh; the target joint here is the ankle joint. The rotation of the foot relative to the calf is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the calf, specifically by assigning it to the calf's rotation in the corresponding direction, and the calf's maximum rotation is limited using the calf's first angle limit. The rotation of the foot relative to the corrected calf is then determined, yielding the relative rotation parameter, to which a second angle limit is applied to obtain the rotation of the foot relative to the calf.
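The spin/pulled-rotation split described in the two paragraphs above reads like a standard swing-twist decomposition about the bone axis; the sketch below is offered under that assumption, with quaternions stored as (w, x, y, z):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw * bw - ax * bx - ay * by - az * bz,
                     aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw])

def qconj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def swing_twist(q, bone_axis):
    """Split q into q = swing * twist: `twist` is the rotation about
    `bone_axis` (the joint's spin), and `swing` is the remainder (the
    rotation that would be transferred to, e.g., the forearm or calf)."""
    axis = np.asarray(bone_axis, float)
    axis = axis / np.linalg.norm(axis)
    v = np.array(q[1:])                    # vector part of q
    proj = np.dot(v, axis) * axis          # component along the twist axis
    twist = np.array([q[0], *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-9:                        # 180-degree swing: twist undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist = twist / norm
    swing = qmul(np.array(q, dtype=float), qconj(twist))
    return swing, twist
```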
Example 5
The neck controls the orientation of the head. The face, body, and hands are independent parts that must ultimately be composed into a whole, so the rotation of the neck is very important.
The orientation of the human body can be computed from the body key points, and the orientation of the face from the face key points; the relative position of these two orientations gives the rotation angle of the neck. This solves the angle problem of the connecting part, which is handled by relative computation. For instance, if the body is at 0 degrees and the face at 90 degrees, then to control a controlled model, which only attends to local angles, the change in angle between head and body must be resolved into the neck angle of the controlled model before its head can be controlled.
In this example, the current orientation of the user's face is first determined from the image, and the rotation angle of the neck is then computed. Since neck rotation has a limited range, say at most 90 degrees, a computed rotation angle outside this range (-90 to 90 degrees) is replaced by the range boundary (e.g., -90 or 90 degrees).
3D key points can be used to compute the orientation of the body or face. Specifically, the cross product of two non-collinear vectors lying in the plane of the face or body gives the normal vector of that plane, which is the orientation of the face or body; this orientation then serves as the orientation of the connecting part (the neck) between the body and the face.
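A minimal sketch of this orientation computation: the plane normal comes from the cross product of two non-collinear in-plane vectors, and the neck angle is the clamped signed angle between the body and face orientations. The y-up axis and the ±90 degree range are assumptions of the sketch.

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three key points, via the cross
    product of two non-collinear in-plane vectors."""
    n = np.cross(np.asarray(p1, float) - p0, np.asarray(p2, float) - p0)
    return n / np.linalg.norm(n)

def neck_rotation(body_normal, face_normal, limit=np.pi / 2):
    """Signed angle from the body orientation to the face orientation about
    the vertical axis, clamped to an assumed +/-90 degree neck range."""
    b = np.array([body_normal[0], 0.0, body_normal[2]])  # project to horizontal
    f = np.array([face_normal[0], 0.0, face_normal[2]])  # (assumes y is up)
    b = b / np.linalg.norm(b)
    f = f / np.linalg.norm(f)
    angle = np.arctan2(np.cross(b, f)[1], np.dot(b, f))  # signed about y-up
    return float(np.clip(angle, -limit, limit))
```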
As shown in FIG. 10, an embodiment of the present application provides an image device, including: a memory 1002 configured to store information; and a processor 1001 connected to the memory 1002 and configured to execute computer-executable instructions stored on the memory 1002, so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example the image processing method shown in FIG. 1 and/or FIG. 2.
The memory 1002 may be any of various types of memory, such as random-access memory, read-only memory, or flash memory. The memory 1002 may be used for information storage, for example storing computer-executable instructions, which may be various program instructions such as target program instructions and/or source program instructions.
The processor 1001 may be any of various types of processors, for example a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
The processor 1001 may be connected to the memory 1002 through a bus, such as an integrated-circuit bus.
In some embodiments, the terminal device may further include a communication interface 1003, which may include a network interface such as a local-area-network interface or a transceiver antenna; the communication interface is likewise connected to the processor 1001 and can be used for sending and receiving information.
In some embodiments, the terminal device further includes a human-computer interaction interface 1005, which may include various input and output devices such as a keyboard or a touch screen.
In some embodiments, the image device further includes a display 1004, which can display various prompts, captured face images, and/or various interfaces.
An embodiment of the present application provides a non-volatile computer storage medium storing computer-executable code; when executed, the computer-executable code can implement the image processing method provided by one or more of the foregoing technical solutions, for example the image processing method shown in FIG. 1 and/or FIG. 2.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the couplings, direct couplings, or communication connections between the components shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may all be integrated into one processing module, or each unit may serve as a unit on its own, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by program instructions running on related hardware. The aforementioned program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned non-volatile storage media include various media that can store program code, such as removable storage devices, read-only memory (ROM), magnetic disks, and optical disks.
The above are only specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (32)

  1. An image processing method, characterized by comprising:
    acquiring an image;
    acquiring a local feature of a target based on the image;
    determining motion information of the local part based on the feature;
    controlling the motion of the corresponding local part of a controlled model according to the motion information.
  2. The method according to claim 1, characterized in that acquiring the local feature of the target based on the image comprises:
    acquiring a first-type feature of a first-type part of the target based on the image; and/or
    acquiring a second-type feature of a second-type part of the target based on the image.
  3. The method according to claim 2, characterized in that acquiring the first-type feature of the first-type part of the target based on the image comprises:
    acquiring, based on the image, an expression feature of the head and an intensity coefficient of the expression feature.
  4. The method according to claim 3, characterized in that acquiring the intensity coefficient of the expression feature based on the image comprises:
    obtaining, based on the image, an intensity coefficient characterizing each sub-part of the first-type part.
  5. The method according to claim 3 or 4, characterized in that determining the motion information of the local part based on the feature comprises:
    determining motion information of the head based on the expression feature and the intensity coefficient;
    and controlling the motion of the corresponding local part of the controlled model according to the motion information comprises:
    controlling the expression change of the head of the controlled model according to the motion information of the head.
  6. The method according to any one of claims 2 to 5, characterized in that acquiring the second-type feature of the second-type part of the target based on the image comprises:
    acquiring position information of key points of the second-type part of the target based on the image;
    and determining the motion information of the local part based on the feature comprises:
    determining motion information of the second-type part based on the position information.
  7. The method according to claim 6, characterized in that acquiring the position information of the key points of the second-type part of the target based on the image comprises:
    acquiring first coordinates of support key points of the second-type part of the target based on the image;
    obtaining second coordinates based on the first coordinates.
  8. The method according to claim 7, characterized in that acquiring the first coordinates of the support key points of the second-type part of the target based on the image comprises:
    acquiring first 2D coordinates of the support key points of the second-type part based on a 2D image;
    and obtaining the second coordinates based on the first coordinates comprises:
    obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
  9. The method according to claim 7, characterized in that acquiring the first coordinates of the support key points of the second-type part of the target based on the image comprises:
    acquiring second 3D coordinates of the support key points of the second-type part of the target based on a 3D image;
    and obtaining the second coordinates based on the first coordinates comprises:
    obtaining third 3D coordinates based on the second 3D coordinates.
  10. The method according to claim 9, characterized in that obtaining the third 3D coordinates based on the second 3D coordinates comprises:
    correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second-type part in the 3D image, thereby obtaining the third 3D coordinates.
  11. The method according to claim 6, characterized in that determining the motion information of the second-type part based on the position information comprises:
    determining a quaternion of the second-type part based on the position information.
  12. The method according to claim 6, characterized in that acquiring the position information of the key points of the second-type part of the target based on the image comprises:
    acquiring first position information of support key points of a first part of the second-type part;
    acquiring second position information of support key points of a second part of the second-type part.
  13. The method according to claim 12, characterized in that determining the motion information of the second-type part based on the position information comprises:
    determining motion information of the first part according to the first position information;
    determining motion information of the second part according to the second position information.
  14. The method according to claim 12 or 13, characterized in that controlling the motion of the corresponding local part of the controlled model according to the motion information comprises:
    controlling the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part;
    controlling the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
  15. The method according to any one of claims 12 to 14, characterized in that
    the first part is the torso; and/or the second part is an upper limb, a lower limb, or all four limbs.
  16. An image processing apparatus, characterized by comprising:
    a first acquisition module, configured to acquire an image;
    a second acquisition module, configured to acquire a local feature of a target based on the image;
    a first determination module, configured to determine motion information of the local part based on the feature;
    a control module, configured to control the motion of the corresponding local part of a controlled model according to the motion information.
  17. The apparatus according to claim 16, characterized in that the second acquisition module is specifically configured to:
    acquire a first-type feature of a first-type part of the target based on the image; and/or
    acquire a second-type feature of a second-type part of the target based on the image.
  18. The apparatus according to claim 17, characterized in that the second acquisition module is specifically configured to acquire, based on the image, an expression feature of the head and an intensity coefficient of the expression feature.
  19. The apparatus according to claim 18, characterized in that acquiring the intensity coefficient of the expression feature based on the image comprises:
    obtaining, based on the image, an intensity coefficient characterizing each sub-part of the first-type part.
  20. The apparatus according to claim 18 or 19, characterized in that
    the first determination module is specifically configured to determine motion information of the head based on the expression feature and the intensity coefficient;
    the control module is specifically configured to control the expression change of the head of the controlled model according to the motion information of the head.
  21. The apparatus according to any one of claims 17 to 20, characterized in that
    the second acquisition module is specifically configured to acquire position information of key points of the second-type part of the target based on the image;
    the first determination module is specifically configured to determine motion information of the second-type part based on the position information.
  22. The apparatus according to claim 21, characterized in that the second acquisition module is specifically configured to:
    acquire first coordinates of support key points of the second-type part of the target based on the image;
    obtain second coordinates based on the first coordinates.
  23. The apparatus according to claim 22, characterized in that the second acquisition module is specifically configured to:
    acquire first 2D coordinates of the support key points of the second-type part based on a 2D image;
    obtain first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
  24. The apparatus according to claim 22, characterized in that the second acquisition module is specifically configured to:
    acquire second 3D coordinates of the support key points of the second-type part of the target based on a 3D image;
    obtain third 3D coordinates based on the second 3D coordinates.
  25. The apparatus according to claim 24, characterized in that the second acquisition module is specifically configured to correct, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded portion of the second-type part in the 3D image, thereby obtaining the third 3D coordinates.
  26. The apparatus according to claim 21, characterized in that the first determination module is specifically configured to determine a quaternion of the second-type part based on the position information.
  27. The apparatus according to claim 21, characterized in that the second acquisition module is specifically configured to:
    acquire first position information of support key points of a first part of the second-type part;
    acquire second position information of support key points of a second part of the second-type part.
  28. The apparatus according to claim 27, characterized in that the first determination module is specifically configured to:
    determine motion information of the first part according to the first position information;
    determine motion information of the second part according to the second position information.
  29. The apparatus according to claim 27 or 28, characterized in that the control module is specifically configured to:
    control the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part;
    control the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
  30. The apparatus according to any one of claims 27 to 29, characterized in that
    the first part is the torso; and/or the second part is an upper limb, a lower limb, or all four limbs.
  31. An image device, characterized by comprising:
    a memory;
    a processor connected to the memory and configured to execute computer-executable instructions located on the memory, so as to implement the method provided by any one of claims 1 to 15.
  32. A non-volatile computer storage medium storing computer-executable instructions; after the computer-executable instructions are executed by a processor, the method provided by any one of claims 1 to 15 can be implemented.