WO2020147794A1 - Image processing method and apparatus, image device and storage medium - Google Patents
- Publication number
- WO2020147794A1 (PCT/CN2020/072526)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- type
- image
- coordinates
- information
- target
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Definitions
- the present disclosure relates to the field of information technology, and in particular to an image processing method and device, image equipment and storage medium.
- embodiments of the present disclosure provide an image processing method and device, image equipment, and storage medium.
- the present disclosure provides an image processing method, including:
- acquiring the local features of the target based on the image includes: acquiring the first-type feature of the first-type part of the target based on the image; and/or, acquiring the second-type feature of the second-type part of the target based on the image.
- acquiring the first-type feature of the first-type part of the target based on the image includes: acquiring the expression feature of the head and the intensity coefficient of the expression feature based on the image.
- obtaining the intensity coefficient of the expression feature based on the image includes: obtaining, based on the image, an intensity coefficient representing each sub-part of the first type of part.
- the determination of the local motion information based on the features includes: determining the motion information of the head based on the expression features and the intensity coefficient; and the controlling of the corresponding local movement of the controlled model according to the motion information includes: controlling the expression change of the head of the controlled model according to the motion information of the head.
- the acquiring the second-type feature of the second-type part of the target based on the image includes: acquiring the position information of the key points of the second-type part of the target based on the image; the determining the local motion information based on the feature includes: determining the motion information of the second-type part based on the position information.
- acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring the first coordinates of the support key points of the second-type part of the target based on the image; and obtaining the second coordinates based on the first coordinates.
- acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring the first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtaining the second coordinates based on the first coordinates includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, the first 3D coordinates corresponding to the first 2D coordinates.
- acquiring the first coordinates of the support key points of the second-type part of the target based on the image includes: acquiring the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtaining the second coordinates based on the first coordinates includes: obtaining third 3D coordinates based on the second 3D coordinates.
- obtaining the third 3D coordinates based on the second 3D coordinates includes: based on the second 3D coordinates, correcting the 3D coordinates of the support key points corresponding to the occluded part of the second-type part in the 3D image, thereby obtaining the third 3D coordinates.
- determining the motion information of the second type of part based on the position information includes: determining the quaternion of the second type of part based on the position information.
- acquiring the position information of the key points of the second-type part of the target based on the image includes: acquiring the first position information of the support key points of the first part in the second-type part; and acquiring the second position information of the support key points of the second part in the second-type part.
- determining the motion information of the second-type part based on the position information includes: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
- the controlling the movement of the part corresponding to the controlled model according to the movement information includes: controlling the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
- the first part is the trunk; and/or the second part is the upper limbs, lower limbs or limbs.
- an image processing device including:
- the first acquisition module is used to acquire an image; the second acquisition module is used to acquire a local feature of the target based on the image; the first determination module is configured to determine the local motion information based on the feature; and the control module is used to control the local motion of the controlled model according to the motion information.
- the present disclosure provides an image device including: a memory; and a processor connected to the memory and configured to execute any of the above-mentioned image processing methods by executing computer-executable instructions stored in the memory.
- the present disclosure provides a non-volatile computer storage medium that stores computer-executable instructions; after the computer-executable instructions are executed by a processor, any one of the aforementioned image processing methods can be implemented.
- the technical solutions provided by the embodiments of the present disclosure obtain the local features of the target from the acquired image, then obtain the local motion information based on the local features, and finally control the local motion of the controlled model according to the motion information.
- when the controlled model is used to simulate the movement of the target for live video broadcast, the movement of the controlled model can be accurately controlled, so that the controlled model accurately simulates the movement of the target.
- in this way, live video is realized on the one hand, and the user's privacy is protected on the other.
- FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the disclosure.
- FIG. 2 is a schematic flowchart of an image processing method according to another embodiment of the present disclosure.
- FIGS. 3A to 3C are schematic diagrams of a controlled model, provided by an embodiment of the disclosure, simulating changes in the hand movement of a collected user.
- FIGS. 4A to 4C are schematic diagrams of a controlled model, provided by an embodiment of the disclosure, simulating changes in the torso movement of a collected user.
- FIGS. 5A to 5C are schematic diagrams of a controlled model, provided by an embodiment of the disclosure, simulating the movement of a collected user's feet.
- FIG. 6 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
- FIG. 7A is a schematic diagram of key points of a skeleton provided by an embodiment of the disclosure.
- FIG. 7B is a schematic diagram of a skeleton key point provided by another embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of a skeleton provided by an embodiment of the present disclosure.
- FIG. 9 is a schematic diagram of a local coordinate system of different bones of a human body provided by an embodiment of the disclosure.
- FIG. 10 is a schematic structural diagram of an image device provided by an embodiment of the disclosure.
- this embodiment provides an image processing method, which includes the following steps.
- Step S110 Obtain an image.
- Step S120 Acquire local features of the target based on the image.
- Step S130 Based on the feature, determine the local motion information.
- Step S140 Control the local movement corresponding to the controlled model according to the movement information.
- the image processing method provided in this embodiment can drive the movement of the controlled model through image processing.
- the image processing method provided in this embodiment can be applied to an image device, which can be any electronic device capable of image processing, for example, an electronic device that performs image collection, image display, and image pixel reorganization to generate images.
- the image device includes but is not limited to various terminal devices, for example, a mobile terminal and/or a fixed terminal; it may also include various image servers capable of providing image services.
- the mobile terminal includes portable devices such as mobile phones or tablet computers that are easy for users to carry, and may also include devices worn by users, such as smart bracelets, smart watches, or smart glasses.
- the fixed terminal includes a fixed desktop computer and the like.
- the image acquired in step S110 may be a 2D (two-dimensional) image or a 3D (three-dimensional) image.
- the 2D image may include images collected by a single-lens or multi-lens camera, such as red, green, and blue (RGB) images.
- the 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a conversion algorithm from 2D coordinates to 3D coordinates; the 3D image may also be an image collected by a 3D camera.
- the method for acquiring the image may include: collecting the image with the camera of the image device itself; and/or receiving the image from an external device; and/or reading the image from a local database or local storage.
- the step S120 may include: detecting the image to obtain a feature of a part of the target, and the part may be any part of the target.
- the step S120 may include: detecting the image and acquiring features of at least two parts of the target, and the two parts may be different parts of the target.
- the two parts may be continuously distributed on the target, or may be distributed on the target at intervals.
- any part may include any of the following: head, trunk, limbs, upper limbs, lower limbs, hands, feet, etc.; the at least two parts may include at least two of the following: head, trunk, limbs, upper limbs, lower limbs, hands, feet, etc.
- the target is not limited to humans, but can also be various movable living or non-living objects such as animals.
- one or more local features are acquired, and the features can be features that characterize the local spatial structure information, position information, or motion state in various forms.
- a deep learning model such as a neural network can be used to detect the image to obtain the feature.
- the feature may represent the relative positional relationship between joint points in the human skeleton.
- the feature can characterize the position change relationship of the corresponding joint points in the human skeleton at adjacent time points, or the feature can characterize the positional relationship between the human skeleton in the current frame and an initial coordinate system (also called the camera coordinate system).
- the feature may include: the 3D coordinates of each joint point in the human skeleton detected by a deep learning model (such as the neural network used in the OpenPose project) in the world coordinate system.
- the feature may include: optical flow feature that characterizes the change of the posture of the human body.
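- as an illustration of this feature-extraction step, the following is a minimal sketch in Python; the `PoseNet` wrapper and its `detect` interface are assumptions standing in for any OpenPose-style network, not an API defined by this disclosure.

```python
import numpy as np

class PoseNet:
    """Placeholder for a deep-learning keypoint detector (e.g. an
    OpenPose-style network); the interface is hypothetical."""
    def detect(self, image: np.ndarray) -> np.ndarray:
        # Would return an (N, 3) array: 3D world coordinates of N joints.
        raise NotImplementedError

def extract_local_features(image: np.ndarray, model: PoseNet) -> np.ndarray:
    """Step S120 sketch: detect the image and return per-joint 3D
    coordinates, one possible form of the target's local features."""
    return model.detect(image)
```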
- the acquired image may be one frame of image or multiple frames of image.
- if a single frame is acquired, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in the camera coordinate system.
- if multiple frames are acquired, the subsequently obtained motion information may reflect the motion of the joint points in the current image relative to the corresponding joint points in the previous frames, or it may likewise reflect their motion relative to the camera coordinate system. This application does not limit the number of images acquired.
- the motion information characterizes the change of the motion corresponding to the part and/or the change of the expression caused by the change of the motion.
- in step S140, for example, the part corresponding to the head in the controlled model is controlled to move, and the part corresponding to the torso in the controlled model is controlled to move.
- the motion information includes, but is not limited to: the coordinates of the key points corresponding to the part, where the coordinates include but are not limited to 2D coordinates and 3D coordinates; the coordinates can characterize the change of the key points corresponding to the part relative to a reference position, and can thereby characterize the motion of the corresponding part.
- the motion information can be expressed in various information forms such as vectors, arrays, one-dimensional numerical values, and matrices.
- the controlled model may be a model corresponding to the target.
- the controlled model is a human body model; if the target is an animal, the controlled model may be a body model corresponding to an animal; if the target is a vehicle, the controlled model Can be a model of a vehicle.
- the controlled model is a model for the category of the target.
- the model can be predetermined and can be further divided into multiple styles.
- the style of the controlled model may be determined based on user instructions, and may include a variety of styles, for example, a real-life style that simulates a real person, an anime style, an internet-celebrity style, styles of different temperaments, and game styles; a style of different temperament may be, for example, a literary style or a rock style.
- the controlled model can be a role in the game.
- in an online teaching scenario, if the teacher's own image were used directly, the teacher's face and body shape would be exposed.
- the image of the teacher's movement can be obtained through image collection and other methods, and then the movement of a virtual controlled model can be controlled through feature extraction and movement information acquisition.
- the controlled model can be used to simulate the teacher’s movement to complete the physical exercise teaching through its own limb movement.
- in this way, the teacher's face and body shape do not need to be directly exposed in the teaching video, and the teacher's privacy is protected.
- if the method of this embodiment is used to simulate the movement of a real vehicle with a vehicle model in a surveillance video, the vehicle's license plate information and/or the overall outline of the vehicle can be retained in the surveillance video, while the vehicle's brand, model, color, and age can all be hidden to protect user privacy.
- the step S120 may include the following steps.
- Step S121 Based on the image, obtain a first-type feature of the first-type part of the target.
- Step S122 Based on the image, obtain a second-type feature of the second-type part of the target.
- the first type of features and the second type of features are features that characterize the spatial structure information, position information, and/or motion state of the corresponding part.
- different types of features have different characteristics, and applying each to the corresponding type of part yields higher accuracy.
- different features characterize the spatial changes caused by movement with different accuracy; in this embodiment, the face and the limbs can therefore be expressed by features that match the accuracy required for the face or the limbs, respectively.
- the first-type features of the first-type part and the second-type features of the second-type part are obtained separately.
- the first-type part and the second-type part are different types of parts; different types of parts can be distinguished by their movable range, or by the fineness of their movement.
- the first type of part and the second type of part may be two types of parts with a relatively large difference in the maximum amplitude of motion.
- the first type of part can be the head. All the facial features of the head can move, but their movement is relatively small; the whole head can also move, for example, nodding or shaking, but this movement range is small relative to the motion range of the limbs or the trunk.
- the second type of parts can be upper limbs, lower limbs or limbs, and the range of limb movement is very large. If the two types of local motion states are represented by the same feature, it may cause problems such as a decrease in accuracy or an increase in the complexity of the algorithm due to the amplitude of a certain local motion.
- in this embodiment, different types of features are used to obtain motion information according to the characteristics of different types of parts; this avoids reducing the accuracy of the information of at least one type of part and improves the accuracy of the motion information.
- the first-type features and the second-type features are acquired by different subjects, for example, acquired by using different deep learning models or deep learning modules.
- the first type of feature and the second type of feature have different acquisition logic.
- the step S121 may include: obtaining the facial expression features of the head based on the image.
- the first type of part is the head
- the head includes the face
- the expression features include but are not limited to at least one of the following: movement of the eyebrows, movement of the mouth, movement of the nose, movement of the eyes, and movement of the cheeks.
- the movement of the eyebrows can include: eyebrow lifting and drooping.
- Mouth movements can include: opening, closing, flattening, pouting, grinning, and baring teeth.
- the movement of the nose may include: contraction of the nose produced by inhaling through the nose, and extension of the nose when blowing outward.
- Eye movement may include, but is not limited to: eye socket movement and/or eyeball movement.
- the movement of the eye socket will change the size and/or shape of the eye socket, for example, the shape and size of the eye socket of squinting, staring, and smiling eyes will change.
- the movement of the eyeball may include: changes in the position of the eyeball within the eye socket.
- the change of the user's line of sight may cause the eyeball to be located at different positions of the eye socket, and the movement of the left and right eyeballs together can reflect the different emotional states of the user.
- as for cheek movement, some users have dimples or pear-shaped dimples when they laugh, and the shape of the cheeks changes accordingly.
- the movement of the head is not limited to the expression movement.
- the first type of feature is not limited to the expression feature, but may also include hair movement features, such as the movement of the hair on the head.
- the first type of features may also include the overall head movement features such as head shaking and/or head nodding.
- the step S121 further includes: obtaining the intensity coefficient of the expression feature based on the image.
- the intensity coefficient may correspond to the facial expression amplitude.
- the intensity coefficient here can be used to characterize the intensity of the expression action, for example, the intensity can be the magnitude of the expression action.
- the greater the intensity coefficient, the higher the characterized intensity.
- for example, for the mouth-opening expression base, the higher the intensity coefficient, the larger the amplitude of the opened mouth; for the pouting expression base, the larger the amplitude of the pout; and so on.
- for the eyebrow-raising expression base, the greater the intensity coefficient, the higher the eyebrows are raised.
- in this way, the controlled model can not only simulate the current action of the target, but also accurately simulate the intensity of the target's current expression, realizing accurate migration of the expression.
- the controlled object is a game character.
- the game character can not only be controlled by the user's body movements, but also accurately simulate the user's facial expression characteristics. In this way, in the game scene, the simulation degree of the game scene is improved, and the user's game experience is improved.
- in some embodiments, the first-type feature includes mesh information of the face; the mesh information includes but is not limited to: quadrilateral mesh (grid) information and/or triangular patch information.
- the quadrilateral grid information indicates the information of the latitude and longitude lines; the triangular patch information is the information of the triangular patch connected by three key points.
- the mesh information is formed from a predetermined number of key points on the body surface of the face; the intersections of the longitude and latitude lines in the grid represented by the quadrilateral grid information may be the locations of the face key points.
- a change in the position of a grid intersection reflects a change in expression.
- the expression feature and intensity coefficient obtained based on the quadrilateral grid information can be used for precise control of the facial expression of the controlled model.
- the vertices of the triangular patches corresponding to the triangular patch information include key points of the face, and a change in the position of a key point reflects a change in expression.
- the expression features and intensity coefficients obtained based on the triangular patch information can likewise be used for precise control of the facial expressions of the controlled model.
- obtaining the intensity coefficient of the expression feature may include: obtaining an intensity coefficient representing each sub-part in the first type of part based on the image.
- each sub-part corresponds to at least one expression base, and some sub-parts can correspond to multiple expression bases; one expression base corresponds to one type of facial expression of the corresponding facial feature.
- the intensity coefficient characterizes the magnitude of the expression action.
- the step S130 may include: determining the motion information of the head based on the expression feature and the intensity coefficient; the step S140 may include: controlling the expression change of the corresponding head of the controlled model based on the motion information of the head, as sketched below.
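- a minimal sketch of this expression control, assuming a blend-shape style controlled model in which each expression base is a per-vertex offset from a neutral face mesh; the base names below are hypothetical.

```python
import numpy as np

def apply_expression(neutral: np.ndarray,
                     bases: dict[str, np.ndarray],
                     intensity: dict[str, float]) -> np.ndarray:
    """Weight each expression base by its intensity coefficient: the
    larger the coefficient, the larger the amplitude of that action."""
    mesh = neutral.copy()
    for name, offsets in bases.items():
        coeff = float(np.clip(intensity.get(name, 0.0), 0.0, 1.0))
        mesh += coeff * offsets
    return mesh

# e.g. apply_expression(neutral, bases, {"mouth_open": 0.8, "brow_raise": 0.2})
```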
- the step S122 may include: acquiring the position information of the second-type local key points of the target based on the image.
- the position information may be represented by the position information of the key points of the target, and the key points may include: key points of the support and key points of the outer contour. If a person is taken as an example, the key points of the support may include the key points of the skeleton of the human body, and the key points of the outline may be the key points of the outer contour of the body surface of the human body. This application does not limit the number of key points, but the key points must at least represent a part of the skeleton.
- the position information may be represented by coordinates, for example, 2D coordinates and/or 3D coordinates of a predetermined coordinate system.
- the predetermined coordinate system includes but is not limited to the image coordinate system where the image is located.
- the location information can be the coordinates of key points, which is obviously different from the aforementioned mesh information. Since the second type of part is different from the first type of part, the use of position information can more accurately characterize the movement of the second type of part.
- the step S130 may include: determining the second type of local motion information based on the position information.
- the second type of part includes but is not limited to: the trunk and/or the limbs, for example, the trunk and/or the upper limbs, or the trunk and/or the lower limbs.
- the step S122 may specifically include: obtaining the first coordinates of the support key points of the second-type part of the target based on the image; and obtaining the second coordinates based on the first coordinates.
- both the first coordinates and the second coordinates characterize support key points. If the target is a human or an animal, the support key points here are skeleton key points.
- the first coordinate and the second coordinate may be different types of coordinates.
- the first coordinate is a 2D coordinate in a 2D coordinate system
- the second coordinate is a 3D coordinate in a 3D coordinate system.
- the first coordinate and the second coordinate may also be the same type of coordinate.
- the second coordinate is the coordinate after the first coordinate is corrected.
- the first coordinate and the second coordinate are the same type of coordinate.
- the first coordinate and the second coordinate are both 3D coordinates or both are 2D coordinates.
- acquiring the first coordinates of the support key points of the second-type part of the target includes: acquiring the first 2D coordinates of the support key points of the second-type part based on a 2D image; obtaining the second coordinates based on the first coordinates then includes: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, the first 3D coordinates corresponding to the first 2D coordinates.
- acquiring the first coordinates of the support key points of the second-type part of the target includes: acquiring the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; obtaining the second coordinates based on the first coordinates then includes: obtaining third 3D coordinates based on the second 3D coordinates.
- the 3D image directly acquired in step S110 includes: a 2D image and a depth image corresponding to the 2D image.
- the 2D image can provide the coordinate values of the support key points in the xoy plane, and the depth values in the depth image can provide the coordinates of the support key points on the z-axis.
- the z axis is perpendicular to the xoy plane.
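- a minimal sketch of this 2D-plus-depth case, assuming a pinhole camera with intrinsics (fx, fy, cx, cy); the intrinsics are illustrative, not specified by the disclosure.

```python
import numpy as np

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Lift a support key point (u, v) in the xoy image plane, with its
    depth value z, to a 3D coordinate; the z axis is perpendicular to
    the xoy plane."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```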
- obtaining the third 3D coordinates based on the second 3D coordinates includes: based on the second 3D coordinates, correcting the 3D coordinates of the support key points corresponding to the occluded part of the second-type part in the 3D image, so as to obtain the third 3D coordinates.
- in this way, the second 3D coordinates are first extracted from the 3D image, and then the occlusion between different parts of the target is taken into account; through correction, the correct third 3D coordinates of the different parts of the target in 3D space can be obtained, thereby ensuring the control accuracy of the subsequent controlled model.
- the step S130 may include: determining the quaternion of the second type of part based on the position information.
- the motion information is not limited to being represented by a quaternion; it can also be represented by coordinate values in different coordinate systems, for example, coordinate values in the Euler coordinate system or the Lagrangian coordinate system.
- the quaternion can be used to accurately describe the spatial position and/or rotation of the second-type part.
- in this embodiment, the quaternion is used as the motion information, but the specific implementation is not limited to the quaternion; coordinate values relative to a reference point in various coordinate systems, for example, Euler coordinates or Lagrangian coordinates, can replace the quaternion.
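- as a sketch of the quaternion representation (assuming SciPy is available; the axis and angle values below are illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# A 30-degree rotation about a unit axis, expressed as a quaternion.
axis = np.array([0.0, 0.0, 1.0])
angle = np.deg2rad(30.0)
q = Rotation.from_rotvec(angle * axis).as_quat()  # [x, y, z, w]
print(q)  # approximately [0, 0, 0.2588, 0.9659]
```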
- the step S120 may include: acquiring the first position information of the support key points of the first part in the second-type part; and acquiring the second position information of the support key points of the second part in the second-type part.
- the second type of parts may include at least two different parts.
- the controlled model can simultaneously simulate at least two local motions of the target.
- the step S130 may include: determining the motion information of the first part according to the first position information; and determining the motion information of the second part according to the second position information.
- the step S140 may include: controlling the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
- the first part is the trunk; the second part is the upper limbs, lower limbs or limbs.
- the method further includes: determining the second-type motion information of the connecting part according to the features of at least two parts and the first motion constraint condition of the connecting part, wherein the connecting part is used to connect two other parts; and controlling the motion of the connecting part of the controlled model according to the second-type motion information.
- some local motion information can be separately obtained through a motion information acquisition model, and the motion information obtained in this way may be referred to as the first type of motion information.
- Some parts are connecting parts connecting other two or more parts, and the motion information of these connecting parts is called the second type of motion information for convenience in this embodiment.
- the second type of motion information here is also one of the information that characterizes the local motion status of the target.
- the second type of motion information may be determined based on the first type of motion information of the two parts connected by the connecting portion.
- the difference between the second-type motion information and the first-type motion information is: the second-type motion information is the motion information of the connecting part, while the first-type motion information is the motion information of the parts other than the connecting part; the first-type motion information is generated solely from the motion state of the corresponding part, whereas the second-type motion information may also be related to the motion information of the other parts connected to the corresponding connecting part.
- the step S140 may include: determining a control method for controlling the connecting part according to the type of the connecting part; and controlling the movement of the connecting part of the controlled model according to the control method and the second-type motion information.
- the connecting part can be used to connect the other two parts.
- the neck, wrist, ankle, or waist are all connecting parts that connect the two parts.
- the motion information of these connecting parts may be inconvenient to detect directly, or may depend to a certain extent on adjacent parts; therefore, in this embodiment, the motion information of the connecting part can be determined based on the first-type motion information of the two or more other parts connected to it, thereby obtaining the second-type motion information corresponding to the connecting part.
- the corresponding control method will be determined according to the type of the connecting part, so as to achieve precise control of the corresponding connecting part in the controlled model.
- take the lateral rotation of the wrist as an example: with the line extending from the upper arm to the hand as the rotation axis, the lateral rotation of the wrist is actually caused by the rotation of the upper arm.
- similarly, for the lateral rotation of the ankle, taking the extension direction of the calf as the rotation axis, the rotation of the ankle is directly driven by the calf; it is also possible that the thigh drives the calf, and the calf further drives the ankle.
- determining the control method for controlling the connecting part according to the type of the connecting part includes: if the connecting part is a first-type connecting part, determining to adopt the first-type control method, wherein the first-type control method is used to directly control the movement of the connecting part corresponding to the first-type connecting part in the controlled model.
- the rotation of the first type of connecting portion is not driven by other parts.
- the connecting portion further includes a second type of connecting portion other than the first type of connecting portion.
- the movement of the second-type connecting part here may not be generated by the part itself, but driven by other parts.
- determining a control method for controlling the connecting part according to the type of the connecting part includes: if the connecting part is a second-type connecting part, determining to adopt the second-type control method, wherein the second-type control method is used to indirectly control the movement of the second-type connecting part by controlling the parts of the controlled model other than the second-type connecting part.
- the parts other than the second-type connecting portion include but are not limited to: the part directly connected to the second-type connecting portion, or the part indirectly connected to the second-type connecting portion.
- for example, when the wrist rotates laterally, the entire upper limb may be moving, with the shoulder and elbow rotating.
- the rotation of the wrist can be indirectly driven by controlling the lateral rotation of the shoulder and/or elbow.
- controlling the movement of the connecting part of the controlled model according to the control method and the second-type motion information includes: if the second-type control method is adopted, decomposing the second-type motion information to obtain the first-type rotation information of the connecting part being pulled to rotate by the traction part; adjusting the motion information of the traction part according to the first-type rotation information; and using the adjusted motion information of the traction part to control the movement of the traction part in the controlled model, thereby indirectly controlling the movement of the connecting part.
- the first-type rotation information is not the rotation information generated by the movement of the second-type connecting part itself; it is the motion information, relative to a specific reference point of the target (for example, the center of the human body), generated when the second-type connecting part is pulled by the movement of the other parts connected to it (that is, the traction part).
- the traction part is a part directly connected with the second type connecting part. Taking the wrist as the second type of connecting part as an example, the traction part is the elbow or even the shoulder above the wrist. If an ankle is taken as the second type of connecting part as an example, the traction part is the knee or even the root of the thigh above the ankle.
- the lateral rotation of the wrist along the straight line from the shoulder to the elbow to the wrist may be caused by the rotation of the shoulder or the elbow.
- the lateral rotation information should therefore be assigned to the elbow or shoulder. Through this transfer assignment, the motion information of the elbow or shoulder is adjusted, and the adjusted motion information is used to control the movement of the elbow or shoulder in the controlled model. In this way, the lateral rotation corresponding to the elbow or shoulder, as seen in the image, is reflected by the wrist of the controlled model, so that the controlled model accurately simulates the movement of the target.
- the method further includes: decomposing the second-type motion information to obtain the second-type rotation information of the second-type connecting part rotating relative to the traction part; and using the second-type rotation information to control the rotation of the connecting part of the controlled model relative to the traction part.
- the first-type rotation information is obtained by a model that extracts rotation information directly from the features of the image.
- the second-type rotation information is the rotation information obtained by adjusting the first-type rotation information.
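- one common way to realize such a decomposition (not mandated by the disclosure) is a swing-twist split: the twist about the limb's own axis is the lateral rotation that can be re-assigned to the traction part, and the remaining swing is the rotation of the connecting part itself. A sketch, with quaternions in [x, y, z, w] order:

```python
import numpy as np

def quat_conj(q):
    return np.array([-q[0], -q[1], -q[2], q[3]])

def quat_mul(a, b):
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return np.array([
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
        aw*bw - ax*bx - ay*by - az*bz])

def swing_twist(q: np.ndarray, axis: np.ndarray):
    """Split q into twist about `axis` (the lateral rotation that can be
    transferred to the traction part) and the residual swing, so that
    q = swing * twist."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[:3], axis) * axis      # vector part projected on the axis
    twist = np.array([*proj, q[3]])
    twist = twist / np.linalg.norm(twist)  # normalize to a unit quaternion
    swing = quat_mul(q, quat_conj(twist))
    return swing, twist
```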
- the motion information of the second-type connecting part relative to a predetermined posture can be obtained from the features of the second-type connecting part, for example, its 2D coordinates or 3D coordinates; this motion information is called the second-type motion information.
- the second type of motion information includes but is not limited to rotation information.
- the second type of connecting portion includes: a wrist; an ankle.
- if the second-type connecting part is a wrist, the traction part corresponding to the wrist includes: the forearm and/or the upper arm; and/or, if the second-type connecting part is an ankle, the traction part corresponding to the ankle includes: the calf and/or the thigh.
- the first type of connecting portion includes a neck connecting the head and the torso.
- determining the motion information of the connecting part according to the features of the at least two parts and the first motion constraint condition of the connecting part includes: determining the orientation information of the at least two parts according to the features of the at least two parts; determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts; and determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition.
- determining the candidate orientation information of the connecting part according to the orientation information of the at least two parts includes: determining a first candidate orientation and a second candidate orientation of the connecting part according to the orientation information of the at least two parts.
- two included angles may be formed between the orientation information of the two parts, and these two included angles correspond to rotation information of different orientations of the connecting part; the orientations corresponding to the two included angles are therefore both candidate orientations. Only one of the candidate orientations satisfies the first motion constraint condition for the movement of the connecting part, so the second-type motion information needs to be determined from the target orientation that satisfies the first motion constraint condition.
- the included angle of rotation that satisfies the first motion constraint condition is used as the second-type motion information.
- for example, the first motion constraint condition of the neck connecting the head and the torso is: the rotation angle is between -90 and 90 degrees, and angles exceeding 90 degrees are excluded according to the first motion constraint condition. In this way, abnormal situations in which the rotation angle exceeds 90 degrees clockwise or counterclockwise (for example, 120 degrees or 180 degrees) while the controlled model simulates the target movement can be reduced. If the first motion constraint condition is between -90 and 90 degrees, the condition corresponds to two limit angles: one is -90 degrees and the other is 90 degrees.
- if the detected rotation angle falls outside this range, it is modified to the limit angle defined by the first motion constraint condition. For example, if a rotation angle exceeding 90 degrees is detected, the detected rotation angle is modified to the limit angle closer to the detected rotation angle, namely 90 degrees, as in the sketch below.
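- a minimal sketch of applying this first motion constraint condition:

```python
def constrain_angle(angle_deg: float, lo: float = -90.0, hi: float = 90.0) -> float:
    """Clamp a detected rotation angle to the nearer limit angle,
    e.g. 120 -> 90 and -100 -> -90."""
    return max(lo, min(hi, angle_deg))

assert constrain_angle(120.0) == 90.0
assert constrain_angle(-100.0) == -90.0
```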
- determining the motion information of the connecting part according to the candidate orientation information and the first motion constraint condition includes: selecting, from the first candidate orientation information and the second candidate orientation information, the target orientation information within the bounds of the orientation change; and determining the motion information of the connecting part according to the target orientation information.
- for example, when the face faces right, the corresponding neck orientation may be 90 degrees to the right or 270 degrees to the left.
- however, the neck of the human body cannot turn 270 degrees to the left so that it faces right.
- thus 90 degrees to the right and 270 degrees to the left are both candidate orientation information for the neck, and the actual orientation of the neck needs to be further determined according to the aforementioned first motion constraint condition.
- here, 90 degrees to the right is the target orientation information of the neck, and according to this orientation, the second-type motion information of the neck relative to the camera coordinate system is obtained as a rotation of 90 degrees to the right.
- the target orientation information here is the information that satisfies the first motion constraint condition.
- determining the orientation information of the at least two parts according to the features of the at least two parts includes: acquiring a first key point and a second key point of each of the at least two parts; acquiring a first reference point of each of the at least two parts, wherein the first reference point is a predetermined key point in the target; generating a first vector based on the first key point and the first reference point, and generating a second vector based on the second key point and the first reference point; and determining the orientation information of each of the at least two parts based on the first vector and the second vector.
- the first reference point of the first part may be the waist key point of the target or the midpoint of the key points of the two hips. If the second of the two parts is a human face, the first reference point of the second part may be the connection point of the person's neck and shoulders.
- determining the orientation information of each of the at least two parts includes: taking the cross product of the first vector and the second vector of a part to obtain the normal vector of the plane where the part is located; and using the normal vector as the orientation information of the part.
- once the normal vector is determined, the orientation of the plane where the part is located is also determined.
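- a minimal sketch of this orientation computation (the key-point and reference-point values are assumed inputs):

```python
import numpy as np

def part_orientation(first_kp: np.ndarray, second_kp: np.ndarray,
                     first_ref: np.ndarray) -> np.ndarray:
    """Cross product of the first and second vectors gives the unit
    normal vector of the plane where the part lies, used as its
    orientation information."""
    v1 = first_kp - first_ref   # first vector
    v2 = second_kp - first_ref  # second vector
    normal = np.cross(v1, v2)
    return normal / np.linalg.norm(normal)
```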
- determining the motion information of the connecting part based on the motion information of the at least two parts includes: acquiring the fourth 3D coordinates of the connecting part relative to a second reference point; and obtaining the absolute rotation information of the connecting part based on the fourth 3D coordinates; controlling the motion of the corresponding part of the controlled model according to the motion information includes: controlling the motion of the corresponding connecting part of the controlled model based on the absolute rotation information.
- the second reference point may be one of the support key points of the target; taking a person as the target, the second reference point may be a key point of one of the parts connected by the first-type connecting part.
- the second reference point may be the key point of the shoulder connected to the neck.
- the second reference point may be the same as the first reference point.
- both the first reference point and the second reference point may be the root node of the human body, and the root node of the human body may be the human crotch.
- the root node includes but is not limited to the key point 0 shown in FIG. 7B.
- Fig. 7B is a schematic diagram of the skeleton of the human body, and Fig. 7B includes 17 skeleton joint points numbered 0-16.
- controlling the movement of the corresponding connecting part of the controlled model further includes: decomposing the absolute rotation information according to the traction hierarchical relationship among the plurality of connecting parts in the target to obtain relative rotation information; and controlling the movement of the corresponding connecting part in the controlled model based on the relative rotation information.
- the following is one hierarchical relationship: first level: pelvis; second level: waist; third level: thighs (for example, left thigh and right thigh); fourth level: calves (for example, left calf and right calf); fifth level: feet.
- the following is another hierarchical relationship: first level: chest; second level: neck; third level: head.
- the hierarchical relationship decreases successively; the local movement of the higher level will affect the local movement of the lower level. Therefore, the level of the traction part is higher than the level of the connection part.
- the relative rotation information (that is, the motion information of each key point relative to its parent key point) can be represented by the following calculation formula (1): first determine the rotation quaternions {Q_0, Q_1, ..., Q_18} of each key point relative to the camera coordinate system, and then calculate the rotation quaternion q_i of each key point relative to its parent key point:
- q_i = Q_parent(i)^(-1) * Q_i    (1)
- the parent key point parent(i) is the key point one level above the current key point i; Q_i is the rotation quaternion of the current key point i relative to the camera coordinate system; Q_parent(i)^(-1) is the reverse rotation parameter of the parent key point. For example, if the rotation angle of Q_parent(i) is 90 degrees, then the rotation angle of Q_parent(i)^(-1) is -90 degrees.
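- a sketch of formula (1) using SciPy rotations; the parent array encoding the traction hierarchy (with -1 marking the root) is an assumed input:

```python
from scipy.spatial.transform import Rotation

def relative_rotations(Q: list, parent: list) -> list:
    """Formula (1): q_i = Q_parent(i)^(-1) * Q_i, where Q_i is the
    rotation of key point i relative to the camera coordinate system
    and parent(i) is the key point one level up the hierarchy."""
    q = []
    for i, Qi in enumerate(Q):
        p = parent[i]
        q.append(Qi if p < 0 else Q[p].inv() * Qi)
    return q
```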
- controlling the movement of the corresponding connecting part of the controlled model further includes: correcting the relative rotation information according to a second constraint condition; controlling the movement of the corresponding connecting part in the controlled model based on the relative rotation information then includes: controlling the movement of the corresponding connecting part in the controlled model based on the corrected relative rotation information.
- the second constraint condition includes: a rotatable angle of the connecting portion.
- the method further includes: performing posture defect correction on the second-type motion information to obtain corrected second-type motion information; the controlling of the movement of the connecting part of the controlled model according to the second-type motion information includes: using the corrected second-type motion information to control the movement of the connecting part of the controlled model.
- similarly, the method further includes: performing posture defect correction on the first-type motion information to obtain corrected first-type motion information; the step S140 may include: using the corrected first-type motion information to control the motion of the corresponding part of the controlled model.
- the posture defects corrected include at least one of the following: a synchronization defect of the upper and lower limbs; a bow-legged movement defect; a splay-footed (out-toed) movement defect; and a pigeon-toed (in-toed) movement defect.
- the method further includes: obtaining a posture defect correction parameter according to the difference information between the form of the target and a standard form, wherein the posture defect correction parameter is used for the correction of the first-type motion information and/or the second-type motion information.
- the shape of the target is detected first, and then the detected shape is compared with the standard shape to obtain difference information; posture defect correction is performed through the difference information.
- for example, a prompt to maintain a predetermined posture is output on the display interface. After the user sees the prompt and maintains the predetermined posture, the image device collects an image of the user maintaining that posture; then, through image detection, it is determined whether the user's predetermined posture is standard enough, so as to obtain the difference information.
- the predetermined posture may include, but is not limited to, the upright posture of the human body.
- for example, the normal standard standing posture is one in which the toes and heels of the two feet are parallel to each other; if the target's form deviates from the standard, the first-type motion information and/or second-type motion information corresponding to the features of the target are corrected for this non-standard form, i.e., the posture defect correction.
- the method further includes: correcting the proportions of different parts of the standard model according to the proportion relations of different parts of the target to obtain the corrected controlled model.
- the proportional relationships between the various parts of different targets may differ. For example, taking people as an example, the ratio of leg length to head length of a professional model is larger than that of an ordinary person; some people have fuller buttocks, and the distance between their hips may be larger than that of ordinary people.
- the standard model may be a mean model obtained based on a large amount of human body data.
- in this embodiment, the proportions of the different parts of the standard model are corrected according to the proportional relationships of the different parts of the target to obtain the corrected controlled model.
- the corrected part includes but not limited to the crotch and/or the leg.
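- a minimal sketch of this proportion correction, with hypothetical part names and ratios:

```python
def correct_proportions(standard_lengths: dict, target_ratios: dict) -> dict:
    """Scale each part of the standard (mean) model by the ratio of the
    target's proportion to the standard proportion, e.g. stretching the
    legs of the model for a longer-legged user."""
    return {part: length * target_ratios.get(part, 1.0)
            for part, length in standard_lengths.items()}

# e.g. correct_proportions({"leg": 0.90, "head": 0.25}, {"leg": 1.10})
```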
- the small image in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- from FIG. 3A to FIG. 3B, and then from FIG. 3B to FIG. 3C, the user's hand is moving, and the hand of the controlled model is also moving.
- the user's hand movement changes from making a fist, extending the palm, and extending the index finger sequentially in FIGS. 3A to 3C, while the controlled model imitates the user's gestures changing from making a fist, extending the palm, and extending the index finger.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- from FIG. 4A to FIG. 4B, and then from FIG. 4B to FIG. 4C, the torso of the user is in motion, and the torso of the controlled model is also in motion. In FIGS. 4A to 4C, the user moves his crotch toward the right side of the image, then toward the left side of the image, and finally stands upright.
- the controlled model also simulates the user's torso movement.
- the small picture in the upper left corner of the image is the collected image, and the lower right corner is the controlled model of the human body.
- the user steps toward the right side of the image, steps toward the left side of the image, and finally stands up straight; the controlled model also simulates the user's foot movement.
- the controlled model also simulates changes in the user's expression.
- this embodiment provides an image processing device, which includes the following modules:
- the first acquisition module 110 is used to acquire images.
- the second acquisition module 120 is configured to acquire local features of the target based on the image.
- the first determining module 130 is configured to determine the local motion information based on the characteristic.
- the control module 140 is configured to control the local movement corresponding to the controlled model according to the movement information.
- the second acquisition module 120 is specifically configured to: acquire the first-type feature of the first-type part of the target based on the image; and/or acquire the second-type feature of the second-type part of the target based on the image.
- the second acquisition module 120 is specifically configured to acquire the expression feature of the head and the intensity coefficient of the expression feature based on the image.
- obtaining the intensity coefficient of the expression feature based on the image includes: obtaining, based on the image, an intensity coefficient representing each sub-part in the first type of part.
- the first determining module 130 is specifically configured to determine the motion information of the head based on the expression feature and the intensity coefficient; the control module 140 is specifically configured to control the expression change of the head of the controlled model according to the motion information of the head.
- the second acquisition module 120 is configured to obtain the mesh information of the first-type part based on the image.
- the second obtaining module 120 is specifically configured to obtain an intensity coefficient representing each sub-part in the first type of part based on the image.
- the second acquisition module 120 is specifically configured to acquire the position information of the key points of the second-type part of the target based on the image; the first determining module 130 is specifically configured to determine the motion information of the second-type part based on the position information.
- the second acquisition module 120 is specifically configured to: acquire the first coordinates of the support key points of the second-type part of the target based on the image; and acquire the second coordinates based on the first coordinates.
- the second acquisition module 120 is specifically configured to: acquire the first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtain the first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates.
- the second acquisition module 120 is specifically configured to: acquire the second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and acquire the third 3D coordinates based on the second 3D coordinates.
- the second acquisition module 120 is specifically configured to correct, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to the occluded part of the second-type part in the 3D image, so as to obtain the third 3D coordinates.
- the first determining module 130 is specifically configured to determine the quaternion of the second type of part based on the location information.
- the second acquisition module 120 is specifically configured to: acquire the first position information of the support key points of the first part in the second-type part; and acquire the second position information of the support key points of the second part in the second-type part.
- the first determining module 130 is specifically configured to: determine the motion information of the first part according to the first position information; and determine the motion information of the second part according to the second position information.
- the control module 140 is specifically configured to: control the motion of the part of the controlled model corresponding to the first part according to the motion information of the first part; and control the motion of the part of the controlled model corresponding to the second part according to the motion information of the second part.
- the first part is the torso; the second part is an upper limb, a lower limb, or the four limbs.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired, the image includes a target, and the target includes but is not limited to a human body.
- the key points of the face of the human body are detected, where the key points of the face may be the key points of the contour of the face surface.
- the torso key points and/or limb key points of the human body are detected, where the torso key points and/or limb key points can be 3D key points, which are represented by 3D coordinates.
- The 3D coordinates may be obtained by first detecting 2D coordinates from a 2D image and then converting them into 3D coordinates through a 2D-coordinate-to-3D-coordinate conversion algorithm.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
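- As one hedged illustration of such a 2D-to-3D conversion, a pinhole-camera back-projection can lift a detected 2D key point to 3D when a per-point depth estimate and the camera intrinsics are available; the intrinsics and depth value below are placeholder assumptions, not values given in this disclosure.

```python
import numpy as np

def lift_2d_to_3d(uv, depth, fx, fy, cx, cy):
    """Back-project a 2D key point (u, v) with an estimated depth into
    3D camera coordinates using a pinhole camera model."""
    u, v = uv
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# One key point at pixel (350, 260), assuming 2.0 m depth, a 640x480
# image centre, and a 500-pixel focal length (all placeholder values).
point_3d = lift_2d_to_3d((350.0, 260.0), 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```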
- the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
- The hand key points among the upper limb key points include, but are not limited to, the key points of the wrist joint, the key points of the knuckles, the key points of the finger joints, and the key points of the fingertips; the positions of these key points can reflect the movements of the hands and fingers.
- the mesh information of the human face is generated.
- An expression base corresponding to the current expression of the target is selected according to the mesh information, and the expression of the controlled model is controlled according to the expression base; the strength of the expression of the controlled model corresponding to each expression base is controlled according to the intensity coefficient reflected by the mesh information.
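- How an intensity coefficient might scale an expression base can be sketched with the standard linear blend-shape formula; the toy mesh arrays and coefficient below are illustrative assumptions rather than this disclosure's exact computation.

```python
import numpy as np

def blend_expression(neutral, expression_bases, intensity_coeffs):
    """Blend expression bases onto a neutral face mesh.
    neutral: (N, 3) vertex array; expression_bases: list of (N, 3) arrays;
    intensity_coeffs: one coefficient in [0, 1] per expression base."""
    result = neutral.copy()
    for base, coeff in zip(expression_bases, intensity_coeffs):
        result += coeff * (base - neutral)   # offset scaled by its intensity
    return result

neutral = np.zeros((3, 3))                    # toy 3-vertex "mesh"
smile = np.array([[0.0, 0.1, 0.0]] * 3)       # toy expression base
blended = blend_expression(neutral, [smile], [0.5])  # half-strength expression
```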
- The torso key points and/or the limb key points are converted into quaternions.
- the torso movement of the controlled model is controlled according to the quaternion corresponding to the torso key point; and/or the limb movement of the controlled model is controlled according to the quaternion corresponding to the limb key point.
- the key points of the face may include: 106 key points.
- the torso key points and/or limb key points may include: 14 key points or 17 key points, which may be specifically shown in FIG. 7A and FIG. 7B.
- FIG. 7A shows a schematic diagram containing 14 key points of the skeleton
- FIG. 7B shows a schematic diagram containing 17 key points of the skeleton.
- FIG. 7B may be a schematic diagram of 17 key points generated based on the 14 key points shown in FIG. 7A.
- The 17 key points in Fig. 7B correspond to the 14 key points shown in Fig. 7A with key point 0, key point 7, and key point 9 added.
- the 2D coordinates of key point 9 can be preliminarily determined based on the 2D coordinates of key point 8 and key point 10; the 2D coordinates of key point 7 can be determined according to the 2D coordinates of key point 8 and the 2D coordinates of key point 0.
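- A minimal sketch of how key points 7 and 9 could be derived from their neighbours, assuming simple midpoint interpolation (the exact interpolation is not specified in this excerpt); the sample coordinates are placeholders.

```python
import numpy as np

def midpoint(a, b):
    """Midpoint of two 2D key points."""
    return (np.asarray(a) + np.asarray(b)) / 2.0

# Placeholder 2D coordinates for key points 0, 8, and 10 of Fig. 7A/7B.
keypoints_2d = {0: (0.0, 0.0), 8: (0.1, 1.2), 10: (0.1, 1.6)}
keypoints_2d[9] = midpoint(keypoints_2d[8], keypoints_2d[10])  # between 8 and 10
keypoints_2d[7] = midpoint(keypoints_2d[8], keypoints_2d[0])   # between 8 and 0
```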
- the key point 0 may be the reference point provided by the embodiments of the disclosure, and the reference point may be used as the aforementioned first reference point and/or the second reference point.
- the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
- For example, in the online teaching scenario the clothes of the teacher model may be more formal, such as a suit, whereas in a sports-related scenario the controlled model may wear sportswear.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired, the image includes a target, and the target includes but is not limited to a human body.
- the torso key points and limb key points of the human body are detected.
- the torso key points and/or the limb key points here can be 3D key points, which are represented by 3D coordinates.
- The 3D coordinates may be obtained by first detecting 2D coordinates from a 2D image and then converting them into 3D coordinates through a 2D-coordinate-to-3D-coordinate conversion algorithm.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image collected by a 3D camera.
- the key points of the limbs here may include: key points of the upper limbs and/or key points of the lower limbs.
- the hand key points of the upper limb key points include but are not limited to the key points of the wrist joints, the key points of the finger joints, the key points of the knuckles, and the key points of the fingertips.
- the location of these key points can reflect the movement of the hands and fingers.
- the key points of the torso are converted into a quaternion that characterizes the movement of the torso.
- the quaternion can be called a trunk quaternion.
- the key points of the limbs are converted into quaternions representing the movement of the limbs, and the quaternion data can be called limb quaternions.
- The torso quaternion is used to control the torso movement of the controlled model, and the limb quaternions are used to control the limb movement of the controlled model.
- the torso key points and the limb key points may include: 14 key points or 17 key points, which may be specifically shown in FIG. 7A or FIG. 7B.
- the controlled model in this example can be a game character in a game scene; a teacher model in an online education video in an online teaching scene; a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to the application scenario. If the application scenario is different, the model and/or appearance of the controlled model is different.
- For example, in the online teaching scenario the clothes of the teacher model may be more formal, such as a suit, whereas in a sports-related scenario the controlled model may wear sportswear.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired; the image contains a target, and the target can be a human body.
- a 3D posture of the target in a three-dimensional space is obtained, and the 3D posture can be represented by the 3D coordinates of the key points of the skeleton of the human body.
- the absolute rotation parameters of the joints of the human body in the camera coordinate system can be determined by the coordinates in the camera coordinate system.
- Based on these coordinates, the coordinate-axis directions of each joint are obtained.
- The relative rotation parameters of the joints are determined; determining the relative parameters may specifically include determining the position of the joint key point relative to the root node of the human body. The relative rotation parameter can be represented by a quaternion.
- the hierarchical relationship here can be the traction relationship between joints. For example, the movement of the elbow joint will pull the movement of the wrist joint to a certain extent, and the movement of the shoulder joint will also pull the movement of the elbow joint.
- the hierarchical relationship may also be predetermined according to the joints of the human body.
- For example, one hierarchy may be: first level: the pelvis; second level: the waist; third level: the thighs (for example, the left thigh and the right thigh); fourth level: the calves (for example, the left calf and the right calf); fifth level: the feet.
- Another hierarchy may be: first level: the chest; second level: the neck; third level: the head.
- Yet another hierarchy may be: first level: the clavicle, corresponding to the shoulder; second level: the upper arm; third level: the forearm (also called the lower arm); fourth level: the hand.
- The hierarchy decreases level by level, and the local movement of a higher level affects the local movement of a lower level; therefore, the level of a traction part is higher than the level of the part it connects to. Such a hierarchy can also be captured in a simple parent map, as sketched below.
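- As a non-authoritative illustration, the traction hierarchy can be represented as a parent map; the sketch below encodes only the pelvis-to-foot chain named above, and the part names are assumptions for illustration.

```python
# A minimal sketch of the traction hierarchy, assuming the
# pelvis -> waist -> thigh -> calf -> foot chain described above.
PARENT = {
    "waist": "pelvis",
    "left_thigh": "waist", "right_thigh": "waist",
    "left_calf": "left_thigh", "right_calf": "right_thigh",
    "left_foot": "left_calf", "right_foot": "right_calf",
}

def chain_to_root(part):
    """Return the traction chain from a part up to the root (pelvis)."""
    chain = [part]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

# ['left_foot', 'left_calf', 'left_thigh', 'waist', 'pelvis']
print(chain_to_root("left_foot"))
```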
- When determining the second-type motion information: first, the motion information of the key points of each level is obtained; then, based on the hierarchical relationship, the motion information of the lower-level key points relative to the higher-level key points (that is, the relative rotation information) is determined.
- The relative rotation information can be obtained by calculation: first determine the rotation quaternions {Q0, Q1, ..., Q18} of each key point relative to the camera coordinate system, and then calculate the rotation quaternion q_i of each key point relative to its parent key point according to formula (1); a plausible form of formula (1) is sketched below.
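- Formula (1) itself is not reproduced in this excerpt; assuming the standard composition of rotations, with parent(i) denoting the parent key point of key point i, a plausible form consistent with the surrounding text is:

```latex
% Rotation of key point i relative to its parent key point,
% given camera-frame rotation quaternions Q_i (assumed convention):
q_i = Q_{\mathrm{parent}(i)}^{-1} \otimes Q_i
```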
- the aforementioned use of quaternions to control the motion of each joint of the controlled model may include: using q i to control the motion of each joint of the controlled model.
- In some implementations, the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle within a constraint condition to obtain a second Euler angle, where the constraint condition may limit the range of the first Euler angle; and obtaining a quaternion corresponding to the second Euler angle, which is then used to control the rotation of the controlled model.
- When obtaining the quaternion corresponding to the second Euler angle, the second Euler angle can be directly converted into a quaternion, as sketched below.
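- A hedged sketch of this constrain-in-Euler-space workflow, using SciPy's rotation utilities; the xyz angle order and the ±30-degree limits are illustrative assumptions, not values given in this disclosure.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def constrain_rotation(quat_xyzw, limits_deg=(30.0, 30.0, 30.0)):
    """Convert a quaternion (x, y, z, w) to xyz Euler angles, clamp each
    angle to its assumed limit, and convert back to a quaternion."""
    euler = Rotation.from_quat(quat_xyzw).as_euler("xyz", degrees=True)
    clamped = np.clip(euler, -np.asarray(limits_deg), np.asarray(limits_deg))
    return Rotation.from_euler("xyz", clamped, degrees=True).as_quat()

# First Euler angles (50, 10, -80) exceed the assumed +/-30 degree limits.
q = Rotation.from_euler("xyz", [50.0, 10.0, -80.0], degrees=True).as_quat()
print(constrain_rotation(q))  # quaternion of the constrained second Euler angles
```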
- Fig. 7B is a skeleton diagram of 17 key points.
- Figure 8 is a skeleton diagram of 19 key points.
- The bones shown in Figure 8 can correspond to 19 key points, namely the following bones: pelvis, waist, left thigh, left calf, left foot, right thigh, right calf, right foot, chest, neck, head, left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, and left hand.
- (x_i, y_i, z_i) can be the coordinates of the i-th key point, where i ranges from 0 to 16.
- p_i represents the three-dimensional coordinates of node i in its local coordinate system; these are generally fixed values that come with the original model and do not need to be modified or migrated.
- q_i is a quaternion that represents the rotation of the bone controlled by node i in the coordinate system of its parent node; it can also be regarded as the rotation between the local coordinate system of the current node and the local coordinate system of the parent node.
- The process of calculating the quaternion of the key point corresponding to each joint can be as follows. First, determine the coordinate-axis directions of the local coordinate system of each node: for each bone, the direction from the child node to the parent node is the x-axis; the rotation axis about which the bone can rotate through the maximum angle is the z-axis; and if the rotation axis cannot be determined, the direction the human body faces is taken as the y-axis. Details are shown in Figure 9.
- This example uses the left-handed coordinate system for illustration, and the right-handed coordinate system can also be used in specific implementation.
- (i-j) represents the vector pointing from key point i to key point j, and × represents the cross product.
- (1-7) represents the vector from the first key point to the seventh key point.
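- As a non-authoritative sketch, such a local frame can be assembled from key-point positions with normalized cross products (shown here in a right-handed convention; the placeholder vectors are assumptions):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def local_frame(child, parent, body_facing):
    """Local coordinate frame of a bone: the x-axis points from the child
    node to the parent node; the body-facing direction stands in for the
    y-axis when no rotation axis can be determined; cross products
    complete an orthonormal frame."""
    x = normalize(parent - child)            # child -> parent direction
    z = normalize(np.cross(x, body_facing))  # axis perpendicular to both
    y = np.cross(z, x)                       # re-orthogonalized y-axis
    return np.stack([x, y, z])

frame = local_frame(np.array([0.0, 0.0, 0.0]),   # child joint position
                    np.array([0.0, 1.0, 0.0]),   # parent joint position
                    np.array([0.0, 0.0, 1.0]))   # assumed body-facing direction
```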
- Nodes 8, 15, 11, and 18 are the four nodes of the hands and feet. Since calculating the quaternions of these four nodes requires specific postures to be determined, these four nodes are not included in the table.
- The numbering of the 19-point skeleton nodes can be seen in Fig. 8, and the key-point numbering of the 17-point skeleton can be seen in Fig. 7B.
- The quaternion (q0, q1, q2, q3) can be converted into Euler angles; for example, the Euler angle in the second direction is given by formula (3):
- Y = asin(2*(q1*q3 + q0*q2)), where the value of the argument 2*(q1*q3 + q0*q2) is limited to the range -1 to 1.   (3)
- X is the Euler angle in the first direction, Y is the Euler angle in the second direction, and Z is the Euler angle in the third direction; any two of the first direction, the second direction, and the third direction are perpendicular.
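- A direct implementation of formula (3) can clamp the asin argument to keep it in the valid range; this sketch assumes the quaternion ordering (q0, q1, q2, q3) = (w, x, y, z).

```python
import math

def euler_y(q0, q1, q2, q3):
    """Second-direction Euler angle per formula (3); the asin argument is
    clamped to [-1, 1] to guard against numerical drift."""
    s = 2.0 * (q1 * q3 + q0 * q2)
    s = max(-1.0, min(1.0, s))
    return math.asin(s)

print(euler_y(1.0, 0.0, 0.0, 0.0))  # identity rotation -> 0.0
```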
- In some implementations, the method further includes performing posture-optimization adjustment on the second Euler angle; for example, some of the second Euler angles may be adjusted, based on preset rules, to posture-optimized Euler angles, so as to obtain a third Euler angle.
- Obtaining the quaternion corresponding to the second Euler angle may include: converting the third Euler angle into a quaternion for controlling the controlled model.
- the method further includes: after converting the second Euler angles into a quaternion, performing posture optimization processing on the converted quaternion data. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
- When adjusting the second Euler angle, or the quaternion obtained by converting the second Euler angle, the adjustment may be based on a preset rule or may be optimized by a deep learning model itself; there are many specific implementations, which are not limited in this application.
- In some implementations, pre-processing may also be included.
- the width of the crotch and/or shoulder of the controlled model is modified to correct the overall posture of the human body.
- The standing posture of the human body can be corrected, for example by upright-standing correction and abdomen correction. Some people protrude their abdomen when standing, and the abdomen correction can keep the controlled model from imitating the user's abdominal movement. Some people hunch their back when standing, and the hunchback correction can keep the controlled model from imitating the user's hunched posture.
- This example provides an image processing method.
- the steps of the method are as follows.
- An image is acquired, and the image includes a target, and the target may include at least one of a human body, a human upper limb, and a human lower limb.
- The coordinate system of the target joint is obtained according to the position information of the target joint in the image coordinate system, and the coordinate system of the limb part that pulls the target joint to move is obtained according to the position information of that limb part in the image coordinate system.
- The rotation of the target joint relative to the limb part is determined to obtain rotation parameters; the rotation parameters include the spin parameter of the target joint and the rotation parameter resulting from traction by the limb part.
- A first angle limit is applied to the rotation parameter resulting from traction by the limb part, yielding the final traction rotation parameter, and the rotation parameter of the limb part is corrected according to the final traction rotation parameter. Then, according to the coordinate system of the first limb part and the corrected relative rotation parameter of the limb part, a second angle limit is applied to the relative rotation parameter to obtain the limited relative rotation parameter.
- The limited relative rotation parameter is converted into a quaternion.
- the movement of the target joint of the controlled model is controlled according to the quaternion.
- the coordinate system of the hand in the image coordinate system is obtained, and the coordinate system of the forearm and the coordinate system of the upper arm are obtained.
- the target joint at this time is the wrist joint.
- The rotation of the hand relative to the forearm is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the forearm; specifically, it is assigned to the rotation of the forearm in the corresponding direction, and the first angle limit of the forearm is used to restrict the maximum rotation of the forearm. The rotation of the hand relative to the corrected forearm is then determined to obtain the relative rotation parameter, and a second angle limit is applied to the relative rotation parameter to obtain the final rotation of the hand relative to the forearm.
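- The split into spin and pulled rotation resembles the well-known swing-twist decomposition; the sketch below implements that generic technique under a stated quaternion convention, and is not necessarily the exact computation used in this disclosure.

```python
import numpy as np

def swing_twist(q_wxyz, axis):
    """Split a rotation q = (w, x, y, z) into a twist about `axis` (the
    spin) and the remaining swing (the pulled rotation that could be
    transferred to the parent limb). `axis` must be a unit 3-vector and
    the rotation must not be a pure 180-degree swing."""
    q = np.asarray(q_wxyz, dtype=float)
    q = q / np.linalg.norm(q)
    axis = np.asarray(axis, dtype=float)
    proj = np.dot(q[1:], axis) * axis        # vector part projected onto axis
    twist = np.array([q[0], *proj])
    twist = twist / np.linalg.norm(twist)
    tw, tv = twist[0], -twist[1:]            # conjugate of the twist
    sw = q[0] * tw - np.dot(q[1:], tv)       # swing = q * conjugate(twist)
    sv = q[0] * tv + tw * q[1:] + np.cross(q[1:], tv)
    return np.array([sw, *sv]), twist

# Example: split a wrist rotation about the forearm axis (x-axis here).
swing, twist = swing_twist([0.92, 0.3, 0.2, 0.1], [1.0, 0.0, 0.0])
```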
- the coordinate system of the foot under the image coordinate system is obtained, and the coordinate system of the lower leg and the coordinate system of the thigh are obtained; the target joint at this time is the ankle joint.
- The rotation of the foot relative to the calf is decomposed into a spin and a pulled rotation. The pulled rotation is transferred to the calf; specifically, it is assigned to the rotation of the calf in the corresponding direction, and the first angle limit of the calf is used to restrict the maximum rotation of the calf. The rotation of the foot relative to the corrected calf is then determined to obtain the relative rotation parameter, and a second angle limit is applied to the relative rotation parameter to obtain the final rotation of the foot relative to the calf.
- the neck controls the direction of the head.
- the face, the human body, and the hands are separate parts that are ultimately integrated.
- the rotation of the neck is very important.
- The orientation of the human body can be calculated, and the orientation of the face can be calculated; the relative position between these two orientations is the rotation angle of the neck.
- The angle of this connecting part is obtained by relative calculation. For example, if the body is at 0 degrees and the face is at 90 degrees, then when controlling the controlled model only the local angle matters: as the angles of the head and the body change, the neck angle of the controlled model has to be calculated in order to control the head of the controlled model.
- When determining the rotation angle of the neck, first determine the current orientation of the user's face based on the image, and then calculate the rotation angle of the neck. Since the rotation of the neck has a range (for example, suppose the neck can rotate at most 90 degrees), if the calculated rotation angle exceeds this range (-90 degrees to 90 degrees), the boundary of the range is used as the rotation angle of the neck (that is, -90 degrees or 90 degrees).
- 3D key points can be used to calculate the orientation of the body or face.
- The specific orientation can be calculated as follows: the normal vector of a plane is obtained by taking the cross product of two non-collinear vectors in the plane of the face or of the body, and this normal vector is the orientation of the face or of the body. This orientation can be used as the orientation of the connecting part (the neck) between the body and the face.
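- A minimal sketch combining both steps, assuming three body key points (for example, the two shoulders and the pelvis), a yaw measured in the horizontal x-z plane, and the ±90-degree neck range used as an example above:

```python
import numpy as np

def facing_direction(p0, p1, p2):
    """Orientation as the normal of the plane through three key points:
    the cross product of two non-collinear in-plane vectors."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def yaw_degrees(direction):
    """Heading of a direction vector in the horizontal (x-z) plane."""
    return float(np.degrees(np.arctan2(direction[0], direction[2])))

def neck_rotation(body_dir, face_dir, limit_deg=90.0):
    """Relative neck rotation, clamped to an assumed +/-90-degree range."""
    angle = yaw_degrees(face_dir) - yaw_degrees(body_dir)
    return float(np.clip(angle, -limit_deg, limit_deg))
```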
- An embodiment of the present application provides an image device, including: a memory 1002 configured to store information; and a processor 1001 connected to the memory 1002 and configured to execute computer-executable instructions stored on the memory 1002, so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example, the image processing method shown in FIG. 1 and/or FIG. 2.
- the memory 1002 can be various types of memory, such as random access memory, read-only memory, flash memory, and the like.
- the memory 1002 may be used for information storage, for example, to store computer executable instructions.
- the computer-executable instructions may be various program instructions, for example, target program instructions and/or source program instructions.
- The processor 1001 may be various types of processors, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
- the processor 1001 may be connected to the memory 1002 through a bus.
- the bus may be an integrated circuit bus or the like.
- the terminal device may further include: a communication interface 1003, and the communication interface 1003 may include: a network interface, for example, a local area network interface, a transceiver antenna, and the like.
- the communication interface is also connected to the processor 1001 and can be used for information transceiving.
- the terminal device further includes a human-computer interaction interface 1005.
- the human-computer interaction interface 1005 may include various input and output devices, such as a keyboard and a touch screen.
- the image device further includes: a display 1004, which can display various prompts, collected facial images, and/or various interfaces.
- An embodiment of the present application provides a non-volatile computer storage medium that stores computer-executable code; after the computer-executable code is executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example, the image processing method shown in FIG. 1 and/or FIG. 2.
- the disclosed device and method may be implemented in other ways.
- the device embodiments described above are only illustrative.
- The division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
- The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- The functional units in the embodiments of the present disclosure may all be integrated into one processing module, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Claims (32)
- 1. An image processing method, comprising: acquiring an image; acquiring a local feature of a target based on the image; determining motion information of the local part based on the feature; and controlling motion of a corresponding part of a controlled model according to the motion information.
- 2. The method according to claim 1, wherein acquiring the local feature of the target based on the image comprises: acquiring a first-type feature of a first-type part of the target based on the image; and/or acquiring a second-type feature of a second-type part of the target based on the image.
- 3. The method according to claim 2, wherein acquiring the first-type feature of the first-type part of the target based on the image comprises: acquiring an expression feature of a head and an intensity coefficient of the expression feature based on the image.
- 4. The method according to claim 3, wherein acquiring the intensity coefficient of the expression feature based on the image comprises: obtaining, based on the image, an intensity coefficient representing each sub-part of the first-type part.
- 5. The method according to claim 3 or 4, wherein determining the motion information of the local part based on the feature comprises: determining motion information of the head based on the expression feature and the intensity coefficient; and controlling the motion of the corresponding part of the controlled model according to the motion information comprises: controlling an expression change of a head of the controlled model according to the motion information of the head.
- 6. The method according to any one of claims 2 to 5, wherein acquiring the second-type feature of the second-type part of the target based on the image comprises: acquiring position information of key points of the second-type part of the target based on the image; and determining the motion information of the local part based on the feature comprises: determining motion information of the second-type part based on the position information.
- 7. The method according to claim 6, wherein acquiring the position information of the key points of the second-type part of the target based on the image comprises: acquiring first coordinates of support key points of the second-type part of the target based on the image; and obtaining second coordinates based on the first coordinates.
- 8. The method according to claim 7, wherein acquiring the first coordinates of the support key points of the second-type part of the target based on the image comprises: acquiring first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtaining the second coordinates based on the first coordinates comprises: obtaining, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, first 3D coordinates corresponding to the first 2D coordinates.
- 9. The method according to claim 7, wherein acquiring the first coordinates of the support key points of the second-type part of the target based on the image comprises: acquiring second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtaining the second coordinates based on the first coordinates comprises: obtaining third 3D coordinates based on the second 3D coordinates.
- 10. The method according to claim 9, wherein obtaining the third 3D coordinates based on the second 3D coordinates comprises: correcting, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to a portion of the second-type part that is occluded in the 3D image, thereby obtaining the third 3D coordinates.
- 11. The method according to claim 6, wherein determining the motion information of the second-type part based on the position information comprises: determining a quaternion of the second-type part based on the position information.
- 12. The method according to claim 6, wherein acquiring the position information of the key points of the second-type part of the target based on the image comprises: acquiring first position information of support key points of a first part in the second-type part; and acquiring second position information of support key points of a second part in the second-type part.
- 13. The method according to claim 12, wherein determining the motion information of the second-type part based on the position information comprises: determining motion information of the first part according to the first position information; and determining motion information of the second part according to the second position information.
- 14. The method according to claim 12 or 13, wherein controlling the motion of the corresponding part of the controlled model according to the motion information comprises: controlling motion of a part of the controlled model corresponding to the first part according to the motion information of the first part; and controlling motion of a part of the controlled model corresponding to the second part according to the motion information of the second part.
- 15. The method according to any one of claims 12 to 14, wherein the first part is a torso; and/or the second part is an upper limb, a lower limb, or four limbs.
- 16. An image processing apparatus, comprising: a first acquisition module configured to acquire an image; a second acquisition module configured to acquire a local feature of a target based on the image; a first determining module configured to determine motion information of the local part based on the feature; and a control module configured to control motion of a corresponding part of a controlled model according to the motion information.
- 17. The apparatus according to claim 16, wherein the second acquisition module is specifically configured to: acquire a first-type feature of a first-type part of the target based on the image; and/or acquire a second-type feature of a second-type part of the target based on the image.
- 18. The apparatus according to claim 17, wherein the second acquisition module is specifically configured to acquire an expression feature of a head and an intensity coefficient of the expression feature based on the image.
- 19. The apparatus according to claim 18, wherein acquiring the intensity coefficient of the expression feature based on the image comprises: obtaining, based on the image, an intensity coefficient representing each sub-part of the first-type part.
- 20. The apparatus according to claim 18 or 19, wherein the first determining module is specifically configured to determine motion information of the head based on the expression feature and the intensity coefficient; and the control module is specifically configured to control an expression change of a head of the controlled model according to the motion information of the head.
- 21. The apparatus according to any one of claims 17 to 20, wherein the second acquisition module is specifically configured to acquire position information of key points of the second-type part of the target based on the image; and the first determining module is specifically configured to determine motion information of the second-type part based on the position information.
- 22. The apparatus according to claim 21, wherein the second acquisition module is specifically configured to: acquire first coordinates of support key points of the second-type part of the target based on the image; and obtain second coordinates based on the first coordinates.
- 23. The apparatus according to claim 22, wherein the second acquisition module is specifically configured to: acquire first 2D coordinates of the support key points of the second-type part based on a 2D image; and obtain, based on the first 2D coordinates and a conversion relationship from 2D coordinates to 3D coordinates, first 3D coordinates corresponding to the first 2D coordinates.
- 24. The apparatus according to claim 22, wherein the second acquisition module is specifically configured to: acquire second 3D coordinates of the support key points of the second-type part of the target based on a 3D image; and obtain third 3D coordinates based on the second 3D coordinates.
- 25. The apparatus according to claim 24, wherein the second acquisition module is specifically configured to correct, based on the second 3D coordinates, the 3D coordinates of the support key points corresponding to a portion of the second-type part that is occluded in the 3D image, thereby obtaining the third 3D coordinates.
- 26. The apparatus according to claim 21, wherein the first determining module is specifically configured to determine a quaternion of the second-type part based on the position information.
- 27. The apparatus according to claim 21, wherein the second acquisition module is specifically configured to: acquire first position information of support key points of a first part in the second-type part; and acquire second position information of support key points of a second part in the second-type part.
- 28. The apparatus according to claim 27, wherein the first determining module is specifically configured to: determine motion information of the first part according to the first position information; and determine motion information of the second part according to the second position information.
- 29. The apparatus according to claim 27 or 28, wherein the control module is specifically configured to: control motion of a part of the controlled model corresponding to the first part according to the motion information of the first part; and control motion of a part of the controlled model corresponding to the second part according to the motion information of the second part.
- 30. The apparatus according to any one of claims 27 to 29, wherein the first part is a torso; and/or the second part is an upper limb, a lower limb, or four limbs.
- 31. An image device, comprising: a memory; and a processor connected to the memory and configured to implement the method according to any one of claims 1 to 15 by executing computer-executable instructions stored on the memory.
- 32. A non-volatile computer storage medium storing computer-executable instructions, wherein after the computer-executable instructions are executed by a processor, the method according to any one of claims 1 to 15 can be implemented.