US20210074005A1 - Image processing method and apparatus, image device, and storage medium - Google Patents
- Publication number
- US20210074005A1 (application Ser. No. 17/102,364)
- Authority
- US
- United States
- Prior art keywords
- connection portion
- movement
- information
- parts
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06F18/00—Pattern recognition
- G06T11/60—Editing figures and text; Combining figures or text
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T7/0014—Biomedical image inspection using an image reference approach
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
- G06V10/754—Matching involving a deformation of the sample pattern or of the reference pattern; elastic matching
- G06V20/64—Three-dimensional objects
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/176—Dynamic expression
- G06T2200/04—Indexing scheme involving 3D image data
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present disclosure relates to the field of information technologies, and in particular, to an image processing method and apparatus, an image device, and a storage medium.
- with the development of information technologies, users can perform online teaching and online streaming through video recording, and somatosensory games and the like become possible.
- in somatosensory games, however, users are required to wear dedicated somatosensory devices to detect the activities of their own limbs and the like, so as to control game characters.
- the face, limbs or the like of a user are completely exposed online; on the one hand, this may involve a user privacy issue, and on the other hand, this may also involve an information security issue.
- a face image can be covered by mosaic processing or the like, but this degrades the video effect.
- embodiments of the present disclosure are expected to provide an image processing method and apparatus, an image device, and a storage medium.
- the present disclosure provides an image processing method, including:
- obtaining an image; obtaining features of at least two parts of a target based on the image; determining, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts; and controlling the movement of the connection portion in a controlled model according to the movement information of the connection portion.
- controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: determining, according to the type of the connection portion, a control mode of controlling the connection portion; and controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion.
- determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a first-type connection portion, the control mode is a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion.
- determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a second-type connection portion, the control mode is a second-type control mode, where the second-type control mode is used for indirectly controlling the movement of a connection portion in the controlled model corresponding to the second-type connection portion, and the indirect control is achieved by controlling a part in the controlled model corresponding to the part other than the second-type connection portion.
- controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion includes: in the case that the control mode is the second-type control mode, decomposing the movement information of the connection portion to obtain first-type rotation information of the connection portion that the connection portion rotates under the traction of a traction portion; adjusting movement information of the traction portion according to the first-type rotation information; and controlling the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion.
- the method further includes: decomposing the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and controlling, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information.
- the second-type connection portion includes: a wrist; and an ankle.
- in the case that the second-type connection portion is a wrist, the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and in the case that the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
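The split into first-type rotation (rotation under traction) and second-type rotation (rotation with respect to the traction portion) can be sketched with single-axis angles. This is a simplification for illustration only; the patent's decomposition operates on full 3D rotations, and the function and parameter names below are assumptions:

```python
def decompose_wrist_movement(wrist_total_deg, forearm_deg):
    """Illustrative split of a wrist's total rotation (single-axis degrees).

    first_type: the part of the wrist's movement carried by the traction
    portion (here, the forearm).
    second_type: the wrist's rotation with respect to the forearm.
    """
    first_type = forearm_deg
    second_type = wrist_total_deg - forearm_deg
    return first_type, second_type
```

For example, if the wrist has rotated 90 degrees in total while the forearm has rotated 60 degrees, the wrist's own rotation relative to the forearm is the remaining 30 degrees.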
- the first-type connection portion includes a neck connecting a head and a trunk.
- determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining respective orientation information of the two parts according to the features of the two parts connected by the connection portion; determining candidate orientation information of the connection portion according to the respective orientation information of the two parts; and determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- determining the candidate orientation information of the connection portion according to the respective orientation information of the two parts includes: determining first candidate orientation information and second candidate orientation information of the connection portion according to the respective orientation information of the two parts.
- determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition includes: selecting target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determining the movement information of the connection portion according to the target orientation information.
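As a hedged sketch of this selection step: assume the two candidate orientations are, for example, a plane normal and its opposite, and assume the orientation change constraint range is a maximum allowed angular change from the connection portion's previous orientation. The function name and the degree threshold are illustrative, not taken from the patent:

```python
import math

def pick_target_orientation(cand_a, cand_b, previous, max_change_deg):
    """Select the candidate orientation lying within the orientation change
    constraint range, i.e. within max_change_deg of the previous orientation."""
    def angle_deg(u, v):
        # Angle between two vectors, clamped to avoid domain errors.
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
    for cand in (cand_a, cand_b):
        if angle_deg(cand, previous) <= max_change_deg:
            return cand
    return None  # neither candidate satisfies the constraint
```

The design intuition is that the two candidates are geometrically ambiguous on their own, and the constraint resolves the ambiguity by temporal continuity.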
- determining the respective orientation information of the two parts according to the features of the two parts includes: obtaining, for each of the two parts, a first key point and a second key point; obtaining a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generating a first vector based on the first key point and the first reference point; generating a second vector based on the second key point and the first reference point; and determining the orientation information for each of the two parts based on the first vector and the second vector.
- determining the orientation information for each of the two parts based on the first vector and the second vector includes: performing, for each part, cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and taking the normal vector as the orientation information of the part.
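The cross-product construction above can be sketched directly in pure Python (function names are illustrative):

```python
def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def part_orientation(first_key_point, second_key_point, reference_point):
    """Normal of the plane spanned by the two key-point vectors, taken as the
    part's orientation information."""
    # First vector: from the first reference point to the first key point.
    v1 = tuple(k - r for k, r in zip(first_key_point, reference_point))
    # Second vector: from the first reference point to the second key point.
    v2 = tuple(k - r for k, r in zip(second_key_point, reference_point))
    return cross(v1, v2)
```

For example, two key points along the x- and y-axes relative to the origin yield an orientation along the z-axis.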
- the method further includes: determining movement information of the at least two parts based on the features. Determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining the movement information of the connection portion according to the movement information of the at least two parts.
- determining the movement information of the connection portion according to the movement information of the at least two parts includes: obtaining a third 3-Dimensional (3D) coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtaining absolute rotation information of the connection portion according to the third 3D coordinate; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the absolute rotation information.
- controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: decomposing the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the relative rotation information.
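With single-axis angles standing in for full rotations, the decomposition along the traction hierarchy can be sketched as follows (a simplification; the patent does not restrict rotations to one axis, and the names below are assumptions):

```python
def to_relative(absolute_by_part, parent_of):
    """Decompose absolute rotations (degrees) into rotations relative to each
    part's parent in the traction hierarchy; roots rotate relative to zero."""
    relative = {}
    for part, abs_rot in absolute_by_part.items():
        parent = parent_of.get(part)
        parent_abs = absolute_by_part.get(parent, 0.0)
        relative[part] = abs_rot - parent_abs
    return relative
```

A usage example: with an upper arm at 10 degrees, a forearm at 40, and a wrist at 70 (all absolute), the relative rotations are 10, 30, and 30 degrees respectively.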
- controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition; and controlling the movement of the connection portion in the controlled model based on the relative rotation information includes: controlling the movement of the connection portion in the controlled model based on the corrected relative rotation information.
- the second constraint condition includes: a rotatable angle of the connection portion.
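A minimal sketch of this correction, assuming the rotatable angle is expressed as a [min, max] range in degrees (an assumed representation; the patent does not fix a format):

```python
def correct_relative_rotation(relative_deg, rotatable_range):
    """Clamp the relative rotation to the connection portion's rotatable
    angle range (the second constraint condition)."""
    lo, hi = rotatable_range
    return max(lo, min(hi, relative_deg))
```

This prevents the controlled model from taking anatomically impossible poses when the estimated rotation overshoots the joint's range.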
- an image processing apparatus including:
- a first obtaining module configured to obtain an image
- a second obtaining module configured to obtain features of at least two parts of a target based on the image
- a first determination module configured to determine, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts
- a control module configured to control the movement of the connection portion in a controlled model according to the movement information of the connection portion.
- the present disclosure provides an image device, including: a memory; and a processor, connected to the memory, and configured to execute computer-executable instructions on the memory to implement the image processing method according to any one of the foregoing items.
- the present disclosure provides a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions, and after the computer-executable instructions are executed by a processor, the image processing method according to any one of the foregoing items can be implemented.
- the movement information of the connection portion is obtained according to the features of the at least two parts and the first movement constraint condition of the connection portion.
- the movement information of the connection portion can also be obtained precisely, and thus the movement of the connection portion corresponding to the controlled model can be controlled.
- in the case that the controlled model is used to simulate the movement of a user for live video streaming, the movement of the connection portion in the controlled model can be controlled more precisely, so that the controlled model can precisely simulate the movements of acquisition objects such as users, thereby protecting the privacy of the user during the live video streaming.
- FIG. 1A is a schematic flowchart of an image processing method provided by the embodiments of the present disclosure.
- FIG. 1B is a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
- FIG. 2 is a schematic flowchart of an image processing method provided by yet another embodiment of the present disclosure.
- FIG. 3A to FIG. 3C are schematic diagrams of changes in hand movement of an acquired user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 4A to FIG. 4C are schematic diagrams of changes in trunk movement of an acquired user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 5A to FIG. 5C are schematic diagrams of foot movement of an acquired user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 6 is a schematic structural diagram of an image processing apparatus provided by the embodiments of the present disclosure.
- FIG. 7A is a schematic diagram of key points of a skeleton provided by the embodiments of the present disclosure.
- FIG. 7B is a schematic diagram of key points of a skeleton provided by the embodiments of the present disclosure.
- FIG. 8 is a schematic diagram of a skeleton provided by the embodiments of the present disclosure.
- FIG. 9 is a schematic diagram of a local coordinate system of different bones of a human body provided by the embodiments of the present disclosure.
- FIG. 10 is a schematic structural diagram of an image device provided by the embodiments of the present disclosure.
- the embodiments provide an image processing method, including the following steps.
- step S 110 an image is obtained.
- step S 120 features of at least two parts of a target are obtained based on the image.
- step S 130 according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion is determined, where the connection portion connects two of the at least two parts.
- step S 140 the movement of the connection portion in a controlled model is controlled according to the movement information of the connection portion.
- the image processing method provided by the embodiments can drive the movement of the controlled model through image processing.
- the image processing method provided by the embodiments can be applied to an image device.
- the image device may be any electronic device capable of performing image processing, for example, an electronic device for performing image acquisition, image display, and image processing.
- the image device includes, but is not limited to, various terminal devices, such as a mobile terminal and/or a fixed terminal, and may also include various servers capable of providing image services.
- the mobile terminal includes portable devices easy to carry by a user, such as a mobile phone or a tablet computer, and may also include a device worn by the user, such as a smart bracelet, a smart watch, or smart glasses.
- the fixed terminal includes a fixed desktop computer, etc.
- the image obtained in step S 110 may be a two-dimensional (2D) image or a three-dimensional (3D) image.
- the 2D image may include: an image acquired by a monocular or multiocular camera, such as a red green blue (RGB) image.
- the approach of obtaining the image may include: acquiring the image by using a camera of the image device; and/or receiving an image from an external device; and/or reading the image from a local database or a local memory.
- the 3D image may be obtained by detecting 2D coordinates from a 2D image and then applying a 2D-coordinate-to-3D-coordinate conversion algorithm.
- the 3D image may also be an image acquired using a 3D camera.
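One common form of such a 2D-to-3D conversion is pinhole back-projection given an assumed depth. The patent does not specify which conversion algorithm is used, so the sketch below, including the intrinsic parameter names, is only illustrative:

```python
def lift_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D keypoint (u, v) at an assumed depth to camera-space
    3D coordinates using pinhole intrinsics (fx, fy: focal lengths in pixels;
    cx, cy: principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

In practice, learned depth estimation or model fitting would supply `depth`; the back-projection step itself stays the same.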
- the obtained image may be one image frame or multiple image frames.
- in the case that one image frame is obtained, the subsequently obtained movement information may reflect the movement of the connection portion in the current image with respect to the corresponding connection portion in an initial coordinate system (also referred to as a camera coordinate system).
- in the case that multiple image frames are obtained, the subsequently obtained movement information may reflect the movement of the connection portion in the current image with respect to the corresponding connection portions in the previous image frames, or it may reflect the movement of the connection portion in the current image with respect to the corresponding connection portion in the camera coordinate system.
- the present application does not limit the number of the obtained images.
- Step S 120 may include: obtaining the features of the at least two parts of the target by detecting the image, where the two parts are different parts on the target.
- the two parts may be continuously distributed on the target, and may also be distributed on the target at an interval.
- the at least two parts may include at least two of the following: head, trunk, limbs, upper limbs, lower limbs, hands, feet, or the like.
- the connection portion may be a neck connecting the head and the trunk, a right hip connecting the right leg and the trunk, a wrist connecting the hand and the forearm, or the like.
- the target is not limited to humans, and may also be various movable living bodies, such as animals, or non-living bodies.
- the features of at least two parts are obtained, and the features may be features characterizing the spatial structure information, position information, or movement states of the two parts in various forms.
- the features of the two parts include, but are not limited to, various image features.
- the image features may include color features and/or optical flow features obtained using an optical flow algorithm.
- the color features include, but are not limited to, RGB features and/or gray-scale features.
- a deep learning model such as a neural network may be used to detect the image, so as to obtain the features.
- the precise movement information of the connection portion can be obtained based on the features, the connection relationship between the two parts and the connection portion, and the first movement constraint condition that the movement of the connection portion needs to satisfy.
- if one connection portion connects more than two parts, the connection portion may be split into a plurality of sub-connection portions; then, according to the method provided by the present disclosure, the movement information of each sub-connection portion is calculated respectively, and the movement information of the sub-connection portions is merged to obtain the movement information of the connection portion.
- the present disclosure is described below in the case that one connection portion connects two parts in the present disclosure.
- the controlled model may be a model corresponding to the target.
- in the case that the target is a human, the controlled model is a human body model; in the case that the target is an animal, the controlled model may be a corresponding animal body model; and in the case that the target is a vehicle, the controlled model may be a vehicle model.
- the controlled model is a model for the category to which the target belongs.
- the model may be predetermined, and may further have multiple styles.
- the style of the controlled model may be determined based on a user instruction.
- the controlled model may include a variety of styles, such as, a simulated real-person style, an anime style, an Internet celebrity style, different temperament styles, and a game style. Different temperament styles may be a literary style or a rock style. Under the game style, the controlled model may be a game character.
- for example, in an online teaching scenario, a movement image of the teacher may be obtained through image acquisition or the like, and then a virtual controlled model may be controlled to move through feature extraction and movement information acquisition.
- on the one hand, limb movement teaching is completed by the teacher using his or her own limb movements to make the controlled model simulate the movement of the teacher; on the other hand, the movement of the controlled model is used for teaching, so the face and body of the teacher do not need to be directly exposed in the teaching video, thereby protecting the privacy of the teacher.
- a surveillance video is obtained by using a vehicle model to simulate the real vehicle movement; the license plate information of the vehicles and/or the overall outlines of the vehicles are retained in the surveillance video, but the brands, models, colors, and conditions, etc. of the vehicles can all be hidden, thereby protecting the privacy of the user.
- the movement information of the connection portion is obtained according to the features of the at least two parts.
- the movement information of the connection portion can also be obtained precisely, and thus the movement of the connection portion corresponding to the controlled model is controlled.
- when the controlled model is used to simulate the movement of the target for live video broadcast, the movement of the connection portion in the controlled model can be controlled precisely, so that the controlled model can precisely simulate the movements of acquisition objects such as users, thereby protecting the privacy of the user during the live video broadcast.
- the embodiments further provide an image processing method.
- the method further includes steps S 150 and S 160 .
- in step S 150, movement information of the at least two parts is determined based on the features.
- in step S 160, the movement of the parts of the controlled model is controlled according to the movement information of the at least two parts.
- the movement information represents movement changes and/or expression changes, etc. of the two parts at two adjacent moments.
- the movement information of the connection portion may be obtained according to the connection relationship between the connection portion and the two parts, and the corresponding first movement constraint condition.
- the information form of the movement information of the at least two parts includes, but is not limited to, at least one of the following: the coordinates of the key points corresponding to the parts.
- the coordinates include, but are not limited to: 2D coordinates and 3D coordinates.
- the coordinates can represent the changes of the key points corresponding to the parts with respect to a reference position, and thus can represent the movement state of the corresponding parts.
- the movement information may be represented in various information forms, such as vectors, arrays, one-dimensional values, and matrices.
- the movement information of the two parts is further obtained.
- the movement of the parts of the controlled model is controlled by means of the movement information of the two parts, so that each part in the controlled model can simulate the target to move.
- step S 120 may include the following steps.
- a first-type feature of a first-type part of the target is obtained based on the image.
- a second-type feature of a second-type part of the target is obtained based on the image, where the type of the second-type feature is different from that of the first-type feature.
- the first-type feature and the second-type feature are features also representing the spatial structure information, position information and/or movement states of the corresponding parts, but are different types of features.
- the first-type feature of the first-type part and the second-type feature of the second-type part are respectively obtained based on the image.
- the means for obtaining the first-type feature and the second-type feature are different; for example, the features are obtained using different deep learning models or deep learning modules.
- the obtaining logic of the first-type feature is different from that of the second-type feature.
- the first-type part and the second-type part are different types of parts.
- the different types of parts may be distinguished through the movable amplitudes of the different types of parts, or distinguished through the movement fineness of the different types of parts.
- the first-type part and the second-type part may be two types of parts with a relatively large difference in the maximum amplitudes of movement.
- the first-type part may be a head.
- the five sense organs of the head can move, but the movements of the five sense organs of the head are all relatively small; the entire head can also move, for example, nodding or shaking the head, but the movement amplitude is relatively small with respect to the movement amplitude of the limbs or the trunk.
- the second-type part may be the upper limbs, lower limbs, or four limbs, and the movement amplitudes of the limbs are all very large. If the movement states of the two types of parts are represented by the same feature, problems such as a decrease in precision or an increase in algorithm complexity due to fitting the movement amplitude of a certain part may arise.
- the information precision of at least one type of part may be increased, thereby improving the precision of the movement information.
- step S 121 may include: obtaining expression features of the head based on the image.
- the first-type part is a head.
- the head includes a face.
- the expression features include, but are not limited to, at least one of the following: the movement of eyebrows, the movement of mouth, the movement of nose, the movement of eyes, and the movement of cheeks.
- the movement of eyebrows may include: raising eyebrows and slouching eyebrows.
- the movement of mouth may include: opening the mouth, closing the mouth, twitching the mouth, pouting, grinning, snarling, etc.
- the movement of nose may include: nose contraction generated by inhalation into the nose and nose extension movement accompanied with blowing outward.
- the movement of eyes may include, but is not limited to: the movement of eye sockets and/or the movement of eyeballs.
- the sizes and/or shapes of the eye sockets would be changed by the movement of the eye sockets, for example, both the shapes and sizes of the eye sockets of squinting, glaring and smiling eyes would be changed.
- the movement of eyeballs may include: the positions of the eyeballs in the eye sockets, for example, a change in the line of sight of the user may cause the eyeballs to be in different positions in the eye sockets, and the movement of the left and right eyeballs together may reflect different emotional states of the user.
- regarding the movement of cheeks, some users may have dimples or pear-shaped dimples when they laugh, and the shape of their cheeks changes accordingly.
- the movement of the head is not limited to the expression movement, and therefore, the first-type feature is not limited to the expression features and further includes hair movement features such as the hair movement of the head.
- the first-type feature may further include: the entire movement feature of the head, such as nodding and/or shaking the head.
- step S 121 further includes: obtaining an intensity coefficient of the expression feature based on the image.
- the intensity coefficient in the embodiments may correspond to an expression amplitude of a facial expression.
- the face is set with multiple expression bases.
- One expression base corresponds to one expression action.
- the intensity coefficient here may be used for representing the intensity of the expression action, for example, the intensity may be the amplitude of the expression action.
- the greater the intensity coefficient, the higher the intensity.
- the higher the intensity coefficient, the greater the amplitude of the open-mouth expression base, the greater the amplitude of the pout expression base, and so on.
- the greater the intensity coefficient, the higher the eyebrow height of the eyebrow-raising expression base.
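- the scaling described above can be sketched in code. The following is a minimal, hypothetical blend-shape example (the function name, array shapes, and base names are illustrative assumptions, not from this disclosure): each expression base is stored as a vertex-displacement field, and the intensity coefficient scales its amplitude.

```python
import numpy as np

def apply_expression_bases(neutral_vertices, bases, intensities):
    """Blend-shape sketch: neutral_vertices is a (V, 3) neutral face mesh,
    bases maps an expression-base name to a (V, 3) vertex displacement,
    and intensities maps a base name to its intensity coefficient.
    A greater coefficient yields a greater expression amplitude."""
    result = neutral_vertices.astype(float).copy()
    for name, displacement in bases.items():
        # an absent coefficient means the expression action is not present
        result += intensities.get(name, 0.0) * displacement
    return result
```

With an intensity coefficient of 0 a base contributes nothing; with 1 its full displacement is applied.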
- the controlled model can not only simulate the current action of the target, but also precisely simulate the intensity of the current expression of the target, thereby implementing the precise migration of the expression.
- the controlled object is a game character.
- the game character can not only be controlled by the limb movements of the user, but also accurately simulate the expression features of the user. In such a game scene, the simulation of the game scene is improved, and the game experience of the user is improved.
- step S 160 may include: controlling an expression change of the head of the controlled model based on the expression features; and controlling the intensity of the expression change of the controlled model based on the intensity coefficient.
- obtaining the expression features of the target based on the image includes: obtaining mesh information of the first-type part based on the image.
- when the target is a person, in step S 120, mesh information representing the expression change of the head is obtained through mesh detection, etc., and the change of the controlled model is controlled based on the mesh information.
- the mesh information includes, but is not limited to: quadrilateral mesh information and/or triangular patch information.
- the quadrilateral mesh information indicates information of longitude and latitude lines; the triangular patch information is information of a triangular patch formed by connecting three key points.
- the mesh information is formed by a predetermined number of face key points of a face surface; the intersections between the latitude and longitude lines in a mesh represented by the quadrilateral mesh information may be the positions of the face key points; position changes of the intersections of the mesh are the expression changes.
- the expression features and intensity coefficient obtained based on the quadrilateral mesh information can accurately control the expression of the face of the controlled model.
- the vertices of the triangular patch corresponding to the triangular patch information include the face key points.
- the changes in the positions of the key points are the expression changes.
- the expression features and intensity coefficient obtained based on the triangular patch information can be used for precise control of the expression of the face of the controlled model.
- obtaining the intensity coefficient of the expression feature based on the image includes: obtaining the intensity coefficient representing each subpart in the first-type part based on the image.
- the five sense organs of the face i.e., the eyes, the eyebrows, the nose, the mouth, and the ears, respectively correspond to at least one expression base, and some correspond to multiple expression bases; one expression base corresponds to one type of expression action of one of the five sense organs, and the intensity coefficient just represents the amplitude of the expression action.
- step S 122 may include: obtaining, based on the image, position information of the key points of the second-type part of the target.
- the position information may be represented by the position information of the key points of the target.
- the key points may include: bracket key points and outer contour key points. If a person is taken as an example, the bracket key points may include the skeleton key points of the human body, and the contour key points may be the key points of the outer contour of the body surface of the human body.
- the position information may be represented by coordinates, for example, be represented with 2D coordinates and/or 3D coordinates of a predetermined coordinate system.
- the predetermined coordinate system includes, but is not limited to: an image coordinate system where the image is located.
- the position information may be the coordinates of the key points, and are obviously different from the above-mentioned mesh information. Because the second-type part is different from the first-type part, the movement change of the second-type part can be more precisely represented with the position information.
- step S 150 may include: determining the movement information of at least two parts of the second-type part based on the position information.
- the second-type part includes, but is not limited to: the trunk and/or four limbs; the trunk and/or upper limbs, or the trunk and/or lower limbs.
- step S 122 may specifically include: obtaining first coordinates of the bracket key points of the second-type part of the target based on the image; and obtaining second coordinates based on the first coordinates.
- Both the first coordinates and the second coordinates are coordinates representing the bracket key points. If a person or an animal is taken as an example of the target, the bracket key points here are skeleton key points.
- the first coordinates and the second coordinates may be different types of coordinates.
- the first coordinates are 2D coordinates in a 2D coordinate system
- the second coordinates are 3D coordinates in a 3D coordinate system.
- the first coordinates and the second coordinates may also be the same type of coordinates.
- the second coordinates are coordinates obtained after performing correction on the first coordinates.
- the first coordinates and the second coordinates are the same type of coordinates.
- both the first coordinates and the second coordinates are 3D coordinates or 2D coordinates.
- obtaining the first coordinates of the bracket key points of the second-type part of the target based on the image includes: obtaining first 2D coordinates of the bracket key points of the second-type part based on a 2D image.
- Obtaining the second coordinates based on the first coordinates includes: obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a transforming relationship from 2D coordinates to 3D coordinates.
- obtaining the first coordinates of the bracket key points of the second-type part of the target based on the image includes: obtaining second 3D coordinates of the bracket key points of the second-type part of the target based on a 3D image.
- Obtaining the second coordinates based on the first coordinates includes: obtaining third 2D coordinates based on the second 3D coordinates.
- a 3D image is directly obtained in step S 110 , and the 3D image includes: a 2D image and a depth image corresponding to the 2D image.
- the 2D image provides coordinate values of the bracket key points in an xoy plane, and depth values in the depth image provide the coordinates of the bracket key points on a z-axis.
- the z-axis is perpendicular to the xoy plane.
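- combining the 2D image and the depth image as described above can be sketched as follows; this is an illustrative back-projection assuming a pinhole camera with known intrinsics (fx, fy, cx, cy), which are not specified in this disclosure:

```python
import numpy as np

def lift_keypoints_to_3d(uv, depth_map, fx, fy, cx, cy):
    """uv: (N, 2) pixel coordinates of key points in the 2D image.
    depth_map: depth image aligned with the 2D image; its values give
    the z-axis coordinates, the z-axis being perpendicular to the xoy plane.
    Returns an (N, 3) array of 3D coordinates."""
    points = []
    for u, v in uv:
        z = float(depth_map[int(round(v)), int(round(u))])  # depth supplies z
        x = (u - cx) * z / fx  # back-project to the xoy plane
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.array(points)
```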
- obtaining the third 2D coordinates based on the second 3D coordinates includes: adjusting the 3D coordinates of the bracket key points corresponding to a shaded portion of the second-type part in the 3D image based on the second 3D coordinates so as to obtain the third 2D coordinates.
- a 3D model is used to first extract the second 3D coordinates from the 3D image, and then the occlusion of different parts of the target is taken into account. Correct third 2D coordinates of different parts of the target in a 3D space can be obtained through correction, thereby ensuring the subsequent control precision of the controlled model.
- step S 150 may include: determining a quaternion of the second-type part based on the position information.
- the movement information of the at least two parts is not limited to being represented by the quaternion, but can also be represented by coordinate values in different coordinate systems, such as the coordinate values in the Euler coordinate system or Lagrange coordinate system.
- the quaternion may be used to precisely represent the spatial position and/or rotation in each direction of the second-type part.
- the quaternion is used as the movement information of the at least two parts and/or the movement information of the connection portion; in specific implementation, the movement information is not limited to the quaternion, and may also be indicated with the coordinate values in various coordinate systems with respect to a reference point, for example, the quaternion may be replaced with Euler coordinates or Lagrange coordinates.
- step S 120 may include: obtaining first position information of the bracket key points of a first part in the second-type part; and obtaining second position information of the bracket key points of a second part in the second-type part.
- the second-type part may at least include two different parts.
- the controlled model can simultaneously simulate the movements of at least two parts of the target.
- step S 150 may include: determining the movement information of the first part according to the first position information; and determining the movement information of the second part according to the second position information.
- step S 160 may include: controlling the movement of a part of the controlled model corresponding to the first part according to the movement information of the first part; and controlling the movement of a part of the controlled model corresponding to the second part according to the movement information of the second part.
- the first part is the trunk; and the second part is the upper limbs, lower limbs, or four limbs.
- step S 140 further includes: determining, according to the type of the connection portion, a control mode of controlling the connection portion; and controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion.
- connection portion may connect another two parts.
- the neck, wrist, ankle, or waist is a connection portion for connecting two parts.
- the movement information of these connection portions may be inconvenient to detect or depend to a certain extent on other adjacent parts.
- the control mode would be determined according to the type of the connection portion.
- the lateral rotation of the wrist is, for example, rotation performed by taking an extension direction from the upper arm to the hand as the axis, and the lateral rotation of the wrist is caused by the rotation of the upper arm.
- the lateral rotation of the ankle is, for example, rotation performed by taking the extension direction of the calf as the axis, and the rotation of the ankle is also directly driven by the calf.
- connection portion such as the neck
- determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a first-type connection portion, determining to use a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion.
- the first-type connection portion is driven by its own rotation rather than other parts.
- connection portion further includes a second-type connection portion other than the first-type connection portion.
- the movement of the second-type connection portion here is not limited to itself, but is driven by other parts.
- determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a second-type connection portion, determining to use a second-type control mode, where the second-type control mode is used for indirectly controlling the second-type connection portion by controlling a part in the controlled model other than the second-type connection portion.
- the part other than the second-type connection portion includes, but is not limited to, a part directly connected to the second-type connection portion, or a part indirectly connected to the second-type connection portion.
- the entire upper limb may be moving, and thus the shoulder and the elbow are both rotating. In such a way, the rotation of the wrist may be indirectly controlled by controlling the lateral rotation of the shoulder and/or elbow.
- controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion includes: in the case that the control mode is the second-type control mode, decomposing the movement information of the connection portion to obtain first-type rotation information of the connection portion that the connection portion rotates under the traction of a traction portion; adjusting movement information of the traction portion according to the first-type rotation information; and controlling the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion.
- the traction portion is a part directly connected to the second-type connection portion. Taking the wrist being the second-type connection portion as an example, the traction portion is the elbow above the wrist or even an arm. Taking the ankle being the second-type connection portion as an example, the traction portion is the knee above the ankle or even the root of the thigh.
- the lateral rotation of the wrist along a linear direction from the shoulder and the elbow to the wrist may be a rotation driven by the shoulder or elbow.
- the lateral rotation is not caused by the movement of the wrist itself, so the lateral rotation information of the wrist should essentially be assigned to the elbow or shoulder.
- the movement information of the elbow or shoulder is adjusted through this transfer assignment, and the adjusted movement information is used to control the movement of the elbow or shoulder in the controlled model.
- the lateral rotation corresponding to the elbow or shoulder would be reflected by the wrist of the controlled model, thereby implementing the precise simulation of the movement of the target by the controlled model.
- the method further includes: decomposing the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and controlling, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information.
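- the split into first-type rotation information (the component transferred to the traction portion) and second-type rotation information (the rotation relative to the traction portion) resembles a swing-twist decomposition of a quaternion. The sketch below is one possible implementation of that general technique under assumed conventions, not the exact method of this disclosure; quaternions are (w, x, y, z):

```python
import numpy as np

def quat_conj(q):
    # conjugate; for a unit quaternion this is the inverse
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    # Hamilton product of quaternions a and b, each as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def swing_twist(q, axis):
    """Decompose unit quaternion q into q = swing * twist, where twist is
    the rotation about `axis` (e.g., the forearm direction for lateral
    wrist rotation) and swing is the remaining rotation."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[1:], axis) * axis  # vector part projected onto the twist axis
    twist = np.array([q[0], *proj])
    norm = np.linalg.norm(twist)
    if norm < 1e-9:  # 180-degree swing: twist is undefined, use identity
        return q.copy(), np.array([1.0, 0.0, 0.0, 0.0])
    twist = twist / norm
    swing = quat_mul(q, quat_conj(twist))
    return swing, twist
```

In this reading, the twist component would be reassigned to the traction portion (elbow or shoulder), and the swing retained as the rotation of the connection portion relative to the traction portion.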
- the first-type rotation information is information obtained directly according to the features of the image by an information model for extracting the rotation information
- the second-type rotation information is rotation information obtained by adjusting the first-type rotation information.
- the movement information of the second-type connection portion with respect to the predetermined posture can be known through the features of the second-type connection portion, for example, 2D coordinates or 3D coordinates.
- the movement information of the connection portion includes, but is not limited to, the rotation information.
- the movement information of the connection portion further includes: translation information.
- the second-type connection portion includes: a wrist; and an ankle.
- the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and if the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
- the first-type connection portion includes a neck connecting the head and the trunk.
- determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining orientation information of the two parts according to the features of the two parts connected by the connection portion; determining candidate orientation information of the connection portion according to the orientation information of the two parts; and determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- determining the candidate orientation information of the connection portion according to the orientation information of the two parts includes: determining first candidate orientation information and second candidate orientation information of the connection portion according to the orientation information of the two parts.
- Two included angles may be formed between the orientation information of the two parts.
- the included angle satisfying the first movement constraint condition is taken as the movement information of the connection portion.
- the first movement constraint condition for the neck connecting the face and the trunk is between ⁇ 90 degrees and 90 degrees, thus angles exceeding 90 degrees are excluded according to the first movement constraint condition.
- the abnormality of rotation angles exceeding 90 degrees clockwise or counterclockwise, for example, 120 degrees or 180 degrees, during the simulation of the movement of the target by the controlled model can be reduced. If the limit value of the first movement constraint condition is exceeded, the limit value corresponding to the first movement constraint condition is substituted for the abnormal value.
- the first movement constraint condition is between ⁇ 90 degrees and 90 degrees
- the first movement constraint condition corresponds to two limit angles: one is ⁇ 90 degrees and the other is 90 degrees.
- the detected rotation angle is modified to the limit angle defined by the first movement constraint condition, i.e., the limit value. For example, if a rotation angle exceeding 90 degrees is detected, the detected rotation angle is modified to the limit angle closer to the detected rotation angle, i.e., 90 degrees.
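- the limit-value substitution described above amounts to clamping; a minimal sketch follows (the function name and the [-90, 90] default are illustrative, matching the neck example):

```python
def clamp_to_constraint(angle_deg, lo=-90.0, hi=90.0):
    """Replace a rotation angle that exceeds the first movement constraint
    condition with the nearer limit angle; in-range angles pass through."""
    return max(lo, min(hi, angle_deg))
```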
- determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition includes: selecting target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determining the movement information of the connection portion according to the target orientation information.
- the target orientation information here is information satisfying the first movement constraint condition.
- the corresponding orientation of the neck may be 90 degrees toward the right or 270 degrees toward the left.
- the orientation of the neck of the human body cannot be changed by turning left by 270 degrees to make the neck face right.
- the orientation of the neck is: 90 degrees toward the right and 270 degrees toward the left, which are both candidate orientation information.
- the orientation information of the neck needs to be further determined, and needs to be determined according to the aforementioned first movement constraint condition.
- 90 degrees toward the right of the neck is the target orientation information of the neck, and according to the 90 degrees toward the right of the neck, it is obtained that the current movement information of the neck is rotating 90 degrees toward the right.
- determining the orientation information of the two parts according to the features of the two parts includes: obtaining a first key point and a second key point for each of the two parts; obtaining a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generating a first vector based on the first key point and the first reference point, and generating a second vector based on the second key point and the first reference point; and determining the orientation information for each of the two parts based on the first vector and the second vector.
- the first reference point of the first part may be the waist key point of the target or the midpoint of the key points of the two crotches. If the second part of the two parts is a human face, the first reference point of the second part may be the connection point of the neck connected to the human face and the shoulder.
- determining the orientation information for each of the two parts based on the two vectors includes: performing, for each part, cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and taking the normal vector as the orientation information of the part.
- Another vector can be obtained by cross-product calculation, and the vector is the normal vector of a plane where the connection portion is located. If the normal vector is determined, the orientation of the plane where the part is located is also determined; thus, it may be equivalent to determining the rotation angle of the connection portion with respect to a reference plane, i.e., equivalent to determining the movement information of the connection portion.
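- the cross-product construction above can be sketched as follows (the function and argument names are illustrative):

```python
import numpy as np

def part_orientation(first_key_point, second_key_point, first_reference_point):
    """Generate the first vector from the first key point and the reference
    point, and the second vector from the second key point and the reference
    point; their cross product is the normal vector of the plane where the
    part is located, taken as the part's orientation information."""
    v1 = np.asarray(first_key_point, float) - np.asarray(first_reference_point, float)
    v2 = np.asarray(second_key_point, float) - np.asarray(first_reference_point, float)
    normal = np.cross(v1, v2)
    return normal / np.linalg.norm(normal)
```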
- the method further includes: determining movement information of the at least two parts based on the features; and determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining the movement information of the connection portion according to the movement information of the at least two parts.
- determining the movement information of the connection portion according to the movement information of the at least two parts includes: obtaining a third 3D coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtaining absolute rotation information of the connection portion according to the third 3D coordinate; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the absolute rotation information.
- the second reference point may be one of the skeleton key points of the target. Taking the target being a person as an example, the second reference point may be a key point of a part connected by the first-type connection portion. For example, taking the neck as an example, the second reference point may be the key point of the shoulder connected by the neck.
- the second reference point may be the same as the first reference point, for example, both the first reference point and the second reference point may be a root node of the human body, and the root node of the human body may be the midpoint of a connection line of the two key points of the crotches of the human body.
- the root node includes, but is not limited to, a key point 0 shown in FIG. 7B .
- FIG. 7B is a schematic diagram of a skeleton of a human body, and FIG. 7B includes totally 17 skeleton key points numbered 0 to 16.
- controlling the movement of the connection portion in the controlled model based on the absolute rotation information includes: decomposing the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the relative rotation information.
- the following is an example of a hierarchical relationship: the first hierarchy: pelvis; the second hierarchy: waist; the third hierarchy: thighs (e.g. the left thigh and the right thigh); the fourth hierarchy: calves (e.g., the left calf and the right calf); and the fifth hierarchy: feet.
- the following is another hierarchical relationship: the first hierarchy: chest; the second hierarchy: neck; and the third hierarchy: head.
- the first hierarchy: clavicles, which correspond to the shoulders;
- the second hierarchy: upper arms;
- the third hierarchy: forearms (also referred to as lower arms); and
- the fourth hierarchy: hands.
- the hierarchical relationship decreases in sequence from the first hierarchy to the fifth hierarchy.
- the movement of a part of a high hierarchy affects the movement of a part of a low hierarchy. Therefore, the hierarchy of a traction portion is higher than that of a connection portion.
- movement information of key points corresponding to a part of each hierarchy is first obtained, and then the movement information of the key points of the part of the low hierarchy with respect to the key points of the part of the high hierarchy (i.e., the relative rotation information) is determined based on the hierarchical relationship.
- the relative rotation information may be represented by the following calculation formula (1): rotation quaternions of each key point with respect to the camera coordinate system are {Q_0, Q_1, . . . , Q_18}, and then a rotation quaternion q_i of each key point with respect to a parent key point is calculated as: q_i = Q_parent(i)^−1 · Q_i   (1)
- parent key point parent(i) is the key point of the previous hierarchy of the current key point i
- Q i is a rotation quaternion of the current key point i with respect to the camera coordinate system
- Q parent(i) ⁇ 1 is an inverse rotation parameter of the key point of the previous hierarchy. For example, if Q parent(i) is a rotation parameter of the key point of the previous hierarchy, and the rotation angle is 90 degrees, the rotation angle of Q parent(i) ⁇ 1 is ⁇ 90 degrees.
- controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition; and controlling the movement of the connection portion in the controlled model based on the relative rotation information includes: controlling the movement of the connection portion in the controlled model based on the corrected relative rotation information.
- the second constraint condition includes: a rotatable angle of the connection portion.
- the method further includes: performing posture defect correction on the movement information of the connection portion to obtain corrected movement information of the connection portion.
- Controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model by using the corrected movement information of the connection portion.
- posture defect correction may be performed on the movement information of the connection portion to obtain the corrected movement information of the connection portion.
- the method further includes: performing posture defect correction on the movement information of the at least two parts to obtain corrected movement information of the at least two parts.
- Step S 160 may include: controlling the movement of corresponding parts of the controlled model by using the corrected movement information of the at least two parts.
- the posture defect correction addresses at least one of the following: synchronization defects of the upper limbs and the lower limbs; movement defects of looped legs; movement defects caused by toe-out feet; or movement defects caused by toe-in feet.
- the method further includes: obtaining a posture defect correction parameter according to difference information between the body shape of the target and a standard body shape, where the posture defect correction parameter is used for the correction of the movement information of the at least two parts and/or the movement information of the connection portion.
- the body shape of the target is detected first, and then the detected body shape is compared with the standard body shape to obtain the difference information; and posture defect correction is performed through the difference information.
- a prompt of maintaining a predetermined posture is output on a display interface, and after seeing the prompt, the user maintains the predetermined posture.
- an image device can acquire the image of the user who maintains the predetermined posture; then, whether the user maintains the predetermined posture accurately enough is determined through image detection, so as to obtain the difference information.
- the normal standard standing posture should be that the connection lines of the toes and heels of the feet are parallel to each other.
- the correction of such a non-standard body shape is the posture defect correction.
- the predetermined posture may include, but is not limited to, an upright standing posture of the human body.
- the method further includes: correcting the ratios of different parts of a standard model according to the ratio relationship of different parts of the target, to obtain the corrected controlled model.
- the standard model may be a mean model based on a large amount of human body data.
- the ratios of the different parts of the standard model are corrected according to the ratios of the different parts of the target to obtain the corrected controlled model.
- the corrected parts include, but are not limited to: crotches and/or legs.
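The ratio correction above can be sketched as a simple per-part scaling. This is a hypothetical illustration: the part names, the ratio scheme (each part's length as a fraction of body height), and the example numbers are all assumptions, not values from the patent.

```python
# Hypothetical sketch: correcting the part ratios of a standard (mean)
# model so that they match the ratios measured on the target.

def correct_model_ratios(standard_lengths, target_ratios, standard_ratios):
    """Scale each part of the standard model by target_ratio / standard_ratio."""
    corrected = {}
    for part, length in standard_lengths.items():
        scale = target_ratios[part] / standard_ratios[part]
        corrected[part] = length * scale
    return corrected

# e.g. a target whose legs are 10% longer, relative to body height,
# than those of the mean model
standard = {"thigh": 0.45, "calf": 0.42}
corrected = correct_model_ratios(
    standard,
    target_ratios={"thigh": 0.275, "calf": 0.242},
    standard_ratios={"thigh": 0.25, "calf": 0.22},
)
```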
- the small image in the upper left corner of the image is the acquired image, and the lower right corner is the controlled model of the human body.
- from FIG. 3A to FIG. 3B, and then from FIG. 3B to FIG. 3C, the hand of the user is moving, and the hand of the controlled model is also following the movement.
- the hand movement of the user in FIG. 3A to FIG. 3C is changed in sequence from making a fist to stretching out the palm and to extending the index finger, and the controlled model simulates the changes of the gesture of the user from making a fist to stretching out the palm and to extending the index finger.
- the small image in the upper left corner of the image is the acquired image, and the lower right corner is the controlled model of the human body.
- from FIG. 4A to FIG. 4B, and then from FIG. 4B to FIG. 4C, the trunk of the user is moving, and the trunk of the controlled model is also following the movement. From FIG. 4A to FIG. 4C, the user changes from pushing the crotch to the right of the image, to pushing the crotch to the left of the image, and finally to standing upright.
- the controlled model also simulates the trunk movement of the user.
- the small image in the upper left corner of the image is the acquired image, and the lower right corner is the controlled model of the human body.
- the user strides toward the right side of the image, strides toward the left of the image, and finally stands upright.
- the controlled model also simulates the foot movement of the user.
- the controlled model also simulates the expression changes of the user.
- the embodiments provide an image processing apparatus, including the following modules.
- a first obtaining module 110 is configured to obtain an image.
- a second obtaining module 120 is configured to obtain features of at least two parts of a target based on the image.
- a first determination module 130 is configured to determine, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts.
- a control module 140 is configured to control the movement of the connection portion in a controlled model according to the movement information of the connection portion.
- control module 140 is specifically configured to determine, according to the type of the connection portion, a control mode of controlling the connection portion; and control the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion.
- control module 140 is specifically configured to: in the case that the connection portion is a first-type connection portion, determine that the control mode is a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion.
- control module 140 is specifically configured to: in the case that the connection portion is a second-type connection portion, determine that the control mode is a second-type control mode, where the second-type control mode is used for indirectly controlling the movement of a connection portion in the controlled model corresponding to the second-type connection portion, and the indirect control is achieved by controlling a part in the controlled model corresponding to the part other than the second-type connection portion.
- control module 140 is specifically configured to: in the case that the control mode is the second-type control mode, decompose the movement information of the connection portion to obtain first-type rotation information of the connection portion that the connection portion rotates under the traction of a traction portion; adjust movement information of the traction portion according to the first-type rotation information; and control the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion.
- the apparatus further includes: a decomposition module, configured to decompose the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and the control module 140 is further configured to control, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information.
- the second-type connection portion includes: a wrist; and an ankle.
- if the second-type connection portion is a wrist, the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and if the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
- the first-type connection portion includes a neck connecting the head and the trunk.
- the apparatus further includes: an orientation determination module, configured to determine respective orientation information of the two parts according to the features of the two parts connected by the connection portion; a second determination module, configured to determine candidate orientation information of the connection portion according to the respective orientation information of the two parts; and a selection module, configured to determine the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- the second determination module is configured to determine first candidate orientation information and second candidate orientation information of the connection portion according to the respective orientation information of the two parts.
- the selection module is specifically configured to: select target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determine the movement information of the connection portion according to the target orientation information.
- the orientation determination module is specifically configured to: obtain a first key point and a second key point for each of the two parts; obtain a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generate a first vector based on the first key point and the first reference point, and generate a second vector based on the second key point and the first reference point; and determine the orientation information for each of the two parts based on the first vector and the second vector.
- the orientation determination module is specifically configured to: perform, for each part, cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and take the normal vector as the orientation information of the part.
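The cross-product orientation computation above can be sketched as follows. This is an illustrative sketch: the helper names and the example coordinates are made up, and the normal vector is not normalized, since only its direction carries the orientation information.

```python
# Sketch: two vectors from a part's key points to the first reference
# point are crossed to obtain the normal of the plane the part lies in,
# and that normal is taken as the part's orientation.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (
        u[1]*v[2] - u[2]*v[1],
        u[2]*v[0] - u[0]*v[2],
        u[0]*v[1] - u[1]*v[0],
    )

def part_orientation(first_key_point, second_key_point, reference_point):
    v1 = sub(first_key_point, reference_point)   # first vector
    v2 = sub(second_key_point, reference_point)  # second vector
    return cross(v1, v2)  # normal vector = orientation of the part

normal = part_orientation((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0))  # -> (0.0, 0.0, 1.0)
```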
- the apparatus further includes: a third determination module, configured to determine movement information of the at least two parts based on the features; and the first determination module 130 is specifically configured to determine the movement information of the connection portion according to the movement information of the at least two parts.
- the first determination module 130 is specifically configured to: obtain a third 3D coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtain absolute rotation information of the connection portion according to the third 3D coordinate.
- the control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the absolute rotation information.
- the first determination module 130 is specifically configured to: decompose the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and the control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the relative rotation information.
- the apparatus further includes: a correction module, configured to correct the relative rotation information according to a second constraint condition.
- the control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the corrected relative rotation information.
- the second constraint condition includes: a rotatable angle of the connection portion.
- the present example provides an image processing method, and the method includes the following steps.
- An image is acquired, where the image includes a target, and the target includes, but is not limited to, a human body.
- Face key points of the human body are detected, where the face key points may be contour key points of a face surface.
- Trunk key points and/or limb key points of the human body are detected.
- the trunk key points and/or limb key points here may all be 3D key points, and are represented by 3D coordinates.
- the 3D coordinates may include 3D coordinates obtained by detecting 2D coordinates from a 2D image, and then using a conversion algorithm from 2D coordinates to 3D coordinates.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image acquired using a 3D camera.
- the limb key points here may include: upper limb key points and/or lower limb key points.
- hand key points of the upper limb key points include, but are not limited to: wrist joint key points, metacarpophalangeal joint key points, finger joint key points, and fingertip key points.
- the positions of these key points can reflect the movements of the hand and fingers.
- mesh information of a face is generated.
- An expression base corresponding to a current expression of the target is selected according to the mesh information, and the expression of the controlled model is controlled according to the expression base.
- the expression intensity of the controlled model corresponding to each expression base is controlled according to an intensity coefficient reflected by the mesh information.
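The expression-base control described above resembles blend-shape weighting, which is one common way to realize it; the patent does not name a specific scheme, so this sketch, its vertex layout, and the base names are assumptions.

```python
# Illustrative sketch: driving an expression as a weighted blend of
# expression bases, where each intensity coefficient is reflected by the
# face mesh information.

def blend_expression(neutral, expression_bases, intensities):
    """neutral and each base are per-vertex offsets; intensities in [0, 1]."""
    result = list(neutral)
    for name, base in expression_bases.items():
        w = intensities.get(name, 0.0)
        result = [v + w * d for v, d in zip(result, base)]
    return result

neutral = [0.0, 0.0, 0.0]
bases = {"smile": [0.2, 0.0, 0.1], "blink": [0.0, -0.3, 0.0]}
out = blend_expression(neutral, bases, {"smile": 0.5, "blink": 1.0})
```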
- Quaternions are converted according to the trunk key points and/or the limb key points.
- the trunk movement of the controlled model is controlled according to the quaternion corresponding to the trunk key points; and/or, the limb movement of the controlled model is controlled according to the quaternion corresponding to the limb key points.
- the face key points may include: 106 key points.
- the trunk key points and/or the limb key points may include: 14 key points or 17 key points.
- FIG. 7A shows a schematic diagram including 14 skeleton key points; and FIG. 7B shows a schematic diagram including 17 skeleton key points.
- FIG. 7B may be a schematic diagram of 17 key points generated based on the 14 key points shown in FIG. 7A .
- the 17 key points in FIG. 7B are equivalent to the key points shown in FIG. 7A , with addition of a key point 0, a key point 7, and a key point 9, where the 2D coordinates of the key point 9 may be preliminarily determined based on the 2D coordinates of a key point 8 and a key point 10, and the 2D coordinates of the key point 7 may be determined according to the 2D coordinates of the key point 8 and the 2D coordinates of the key point 0.
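The construction of the added key points can be sketched as below. The patent only says the 2D coordinates are "determined based on" the neighboring key points, so the midpoint rule used here is an assumption for illustration, as are the example coordinates.

```python
# Sketch (midpoint rule assumed): deriving key point 9 from key points 8
# and 10, and key point 7 from key point 8 and the root key point 0.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

kp = {0: (0.0, 0.0), 8: (0.0, 4.0), 10: (0.0, 6.0)}
kp[9] = midpoint(kp[8], kp[10])  # between key points 8 and 10
kp[7] = midpoint(kp[8], kp[0])   # between key point 8 and root key point 0
```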
- the key point 0 may be a reference point provided by the embodiments of the present disclosure, and the reference point may serve as the foregoing first reference point and/or second reference point.
- the controlled model in the present example may be a game character in a game scene, a teacher model in an online education video in an online teaching scene, or a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to application scenarios. If the application scenarios are different, the models and/or appearances of the controlled model are different.
- the teacher model may be more sedate in clothing, such as a suit.
- the controlled model may be wearing sportswear.
- the present example provides an image processing method, and the method includes the following steps.
- An image is acquired, where the image includes a target, and the target includes, but is not limited to, a human body.
- Trunk key points and limb key points of the human body are detected.
- the trunk key points and/or limb key points here may all be 3D key points, and are represented by 3D coordinates.
- the 3D coordinates may include 3D coordinates obtained by detecting 2D coordinates from a 2D image, and then using a conversion algorithm from 2D coordinates to 3D coordinates.
- the 3D coordinates may also be 3D coordinates extracted from a 3D image acquired using a 3D camera.
- the limb key points here may include: upper limb key points and/or lower limb key points. Taking a hand as an example, hand key points of the upper limb key points include, but are not limited to: wrist joint key points, metacarpophalangeal joint key points, finger joint key points, and fingertip key points. The positions of these key points can reflect the movements of the hand and fingers.
- the trunk key points are converted into a quaternion representing the trunk movement, and the quaternion may be called a trunk quaternion.
- the limb key points are converted into a quaternion representing the limb movement, and the quaternion may be called a limb quaternion.
- the trunk movement of the controlled model is controlled using the trunk quaternion.
- the limb movement of the controlled model is controlled using a limb quaternion.
- the trunk key points and the limb key points may include: 14 key points or 17 key points.
- the details are shown in FIG. 7A or FIG. 7B .
- the controlled model in the present example may be a game character in a game scene, a teacher model in an online education video in an online teaching scene, or a virtual anchor in a virtual anchor scene.
- the controlled model is determined according to application scenarios. If the application scenarios are different, the models and/or appearances of the controlled model are different.
- the teacher model may be more sedate in clothing, such as a suit.
- the controlled model may be wearing sportswear.
- the present example provides an image processing method, and the method includes the following steps.
- An image is obtained, where the image includes a target, and the target may be a human body.
- a 3D posture of the target in a 3D space is obtained according to the image, where the 3D posture may be represented by 3D coordinates of skeleton key points of the human body.
- Absolute rotation parameters of joints of the human body in a camera coordinate system are obtained, where the absolute rotation parameters may be determined from the coordinates in the camera coordinate system.
- Coordinate directions of the joints are obtained according to the coordinates.
- Relative rotation parameters of the joints are determined according to a hierarchical relationship. Determining the relative rotation parameters may specifically include: determining the positions of key points of the joints with respect to the root node of the human body. The relative rotation parameters may be represented by quaternions.
- the hierarchical relationship here may be a traction relationship among the joints. For example, the movement of the elbow joint causes the movement of the wrist joint to some extent, and the movement of the shoulder joint also causes the movement of the elbow joint, and so on.
- the hierarchical relationship may be predetermined according to the joints of the human body.
- the rotation of the controlled model is controlled using the quaternions.
- the following is an example of a hierarchical relationship: the first hierarchy: pelvis; the second hierarchy: waist; the third hierarchy: thighs (e.g. the left thigh and the right thigh); the fourth hierarchy: calves (e.g., the left calf and the right calf); and the fifth hierarchy: feet.
- the following is another hierarchical relationship: the first hierarchy: chest; the second hierarchy: neck; and the third hierarchy: head.
- the first hierarchy: clavicles, which correspond to the shoulders;
- the second hierarchy: upper arms;
- the third hierarchy: forearms (also referred to as lower arms); and
- the fourth hierarchy: hands.
- the hierarchical relationship decreases in sequence from the first hierarchy to the fifth hierarchy.
- the movement of a part of a high hierarchy affects the movement of a part of a low hierarchy. Therefore, the hierarchy of a traction portion is higher than that of a connection portion.
- movement information of key points of a part of each hierarchy is first obtained, and then the movement information of the key points of the part of the low hierarchy with respect to the key points of the part of the high hierarchy (i.e., the relative rotation information) is determined based on the hierarchical relationship.
- the relative rotation information may be represented by a calculation formula: rotation quaternions of each key point with respect to the camera coordinate system are ⁇ Q 0 , Q 1 , . . . , Q 18 ⁇ , and then a rotation quaternion q i of each key point with respect to a parent key point is calculated according to formula (1).
- control of the movement of each joint of the controlled model by using a quaternion may include: controlling the movement of each joint of the controlled model by using q i .
- the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle within a constraint condition, where the constraint condition may be used for angle limitation of the first Euler angle; and obtaining a quaternion corresponding to the second Euler angle, and then controlling the rotation of the controlled model by using the quaternion.
- the second Euler angle may be directly converted into a quaternion.
- FIG. 7B is a schematic diagram of a skeleton having 17 key points.
- FIG. 8 is a schematic diagram of a skeleton having 19 key points.
- the bones shown in FIG. 8 may correspond to the 19 key points, which respectively refer to the following bones: pelvis, waist, left thigh, left calf, left foot; right thigh, right calf, right foot, chest, neck, head, left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, and left hand.
- (x_i, y_i, z_i) may be the coordinates of the i-th key point, where the value of i is from 0 to 16.
- p i represents the 3D coordinates of a node i in a local coordinate system, is generally a fixed value carried by the original model, and does not need to be modified and migrated.
- q_i is a quaternion that represents the rotation of the bone controlled by node i in its parent node's coordinate system, and may also be considered as the rotation of the local coordinate system of the current node with respect to the local coordinate system of its parent node.
- the process of calculating the quaternions of the key points corresponding to all the joints may be as follows: determining coordinate axis directions of a local coordinate system of each node. For each bone, the direction pointing from a child node to a parent node is an x-axis; a rotation axis that causes a maximum rotatable angle of the bone is a z-axis, and if the rotation axis cannot be determined, a direction that the human body faces is taken as a y-axis.
- See FIG. 9 for details.
- a left-handed coordinate system is used for description, and during specific implementation, a right-handed coordinate system may also be used.
- (i-j) represents a vector pointing from key point i to key point j, and × represents a cross product. For example, (1-7) represents a vector pointing from key point 1 to key point 7.
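The local-frame construction described above (x-axis from child to parent, z-axis along the bone's main rotation axis, y-axis completing the frame) can be sketched as follows. The patent describes a left-handed system; this sketch uses a right-handed frame for illustration, and assumes the rotation axis is a unit vector not parallel to the bone.

```python
import math

# Sketch: building a bone's local coordinate frame from its child and
# parent node positions and the bone's main rotation axis.

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def bone_frame(child_pos, parent_pos, rotation_axis):
    x = normalize(tuple(p - c for c, p in zip(child_pos, parent_pos)))  # child -> parent
    y = normalize(cross(rotation_axis, x))  # y orthogonal to both
    z = cross(x, y)                         # re-orthogonalized z axis
    return x, y, z
```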
- nodes 8, 15, 11, and 18 are the four key points of the hands and feet. Because the quaternions of these four key points can be determined only when specific postures are used, they are not included in this table.
- the serial numbers of the nodes of the skeleton having 19 points are shown in FIG. 8 , and for the serial numbers of the key points of the skeleton having 17 points, reference may be made to FIG. 7B .
- Y = asin(2*(q_1*q_3 + q_0*q_2)), and the value of Y is between −1 and 1   (3)
- X is an Euler angle in a first direction
- Y is an Euler angle in a second direction
- Z is an Euler angle in a third direction. Any two of the first direction, the second direction, and the third direction are perpendicular.
- the angles may then be limited. If an angle exceeds the range, it is clamped to the boundary value, so as to obtain corrected second Euler angles (X′, Y′, Z′), from which a new rotation quaternion q_i′ of the local coordinate system is restored.
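The angle-limiting step can be sketched as a per-axis clamp. This is an illustrative sketch: the limit ranges shown are assumptions, not values from the patent, and the subsequent Euler-to-quaternion restoration is omitted.

```python
# Sketch: clamping first Euler angles (X, Y, Z) to their allowed ranges
# to obtain the corrected second Euler angles (X', Y', Z').

def clamp(value, low, high):
    return max(low, min(high, value))

def limit_euler(angles, limits):
    """angles: (X, Y, Z); limits: ((min, max), ...) per direction, in degrees."""
    return tuple(clamp(a, lo, hi) for a, (lo, hi) in zip(angles, limits))

# e.g. an illustrative joint whose rotation about each axis is limited
limits = ((-60.0, 60.0), (-90.0, 90.0), (-45.0, 45.0))
corrected = limit_euler((75.0, -30.0, 50.0), limits)  # -> (60.0, -30.0, 45.0)
```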
- the method further includes: performing posture optimization adjustment on the second Euler angles. For example, some of the second Euler angles are adjusted into posture-optimized Euler angles based on a preset rule, thereby obtaining third Euler angles.
- the obtaining of quaternions corresponding to the second Euler angles may include: converting the third Euler angles into quaternions for controlling a controlled model.
- the method further includes: converting a second Euler angle into a quaternion, and then performing posture optimization processing on the converted quaternion. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
- the adjustment performed on the second Euler angle or the quaternion obtained by conversion from the second Euler angle may be adjustment based on a preset rule, and may also be optimization adjustment performed by a deep learning model.
- a preset rule may also be optimization adjustment performed by a deep learning model.
- a yet another image processing method may further include preprocessing.
- the width of the crotch and/or shoulder of the controlled model is modified according to the size of the acquired human body so as to correct the overall posture of the human body.
- for the standing posture of the human body, upright standing correction and abdomen lifting correction may be performed. Some people lift their abdomens when standing, and abdomen lifting correction makes the controlled model not simulate the abdomen lifting action of the user. Some people hunch when standing, and hunching correction makes the controlled model not simulate the hunching action, etc. of the user.
- the present example provides an image processing method, and the method includes the following steps.
- An image is obtained, where the image includes a target, and the target may include at least one of a human body, human upper limbs, or human lower limbs.
- a coordinate system of a target joint is obtained according to position information of the target joint in an image coordinate system. According to position information of a limb part in the image coordinate system, a coordinate system of a limb part capable of causing the target joint to move is obtained.
- the rotation of the target joint with respect to the limb part is determined based on the coordinate system of the target joint and the coordinate system of the limb part to obtain rotation parameters, where the rotation parameters include self-rotation parameters of the target joint and rotation parameters under the traction of the limb part.
- Limitation is performed on the rotation parameters under the traction of the limb part by using first angle limitation to obtain final traction rotation parameters.
- the rotation parameters of the limb part are corrected according to the final traction rotation parameters.
- Relative rotation parameters are obtained according to a coordinate system of the limb part and the corrected rotation parameters of the limb part; and second angle limitation is performed on the relative rotation parameters to obtain limited relative rotation parameters.
- a quaternion is obtained according to the limited rotation parameters.
- the movement of the target joint of a controlled model is controlled according to the quaternion.
- a coordinate system of a hand in the image coordinate system is obtained, and a coordinate system of a lower arm and a coordinate system of an upper arm are obtained.
- the target joint is a wrist joint.
- the rotation of the hand with respect to the lower arm is decomposed into self-rotation and rotation under traction.
- the rotation under traction is transferred to the lower arm; specifically, the rotation under traction is assigned to the rotation of the lower arm in a corresponding direction.
- the maximum rotation of the lower arm is limited with the first angle limitation of the lower arm.
- the rotation of the hand with respect to the corrected lower arm is determined to obtain a relative rotation parameter. Second angle limitation is performed on the relative rotation parameter to obtain the rotation of the hand with respect to the lower arm.
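The split of the hand's rotation into self-rotation and rotation under traction can be realized with a swing-twist decomposition, sketched below. The patent does not name this algorithm, so treating "self-rotation" as the twist about the forearm axis and "rotation under traction" as the swing remainder is an assumption; quaternions are (w, x, y, z) tuples.

```python
import math

# Sketch: swing-twist decomposition of a unit quaternion q about a unit
# axis, so that q = swing * twist. The twist is the self-rotation about
# the axis; the swing is the part transferred to the traction portion.

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def swing_twist(q, axis):
    w, x, y, z = q
    # project the rotation's vector part onto the twist axis
    dot = x*axis[0] + y*axis[1] + z*axis[2]
    tw = (w, dot*axis[0], dot*axis[1], dot*axis[2])
    n = math.sqrt(sum(c * c for c in tw))
    if n < 1e-9:
        twist = (1.0, 0.0, 0.0, 0.0)  # degenerate case: pure 180-degree swing
    else:
        twist = tuple(c / n for c in tw)
    swing = quat_multiply(q, quat_conjugate(twist))
    return swing, twist
```

For a rotation that is purely about the forearm axis, the swing comes out as the identity and the twist equals the input, i.e. the hand is only self-rotating and nothing is transferred to the forearm.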
- a coordinate system of a foot in the image coordinate system is obtained, and a coordinate system of a calf and a coordinate system of a thigh are obtained.
- the target joint is an ankle joint.
- the rotation of the foot with respect to the calf is decomposed into self-rotation and rotation under traction.
- the rotation under traction is transferred to the calf; specifically, the rotation under traction is assigned to the rotation of the calf in a corresponding direction.
- the maximum rotation of the calf is limited with the first angle limitation of the calf.
- the rotation of the foot with respect to the corrected calf is determined to obtain a relative rotation parameter. Second angle limitation is performed on the relative rotation parameter to obtain the rotation of the foot with respect to the calf.
- the neck controls the orientation of the head, and the face, human body, and human hands are separate components.
- the rotation of the neck is critical for joining the face, human body, and human hands into an integral whole.
- the orientation of a human body may be calculated according to key points of the human body.
- the orientation of a face can be calculated according to key points of the face.
- the relative angle between the two orientations is the rotation angle of the neck.
- The angle of a connection portion thus needs to be determined.
- the angle problem of the connection portion is solved through relative calculation. For example, if the body is at 0 degrees and the face is at 90 degrees, then to control a controlled model, only the part angles and the angle changes of the head and the body need to be considered.
- the angle of the neck of the controlled model needs to be calculated to control the head of the controlled model.
- the current orientation of the face of a user is determined based on an image, and then the rotation angle of the neck is calculated. Since the rotation of the neck has a limited range, for example, assuming that the neck can rotate 90 degrees at most, if the calculated rotation angle exceeds this range (−90 degrees to 90 degrees), the boundary of the range is taken as the rotation angle of the neck (i.e., −90 degrees or 90 degrees).
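This relative-angle-plus-clamping step can be sketched as follows. The yaw-only treatment and the 90-degree default are illustrative assumptions, not the patent's full 3D formulation:

```python
def neck_rotation(body_yaw_deg, face_yaw_deg, max_neck_deg=90.0):
    """Rotation angle of the neck: the face orientation relative to the body
    orientation, wrapped to [-180, 180), then clamped to the neck's range."""
    rel = (face_yaw_deg - body_yaw_deg + 180.0) % 360.0 - 180.0
    return max(-max_neck_deg, min(max_neck_deg, rel))
```

For example, a body at 0 degrees and a face at 120 degrees yields a raw relative angle of 120 degrees, which is clamped to the 90-degree boundary of the range.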
- the orientation of the body or face may be calculated using 3D key points.
- the specific calculation of the orientation may be: the cross product of two vectors that are not on a straight line, in the plane where the face or body is located, is computed to obtain a normal vector of the plane, and this normal vector is taken as the orientation of the face or body.
- the orientation may be taken as the orientation of a connection portion (the neck) between the body and the face.
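As a concrete sketch of this cross-product calculation (plain Python, no external dependencies; the choice of three key points to span the plane is an illustrative assumption):

```python
def orientation_normal(p0, p1, p2):
    """Orientation of a face/body plane: the unit normal obtained as the
    cross product of two non-collinear in-plane vectors (p1-p0 and p2-p0)."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return [c / mag for c in n]
```

Note that the sign of the normal depends on the order of the two vectors; in practice one of the two opposite directions must be selected consistently.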
- the embodiments of the present application provide an image device, including: a memory 1002, configured to store information; and a processor 1001, connected to the memory 1002 and configured to execute computer executable instructions stored on the memory 1002 so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example, the image processing methods shown in FIG. 1A, FIG. 1B, and/or FIG. 2.
- the memory 1002 may be any type of memory, such as a random access memory, a read-only memory, and a flash memory.
- the memory 1002 may be configured to store information, for example, store computer executable instructions, etc.
- the computer executable instructions may be various program instructions, for example, a target program instruction and/or a source program instruction, etc.
- the processor 1001 may be any type of processor, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, an image processor, etc.
- the processor 1001 may be connected to the memory 1002 by means of a bus.
- the bus may be an integrated circuit bus, etc.
- the image device may further include: a communication interface 1003.
- the communication interface 1003 may include a network interface, for example, a local area network interface, a transceiver antenna, etc.
- the communication interface 1003 is also connected to the processor 1001 and can be configured to transmit and receive information.
- the image device further includes a human-machine interaction interface 1005.
- the human-machine interaction interface 1005 may include various input and output devices, for example, a keyboard and a touch screen, etc.
- the image device further includes: a display 1004 .
- the display may display various prompts, acquired face images and/or various interfaces.
- the embodiments of the present application provide a non-volatile computer storage medium.
- the computer storage medium stores computer executable codes. After the computer executable codes are executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example, the image processing methods shown in FIG. 1A, FIG. 1B, and/or FIG. 2.
- the disclosed device and method in the embodiments provided in the present application may be implemented by other modes.
- the device embodiments described above are merely exemplary.
- the unit division is merely logical function division, and other division modes may be adopted in actual implementation.
- a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the displayed or discussed mutual couplings or direct couplings or communication connections among the components may be implemented by means of some interfaces.
- the indirect couplings or communication connections between the devices or units may be implemented in electronic, mechanical, or other forms.
- the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, may be located at one position, or may be distributed on a plurality of network units. A part of or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each of the units may exist as an independent unit, or two or more units are integrated into one unit, and the integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a hardware and software functional unit.
- the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps including the foregoing method embodiments are performed; moreover, the above-mentioned non-volatile storage medium includes various media capable of storing program codes, such as a mobile storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
- The present application is a continuation of International Application No. PCT/CN2020/072549, filed on Jan. 16, 2020, which claims priority to Chinese Patent Application No. 201910049830.6, filed on Jan. 18, 2019, and entitled “IMAGE PROCESSING METHOD AND APPARATUS, IMAGE DEVICE, AND STORAGE MEDIUM”, and to Chinese Patent Application No. 201910363433.6, filed on Apr. 30, 2019, and entitled “IMAGE PROCESSING METHOD AND APPARATUS, IMAGE DEVICE, AND STORAGE MEDIUM”, all of which are incorporated herein by reference in their entirety.
- The present disclosure relates to the field of information technologies, and in particular, to an image processing method and apparatus, an image device, and a storage medium.
- With the development of information technologies, users can perform online teaching and online anchoring through video recording, and somatosensory games and the like have become possible. However, in some cases, for example, in somatosensory games, users are required to wear special somatosensory devices to detect the activities of their own limbs and the like, so as to control game characters. Moreover, when performing online teaching or online anchoring, the face, limbs, or the like of a user are completely exposed online; on the one hand, this may involve a user privacy issue, and on the other hand, this may also involve an information security issue. In order to solve such a privacy or security issue, a face image can be covered by a mosaic, etc., but this will affect the video effect.
- In view of this, embodiments of the present disclosure are expected to provide an image processing method and apparatus, an image device, and a storage medium.
- In a first aspect, the present disclosure provides an image processing method, including:
- obtaining an image; obtaining features of at least two parts of a target based on the image; determining, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts; and controlling the movement of the connection portion in a controlled model according to the movement information of the connection portion.
- Based on the foregoing solutions, controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: determining, according to the type of the connection portion, a control mode of controlling the connection portion; and controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion.
- Based on the foregoing solutions, determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a first-type connection portion, the control mode is a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion.
- Based on the foregoing solutions, determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a second-type connection portion, the control mode is a second-type control mode, where the second-type control mode is used for indirectly controlling the movement of a connection portion in the controlled model corresponding to the second-type connection portion, and the indirect control is achieved by controlling a part in the controlled model corresponding to the part other than the second-type connection portion.
- Based on the foregoing solutions, controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion includes: in the case that the control mode is the second-type control mode, decomposing the movement information of the connection portion to obtain first-type rotation information of the connection portion rotating under the traction of a traction portion; adjusting movement information of the traction portion according to the first-type rotation information; and controlling the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion.
- Based on the foregoing solutions, the method further includes: decomposing the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and controlling, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information.
- Based on the foregoing solutions, the second-type connection portion includes: a wrist; and an ankle.
- Based on the foregoing solutions, in the case that the second-type connection portion is a wrist, the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and in the case that the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
- Based on the foregoing solutions, the first-type connection portion includes a neck connecting a head and a trunk.
- Based on the foregoing solutions, determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining respective orientation information of the two parts according to the features of the two parts connected by the connection portion; determining candidate orientation information of the connection portion according to the respective orientation information of the two parts; and determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- Based on the foregoing solutions, determining the candidate orientation information of the connection portion according to the respective orientation information of the two parts includes: determining first candidate orientation information and second candidate orientation information of the connection portion according to the respective orientation information of the two parts.
- Based on the foregoing solutions, determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition includes: selecting target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determining the movement information of the connection portion according to the target orientation information.
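The selection among candidate orientations can be sketched as follows. This is a simplified scalar-angle sketch, assuming the two candidates (often the two opposite normal directions of the plane) are expressed as angles and the orientation change constraint is an interval; the function name and signature are illustrative, not from the disclosure:

```python
def select_target_orientation(cand1_deg, cand2_deg, lo_deg, hi_deg):
    """Pick whichever candidate orientation falls inside the orientation-change
    constraint range [lo_deg, hi_deg]; None if neither candidate qualifies."""
    for cand in (cand1_deg, cand2_deg):
        if lo_deg <= cand <= hi_deg:
            return cand
    return None
```

The selected target orientation is then used to derive the movement information of the connection portion.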
- Based on the foregoing solutions, determining the respective orientation information of the two parts according to the features of the two parts includes: obtaining, for each of the two parts, a first key point and a second key point; obtaining a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generating a first vector based on the first key point and the first reference point; generating a second vector based on the second key point and the first reference point; and determining the orientation information for each of the two parts based on the first vector and the second vector.
- Based on the foregoing solutions, determining the orientation information for each of the two parts based on the first vector and the second vector includes: performing, for each part, cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and taking the normal vector as the orientation information of the part.
- Based on the foregoing solutions, the method further includes: determining movement information of the at least two parts based on the features. Determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining the movement information of the connection portion according to the movement information of the at least two parts.
- Based on the foregoing solutions, determining the movement information of the connection portion according to the movement information of the at least two parts includes: obtaining a third 3-Dimensional (3D) coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtaining absolute rotation information of the connection portion according to the third 3D coordinate; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the absolute rotation information.
- Based on the foregoing solutions, controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: decomposing the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the relative rotation information.
- Based on the foregoing solutions, controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition; and controlling the movement of the connection portion in the controlled model based on the relative rotation information includes: controlling the movement of the connection portion in the controlled model based on the corrected relative rotation information.
- Based on the foregoing solutions, the second constraint condition includes: a rotatable angle of the connection portion.
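The absolute-to-relative decomposition and the angle limitation described above can be sketched with unit quaternions. The (w, x, y, z) convention, the helper names, and the single parent-child step of the traction hierarchy are assumptions for illustration, not the patent's exact formulation:

```python
import math

def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def q_inv(q):
    """Inverse of a unit quaternion is its conjugate."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def relative_rotation(q_parent_abs, q_child_abs):
    """Decompose an absolute child rotation against its parent in the traction
    hierarchy: child_abs = parent_abs * relative, hence
    relative = parent_abs^-1 * child_abs."""
    return q_mul(q_inv(q_parent_abs), q_child_abs)

def clamp_rotation_angle(q, max_deg):
    """Second-constraint sketch: limit the rotation angle of q to max_deg,
    keeping the rotation axis unchanged."""
    w = max(-1.0, min(1.0, q[0]))
    angle = 2.0 * math.degrees(math.acos(w))
    if angle <= max_deg:
        return q
    half = math.radians(max_deg) / 2.0
    s = math.sqrt(max(0.0, 1.0 - w * w))
    if s < 1e-12:  # degenerate axis; fall back to identity
        return (1.0, 0.0, 0.0, 0.0)
    axis = (q[1] / s, q[2] / s, q[3] / s)
    return (math.cos(half), axis[0] * math.sin(half),
            axis[1] * math.sin(half), axis[2] * math.sin(half))
```

The clamped relative rotation corresponds to the corrected relative rotation information used to control the connection portion in the controlled model.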
- In a second aspect, the present disclosure provides an image processing apparatus, including:
- a first obtaining module, configured to obtain an image; a second obtaining module, configured to obtain features of at least two parts of a target based on the image; a first determination module, configured to determine, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts; and a control module, configured to control the movement of the connection portion in a controlled model according to the movement information of the connection portion.
- In a third aspect, the present disclosure provides an image device, including: a memory; and a processor, connected to the memory, and configured to execute computer-executable instructions on the memory to implement the image processing method according to any one of the foregoing items.
- In a fourth aspect, the present disclosure provides a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions, and after the computer-executable instructions are executed by a processor, the image processing method according to any one of the foregoing items can be implemented.
- In the technical solutions provided by the embodiments of the present disclosure, the movement information of the connection portion is obtained according to the features of the at least two parts and the first movement constraint condition of the connection portion. In this way, for the connection portion whose movement information is inconvenient to directly obtain, the movement information of the connection portion can also be obtained precisely, and thus the movement of the connection portion corresponding to the controlled model can be controlled. When the controlled model is used to simulate the movement of a user for live video streaming, the movement of the connection portion in the controlled model can be controlled more precisely, so that the controlled model can precisely simulate the movements of acquisition objects such as users, thereby protecting the privacy of the user during the live video streaming.
- FIG. 1A is a schematic flowchart of an image processing method provided by the embodiments of the present disclosure.
- FIG. 1B is a schematic flowchart of an image processing method provided by another embodiment of the present disclosure.
- FIG. 2 is a schematic flowchart of an image processing method provided by yet another embodiment of the present disclosure.
- FIG. 3A to FIG. 3C are schematic diagrams of changes in hand movement of a collected user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 4A to FIG. 4C are schematic diagrams of changes in trunk movement of an acquired user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 5A to FIG. 5C are schematic diagrams of foot movement of an acquired user simulated by a controlled model provided by the embodiments of the present disclosure.
- FIG. 6 is a schematic structural diagram of an image processing apparatus provided by the embodiments of the present disclosure.
- FIG. 7A is a schematic diagram of key points of a skeleton provided by the embodiments of the present disclosure.
- FIG. 7B is a schematic diagram of key points of a skeleton provided by the embodiments of the present disclosure.
- FIG. 8 is a schematic diagram of a skeleton provided by the embodiments of the present disclosure.
- FIG. 9 is a schematic diagram of a local coordinate system of different bones of a human body provided by the embodiments of the present disclosure.
- FIG. 10 is a schematic structural diagram of an image device provided by the embodiments of the present disclosure.
- The technical solutions of the present disclosure are further described in detail below with reference to the accompanying drawings and specific embodiments of the specification.
- As shown in FIG. 1A, the embodiments provide an image processing method, including the following steps.
- At step S110, an image is obtained.
- At step S120, features of at least two parts of a target are obtained based on the image.
- At step S130, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion is determined, where the connection portion connects two of the at least two parts.
- At step S140, the movement of the connection portion in a controlled model is controlled according to the movement information of the connection portion.
- The image processing method provided by the embodiments can drive the movement of the controlled model through image processing.
- The image processing method provided by the embodiments can be applied to an image device. The image device may be any electronic device capable of performing image processing, for example, electronic devices for performing image acquisition, image display, and image processing. The image device includes, but is not limited to, various terminal devices, such as a mobile terminal and/or a fixed terminal, and may also include various servers capable of providing image services. The mobile terminal includes portable devices easy to carry by a user, such as a mobile phone or a tablet computer, and may also include a device worn by the user, such as a smart bracelet, a smart watch, or smart glasses. The fixed terminal includes a fixed desktop computer, etc.
- In the embodiments, the image obtained in step S110 may be a two-dimensional (2D) image or a three-dimensional (3D) image. The 2D image may include: an image acquired by a monocular or multiocular camera, such as a red green blue (RGB) image. The approach of obtaining the image may include: acquiring the image by using a camera of the image device; and/or receiving an image from an external device; and/or reading the image from a local database or a local memory.
- The 3D image may be a 3D image obtained by detecting 2D coordinates from a 2D image, and then using a conversion algorithm from 2D coordinates to 3D coordinates. The 3D image may also be an image acquired using a 3D camera.
- In step S110, the obtained image may be one image frame or multiple image frames. For example, when the obtained image is one image frame, the subsequently obtained movement information may reflect the movement of the connection portion in the current image with respect to the corresponding connection portion in an initial coordinate system (also referred to as a camera coordinate system). For another example, when multiple image frames are obtained, the subsequently obtained movement information may reflect the movement of the connection portion in the current image with respect to the corresponding connection portions in the previous image frames, or it may reflect the movement of the connection portion in the current image with respect to the corresponding connection portion in the camera coordinate system. The present application does not limit the number of obtained images.
- Step S120 may include: obtaining the features of the at least two parts of the target by detecting the image, where the two parts are different parts on the target. The two parts may be continuously distributed on the target, and may also be distributed on the target at an interval.
- For example, if the target is a person, the at least two parts may include at least two of the following: head, trunk, limbs, upper limbs, lower limbs, hands, feet, or the like. The connection portion may be a neck connecting the head and the trunk, a right hip connecting the right leg and the trunk, a wrist connecting the hand and the forearm, or the like. In some other embodiments, the target is not limited to humans, and may also be various movable living bodies or non-living bodies, such as animals.
- In the embodiments, the features of at least two parts are obtained, and the features may be features characterizing the spatial structure information, position information, or movement states of the two parts in various forms.
- The features of the two parts include, but are not limited to, various image features. For example, the image features may include color features and/or optical flow features obtained using an optical flow algorithm. The color features include, but are not limited to, RGB features and/or gray-scale features. In the embodiments, a deep learning model such as a neural network may be used to detect the image, so as to obtain the features.
- After the features are obtained, the precise movement information of the connection portion can be obtained based on the features, the connection relationship between the two parts and the connection portion, and the first movement constraint condition that the movement of the connection portion needs to satisfy.
- For a case where one connection portion connects three or more parts, the connection portion may be split into a plurality of sub-connection portions; the movement information of each sub-connection portion is then calculated respectively according to the method provided by the present disclosure, and the movement information of the sub-connection portions is merged to obtain the movement information of the connection portion. For the sake of simplicity, the present disclosure is described below for the case that one connection portion connects two parts.
- The controlled model may be a model corresponding to the target. For example, if the target is a person, the controlled model is a human body model. If the target is an animal, the controlled model may be a corresponding animal body model. If the target is a vehicle, the controlled model may be a vehicle model.
- In the embodiments, the controlled model is a model for the category to which the target belongs. The model may be predetermined, and may further have multiple styles. The style of the controlled model may be determined based on a user instruction. The controlled model may include a variety of styles, such as a simulated real-person style, an anime style, an Internet celebrity style, different temperament styles, and a game style. Different temperament styles may be a literary style or a rock style. Under the game style, the controlled model may be a game character.
- For example, in the process of network teaching, some teachers are not willing to expose their faces and bodies, and consider these to be private. If a video is recorded directly, the face, body, etc. of the teacher would necessarily be exposed. In the embodiments, a movement image of the teacher may be obtained through image acquisition, etc., and then a virtual controlled model may be controlled to move through feature extraction and movement information acquisition. In this way, on the one hand, the teacher's own limb movement makes the controlled model simulate that movement to complete the limb movement teaching; on the other hand, since the movement of the controlled model is used for teaching, the face and body of the teacher need not be directly exposed in the teaching video, thereby protecting the privacy of the teacher.
- For another example, in a road surveillance video, if the video of vehicles is acquired directly, once the video is exposed to the network, all the vehicle information of certain specific users is exposed; yet without monitoring, situations may occur in which responsibility cannot be determined in the event of a traffic accident. If the method of the embodiments is used, a surveillance video is obtained by using a vehicle model to simulate the real vehicle movement; the license plate information of the vehicles and/or the overall outlines of the vehicles are preserved in the surveillance video, while the brands, models, colors, conditions, etc. of the vehicles can all be hidden, thereby protecting the privacy of the user.
- In the technical solutions provided by the embodiments of the present disclosure, the movement information of the connection portion is obtained according to the features of the at least two parts. For the connection portion whose movement information is inconvenient to directly obtain, the movement information of the connection portion can also be obtained precisely, and thus the movement of the connection portion corresponding to the controlled model is controlled. Thus, when the controlled model is used to simulate the movement of the target for live video broadcast, the movement of the connection portion in the controlled model can be controlled precisely, so that the controlled model can precisely simulate the movements of acquisition objects such as users, thereby protecting the privacy of the user during the live video broadcast.
- Further, as shown in FIG. 1B, the embodiments further provide an image processing method. On the basis of FIG. 1A, the method further includes steps S150 and S160.
- At step S150, movement information of the at least two parts is determined based on the features.
- At step S160, according to the movement information of the at least two parts, the movement of the parts of the controlled model is controlled.
- After the features of the at least two parts are obtained, the movement information of the at least two parts can be obtained. The movement information represents movement changes and/or expression changes, etc. of the at least two parts between two adjacent moments.
- After the movement information of the at least two parts is obtained, the movement information of the connection portion may be obtained according to the connection relationship between the connection portion and the two parts, and the corresponding first movement constraint condition.
- The information form of the movement information of the at least two parts includes, but is not limited to, at least one of the following: the coordinates of the key points corresponding to the parts. The coordinates include, but are not limited to: 2D coordinates and 3D coordinates. The coordinates can represent the changes of the key points corresponding to the parts with respect to a reference position, and thus can represent the movement state of the corresponding parts.
- The movement information may be represented in various information forms, such as vectors, arrays, one-dimensional values, and matrices.
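The idea that movement information can be derived from key-point coordinate changes between adjacent moments can be sketched as follows. This is a minimal illustration of our own; the function and variable names are not taken from the disclosure:

```python
def movement_vectors(prev_frame, curr_frame):
    """Displacement of each key point between two adjacent moments."""
    return [
        (cx - px, cy - py)
        for (px, py), (cx, cy) in zip(prev_frame, curr_frame)
    ]

prev_frame = [(100.0, 200.0), (150.0, 210.0)]  # 2D key points at moment t
curr_frame = [(103.0, 198.0), (150.0, 215.0)]  # the same key points at moment t+1
deltas = movement_vectors(prev_frame, curr_frame)  # one vector per key point
```

Each resulting vector describes how the corresponding key point moved with respect to its previous position, which is one concrete form the movement state of a part can take.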
- In the embodiments, the movement information of the two parts is further obtained. The movement of the parts of the controlled model is controlled by means of the movement information of the two parts, so that each part in the controlled model can simulate the target to move.
- In some embodiments, as shown in FIG. 2, on the basis of FIG. 1B, step S120 may include the following steps.
- At step S121, a first-type feature of a first-type part of the target is obtained based on the image.
- At step S122, a second-type feature of a second-type part of the target is obtained based on the image, where the type of the second-type feature is different from that of the first-type feature.
- In the embodiments, the first-type feature and the second-type feature both represent the spatial structure information, position information, and/or movement states of the corresponding parts, but they are different types of features.
- Different types of features have different characteristics, and each achieves higher precision when applied to its corresponding type of part. In some embodiments, the first-type feature of the first-type part and the second-type feature of the second-type part are respectively obtained based on the image. The means of obtaining the first-type feature and the second-type feature differ; for example, the features are obtained using different deep learning models or deep learning modules. The obtaining logic of the first-type feature is different from that of the second-type feature.
- The first-type part and the second-type part are different types of parts. The different types of parts may be distinguished through the movable amplitudes of the different types of parts, or distinguished through the movement fineness of the different types of parts.
- In the embodiments, the first-type part and the second-type part may be two types of parts with a relatively large difference in the maximum amplitudes of movement. For example, the first-type part may be a head. The five sense organs of the head can move, but the movements of the five sense organs of the head are all relatively small; the entire head can also move, for example, nodding or shaking the head, but the movement amplitude is relatively small with respect to the movement amplitude of the limbs or the trunk.
- The second-type part may be the upper limbs, the lower limbs, or all four limbs, and the movement amplitudes of the limbs are all very large. If the movement states of the two types of parts were represented by the same feature, problems such as a decrease in precision or an increase in algorithm complexity, caused by fitting the movement amplitude of a particular part, could arise.
- Here, according to the characteristics of the different types of parts, different types of features are used to obtain the movement information. Compared with the related approach of using the same type of feature for all types of parts, the information precision of at least one type of part can be increased, thereby improving the precision of the movement information.
- In some embodiments, step S121 may include: obtaining expression features of the head based on the image.
- In the embodiments, the first-type part is a head. The head includes a face. The expression features include, but are not limited to, at least one of the following: the movement of the eyebrows, the movement of the mouth, the movement of the nose, the movement of the eyes, and the movement of the cheeks. The movement of the eyebrows may include: raising the eyebrows and drooping the eyebrows. The movement of the mouth may include: opening the mouth, closing the mouth, twitching the mouth, pouting, grinning, snarling, etc. The movement of the nose may include: nose contraction generated by inhalation through the nose and nose extension accompanied by blowing outward. The movement of the eyes may include, but is not limited to: the movement of the eye sockets and/or the movement of the eyeballs. The sizes and/or shapes of the eye sockets are changed by the movement of the eye sockets; for example, both the shapes and sizes of the eye sockets change when squinting, glaring, or smiling. The movement of the eyeballs may include: the positions of the eyeballs in the eye sockets; for example, a change in the user's line of sight may cause the eyeballs to be in different positions in the eye sockets, and the joint movement of the left and right eyeballs may reflect different emotional states of the user. As for the movement of the cheeks, some users show dimples or pear-shaped dimples when they laugh, and the shape of their cheeks changes accordingly.
- In some embodiments, the movement of the head is not limited to the expression movement, and therefore, the first-type feature is not limited to the expression features and further includes hair movement features such as the hair movement of the head. The first-type feature may further include: the entire movement feature of the head, such as nodding and/or shaking the head.
- In some embodiments, step S121 further includes: obtaining an intensity coefficient of the expression feature based on the image.
- The intensity coefficient in the embodiments may correspond to an expression amplitude of a facial expression. For example, the face is set with multiple expression bases. One expression base corresponds to one expression action. The intensity coefficient here may be used for representing the intensity of the expression action, for example, the intensity may be the amplitude of the expression action.
- In some embodiments, the greater the intensity coefficient, the higher the intensity. For example, the higher the intensity coefficient, the greater the amplitude of the open mouth expression base, and the greater the amplitude of the pout expression base, and so on. For another example, the greater the intensity coefficient, the higher the eyebrow height of the eyebrow-raising expression base.
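The relationship between an expression base and its intensity coefficient resembles a blendshape rig, in which each base stores per-vertex offsets from a neutral face and the coefficient scales the amplitude of the action. The sketch below rests on that assumption; all names are illustrative, not the disclosure's:

```python
def apply_expression(neutral, bases, coefficients):
    """Blend expression bases onto the neutral mesh, weighted by intensity."""
    result = [list(v) for v in neutral]
    for name, offsets in bases.items():
        c = coefficients.get(name, 0.0)  # intensity coefficient for this base
        for i, offset in enumerate(offsets):
            for axis in range(3):
                result[i][axis] += c * offset[axis]
    return result

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]           # two face vertices
bases = {"mouth_open": [(0.0, -0.2, 0.0), (0.0, -0.1, 0.0)]}
# Intensity 0.5 yields a half-open mouth; 1.0 would be fully open.
mesh = apply_expression(neutral, bases, {"mouth_open": 0.5})
```

A greater coefficient simply moves the vertices further along the base's offsets, which matches the "greater intensity coefficient, greater amplitude" behavior described above.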
- Through the introduction of the intensity coefficient, the controlled model can not only simulate the current action of the target, but also precisely simulate the intensity of the target's current expression, thereby implementing precise migration of the expression. In this way, if the method is applied to a somatosensory game scene, the controlled object is a game character. With this method, the game character can not only be controlled by the limb movements of the user, but can also accurately simulate the user's expression features. In such a game scene, the realism of the game scene is improved, and the game experience of the user is improved.
- In some embodiments, step S160 may include: controlling an expression change of the head of the controlled model based on the expression features; and controlling the intensity of the expression change of the controlled model based on the intensity coefficient.
- In some embodiments, obtaining the expression features of the target based on the image includes: obtaining mesh information of the first-type part based on the image.
- In the embodiments, when the target is a person, in step S120, through mesh detection, etc., mesh information representing the expression change of the head is obtained, and the change of the controlled model is controlled based on the mesh information. The mesh information includes, but is not limited to: quadrilateral mesh information and/or triangular patch information. The quadrilateral mesh information indicates information of longitude and latitude lines; the triangular patch information is information of a triangular patch formed by connecting three key points.
- For example, the mesh information is formed by a predetermined number of face key points of a face surface; the intersections between the latitude and longitude lines in a mesh represented by the quadrilateral mesh information may be the positions of the face key points; position changes of the intersections of the mesh are the expression changes. Thus, the expression features and intensity coefficient obtained based on the quadrilateral mesh information can accurately control the expression of the face of the controlled model. Similarly, the vertices of the triangular patch corresponding to the triangular patch information include the face key points. The changes in the positions of the key points are the expression changes. The expression features and intensity coefficient obtained based on the triangular patch information can be used for precise control of the expression of the face of the controlled model.
- In some embodiments, obtaining the intensity coefficient of the expression feature based on the image includes: obtaining the intensity coefficient representing each subpart in the first-type part based on the image.
- For example, the five sense organs of the face, i.e., the eyes, the eyebrows, the nose, the mouth, and the ears, respectively correspond to at least one expression base, and some correspond to multiple expression bases; one expression base corresponds to one type of expression action of one of the five sense organs, and the intensity coefficient just represents the amplitude of the expression action.
- In some implementations, step S122 may include: obtaining position information of the key points of the second-type part of the target based on the image.
- The position information may be represented by the position information of the key points of the target. The key points may include: bracket key points and outer contour key points. If a person is taken as an example, the bracket key points may include the skeleton key points of the human body, and the contour key points may be the key points of the outer contour of the body surface of the human body.
- The position information may be represented by coordinates, for example, 2D coordinates and/or 3D coordinates of a predetermined coordinate system. The predetermined coordinate system includes, but is not limited to: the image coordinate system where the image is located. The position information may be the coordinates of the key points, and is clearly different from the above-mentioned mesh information. Because the second-type part is different from the first-type part, the movement change of the second-type part can be represented more precisely with the position information.
- In some embodiments, step S150 may include: determining the movement information of at least two parts of the second-type part based on the position information.
- If a person is taken as an example of the target, the second-type part includes, but is not limited to: the trunk and/or four limbs; the trunk and/or upper limbs, or the trunk and/or lower limbs.
- Furthermore, step S122 may specifically include: obtaining first coordinates of the bracket key points of the second-type part of the target based on the image; and obtaining second coordinates based on the first coordinates.
- Both the first coordinates and the second coordinates are coordinates representing the bracket key points. If a person or an animal is taken as an example of the target, the bracket key points here are skeleton key points.
- The first coordinates and the second coordinates may be different types of coordinates. For example, the first coordinates are 2D coordinates in a 2D coordinate system, and the second coordinates are 3D coordinates in a 3D coordinate system. The first coordinates and the second coordinates may also be the same type of coordinates. For example, the second coordinates are coordinates obtained after performing correction on the first coordinates. In this case, the first coordinates and the second coordinates are the same type of coordinates. For example, both the first coordinates and the second coordinates are 3D coordinates or 2D coordinates.
- In some embodiments, obtaining the first coordinates of the bracket key points of the second-type part of the target based on the image includes: obtaining first 2D coordinates of the bracket key points of the second-type part based on a 2D image. Obtaining the second coordinates based on the first coordinates includes: obtaining first 3D coordinates corresponding to the first 2D coordinates based on the first 2D coordinates and a transforming relationship from 2D coordinates to 3D coordinates.
- In some embodiments, obtaining the first coordinates of the bracket key points of the second-type part of the target based on the image includes: obtaining second 3D coordinates of the bracket key points of the second-type part of the target based on a 3D image. Obtaining the second coordinates based on the first coordinates includes: obtaining third 2D coordinates based on the second 3D coordinates.
- For example, a 3D image is directly obtained in step S110, and the 3D image includes: a 2D image and a depth image corresponding to the 2D image. The 2D image provides coordinate values of the bracket key points in an xoy plane, and depth values in the depth image provide the coordinates of the bracket key points on a z-axis. The z-axis is perpendicular to the xoy plane.
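Under the composition just described, lifting 2D key points to 3D can be sketched as follows, assuming the depth image is pixel-aligned with the 2D image and indexed as `depth_map[y][x]` (an assumption of this illustration, not stated in the disclosure):

```python
def lift_to_3d(keypoints_2d, depth_map):
    """Attach a z value from the depth map to each 2D key point."""
    points_3d = []
    for x, y in keypoints_2d:
        z = depth_map[y][x]  # depth at the key point's pixel gives the z-axis value
        points_3d.append((x, y, z))
    return points_3d

depth_map = [
    [1.0, 1.1],
    [1.2, 1.3],
]
points = lift_to_3d([(0, 1), (1, 0)], depth_map)
```

The 2D image supplies the (x, y) coordinates in the xoy plane, and the depth value supplies the coordinate on the z-axis perpendicular to that plane, as described above.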
- In some embodiments, obtaining the third 2D coordinates based on the second 3D coordinates includes: adjusting the 3D coordinates of the bracket key points corresponding to a shaded portion of the second-type part in the 3D image based on the second 3D coordinates so as to obtain the third 2D coordinates.
- In the embodiments, a 3D model is used to first extract the second 3D coordinates from the 3D image, and then the occlusion (shading) between different parts of the target is taken into account. Correct third 2D coordinates of the different parts of the target in 3D space can be obtained through this correction, thereby ensuring the subsequent control precision of the controlled model.
- In some embodiments, step S150 may include: determining a quaternion of the second-type part based on the position information.
- In some embodiments, the movement information of the at least two parts is not limited to being represented by the quaternion, but can also be represented by coordinate values in different coordinate systems, such as the coordinate values in the Euler coordinate system or Lagrange coordinate system. The quaternion may be used to precisely represent the spatial position and/or rotation in each direction of the second-type part.
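As a reference for the quaternion representation mentioned here, the following sketch builds a unit quaternion from an axis-angle rotation, the standard construction; the (w, x, y, z) component order is a convention chosen for this example, not specified by the disclosure:

```python
import math

def quaternion_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    ax, ay, az = axis
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / norm, ay / norm, az / norm  # normalize the rotation axis
    half = angle / 2.0
    s = math.sin(half)
    return (math.cos(half), ax * s, ay * s, az * s)

# A 90-degree rotation about the z-axis, e.g. a part turning in the xoy plane.
q = quaternion_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
```

A quaternion encodes both the rotation axis and the rotation amount in one value, which is why it can precisely represent the spatial rotation of a part in every direction.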
- In some embodiments, the quaternion is used as the movement information of the at least two parts and/or the movement information of the connection portion; in specific implementation, the movement information is not limited to the quaternion, and may also be indicated with the coordinate values in various coordinate systems with respect to a reference point, for example, the quaternion may be replaced with Euler coordinates or Lagrange coordinates.
- In some embodiments, step S120 may include: obtaining first position information of the bracket key points of a first part in the second-type part; and obtaining second position information of the bracket key points of a second part in the second-type part.
- The second-type part may at least include two different parts. Thus, the controlled model can simultaneously simulate the movements of at least two parts of the target.
- In some embodiments, step S150 may include: determining the movement information of the first part according to the first position information; and determining the movement information of the second part according to the second position information.
- In some embodiments, step S160 may include: controlling the movement of a part of the controlled model corresponding to the first part according to the movement information of the first part; and controlling the movement of a part of the controlled model corresponding to the second part according to the movement information of the second part.
- In some other embodiments, the first part is the trunk; and the second part is the upper limbs, lower limbs, or four limbs.
- In some embodiments, step S140 further includes: determining, according to the type of the connection portion, a control mode of controlling the connection portion; and controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion.
- The connection portion may connect another two parts. For example, taking a person as an example, the neck, wrist, ankle, or waist is a connection portion for connecting two parts. The movement information of these connection portions may be inconvenient to detect or depend to a certain extent on other adjacent parts. In the embodiments, the control mode would be determined according to the type of the connection portion.
- For example, the lateral rotation of the wrist is, for example, rotation performed by taking an extension direction from the upper arm to the hand as the axis, and the lateral rotation of the wrist is caused by the rotation of the upper arm. For another example, the lateral rotation of the ankle is, for example, rotation performed by taking the extension direction of the calf as the axis, and the rotation of the ankle is also directly driven by the calf.
- Moreover, the rotation of the connection portion, such as the neck, determines the orientation of the face and the orientation of the trunk.
- In some other embodiments, determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a first-type connection portion, determining to use a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion.
- In the embodiments, the first-type connection portion moves by its own rotation rather than being driven by other parts.
- In some other embodiments, the connection portion further includes a second-type connection portion other than the first-type connection portion. The movement of the second-type connection portion is not produced by the portion itself alone, but is driven by other parts.
- In some other embodiments, determining, according to the type of the connection portion, the control mode of controlling the connection portion includes: in the case that the connection portion is a second-type connection portion, determining to use a second-type control mode, where the second-type control mode is used for indirectly controlling the second-type connection portion by controlling a part in the controlled model other than the second-type connection portion.
- The part other than the second-type connection portion includes, but is not limited to, a part directly connected to the second-type connection portion, or a part indirectly connected to the second-type connection portion. For example, during the lateral rotation of the wrist, the entire upper limb may be moving, and thus the shoulder and the elbow are both rotating. In this way, the rotation of the wrist may be indirectly controlled by controlling the lateral rotation of the shoulder and/or the elbow.
- In some embodiments, controlling the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion includes: in the case that the control mode is the second-type control mode, decomposing the movement information of the connection portion to obtain first-type rotation information indicating the rotation of the connection portion under the traction of a traction portion; adjusting the movement information of the traction portion according to the first-type rotation information; and controlling the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion.
- In the embodiments, the traction portion is a part directly connected to the second-type connection portion. Taking the wrist being the second-type connection portion as an example, the traction portion is the elbow above the wrist or even an arm. Taking the ankle being the second-type connection portion as an example, the traction portion is the knee above the ankle or even the root of the thigh.
- For example, the lateral rotation of the wrist along the linear direction from the shoulder through the elbow to the wrist may be a rotation driven by the shoulder or the elbow. However, during the detection of the movement information, the lateral rotation appears as a movement of the wrist, so the lateral rotation information of the wrist should essentially be assigned to the elbow or the shoulder. The movement information of the elbow or shoulder is adjusted through this transfer assignment, and the adjusted movement information is used to control the movement of the elbow or shoulder in the controlled model. Thus, in terms of the visual effect in the image, the lateral rotation assigned to the elbow or shoulder is reflected at the wrist of the controlled model, thereby implementing precise simulation of the movement of the target by the controlled model.
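The transfer assignment described above can be approximated by a swing-twist decomposition: the twist component about the limb's own axis is split off (to be reassigned to the traction portion), leaving the swing on the connection portion. This is a standard quaternion technique offered as one plausible realization, not the disclosure's stated algorithm; quaternions are (w, x, y, z):

```python
import math

def swing_twist(q, axis):
    """Split quaternion q into (swing, twist) about a unit twist axis."""
    w, x, y, z = q
    ax, ay, az = axis
    # Project the vector part of q onto the twist axis.
    dot = x * ax + y * ay + z * az
    twist = (w, dot * ax, dot * ay, dot * az)
    norm = math.sqrt(sum(c * c for c in twist))
    if norm < 1e-9:                       # pure swing: no twist component
        return q, (1.0, 0.0, 0.0, 0.0)
    twist = tuple(c / norm for c in twist)
    # swing = q * conjugate(twist)
    tw, tx, ty, tz = twist[0], -twist[1], -twist[2], -twist[3]
    swing = (
        w * tw - x * tx - y * ty - z * tz,
        w * tx + x * tw + y * tz - z * ty,
        w * ty - x * tz + y * tw + z * tx,
        w * tz + x * ty - y * tx + z * tw,
    )
    return swing, twist

# A pure 90-degree twist about the z-axis (say, the forearm's long axis):
# the swing is identity, and the whole rotation is the twist to be transferred.
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
swing, twist = swing_twist(q, (0.0, 0.0, 1.0))
```

Under this reading, `twist` is the first-type rotation information assigned upward to the elbow or shoulder, while `swing` stays with the wrist.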
- In some embodiments, the method further includes: decomposing the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and controlling, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information.
- The first-type rotation information is information obtained directly according to the features of the image by an information model for extracting the rotation information, and the second-type rotation information is rotation information obtained by adjusting the first-type rotation information. In the embodiments, first, the movement information of the second-type connection portion with respect to the predetermined posture can be known through the features of the second-type connection portion, for example, 2D coordinates or 3D coordinates. The movement information of the connection portion includes, but is not limited to, the rotation information. In some embodiments, the movement information of the connection portion further includes: translation information.
- In some embodiments, the second-type connection portion includes: a wrist; and an ankle.
- In some other embodiments, if the second-type connection portion is a wrist, the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and if the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
- In some embodiments, the first-type connection portion includes a neck connecting the head and the trunk.
- In some other embodiments, determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining orientation information of the two parts according to the features of the two parts connected by the connection portion; determining candidate orientation information of the connection portion according to the orientation information of the two parts; and determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- In some embodiments, determining the candidate orientation information of the connection portion according to the orientation information of the two parts includes: determining first candidate orientation information and second candidate orientation information of the connection portion according to the orientation information of the two parts.
- Two included angles may be formed between the orientation information of the two parts. In the embodiments, the included angle satisfying the first movement constraint condition is taken as the movement information of the connection portion.
- For example, two included angles are formed between the orientation of the face and the orientation of the trunk, and the sum of the two included angles is 180 degrees. The two included angles are assumed to be a first included angle and a second included angle, respectively. Moreover, the first movement constraint condition for the neck connecting the face and the trunk is between −90 degrees and 90 degrees; thus, angles exceeding 90 degrees are excluded according to the first movement constraint condition. In this way, abnormalities in which the rotation angle clockwise or counterclockwise exceeds 90 degrees, for example, 120 degrees or 180 degrees, during the simulation of the movement of the target by the controlled model can be reduced. If the limit value of the first movement constraint condition is exceeded, the limit value corresponding to the first movement constraint condition is substituted for the abnormal value. If the first movement constraint condition is between −90 degrees and 90 degrees, the first movement constraint condition corresponds to two limit angles: one is −90 degrees and the other is 90 degrees.
- When the detected rotation angle exceeds the range of −90 degrees to 90 degrees, the detected rotation angle is modified to the limit angle defined by the first movement constraint condition, i.e., the limit value. For example, if a rotation angle exceeding 90 degrees is detected, the detected rotation angle is modified to the limit angle closer to the detected rotation angle, i.e., 90 degrees.
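This replacement rule amounts to clamping the detected angle to the nearer limit of the constraint range, as in the following sketch (the default −90/90 degree range mirrors the neck example above; the function name is ours):

```python
def constrain_angle(angle, lower=-90.0, upper=90.0):
    """Clamp a detected rotation angle (degrees) to the movement constraint range."""
    if angle < lower:
        return lower
    if angle > upper:
        return upper
    return angle

a = constrain_angle(120.0)   # exceeds the upper limit, replaced by 90
b = constrain_angle(-45.0)   # within range, kept as-is
```

An out-of-range value is always replaced by the limit angle closest to it, so 120 degrees becomes 90 and −120 degrees becomes −90.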
- In some embodiments, determining the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition includes: selecting target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determining the movement information of the connection portion according to the target orientation information.
- The target orientation information here is information satisfying the first movement constraint condition.
- For example, taking the neck as an example, if the face faces right, the corresponding orientation of the neck may be 90 degrees toward the right or 270 degrees toward the left. However, according to the physiological structure of the human body, the orientation of the neck cannot be changed by turning 270 degrees to the left to make the neck face right. In this case, the candidate orientation information of the neck is: 90 degrees toward the right and 270 degrees toward the left. The orientation information of the neck needs to be further determined according to the aforementioned first movement constraint condition. In this example, 90 degrees toward the right is the target orientation information of the neck, and from it, it is obtained that the current movement information of the neck is a rotation of 90 degrees toward the right.
- In some embodiments, determining the orientation information of the two parts according to the features of the two parts includes: obtaining a first key point and a second key point for each of the two parts; obtaining a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generating a first vector based on the first key point and the first reference point, and generating a second vector based on the second key point and the first reference point; and determining the orientation information for each of the two parts based on the first vector and the second vector.
- If the first part of the two parts is the shoulder of the human body, the first reference point of the first part may be the waist key point of the target or the midpoint of the key points of the two crotches. If the second part of the two parts is a human face, the first reference point of the second part may be the connection point of the neck connected to the human face and the shoulder.
- In some embodiments, determining the orientation information for each of the two parts based on the two vectors includes: performing, for each part, cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and taking the normal vector as the orientation information of the part.
- Another vector can be obtained by the cross-product calculation, and this vector is the normal vector of the plane where the part is located. Once the normal vector is determined, the orientation of the plane where the part is located is also determined; this is equivalent to determining the rotation angle of the connection portion with respect to a reference plane, i.e., equivalent to determining the movement information of the connection portion.
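The steps above, forming two vectors from a part's key points and a reference point and then taking their cross product, can be sketched as follows (the names are illustrative):

```python
def cross(u, v):
    """Cross product of two 3D vectors."""
    return (
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    )

def orientation(first_key_point, second_key_point, reference_point):
    """Normal of the plane spanned by the two key points and the reference point."""
    v1 = tuple(a - b for a, b in zip(first_key_point, reference_point))
    v2 = tuple(a - b for a, b in zip(second_key_point, reference_point))
    return cross(v1, v2)

# Two shoulder key points against a waist reference point: the part lies in
# the z = 0 plane, so its orientation (the normal vector) points along the z-axis.
n = orientation((1.0, 1.0, 0.0), (-1.0, 1.0, 0.0), (0.0, 0.0, 0.0))
```

The direction of the resulting normal vector serves as the orientation information of the part, as described above.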
- In some embodiments, the method further includes: determining movement information of the at least two parts based on the features; and determining, according to the features of the at least two parts and the first movement constraint condition of the connection portion, the movement information of the connection portion includes: determining the movement information of the connection portion according to the movement information of the at least two parts.
- In some embodiments, determining the movement information of the connection portion according to the movement information of the at least two parts includes: obtaining a third 3D coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtaining absolute rotation information of the connection portion according to the third 3D coordinate; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the absolute rotation information.
- In some embodiments, the second reference point may be one of the skeleton key points of the target. Taking the target being a person as an example, the second reference point may be a key point of a part connected by the first-type connection portion. For example, taking the neck as an example, the second reference point may be the key point of the shoulder connected by the neck.
- In some other embodiments, the second reference point may be the same as the first reference point, for example, both the first reference point and the second reference point may be a root node of the human body, and the root node of the human body may be the midpoint of a connection line of the two key points of the crotches of the human body. The root node includes, but is not limited to, a
key point 0 shown in FIG. 7B. FIG. 7B is a schematic diagram of a skeleton of a human body, and FIG. 7B includes a total of 17 skeleton key points numbered 0 to 16. - In some other embodiments, controlling the movement of the connection portion in the controlled model based on the absolute rotation information includes: decomposing the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model based on the relative rotation information.
- For example, the following is an example of a hierarchical relationship: the first hierarchy: pelvis; the second hierarchy: waist; the third hierarchy: thighs (e.g. the left thigh and the right thigh); the fourth hierarchy: calves (e.g., the left calf and the right calf); and the fifth hierarchy: feet.
- For another example, the following is another hierarchical relationship: the first hierarchy: chest; the second hierarchy: neck; and the third hierarchy: head.
- Furthermore, for example, the following is still another hierarchical relationship: the first hierarchy: clavicles, which correspond to the shoulders; the second hierarchy: upper arms; the third hierarchy: forearms (also referred to as lower arms); and the fourth hierarchy: hands.
- The hierarchical relationship decreases in sequence from the first hierarchy to the fifth hierarchy. The movement of a part of a high hierarchy affects the movement of a part of a low hierarchy. Therefore, the hierarchy of a traction portion is higher than that of a connection portion.
- During determining the movement information of the connection portion, movement information of key points corresponding to a part of each hierarchy is first obtained, and then the movement information of the key points of the part of the low hierarchy with respect to the key points of the part of the high hierarchy (i.e., the relative rotation information) is determined based on the hierarchical relationship.
- For example, if a quaternion is used for representing movement information, the relative rotation information may be represented by the following calculation formula (1): rotation quaternions of each key point with respect to the camera coordinate system are {Q0, Q1, . . . , Q18}, and then a rotation quaternion qi of each key point with respect to a parent key point is calculated:
-
qi = Qparent(i)^(−1) · Qi (1) - where the parent key point parent(i) is the key point of the previous hierarchy of the current key point i; Qi is a rotation quaternion of the current key point i with respect to the camera coordinate system; and Qparent(i)^(−1) is the inverse rotation parameter of the key point of the previous hierarchy. For example, if Qparent(i) is a rotation parameter of the key point of the previous hierarchy with a rotation angle of 90 degrees, the rotation angle of Qparent(i)^(−1) is −90 degrees.
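Formula (1) can be sketched with a few lines of plain Python (the helper names are illustrative; a real system would likely use a maintained quaternion library). For unit quaternions the inverse is simply the conjugate, so the relative rotation of a child key point with respect to its parent is the conjugate of the parent's absolute rotation multiplied by the child's absolute rotation.

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_inv(q):
    # Inverse of a unit quaternion is its conjugate.
    w, x, y, z = q
    return (w, -x, -y, -z)

def relative_rotation(q_parent, q_child):
    # Formula (1): q_i = Q_parent(i)^(-1) * Q_i
    return quat_mul(quat_inv(q_parent), q_child)

def quat_about_z(deg):
    # Helper: unit quaternion for a rotation of `deg` degrees about z.
    h = math.radians(deg) / 2.0
    return (math.cos(h), 0.0, 0.0, math.sin(h))

# Parent rotated 90 degrees about z, child rotated 120 degrees about z:
# the relative rotation should be the remaining 30 degrees about z.
q_rel = relative_rotation(quat_about_z(90), quat_about_z(120))
```

Composing the parent's rotation with `q_rel` recovers the child's absolute rotation, which is exactly the property the traction hierarchy relies on.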
- In some embodiments, controlling the movement of the connection portion in the controlled model based on the absolute rotation information further includes: correcting the relative rotation information according to a second constraint condition; and controlling the movement of the connection portion in the controlled model based on the relative rotation information includes: controlling the movement of the connection portion in the controlled model based on the corrected relative rotation information.
- In some embodiments, the second constraint condition includes: a rotatable angle of the connection portion.
- In some embodiments, the method further includes: performing posture defect correction on the movement information of the connection portion to obtain corrected movement information of the connection portion. Controlling the movement of the connection portion in the controlled model according to the movement information of the connection portion includes: controlling the movement of the connection portion in the controlled model by using the corrected movement information of the connection portion.
- For example, some users may have problems of non-standard body shapes, uncoordinated walking, etc. In order to reduce the occurrence of relatively weird actions due to direct simulation by the controlled model, in the embodiments, posture defect correction may be performed on the movement information of the connection portion to obtain the corrected movement information of the connection portion.
- In some embodiments, the method further includes: performing posture defect correction on the movement information of the at least two parts to obtain corrected movement information of the at least two parts. Step S160 may include: controlling the movement of corresponding parts of the controlled model by using the corrected movement information of the at least two parts.
- In some embodiments, the posture defect correction includes at least one of the following: synchronization defects of upper limbs and lower limbs; movement defects of looped legs; movement defects caused by toe-out feet; or movement defects caused by toe-in feet.
- In some embodiments, the method further includes: obtaining a posture defect correction parameter according to difference information between the body shape of the target and a standard body shape, where the posture defect correction parameter is used for the correction of the movement information of the at least two parts and/or the movement information of the connection portion.
- For example, before controlling the controlled model by using the image including the target, the body shape of the target is detected first, and then the detected body shape is compared with the standard body shape to obtain the difference information; and posture defect correction is performed through the difference information.
- A prompt of maintaining a predetermined posture is output on a display interface, and after seeing the prompt, the user maintains the predetermined posture. In this way, an image device can acquire the image of the user who maintains the predetermined posture; and then, whether the user maintains the predetermined posture in a sufficiently standard manner is determined through image detection, so as to obtain the difference information.
- For example, some people have toe-out feet, while the normal standard standing posture should be that the connection lines of the toes and heels of the feet are parallel to each other. During obtaining the movement information of the at least two parts corresponding to the features of the target and/or the movement information of the connection portion, the correction of such non-standard body shape (i.e., the posture defect correction) would be considered during the control of the controlled model. The predetermined posture may include, but is not limited to, an upright standing posture of the human body.
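As a loose sketch of how the difference information might be measured for the toe-out example (the key-point arguments and the choice of the mean direction as the standard are assumptions for illustration, not taken from the disclosure): the heel-to-toe line of each foot is compared against the parallel standard, and the per-foot deviation angle serves as a correction parameter.

```python
import math

def foot_axis_angle(heel, toe):
    # Angle (radians) of the heel-to-toe line in the image plane.
    return math.atan2(toe[1] - heel[1], toe[0] - heel[0])

def toe_out_correction(left_heel, left_toe, right_heel, right_toe):
    # In the standard posture the two heel-toe lines are parallel, so
    # here each foot's defect angle is its deviation from the mean
    # direction of the two feet.
    la = foot_axis_angle(left_heel, left_toe)
    ra = foot_axis_angle(right_heel, right_toe)
    mean = (la + ra) / 2.0
    return la - mean, ra - mean  # per-foot defect angles

# Example: left foot rotated 10 degrees outward, right foot 10 degrees
# outward the other way (a symmetric toe-out stance).
dl, dr = toe_out_correction(
    (0.0, 0.0), (math.cos(math.radians(10)), math.sin(math.radians(10))),
    (1.0, 0.0), (1.0 + math.cos(math.radians(-10)), math.sin(math.radians(-10))))
```

The defect angles could then be subtracted from the detected foot orientation before driving the controlled model, so that the model does not reproduce the non-standard stance.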
- In some other embodiments, the method further includes: correcting the ratios of different parts of a standard model according to the ratio relationship of different parts of the target, to obtain the corrected controlled model.
- There may be differences between the ratio relationships of the parts of different targets. For example, professional models have a greater leg to head ratio than ordinary people. For another example, some people have relatively full buttocks, so the gap between their two crotches may be larger than that of ordinary people.
- The standard model may be a mean model based on a large amount of human body data. In order to enable the controlled model to more precisely simulate the movement of the target, in the embodiments, the ratios of the different parts of the standard model are corrected according to the ratios of the different parts of the target to obtain the corrected controlled model. For example, taking the target being a person as an example, the corrected parts include, but are not limited to: crotches and/or legs.
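One hedged way to picture this ratio correction (the bone names and the per-bone scaling rule are assumptions for illustration): each part of the controlled model is scaled by the ratio between the target's measured length for that part and the standard model's length.

```python
def correct_model_ratios(standard_lengths, target_lengths, model_lengths):
    # Scale each part of the controlled model by the ratio between the
    # target's length and the standard (mean) length for that part, so
    # the model's proportions follow the imaged person. Parts the target
    # measurement is missing for keep the standard ratio (scale 1).
    corrected = {}
    for part, std_len in standard_lengths.items():
        scale = target_lengths.get(part, std_len) / std_len
        corrected[part] = model_lengths[part] * scale
    return corrected

# Target's legs are 25% longer than the mean model's, head is average:
corrected = correct_model_ratios(
    {"leg": 4.0, "head": 1.0},   # standard (mean) lengths
    {"leg": 5.0, "head": 1.0},   # lengths measured from the target
    {"leg": 40.0, "head": 10.0}) # controlled model's original lengths
```

In practice such per-part scales would be applied to the model's skeleton before retargeting the movement information onto it.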
- As shown in
FIG. 3A, FIG. 3B, and FIG. 3C, the small image in the upper left corner is the acquired image, and the lower right corner shows the controlled model of the human body. From FIG. 3A to FIG. 3B, and then from FIG. 3B to FIG. 3C, the hand of the user is moving, and the hand of the controlled model follows the movement. The hand movement of the user in FIG. 3A to FIG. 3C changes in sequence from making a fist, to stretching out the palm, to extending the index finger, and the controlled model simulates the changes of the user's gesture from making a fist, to stretching out the palm, to extending the index finger. - As shown in
FIG. 4A, FIG. 4B, and FIG. 4C, the small image in the upper left corner is the acquired image, and the lower right corner shows the controlled model of the human body. From FIG. 4A to FIG. 4B, and then from FIG. 4B to FIG. 4C, the trunk of the user is moving, and the trunk of the controlled model follows the movement. From FIG. 4A to FIG. 4C, the user changes from pushing the crotch to the right of the image, to pushing the crotch to the left of the image, and finally to standing upright. The controlled model also simulates the trunk movement of the user. - As shown in
FIG. 5A, FIG. 5B, and FIG. 5C, the small image in the upper left corner is the acquired image, and the lower right corner shows the controlled model of the human body. From FIG. 5A to FIG. 5C, the user strides toward the right side of the image, strides toward the left of the image, and finally stands upright. The controlled model also simulates the foot movement of the user. - In addition, in
FIG. 4A to FIG. 4C, the controlled model also simulates the expression changes of the user. - As shown in
FIG. 6 , the embodiments provide an image processing apparatus, including the following modules. - A first obtaining
module 110 is configured to obtain an image. - A second obtaining
module 120 is configured to obtain features of at least two parts of a target based on the image. - A
first determination module 130 is configured to determine, according to the features of the at least two parts and a first movement constraint condition of a connection portion, movement information of the connection portion, where the connection portion connects two of the at least two parts. - A
control module 140 is configured to control the movement of the connection portion in a controlled model according to the movement information of the connection portion. - In some embodiments, the
control module 140 is specifically configured to determine, according to the type of the connection portion, a control mode of controlling the connection portion; and control the movement of the connection portion in the controlled model according to the control mode and the movement information of the connection portion. - In some embodiments, the
control module 140 is specifically configured to: in the case that the connection portion is a first-type connection portion, determine that the control mode is a first-type control mode, where the first-type control mode is used for directly controlling the movement of a connection portion in the controlled model corresponding to the first-type connection portion. - In some embodiments, the
control module 140 is specifically configured to: in the case that the connection portion is a second-type connection portion, determine that the control mode is a second-type control mode, where the second-type control mode is used for indirectly controlling the movement of a connection portion in the controlled model corresponding to the second-type connection portion, and the indirect control is achieved by controlling a part in the controlled model corresponding to a part other than the second-type connection portion. - In some embodiments, the
control module 140 is specifically configured to: in the case that the control mode is the second-type control mode, decompose the movement information of the connection portion to obtain first-type rotation information of the connection portion that the connection portion rotates under the traction of a traction portion; adjust movement information of the traction portion according to the first-type rotation information; and control the movement of the traction portion in the controlled model according to the adjusted movement information of the traction portion, to indirectly control the movement of the connection portion. - In some embodiments, the apparatus further includes: a decomposition module, configured to decompose the movement information of the connection portion, to obtain second-type rotation information of the second-type connection portion rotating with respect to the traction portion; and the
control module 140 is further configured to control, in the controlled model, the rotation of the connection portion with respect to the traction portion by using the second-type rotation information. - In some embodiments, the second-type connection portion includes: a wrist; and an ankle.
- In some embodiments, if the second-type connection portion is a wrist, the traction portion corresponding to the wrist includes: an upper arm and/or a forearm; and if the second-type connection portion is an ankle, the traction portion corresponding to the ankle includes: a thigh and/or a calf.
- In some embodiments, the first-type connection portion includes a neck connecting the head and the trunk.
- In some embodiments, the apparatus further includes: an orientation determination module, configured to determine respective orientation information of the two parts according to the features of the two parts connected by the connection portion; a second determination module, configured to determine candidate orientation information of the connection portion according to the respective orientation information of the two parts; and a selection module, configured to determine the movement information of the connection portion according to the candidate orientation information and the first movement constraint condition.
- In some embodiments, the second determination module is configured to determine first candidate orientation information and second candidate orientation information of the connection portion according to the respective orientation information of the two parts.
- In some embodiments, the selection module is specifically configured to: select target orientation information within an orientation change constraint range from the first candidate orientation information and the second candidate orientation information; and determine the movement information of the connection portion according to the target orientation information.
- In some embodiments, the orientation determination module is specifically configured to: obtain a first key point and a second key point for each of the two parts; obtain a first reference point for each of the two parts, where the first reference point refers to a first predetermined key point in the target; generate a first vector based on the first key point and the first reference point, and generate a second vector based on the second key point and the first reference point; and determine the orientation information for each of the two parts based on the first vector and the second vector.
- In some embodiments, the orientation determination module is specifically configured to: perform, for each part, a cross product on the two vectors of the part to obtain a normal vector of a plane where the part is located; and take the normal vector as the orientation information of the part.
- In some embodiments, the apparatus further includes: a third determination module, configured to determine movement information of the at least two parts based on the features; and the
first determination module 130 is specifically configured to determine the movement information of the connection portion according to the movement information of the at least two parts. - In some embodiments, the
first determination module 130 is specifically configured to: obtain a third 3D coordinate of the connection portion with respect to a second reference point, where the second reference point refers to a second predetermined key point in the at least two parts; and obtain absolute rotation information of the connection portion according to the third 3D coordinate. The control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the absolute rotation information. - In some embodiments, the
first determination module 130 is specifically configured to: decompose the absolute rotation information according to a traction hierarchy relationship among multiple connection portions in the target to obtain relative rotation information; and the control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the relative rotation information. - In some embodiments, the apparatus further includes: a correction module, configured to correct the relative rotation information according to a second constraint condition. The
control module 140 is specifically configured to control the movement of the connection portion in the controlled model based on the corrected relative rotation information. - In some embodiments, the second constraint condition includes: a rotatable angle of the connection portion.
- The following provides several specific examples with reference to any of the foregoing embodiments.
- The present example provides an image processing method, and the method includes the following steps.
- An image is acquired, where the image includes a target, and the target includes, but is not limited to, a human body.
- Face key points of the human body are detected, where the face key points may be contour key points of a face surface.
- Trunk key points and/or limb key points of the human body are detected. The trunk key points and/or limb key points here may all be 3D key points, and are represented by 3D coordinates. The 3D coordinates may include 3D coordinates obtained by detecting 2D coordinates from a 2D image, and then using a conversion algorithm from 2D coordinates to 3D coordinates. The 3D coordinates may also be 3D coordinates extracted from a 3D image acquired using a 3D camera. The limb key points here may include: upper limb key points and/or lower limb key points. Taking a hand as an example, hand key points of the upper limb key points include, but are not limited to: wrist joint key points, metacarpophalangeal joint key points, finger joint key points, and fingertip key points. The positions of these key points can reflect the movements of the hand and fingers.
- According to the face key points, mesh information of a face is generated. An expression base corresponding to a current expression of the target is selected according to the mesh information, and the expression of the controlled model is controlled according to the expression base. The expression intensity of the controlled model corresponding to each expression base is controlled according to an intensity coefficient reflected by the mesh information.
- Quaternions are converted according to the trunk key points and/or the limb key points. The trunk movement of the controlled model is controlled according to the quaternion corresponding to the trunk key points; and/or, the limb movement of the controlled model is controlled according to the quaternion corresponding to the limb key points.
- For example, the face key points may include: 106 key points. The trunk key points and/or the limb key points may include: 14 key points or 17 key points. The details are shown in
FIG. 7A and FIG. 7B. FIG. 7A shows a schematic diagram including 14 skeleton key points; and FIG. 7B shows a schematic diagram including 17 skeleton key points. -
FIG. 7B may be a schematic diagram of 17 key points generated based on the 14 key points shown in FIG. 7A. The 17 key points in FIG. 7B are equivalent to the key points shown in FIG. 7A, with the addition of a key point 0, a key point 7, and a key point 9, where the 2D coordinates of the key point 9 may be preliminarily determined based on the 2D coordinates of a key point 8 and a key point 10, and the 2D coordinates of the key point 7 may be determined according to the 2D coordinates of the key point 8 and the 2D coordinates of the key point 0. The key point 0 may be a reference point provided by the embodiments of the present disclosure, and the reference point may serve as the foregoing first reference point and/or second reference point. - The controlled model in the present example may be a game character in a game scene, a teacher model in an online education video in an online teaching scene, and a virtual anchor in a virtual anchor scene. In short, the controlled model is determined according to application scenarios. If the application scenarios are different, the models and/or appearances of the controlled model are different.
- For example, in conventional platform teaching scenes of mathematics and physics, the teacher model may be more sedate in clothing, such as a suit. For another example, for sports teaching scenes such as yoga or gymnastics, the controlled model may be wearing sportswear.
- The present example provides an image processing method, and the method includes the following steps.
- An image is acquired, where the image includes a target, and the target includes, but is not limited to, a human body.
- Trunk key points and limb key points of the human body are detected. The trunk key points and/or limb key points here may all be 3D key points, and are represented by 3D coordinates. The 3D coordinates may include 3D coordinates obtained by detecting 2D coordinates from a 2D image, and then using a conversion algorithm from 2D coordinates to 3D coordinates. The 3D coordinates may also be 3D coordinates extracted from a 3D image acquired using a 3D camera. The limb key points here may include: upper limb key points and/or lower limb key points. Taking a hand as an example, hand key points of the upper limb key points include, but are not limited to: wrist joint key points, metacarpophalangeal joint key points, finger joint key points, and fingertip key points. The positions of these key points can reflect the movements of the hand and fingers.
- The trunk key points are converted into a quaternion representing the trunk movement, and the quaternion may be called a trunk quaternion. The limb key points are converted into a quaternion representing the limb movement, and the quaternion may be called a limb quaternion.
- The trunk movement of the controlled model is controlled using the trunk quaternion. The limb movement of the controlled model is controlled using a limb quaternion.
- The trunk key points and the limb key points may include: 14 key points or 17 key points. The details are shown in
FIG. 7A or FIG. 7B. - The controlled model in the present example may be a game character in a game scene, a teacher model in an online education video in an online teaching scene, and a virtual anchor in a virtual anchor scene. In short, the controlled model is determined according to application scenarios. If the application scenarios are different, the models and/or appearances of the controlled model are different.
- For example, in conventional platform teaching scenes of mathematics and physics, the teacher model may be more sedate in clothing, such as a suit. For another example, for sports teaching scenes such as yoga or gymnastics, the controlled model may be wearing sportswear.
- The present example provides an image processing method, and the method includes the following steps.
- An image is obtained, where the image includes a target, and the target may be a human body.
- A 3D posture of the target in a 3D space is obtained according to the image, where the 3D posture may be represented by 3D coordinates of skeleton key points of the human body.
- Absolute rotation parameters of joints of the human body in a camera coordinate system are obtained, where the absolute rotation position may be determined by the coordinates in the camera coordinate system.
- Coordinate directions of the joints are obtained according to the coordinates. Relative rotation parameters of the joints are determined according to a hierarchical relationship. Determining the relative rotation parameters may specifically include: determining the positions of key points of the joints with respect to the root node of the human body. The relative rotation parameters may be used for quaternion representation. The hierarchical relationship here may be a traction relationship among the joints. For example, the movement of the elbow joint causes the movement of the wrist joint to some extent, and the movement of the shoulder joint also causes the movement of the elbow joint, and so on. The hierarchical relationship may be predetermined according to the joints of the human body.
- The rotation of the controlled model is controlled using the quaternions.
- For example, the following is an example of a hierarchical relationship: the first hierarchy: pelvis; the second hierarchy: waist; the third hierarchy: thighs (e.g. the left thigh and the right thigh); the fourth hierarchy: calves (e.g., the left calf and the right calf); and the fifth hierarchy: feet.
- For another example, the following is another hierarchical relationship: the first hierarchy: chest; the second hierarchy: neck; and the third hierarchy: head.
- Furthermore, for example, the following is still another hierarchical relationship: the first hierarchy: clavicles, which correspond to the shoulders; the second hierarchy: upper arms; the third hierarchy: forearms (also referred to as lower arms); and the fourth hierarchy: hands.
- The hierarchical relationship decreases in sequence from the first hierarchy to the fifth hierarchy. The movement of a part of a high hierarchy affects the movement of a part of a low hierarchy. Therefore, the hierarchy of a traction portion is higher than that of a connection portion.
- During determining the movement information of the connection portion, movement information of key points of a part of each hierarchy is first obtained, and then the movement information of the key points of the part of the low hierarchy with respect to the key points of the part of the high hierarchy (i.e., the relative rotation information) is determined based on the hierarchical relationship.
- For example, if a quaternion is used for representing movement information, the relative rotation information may be represented by a calculation formula: rotation quaternions of each key point with respect to the camera coordinate system are {Q0, Q1, . . . , Q18}, and then a rotation quaternion qi of each key point with respect to a parent key point is calculated according to formula (1).
- The above-mentioned control of the movement of each joint of the controlled model by using a quaternion may include: controlling the movement of each joint of the controlled model by using qi.
- In a further image processing method, the method further includes: converting the quaternion into a first Euler angle; transforming the first Euler angle to obtain a second Euler angle within a constraint condition, where the constraint condition may be used for angle limitation of the first Euler angle; and obtaining a quaternion corresponding to the second Euler angle, and then controlling the rotation of the controlled model by using the quaternion. By obtaining the quaternion corresponding to the second Euler angle, the second Euler angle may be directly converted into a quaternion.
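A sketch of this quaternion-to-Euler-and-back constraint step might look as follows (the conversion mirrors formulas (2)-(4) given later in the text; the clamping ranges are placeholders, and the exact sign conventions depend on the chosen rotation order):

```python
import math

def quat_to_first_euler(q):
    # Quaternion (q0, q1, q2, q3) with q0 real -> first Euler angles,
    # following formulas (2)-(4) of the text.
    q0, q1, q2, q3 = q
    x = math.atan2(2 * (q0 * q1 - q2 * q3), 1 - 2 * (q1 * q1 + q2 * q2))
    s = max(-1.0, min(1.0, 2 * (q1 * q3 + q0 * q2)))  # keep asin argument in [-1, 1]
    y = math.asin(s)
    z = math.atan2(2 * (q0 * q3 - q1 * q2), 1 - 2 * (q2 * q2 + q3 * q3))
    return (x, y, z)

def constrain_euler(angles, limits):
    # The "second Euler angle": clamp each angle into its permitted
    # [lo, hi] range from the constraint condition.
    return tuple(max(lo, min(hi, a)) for a, (lo, hi) in zip(angles, limits))

identity = (1.0, 0.0, 0.0, 0.0)         # no rotation
first = quat_to_first_euler(identity)    # first Euler angles
# Placeholder limits of +/-1 radian per axis for illustration:
second = constrain_euler((2.0, 0.1, -2.0), [(-1.0, 1.0)] * 3)
```

The constrained angles would then be converted back into a quaternion (the inverse of the conversion above, in the same rotation order) before driving the controlled model.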
- Taking a human body as an example, the key points of 17 joints can be detected through human body detection, and moreover, two key points are set corresponding to the left and right hands, and therefore, there are 19 key points in total.
FIG. 7B is a schematic diagram of a skeleton having 17 key points. FIG. 8 is a schematic diagram of a skeleton having 19 key points. The bones shown in FIG. 8 may correspond to the 19 key points, which respectively refer to the following bones: pelvis, waist, left thigh, left calf, left foot; right thigh, right calf, right foot, chest, neck, head, left clavicle, right clavicle, right upper arm, right forearm, right hand, left upper arm, left forearm, and left hand. - First, coordinates of the 17 key points in an image coordinate system may be obtained by detecting the key points of joints of a human body in an image, and may specifically be as follows: S={(x0,y0,z0), . . . , (x16,y16,z16)}. (xi,yi,zi) may be the coordinates of an ith key point, and the value of i is from 0 to 16.
- The coordinates of the key points of 19 joints in respective local coordinate systems may be defined as follows: A={(p0,q0), . . . , (p18,q18)}. pi represents the 3D coordinates of a node i in a local coordinate system, is generally a fixed value carried by the original model, and does not need to be modified and migrated. qi is a quaternion, represents the rotation of a bone controlled by the node i in a parent node coordinate system thereof, and may also be considered as the rotation of a local coordinate system of the current node and a local coordinate system of a parent node.
- The process of calculating the quaternions of the key points corresponding to all the joints may be as follows: determining coordinate axis directions of a local coordinate system of each node. For each bone, the direction pointing from a child node to a parent node is an x-axis; a rotation axis that causes a maximum rotatable angle of the bone is a z-axis, and if the rotation axis cannot be determined, a direction that the human body faces is taken as a y-axis. Reference may be made to
FIG. 9 for details. - In the present example, a left-handed coordinate system is used for description, and during specific implementation, a right-handed coordinate system may also be used.
-
The table below gives, for each node serial number of the skeleton having 19 points, the coordinate axes calculated using the key points of the skeleton having 17 points:
Node 0: take (1-7) × (1-4) as the y-axis, and (7-0) as the x-axis.
Node 1: take the maximum default value of the 3D coordinates.
Node 2: take (14-11) × (14-7) as the y-axis, and (8-7) as the x-axis.
Node 3: take the maximum default value of the 3D coordinates.
Node 4: take (10-8) as the x-axis, and (9-10) × (9-8) as the z-axis.
Node 5: take (11-8) as the x-axis, and (12-11) × (11-8) as the y-axis.
Node 6: take (12-11) as the x-axis, and (11-12) × (12-13) as the z-axis.
Node 7: take (13-12) as the x-axis, and (11-12) × (12-13) as the z-axis (note: the node changes after the quaternions of the hands are added subsequently).
Node 9: take (5-4) as the x-axis, and (5-6) × (5-4) as the z-axis.
Node 10: take (6-5) as the x-axis, and (5-6) × (5-4) as the z-axis.
Node 12: take (14-8) as the x-axis, and (8-14) × (14-15) as the y-axis.
Node 13: take (15-14) as the x-axis, and (14-15) × (15-16) as the z-axis.
Node 14: take (16-15) as the x-axis, and (14-15) × (15-16) as the z-axis (note: the node changes after the quaternions of the hands are added subsequently).
Node 16: take (2-1) as the x-axis, and (2-3) × (2-1) as the z-axis.
Node 17: take (3-2) as the x-axis, and (2-3) × (2-1) as the z-axis.
- In the table above,
the node serial numbers are those of the skeleton having 19 points shown in FIG. 8, and for the serial numbers of the key points of the skeleton having 17 points, reference may be made to FIG. 7B.
- After a local rotation quaternion qi of the joints is calculated, it is first converted into an Euler angle, where the order of x-y-z is used by default.
- Let qi=(q0, q1, q2, q3), where q0 is the real component, and q1, q2, q3 are the imaginary components. Thus, the calculation formulas of the Euler angles are represented by formulas (2)-(4):
-
X=atan2(2*(q0*q1−q2*q3), 1−2*(q1*q1+q2*q2)) (2) -
Y=asin(2*(q1*q3+q0*q2)), where the input to asin is clamped to the range [−1, 1] (3) -
Z=atan2(2*(q0*q3−q1*q2), 1−2*(q2*q2+q3*q3)) (4) - where X is an Euler angle in a first direction; Y is an Euler angle in a second direction; and Z is an Euler angle in a third direction. Any two of the first direction, the second direction, and the third direction are perpendicular to each other.
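Formulas (2)-(4) map directly onto the standard library's `atan2` and `asin`. A small sketch (note that the signs here follow the formulas above, which use the left-handed convention of this example):

```python
import math

def quat_to_euler(q0, q1, q2, q3):
    """Convert a unit quaternion (q0 real; q1, q2, q3 imaginary) to
    Euler angles (X, Y, Z) per formulas (2)-(4)."""
    X = math.atan2(2 * (q0*q1 - q2*q3), 1 - 2 * (q1*q1 + q2*q2))
    # Clamp the asin input to [-1, 1] so floating-point drift in a
    # nearly unit quaternion cannot raise a domain error.
    Y = math.asin(max(-1.0, min(1.0, 2 * (q1*q3 + q0*q2))))
    Z = math.atan2(2 * (q0*q3 - q1*q2), 1 - 2 * (q2*q2 + q3*q3))
    return X, Y, Z

# The identity rotation yields zero Euler angles.
print(quat_to_euler(1.0, 0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```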
- The three angles (X, Y, Z) may then be limited: if an angle exceeds its allowed range, it is clamped to the boundary value of that range, yielding corrected second Euler angles (X′, Y′, Z′). A new rotation quaternion qi′ of the local coordinate system is then restored from these corrected angles.
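The limiting step above amounts to a per-angle clamp against preset joint limits. A minimal sketch (the limit values are purely illustrative; real limits are per-joint and are not specified here):

```python
def clamp_euler(angles, limits):
    """Clamp each Euler angle to its [lo, hi] joint limit, producing
    the corrected second Euler angles (X', Y', Z')."""
    return tuple(min(max(a, lo), hi) for a, (lo, hi) in zip(angles, limits))

# Illustrative limits (radians) for one joint.
LIMITS = [(-1.57, 1.57), (-0.5, 0.5), (-3.14, 3.14)]
print(clamp_euler((2.0, -0.8, 0.1), LIMITS))  # (1.57, -0.5, 0.1)
```

Restoring the quaternion qi′ from (X′, Y′, Z′) then inverts formulas (2)-(4), with the multiplication order matching the x-y-z convention used above.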
- In another further image processing method, the method further includes: performing posture optimization adjustment on the second Euler angles. For example, some of the second Euler angles are adjusted into posture-optimized Euler angles based on a preset rule, thereby obtaining third Euler angles. Thus, obtaining the quaternions corresponding to the second Euler angles may include: converting the third Euler angles into quaternions for controlling the controlled model.
- In still another further image processing method, the method further includes: converting a second Euler angle into a quaternion, and then performing posture optimization processing on the converted quaternion. For example, adjustment is performed based on a preset rule to obtain an adjusted quaternion, and the controlled model is controlled according to the finally adjusted quaternion.
- In some embodiments, the adjustment performed on the second Euler angle or the quaternion obtained by conversion from the second Euler angle may be adjustment based on a preset rule, and may also be optimization adjustment performed by a deep learning model. There are various implementation modes, which are not limited in the present application.
- In addition, yet another image processing method may further include preprocessing. For example, the width of the crotch and/or shoulders of the controlled model is modified according to the size of the acquired human body so as to correct the overall posture of the human body. For the standing posture of the human body, upright-standing correction and abdomen-lifting correction may be performed: some people lift their abdomens when standing, and abdomen-lifting correction prevents the controlled model from imitating the user's abdomen-lifting action; some people hunch when standing, and hunching correction prevents the controlled model from imitating the user's hunching action, etc.
- The present example provides an image processing method, and the method includes the following steps.
- An image is obtained, where the image includes a target, and the target may include at least one of a human body, human upper limbs, or human lower limbs.
- A coordinate system of a target joint is obtained according to position information of the target joint in an image coordinate system. According to position information of a limb part in the image coordinate system, a coordinate system of a limb part capable of causing the target joint to move is obtained.
- The rotation of the target joint with respect to the limb part is determined based on the coordinate system of the target joint and the coordinate system of the limb part to obtain rotation parameters, where the rotation parameters include self-rotation parameters of the target joint and rotation parameters under the traction of the limb part.
- Limitation is performed on the rotation parameters under the traction of the limb part by using first angle limitation to obtain final traction rotation parameters. The rotation parameters of the limb part are corrected according to the final traction rotation parameters. Relative rotation parameters are obtained according to a coordinate system of the limb part and the corrected rotation parameters of the limb part; and second angle limitation is performed on the relative rotation parameters to obtain limited relative rotation parameters.
- A quaternion is obtained according to the limited rotation parameters. The movement of the target joint of a controlled model is controlled according to the quaternion.
- For example, if processing is performed on a human upper limb, a coordinate system of a hand in the image coordinate system is obtained, and a coordinate system of a lower arm and a coordinate system of an upper arm are obtained. In this case, the target joint is a wrist joint. The rotation of the hand with respect to the lower arm is decomposed into self-rotation and rotation under traction. The rotation under traction is transferred to the lower arm; specifically, the rotation under traction is assigned to the rotation of the lower arm in a corresponding direction. The maximum rotation of the lower arm is limited with the first angle limitation of the lower arm. Then, the rotation of the hand with respect to the corrected lower arm is determined to obtain a relative rotation parameter. Second angle limitation is performed on the relative rotation parameter to obtain the rotation of the hand with respect to the lower arm.
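The split into self-rotation and rotation under traction described above is not given explicit formulas here; one common way to realize it is a swing-twist decomposition of the hand's rotation quaternion about the lower-arm axis. A sketch under that assumption, with quaternions as (w, x, y, z):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 + y1*w2 + z1*x2 - x1*z2,
        w1*z2 + z1*w2 + x1*y2 - y1*x2,
    ])

def swing_twist(q, axis):
    """Split q into twist (self-rotation about `axis`, e.g. the
    lower-arm direction) and swing (the remainder, the rotation
    transferred to the lower arm), so that q = swing * twist."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[1:], axis) * axis           # vector part along axis
    twist = np.array([q[0], *proj])
    n = np.linalg.norm(twist)
    if n < 1e-9:                                # 180-degree swing: twist undefined
        twist = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        twist = twist / n
    conj = twist * np.array([1.0, -1.0, -1.0, -1.0])
    swing = quat_mul(q, conj)                   # q * twist^-1
    return swing, twist

# A pure rotation about the x-axis decomposes into twist only.
q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])
swing, twist = swing_twist(q, np.array([1.0, 0.0, 0.0]))
```

The swing component is what would be transferred to the lower arm and clamped by the first angle limitation; the twist component stays with the hand.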
- During performing processing on a human lower limb, a coordinate system of a foot in the image coordinate system is obtained, and a coordinate system of a calf and a coordinate system of a thigh are obtained. In this case, the target joint is an ankle joint. The rotation of the foot with respect to the calf is decomposed into self-rotation and rotation under traction. The rotation under traction is transferred to the calf; specifically, the rotation under traction is assigned to the rotation of the calf in a corresponding direction. The maximum rotation of the calf is limited with the first angle limitation of the calf. Then, the rotation of the foot with respect to the corrected calf is determined to obtain a relative rotation parameter. Second angle limitation is performed on the relative rotation parameter to obtain the rotation of the foot with respect to the calf.
- The neck controls the orientation of the head, and the face, human body, and human hands are separate components. The rotation of the neck is very important for the ultimate formation of the face, human body, and human hands into an integral whole.
- The orientation of a human body may be calculated according to key points of the human body, and the orientation of a face may be calculated according to key points of the face. The relative rotation between the two orientations is the rotation angle of the neck. This is the angle problem of a connection portion, and it is solved through relative calculation: for example, if the body is at 0 degrees and the face is at 90 degrees, controlling the controlled model only requires the relative angles, and angle changes, of the head with respect to the body. The angle of the neck of the controlled model is then calculated from this relative angle to control the head of the controlled model.
- In the present example, the current orientation of the face of a user is determined based on an image, and then the rotation angle of the neck is calculated. Since the rotation of the neck has a range, for example, assuming that the neck can rotate 90 degrees at most, if the calculated rotation angle exceeds this range (−90 degrees to 90 degrees), the boundary of the range is taken as the rotation angle of the neck (such as, −90 degrees or 90 degrees).
- The orientation of the body or face may be calculated using 3D key points. The specific calculation of the orientation may be: take the cross product of two non-collinear vectors lying in the plane where the face or body is located to obtain a normal vector of that plane; this normal vector is the orientation of the face or body. The orientation may be taken as the orientation of the connection portion (the neck) between the body and the face.
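The normal-vector construction above can be sketched directly. The choice of the y-axis as vertical and of yaw as the neck's rotation is an assumption for illustration:

```python
import numpy as np

def plane_normal(v1, v2):
    """Unit normal of the plane spanned by two non-collinear vectors
    lying in the face (or body) plane -- taken as the orientation."""
    n = np.cross(v1, v2)
    return n / np.linalg.norm(n)

def neck_yaw_deg(body_forward, face_forward, limit=90.0):
    """Signed yaw (degrees, about the vertical y-axis) from the body
    orientation to the face orientation, clamped to the neck's range."""
    a = np.degrees(np.arctan2(body_forward[0], body_forward[2]))
    b = np.degrees(np.arctan2(face_forward[0], face_forward[2]))
    d = (b - a + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
    return float(max(-limit, min(limit, d)))

# Body facing +z; face turned 120 degrees: the neck saturates at 90.
body = plane_normal(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
face = np.array([np.sin(np.radians(120.0)), 0.0, np.cos(np.radians(120.0))])
print(neck_yaw_deg(body, face))  # 90.0
```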
- As shown in
FIG. 10, the embodiments of the present application provide an image device, including: a memory 1002, configured to store information; and a processor 1001, connected to the memory 1002 and configured to execute computer executable instructions stored on the memory 1002 so as to implement the image processing method provided by one or more of the foregoing technical solutions, for example, the image processing methods shown in FIG. 1A, FIG. 1B and/or FIG. 2. - The
memory 1002 may be any type of memory, such as a random access memory, a read-only memory, or a flash memory. The memory 1002 may be configured to store information, for example, computer executable instructions, etc. The computer executable instructions may be various program instructions, for example, a target program instruction and/or a source program instruction, etc. - The
processor 1001 may be any type of processor, for example, a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, an image processor, etc. - The
processor 1001 may be connected to the memory 1002 by means of a bus. The bus may be an integrated circuit bus, etc. - In some embodiments, the terminal device may further include: a
communication interface 1003. The communication interface 1003 may include a network interface, for example, a local area network interface, a transceiver antenna, etc. The communication interface is also connected to the processor 1001 and can be configured to transmit and receive information. - In some embodiments, the terminal device further includes a human-machine interaction interface 1005. For example, the human-machine interaction interface 1005 may include various input and output devices, for example, a keyboard and a touch screen, etc. - In some embodiments, the image device further includes: a
display 1004. The display may display various prompts, acquired face images and/or various interfaces. - The embodiments of the present application provide a non-volatile computer storage medium. The computer storage medium stores computer executable codes. After the computer executable codes are executed, the image processing method provided by one or more of the foregoing technical solutions can be implemented, for example, the image processing methods shown in
FIG. 1A, FIG. 1B, and/or FIG. 2. - It should be understood that the disclosed device and method in the embodiments provided in the present application may be implemented in other manners. The device embodiments described above are merely exemplary. For example, the unit division is merely logical function division and may be implemented in other division manners in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections among the components may be implemented by means of some interfaces. The indirect couplings or communication connections between the devices or units may be implemented in electronic, mechanical, or other forms.
- The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, may be located at one position, or may be distributed on a plurality of network units. A part of or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each of the units may exist as an independent unit, or two or more units are integrated into one unit, and the integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a hardware and software functional unit.
- A person of ordinary skill in the art may understand that all or some steps for implementing the foregoing method embodiments may be achieved by a program instructing related hardware; the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; moreover, the above-mentioned non-volatile storage medium includes various media capable of storing program code, such as a mobile storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
- The descriptions above are only specific implementations of this disclosure. However, the scope of protection of this disclosure is not limited thereto. Within the technical scope disclosed by this disclosure, any variation or substitution that can be easily conceived of by those skilled in the art should all fall within the scope of protection of this disclosure. Therefore, the scope of protection of the present disclosure should be defined by the scope of protection of the claims.
Claims (20)
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910049830 | 2019-01-18 | ||
CN201910049830.6 | 2019-01-18 | ||
CN201910363433.6 | 2019-04-30 | ||
CN201910363433.6A CN111460873A (en) | 2019-01-18 | 2019-04-30 | Image processing method and apparatus, image device, and storage medium |
PCT/CN2020/072549 WO2020147796A1 (en) | 2019-01-18 | 2020-01-16 | Image processing method and apparatus, image device, and storage medium |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/072549 Continuation WO2020147796A1 (en) | 2019-01-18 | 2020-01-16 | Image processing method and apparatus, image device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210074005A1 true US20210074005A1 (en) | 2021-03-11 |
Family
ID=71679913
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/073,769 Active 2040-04-01 US11538207B2 (en) | 2019-01-18 | 2020-10-19 | Image processing method and apparatus, image device, and storage medium |
US17/102,331 Abandoned US20210074004A1 (en) | 2019-01-18 | 2020-11-23 | Image processing method and apparatus, image device, and storage medium |
US17/102,364 Abandoned US20210074005A1 (en) | 2019-01-18 | 2020-11-23 | Image processing method and apparatus, image device, and storage medium |
US17/102,373 Active US11468612B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of a model based on captured images and determined information |
US17/102,305 Active US11741629B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of model derived from captured image |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/073,769 Active 2040-04-01 US11538207B2 (en) | 2019-01-18 | 2020-10-19 | Image processing method and apparatus, image device, and storage medium |
US17/102,331 Abandoned US20210074004A1 (en) | 2019-01-18 | 2020-11-23 | Image processing method and apparatus, image device, and storage medium |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/102,373 Active US11468612B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of a model based on captured images and determined information |
US17/102,305 Active US11741629B2 (en) | 2019-01-18 | 2020-11-23 | Controlling display of model derived from captured image |
Country Status (6)
Country | Link |
---|---|
US (5) | US11538207B2 (en) |
JP (4) | JP7109585B2 (en) |
KR (4) | KR20210011985A (en) |
CN (7) | CN111460871B (en) |
SG (5) | SG11202010399VA (en) |
WO (1) | WO2020181900A1 (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910393B (en) * | 2018-09-18 | 2023-03-24 | 北京市商汤科技开发有限公司 | Data processing method and device, electronic equipment and storage medium |
US11610414B1 (en) * | 2019-03-04 | 2023-03-21 | Apple Inc. | Temporal and geometric consistency in physical setting understanding |
US10902618B2 (en) * | 2019-06-14 | 2021-01-26 | Electronic Arts Inc. | Universal body movement translation and character rendering system |
CA3154216A1 (en) | 2019-10-11 | 2021-04-15 | Beyeonics Surgical Ltd. | System and method for improved electronic assisted medical procedures |
KR102610840B1 (en) * | 2019-12-19 | 2023-12-07 | 한국전자통신연구원 | System and method for automatic recognition of user motion |
CN111105348A (en) * | 2019-12-25 | 2020-05-05 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image processing device, and storage medium |
US11504625B2 (en) | 2020-02-14 | 2022-11-22 | Electronic Arts Inc. | Color blindness diagnostic system |
US11648480B2 (en) | 2020-04-06 | 2023-05-16 | Electronic Arts Inc. | Enhanced pose generation based on generative modeling |
US11232621B2 (en) | 2020-04-06 | 2022-01-25 | Electronic Arts Inc. | Enhanced animation generation based on conditional modeling |
CN111881838B (en) * | 2020-07-29 | 2023-09-26 | 清华大学 | Dyskinesia assessment video analysis method and equipment with privacy protection function |
US11403801B2 (en) | 2020-09-18 | 2022-08-02 | Unity Technologies Sf | Systems and methods for building a pseudo-muscle topology of a live actor in computer animation |
CN114333228B (en) * | 2020-09-30 | 2023-12-08 | 北京君正集成电路股份有限公司 | Intelligent video nursing method for infants |
CN112165630B (en) * | 2020-10-16 | 2022-11-15 | 广州虎牙科技有限公司 | Image rendering method and device, electronic equipment and storage medium |
CN112932468A (en) * | 2021-01-26 | 2021-06-11 | 京东方科技集团股份有限公司 | Monitoring system and monitoring method for muscle movement ability |
US11887232B2 (en) | 2021-06-10 | 2024-01-30 | Electronic Arts Inc. | Enhanced system for generation of facial models and animation |
US20230177881A1 (en) * | 2021-07-06 | 2023-06-08 | KinTrans, Inc. | Automatic body movement recognition and association system including smoothing, segmentation, similarity, pooling, and dynamic modeling |
WO2023079987A1 (en) * | 2021-11-04 | 2023-05-11 | ソニーグループ株式会社 | Distribution device, distribution method, and program |
CN114115544B (en) * | 2021-11-30 | 2024-01-05 | 杭州海康威视数字技术股份有限公司 | Man-machine interaction method, three-dimensional display device and storage medium |
KR20230090852A (en) * | 2021-12-15 | 2023-06-22 | 삼성전자주식회사 | Electronic device and method for acquiring three-dimensional skeleton data of user hand captured using plurality of cameras |
CN117315201A (en) * | 2022-06-20 | 2023-12-29 | 香港教育大学 | System for animating an avatar in a virtual world |
CN115564689A (en) * | 2022-10-08 | 2023-01-03 | 上海宇勘科技有限公司 | Artificial intelligence image processing method and system based on block processing technology |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4561066A (en) * | 1983-06-20 | 1985-12-24 | Gti Corporation | Cross product calculator with normalized output |
US20100149341A1 (en) * | 2008-12-17 | 2010-06-17 | Richard Lee Marks | Correcting angle error in a tracking system |
US20160267699A1 (en) * | 2015-03-09 | 2016-09-15 | Ventana 3D, Llc | Avatar control system |
Family Cites Families (103)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0816820A (en) * | 1994-04-25 | 1996-01-19 | Fujitsu Ltd | Three-dimensional animation generation device |
US6657628B1 (en) | 1999-11-24 | 2003-12-02 | Fuji Xerox Co., Ltd. | Method and apparatus for specification, control and modulation of social primitives in animated characters |
JP2002024807A (en) * | 2000-07-07 | 2002-01-25 | National Institute Of Advanced Industrial & Technology | Object movement tracking technique and recording medium |
JP4077622B2 (en) * | 2001-11-15 | 2008-04-16 | 独立行政法人科学技術振興機構 | 3D human moving image generation system |
US7215828B2 (en) * | 2002-02-13 | 2007-05-08 | Eastman Kodak Company | Method and system for determining image orientation |
US9177387B2 (en) | 2003-02-11 | 2015-11-03 | Sony Computer Entertainment Inc. | Method and apparatus for real time motion capture |
US20160098095A1 (en) | 2004-01-30 | 2016-04-07 | Electronic Scripting Products, Inc. | Deriving Input from Six Degrees of Freedom Interfaces |
JP2007004732A (en) * | 2005-06-27 | 2007-01-11 | Matsushita Electric Ind Co Ltd | Image generation device and method |
US7869646B2 (en) | 2005-12-01 | 2011-01-11 | Electronics And Telecommunications Research Institute | Method for estimating three-dimensional position of human joint using sphere projecting technique |
US7859540B2 (en) | 2005-12-22 | 2010-12-28 | Honda Motor Co., Ltd. | Reconstruction, retargetting, tracking, and estimation of motion for articulated systems |
JP4148281B2 (en) * | 2006-06-19 | 2008-09-10 | ソニー株式会社 | Motion capture device, motion capture method, and motion capture program |
JP5076744B2 (en) * | 2007-08-30 | 2012-11-21 | セイコーエプソン株式会社 | Image processing device |
JP5229910B2 (en) | 2008-08-08 | 2013-07-03 | 株式会社メイクソフトウェア | Image processing apparatus, image output apparatus, image processing method, and computer program |
US8267781B2 (en) * | 2009-01-30 | 2012-09-18 | Microsoft Corporation | Visual target tracking |
US8588465B2 (en) * | 2009-01-30 | 2013-11-19 | Microsoft Corporation | Visual target tracking |
US8682028B2 (en) | 2009-01-30 | 2014-03-25 | Microsoft Corporation | Visual target tracking |
CN101930284B (en) | 2009-06-23 | 2014-04-09 | 腾讯科技(深圳)有限公司 | Method, device and system for implementing interaction between video and virtual network scene |
KR101616926B1 (en) * | 2009-09-22 | 2016-05-02 | 삼성전자주식회사 | Image processing apparatus and method |
US9240067B2 (en) | 2009-10-15 | 2016-01-19 | Yeda Research & Development Co. Ltd. | Animation of photo-images via fitting of combined models |
RU2534892C2 (en) | 2010-04-08 | 2014-12-10 | Самсунг Электроникс Ко., Лтд. | Apparatus and method of capturing markerless human movements |
US9177409B2 (en) | 2010-04-29 | 2015-11-03 | Naturalmotion Ltd | Animating a virtual object within a virtual world |
US8437506B2 (en) * | 2010-09-07 | 2013-05-07 | Microsoft Corporation | System for fast, probabilistic skeletal tracking |
TWI534756B (en) * | 2011-04-29 | 2016-05-21 | 國立成功大學 | Motion-coded image, producing module, image processing module and motion displaying module |
AU2011203028B1 (en) * | 2011-06-22 | 2012-03-08 | Microsoft Technology Licensing, Llc | Fully automatic dynamic articulated model calibration |
US9134127B2 (en) * | 2011-06-24 | 2015-09-15 | Trimble Navigation Limited | Determining tilt angle and tilt direction using image processing |
US10319133B1 (en) | 2011-11-13 | 2019-06-11 | Pixar | Posing animation hierarchies with dynamic posing roots |
KR101849373B1 (en) | 2012-01-31 | 2018-04-17 | 한국전자통신연구원 | Apparatus and method for estimating skeleton structure of human body |
US9747495B2 (en) | 2012-03-06 | 2017-08-29 | Adobe Systems Incorporated | Systems and methods for creating and distributing modifiable animated video messages |
CN102824176B (en) | 2012-09-24 | 2014-06-04 | 南通大学 | Upper limb joint movement degree measuring method based on Kinect sensor |
US10241639B2 (en) | 2013-01-15 | 2019-03-26 | Leap Motion, Inc. | Dynamic user interactions for display control and manipulation of display objects |
TWI475495B (en) | 2013-02-04 | 2015-03-01 | Wistron Corp | Image identification method, electronic device, and computer program product |
CN104103090A (en) * | 2013-04-03 | 2014-10-15 | 北京三星通信技术研究有限公司 | Image processing method, customized human body display method and image processing system |
CN103268158B (en) * | 2013-05-21 | 2017-09-08 | 上海速盟信息技术有限公司 | A kind of method, device and a kind of electronic equipment of simulated gravity sensing data |
JP6136926B2 (en) * | 2013-06-13 | 2017-05-31 | ソニー株式会社 | Information processing apparatus, storage medium, and information processing method |
JP2015061579A (en) * | 2013-07-01 | 2015-04-02 | 株式会社東芝 | Motion information processing apparatus |
JP6433149B2 (en) | 2013-07-30 | 2018-12-05 | キヤノン株式会社 | Posture estimation apparatus, posture estimation method and program |
JP6049202B2 (en) * | 2013-10-25 | 2016-12-21 | 富士フイルム株式会社 | Image processing apparatus, method, and program |
US9600887B2 (en) * | 2013-12-09 | 2017-03-21 | Intel Corporation | Techniques for disparity estimation using camera arrays for high dynamic range imaging |
JP6091407B2 (en) * | 2013-12-18 | 2017-03-08 | 三菱電機株式会社 | Gesture registration device |
KR101700817B1 (en) | 2014-01-10 | 2017-02-13 | 한국전자통신연구원 | Apparatus and method for multiple armas and hands detection and traking using 3d image |
JP6353660B2 (en) * | 2014-02-06 | 2018-07-04 | 日本放送協会 | Sign language word classification information generation device and program thereof |
JP6311372B2 (en) * | 2014-03-13 | 2018-04-18 | オムロン株式会社 | Image processing apparatus and image processing method |
WO2015176163A1 (en) * | 2014-05-21 | 2015-11-26 | Millennium Three Technologies Inc | Fiducial marker patterns, their automatic detection in images, and applications thereof |
US10426372B2 (en) * | 2014-07-23 | 2019-10-01 | Sony Corporation | Image registration system with non-rigid registration and method of operation thereof |
JP6662876B2 (en) * | 2014-12-11 | 2020-03-11 | インテル コーポレイション | Avatar selection mechanism |
CN104700433B (en) | 2015-03-24 | 2016-04-27 | 中国人民解放军国防科学技术大学 | A kind of real-time body's whole body body motion capture method of view-based access control model and system thereof |
US10022628B1 (en) | 2015-03-31 | 2018-07-17 | Electronic Arts Inc. | System for feature-based motion adaptation |
CN104866101B (en) * | 2015-05-27 | 2018-04-27 | 世优(北京)科技有限公司 | The real-time interactive control method and device of virtual objects |
US10430867B2 (en) * | 2015-08-07 | 2019-10-01 | SelfieStyler, Inc. | Virtual garment carousel |
DE102015215513A1 (en) * | 2015-08-13 | 2017-02-16 | Avl List Gmbh | System for monitoring a technical device |
CN106991367B (en) * | 2016-01-21 | 2019-03-19 | 腾讯科技(深圳)有限公司 | The method and apparatus for determining face rotational angle |
JP2017138915A (en) * | 2016-02-05 | 2017-08-10 | 株式会社バンダイナムコエンターテインメント | Image generation system and program |
US9460557B1 (en) | 2016-03-07 | 2016-10-04 | Bao Tran | Systems and methods for footwear fitting |
JP6723061B2 (en) * | 2016-04-15 | 2020-07-15 | キヤノン株式会社 | Information processing apparatus, information processing apparatus control method, and program |
CN106023288B (en) * | 2016-05-18 | 2019-11-15 | 浙江大学 | A kind of dynamic scapegoat's building method based on image |
CN106296778B (en) * | 2016-07-29 | 2019-11-15 | 网易(杭州)网络有限公司 | Virtual objects motion control method and device |
CN106251396B (en) | 2016-07-29 | 2021-08-13 | 迈吉客科技(北京)有限公司 | Real-time control method and system for three-dimensional model |
US10661453B2 (en) * | 2016-09-16 | 2020-05-26 | Verb Surgical Inc. | Robotic arms |
WO2018069981A1 (en) * | 2016-10-11 | 2018-04-19 | 富士通株式会社 | Motion recognition device, motion recognition program, and motion recognition method |
CN108229239B (en) | 2016-12-09 | 2020-07-10 | 武汉斗鱼网络科技有限公司 | Image processing method and device |
KR101867991B1 (en) | 2016-12-13 | 2018-06-20 | 한국과학기술원 | Motion edit method and apparatus for articulated object |
CN106920274B (en) * | 2017-01-20 | 2020-09-04 | 南京开为网络科技有限公司 | Face modeling method for rapidly converting 2D key points of mobile terminal into 3D fusion deformation |
JP2018119833A (en) * | 2017-01-24 | 2018-08-02 | キヤノン株式会社 | Information processing device, system, estimation method, computer program, and storage medium |
JP2018169720A (en) * | 2017-03-29 | 2018-11-01 | 富士通株式会社 | Motion detection system |
CN107272884A (en) * | 2017-05-09 | 2017-10-20 | 聂懋远 | A kind of control method and its control system based on virtual reality technology |
CN107220933B (en) * | 2017-05-11 | 2021-09-21 | 上海联影医疗科技股份有限公司 | Reference line determining method and system |
CN107154069B (en) * | 2017-05-11 | 2021-02-02 | 上海微漫网络科技有限公司 | Data processing method and system based on virtual roles |
CN108876879B (en) * | 2017-05-12 | 2022-06-14 | 腾讯科技(深圳)有限公司 | Method and device for realizing human face animation, computer equipment and storage medium |
JPWO2018207388A1 (en) | 2017-05-12 | 2020-03-12 | ブレイン株式会社 | Program, apparatus and method for motion capture |
US10379613B2 (en) * | 2017-05-16 | 2019-08-13 | Finch Technologies Ltd. | Tracking arm movements to generate inputs for computer systems |
CN107578462A (en) * | 2017-09-12 | 2018-01-12 | 北京城市系统工程研究中心 | A kind of bone animation data processing method based on real time motion capture |
CN108205654B (en) * | 2017-09-30 | 2021-06-04 | 北京市商汤科技开发有限公司 | Action detection method and device based on video |
CN108229332B (en) | 2017-12-08 | 2020-02-14 | 华为技术有限公司 | Bone posture determination method, device and computer readable storage medium |
CN107958479A (en) * | 2017-12-26 | 2018-04-24 | 南京开为网络科技有限公司 | A kind of mobile terminal 3D faces augmented reality implementation method |
CN108062783A (en) * | 2018-01-12 | 2018-05-22 | 北京蜜枝科技有限公司 | FA Facial Animation mapped system and method |
CN108227931A (en) * | 2018-01-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | For controlling the method for virtual portrait, equipment, system, program and storage medium |
CN108357595B (en) * | 2018-01-26 | 2020-11-20 | 浙江大学 | Model-based self-balancing unmanned bicycle and model driving control method thereof |
CN108305321B (en) * | 2018-02-11 | 2022-09-30 | 牧星天佑(北京)科技文化发展有限公司 | Three-dimensional human hand 3D skeleton model real-time reconstruction method and device based on binocular color imaging system |
CN108364254B (en) | 2018-03-20 | 2021-07-23 | 北京奇虎科技有限公司 | Image processing method and device and electronic equipment |
JP6973258B2 (en) * | 2018-04-13 | 2021-11-24 | オムロン株式会社 | Image analyzers, methods and programs |
CN108427942A (en) * | 2018-04-22 | 2018-08-21 | 广州麦仑信息科技有限公司 | A kind of palm detection based on deep learning and crucial independent positioning method |
CN108648280B (en) * | 2018-04-25 | 2023-03-31 | 深圳市商汤科技有限公司 | Virtual character driving method and device, electronic device and storage medium |
CN108829232B (en) * | 2018-04-26 | 2021-07-23 | 深圳市同维通信技术有限公司 | Method for acquiring three-dimensional coordinates of human skeletal joint points based on deep learning |
CN108830783B (en) * | 2018-05-31 | 2021-07-02 | 北京市商汤科技开发有限公司 | Image processing method and device and computer storage medium |
CN108830200A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
CN108830784A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
CN108765274A (en) * | 2018-05-31 | 2018-11-06 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage media |
CN108765272B (en) * | 2018-05-31 | 2022-07-08 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and readable storage medium |
CN109035415B (en) | 2018-07-03 | 2023-05-16 | 百度在线网络技术(北京)有限公司 | Virtual model processing method, device, equipment and computer readable storage medium |
CN109117726A (en) * | 2018-07-10 | 2019-01-01 | 深圳超多维科技有限公司 | A kind of identification authentication method, device, system and storage medium |
WO2020016963A1 (en) * | 2018-07-18 | 2020-01-23 | 日本電気株式会社 | Information processing device, control method, and program |
CN109101901B (en) * | 2018-07-23 | 2020-10-27 | 北京旷视科技有限公司 | Human body action recognition method and device, neural network generation method and device and electronic equipment |
CN109146769A (en) * | 2018-07-24 | 2019-01-04 | 北京市商汤科技开发有限公司 | Image processing method and device, image processing equipment and storage medium |
CN109146629B (en) * | 2018-08-16 | 2020-11-27 | 连云港伍江数码科技有限公司 | Target object locking method and device, computer equipment and storage medium |
CN109242789A (en) * | 2018-08-21 | 2019-01-18 | 成都旷视金智科技有限公司 | Image processing method, image processing apparatus and storage medium |
CN109191593A (en) * | 2018-08-27 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Motion control method, device and equipment for a virtual three-dimensional model |
CN109325450A (en) | 2018-09-25 | 2019-02-12 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
US10832472B2 (en) * | 2018-10-22 | 2020-11-10 | The Hong Kong Polytechnic University | Method and/or system for reconstructing from images a personalized 3D human body model and thereof |
CN109376671B (en) * | 2018-10-30 | 2022-06-21 | 北京市商汤科技开发有限公司 | Image processing method, electronic device, and computer-readable medium |
CN109816773A (en) | 2018-12-29 | 2019-05-28 | 深圳市瑞立视多媒体科技有限公司 | Method, plug-in, and terminal device for driving a skeleton model of a virtual character |
CN110139115B (en) | 2019-04-30 | 2020-06-09 | 广州虎牙信息科技有限公司 | Method and device for controlling virtual image posture based on key points and electronic equipment |
CN110688008A (en) | 2019-09-27 | 2020-01-14 | 贵州小爱机器人科技有限公司 | Virtual image interaction method and device |
CN110889382A (en) | 2019-11-29 | 2020-03-17 | 深圳市商汤科技有限公司 | Virtual image rendering method and device, electronic equipment and storage medium |
2019
- 2019-03-14 CN CN201910193649.2A patent/CN111460871B/en active Active
- 2019-03-14 CN CN201910191918.1A patent/CN111460870A/en active Pending
- 2019-04-30 CN CN201910365188.2A patent/CN111460875B/en active Active
- 2019-04-30 CN CN201910362107.3A patent/CN111460872B/en active Active
- 2019-04-30 CN CN201910363433.6A patent/CN111460873A/en active Pending
- 2019-04-30 CN CN202210210775.6A patent/CN114399826A/en active Pending
- 2019-04-30 CN CN201910363858.7A patent/CN111460874A/en active Pending
- 2019-12-31 SG SG11202010399VA patent/SG11202010399VA/en unknown
- 2019-12-31 WO PCT/CN2019/130970 patent/WO2020181900A1/en active Application Filing
- 2019-12-31 JP JP2020558530A patent/JP7109585B2/en active Active
2020
- 2020-01-16 KR KR1020207036619A patent/KR20210011985A/en not_active Application Discontinuation
- 2020-01-16 SG SG11202011596WA patent/SG11202011596WA/en unknown
- 2020-01-16 SG SG11202011595QA patent/SG11202011595QA/en unknown
- 2020-01-16 KR KR1020207036649A patent/KR20210011425A/en not_active Application Discontinuation
- 2020-01-16 JP JP2020567116A patent/JP2021525431A/en active Pending
- 2020-01-16 SG SG11202011600QA patent/SG11202011600QA/en unknown
- 2020-01-16 KR KR1020207036647A patent/KR20210011424A/en not_active Application Discontinuation
- 2020-01-16 JP JP2020565269A patent/JP7061694B2/en active Active
- 2020-01-16 SG SG11202011599UA patent/SG11202011599UA/en unknown
- 2020-01-16 KR KR1020207036612A patent/KR20210011984A/en not_active Application Discontinuation
- 2020-01-16 JP JP2020559380A patent/JP7001841B2/en active Active
- 2020-10-19 US US17/073,769 patent/US11538207B2/en active Active
- 2020-11-23 US US17/102,331 patent/US20210074004A1/en not_active Abandoned
- 2020-11-23 US US17/102,364 patent/US20210074005A1/en not_active Abandoned
- 2020-11-23 US US17/102,373 patent/US11468612B2/en active Active
- 2020-11-23 US US17/102,305 patent/US11741629B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4561066A (en) * | 1983-06-20 | 1985-12-24 | Gti Corporation | Cross product calculator with normalized output |
US20100149341A1 (en) * | 2008-12-17 | 2010-06-17 | Richard Lee Marks | Correcting angle error in a tracking system |
US20160267699A1 (en) * | 2015-03-09 | 2016-09-15 | Ventana 3D, Llc | Avatar control system |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210074005A1 (en) | Image processing method and apparatus, image device, and storage medium | |
CN108629801B (en) | Three-dimensional human body model posture and shape reconstruction method of video sequence | |
WO2020147796A1 (en) | Image processing method and apparatus, image device, and storage medium | |
KR101911133B1 (en) | Avatar construction using depth camera | |
Gültepe et al. | Real-time virtual fitting with body measurement and motion smoothing | |
CA3089316A1 (en) | Matching meshes for virtual avatars | |
JP7015152B2 (en) | Processing equipment, methods and programs related to key point data | |
CN109712080A (en) | Image processing method, image processing apparatus and storage medium | |
US20210349529A1 (en) | Avatar tracking and rendering in virtual reality | |
JP2015531098A5 (en) | ||
CN103208133A (en) | Method for adjusting facial plumpness in an image | |
WO2020147791A1 (en) | Image processing method and device, image apparatus, and storage medium | |
WO2020147797A1 (en) | Image processing method and apparatus, image device, and storage medium | |
CN111401340A (en) | Method and device for detecting motion of target object | |
WO2020147794A1 (en) | Image processing method and apparatus, image device and storage medium | |
Kim et al. | A parametric model of shoulder articulation for virtual assessment of space suit fit | |
US11669999B2 (en) | Techniques for inferring three-dimensional poses from two-dimensional images | |
Zhu | Efficient and robust photo-based methods for precise shape and pose modeling of human subjects | |
Diamanti | Motion Capture in Uncontrolled Environments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD., CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:XIE, FUBAO;LIU, WENTAO;REEL/FRAME:055737/0386
Effective date: 20200723
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |