US11538207B2 - Image processing method and apparatus, image device, and storage medium - Google Patents

Image processing method and apparatus, image device, and storage medium

Info

Publication number
US11538207B2
Authority
US
United States
Prior art keywords
region
vector
adjusted
orientation
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/073,769
Other languages
English (en)
Other versions
US20210035344A1 (en)
Inventor
Tong Li
Wentao Liu
Chen Qian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Assigned to BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD. reassignment BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, TONG, LIU, WENTAO, QIAN, Chen
Publication of US20210035344A1
Application granted
Publication of US11538207B2

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469 Contour-based spatial representations, e.g. vector-coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/754 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the field of information technology, and in particular, to an image processing method and apparatus, a device, and a storage medium.
  • Embodiments of the present disclosure expect to provide an image processing method and apparatus, a device, and a storage medium.
  • an image processing method including:
  • an image processing apparatus including:
  • an obtaining unit configured to obtain key points of a reference region of an object in an image
  • a determining unit configured to determine an orientation of the reference region according to the key points of the reference region
  • a processing unit configured to perform deformation processing on a region to be adjusted of the object based on the orientation of the reference region, where the region to be adjusted is the same as or different from the reference region.
  • an image processing device including:
  • a memory; and
  • a processor connected to the memory, and configured to execute computer executable instructions stored on the memory to implement the image processing method provided according to any of the foregoing technical solutions.
  • a computer storage medium where the computer storage medium stores computer executable instructions, and the computer executable instructions can implement the image processing method provided according to any of the foregoing technical solutions.
  • FIG. 1 is a schematic diagram of a pixel coordinate system provided in embodiments of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided in the embodiments of the present disclosure
  • FIG. 3 is a schematic diagram of key points provided in the embodiments of the present disclosure.
  • FIG. 4 is a schematic flowchart of another image processing method provided in the embodiments of the present disclosure.
  • FIG. 5 is a schematic diagram of determination of an orientation based on a vector formed by key points provided in the embodiments of the present disclosure
  • FIG. 6 is a schematic diagram of a center line provided in the embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram of a deformation processing effect provided in the embodiments of the present disclosure.
  • FIG. 8 is a schematic diagram of another deformation processing effect provided in the embodiments of the present disclosure.
  • FIG. 9 is a schematic diagram of still another deformation processing effect provided in the embodiments of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus provided in the embodiments of the present disclosure.
  • FIG. 11 is a schematic structural diagram of an image device provided in the embodiments of the present disclosure.
  • an image processing method including:
  • determining the orientation of the reference region according to the key points of the reference region includes: obtaining at least three key points in the reference region, where the at least three key points are not on a same straight line;
  • the at least three key points include a first key point, a second key point, and a third key point, and the first key point and the third key point are symmetrical about the second key point;
  • determining the target vector based on the at least three key points includes:
  • determining the target vector based on the first vector and the second vector includes:
  • performing the deformation processing on the region to be adjusted of the object based on the orientation of the reference region includes:
  • determining the orientation of the region to be adjusted based on the orientation of the reference region includes:
  • in response to the region to be adjusted being a region of a first type, determining that the orientation of the region to be adjusted is opposite to the orientation of the reference region, where the region of the first type includes: a hip region; and
  • in response to the region to be adjusted being a region of a second type, determining that the orientation of the region to be adjusted is the same as the orientation of the reference region, where the region of the second type includes at least one of: a face region, a shoulder region, or a crotch region.
  • performing the deformation processing on the region to be adjusted of the object based on the orientation of the reference region includes:
  • the region to be adjusted includes a first sub-region and a second sub-region, and the first sub-region and the second sub-region are symmetrical about a center line of the region to be adjusted; performing the deformation processing on the region to be adjusted according to the target vector, the first reference vector, and the second reference vector includes:
  • adjusting the area of the first sub-region and the area of the second sub-region according to the first included angle between the target vector and the first reference vector and the second included angle between the target vector and the second reference vector includes:
  • determining the first area adjustment amount of the first sub-region and the second area adjustment amount of the second sub-region according to the first included angle and the second included angle includes:
  • the method further includes:
  • an image processing apparatus including:
  • an obtaining unit configured to obtain key points of a reference region of an object in an image
  • a determining unit configured to determine an orientation of the reference region according to the key points of the reference region
  • a processing unit configured to perform deformation processing on a region to be adjusted of the object based on the orientation of the reference region, where the region to be adjusted is the same as or different from the reference region.
  • the determining unit is configured to:
  • the at least three key points include a first key point, a second key point, and a third key point, and the first key point and the third key point are symmetrical about the second key point;
  • the determining unit is configured to:
  • the determining unit is configured to:
  • the processing unit is configured to:
  • the processing unit is configured to:
  • in response to the region to be adjusted being a region of a first type, determine that the orientation of the region to be adjusted is opposite to the orientation of the reference region, where the region of the first type includes: a hip region; and
  • in response to the region to be adjusted being a region of a second type, determine that the orientation of the region to be adjusted is the same as the orientation of the reference region, where the region of the second type includes at least one of: a face region, a shoulder region, or a crotch region.
  • the processing unit is configured to:
  • the region to be adjusted comprises a first sub-region and a second sub-region, and the first sub-region and the second sub-region are symmetrical about a center line of the region to be adjusted;
  • the processing unit is configured to:
  • the processing unit is configured to:
  • the processing unit is configured to:
  • the processing unit is further configured to:
  • an image processing device including:
  • a memory; and
  • a processor connected to the memory, and configured to execute computer executable instructions stored on the memory to implement the image processing method provided according to any of the foregoing technical solutions.
  • a computer storage medium where the computer storage medium stores computer executable instructions, and the computer executable instructions can implement the image processing method provided according to any of the foregoing technical solutions.
  • the orientation of the reference region is obtained, and the deformation is performed on the region to be adjusted according to the orientation of the reference region.
  • a pixel coordinate system xoy is constructed by taking a lower left corner of a human body image A as an origin o of the pixel coordinate system, a direction parallel to the row of the human body image A as the direction of an x-axis, and a direction parallel to the column of the human body image A as the direction of a y-axis.
  • the abscissa represents the column number of a pixel in the human body image A; the ordinate represents the row number of the pixel in the human body image A; and the units of the abscissa and the ordinate may both be pixels.
  • the coordinates of a pixel a in FIG. 1 are (30, 20):
  • the abscissa of the pixel a is 30 pixels;
  • the ordinate of the pixel a is 20 pixels; and
  • the pixel a is the pixel at column 30 and row 20 in the human body image A.
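  • As a minimal sketch of this coordinate convention (assuming the image is stored as a conventional top-left-origin, row-major array; the helper name and the image height used in the example are illustrative, not from the patent):

```python
def xy_to_row_col(x, y, image_height):
    """Convert the pixel coordinates above (origin o at the lower-left
    corner, abscissa x = column number, ordinate y = row number counted
    upward from the bottom) to a (row-from-top, column) pair. Depending
    on whether indexing is treated as 0- or 1-based, an extra offset of
    1 may apply."""
    return image_height - y, x

# Pixel a at (x, y) = (30, 20) in a human body image A that is 100 rows
# tall (an assumed height) lands at row 80 from the top, column 30:
print(xy_to_row_col(30, 20, image_height=100))  # (80, 30)
```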
  • the embodiments provide an image processing method, including the following operations.
  • the orientation of the reference region is determined according to the key points of the reference region.
  • deformation processing is performed on a region to be adjusted of the object based on the orientation.
  • the image processing method provided in the embodiments can be applied to various types of electronic devices capable of processing images, for example, various user equipments such as a mobile phone, a tablet computer, or a wearable device.
  • Obtaining the key points of the reference region of the image in operation S 210 includes: detecting the key points of the reference region using a deep learning model such as a neural network.
  • the key points may be key points of a skeleton of the reference region.
  • after the key points are detected, they are connected to form the skeleton of the reference region.
  • the key points may be 2D key points.
  • the key points may be 3D key points.
  • the 3D image may be constituted by an RGB image and a depth image corresponding to the RGB image, or a YUV image and a depth image corresponding to the YUV image.
  • a pixel value in the depth image may represent the distance between the camera capturing the RGB image or the YUV image and the object being captured; such a distance-representing pixel value is called a depth value.
  • the 3D image may be captured based on a depth camera.
  • the depth camera may be any of various depth cameras for capturing a depth image, for example, a Time of Flight (TOF) camera.
  • the 2D key point is (x, y), and the 3D key point may be (x, y, z).
  • the coordinates of the 2D key point are coordinates in a plane coordinate system, and the coordinates of the 3D key point are coordinates in a 3D coordinate system.
  • FIG. 3 is a schematic diagram of a skeleton of a human body.
  • FIG. 3 shows a schematic diagram displaying 17 skeleton key points, respectively numbered 0-16, on a skeleton of a human body, where the skeleton key point numbered 0 is also called the number 0 key point or the root node.
  • Key points 11 and 14 respectively correspond to two shoulder key points of the skeleton of the human body.
  • Key points 1 and 4 respectively correspond to two crotch key points.
  • Key point 7 corresponds to a torso center key point.
  • Key points 8 and 9 respectively correspond to two end points of the neck.
  • Key point 10 is a head key point.
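  • The numbered key points described for FIG. 3 can be collected in a small lookup table. This sketch lists only the indices the text names; the remaining indices, which of key points 8 and 9 is the upper end of the neck, and which of 11 and 14 is the left shoulder are not specified here (the left/right crotch assignment of key points 1 and 4 follows the later description of FIG. 3):

```python
# Skeleton key-point indices per the description of FIG. 3 (0-16 total).
SKELETON_KEYPOINTS = {
    0: "root node",
    1: "left crotch",
    4: "right crotch",
    7: "torso center",
    8: "neck end point",
    9: "neck end point",
    10: "head",
    11: "shoulder",
    14: "shoulder",
}
```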
  • a region covered by an object in an image includes a key point region, where the object includes a person or an animal, the key point region includes key points, and the key point region includes at least one of: a face region, a shoulder region, or a crotch region.
  • the reference region is the key point region nearest to the region to be adjusted.
  • a first center point of the region to be adjusted and a second center point of the key point region are determined, and the distance between the first center point and the second center point is taken as the distance between the region to be adjusted and the key point region.
  • the shortest distance between the region to be adjusted and the key point region is taken as the distance between the region to be adjusted and the key point region.
  • the orientation of the reference region includes at least one of:
  • the crotch region includes a waist and abdomen region.
  • Performing deformation on the region to be adjusted may include: performing pixel transformation on an image region including the region to be adjusted to yield a visual deformation effect.
  • the pixel transformation may be performed using the following method.
  • a deformation mesh is used to assist the deformation processing on the region to be adjusted.
  • Mesh points in the deformation mesh are control points of the deformation processing, and the change of the coordinates of the control points directly determines the transformation of pixel coordinates of pixels in the mesh where the control points are located.
  • the pixel transformation corresponding to a certain control point can be determined based on a deformation interpolation algorithm.
  • the deformation interpolation algorithm may be a spline curve algorithm.
  • the deformation mesh may be a crisscrossed mesh. Intersections of crisscrossed deformation lines are the control points of the deformation mesh.
  • the mapping of the pixel coordinates of the pixels included in the mesh where the control points are located can be controlled by the coordinate mapping of the control points.
  • coordinate adjustment in at least two directions can be performed, so that the zooming in or zooming out of the region to be adjusted can be at least achieved.
  • zooming in the region to be adjusted produces a visual enlargement effect, and zooming out the region to be adjusted produces a visual reduction effect.
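  • The mesh-based pixel transformation can be sketched as follows, assuming an implementation on top of OpenCV and SciPy; mesh_warp, the grid step, and the displace callback are illustrative names, and a bivariate spline stands in for the unspecified spline curve algorithm:

```python
import cv2
import numpy as np
from scipy.interpolate import RectBivariateSpline

def mesh_warp(image, grid_step=40, displace=None):
    """Warp `image` with a crisscrossed deformation mesh: the mesh
    intersections act as control points, and moving a control point
    remaps the pixels of the surrounding cells via spline interpolation.
    Assumes the image spans at least four control points per axis."""
    h, w = image.shape[:2]
    ys = np.arange(0, h, grid_step).astype(float)
    xs = np.arange(0, w, grid_step).astype(float)
    # Identity mapping at every control point (no deformation yet).
    ctrl_x, ctrl_y = np.meshgrid(xs, ys)
    if displace is not None:
        displace(ctrl_x, ctrl_y)  # caller moves control points in place
    # Spline-interpolate the sparse control-point maps into a dense
    # per-pixel map (the "deformation interpolation algorithm").
    sx = RectBivariateSpline(ys, xs, ctrl_x)
    sy = RectBivariateSpline(ys, xs, ctrl_y)
    map_x = sx(np.arange(h), np.arange(w)).astype(np.float32)
    map_y = sy(np.arange(h), np.arange(w)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Example: move one control point 8 px in x; pixels in the mesh cells
# around it follow (remap pulls from source coordinates, so the content
# shifts opposite to the map displacement).
def pull(ctrl_x, ctrl_y):
    ctrl_x[2, 3] += 8.0

# warped = mesh_warp(image, displace=pull)
```

  • Because each control point dominates only the nearby spline coefficients, moving a single intersection deforms mainly the surrounding mesh cells, which largely confines the adjustment to the intended region.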
  • the deformation processing on the region to be adjusted can be performed accurately according to the orientation of the reference region; compared with direct deformation processing that ignores the orientation of the reference region, this reduces unnatural deformation and improves the quality of the image after the deformation processing.
  • the deformation processing is performed on the region to be adjusted according to the orientation of the region to be adjusted. Because the region to be adjusted may not include key points, its orientation may not be determinable from the region to be adjusted itself. Because the reference region includes key points, the orientation of the region to be adjusted may be determined according to the orientation of the reference region.
  • the opposite direction of the orientation of the reference region may be taken as the orientation of the region to be adjusted.
  • the region to be adjusted is the hip region and the reference region is the crotch region
  • the opposite direction of the orientation of the crotch region may be taken as the orientation of the region to be adjusted.
  • the orientation of the reference region may be taken as the orientation of the region to be adjusted.
  • the orientation of the shoulder region may be taken as the orientation of the region to be adjusted.
  • the orientation of the reference region in the image can be determined according to the key points. Performing the deformation processing on the region to be adjusted is performing the deformation processing in different directions on the region to be adjusted according to the orientation, rather than performing the same deformation processing on all parts of the region to be adjusted.
  • operation S 220 may include the following operations.
  • a target vector is determined based on the at least three key points
  • a target vector can be determined based on the at least three key points, and the direction of the target vector is taken as the orientation of the reference region.
  • the region to be adjusted can be classified into at least two types according to the orientation: a region of a first type and a region of a second type, where the orientation of the region of the first type is opposite to the orientation of the face region, and the orientation of the region of the second type is the same as the orientation of the face region.
  • the region of the first type includes: the hip region.
  • the region of the second type includes: the face region, the shoulder region, or the crotch region.
  • determining the orientation of the region to be adjusted according to the orientation of the reference region includes: if the region to be adjusted is the region of the first type, taking the opposite direction of the orientation of the reference region as the orientation of the region to be adjusted; and if the region to be adjusted is the region of the second type, taking the orientation of the reference region as the orientation of the region to be adjusted (a code sketch of this rule follows the examples below).
  • the region to be adjusted is the hip region and the reference region is the crotch region
  • the region to be adjusted is the region of the first type
  • the opposite direction of the orientation of the reference region is taken as the direction of the region to be adjusted.
  • the region to be adjusted is the chest region and the reference region is the shoulder region
  • the region to be adjusted is the region of the second type
  • the orientation of the reference region is taken as the direction of the region to be adjusted.
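  • A sketch of the first-type/second-type rule above; the region names, helper name, and example vector are illustrative choices, not the patent's code:

```python
import numpy as np

FIRST_TYPE = {"hip"}                           # faces opposite the reference
SECOND_TYPE = {"face", "shoulder", "crotch"}   # faces the same way as the reference

def orientation_of_region_to_adjust(region, reference_orientation):
    """Derive the orientation of the region to be adjusted from the
    orientation vector of the reference region."""
    if region in FIRST_TYPE:
        return -np.asarray(reference_orientation)
    if region in SECOND_TYPE:
        return np.asarray(reference_orientation)
    raise ValueError(f"no orientation rule for region {region!r}")

# e.g. a hip region whose crotch reference faces +z ends up facing -z:
print(orientation_of_region_to_adjust("hip", np.array([0.0, 0.0, 1.0])))
```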
  • the reference region and the region to be adjusted may be the same or different.
  • for example, if the region to be adjusted is the shoulder region, the reference region may also be the shoulder region.
  • the target vector can be determined based on at least three key points in the shoulder region, so that the orientation of the region to be adjusted can be determined according to the orientation of the target vector.
  • the region to be adjusted is a leg region
  • the reference region may be the crotch region.
  • the target vector is determined based on at least three key points in the crotch region, the direction of the target vector is taken as the orientation of the crotch region, and the orientation of the leg region is determined according to the orientation of the crotch region.
  • operation S 220 may further include the following operation.
  • in operation S 223, the orientation of the region to be adjusted is determined according to the orientation of the target vector.
  • operation S 230 may include: performing deformation processing on the region to be adjusted based on the orientation of the target vector.
  • the region to be adjusted is the region of the first type
  • the opposite direction of the target vector is taken as the orientation of the region to be adjusted.
  • the region to be adjusted is the region of the second type
  • the direction of the target vector is taken as the orientation of the region to be adjusted.
  • the orientation of the region to be adjusted can be determined according to the orientation of the reference region so as to improve the processing efficiency.
  • the deformation processing of the region to be adjusted may also be performed directly according to the orientation of the reference region without mapping the orientation of the reference region as the orientation of the region to be adjusted.
  • the deformation processing of the region to be adjusted may be performed by using the orientation of the reference region.
  • the at least three key points include a first key point, a second key point, and a third key point, where the first key point and the third key point are symmetrical about the second key point.
  • Operation S 220 may include:
  • the first key point, the second key point, and the third key point can form at least two vectors, and the at least two vectors are not on a same straight line, so that they can form a plane.
  • the key point 1 of a left crotch shown in FIG. 3 is taken as the first key point
  • the root node 0 or the torso center key point 7 shown in FIG. 3 is taken as the second key point
  • the key point 4 of a right crotch shown in FIG. 3 is taken as the third key point.
  • the target vector is obtained based on the first vector (the vector from the key point 4 to the key point 7 ) and the second vector (the vector from the key point 1 to the key point 7 ).
  • the third key point and the first key point are located on either side of the second key point, and the second key point is taken as the center of symmetry between the third key point and the first key point.
  • the first vector may be the vector from the key point 4 to the key point 7
  • the second vector may be the vector from the key point 1 to the key point 7
  • the third vector may be a vector at the key point 7 perpendicular to the plane formed by the key point 1, the key point 4, and the key point 7.
  • the first vector may be the vector from the key point 4 to the key point 0
  • the second vector may be the vector from the key point 1 to the key point 0 .
  • the first vector may be the vector from the key point 11 to the key point 7
  • the second vector may be the vector from the key point 14 to the key point 7 .
  • in the embodiments, symmetrical parts of a living being are taken as an example; therefore, when selecting the key points of the reference region, the selected first key point and third key point are symmetrical about the second key point in the reference region.
  • a plane is constructed based on the first vector and the second vector, and the normal vector of the plane may be the orientation of the reference region.
  • determining the orientation of the reference region based on the first vector and the second vector includes:
  • the normal vector of the plane formed by the first vector and the second vector is obtained as the target vector by performing the cross product on the first vector and the second vector such that the orientation of the reference region can be obtained according to the direction of the target vector.
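  • For example, using the crotch key points 1 and 4 and the torso center key point 7 named above, the target vector is the cross product of the first and second vectors; the coordinate values below are illustrative, not from the patent:

```python
import numpy as np

# Illustrative 3D positions for the key points named in the text:
kp1 = np.array([0.30, 0.90, 0.10])  # key point 1, left crotch
kp4 = np.array([0.50, 0.90, 0.12])  # key point 4, right crotch
kp7 = np.array([0.40, 1.20, 0.11])  # key point 7, torso center

first_vector = kp7 - kp4    # vector from key point 4 to key point 7
second_vector = kp7 - kp1   # vector from key point 1 to key point 7

# The cross product yields the normal of the plane the two vectors span;
# its direction is taken as the orientation of the reference region.
target_vector = np.cross(first_vector, second_vector)
orientation = target_vector / np.linalg.norm(target_vector)
```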
  • operation S 230 may further include:
  • performing the deformation processing on the region to be adjusted according to the target vector, a first reference vector, and a second reference vector, where the direction of the first reference vector is perpendicular to a photographing direction of the image, and the direction of the first reference vector is opposite to the direction of the second reference vector.
  • the photographing direction is the direction of the direction vector of the optical axis of the imaging device (for example, a camera, a camera lens, or a video camera) photographing the foregoing image.
  • the shape of the region to be adjusted can be adjusted by performing deformation processing on the region to be adjusted. Adjusting the shape of the region to be adjusted includes at least one of: zooming in the area of the region to be adjusted, zooming out the area of the region to be adjusted, and adjusting the contour of the region to be adjusted.
  • a thin user may have a demand for hip enlargement and the like, and deformation processing can be performed on the image including the user to zoom in the hip region in the image.
  • a plump user may have a demand for a thin hip and the like, and deformation processing can be performed on the image including the user to zoom out the hip region in the image.
  • a thin user may have a demand for breast enlargement and the like, and deformation processing can be performed on the image including the user to zoom in the chest region in the image.
  • a plump user may have a demand for thin breast and the like, and deformation processing can be performed on the image including the user to zoom out the chest region in the image.
  • the region to be adjusted is a symmetrical region
  • the region to be adjusted can be divided into a first sub-region and a second sub-region.
  • the regions to be adjusted in the human body are symmetrical about the center line.
  • taking the region to be adjusted being the hip as an example, if the first sub-region is the left hip, the second sub-region is the right hip, and if the first sub-region is the right hip, the second sub-region is the left hip.
  • taking the region to be adjusted being the chest as an example, if the first sub-region is the right chest of the human body, the second sub-region is the left chest of the human body, and if the first sub-region is the left chest of the human body, the second sub-region is the right chest of the human body.
  • the left hip, the right hip, the left chest, and the right chest of the human body herein are distinguished by left and right parts of the human body.
  • if the orientation of the region to be adjusted is the same as the photographing direction, or the same as the opposite direction of the photographing direction, the first sub-region is the same as the second sub-region; in this case, the same deformation processing is performed on the first sub-region and the second sub-region. If the orientation of the region to be adjusted is different from the photographing direction and different from the opposite direction of the photographing direction, the first sub-region is different from the second sub-region (including different areas and different contours). If the same deformation processing is then performed on the first sub-region and the second sub-region, distortion occurs in the region to be adjusted after the deformation processing (for example, the proportions of the region to be adjusted after the deformation processing are not harmonious).
  • the deformation processing on the first sub-region and the deformation processing on the second sub-region are determined according to the area of the first sub-region and the area of the second sub-region, so as to reduce the probability of the occurrence of distortion in the region to be adjusted after the deformation processing.
  • according to the included angle between the target vector and the first reference vector (hereinafter referred to as a first included angle) and the included angle between the target vector and the second reference vector (hereinafter referred to as a second included angle), the size relationship between the area of the first sub-region and the area of the second sub-region (hereinafter referred to as an area relationship) is determined, and thus the deformation processing of the first sub-region and the deformation processing of the second sub-region are determined according to the area relationship.
  • the deformation processing includes adjusting the area of the region to be adjusted, i.e., including adjusting the area of the first sub-region and adjusting the area of the second sub-region.
  • according to the ratio between the area of the first sub-region and the area of the second sub-region (hereinafter referred to as a first area ratio), the ratio of the area adjustment amount of the first sub-region to the area adjustment amount of the second sub-region is determined.
  • in this way, the proportion between the first sub-region after the deformation processing and the second sub-region after the deformation processing is more harmonious, thereby reducing the probability of the occurrence of distortion of the region to be adjusted after the deformation processing, and achieving a more natural deformation processing effect.
  • the area adjustment amount of the first sub-region (hereinafter referred to as a first area adjustment amount) and the area adjustment amount of the second sub-region (hereinafter referred to as a second area adjustment amount) are determined according to the first included angle and the second included angle.
  • the area of the first sub-region is adjusted according to the first area adjustment amount and the area of the second sub-region is adjusted according to the second area adjustment amount, thereby achieving the deformation processing of the region to be adjusted.
  • the first included angle is equal to the second included angle, it indicates that the target vector is perpendicular to the first reference vector and the target vector is perpendicular to the second reference vector, i.e., the area of the first sub-region is equal to the area of the second sub-region. Therefore, the first area adjustment amount may be made to be equal to the second area adjustment amount. For example, if the orientation of the crotch region is the same as the photographing direction, the first included angle is equal to the second included angle. In this case, the first area adjustment amount is equal to the second area adjustment amount. Therefore, if the left hip region extends outwardly, the right hip region also extends outwardly, and if the left hip region narrows inwardly, the right hip region also narrows inwardly.
  • if the direction of the target vector is the same as the direction of the first reference vector, or the direction of the target vector is the same as the direction of the second reference vector, it indicates that the image merely includes one of the first sub-region and the second sub-region. In this case, merely the first sub-region or the second sub-region is required to be adjusted. For example, taking the hip as an example, if the image merely includes the right hip region, merely the right hip region (i.e., one of the first sub-region and the second sub-region) is required to be adjusted, and if the image merely includes the left hip region, merely the left hip region is required to be adjusted.
  • the first included angle is not equal to the second included angle
  • the direction of the target vector is different from the direction of the first reference vector and the direction of the target vector is different from the direction of the second reference vector
  • the bigger the difference between the first included angle and the second included angle, the bigger the difference between the area of the first sub-region and the area of the second sub-region.
  • the first area adjustment amount and the second area adjustment amount can be determined according to the difference between the first included angle and the second included angle (hereinafter referred to as an included angle difference).
  • the ratio of the first included angle to the second included angle is taken as a second ratio, and the second ratio is positively correlated with the included angle difference. Therefore, the first area adjustment amount and the second area adjustment amount can be determined according to the second ratio.
  • the ratio of the first area adjustment amount to the second area adjustment amount is taken as a first ratio.
  • the absolute value of the included angle difference is positively correlated with the area of the big sub-region, and the absolute value of the included angle difference is negatively correlated with the area of the small sub-region.
  • the area adjustment amount of the big sub-region and the area adjustment amount of the small sub-region are determined to make the area adjustment amount of the big sub-region positively correlated with the absolute value of the included angle difference, and the area adjustment amount of the small sub-region negatively correlated with the absolute value of the included angle difference.
  • for example, the area adjustment amount of the big sub-region = the absolute value of the included angle difference × d, where d is a positive number; and
  • the area adjustment amount of the small sub-region = 1/the absolute value of the included angle difference.
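  • A sketch of this angle-based split, assuming unit-free vectors and angles in radians; the constant d, the handling of a zero angle difference, and the convention for which sub-region counts as the bigger one are illustrative assumptions, not fixed by the patent:

```python
import numpy as np

def included_angle(u, v):
    """Included angle between two vectors, in radians."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def area_adjustment_amounts(target_vector, first_reference_vector, d=1.0):
    """Derive the two sub-regions' area adjustment amounts from the first
    and second included angles, per the proportionality stated above."""
    second_reference_vector = -np.asarray(first_reference_vector)
    a1 = included_angle(target_vector, first_reference_vector)
    a2 = included_angle(target_vector, second_reference_vector)
    diff = abs(a1 - a2)
    if np.isclose(diff, 0.0):
        # Equal included angles: equal sub-region areas, adjust both equally.
        return d, d
    big_amount = diff * d        # grows with the angle difference
    small_amount = 1.0 / diff    # shrinks as the angle difference grows
    # Assumption: the sub-region on the side of the smaller included angle
    # faces the camera more directly and so appears bigger in the image.
    return (big_amount, small_amount) if a1 < a2 else (small_amount, big_amount)
```

  • In practice the 1/|difference| term would be capped, since it grows without bound as the two included angles approach equality.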
  • the left chest region is the first sub-region
  • the right chest region is the second sub-region. If the area of the left chest region is greater than the area of the right chest region, the first area adjustment amount is greater than the second area adjustment amount, and if the area of the left chest region is smaller than the area of the right chest region, the first area adjustment amount is smaller than the second area adjustment amount.
  • the left hip region is the first sub-region
  • the right hip region is the second sub-region. If the area of the left hip region is greater than the area of the right hip region, the first area adjustment amount is greater than the second area adjustment amount, and if the area of the left hip region is smaller than the area of the right hip region, the first area adjustment amount is smaller than the second area adjustment amount.
  • the case where the region to be adjusted is the chest or the hip is taken as an example for description.
  • the region to be adjusted is not limited to the hip and the chest.
  • the region to be adjusted may further include: the shoulder region, the leg region, the back region or the like.
  • the implementation of adjusting the area of the region to be adjusted includes at least one of: adjusting the area of the region to be adjusted while keeping the shape of the contour of the region to be adjusted unchanged; and adjusting the area of the region to be adjusted by adjusting the contour of the region to be adjusted.
  • the first area adjustment amount is made to be greater than the second area adjustment amount, and if the area of the first sub-region is smaller than the area of the second sub-region, the first area adjustment amount is made to be smaller than the second area adjustment amount, so that the proportion of the region to be adjusted after the deformation processing is more harmonious.
  • FIG. 7 to FIG. 9 of the present disclosure are schematic diagrams of hip enlargement deformation effects.
  • the crotch region of the human body is oriented toward the left, and because the hip region nearer to the left is hidden in the image, hip enlargement processing (i.e., the deformation processing above) is merely required to be performed on the hip region of the human body nearer to the right.
  • the crotch region of the human body is oriented toward the right, and because the hip region of the human body nearer to the right is hidden in the image, hip enlargement processing is merely required to be performed on the hip region of the human body nearer to the left. As shown in FIG. 8 , after the hip enlargement processing, it can be obviously seen in the right drawing of FIG. 8 that the left hip region is augmented.
  • the orientation of the reference region (i.e., the hip region) shown in FIG. 9 is different from the first reference vector, and the orientation of the reference region is different from the direction of the second reference vector.
  • the size relationship between the area of the left hip region and the area of the right hip region can be determined according to the first included angle (the included angle between the orientation of the reference region and the first reference vector) and the second included angle (the included angle between the orientation of the reference region and the second reference vector), thereby achieving the hip enlargement processing.
  • the area adjustment amount of the left hip region is smaller than the area adjustment amount of the right hip region
  • an image processing apparatus including:
  • an obtaining unit 11 configured to obtain key points of a reference region of an object in an image
  • a determining unit 12 configured to determine the orientation of the reference region according to the key points of the reference region
  • a processing unit 13 configured to perform deformation processing on a region to be adjusted of the object based on the orientation of the reference region, where the region to be adjusted is the same as or different from the reference region.
  • the determining unit 12 is configured to:
  • the at least three key points include a first key point, a second key point, and a third key point, and the first key point and the third key point are symmetrical about the second key point.
  • the determining unit 12 is configured to:
  • the determining unit 12 is configured to:
  • the processing unit 13 is configured to:
  • the processing unit 13 is configured to:
  • in response to the region to be adjusted being a region of a first type, determine that the orientation of the region to be adjusted is opposite to the orientation of the reference region, where the region of the first type includes: a hip region; and
  • in response to the region to be adjusted being a region of a second type, determine that the orientation of the region to be adjusted is the same as the orientation of the reference region, where the region of the second type includes at least one of: a face region, a shoulder region, or a crotch region.
  • the processing unit 13 is configured to:
  • the region to be adjusted includes a first sub-region and a second sub-region, and the first sub-region and the second sub-region are symmetrical about the center line of the region to be adjusted.
  • the processing unit 13 is configured to:
  • the direction of the target vector is different from the direction of the first reference vector and the direction of the target vector is different from the direction of the second reference vector, adjust the area of the first sub-region and the area of the second sub-region according to a first included angle between the target vector and the first reference vector and a second included angle between the target vector and the second reference vector.
  • the processing unit 13 is configured to:
  • the processing unit 13 is configured to:
  • the processing unit 13 is further configured to:
  • an image device including:
  • a memory; and
  • a processor connected to the memory, and configured to execute computer executable instructions stored on the memory to implement the image processing method provided in one or more of the foregoing embodiments, for example, one or more of the image processing methods as shown in FIG. 2 and FIG. 4.
  • the memory may be different types of memories, such as a random access memory, a Read-Only Memory (ROM), and a flash memory.
  • the memory may be used for storing information, such as computer executable instructions.
  • the computer executable instructions may be different program instructions, such as target program instructions and/or source program instructions.
  • the processor may be different types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
  • the processor may be connected to the memory by means of a bus.
  • the bus may be an integrated circuit bus and the like.
  • the image device may further include a communication interface.
  • the communication interface may include a network interface, such as a local area network interface and a transceiving antenna.
  • the communication interface is also connected to the processor, and can be used for information receiving and transmitting.
  • the electronic device further includes a man-machine interactive interface.
  • the man-machine interactive interface may include different input/output devices, such as a keyboard and a touch screen.
  • the embodiments provide a computer storage medium, where the computer storage medium stores computer executable instructions, and after the computer executable instructions are executed, the image processing method provided in one or more of the foregoing embodiments, for example, one or more of the image processing methods shown in FIG. 2 and FIG. 4 , can be implemented.
  • the computer storage medium may be various recording media having a recording function, for example, different types of storage media such as a CD, a floppy disk, a hard disk drive, a magnetic tape, an optical disk, a USB flash disk, and a mobile hard disk drive.
  • the computer storage medium may be a non-transitory storage medium, and the computer storage medium may be read by a processor, so that after the computer executable instructions stored on the computer storage medium are obtained and executed by the processor, the information processing method provided by any of the foregoing technical solutions can be implemented, for example, the information processing method applied to a terminal device or the information processing method applied to a server is executed.
  • the embodiments further provide a computer program product, where the computer program product includes computer executable instructions, and after the computer executable instructions are executed, the image processing method provided in one or more of the foregoing embodiments, for example, one or more of the image processing methods shown in FIG. 2 and FIG. 4 , can be implemented.
  • the computer program product includes a computer program tangibly included in the computer storage medium.
  • the computer program includes program code for executing the method shown in the flowchart, and the program code includes corresponding instructions for executing the operations of the method provided in the embodiments of the present application.
  • the device and method disclosed in the embodiments provided in the present application may be implemented in other manners.
  • the device embodiments described above are merely exemplary.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections among the components may be implemented by means of some interfaces.
  • the indirect couplings or communication connections between the devices or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, may be located at one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each of the units may exist as an independent unit, or two or more units may be integrated into one unit, and the integrated unit may be implemented in the form of hardware, or in the form of a combination of hardware and software functional units.
  • the foregoing method embodiments may be achieved by a program instructing related hardware; the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the operations of the foregoing method embodiments are performed; moreover, the foregoing storage medium includes various media capable of storing program codes, such as a portable storage device, a ROM, a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
US17/073,769 2019-01-18 2020-10-19 Image processing method and apparatus, image device, and storage medium Active 2040-04-01 US11538207B2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201910049830 2019-01-18
CN201910193649.2A CN111460871B (zh) 2019-01-18 2019-03-14 Image processing method and apparatus, and storage medium
CN201910191918.1 2019-03-14
CN201910193649.2 2019-03-14
CN201910191918.1A CN111460870A (zh) 2019-01-18 2019-03-14 Method and apparatus for determining orientation of a target, electronic device, and storage medium
PCT/CN2019/130970 WO2020181900A1 (zh) 2019-01-18 2019-12-31 Image processing method and apparatus, image device, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130970 Continuation WO2020181900A1 (zh) 2019-01-18 2019-12-31 Image processing method and apparatus, image device, and storage medium

Publications (2)

Publication Number Publication Date
US20210035344A1 US20210035344A1 (en) 2021-02-04
US11538207B2 true US11538207B2 (en) 2022-12-27

Family

ID=71679913

Family Applications (5)

Application Number Title Priority Date Filing Date
US17/073,769 Active 2040-04-01 US11538207B2 (en) 2019-01-18 2020-10-19 Image processing method and apparatus, image device, and storage medium
US17/102,305 Active US11741629B2 (en) 2019-01-18 2020-11-23 Controlling display of model derived from captured image
US17/102,331 Abandoned US20210074004A1 (en) 2019-01-18 2020-11-23 Image processing method and apparatus, image device, and storage medium
US17/102,364 Abandoned US20210074005A1 (en) 2019-01-18 2020-11-23 Image processing method and apparatus, image device, and storage medium
US17/102,373 Active US11468612B2 (en) 2019-01-18 2020-11-23 Controlling display of a model based on captured images and determined information

Family Applications After (4)

Application Number Title Priority Date Filing Date
US17/102,305 Active US11741629B2 (en) 2019-01-18 2020-11-23 Controlling display of model derived from captured image
US17/102,331 Abandoned US20210074004A1 (en) 2019-01-18 2020-11-23 Image processing method and apparatus, image device, and storage medium
US17/102,364 Abandoned US20210074005A1 (en) 2019-01-18 2020-11-23 Image processing method and apparatus, image device, and storage medium
US17/102,373 Active US11468612B2 (en) 2019-01-18 2020-11-23 Controlling display of a model based on captured images and determined information

Country Status (6)

Country Link
US (5) US11538207B2 (ko)
JP (4) JP7109585B2 (ko)
KR (4) KR20210011424A (ko)
CN (7) CN111460870A (ko)
SG (5) SG11202010399VA (ko)
WO (1) WO2020181900A1 (ko)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210241020A1 (en) * 2019-12-25 2021-08-05 Beijing Sense Time Technology Development Co., Ltd. Method and device for processing image, and storage medium

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910393B (zh) * 2018-09-18 2023-03-24 北京市商汤科技开发有限公司 数据处理方法及装置、电子设备及存储介质
US11610414B1 (en) * 2019-03-04 2023-03-21 Apple Inc. Temporal and geometric consistency in physical setting understanding
US10902618B2 (en) * 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system
EP4042374A4 (en) * 2019-10-11 2024-01-03 Beyeonics Surgical Ltd SYSTEM AND METHOD FOR ELECTRONICALLY ASSISTED MEDICAL PROCEDURES
KR102610840B1 (ko) * 2019-12-19 2023-12-07 Electronics and Telecommunications Research Institute System and method for automatically recognizing user motion
US11504625B2 (en) 2020-02-14 2022-11-22 Electronic Arts Inc. Color blindness diagnostic system
US11648480B2 (en) 2020-04-06 2023-05-16 Electronic Arts Inc. Enhanced pose generation based on generative modeling
US11232621B2 (en) 2020-04-06 2022-01-25 Electronic Arts Inc. Enhanced animation generation based on conditional modeling
CN111881838B (zh) * 2020-07-29 2023-09-26 Tsinghua University Video analysis method and device for movement disorder assessment with privacy protection
US11403801B2 (en) 2020-09-18 2022-08-02 Unity Technologies Sf Systems and methods for building a pseudo-muscle topology of a live actor in computer animation
CN114333228B (zh) * 2020-09-30 2023-12-08 Beijing Ingenic Semiconductor Co., Ltd. Intelligent video care method for infants
CN112165630B (zh) * 2020-10-16 2022-11-15 Guangzhou Huya Technology Co., Ltd. Image rendering method and apparatus, electronic device, and storage medium
CN112932468A (zh) * 2021-01-26 2021-06-11 BOE Technology Group Co., Ltd. System and method for monitoring muscle motor ability
US11887232B2 (en) 2021-06-10 2024-01-30 Electronic Arts Inc. Enhanced system for generation of facial models and animation
US20230177881A1 (en) * 2021-07-06 2023-06-08 KinTrans, Inc. Automatic body movement recognition and association system including smoothing, segmentation, similarity, pooling, and dynamic modeling
WO2023079987A1 (ja) * 2021-11-04 2023-05-11 Sony Group Corporation Distribution apparatus, distribution method, and program
CN114115544B (zh) * 2021-11-30 2024-01-05 Hangzhou Hikvision Digital Technology Co., Ltd. Human-computer interaction method, three-dimensional display device, and storage medium
KR20230090852A (ko) * 2021-12-15 2023-06-22 Samsung Electronics Co., Ltd. Electronic device and method for acquiring three-dimensional skeleton data of a hand captured using a plurality of cameras
CN117315201A (zh) * 2022-06-20 2023-12-29 The Education University of Hong Kong System for animating an avatar in a virtual world
CN115564689A (zh) * 2022-10-08 2023-01-03 Shanghai Yukan Technology Co., Ltd. Artificial intelligence image processing method and system based on block-wise processing


Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0816820A (ja) * 1994-04-25 1996-01-19 Fujitsu Ltd Three-dimensional animation creation apparatus
JP5076744B2 (ja) * 2007-08-30 2012-11-21 Seiko Epson Corporation Image processing apparatus
US8588465B2 (en) * 2009-01-30 2013-11-19 Microsoft Corporation Visual target tracking
KR101616926B1 (ko) * 2009-09-22 2016-05-02 Samsung Electronics Co., Ltd. Image processing apparatus and method
AU2011203028B1 (en) * 2011-06-22 2012-03-08 Microsoft Technology Licensing, Llc Fully automatic dynamic articulated model calibration
CN104103090A (zh) * 2013-04-03 2014-10-15 Beijing Samsung Telecommunication Technology Research Co., Ltd. Image processing method, personalized human body display method, and image processing system thereof
CN103268158B (zh) * 2013-05-21 2017-09-08 Shanghai Sumeng Information Technology Co., Ltd. Method and apparatus for simulating gravity sensor data, and electronic device
JP6049202B2 (ja) * 2013-10-25 2016-12-21 FUJIFILM Corporation Image processing apparatus, method, and program
JP6311372B2 (ja) * 2014-03-13 2018-04-18 Omron Corporation Image processing apparatus and image processing method
CN106023288B (zh) * 2016-05-18 2019-11-15 Zhejiang University Image-based dynamic avatar construction method
CN106296778B (zh) * 2016-07-29 2019-11-15 NetEase (Hangzhou) Network Co., Ltd. Virtual object motion control method and apparatus
CN106920274B (zh) * 2017-01-20 2020-09-04 Nanjing Kaiwei Network Technology Co., Ltd. Face modeling method for rapidly converting 2D keypoints to 3D blend-shape deformation on mobile devices
CN107272884A (zh) * 2017-05-09 2017-10-20 Nie Maoyuan Control method based on virtual reality technology and control system thereof
CN107154069B (zh) * 2017-05-11 2021-02-02 Shanghai Weiman Network Technology Co., Ltd. Data processing method and system based on virtual characters
CN107220933B (zh) * 2017-05-11 2021-09-21 Shanghai United Imaging Healthcare Co., Ltd. Reference line determination method and system
CN108876879B (zh) * 2017-05-12 2022-06-14 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus, computer device, and storage medium for implementing facial animation
CN107578462A (zh) * 2017-09-12 2018-01-12 Beijing Urban Systems Engineering Research Center Skeletal animation data processing method based on real-time motion capture
CN107958479A (zh) * 2017-12-26 2018-04-24 Nanjing Kaiwei Network Technology Co., Ltd. Method for implementing 3D face augmented reality on mobile devices
CN108062783A (zh) * 2018-01-12 2018-05-22 Beijing Mizhi Technology Co., Ltd. Facial animation mapping system and method
CN108648280B (zh) * 2018-04-25 2023-03-31 Shenzhen SenseTime Technology Co., Ltd. Virtual character driving method and apparatus, electronic device, and storage medium
CN108829232B (zh) * 2018-04-26 2021-07-23 Shenzhen Tongwei Communication Technology Co., Ltd. Deep-learning-based method for acquiring three-dimensional coordinates of human skeletal joint points
CN108830784A (zh) * 2018-05-31 2018-11-16 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, and computer storage medium
CN108830783B (zh) * 2018-05-31 2021-07-02 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, and computer storage medium
CN108765272B (zh) * 2018-05-31 2022-07-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and readable storage medium
CN109101901B (zh) * 2018-07-23 2020-10-27 Beijing Megvii Technology Co., Ltd. Human action recognition, neural network generation method therefor, apparatus, and electronic device
CN109191593A (zh) * 2018-08-27 2019-01-11 Baidu Online Network Technology (Beijing) Co., Ltd. Motion control method, apparatus, and device for a virtual three-dimensional model

Patent Citations (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4561066A (en) 1983-06-20 1985-12-24 Gti Corporation Cross product calculator with normalized output
US6657628B1 (en) 1999-11-24 2003-12-02 Fuji Xerox Co., Ltd. Method and apparatus for specification, control and modulation of social primitives in animated characters
JP2002024807A (ja) 2000-07-07 2002-01-25 National Institute Of Advanced Industrial & Technology Object motion tracking method and recording medium
JP2003150977A (ja) 2001-11-15 2003-05-23 Japan Science & Technology Corp Three-dimensional human animation generation system
US20030152289A1 (en) * 2002-02-13 2003-08-14 Eastman Kodak Company Method and system for determining image orientation
US20180158196A1 (en) 2003-02-11 2018-06-07 Sony Interactive Entertainment Inc. Methods for Capturing Images of Markers of a Person to Control Interfacing With an Application
US10410359B2 (en) 2003-02-11 2019-09-10 Sony Interactive Entertainment Inc. Methods for capturing images of markers of a person to control interfacing with an application
US20160098095A1 (en) 2004-01-30 2016-04-07 Electronic Scripting Products, Inc. Deriving Input from Six Degrees of Freedom Interfaces
JP2007004732A (ja) 2005-06-27 2007-01-11 Matsushita Electric Ind Co Ltd Image generation apparatus and image generation method
US20070126743A1 (en) 2005-12-01 2007-06-07 Chang-Joon Park Method for estimating three-dimensional position of human joint using sphere projecting technique
US20070146371A1 (en) 2005-12-22 2007-06-28 Behzad Dariush Reconstruction, Retargetting, Tracking, And Estimation Of Motion For Articulated Systems
JP2007333690A (ja) 2006-06-19 2007-12-27 Sony Corp Motion capture apparatus, motion capture method, and motion capture program
JP2010061646A (ja) 2008-08-08 2010-03-18 Make Softwear KK Image processing apparatus, image output apparatus, image processing method, and computer program
US20100149341A1 (en) 2008-12-17 2010-06-17 Richard Lee Marks Correcting angle error in a tracking system
JP2012516504A (ja) 2009-01-30 2012-07-19 Microsoft Corporation Visual target tracking
US20100197399A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking
US20100197391A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking
US20100322111A1 (en) 2009-06-23 2010-12-23 Zhuanke Li Methods and systems for realizing interaction between video input and virtual network scene
CN101930284A (zh) 2009-06-23 2010-12-29 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus, and system for realizing interaction between a video input and a virtual network scene
US9247201B2 (en) 2009-06-23 2016-01-26 Tencent Holdings Limited Methods and systems for realizing interaction between video input and virtual network scene
US20120218262A1 (en) 2009-10-15 2012-08-30 Yeda Research And Development Co. Ltd. Animation of photo-images via fitting of combined models
US20110249865A1 (en) 2010-04-08 2011-10-13 Samsung Electronics Co., Ltd. Apparatus, method and computer-readable medium providing marker-less motion capture of human
US9177409B2 (en) 2010-04-29 2015-11-03 Naturalmotion Ltd Animating a virtual object within a virtual world
US20130243255A1 (en) 2010-09-07 2013-09-19 Microsoft Corporation System for fast, probabilistic skeletal tracking
JP2012234541A (ja) 2011-04-29 2012-11-29 National Cheng Kung Univ Body motion staff notation, and image processing module, motion replication module, and generation module therefor
US20140156219A1 (en) * 2011-06-24 2014-06-05 Trimble Navigation Limited Determining tilt angle and tilt direction using image processing
US10319133B1 (en) 2011-11-13 2019-06-11 Pixar Posing animation hierarchies with dynamic posing roots
US9058514B2 (en) 2012-01-31 2015-06-16 Electronics And Telecommunications Research Institute Apparatus and method for estimating joint structure of human body
US20130195330A1 (en) 2012-01-31 2013-08-01 Electronics And Telecommunications Research Institute Apparatus and method for estimating joint structure of human body
US9626788B2 (en) 2012-03-06 2017-04-18 Adobe Systems Incorporated Systems and methods for creating animations using human faces
US20160163084A1 (en) 2012-03-06 2016-06-09 Adobe Systems Incorporated Systems and methods for creating and distributing modifiable animated video messages
CN102824176A (zh) 2012-09-24 2012-12-19 Nantong University Method for measuring upper limb joint range of motion based on a Kinect sensor
US20210026516A1 (en) 2013-01-15 2021-01-28 Ultrahaptics IP Two Limited Dynamic user interactions for display control and measuring degree of completeness of user gestures
CN103971113A (zh) 2013-02-04 2014-08-06 Wistron Corporation Image identification method, electronic device, and computer program product
US20140219557A1 (en) 2013-02-04 2014-08-07 Wistron Corporation Image identification method, electronic device, and computer program product
US20180345116A1 (en) 2013-06-13 2018-12-06 Sony Corporation Information processing device, storage medium, and information processing method
US20150003687A1 (en) 2013-07-01 2015-01-01 Kabushiki Kaisha Toshiba Motion information processing apparatus
US20150036879A1 (en) 2013-07-30 2015-02-05 Canon Kabushiki Kaisha Posture estimating apparatus, posture estimating method and storing medium
US20150161797A1 (en) 2013-12-09 2015-06-11 Sung Hee Park Techniques for disparity estimation using camera arrays for high dynamic range imaging
JP2015116308A (ja) 2013-12-18 2015-06-25 Mitsubishi Electric Corporation Gesture registration apparatus
US20150199824A1 (en) 2014-01-10 2015-07-16 Electronics And Telecommunications Research Institute Apparatus and method for detecting multiple arms and hands by using three-dimensional image
JP2015148706A (ja) 2014-02-06 2015-08-20 Japan Broadcasting Corporation Sign language word classification information generation apparatus and program therefor, and sign language word retrieval apparatus and program therefor
US20170249745A1 (en) * 2014-05-21 2017-08-31 Millennium Three Technologies, Inc. Fiducial marker patterns, their automatic detection in images, and applications thereof
US20160027178A1 (en) * 2014-07-23 2016-01-28 Sony Corporation Image registration system with non-rigid registration and method of operation thereof
JP2018505462A (ja) 2014-12-11 2018-02-22 Intel Corporation Avatar selection mechanism
US20160267699A1 (en) 2015-03-09 2016-09-15 Ventana 3D, Llc Avatar control system
CN104700433A (zh) 2015-03-24 2015-06-10 National University of Defense Technology Vision-based real-time full-body human motion capture method and system
US10022628B1 (en) 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
CN104866101A (zh) 2015-05-27 2015-08-26 Shiyou (Beijing) Technology Co., Ltd. Real-time interactive control method and apparatus for virtual objects
US20170352092A1 (en) * 2015-08-07 2017-12-07 SelfieStyler, Inc. Virtual garment carousel
CN108027597A (zh) 2015-08-13 2018-05-11 AVL List GmbH System for monitoring technical equipment
US20180225842A1 (en) 2016-01-21 2018-08-09 Tencent Technology (Shenzhen) Company Limited Method and apparatus for determining facial pose angle, and computer storage medium
JP2017138915A (ja) 2016-02-05 2017-08-10 Bandai Namco Entertainment Inc. Image generation system and program
US20170270709A1 (en) 2016-03-07 2017-09-21 Bao Tran Systems and methods for fitting product
JP2017191576A (ja) 2016-04-15 2017-10-19 Canon Inc. Information processing apparatus, control method for information processing apparatus, and program
CN106251396A (zh) 2016-07-29 2016-12-21 Appmagics Tech (Beijing) Limited Real-time control method and system for three-dimensional models
US20190156574A1 (en) 2016-07-29 2019-05-23 Appmagics Tech (Beijing) Limited Method and system for real-time control of three-dimensional models
CN108472028A (zh) 2016-09-16 2018-08-31 Verb Surgical Inc. Robotic arm
US20190220657A1 (en) * 2016-10-11 2019-07-18 Fujitsu Limited Motion recognition device and motion recognition method
CN108229239A (zh) 2016-12-09 2018-06-29 Wuhan Douyu Network Technology Co., Ltd. Image processing method and apparatus
US20180165860A1 (en) 2016-12-13 2018-06-14 Korea Advanced Institute Of Science And Technology Motion edit method and apparatus for articulated object
JP2018119833A (ja) 2017-01-24 2018-08-02 Canon Inc. Information processing apparatus, system, estimation method, computer program, and storage medium
JP2018169720A (ja) 2017-03-29 2018-11-01 Fujitsu Limited Motion detection system
WO2018207388A1 (ja) 2017-05-12 2018-11-15 Brain Co., Ltd. Program, apparatus, and method for motion capture
CN108874119A (zh) 2017-05-16 2018-11-23 Finch Technologies Ltd. Tracking arm movements to generate inputs for computer systems
US10534431B2 (en) 2017-05-16 2020-01-14 Finch Technologies Ltd. Tracking finger movements to generate inputs for computer systems
US20180335843A1 (en) 2017-05-16 2018-11-22 Finch Technologies Ltd. Tracking finger movements to generate inputs for computer systems
CN108205654A (zh) 2017-09-30 2018-06-26 Beijing SenseTime Technology Development Co., Ltd. Video-based action detection method and apparatus
US20190251341A1 (en) 2017-12-08 2019-08-15 Huawei Technologies Co., Ltd. Skeleton Posture Determining Method and Apparatus, and Computer Readable Storage Medium
CN108229332A (zh) 2017-12-08 2018-06-29 Huawei Technologies Co., Ltd. Skeleton posture determination method and apparatus, and computer-readable storage medium
CN108227931A (zh) 2018-01-23 2018-06-29 Beijing SenseTime Technology Development Co., Ltd. Method, device, system, program, and storage medium for controlling a virtual character
CN108357595A (zh) 2018-01-26 2018-08-03 Zhejiang University Model-based self-balancing unmanned bicycle and model-driven control method thereof
CN108305321A (zh) 2018-02-11 2018-07-20 Xie Fubao Method and apparatus for real-time reconstruction of a stereoscopic 3D hand skeleton model based on a binocular color imaging system
CN108364254A (zh) 2018-03-20 2018-08-03 Beijing Qihoo Technology Co., Ltd. Image processing method and apparatus, and electronic device
US20190318151A1 (en) * 2018-04-13 2019-10-17 Omron Corporation Image analysis apparatus, method, and program
CN108427942A (zh) 2018-04-22 2018-08-21 Guangzhou Mailun Information Technology Co., Ltd. Deep-learning-based palm detection and keypoint localization method
CN108830200A (zh) 2018-05-31 2018-11-16 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, and computer storage medium
CN108765274A (zh) 2018-05-31 2018-11-06 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, and computer storage medium
CN109035415A (zh) 2018-07-03 2018-12-18 Baidu Online Network Technology (Beijing) Co., Ltd. Virtual model processing method, apparatus, device, and computer-readable storage medium
CN109117726A (zh) 2018-07-10 2019-01-01 Shenzhen SuperD Technology Co., Ltd. Identification and authentication method, apparatus, system, and storage medium
US20210166012A1 (en) * 2018-07-18 2021-06-03 Nec Corporation Information processing apparatus, control method, and non-transitory storage medium
CN109146769A (zh) 2018-07-24 2019-01-04 Beijing SenseTime Technology Development Co., Ltd. Image processing method and apparatus, image processing device, and storage medium
CN109146629A (zh) 2018-08-16 2019-01-04 Lianyungang Wujiang Digital Technology Co., Ltd. Target object locking method and apparatus, computer device, and storage medium
CN109242789A (zh) 2018-08-21 2019-01-18 Chengdu Kuangshi Jinzhi Technology Co., Ltd. Image processing method, image processing apparatus, and storage medium
CN109325450A (zh) 2018-09-25 2019-02-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, storage medium, and electronic device
US20200126295A1 (en) * 2018-10-22 2020-04-23 The Hong Kong Polytechnic University Method and/or system for reconstructing from images a personalized 3d human body model and thereof
CN109376671A (zh) 2018-10-30 2019-02-22 Beijing SenseTime Technology Development Co., Ltd. Image processing method, electronic device, and computer-readable medium
CN109816773A (zh) 2018-12-29 2019-05-28 Shenzhen Realis Multimedia Technology Co., Ltd. Driving method, plug-in, and terminal device for a skeletal model of a virtual character
CN110139115A (zh) 2019-04-30 2019-08-16 Guangzhou Huya Information Technology Co., Ltd. Keypoint-based avatar pose control method and apparatus, and electronic device
CN110688008A (zh) 2019-09-27 2020-01-14 Guizhou Xiaoai Robot Technology Co., Ltd. Avatar interaction method and apparatus
CN110889382A (zh) 2019-11-29 2020-03-17 Shenzhen SenseTime Technology Co., Ltd. Avatar rendering method and apparatus, electronic device, and storage medium

Non-Patent Citations (56)

* Cited by examiner, † Cited by third party
Title
"A Framework for Automated Measurement of the Intensity of Non-Posed Facial Action Units", 2009, Mohammad H. Mahoor, Steven Dadavid, Daniel S. Messinger and Jeffrey F. Cohn, IEEE Conference on Computer Vision and Patten Recognition, CVPR 2009; pp. 74-80.
"Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation", 2017, Yu Chen, Chunhua Shen, Xiu-Shen Wei, Lingqiano Liu and Jian Yang, Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1221-1230.
"Based on Operational Degree of Virtual Human Hand Motion Control of Obstacle Avoidance Algorithm", 2014, Dai-Rui She, Jung-Chan Yang, Yue Shi and Da-Ling Yang, Computer Simulation, vol. 31, No. 12, 5 pgs.
"Dynamic Facial Expression Analysis and Synthesis With MPEG-4 Facial Animation Parameters", Oct. 2008, Yongmian Zhang, Qiang Ji, Zhiwei Zhu and Beifang Yi, IEEE Transactions on Circuits and Systems for Video Technology, vol. 18 No. 10, pp. 1383-1396.
"Facial Expression Analysis", 2005, Ying-Li Tian, Takeo Kanade and Jeffrey F. Cohn, Chapter 11 in "Handbook of Face Recognition;" Springer, New York, NY. pages 247-275.
"Indirect Adaptive Fuzzy Decoupling Control With a Lower Limb Exoskeleton", Aug. 2016, Chih-Wei Lin, Shun-Eng Su and Ming-Change Chen, Proceedings of 2016 International Conference on Advanced Robotics and Intelligent Systems, Taipei, Taiwan, 5 pgs.
"Operator Attitude Algorithm for Telerobotic Nursing System"; Dec. 2016, Guo-Yu Zuo, Shuang-Yue, Yu and Dao-Xiong Gong; Acta Automatica Sinica,vol. 42, No. 12, 10 pgs.
"QuaterNet: A Quaternion-based Recurrent Model for Human Motion", Jul. 2018, Dario Pavilo, David Grangier and Michael Auli, British Machine Vision Conference (BMVC), available on line arXiv. org-1805.06485v2, pp. 1-14.
"Recurrent Network Models for Human Dynamics", 2015, Katerina Fragkiadaki, Sergey Levine, Panna Felsen and Jitendra Malik, Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 4346-4354.
"Research on Motion Control Technology of Virtual Human's Lower Limb Based on Optical Motion Capture Data" Feb. 2015; Liang Feng, Zhang Zhili, Li Xiangyang, Tang Zhibo and Ma Chao; Journal of System Simulation, vol. 27, No. 2, 9 pgs.
"Subtly Different Facial Expression Recognition and Expression Intensity Estimation", Jun. 1998, James Jenn-Jier Lien, Takeo Kanade, Jeffrey F. Cohn and Ching-Chung Li, Published in the Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 853-859.
"The Design and Implementation of a Kinect-Based Framework for Selective Human Activity Tracking", Oct. 2016, Roanna Lun, Connor Gordon and Wenbing Zhao, IEEE International Conference on Systems, Man, and Cybernetics SMC 2016, Budapest, Hungary, 6 pages.
"The design and implementation of Kinect-based motion capture system"; Apr. 2018; Zuoyun Zhang; China Master's Theses Full-text Database, Information Science, Apr. 15, 2018, 99 pgs.
"The Pose Knows: Video Forecasting by Generating Pose Futures", 2017, Jacob Walker, Kenneth Marino, Abhinav Gupta and Martial Hebert, Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 3352-3361.
"Virtual Fashion Show Using Marker-less Motion Capture", Mar. 2005, Ryuzo Okada, Bjorn Tenger, Taukasa Ike and Nobuhire Kondoh, The Institute of Electronics, Information and Communication Engineers, Technical report of IEICE, TL2004-47, PRMU2004-215, 6 pgs.
Advisory Action of the U.S. Appl. No. 17/102,305, dated May 27, 2022, 4 pgs.
Advisory Action of the U.S. Appl. No. 17/102,331, dated May 27, 2022, 4 pgs.
Advisory Action of the U.S. Appl. No. 17/102,364, dated May 27, 2022, 6 pgs.
English translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2019/130970, dated Apr. 1, 2020, 5 pgs.
English translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2020/072520, dated Apr. 23, 2020, 6 pgs.
English translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2020/072526, dated Apr. 21, 2020, 7 pgs.
English translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2020/072549, dated Apr. 21, 2020, 6 pgs.
English translation of the Written Opinion of the International Search Authority in the international application No. PCT/CN2020/072550, dated Apr. 21, 2020, 7 pgs.
Final Office Action of the U.S. Appl. No. 17/102,305, dated Mar. 15, 2022, 64 pgs.
Final Office Action of the U.S. Appl. No. 17/102,331, dated Mar. 15, 2022, 55 pgs.
Final Office Action of the U.S. Appl. No. 17/102,364, dated Mar. 17, 2022, 59 pgs.
First Office Action of the Chinese application No. 201910365188.2, dated Apr. 6, 2021, 29 pgs.
First Office Action of the Indian application No. 202027050357, dated Sep. 10, 2021, 7 pgs.
First Office Action of the Indian application No. 202027050399, dated Sep. 6, 2021, 6 pgs.
First Office Action of the Indian application No. 202027050400, dated Dec. 13, 2021, 6 pgs.
First Office Action of the Indian application No. 202027050802, dated Sep. 27, 2021, 6 pgs.
First Office Action of the Japanese application No. 2020-558530, dated Dec. 14, 2021, 6 pgs.
First Office Action of the Japanese application No. 2020-565269, dated Jan. 5, 2022, 10 pgs.
First Office Action of the Japanese application No. 2020-567116, dated Jan. 20, 2022, 12 pgs.
First Office Action of the Japanese application No. 2021-516694, dated May 18, 2022, 14 pgs.
International Search Report in the international application No. PCT/CN2019/130970, dated Apr. 1, 2020, 3 pgs.
International Search Report in the international application No. PCT/CN2020/072520, dated Apr. 23, 2020, 2 pgs.
International Search Report in the international application No. PCT/CN2020/072526, dated Apr. 21, 2020, 3 pgs.
International Search Report in the international application No. PCT/CN2020/072549, dated Apr. 21, 2020, 2 pgs.
International Search Report in the international application No. PCT/CN2020/072550, dated Apr. 21, 2020, 3 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,305, dated Aug. 4, 2022, 43 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,305, dated Oct. 4, 2021, 57 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,331, dated Jul. 5, 2022, 53 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,331, dated Oct. 6, 2021, 55 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,364, dated Aug. 18, 2022, 59 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,364, dated Oct. 4, 2021, 61 pgs.
Non-Final Office Action of the U.S. Appl. No. 17/102,373, dated Dec. 9, 2021, 67 pgs.
Notice of Allowance of the Japanese application No. 2020-559380, dated Dec. 21, 2021, 5 pgs.
Notice of Allowance of the Japanese application No. 2020-565269, dated Apr. 12, 2022, 5 pgs.
Notice of Allowance of the U.S. Appl. No. 17/102,373, dated Jun. 9, 2022, 26 pgs.
Notice of Allowance of the U.S. Appl. No. 17/102,373, dated Sep. 14, 2022, 11 pgs.
Notice of Rejection of the Japanese application No. 2020-567116, dated May 17, 2022, 7 pgs.
Office Action of the Indian application No. 202047053772, dated Dec. 15, 2021, 6 pgs.
Search Report by Registered Search Organization of the Japanese application No. 2021-516694, dated Apr. 5, 2022, 34 pgs.
Second Office Action of the Chinese application No. 201910365188.2, dated Jul. 9, 2021, 28 pgs.
Third Office Action of the Chinese application No. 201910365188.2, dated Sep. 26, 2021, 27 pgs.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210241020A1 (en) * 2019-12-25 2021-08-05 Beijing Sense Time Technology Development Co., Ltd. Method and device for processing image, and storage medium
US11734829B2 (en) * 2019-12-25 2023-08-22 Beijing Sensetime Technology Development Co., Ltd. Method and device for processing image, and storage medium

Also Published As

Publication number Publication date
US20210074003A1 (en) 2021-03-11
WO2020181900A1 (zh) 2020-09-17
KR20210011985A (ko) 2021-02-02
US11741629B2 (en) 2023-08-29
US11468612B2 (en) 2022-10-11
US20210074004A1 (en) 2021-03-11
CN111460871A (zh) 2020-07-28
JP2021518960A (ja) 2021-08-05
CN114399826A (zh) 2022-04-26
US20210035344A1 (en) 2021-02-04
JP7109585B2 (ja) 2022-07-29
KR20210011424A (ko) 2021-02-01
CN111460875A (zh) 2020-07-28
US20210074005A1 (en) 2021-03-11
SG11202011595QA (en) 2020-12-30
KR20210011984A (ko) 2021-02-02
CN111460871B (zh) 2023-12-22
US20210074006A1 (en) 2021-03-11
SG11202010399VA (en) 2020-11-27
JP2021518023A (ja) 2021-07-29
CN111460873A (zh) 2020-07-28
CN111460874A (zh) 2020-07-28
CN111460872A (zh) 2020-07-28
SG11202011600QA (en) 2020-12-30
CN111460870A (zh) 2020-07-28
JP2021525431A (ja) 2021-09-24
JP7001841B2 (ja) 2022-01-20
JP2021524113A (ja) 2021-09-09
JP7061694B2 (ja) 2022-04-28
SG11202011599UA (en) 2020-12-30
SG11202011596WA (en) 2020-12-30
CN111460872B (zh) 2024-04-16
KR20210011425A (ko) 2021-02-01
CN111460875B (zh) 2022-03-01

Similar Documents

Publication Publication Date Title
US11538207B2 (en) Image processing method and apparatus, image device, and storage medium
US20210319621A1 (en) Face modeling method and apparatus, electronic device and computer-readable medium
TWI535285B (zh) Conference system, surveillance system, image processing device, image processing method and image processing program, etc.
US11238569B2 (en) Image processing method and apparatus, image device, and storage medium
CN111353930B (zh) Data processing method and apparatus, electronic device, and storage medium
JP2010165248A (ja) Image processing apparatus, image collation method, and program
JP2019083402A (ja) Image processing apparatus, image processing system, image processing method, and program
JP7064257B2 (ja) Image depth determination method, living body recognition method, circuit, device, and storage medium
JP2020067748A (ja) Image processing apparatus, image processing method, and program
CN110070481B (zh) Image generation method and apparatus, terminal, and storage medium for virtual facial items
US11138743B2 (en) Method and apparatus for a synchronous motion of a human body model
US11985294B2 (en) Information processing apparatus, information processing method, and program
US20180158171A1 (en) Display apparatus and controlling method thereof
US20190122643A1 (en) Image processing system, image processing apparatus, and program
CN115908120B (zh) Image processing method and electronic device
CN106200911A (zh) Dual-camera-based motion sensing control method, mobile terminal, and system
CN110852934A (zh) Image processing method and apparatus, image device, and storage medium
JP2019144958A (ja) Image processing apparatus, image processing method, and program
US11516448B2 (en) Method and apparatus for compensating projection images
US11263456B2 (en) Virtual object repositioning versus motion of user and perceived or expected delay
US20200167005A1 (en) Recognition device and recognition method
CN110852932A (zh) Image processing method and apparatus, image device, and storage medium
US11928775B2 (en) Apparatus, system, method, and non-transitory medium which map two images onto a three-dimensional object to generate a virtual image
US20220408069A1 (en) Information processing apparatus, information processing method, and storage medium
US20210042947A1 (en) Method and apparatus for processing data, electronic device and storage medium

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

AS Assignment

Owner name: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, TONG;LIU, WENTAO;QIAN, CHEN;REEL/FRAME:054750/0524

Effective date: 20200728

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE