WO2017186016A1 - Method and apparatus for image deformation processing, and computer storage medium - Google Patents
- Publication number
- WO2017186016A1 (PCT/CN2017/080822)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point
- reference point
- line segment
- image
- configuration
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/754—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- the present invention relates to the field of computer technologies, and in particular, to a method and apparatus for image deformation processing, and a computer storage medium.
- deformation of facial features is a very important application in the field of image deformation, and is widely used in advertising, film, animation, and other fields.
- existing face deformation techniques are generally based on local image deformation algorithms that deform using model parameters and cannot adaptively match a target shape given by the user.
- a method of image deformation processing comprising:
- acquiring an image to be processed, identifying a face image in the image to be processed, and locating facial feature points of the face image;
- acquiring a deformation template, the deformation template carrying configuration reference points and configuration datum points; determining, among the facial feature points, current reference points corresponding to the configuration reference points and reference points to be matched corresponding to the configuration datum points;
- performing facial feature similarity mapping according to the positional relationship between the configuration reference points and the configuration datum points and the positional relationship between the current reference points and the reference points to be matched, to obtain target reference points corresponding to the configuration datum points in the image to be processed, each target reference point forming a mapping point pair with the corresponding reference point to be matched;
- mapping each image point to be processed to a corresponding target location according to the positional relationship between the target reference points and the reference points to be matched, and the positional relationship between the mapping point pairs and the image point to be processed.
- An apparatus for image deformation processing comprising:
- a reference point positioning module configured to acquire an image to be processed, identify a face image in the image to be processed, and locate facial feature points of the face image;
- a reference point distinguishing module configured to acquire a deformation template, where the deformation template carries configuration reference points and configuration datum points, and to determine, among the facial feature points, current reference points corresponding to the configuration reference points and reference points to be matched corresponding to the configuration datum points;
- a similarity mapping module configured to determine target reference points corresponding to the configuration datum points in the image to be processed, where each target reference point forms a mapping point pair with the corresponding reference point to be matched;
- an image point mapping module configured to map each image point to be processed to a corresponding target location according to the positional relationship between the target reference points and the reference points to be matched, and the positional relationship between the mapping point pairs and the image point to be processed.
- a computer storage medium storing a computer program configured to perform the above-described image deformation processing method.
- the method and apparatus for image deformation processing and the computer storage medium acquire an image to be processed, identify the face image in the image to be processed, locate the facial feature points of the face image, and acquire a deformation template carrying configuration reference points and configuration datum points.
- the current reference points corresponding to the configuration reference points and the reference points to be matched corresponding to the configuration datum points are determined among the facial feature points.
- facial feature similarity mapping is performed according to the positional relationship between the configuration reference points and the configuration datum points and the positional relationship between the current reference points and the reference points to be matched, obtaining the target reference points corresponding to the configuration datum points in the image to be processed; each target reference point forms a mapping point pair with the corresponding reference point to be matched.
- the mapping relationship of each image point to be processed is determined according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image point, and the image point is mapped to the corresponding target position according to this mapping relationship.
- because the deformation template carries the configuration reference points and configuration datum points, and the corresponding current reference points and reference points to be matched are found in the image to be processed, the facial feature similarity mapping yields the target reference points, from which the mapping relationship of the image points to be processed is determined and the image points are mapped to their corresponding target positions; the deformation size can thus be adaptively determined according to the deformation template, improving the matching degree between the deformed image and the target image.
- FIG. 1 is an application environment diagram of a method for image deformation processing in an embodiment
- Figure 2 is a diagram showing the internal structure of the terminal of Figure 1 in an embodiment
- Figure 3 is a diagram showing the internal structure of the server of Figure 1 in an embodiment
- FIG. 4 is a flow chart of a method for image deformation processing in an embodiment
- FIG. 5 is a schematic diagram of the facial feature points of a face image in one embodiment;
- FIG. 6 is a schematic diagram of the facial feature points of a deformation template in one embodiment;
- FIG. 7 is a schematic diagram of the facial feature points of an image to be processed in an embodiment;
- Figure 8 is a flow chart for determining the position of a target reference point in one embodiment
- FIG. 9 is a schematic diagram showing a positional relationship between a configuration reference point and a configuration datum point in an embodiment;
- Figure 10 is a schematic diagram showing the position of a target reference point in an embodiment
- Figure 11 is a flow chart for determining the position of a target reference point based on a triangular pattern in one embodiment
- FIG. 12 is a schematic diagram of configuration reference points forming a configuration triangle in one embodiment;
- FIG. 13 is a schematic diagram showing a positional relationship between a configuration reference point and a configuration datum point in another embodiment;
- Figure 14 is a schematic diagram showing the position of a target reference point in another embodiment
- FIG. 15 is a flow chart of mapping an image to be processed to a corresponding target location in one embodiment;
- FIG. 16 is a schematic diagram of point mapping calculation in an embodiment;
- FIG. 17 is a schematic diagram of a point mapping result in an embodiment
- FIG. 18 is a flow chart of mapping image points to be processed to corresponding target locations block by block in one embodiment;
- FIG. 19 is a schematic diagram of partitioning an image to be processed in one embodiment;
- FIG. 20 is a schematic diagram of mapping blocks of an image to be processed in an embodiment;
- Figure 21 is a schematic illustration of an image after deformation in one embodiment
- Figure 22 is a technical framework diagram of a method of image deformation processing in an embodiment
- Figure 23 is a block diagram showing the structure of an apparatus for image deformation processing in an embodiment
- FIG. 24 is a structural block diagram of a similarity mapping module in an embodiment;
- FIG. 25 is a structural block diagram of an image point mapping module in an embodiment;
- Figure 26 is a block diagram showing the structure of a target position determining unit in one embodiment.
- FIG. 1 is an application environment diagram of a method of image deformation processing in an embodiment.
- the application environment includes a terminal 110 and a server 120.
- Terminal 110 and server 120 can communicate over a network.
- the terminal 110 can be a smartphone, a tablet, a notebook, a desktop computer, etc., but is not limited thereto.
- the terminal 110 may transmit a deformation template acquisition request, transmit image data, and the like to the server 120, and the server 120 may transmit a deformation template or the like to the terminal 110.
- the method of image deformation processing can be implemented on a terminal or a server.
- the internal structure of the terminal 110 in FIG. 1 is as shown in FIG. 2.
- the terminal 110 includes a processor, a graphics processing unit, a storage medium, a memory, a network interface, a display screen, and an input device connected through a system bus.
- the storage medium of the terminal 110 stores an operating system, and further includes a first image deformation processing device for implementing a method suitable for image deformation processing of the terminal.
- the processor is used to provide computing and control capabilities to support the operation of the entire terminal 110.
- the graphics processing unit in the terminal 110 is configured to provide at least rendering of the display interface; the memory in the terminal 110 provides an environment for the operation of the first image deformation processing device in the storage medium, and stores computer readable instructions that, when executed by the processor, cause the processor to perform a method of image deformation processing.
- the network interface is used for network communication with the server 120, such as sending a deformation template acquisition request to the server 120 and receiving data returned by the server 120.
- the display screen is used to display an application interface or the like, and the input device is used to receive commands or data input by the user.
- the display screen and the input device can be a touch screen.
- the internal structure of the server 120 in FIG. 1 is illustrated in FIG. 3; the server 120 includes a processor, a storage medium, a memory, and a network interface connected by a system bus.
- the storage medium of the server 120 stores an operating system, a database, and a second image deformation processing device, and the second image deformation processing device is configured to implement a method suitable for image deformation processing of the server 120.
- the processor of the server 120 is used to provide computing and control capabilities to support the operation of the entire server 120.
- the memory of the server 120 provides an environment for the operation of the second image deformation processing device in the storage medium, and can store computer readable instructions that, when executed by the processor, cause the processor to perform a method of image deformation processing.
- the network interface of the server 120 is used to communicate with the external terminal 110 via a network connection, such as receiving image data transmitted by the terminal 110 and returning data to the terminal 110.
- a method for image deformation processing is provided to be applied to a terminal or a server in the application environment, and includes the following steps:
- Step S210 Acquire an image to be processed, identify a face image in the image to be processed, and locate a facial feature reference point of the face image.
- the image to be processed may be an image captured by the camera in real time, or may be an image stored in advance by the terminal, or may be an image acquired from a server in real time.
- the face detection algorithm can be used to identify the face image in the image to be processed.
- the face detection algorithm can be chosen according to requirements, such as the OpenCV face detection algorithm, the face detection algorithms built into iOS and Android, Face++ face detection, and so on.
- the face detection algorithm can return whether the image includes a face and a specific face range, such as identifying the position of the face by a rectangular frame, and returning multiple rectangular frames if there are multiple faces.
- the facial feature points are the key points for determining facial features and facial expressions.
- FIG. 5 is a schematic diagram of the facial feature points of a face image.
- the facial feature points include: face contour points, that is, points 1 to 9 in the figure; left and right eye points, that is, points 10-14 and points 15-19; nose points, that is, points 20-26; lip points, that is, points 27-30; and left and right eyebrow points, that is, points 31-32 and 33-34. Different parts correspond to different types of reference points, and the facial feature points include at least one type of reference point.
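The part-to-index grouping above can be sketched as a small helper; the dictionary layout and function name are illustrative assumptions, and the indices follow the figure's own numbering:

```python
# Hypothetical grouping of the numbered facial feature points from FIG. 5
# into the part types named in the text.
FEATURE_GROUPS = {
    "face_contour": list(range(1, 10)),   # points 1-9
    "left_eye":     list(range(10, 15)),  # points 10-14
    "right_eye":    list(range(15, 20)),  # points 15-19
    "nose":         list(range(20, 27)),  # points 20-26
    "lips":         list(range(27, 31)),  # points 27-30
    "left_brow":    [31, 32],
    "right_brow":   [33, 34],
}

def group_feature_points(points):
    """Map a dict {index: (x, y)} of detected feature points to part types."""
    return {part: [points[i] for i in idxs if i in points]
            for part, idxs in FEATURE_GROUPS.items()}
```

A detector returning fewer points simply yields shorter lists per part, since missing indices are skipped.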
- Step S220: acquire a deformation template, the deformation template carrying configuration reference points and configuration datum points; determine, among the facial feature points, current reference points corresponding to the configuration reference points and reference points to be matched corresponding to the configuration datum points.
- the deformation template is a target shape image that carries target features such as a pointed face, a large eye, a small nose, and the like.
- the deformation template can be obtained by offline or online preprocessing of any target image.
- the preprocessing process includes extraction of the facial feature points, obtaining the specific positions of the facial feature points of the target image, which can be recorded as coordinates and marked with icons. If a detection algorithm cannot directly obtain the facial feature points of the deformation template, they can be marked by offline manual labeling.
- the configuration reference points and configuration datum points are all facial feature points; the configuration reference points are used to determine the positional difference between the facial features of the current image to be processed and those of the deformation template, and the configuration datum points are used to calculate the deformation trend between the current image to be processed and the deformation template.
- the configuration datum points are control points whose deformation affects the deformation of the other image points to be processed.
- different configuration reference points can be set for different parts, for example face deformation reference points: the tip of the nose, the outer contour points corresponding to the left and right eyes, and the tip of the chin.
- Left eye deformation reference points: the center point of the left eye, the tip of the nose, and the face contour point corresponding to the left eye.
- Right eye deformation reference points: the center point of the right eye, the tip of the nose, and the face contour point corresponding to the right eye.
- Nose deformation reference points: the center point of the left eye, the tip of the nose, and the center point of the right eye.
- Mouth deformation reference points: according to the positions of different points on the mouth, the determined reference points are divided into left and right halves, namely the center point of the mouth, the tip of the nose, and the center of the left eye, or the center point of the mouth, the tip of the nose, and the center of the right eye.
- the facial feature points detected in the image to be processed are matched with the configuration reference points and configuration datum points in the deformation template; the same algorithm can be used to perform feature point detection on both the image to be processed and the deformation template to ensure that the detected points match. If the facial feature points of the image to be processed do not correspond in number to the configuration reference points and configuration datum points in the deformation template, secondary detection or a matching algorithm may be performed to remove mismatched points.
- if the configuration reference points in the deformation template are the tip of the nose, the outer contour points corresponding to the left and right eyes, and the tip of the chin, the corresponding points are obtained from the facial feature points of the image to be processed and serve as the current reference points.
- the positions of the current reference points remain constant during deformation and serve as a positioning reference.
- FIG. 6 is a schematic diagram of a deformation template, in which 311, 312, 313, and 314 are configuration reference points and 315-320 are configuration datum points; FIG. 7 is a schematic diagram of an image to be processed, in which 321, 322, 323, and 324 are the current reference points and 325-330 are the reference points to be matched.
- the number of current reference points is the same as the number of configuration reference points.
- the number of reference points to be matched is the same as the number of configuration datum points.
- Step S230: perform facial feature similarity mapping according to the positional relationship between the configuration reference points and the configuration datum points and the positional relationship between the current reference points and the reference points to be matched, obtaining the target reference points corresponding to the configuration datum points in the image to be processed; each target reference point forms a mapping point pair with the corresponding reference point to be matched.
- the configuration reference points may be combined to form a corresponding graphic, such as any three adjacent points forming a configuration triangle or four points forming a configuration quadrilateral; the current reference points may also be combined according to the same rule as the configuration reference points.
- according to the positional relationship, each configuration datum point is mapped to the corresponding position of the image to be processed to obtain a target reference point.
- the deformation factor can be calculated first, and then the position of the target reference point is calculated according to the deformation factor.
- the specific algorithm for performing the similarity mapping may be customized according to requirements.
- adjacent configuration reference points are connected to obtain reference line segments, and adjacent current reference points are connected to obtain current line segments; the positional relationship between a configuration datum point and the reference line segments is obtained.
- the position of the target reference point relative to the current line segments is determined according to the positional relationship between the configuration datum point and the reference line segments. If the configuration datum point lies exactly on a reference line segment, the position of the target reference point also lies on the corresponding current line segment.
- the target reference point corresponds to the reference point to be matched, and the displacement offset between the target reference point and the reference point to be matched represents the deformation size of the deformation template and the image to be processed.
- Step S240: determine the mapping relationship of each image point to be processed according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image point to be processed, and map the image point to be processed to the corresponding target location according to the mapping relationship.
- the displacement offset between a target reference point and its reference point to be matched can be calculated from their coordinates; for example, if the coordinates of the target reference point are (x, y) and the coordinates of the reference point to be matched are (a, b), the displacement offset is (x - a, y - b).
- the displacement offset can be expressed in the form of a vector.
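The offset computation above can be expressed directly in code (the function name is illustrative):

```python
def displacement_offset(target, to_match):
    """Offset vector from a reference point to be matched (a, b)
    to its target reference point (x, y): (x - a, y - b)."""
    (x, y), (a, b) = target, to_match
    return (x - a, y - b)
```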
- the image to be processed may be partitioned according to the distribution of the mapping point pairs, with different regions containing the corresponding mapping point pairs.
- the image points to be processed in a first region are affected only by the mapping point pairs of the first region; other mapping point pairs do not affect the deformation of the image points in that region. It is also possible to calculate, for each mapping point pair, a weight of its influence on the deformation of an image point according to the distance between the mapping point pair and the image point; the influence weights and the displacement offsets are combined to calculate the mapping relationship of the image points to be processed.
- the mapping relationship can directly determine the position of the image to be processed after the deformation, so as to map the image points to be processed to the corresponding target position.
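A minimal sketch of the distance-weighted combination described above, assuming inverse-distance weights and a simple weighted average of the displacement offsets (the text does not fix the exact weighting function, so this form is an illustrative assumption):

```python
import math

def map_point(p, point_pairs, eps=1e-9):
    """Map image point p using mapping point pairs
    [(to_match, target), ...]; each pair contributes its displacement
    offset weighted by inverse distance to p."""
    num_x = num_y = wsum = 0.0
    for (a, b), (x, y) in point_pairs:
        d = math.hypot(p[0] - a, p[1] - b)
        if d < eps:  # p coincides with a reference point to be matched
            return (p[0] + (x - a), p[1] + (y - b))
        w = 1.0 / d
        num_x += w * (x - a)
        num_y += w * (y - b)
        wsum += w
    return (p[0] + num_x / wsum, p[1] + num_y / wsum)
```

With this form, points close to a mapping point pair follow its displacement almost exactly, while distant points receive a blended offset.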
- the target reference points of the facial feature similarity mapping are first obtained, and then the mapping relationship of each image point to be processed is determined, so that the image points to be processed are mapped to their corresponding target positions.
- the deformation size can be adaptively determined according to the deformation template, improving the matching degree between the deformed image and the target image.
- the deformation template includes configuration reference points and configuration datum points corresponding to a plurality of facial features.
- different facial features are deformed in different ways, and the positions and numbers of the corresponding configuration reference points and configuration datum points also differ.
- for face contour deformation, the configuration reference points are the tip of the nose, the outer contour points corresponding to the left and right eyes, and the tip of the chin, and the configuration datum points are points on the outer contour of the face.
- for right eye deformation, the configuration reference points are the center point of the right eye, the tip of the nose, and the face contour point corresponding to the right eye, and the configuration datum points are points on the outer contour of the eye.
- the configuration datum points are generally points on the facial feature being deformed, and the configuration reference points can be selected as easily located points near that facial feature.
- configuration reference points and configuration datum points corresponding to a plurality of facial features allow various types of facial feature deformation to be completed at one time; the deformation sizes of the various facial features influence one another, so the facial features are deformed globally.
- step S230 includes:
- Step S231: acquire a first center of gravity point of the graphic formed by the configuration reference points; a configuration datum point and the first center of gravity point are connected to form a first line segment, and the first center of gravity point and the different configuration reference points are connected to form a first reference line segment set.
- the number of configuration reference points acquired may be customized according to requirements, and is at least three, for example acquiring the first center of gravity point 400 of the quadrilateral formed by any four adjacent configuration reference points.
- there are generally multiple configuration datum points, and the position of the target reference point corresponding to each configuration datum point is determined in turn.
- the configuration datum point 410 is connected to the first center of gravity point 400 to form the first line segment 411, and the first center of gravity point 400 and the different configuration reference points 420, 430, 440, and 450 are connected to form the first reference line segment set 412, 413, 414, 415.
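Assuming "center of gravity point" means the arithmetic mean of the polygon's vertices, the construction above can be sketched as follows (function names are illustrative):

```python
def centroid(pts):
    """Center of gravity of a set of polygon vertices (arithmetic mean)."""
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def line_segments(center, ref_points, datum_point):
    """First line segment (datum point relative to the centroid) and the
    first reference line segment set (each reference point relative to
    the centroid), returned as vectors."""
    first = (datum_point[0] - center[0], datum_point[1] - center[1])
    ref_set = [(p[0] - center[0], p[1] - center[1]) for p in ref_points]
    return first, ref_set
```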
- Step S232 determining a deformation factor according to an angle relationship between the first line segment and the line segment in the first reference line segment set, the length of the first line segment, and the length of the line segment in the first reference line segment set.
- the first line segment 411 forms angles θ1, θ2, θ3, and θ4 with the first reference line segments 412, 413, 414, and 415, respectively.
- the deformation weights may be determined according to how close the line segments in the first reference line segment set are to the first line segment; for example, the first reference line segments 412 and 415, which are closer to the first line segment, are given large deformation weights, while the first reference line segments 413 and 414 are given small deformation weights.
- a subset of the line segments in the first reference line segment set may be selected as target line segments used to calculate target angles; when calculating the deformation factor, only the target angles are considered.
- for example, if the first reference line segments 412 and 415 are selected as the target line segments, the calculation considers only the target angles θ1 and θ2.
- in one embodiment, the deformation factor is calculated according to a formula in which Dsrc is the length of the first line segment 411, θ1 to θn are the angles between the first line segment and each line segment in the first reference line segment set, n is the number of line segments in the first reference line segment set, d1 to dn are the lengths of the line segments in the first reference line segment set, and p1 to pn are deformation weights whose specific sizes can be customized.
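The geometric quantities named above (Dsrc, the angles θi, the lengths di, and the weights pi) can be computed as below; since the exact combining formula is not reproduced in the text, the final weighted ratio is an illustrative assumption only:

```python
import math

def angle_between(u, v):
    """Unsigned angle between two segment vectors sharing an endpoint."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def deformation_factor(first_seg, ref_segs, weights):
    """Combine Dsrc, theta_i, d_i, and p_i into a scalar factor.
    The weighted-ratio form below is an assumption for illustration;
    the patent's actual formula is not given in the text."""
    d_src = math.hypot(*first_seg)
    lengths = [math.hypot(*s) for s in ref_segs]
    acc = sum(p * d * angle_between(first_seg, s)
              for p, s, d in zip(weights, ref_segs, lengths))
    return d_src / acc if acc else 0.0
```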
- Step S233: acquire a second center of gravity point of the graphic formed by the current reference points; the target reference point and the second center of gravity point are connected to form a second line segment, and the second center of gravity point and the different current reference points are connected to form a second reference line segment set; the angles between the second line segment and each line segment in the second reference line segment set are determined according to the angle relationship.
- the number and locations of the acquired current reference points correspond to the number and locations of the configuration reference points; for example, the second center of gravity point 500 of the quadrilateral formed by any four adjacent current reference points is acquired, as shown in FIG.
- the target reference point 510 and the second center of gravity point 500 are connected to form the second line segment 511, and the second center of gravity point 500 and the different current reference points 520, 530, 540, 550 are connected to form the second reference line segment set 512, 513, 514, 515; the second line segment 511 and the second reference line segments 512, 513, 514, 515 form the second angles, respectively.
- because the first line segment and the first reference line segments form the first angles θ1, θ2, θ3, θ4, the angles between the second line segment 511 and the line segments in the second reference line segment set can be determined by a custom algorithm according to that angle relationship. For example, any two first angles are selected and their ratio is calculated; the corresponding second angles are constrained to the same ratio, thereby obtaining the direction of the second line segment and determining the specific angle values.
- Step S234 determining the position of the target reference point according to the deformation factor, the included angle, and the length of the line segment in the second reference line segment set.
- a custom algorithm, such as a linear or non-linear function, can determine the position of the target reference point. In one embodiment the position of the target reference point is calculated according to a formula in which Ddst is the length of the second line segment 511, the angles are those between the second line segment and the line segments in the second reference line segment set, n is the number of line segments in the second reference line segment set (consistent with the number of line segments in the first reference line segment set), and p1 to pn are deformation weights whose specific values are consistent with those used in the deformation factor calculation.
- the position of the target reference point is determined using the centers of gravity formed by the configuration reference points and the current reference points together with the lengths and angle relationships of the various line segments, which improves the accuracy of the target reference point position calculation, so that the deformed image to be processed is closer to the target image and the feature matching is better.
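Step S234 can be sketched as follows, assuming Ddst = k·Σ pᵢ·dᵢ′·θᵢ′ (mirroring a linear reading of the claims' triangle-case formula) and assuming the direction of the second line segment has already been determined from the first-angle ratios; the function name, the formula form, and the direction parameter are all assumptions, not the patent's literal computation.

```python
import math

def target_reference_point(centroid, k, angles, lengths, weights, direction):
    """Place the target reference point at distance Ddst from the second
    center of gravity point along `direction` (radians).

    Assumes Ddst = k * sum(p_i * d_i' * theta_i'), symmetric to the
    assumed deformation-factor formula on the configuration side.
    """
    d_dst = k * sum(p * d * t for p, d, t in zip(weights, lengths, angles))
    cx, cy = centroid
    return (cx + d_dst * math.cos(direction), cy + d_dst * math.sin(direction))
```

Given k from step S232, the second-reference-segment lengths, the determined angles, and the same weights p1 to pn, this returns the coordinates of the target reference point relative to the second center of gravity point.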
- step S230 includes:
- Step S236 obtaining a configuration center of gravity point of the configuration triangle figure, connecting the configuration base point and the configuration center of gravity point to form a first line segment, acquiring two target configuration reference points adjacent to the configuration base point, and connecting the configuration center of gravity point and the target configuration reference points to form a first configuration reference line segment and a second configuration reference line segment.
- any three adjacent configuration reference points form a configuration triangle pattern; as shown in FIG. 12, the adjacent configuration reference points 321, 322, 323, and 324 form the configuration triangle patterns S1, S2, and S3.
- the configuration triangle pattern used to calculate the target reference point can be selected according to the position of the configuration base point; for example, the configuration triangle pattern closest to the configuration base point can be used.
- the configuration base point 610 is connected to the configuration center of gravity point 600 to form the first line segment 611, and the configuration center of gravity point 600 and the target configuration reference points 620, 630 are connected to form the first configuration reference line segment 612 and the second configuration reference line segment 613.
- Step S237 acquiring the angle α between the first line segment and the first configuration reference line segment, the angle β between the first line segment and the second configuration reference line segment, the length Dsrc of the first line segment, the length d1 of the first configuration reference line segment, and the length d2 of the second configuration reference line segment, and determining the deformation factor k according to the formula Dsrc = d1αk + d2βk.
- Step S238 acquiring a current center of gravity point of the current triangle figure, connecting the target reference point and the current center of gravity point to form a second line segment, obtaining two target current reference points corresponding to the target configuration reference points, and connecting the current center of gravity point with the target current reference points to form a first current reference line segment and a second current reference line segment.
- the position of a target reference point is first assumed; the target reference point 710 and the current center of gravity point 700 are connected to form the second line segment 711, and the current center of gravity point 700 and the target current reference points 720, 730 are connected to form the first current reference line segment and the second current reference line segment.
- if α′ and β′ are determined to be, for example, 20° and 40°, the direction of the second line segment is obtained; together with the calculated length of the second line segment, the position of its other end point, the target reference point, can then be determined.
- because a triangle has few end points and is a stable figure, using the center of gravity of the triangle allows the position of the target reference point to be calculated quickly and conveniently. The calculation uses the two target configuration reference points adjacent to the configuration base point, which takes into account the principle that closer points have greater influence and ensures the accuracy of the target reference point position calculation.
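The triangle-based steps S236 to S238 can be sketched end to end. Two assumptions are made explicit: the claims' formula Dsrc = d1αk + d2βk is read as linear in k, and α′ and β′ are obtained by splitting the angle γ′ between the two current reference line segments in the ratio α : β, consistent with the ratio-preserving rule described earlier; the function and parameter names are illustrative.

```python
import math

def triangle_target(d_src, alpha, beta, d1, d2, d1p, d2p, gamma_p):
    """Return (k, alpha', beta', Ddst) for the triangle case.

    Assumptions: k solves Dsrc = d1*alpha*k + d2*beta*k (linear reading
    of the claims' formula), and gamma_p, the angle between the two
    current reference line segments, is split in the ratio alpha:beta.
    All angles are in radians.
    """
    k = d_src / (d1 * alpha + d2 * beta)
    alpha_p = gamma_p * alpha / (alpha + beta)   # preserves the alpha:beta ratio
    beta_p = gamma_p - alpha_p
    d_dst = k * (d1p * alpha_p + d2p * beta_p)   # assumed Ddst = d1'a'k + d2'b'k
    return k, alpha_p, beta_p, d_dst

# Reproduces the example where alpha' : beta' comes out as 20 deg : 40 deg.
k, a_p, b_p, d_dst = triangle_target(
    d_src=30.0,
    alpha=math.radians(10), beta=math.radians(20),   # configuration-side angles
    d1=20.0, d2=20.0, d1p=22.0, d2p=18.0,
    gamma_p=math.radians(60),                        # current segments' angle
)
```

The returned direction of the second line segment (from α′ against the first current reference line segment) and length Ddst together fix the target reference point as the segment's far end point.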
- step S240 includes:
- Step S241 calculating the influence weight factor corresponding to each mapping point pair according to the position of the image point to be processed and the positions of the reference points to be matched.
- in one embodiment, the distance between the image point to be processed and each reference point to be matched is calculated, and the influence weight factors are assigned in inverse relation to the distance: the smaller the distance, the larger the assigned influence weight factor.
- the specific allocation algorithm can be customized as needed; for example, different influence weight factors are assigned according to the total number of mapping point pairs. With 4 mapping point pairs, 4 levels of influence weight factors are assigned whose sum is 1, such as 0.1, 0.2, 0.3, and 0.4; the distances from the image point to be processed to the 4 reference points to be matched are then calculated, and the matching influence weight factor is selected according to the distance, the largest distance receiving the smallest factor, thereby obtaining the influence weight factor corresponding to each mapping point pair.
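The level-based allocation just described can be sketched as follows; the helper name and the Euclidean-distance choice are illustrative, and the levels default to the 0.1/0.2/0.3/0.4 example with the largest distance receiving the smallest level.

```python
def rank_based_weights(a, matched_points, levels=(0.1, 0.2, 0.3, 0.4)):
    """Assign one influence weight level per mapping point pair by distance
    rank: the farthest reference point to be matched gets the smallest
    level, the nearest the largest (the levels sum to 1)."""
    dists = [((a[0] - sx) ** 2 + (a[1] - sy) ** 2) ** 0.5
             for sx, sy in matched_points]
    order = sorted(range(len(dists)), key=lambda i: -dists[i])  # far -> near
    weights = [0.0] * len(dists)
    for rank, i in enumerate(order):
        weights[i] = levels[rank]
    return weights

# Distances 10, 5, 1, 3 -> weights 0.1, 0.2, 0.4, 0.3.
w = rank_based_weights((0.0, 0.0), [(10, 0), (5, 0), (1, 0), (3, 0)])
```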
- Step S242 calculating the displacement offset corresponding to each mapping point pair, calculating the displacement of the image point to be processed according to the influence weight factor and displacement offset of each mapping point pair, and mapping the image point to be processed to the corresponding target position according to the displacement.
- the mapping point pairs can be represented as (S1, D1), (S2, D2), …, (Sn, Dn), where n is the total number of mapping point pairs and each point has corresponding coordinates, such as S1(S1x, S1y). The displacement offset corresponding to a mapping point pair is Di − Si, where Di is the coordinate of the target reference point of the i-th mapping point pair and Si is the coordinate of the reference point to be matched of the i-th mapping point pair. Since the coordinates are two-dimensional, the displacement offset includes both the absolute displacement distance and the direction.
- the displacement of the image point to be processed may be obtained from the displacement offsets and the influence weight factors of all or part of the mapping point pairs in the image. If the image to be processed has been partitioned beforehand, the displacement of the image points to be processed in each area is affected only by the mapping point pairs in the same area; the influence weight factors and displacement offsets of the mapping point pairs in the same area are obtained first, and the target position of the image point mapping is then calculated.
- FIG. 16 is a schematic diagram of the point mapping calculation, which includes 6 mapping point pairs (S1, D1), (S2, D2), …, (S6, D6). The arrow between the two points of each mapping point pair represents its displacement offset: the direction of the arrow indicates the direction of the deformation, and the length of the arrow indicates its magnitude.
- the dotted line between the image point A to be processed and the reference point S1 to be matched indicates the Euclidean distance between A and S1.
- FIG. 17 is a schematic diagram of the point mapping result, in which the image point A to be processed is mapped to A′ and the image point B to be processed is mapped to B′.
- different mapping point pairs have different influence weights on the deformation of each image point to be processed, so local deformation differences can be taken into account within the global deformation, making the deformed image more accurate.
- in one embodiment, step S241 includes: calculating the influence weight factor wi corresponding to the i-th mapping point pair as a function of the distance |ASi|, where A represents the position coordinate of the image point to be processed, Si represents the coordinate of the reference point to be matched in the i-th mapping point pair, and |ASi| represents the distance from A to Si, such as the Euclidean distance.
- Step S242 includes: calculating the target position A′ from the influence weight factors and the mapping point pair coordinates, where Di represents the coordinate of the target reference point of the i-th mapping point pair.
- in this embodiment, the deformed position of the image point A to be processed is related to all the mapping point pairs in the image to be processed, so the overall deformation of the image is considered while local weight differences are also taken into account.
- the influence weight factor can be calculated from the distance between each mapping point pair and the image point to be processed, which is both accurate and simple to compute; every mapping point pair in the image to be processed influences the deformation of the current image point, and the closer a pair is to the current image point, the greater its influence.
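Because this extraction omits the patent's literal wᵢ and A′ formulas, the sketch below substitutes a common inverse-square-distance choice, wᵢ = 1/|A − Sᵢ|², with the normalized weighted offset A′ = A + Σ wᵢ(Dᵢ − Sᵢ) / Σ wᵢ. This matches the stated behavior (every pair contributes, and closer pairs contribute more) but is an assumption, not the patent's exact formula.

```python
def map_point(a, pairs, eps=1e-9):
    """Map image point `a` to A' using all mapping point pairs (S_i, D_i).

    Assumed weights w_i = 1 / |A - S_i|^2 (clamped by eps so a point
    sitting exactly on some S_i does not divide by zero); the displacement
    of `a` is the w_i-weighted average of the offsets D_i - S_i.
    """
    num_x = num_y = den = 0.0
    for (sx, sy), (dx, dy) in pairs:
        dist2 = (a[0] - sx) ** 2 + (a[1] - sy) ** 2
        w = 1.0 / max(dist2, eps)       # influence weight factor w_i
        num_x += w * (dx - sx)          # weighted displacement offsets
        num_y += w * (dy - sy)
        den += w
    return (a[0] + num_x / den, a[1] + num_y / den)

# One pair moving S=(0,0) to D=(2,0): every point shifts by (2, 0).
a_prime = map_point((5.0, 5.0), [((0.0, 0.0), (2.0, 0.0))])
```

With several pairs, points near a given Sᵢ follow that pair's arrow almost exactly, while distant points receive a blend of all arrows, which is exactly the global-plus-local behavior described above.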
- the step of mapping the to-be-processed image point to the corresponding target location according to the mapping relationship in step S240 includes:
- step S243 the image to be processed is partitioned to obtain original blocks; the vertices of each original block are used as the first image points to be processed, and the other points in each original block are used as the second image points to be processed.
- the rules for performing blocking can be customized according to requirements, such as triangulation, quadrilateral, and the like.
- the number of blocks determines the complexity and precision of the calculation: the more blocks there are, the smaller each block is, and the higher both the computational complexity and the accuracy become.
- the vertices of each original block are the first image points to be processed, i.e. the points whose target deformation positions need to be calculated accurately during the deformation.
- the other points in each original block are the second image points to be processed; their positions are determined by the positions of the first image points to be processed and do not require accurate calculation, which greatly reduces the number of points to be calculated and increases the calculation speed.
- the image to be processed is divided into blocks to obtain a plurality of triangular original blocks.
- Step S244 mapping the first image points to be processed to the corresponding target positions according to the mapping relationship to obtain first mapping image points, where the first mapping image points form the mapping blocks corresponding to the original blocks.
- the first image points to be processed are mapped to the corresponding target positions according to the mapping relationship to obtain the first mapping image points, and the offset of an entire block can be calculated from the offsets of the three original vertices of its triangle. As shown in FIG. 20, the triangular original block 810 in the image to be processed is offset to the position of the mapping block 820.
- Step S245 Map the second to-be-processed image point to the corresponding position in the mapping block corresponding to the original block by using the original block as a processing unit.
- the second to-be-processed image point in each original block of the triangle is directly mapped to a corresponding position in the corresponding mapped block.
- the position of each triangular original block in the deformed image can thus be obtained; because adjacent triangular blocks share their vertices, the deformed image remains continuous in its pixels, as shown in FIG.
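One standard way to realize the per-block mapping of step S245 is barycentric interpolation: a second image point to be processed keeps its barycentric coordinates with respect to its original triangular block and is re-emitted at the same coordinates in the mapped block. The patent does not spell out the interpolation it uses, so this is an illustrative sketch.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (l1, l2, l3) of point p in triangle (a, b, c)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / det
    l2 = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def map_interior_point(p, src_tri, dst_tri):
    """Map a point inside the original triangular block src_tri to the same
    barycentric position inside the mapped block dst_tri; shared vertices
    keep neighboring blocks continuous after deformation."""
    l1, l2, l3 = barycentric(p, *src_tri)
    (ax, ay), (bx, by), (cx, cy) = dst_tri
    return (l1 * ax + l2 * bx + l3 * cx, l1 * ay + l2 * by + l3 * cy)

# Scaling the unit triangle by 2 maps (0.25, 0.25) to (0.5, 0.5).
q = map_interior_point((0.25, 0.25),
                       ((0, 0), (1, 0), (0, 1)),
                       ((0, 0), (2, 0), (0, 2)))
```

This is also what GPU texture mapping does implicitly when the block vertices are submitted as triangle vertices, which is why the OpenGL ES implementation described next only needs the vertex positions adjusted.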
- the entire deformation process can be implemented with OpenGL ES (OpenGL for Embedded Systems), and the deformation of the image to be processed is achieved by adjusting the vertex coordinates output by the vertex shader. Owing to the powerful computing power of the GPU, the entire calculation can be completed in a very short time; for example, an image with a resolution of 640×480 requires only 20 ms, i.e. 50 FPS, on a low-end mobile terminal such as the iPod Touch 5, achieving real-time deformation performance.
- the technical framework of the image deformation processing method is shown in FIG. 22 and includes an online processing portion 920 and an offline processing portion 910, where the offline processing portion 910 is configured to generate the deformation template and to perform recognition and detection on it.
- the offline processing portion 910 includes: a target image acquisition unit 911 configured to acquire the target deformation image; a face detection and facial feature reference point detection unit 912 configured to detect the target deformation image to obtain the configuration reference points and the configuration base points; and a manual labeling unit 913 configured so that, if no face is detected, the area where the face is located is determined by manual labeling and the configuration reference points and configuration base points are marked. Finally, the deformation template carrying the configuration reference points and configuration base points is obtained.
- the online processing portion 920 includes: a to-be-processed image acquisition unit 921 configured to acquire the input image to be processed, and a reference point positioning module 922 configured to detect the face in the image to be processed and perform facial feature positioning to obtain the facial feature reference points, which include the current reference points corresponding to the deformation template and the reference points to be matched.
- the block deformation calculation unit 923 is configured to calculate the deformation of each block, in units of blocks, according to the positional relationship between the configuration reference points and the configuration base points in the deformation template and the positional relationship between the current reference points and the reference points to be matched in the image to be processed.
- the target deformation image generation unit 924 is configured to generate the overall deformed image according to the deformation of each block.
- in one embodiment, an apparatus for image deformation processing is provided, the apparatus including:
- the reference point positioning module 1010 is configured to acquire an image to be processed, identify a face image in the image to be processed, and locate a facial feature reference point of the face image.
- the reference point distinguishing module 1020 is configured to acquire a deformation template, where the deformation template carries configuration reference points and configuration base points, and to determine, among the facial feature reference points, the current reference points corresponding to the configuration reference points and the reference points to be matched corresponding to the configuration base points.
- the similarity mapping module 1030 is configured to perform facial feature similarity mapping according to the positional relationship between the configuration reference points and the configuration base points and the positional relationship between the current reference points and the reference points to be matched, to obtain the target reference point corresponding to the configuration base point in the image to be processed; the target reference point and the corresponding reference point to be matched form a mapping point pair.
- the to-be-processed image point mapping module 1040 determines the mapping relationship according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image points to be processed, and maps the image points to be processed to the corresponding target positions according to the mapping relationship.
- in one embodiment, the deformation template includes configuration base points and configuration reference points corresponding to a plurality of facial feature types.
- the similarity mapping module 1030 includes:
- the deformation factor determining unit 1031 is configured to acquire a first center of gravity point of the figure formed by the configuration reference points, where the configuration base point and the first center of gravity point are connected to form a first line segment and the first center of gravity point and the different configuration reference points are connected to form a first reference line segment set, and to determine a deformation factor according to the angle relationship between the first line segment and the line segments in the first reference line segment set, the length of the first line segment, and the lengths of the line segments in the first reference line segment set.
- the angle determining unit 1032 is configured to acquire a second center of gravity point of the figure formed by the current reference points, where the target reference point and the second center of gravity point are connected to form a second line segment and the second center of gravity point and the different current reference points are connected to form a second reference line segment set, and to determine the angles between the second line segment and each line segment of the second reference line segment set according to the angle relationship.
- the target reference point determining unit 1033 is configured to determine the position of the target reference point according to the deformation factor, the included angle, and the length of the line segment in the second reference line segment set.
- in one embodiment, the figure formed by the configuration reference points is a configuration triangle figure formed from three adjacent configuration reference points, and the current reference points form a current triangle figure according to the same rule as the configuration triangle figure. The deformation factor determining unit 1031 is further configured to acquire a configuration center of gravity point of the configuration triangle figure, where the configuration base point and the configuration center of gravity point are connected to form the first line segment; to acquire two target configuration reference points adjacent to the configuration base point, where the configuration center of gravity point and the target configuration reference points are connected to form a first configuration reference line segment and a second configuration reference line segment; and to obtain the angle α between the first line segment and the first configuration reference line segment, the angle β between the first line segment and the second configuration reference line segment, the length Dsrc of the first line segment, the length d1 of the first configuration reference line segment, and the length d2 of the second configuration reference line segment, and determine the deformation factor k according to the formula Dsrc = d1αk + d2βk.
- the angle determining unit 1032 is further configured to acquire a current center of gravity point of the current triangle figure, where the target reference point and the current center of gravity point are connected to form a second line segment; to acquire two target current reference points corresponding to the target configuration reference points, where the current center of gravity point and the target current reference points are connected to form a first current reference line segment and a second current reference line segment; and to determine, according to a formula, the angle α′ between the second line segment and the first current reference line segment and the angle β′ between the second line segment and the second current reference line segment.
- the image point mapping module 1040 to be processed includes:
- the influence weight factor calculation unit 1041 is configured to calculate the influence weight factor corresponding to each mapping point pair according to the position of the image point to be processed and the positions of the reference points to be matched.
- the target position determining unit 1042 is configured to calculate the displacement offset corresponding to each mapping point pair, calculate the displacement of the image point to be processed according to the influence weight factor and displacement offset corresponding to each mapping point pair, and map the image point to be processed to the corresponding target position according to the displacement.
- the influence weight factor calculation unit 1041 is further configured to calculate the influence weight factor wi corresponding to the i-th mapping point pair, where A represents the position coordinate of the image point to be processed, Si represents the coordinate of the reference point to be matched in the i-th mapping point pair, and |ASi| represents the distance from A to Si.
- the target position determining unit 1042 is further configured to calculate the target position A′, where Di represents the coordinate of the target reference point of the i-th mapping point pair.
- the target location determining unit 1042 includes:
- the blocking unit 1042a is configured to partition the image to be processed into original blocks, taking the vertices of each original block as the first image points to be processed and the other points in each original block as the second image points to be processed.
- the first mapping unit 1042b is configured to map the first image points to be processed to the corresponding target positions according to the mapping relationship to obtain first mapping image points, where the first mapping image points form the mapping blocks corresponding to the original blocks.
- the second mapping unit 1042c is configured to map the second image points to be processed, with the original block as the processing unit, to the corresponding positions in the mapping blocks corresponding to the original blocks.
- in an actual application, the functions implemented by the respective units in the apparatus for image deformation processing may be implemented by a central processing unit (CPU), a microprocessor unit (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) located in the apparatus for image deformation processing.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
- an embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being configured to perform the image deformation processing method of the embodiment of the present invention.
Claims (15)
- A method for image deformation processing, the method comprising: locating facial feature reference points of a face image in an acquired image; acquiring a deformation template, the deformation template carrying configuration reference points and configuration base points; determining, among the facial feature reference points, current reference points corresponding to the configuration reference points and reference points to be matched corresponding to the configuration base points; determining the position corresponding to each configuration base point in the image to be processed as a target reference point, the configuration base point and the target reference point forming a mapping point pair; and mapping the image points to be processed to corresponding target positions according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image points to be processed.
- The method according to claim 1, wherein the deformation template comprises configuration base points and configuration reference points corresponding to a plurality of facial feature types.
- The method according to claim 1, wherein the step of determining the position corresponding to the configuration base point in the image to be processed as the target reference point comprises: acquiring a first center of gravity point of a figure formed by the configuration reference points, the configuration base point and the first center of gravity point being connected to form a first line segment, and the first center of gravity point and different configuration reference points being connected to form a first reference line segment set; determining a deformation factor according to the angle relationship between the first line segment and the line segments in the first reference line segment set, the length of the first line segment, and the lengths of the line segments in the first reference line segment set; acquiring a second center of gravity point of a figure formed by the current reference points, the target reference point and the second center of gravity point being connected to form a second line segment, and the second center of gravity point and different current reference points being connected to form a second reference line segment set; determining the angles between the second line segment and the respective line segments in the second reference line segment set according to the angle relationship; and determining the position of the target reference point according to the deformation factor, the angles, and the lengths of the line segments in the second reference line segment set.
- The method according to claim 3, wherein the figure formed by the configuration reference points is a configuration triangle figure formed from three adjacent configuration reference points, and the figure formed by the current reference points is a current triangle figure formed according to the same rule as the configuration triangle figure; the step of performing facial feature similarity mapping according to the positional relationship between the configuration reference points and the configuration base points and the positional relationship between the current reference points and the reference points to be matched, to obtain the target reference point corresponding to the configuration base point in the image to be processed, comprises: acquiring a configuration center of gravity point of the configuration triangle figure, the configuration base point and the configuration center of gravity point being connected to form the first line segment, and acquiring two target configuration reference points adjacent to the configuration base point, the configuration center of gravity point and the target configuration reference points being connected to form a first configuration reference line segment and a second configuration reference line segment; acquiring the angle α between the first line segment and the first configuration reference line segment, the angle β between the first line segment and the second configuration reference line segment, the length Dsrc of the first line segment, the length d1 of the first configuration reference line segment, and the length d2 of the second configuration reference line segment, and determining a deformation factor k according to the formula Dsrc = d1αk + d2βk; acquiring a current center of gravity point of the current triangle figure, the target reference point and the current center of gravity point being connected to form a second line segment, and acquiring two target current reference points corresponding to the target configuration reference points, the current center of gravity point and the target current reference points being connected to form a first current reference line segment and a second current reference line segment; and acquiring the length d1′ of the first current reference line segment and the length d2′ of the second current reference line segment, and calculating the length Ddst of the second line segment according to the formula Ddst = d1′α′k + d2′β′k, thereby determining the position of the target reference point.
- The method according to claim 1, wherein the step of mapping the image points to be processed to the corresponding target positions according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image points to be processed comprises: calculating the influence weight factor corresponding to each mapping point pair according to the positions of the image point to be processed and the reference points to be matched; calculating the displacement offset corresponding to each mapping point pair; calculating the displacement of the image point to be processed according to the influence weight factor and displacement offset corresponding to each mapping point pair; and mapping the image point to be processed to the corresponding target position according to the displacement.
- The method according to claim 1, wherein the step of mapping the image points to be processed to the corresponding target positions comprises: partitioning the image to be processed into original blocks, taking the vertices of each original block as first image points to be processed and the other points in each original block as second image points to be processed; mapping the first image points to be processed to corresponding target positions according to the mapping relationship to obtain first mapping image points, the first mapping image points forming mapping blocks corresponding to the original blocks; and mapping the second image points to be processed, with the original block as the processing unit, to corresponding positions in the mapping blocks corresponding to the original blocks.
- An apparatus for image deformation processing, the apparatus comprising: a reference point positioning module configured to locate facial feature reference points of a face image in an acquired image; a reference point distinguishing module configured to acquire a deformation template, the deformation template carrying configuration reference points and configuration base points, and to determine, among the facial feature reference points, current reference points corresponding to the configuration reference points and reference points to be matched corresponding to the configuration base points; a similarity mapping module configured to determine the target reference point corresponding to the configuration base point in the image to be processed, the target reference point and the corresponding reference point to be matched forming a mapping point pair; and a to-be-processed image point mapping module configured to map the image points to be processed to corresponding target positions according to the positional relationship between the target reference points and the reference points to be matched and the positional relationship between the mapping point pairs and the image points to be processed.
- The apparatus according to claim 8, wherein the deformation template comprises configuration base points and configuration reference points corresponding to a plurality of facial feature types.
- The apparatus according to claim 8, wherein the similarity mapping module comprises: a deformation factor determining unit configured to acquire a first center of gravity point of a figure formed by the configuration reference points, the configuration base point and the first center of gravity point being connected to form a first line segment and the first center of gravity point and different configuration reference points being connected to form a first reference line segment set, and to determine a deformation factor according to the angle relationship between the first line segment and the line segments in the first reference line segment set, the length of the first line segment, and the lengths of the line segments in the first reference line segment set; an angle determining unit configured to acquire a second center of gravity point of a figure formed by the current reference points, the target reference point and the second center of gravity point being connected to form a second line segment and the second center of gravity point and different current reference points being connected to form a second reference line segment set, and to determine the angles between the second line segment and the respective line segments in the second reference line segment set according to the angle relationship; and a target reference point determining unit configured to determine the position of the target reference point according to the deformation factor, the angles, and the lengths of the line segments in the second reference line segment set.
- The apparatus according to claim 10, wherein the figure formed by the configuration reference points is a configuration triangle figure formed from three adjacent configuration reference points, and the figure formed by the current reference points is a current triangle figure formed according to the same rule as the configuration triangle figure; the deformation factor determining unit is further configured to acquire a configuration center of gravity point of the configuration triangle figure, the configuration base point and the configuration center of gravity point being connected to form the first line segment, to acquire two target configuration reference points adjacent to the configuration base point, the configuration center of gravity point and the target configuration reference points being connected to form a first configuration reference line segment and a second configuration reference line segment, and to acquire the angle α between the first line segment and the first configuration reference line segment, the angle β between the first line segment and the second configuration reference line segment, the length Dsrc of the first line segment, the length d1 of the first configuration reference line segment, and the length d2 of the second configuration reference line segment, and determine a deformation factor k according to the formula Dsrc = d1αk + d2βk; the angle determining unit is further configured to acquire a current center of gravity point of the current triangle figure, the target reference point and the current center of gravity point being connected to form a second line segment, to acquire two target current reference points corresponding to the target configuration reference points, the current center of gravity point and the target current reference points being connected to form a first current reference line segment and a second current reference line segment, and to determine, according to a formula, the angle α′ between the second line segment and the first current reference line segment and the angle β′ between the second line segment and the second current reference line segment; and the target reference point determining unit is further configured to acquire the length d1′ of the first current reference line segment and the length d2′ of the second current reference line segment, and calculate the length Ddst of the second line segment according to the formula Ddst = d1′α′k + d2′β′k, thereby determining the position of the target reference point.
- The apparatus according to claim 8, wherein the to-be-processed image point mapping module comprises: an influence weight factor calculation unit configured to calculate the influence weight factor corresponding to each mapping point pair according to the positions of the image point to be processed and the reference points to be matched; and a target position determining unit configured to calculate the displacement offset corresponding to each mapping point pair, calculate the displacement of the image point to be processed according to the influence weight factor and displacement offset corresponding to each mapping point pair, and map the image point to be processed to the corresponding target position according to the displacement.
- The apparatus according to claim 8, wherein the target position determining unit comprises: a blocking unit configured to partition the image to be processed into original blocks, taking the vertices of each original block as first image points to be processed and the other points in each original block as second image points to be processed; a first mapping unit configured to map the first image points to be processed to corresponding target positions according to the mapping relationship to obtain first mapping image points, the first mapping image points forming mapping blocks corresponding to the original blocks; and a second mapping unit configured to map the second image points to be processed, with the original block as the processing unit, to corresponding positions in the mapping blocks corresponding to the original blocks.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the method for image deformation processing according to any one of claims 1 to 7.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187009599A KR102076431B1 (ko) | 2016-04-27 | 2017-04-17 | 이미지 변형 처리 방법 및 장치, 컴퓨터 기억 매체 |
JP2018517437A JP6585838B2 (ja) | 2016-04-27 | 2017-04-17 | 画像歪み処理方法及び装置、コンピュータ記憶媒体 |
US16/014,410 US10691927B2 (en) | 2016-04-27 | 2018-06-21 | Image deformation processing method and apparatus, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610270547.2A CN105956997B (zh) | 2016-04-27 | 2016-04-27 | 图像形变处理的方法和装置 |
CN201610270547.2 | 2016-04-27 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/014,410 Continuation US10691927B2 (en) | 2016-04-27 | 2018-06-21 | Image deformation processing method and apparatus, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017186016A1 true WO2017186016A1 (zh) | 2017-11-02 |
Family
ID=56916929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/080822 WO2017186016A1 (zh) | 2016-04-27 | 2017-04-17 | 图像形变处理的方法和装置、计算机存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10691927B2 (zh) |
JP (1) | JP6585838B2 (zh) |
KR (1) | KR102076431B1 (zh) |
CN (1) | CN105956997B (zh) |
WO (1) | WO2017186016A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007034724A (ja) * | 2005-07-27 | 2007-02-08 | Glory Ltd | Image processing apparatus, image processing method, and image processing program |
CN102971769A (zh) * | 2010-04-30 | 2013-03-13 | 欧姆龙株式会社 | Image deformation device, electronic apparatus, image deformation method, image deformation program, and recording medium storing the program |
CN103916588A (zh) * | 2012-12-28 | 2014-07-09 | 三星电子株式会社 | Image transformation apparatus and method |
CN105118024A (zh) * | 2015-09-14 | 2015-12-02 | 北京中科慧眼科技有限公司 | Face swapping method |
CN105184249A (zh) * | 2015-08-28 | 2015-12-23 | 百度在线网络技术(北京)有限公司 | Method and apparatus for facial image processing |
CN105956997A (zh) * | 2016-04-27 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Method and apparatus for image deformation processing |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100514365C (zh) * | 2007-01-15 | 2009-07-15 | 浙江大学 | Automatic synthesis method for multiple face photos |
KR100902995B1 (ko) * | 2007-10-23 | 2009-06-15 | 에스케이 텔레콤주식회사 | Method for forming a facial image at an optimized ratio and apparatus applied thereto |
CN101276474B (zh) * | 2008-04-14 | 2010-09-01 | 中山大学 | Linearly constrained image deformation method based on local coordinates |
JP2011053942A (ja) * | 2009-09-02 | 2011-03-17 | Seiko Epson Corp | Image processing apparatus, image processing method, and image processing program |
WO2011152842A1 (en) * | 2010-06-01 | 2011-12-08 | Hewlett-Packard Development Company, L.P. | Face morphing based on learning |
JP5840528B2 (ja) * | 2012-02-21 | 2016-01-06 | 花王株式会社 | Face image synthesis device and face image synthesis method |
CN104751404B (zh) * | 2013-12-30 | 2019-04-12 | 腾讯科技(深圳)有限公司 | Image transformation method and apparatus |
CN105184735B (zh) * | 2014-06-19 | 2019-08-06 | 腾讯科技(深圳)有限公司 | Portrait deformation method and apparatus |
CN105205779B (zh) * | 2015-09-15 | 2018-10-19 | 厦门美图之家科技有限公司 | Eye image processing method and system based on image deformation, and photographing terminal |
- 2016-04-27 CN CN201610270547.2A patent/CN105956997B/zh active Active
- 2017-04-17 JP JP2018517437A patent/JP6585838B2/ja active Active
- 2017-04-17 KR KR1020187009599A patent/KR102076431B1/ko active IP Right Grant
- 2017-04-17 WO PCT/CN2017/080822 patent/WO2017186016A1/zh active Application Filing
- 2018-06-21 US US16/014,410 patent/US10691927B2/en active Active
Non-Patent Citations (1)
Title |
---|
XU, SHAOJIE: "Fast Facial Beautification Algorithm and System Based on Edge-Preserving Smoothing Filter and Edit Propagation", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA MASTER'S THESES FULL-TEXT DATABASE, 15 March 2016 (2016-03-15), pages 48, ISSN: 1674-0246 *
Also Published As
Publication number | Publication date |
---|---|
JP2018536221A (ja) | 2018-12-06 |
US20180300537A1 (en) | 2018-10-18 |
CN105956997B (zh) | 2019-07-05 |
KR20180050702A (ko) | 2018-05-15 |
JP6585838B2 (ja) | 2019-10-02 |
KR102076431B1 (ko) | 2020-03-02 |
US10691927B2 (en) | 2020-06-23 |
CN105956997A (zh) | 2016-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017186016A1 (zh) | Method and apparatus for image deformation processing, and computer storage medium | |
AU2019432052B2 (en) | Three-dimensional image measurement method, electronic device, storage medium, and program product | |
CN109859305B (zh) | Three-dimensional face modeling and recognition method and apparatus based on multi-angle two-dimensional faces | |
CN106203400A (zh) | Face recognition method and apparatus | |
CN112085033B (zh) | Template matching method, apparatus, electronic device, and storage medium | |
US20120147167A1 (en) | Facial recognition using a sphericity metric | |
JP5873362B2 (ja) | Gaze error correction device, program, and method | |
WO2022142783A1 (zh) | Image processing method and related device | |
CN117333928B (zh) | Facial feature point detection method, apparatus, electronic device, and storage medium | |
CN112883920A (zh) | Three-dimensional face scan feature point detection method and apparatus based on point cloud deep learning | |
CN110910478B (zh) | GIF image generation method, apparatus, electronic device, and storage medium | |
Anasosalu et al. | Compact and accurate 3-D face modeling using an RGB-D camera: let's open the door to 3-D video conference | |
CN110020577B (zh) | Facial key point expansion calculation method, storage medium, electronic device, and system | |
US10861174B2 (en) | Selective 3D registration | |
WO2022262201A1 (zh) | Method and apparatus for visualizing a three-dimensional facial model, electronic device, and storage medium | |
US20210133995A1 (en) | Electronic devices, methods, and computer program products for controlling 3d modeling operations based on pose metrics | |
CN112967329A (zh) | Image data optimization method, apparatus, electronic device, and storage medium | |
CN111008966A (zh) | RGBD-based single-view human body measurement method, apparatus, and computer-readable storage medium | |
JP2019175165A (ja) | Object tracking device, object tracking method, and object tracking program | |
US11869217B2 (en) | Image processing apparatus, detection method, and non-transitory computer readable medium | |
CN113538655B (zh) | Virtual face generation method and device | |
CN108108694A (zh) | Facial feature point localization method and apparatus | |
KR102678784B1 (ko) | Apparatus and method for synthesizing a face model | |
JP2018142267A (ja) | Object determination device, object determination method, program, and data structure of feature value sequence | |
JP2024525703A (ja) | Three-dimensional dynamic tracking method, apparatus, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 20187009599; Country of ref document: KR; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 2018517437; Country of ref document: JP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17788668; Country of ref document: EP; Kind code of ref document: A1 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17788668; Country of ref document: EP; Kind code of ref document: A1 |