CN111985280B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN111985280B
Authority
CN
China
Prior art keywords
face
dimensional
image
model
transformation matrix
Prior art date
Legal status
Active
Application number
CN201910439198.6A
Other languages
Chinese (zh)
Other versions
CN111985280A (en)
Inventor
胡毅
汪轩然
林哲弘
杜慧
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910439198.6A
Publication of CN111985280A
Application granted
Publication of CN111985280B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation


Abstract

The disclosure relates to an image processing method and device. The method includes the following steps: acquiring, from a two-dimensional face image to be processed, first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face, and second two-dimensional coordinates of second face feature points, and using them to determine a first perspective transformation matrix for the two-dimensional face image and a second perspective transformation matrix for the second face; performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image; performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image; and comparing a first three-dimensional face model of the second face in the second intermediate image with a pre-stored second three-dimensional face model to obtain model difference information with which the second face is corrected. This technical scheme can reduce both the global perspective distortion and the local perspective distortion of a group photo image.

Description

Image processing method and device
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to an image processing method and device.
Background
When a mobile terminal (such as a mobile phone) takes photographs, both the rear camera and the front camera are strongly affected by perspective distortion in scenes containing people, so the face regions in the imaging result inevitably suffer from it.
For example, the rear camera of a mobile terminal is often used to shoot multi-person group photos, and such scenes exhibit perspective distortion in the resulting photo; in particular, faces far from the center of the image are significantly distorted. Since faces carry the most critical information, distorted faces need to be corrected.
For another example, the front camera may be used for single-person selfies and for group selfies of two or three people. When shooting a single person, the face is close to the phone's front camera, which causes a degree of perspective distortion that strongly affects how the face is rendered. When two or three people take a group selfie, the mobile terminal is close to the photographed faces, so the faces on the left and right sides of the image suffer a large degree of perspective distortion; in the final image this appears as deformation of the faces on both sides (such as lateral stretching, radial stretching, or a combination of the two), so correction is likewise necessary.
Therefore, how to process an image to mitigate perspective distortion is a technical problem to be solved.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide an image processing method and apparatus, which are used to reduce the global perspective distortion and the local perspective distortion of a group photo image and improve the quality of the group photo image.
According to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
acquiring first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed;
determining a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates;
determining a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image;
acquiring a first three-dimensional face model of a second face in the second intermediate image;
comparing the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information;
and correcting the second face in the second intermediate image according to the model difference information to obtain a corrected two-dimensional face image.
In an embodiment, before the obtaining of the first face pose information of the first face, the first two-dimensional coordinates of the first face feature point, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature point in the two-dimensional face image to be processed, the method may further include:
determining respective areas of at least one face in the two-dimensional face image;
determining at least one face with the largest area according to the respective areas of the at least one face to obtain the first face; or
determining respective positions of at least one face in the two-dimensional face image;
and determining at least one face nearest to the center of the two-dimensional face image according to the respective positions of the at least one face to obtain the first face.
In an embodiment, before the obtaining of the first face pose information of the first face, the first two-dimensional coordinates of the first face feature point, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature point in the two-dimensional face image to be processed, the method may further include:
determining respective areas of at least one face in the two-dimensional face image, and obtaining at least one area value;
determining the distance between each face and the center of the two-dimensional face image to obtain at least one distance value;
determining the weight coefficient of each of the at least one area value and the weight coefficient of each of the at least one distance value;
determining an evaluation value of the importance degree of each face according to the at least one area value, the weight coefficient of each of the at least one area value, the at least one distance value and the weight coefficient of each of the at least one distance value;
and determining at least one face with the maximum evaluation value according to the evaluation value of the importance degree of each face to obtain the first face.
In an embodiment, before the obtaining of the first face pose information of the first face, the first two-dimensional coordinates of the first face feature point, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature point in the two-dimensional face image to be processed, the method may further include:
determining an edge region in the two-dimensional face image according to preset region parameters;
and determining at least one face in the edge area as the second face.
In one embodiment, the determining the first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates may include:
according to the first face pose information and third face pose information of a preset two-dimensional face model, adjusting the first two-dimensional coordinates; the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model;
and registering the adjusted first two-dimensional coordinates with the third two-dimensional coordinates of the third face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix.
In one embodiment, the determining the second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates may include:
according to the second face pose information and the third face pose information, adjusting the second two-dimensional coordinates; the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model;
and registering the adjusted second two-dimensional coordinates with the third two-dimensional coordinates to obtain the second perspective transformation matrix.
In one embodiment, the comparing the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information may include:
acquiring first model parameters of the first three-dimensional face model;
obtaining second model parameters of the second three-dimensional face model;
and comparing the first model parameter with the second model parameter to obtain the model difference information.
In one embodiment, the performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image may include:
determining an inverse of the first perspective transformation matrix;
and performing perspective distortion correction on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix to obtain the second intermediate image.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus including:
the first acquisition module is configured to acquire first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed;
a first determining module configured to determine a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates;
a second determining module configured to determine a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates;
the first correcting module is configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
the second correcting module is configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image;
the second acquisition module is configured to acquire a first three-dimensional face model of a second face in the second intermediate image;
the comparison module is configured to compare the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information;
and the third correction module is configured to correct the second face in the second intermediate image according to the model difference information to obtain a corrected two-dimensional face image.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect described above.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: first face pose information of a first face in a two-dimensional face image to be processed, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points are obtained; a first perspective transformation matrix of the two-dimensional face image is determined according to the first face pose information and the first two-dimensional coordinates, and a second perspective transformation matrix of the second face is determined according to the second face pose information and the second two-dimensional coordinates; perspective distortion correction is then performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Then, perspective distortion correction is performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image, thereby correcting the local perspective distortion of the group photo image. Then, a first three-dimensional face model of the second face in the second intermediate image is acquired and compared with a pre-stored second three-dimensional face model to obtain model difference information, and the second face in the second intermediate image is corrected according to the model difference information to obtain a corrected two-dimensional face image, thereby correcting the local optical distortion of the group photo image. The embodiments of the present disclosure can reduce not only the global perspective distortion of a group photo image but also its local perspective distortion and local optical distortion, improving the quality of the group photo image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 6 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 7 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 8 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 9 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 10 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatuses and methods consistent with aspects of the invention as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. The image processing method can be applied to a terminal device with an image processing function, such as a smartphone, a tablet computer (PAD), or a video camera. As shown in fig. 1, the image processing method includes the following steps S101 to S108:
In step S101, first face pose information of a first face, first two-dimensional coordinates of a first face feature point, second face pose information of a second face, and second two-dimensional coordinates of a second face feature point in a two-dimensional face image to be processed are obtained.
In one embodiment, the two-dimensional face image acquired by the image capturing device of the terminal device may be used as the two-dimensional face image to be processed. Further, the terminal device may detect whether a plurality of faces exist in the two-dimensional face image acquired by the image capturing device, and when a plurality of faces exist, the acquired two-dimensional face image may be used as the two-dimensional face image to be processed.
In one embodiment, the first face may be a large-area face or a face that is relatively centered in a two-dimensional face image. The second face may be a face with a smaller area or a face located in an edge area of the two-dimensional face image, but is not limited thereto. The number of the first faces is at least one, and may be one, two or three, for example. The number of the second faces is at least one, and may be one, two, three or more, for example.
In one embodiment, the area of the first face is larger than the area of the other faces in the two-dimensional face image. In this embodiment, as shown in fig. 2, before step S101, the following steps S201 to S202 may be further included:
In step S201, an area of each of at least one face in the two-dimensional face image is determined.
In step S202, at least one face with the largest area is determined according to the respective areas of the at least one face, so as to obtain the first face.
In one embodiment, the terminal device may mark the detected at least one face with rectangular face recognition frames and use the area enclosed by each face recognition frame as the area of the corresponding face. Then, the largest one or more areas can be determined from the respective areas of the at least one face, and the faces corresponding to those areas determined as the first face. For example, the two largest areas may be determined from the areas of the at least one face, and the faces corresponding to those two areas determined as the first face.
In one embodiment, before step S202, the following steps may further be included: determining a maximum area value and a minimum area value among the respective areas of the at least one face, and determining that the difference between the maximum area value and the minimum area value is larger than a first preset threshold. Thus, when the differences between the areas of the faces in the two-dimensional face image are large, the at least one face with the largest area can be selected as the first face. The region corresponding to the face with the largest area carries relatively more information, which improves the accuracy of image processing.
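By way of illustration only (this sketch is not part of the disclosed embodiments), the area-based selection of steps S201 to S202 might look as follows in Python; the box format (x, y, w, h), the helper name, and the threshold value are assumptions of this sketch:
    # Illustrative sketch of steps S201-S202: pick the faces with the largest
    # detection-box areas as "first face" candidates. The first preset
    # threshold value below is an assumed placeholder.
    def select_largest_faces(face_boxes, num_faces=1, first_threshold=5000.0):
        """face_boxes: list of (x, y, w, h) tuples from face detection."""
        areas = [w * h for (_x, _y, w, h) in face_boxes]
        # Pre-check described above: only use the area criterion when the
        # spread between the largest and smallest areas is big enough.
        if max(areas) - min(areas) <= first_threshold:
            return None
        ranked = sorted(zip(areas, face_boxes), key=lambda p: p[0], reverse=True)
        return [box for _area, box in ranked[:num_faces]]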
In another embodiment, the first face is located closest to the center of the two-dimensional face image. In this embodiment, as shown in fig. 3, before step S101, the following steps S301 to S302 may be further included:
In step S301, respective positions of at least one face in the two-dimensional face image are determined.
In step S302, at least one face nearest to the center of the two-dimensional face image is determined according to the respective positions of the at least one face, and the first face is obtained.
In one embodiment, the terminal device may mark the detected at least one face with rectangular face recognition frames and determine the center position of each face recognition frame as the position of the corresponding face. Then, a distance value between the position of each face and the center of the two-dimensional face image is determined, giving at least one distance value. Next, the smallest one or more of the at least one distance value are determined, and the faces corresponding to them determined as the first face. For example, the two smallest distance values may be determined, and the faces corresponding to those two distance values determined as the first face. The method of determining the distance between the position of a face and the center of the two-dimensional face image is not limited to the above.
In one embodiment, before step S301, the following steps may further be included: determining respective areas of at least one face in the two-dimensional face image, then determining a maximum area value and a minimum area value among those areas, and determining that the difference between the maximum area value and the minimum area value is smaller than a second preset threshold, where the second preset threshold is smaller than the first preset threshold. Thus, when the differences between the areas of the faces in the two-dimensional face image are small, a face close to the center of the two-dimensional face image can be selected as the first face. A face close to the center of the two-dimensional face image suffers less distortion from other causes, which improves the accuracy of image processing.
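The corresponding center-distance criterion of steps S301 to S302 could, under the same assumed box format, be sketched like this (again non-normative):
    import math

    # Illustrative sketch of steps S301-S302: pick the faces whose box
    # centers lie closest to the image center.
    def select_center_faces(face_boxes, image_size, num_faces=1):
        img_w, img_h = image_size
        cx, cy = img_w / 2.0, img_h / 2.0
        def dist_to_center(box):
            x, y, w, h = box
            return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)
        return sorted(face_boxes, key=dist_to_center)[:num_faces]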
In yet another embodiment, the first face may be determined by considering both the position and the area of the face. In this embodiment, as shown in fig. 4, before step S101, the following steps S401 to S405 may be further included:
In step S401, an area of each of at least one face in the two-dimensional face image is determined, and at least one area value is obtained.
In step S402, a distance between each of the at least one face and a center of the two-dimensional face image is determined, and at least one distance value is obtained.
In step S403, a weight coefficient for each of the at least one area value and a weight coefficient for each of the at least one distance value are determined.
In step S404, an evaluation value of the importance degree of each of the at least one face is determined according to the at least one area value, the weight coefficient of each of the at least one area value, the at least one distance value, and the weight coefficient of each of the at least one distance value.
In step S405, at least one face with the largest evaluation value is determined according to the evaluation value of the importance degree of each of the at least one face, so as to obtain the first face.
In one embodiment, the terminal device may mark the detected at least one face with rectangular face recognition frames, and use the area surrounded by the at least one face recognition frame as the area of the corresponding face, to obtain at least one area value.
In one embodiment, the terminal device may determine the center positions of the at least one face recognition frame as the positions of the corresponding faces, respectively. Then, determining a distance value between the respective position of at least one face and the center of the two-dimensional face image to obtain at least one distance value.
In one embodiment, the terminal device may store in advance a first correspondence between area and weight and a second correspondence between distance and weight, where area and weight are positively correlated and distance and weight are negatively correlated. The terminal device may obtain the respective weight coefficient of the at least one area value according to the at least one area value and the first correspondence, and obtain the respective weight coefficient of the at least one distance value according to the at least one distance value and the second correspondence.
In one embodiment, the terminal device may calculate an evaluation value of the importance degree of each of the at least one face according to the at least one area value, the weight coefficient of each area value, the at least one distance value, and the weight coefficient of each distance value. The evaluation value of the importance degree of a face may be a weighted sum of its area value and distance value. For example, if the area value of face A is S with a corresponding weight coefficient of 0.8, and the distance value of face A is L with a corresponding weight coefficient of 0.6, then the evaluation value of the importance degree of face A is 0.8S + 0.6L.
In one embodiment, the terminal device may determine at least one face with the largest evaluation value according to the evaluation value of the importance degree of each of the at least one face, so as to obtain the first face. For example, the two faces having the largest evaluation values may be selected as the first face. In this way, the two factors of position and area are considered together: a face with a relatively large area and a relatively central position can be used as the first face, providing more face information while suffering less distortion from other causes, which improves the accuracy of image processing.
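A minimal sketch of this weighted evaluation, mirroring the 0.8S + 0.6L example above, is shown below; passing the weights in directly is a simplification, since the described embodiment reads them from the pre-stored area-to-weight and distance-to-weight correspondences:
    # Illustrative sketch of steps S401-S405.
    def importance_scores(area_values, distance_values, area_weights, distance_weights):
        # Weighted sum per face; the stored distance weights are negatively
        # correlated with distance, which penalises off-center faces.
        return [aw * a + dw * d
                for a, d, aw, dw in zip(area_values, distance_values,
                                        area_weights, distance_weights)]

    def select_first_faces(face_boxes, scores, num_faces=1):
        ranked = sorted(zip(scores, range(len(face_boxes))), reverse=True)
        return [face_boxes[i] for _score, i in ranked[:num_faces]]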
In one embodiment, as shown in fig. 5, before step S101, the following steps S501 to S502 may be further included:
In step S501, an edge region in the two-dimensional face image is determined according to preset region parameters.
In step S502, at least one face in the edge region is determined as the second face.
In one embodiment, the region parameters may be pre-stored in the terminal device. The region parameters may include start coordinates and end coordinates of one, two or more regions in the two-dimensional face image, and are used to determine the edge region in the two-dimensional face image. The terminal device may determine the edge region in the two-dimensional face image according to the preset region parameters and determine at least one face in the edge region as the second face. Determining the second face in the edge region of the two-dimensional face image through the region parameters is easy to implement and highly accurate.
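As a sketch, an edge-region test driven by such region parameters might look as follows; expressing the parameters as a single width fraction (two vertical strips) is an assumed simplification of the stored start and end coordinates:
    # Illustrative sketch of steps S501-S502: faces whose centers fall in
    # the left or right edge strip are treated as "second faces".
    def faces_in_edge_region(face_boxes, image_size, edge_fraction=0.15):
        img_w, _img_h = image_size
        left_end = img_w * edge_fraction             # end of the left strip
        right_start = img_w * (1.0 - edge_fraction)  # start of the right strip
        second_faces = []
        for (x, y, w, h) in face_boxes:
            cx = x + w / 2.0
            if cx <= left_end or cx >= right_start:
                second_faces.append((x, y, w, h))
        return second_faces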
In one embodiment, the first two-dimensional coordinates of the first face feature points of the first face and the second two-dimensional coordinates of the second face feature points of the second face may be obtained by face feature point detection. The face feature points may include the left and right outer eye corners, the nose tip, the left and right mouth corners, and the like. The number of face feature points per face may be 21, 106, or some other number. The face feature point detection method may include, but is not limited to, model-based methods such as the ASM (Active Shape Model) and AAM (Active Appearance Model), methods based on cascaded pose regression (CPR), and deep-learning-based methods.
In one embodiment, face pose estimation may be performed on the first face and the second face to obtain the first face pose information of the first face and the second face pose information of the second face. The face pose information may include the horizontal rotation angle (yaw), the pitch angle (pitch) and the roll angle (roll) of the face. These may respectively be the angles of rotation around the three coordinate axes of a spatial rectangular coordinate system whose origin is a certain point on the face.
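One common way to obtain such (yaw, pitch, roll) values from detected landmarks is PnP-based pose estimation; the sketch below uses OpenCV's solvePnP. Landmark detection is assumed to have happened already, and the generic 3D reference points, the crude intrinsics, and the Euler-angle axis convention are all assumptions of this sketch rather than anything prescribed by the disclosure:
    import numpy as np
    import cv2

    # Generic 3D positions (in mm) of six landmarks on a reference head;
    # illustrative values, not calibrated data.
    MODEL_POINTS_3D = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left outer eye corner
        (225.0, 170.0, -135.0),    # right outer eye corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ], dtype=np.float64)

    def estimate_pose(image_points_2d, image_size):
        """image_points_2d: 6x2 array of the matching detected 2D landmarks."""
        h, w = image_size
        focal = w  # crude focal-length guess when intrinsics are unknown
        camera_matrix = np.array([[focal, 0, w / 2.0],
                                  [0, focal, h / 2.0],
                                  [0, 0, 1]], dtype=np.float64)
        dist_coeffs = np.zeros((4, 1))  # assume no lens distortion here
        ok, rvec, tvec = cv2.solvePnP(
            MODEL_POINTS_3D, np.asarray(image_points_2d, dtype=np.float64),
            camera_matrix, dist_coeffs)
        rot, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
        # Recover Euler angles from the rotation matrix (one common convention).
        sy = np.hypot(rot[0, 0], rot[1, 0])
        pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
        yaw = np.degrees(np.arctan2(-rot[2, 0], sy))
        roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
        return yaw, pitch, roll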
In step S102, a first perspective transformation matrix of the two-dimensional face image is determined according to the first face pose information and the first two-dimensional coordinates.
In one embodiment, as shown in fig. 6, step S102 may include the following steps S601 to S602:
In step S601, the first two-dimensional coordinates are adjusted according to the first face pose information and third face pose information of a preset two-dimensional face model; the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model.
In step S602, registration is performed according to the adjusted first two-dimensional coordinates and the third two-dimensional coordinates of the third face feature points in the preset two-dimensional face model, so as to obtain the first perspective transformation matrix.
In this embodiment, the terminal device may store a trained preset two-dimensional face model. In the third face pose information of the preset two-dimensional face model, the horizontal rotation angle (yaw), the pitch angle (pitch) and the roll angle (roll) may all be zero. That is, the preset two-dimensional face model may be a model built from a frontal face.
In this embodiment, the terminal device may adjust the first two-dimensional coordinates of the first face feature points of the first face according to the first face pose information of the first face and the third face pose information of the preset two-dimensional face model, so that the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model. Specifically, by adjusting the first two-dimensional coordinates of the first face feature points, the distances and proportional relations between the first face feature points of the first face can be made the same as, or in a fixed proportion to, those between the third face feature points of the preset two-dimensional face model.
Then, the terminal device can register the adjusted first two-dimensional coordinates with the third two-dimensional coordinates of the third face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix. Illustratively, the adjusted first face feature points of the first face are denoted as X, the third face feature points in the preset two-dimensional face model are denoted as X’, and the first perspective transformation matrix is denoted as A. Then:
X’=AX (1)
By solving the above equation (1), the value of the first perspective transformation matrix A can be obtained.
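As a non-normative sketch, equation (1) can be solved as a least-squares homography fit over the corresponding landmark pairs, for example with OpenCV; choosing findHomography for the fit is an implementation assumption of this sketch:
    import numpy as np
    import cv2

    # Illustrative sketch: solve X' = AX from corresponding 2D landmarks.
    def solve_perspective_matrix(adjusted_points, model_points):
        """Both arguments: Nx2 float arrays of corresponding landmarks (N >= 4)."""
        src = np.asarray(adjusted_points, dtype=np.float32)
        dst = np.asarray(model_points, dtype=np.float32)
        # A is the 3x3 perspective (homography) matrix mapping X to X'.
        A, _mask = cv2.findHomography(src, dst, method=0)  # plain least squares
        return A
The second perspective transformation matrix B of equation (2) in step S103 below can be obtained with the same routine, passing the adjusted second face feature points instead of the first.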
In step S103, a second perspective transformation matrix of the second face is determined according to the second face pose information and the second two-dimensional coordinates.
In one embodiment, as shown in fig. 7, step S103 may include the following steps S701 to S702:
In step S701, the second two-dimensional coordinates are adjusted according to the second face pose information and the third face pose information; the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model.
In step S702, the second perspective transformation matrix is obtained by registering the adjusted second two-dimensional coordinates with the third two-dimensional coordinates.
In this embodiment, the terminal device may store a trained preset two-dimensional face model. In the third face pose information of the preset two-dimensional face model, the horizontal rotation angle (yaw), the pitch angle (pitch) and the roll angle (roll) may all be zero. That is, the preset two-dimensional face model may be a model built from a frontal face.
In this embodiment, the terminal device may adjust the second two-dimensional coordinates of the second face feature points of the second face according to the second face pose information of the second face and the third face pose information of the preset two-dimensional face model, so that the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model. Specifically, by adjusting the second two-dimensional coordinates of the second face feature points, the distances and proportional relations between the second face feature points of the second face can be made the same as, or in a fixed proportion to, those between the third face feature points of the preset two-dimensional face model.
Then, the terminal device can register the adjusted second two-dimensional coordinates with the third two-dimensional coordinates of the third face feature points in the preset two-dimensional face model to obtain the second perspective transformation matrix. Illustratively, the adjusted second face feature points of the second face are denoted as Y, the third face feature points in the preset two-dimensional face model are denoted as X’, and the second perspective transformation matrix is denoted as B. Then:
X’=BY (2)
By solving the above equation (2), the value of the second perspective transformation matrix B can be obtained.
In step S104, perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix, so as to obtain a first intermediate image.
In this embodiment, the two-dimensional face image may be denoted as Z, the first intermediate image may be denoted as M, and M may be obtained by the following calculation formula (3):
M=AZ (3)
In this embodiment, the first perspective transformation matrix A is used to perform overall perspective distortion correction on the two-dimensional face image Z, so as to obtain the first intermediate image M.
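A sketch of formula (3) with OpenCV follows; keeping the output size equal to the input size is an assumption of this sketch:
    import cv2

    # Illustrative sketch of formula (3): M = AZ, i.e. warp the whole image Z
    # with the first perspective transformation matrix A.
    def global_correction(image_z, matrix_a):
        h, w = image_z.shape[:2]
        return cv2.warpPerspective(image_z, matrix_a, (w, h))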
In step S105, perspective distortion correction is performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, so as to obtain a second intermediate image.
In this embodiment, since the first perspective transformation matrix is used in step S104 to perform an overall perspective distortion correction on the two-dimensional face image, the transformation acts not only on the first face but also on the second face in the image. Therefore, when performing perspective distortion correction on the second face, it is necessary to first eliminate the effect of the first perspective transformation matrix on the second face, and then perform perspective distortion correction on the second face using the second perspective transformation matrix.
In one embodiment, as shown in fig. 8, step S105 may include the following steps S801 to S802:
In step S801, an inverse matrix of the first perspective transformation matrix is determined.
In step S802, perspective distortion correction is performed on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix, so as to obtain the second intermediate image.
In the present embodiment, the inverse matrix A⁻¹ of the first perspective transformation matrix is first determined. Then, perspective distortion correction is performed on the second face in the first intermediate image M according to the inverse matrix A⁻¹ and the second perspective transformation matrix B.
In this embodiment, the second face in the first intermediate image M may be denoted as F, and the corrected second face may be denoted as F’; then F’ may be calculated by the following formula (4):
F’=A⁻¹BF (4)
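A sketch of formula (4) follows. Warping the whole first intermediate image with the composed matrix and then pasting back only the second-face region, with no seam blending, is a simplification assumed by this sketch:
    import numpy as np
    import cv2

    # Illustrative sketch of formula (4): F' = A^(-1) B F.
    def local_correction(intermediate_m, face_box, matrix_a, matrix_b):
        h_img, w_img = intermediate_m.shape[:2]
        composed = np.linalg.inv(matrix_a) @ matrix_b  # A^(-1) B, both 3x3
        warped = cv2.warpPerspective(intermediate_m, composed, (w_img, h_img))
        x, y, w, h = face_box  # second-face region in M
        out = intermediate_m.copy()
        out[y:y + h, x:x + w] = warped[y:y + h, x:x + w]
        return out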
In step S106, a first three-dimensional face model of the second face in the second intermediate image is obtained.
In one embodiment, the terminal device may acquire the second two-dimensional coordinates of the second face feature points of the second face through face feature point detection, and may detect three-dimensional geometric information of the second face through a 3D distance sensor. The 3D distance sensor may be a 3D structured-light device, a TOF (Time of Flight) depth sensor, or an active near-infrared (NIR) sensor, and the 3D structured-light device may be a flood illuminator or a dot projector, but is not limited thereto.
In one embodiment, the terminal device may adjust model parameters of a reference three-dimensional face model stored in advance according to second two-dimensional coordinates of second face feature points of the second face and three-dimensional geometric information of the second face, so as to obtain a first three-dimensional face model of the second face.
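How the model parameters are adjusted is not spelled out here; as a loose illustration only, a morphable-model style least-squares fit to the observed 3D landmark positions might look as follows, where the basis representation and every name are assumptions of this sketch:
    import numpy as np

    # Highly simplified sketch of step S106: fit a reference 3D face model
    # (mean shape plus a linear shape basis) to observed 3D points recovered
    # from the 2D feature points and the depth data.
    def fit_face_model(mean_shape, shape_basis, observed_points):
        """mean_shape: (3N,); shape_basis: (3N, K); observed_points: (3N,)."""
        residual = observed_points - mean_shape
        # Least-squares coefficients minimising |basis @ c - residual|^2.
        coeffs, *_ = np.linalg.lstsq(shape_basis, residual, rcond=None)
        fitted = mean_shape + shape_basis @ coeffs
        return fitted.reshape(-1, 3), coeffs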
In step S107, the first three-dimensional face model is compared with a pre-stored second three-dimensional face model to obtain model difference information.
In one embodiment, as shown in fig. 9, step S107 may include the following steps S901 to S903:
in step S901, first model parameters of the first three-dimensional face model are acquired.
In step S902, second model parameters of the second three-dimensional face model are acquired.
In step S903, the first model parameter and the second model parameter are compared to obtain the model difference information.
In this embodiment, the terminal device may pre-store the second three-dimensional face model. The pre-stored second three-dimensional face model may be a three-dimensional face model of a standard face, which may be a frontal face without optical distortion or perspective distortion.
In this embodiment, the terminal device may obtain the first model parameter of the first three-dimensional face model and the second model parameter of the second three-dimensional face model, and compare the first model parameter with the second model parameter to obtain the model difference information. The first model parameters carry three-dimensional geometric information of the second face, and the second model parameters carry three-dimensional geometric information of the standard face.
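Representing the model difference information as per-vertex displacements between corresponding vertices is one plausible reading; the following sketch assumes that representation:
    import numpy as np

    # Illustrative sketch of steps S901-S903: compare the fitted model with
    # the pre-stored standard model, vertex by vertex.
    def model_difference(first_vertices, second_vertices):
        """Both inputs: (N, 3) arrays of corresponding vertex positions."""
        diff = first_vertices - second_vertices      # per-vertex displacement
        magnitude = np.linalg.norm(diff, axis=1)     # deviation per vertex
        return diff, magnitude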
In step S108, the second face in the second intermediate image is corrected according to the model difference information, so as to obtain a corrected two-dimensional face image.
In this embodiment, the terminal device may correct the second face in the second intermediate image according to the model difference information, so as to obtain a corrected two-dimensional face image, so that optical distortion of the second face may be corrected.
In the embodiments of the present disclosure, first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed are obtained; a first perspective transformation matrix of the two-dimensional face image is determined according to the first face pose information and the first two-dimensional coordinates, and a second perspective transformation matrix of the second face is determined according to the second face pose information and the second two-dimensional coordinates; perspective distortion correction is then performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Then, perspective distortion correction is performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image, thereby correcting the local perspective distortion of the group photo image. Then, a first three-dimensional face model of the second face in the second intermediate image is acquired and compared with a pre-stored second three-dimensional face model to obtain model difference information, and the second face in the second intermediate image is corrected according to the model difference information to obtain a corrected two-dimensional face image, thereby correcting the local optical distortion of the group photo image. The embodiments of the present disclosure can reduce not only the global perspective distortion of a group photo image but also its local perspective distortion and local optical distortion, improving the quality of the group photo image.
In the embodiments of the present disclosure, when a group photo is shot, a perspective transformation matrix for the whole group photo image is determined according to the face pose information and the two-dimensional feature point coordinates of a relatively large or centrally located face (the first face) in the group photo image, and the perspective distortion of the whole group photo image is corrected accordingly; the perspective distortion of a designated face (the second face) in the group photo image is then corrected according to the face pose information and the two-dimensional feature point coordinates of that designated face. When there are two designated faces located in the edge regions on the two sides of the group photo image, perspective distortion corrections at different angles can be performed on the two designated faces respectively. In addition, model difference information can be obtained by comparing the three-dimensional face model of the designated face with a pre-stored three-dimensional face model, and the optical distortion of the designated face can be corrected according to the model difference information. In this way, face deformation caused by perspective distortion and face deformation caused by optical distortion can be reduced automatically, improving the imaging effect.
Fig. 10 is a block diagram of an image processing apparatus according to an exemplary embodiment. In this embodiment, the apparatus includes:
The first acquisition module 1001 is configured to acquire first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed;
a first determining module 1002 configured to determine a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates;
a second determining module 1003 configured to determine a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates;
the first correcting module 1004 is configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
a second correction module 1005 configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, to obtain a second intermediate image;
a second obtaining module 1006 configured to obtain a first three-dimensional face model of a second face in the second intermediate image;
A comparison module 1007 configured to compare the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information;
and a third correction module 1008, configured to correct the second face in the second intermediate image according to the model difference information, so as to obtain a corrected two-dimensional face image.
The embodiment of the disclosure also provides a terminal device, which comprises a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory, and implement the method steps described in any one of the foregoing embodiments.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the method steps of any of the embodiments described above.
The specific manner in which the processor performs the operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 11 is a block diagram of a terminal device according to an exemplary embodiment. For example, device 1100 may be a mobile phone, computer, digital broadcast terminal, messaging device, tablet device, personal digital assistant, or the like.
Referring to fig. 11, device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communication component 1116.
The processing component 1102 generally controls overall operation of the device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1102 may include one or more processors 1120 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1102 can include one or more modules that facilitate interactions between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operations at the device 1100. Examples of such data include instructions for any application or method operating on device 1100, contact data, phonebook data, messages, pictures, video, and the like. The memory 1104 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 1106 provides power to the various components of the device 1100. The power components 1106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1100.
Multimedia component 1108 includes a screen that provides an output interface between the device 1100 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, multimedia component 1108 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 1100 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 1110 is configured to output and/or input an audio signal. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio component 1110 further comprises a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1114 includes one or more sensors for providing status assessment of various aspects of the device 1100. For example, the sensor assembly 1114 may detect an on/off state of the device 1100, a relative positioning of components such as the display and keypad of the device 1100, a change in position of the device 1100 or a component of the device 1100, the presence or absence of user contact with the device 1100, an orientation or acceleration/deceleration of the device 1100, and a change in temperature of the device 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate communication, wired or wireless, between the device 1100 and other devices. The device 1100 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G LTE, 5G NR, or a combination thereof. In one exemplary embodiment, the communication component 1116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1116 further includes a Near Field Communication (NFC) module to facilitate short range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1104, including instructions executable by processor 1120 of device 1100 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. An image processing method, the method comprising:
acquiring first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed;
determining a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates;
determining a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image;
acquiring a first three-dimensional face model of the second face in the second intermediate image;
comparing the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information;
and correcting the second face in the second intermediate image according to the model difference information to obtain a corrected two-dimensional face image.
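For illustration only (not part of the claims): a minimal sketch of the two-stage perspective correction of claim 1, assuming OpenCV-style 3x3 perspective (homography) matrices that have already been estimated upstream. The function name and the simplification of warping the whole image in the local stage are assumptions of this sketch, not features recited in the claims.

```python
# Illustrative sketch only; assumes OpenCV and NumPy.
import cv2
import numpy as np

def two_stage_correction(image, first_matrix, second_matrix):
    """Apply the global and local perspective corrections of claim 1.

    first_matrix, second_matrix: 3x3 NumPy perspective transformation
    matrices, assumed to be estimated as in claims 5 and 6.
    """
    h, w = image.shape[:2]
    # Global correction: warp the whole image by the first matrix.
    first_intermediate = cv2.warpPerspective(image, first_matrix, (w, h))
    # Local correction: the second face has already been warped by the
    # first matrix, so compose the second matrix with its inverse
    # (claim 8). For brevity this sketch warps the whole image; the
    # claims warp only the second face.
    local = second_matrix @ np.linalg.inv(first_matrix)
    second_intermediate = cv2.warpPerspective(first_intermediate, local, (w, h))
    # The residual correction from model difference information
    # (claims 1 and 7) would be applied here.
    return second_intermediate
```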
2. The method according to claim 1, wherein before the step of acquiring the first face pose information of the first face, the first two-dimensional coordinates of the first face feature points, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature points in the two-dimensional face image to be processed, the method further comprises:
determining respective areas of at least one face in the two-dimensional face image;
determining at least one face with the largest area according to the respective areas of the at least one face to obtain the first face; or
determining respective positions of at least one face in the two-dimensional face image;
and determining at least one face nearest to the center of the two-dimensional face image according to the respective positions of the at least one face to obtain the first face.
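For illustration only: a sketch of the two alternative first-face selections in claim 2, assuming faces are given as (x, y, w, h) bounding boxes; that representation is an assumption of the sketch, not part of the claim.

```python
# Illustrative sketch only; face boxes are (x, y, w, h) tuples.
def pick_first_face(face_boxes, image_width, image_height, by="area"):
    if by == "area":
        # Alternative 1: the face with the largest area.
        return max(face_boxes, key=lambda b: b[2] * b[3])
    # Alternative 2: the face whose center is nearest the image center.
    cx, cy = image_width / 2.0, image_height / 2.0
    def squared_distance(b):
        fx, fy = b[0] + b[2] / 2.0, b[1] + b[3] / 2.0
        return (fx - cx) ** 2 + (fy - cy) ** 2
    return min(face_boxes, key=squared_distance)
```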
3. The method according to claim 1, wherein before the step of acquiring the first face pose information of the first face, the first two-dimensional coordinates of the first face feature points, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature points in the two-dimensional face image to be processed, the method further comprises:
determining respective areas of at least one face in the two-dimensional face image, and obtaining at least one area value;
determining the distance between each face and the center of the two-dimensional face image to obtain at least one distance value;
determining the weight coefficient of each of the at least one area value and the weight coefficient of each of the at least one distance value;
determining an evaluation value of the importance degree of each face according to the at least one area value, the weight coefficient of each area value, the at least one distance value, and the weight coefficient of each distance value;
and determining at least one face with the maximum evaluation value according to the evaluation value of the importance degree of each face to obtain the first face.
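For illustration only: a sketch of the weighted evaluation value in claim 3. The particular linear combination (area rewarded, distance from the center penalized) is an assumption; the claim only requires that each area value and each distance value enter the evaluation with its own weight coefficient.

```python
# Illustrative sketch only; the formula and normalization are assumptions.
def evaluation_value(area, distance, w_area, w_dist, max_area, max_dist):
    # Normalize both terms so area and distance are comparable
    # before the weight coefficients are applied.
    return w_area * (area / max_area) - w_dist * (distance / max_dist)

# The first face is then the face maximizing this value, for example:
# first = max(faces, key=lambda f: evaluation_value(f.area, f.dist, ...))
```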
4. The method according to claim 1, wherein before the step of acquiring the first face pose information of the first face, the first two-dimensional coordinates of the first face feature points, the second face pose information of the second face, and the second two-dimensional coordinates of the second face feature points in the two-dimensional face image to be processed, the method further comprises:
determining an edge region in the two-dimensional face image according to preset region parameters;
and determining at least one face in the edge area as the second face.
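For illustration only: a sketch of the edge-region test in claim 4, assuming the preset region parameter is a margin ratio measured from the image border; the claim itself leaves the parameterization open.

```python
# Illustrative sketch only; `margin` is an assumed region parameter.
def in_edge_region(face_box, image_width, image_height, margin=0.2):
    x, y, w, h = face_box
    cx, cy = x + w / 2.0, y + h / 2.0
    # A face counts as a second face when its center lies within the
    # margin band along any border of the image.
    return (cx < margin * image_width or cx > (1 - margin) * image_width
            or cy < margin * image_height or cy > (1 - margin) * image_height)
```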
5. The method of claim 1, wherein the determining a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates comprises:
adjusting the first two-dimensional coordinates according to the first face pose information and third face pose information of a preset two-dimensional face model, wherein the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model;
and registering according to the adjusted first two-dimensional coordinates and the third two-dimensional coordinates of the third face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix.
6. The method of claim 5, wherein the determining a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates comprises:
adjusting the second two-dimensional coordinates according to the second face pose information and the third face pose information, wherein the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model;
and registering according to the adjusted second two-dimensional coordinates and the third two-dimensional coordinates to obtain the second perspective transformation matrix.
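For illustration only: a sketch of the registration step shared by claims 5 and 6. After the feature points have been adjusted into the pose of the preset two-dimensional face model, a 3x3 perspective matrix is fitted between the adjusted points and the model's feature points; cv2.findHomography is one standard solver for this, though the claims do not prescribe a particular method.

```python
# Illustrative sketch only; assumes OpenCV's homography solver.
import cv2
import numpy as np

def register_perspective_matrix(adjusted_points, model_points):
    # Both inputs are N x 2 arrays of corresponding feature points;
    # at least four point pairs are required by the solver.
    src = np.asarray(adjusted_points, dtype=np.float32)
    dst = np.asarray(model_points, dtype=np.float32)
    # Fit the 3x3 perspective transformation matrix mapping the
    # adjusted feature points onto the preset model's feature points.
    matrix, _inliers = cv2.findHomography(src, dst, cv2.RANSAC)
    return matrix
```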
7. The method of claim 1, wherein comparing the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information comprises:
acquiring first model parameters of the first three-dimensional face model;
acquiring second model parameters of the second three-dimensional face model;
and comparing the first model parameter with the second model parameter to obtain the model difference information.
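For illustration only: a sketch of the parameter comparison in claim 7, assuming both three-dimensional face models are represented by parameter vectors of equal length (for example, morphable-model shape coefficients or aligned vertex positions); the representation is an assumption, not something the claim specifies.

```python
# Illustrative sketch only; equal-length parameter vectors are assumed.
import numpy as np

def model_difference(first_params, second_params):
    a = np.asarray(first_params, dtype=np.float64)
    b = np.asarray(second_params, dtype=np.float64)
    # The per-parameter difference serves as the model difference
    # information used to correct the second face.
    return b - a
```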
8. The method according to claim 1, wherein the performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image includes:
determining an inverse matrix of the first perspective transformation matrix;
and performing perspective distortion correction on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix to obtain the second intermediate image.
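For illustration only: a sketch of claim 8 that, unlike the simplified pipeline after claim 1, warps only the second face's region. The translation matrices that move the composed warp into the region's local coordinate frame are a detail of this sketch, not of the claim.

```python
# Illustrative sketch only; assumes OpenCV, NumPy and a face bounding box.
import cv2
import numpy as np

def correct_second_face(first_intermediate, first_matrix, second_matrix,
                        face_box):
    x, y, w, h = face_box
    region = first_intermediate[y:y + h, x:x + w]
    # Compose the second matrix with the inverse of the first matrix,
    # then express the result in the local frame of the face region.
    t = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]], dtype=np.float64)
    local = t @ second_matrix @ np.linalg.inv(first_matrix) @ np.linalg.inv(t)
    corrected = cv2.warpPerspective(region, local, (w, h))
    second_intermediate = first_intermediate.copy()
    second_intermediate[y:y + h, x:x + w] = corrected
    return second_intermediate
```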
9. An image processing apparatus, characterized in that the apparatus comprises:
a first acquisition module configured to acquire first face pose information of a first face, first two-dimensional coordinates of first face feature points, second face pose information of a second face, and second two-dimensional coordinates of second face feature points in a two-dimensional face image to be processed;
a first determining module configured to determine a first perspective transformation matrix of the two-dimensional face image according to the first face pose information and the first two-dimensional coordinates;
a second determining module configured to determine a second perspective transformation matrix of the second face according to the second face pose information and the second two-dimensional coordinates;
a first correcting module configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
a second correcting module configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a second intermediate image;
a second acquisition module configured to acquire a first three-dimensional face model of the second face in the second intermediate image;
a comparison module configured to compare the first three-dimensional face model with a pre-stored second three-dimensional face model to obtain model difference information; and
a third correction module configured to correct the second face in the second intermediate image according to the model difference information to obtain a corrected two-dimensional face image.
10. A terminal device, comprising a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to implement the method steps of any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the method steps of any one of claims 1-8.
CN201910439198.6A 2019-05-24 2019-05-24 Image processing method and device Active CN111985280B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910439198.6A CN111985280B (en) 2019-05-24 2019-05-24 Image processing method and device


Publications (2)

Publication Number Publication Date
CN111985280A CN111985280A (en) 2020-11-24
CN111985280B (en) 2023-12-29

Family

ID=73436914

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910439198.6A Active CN111985280B (en) 2019-05-24 2019-05-24 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111985280B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866497A (en) * 2010-06-18 2010-10-20 北京交通大学 Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system
WO2015029982A1 (en) * 2013-08-29 2015-03-05 日本電気株式会社 Image processing device, image processing method, and program
WO2018188277A1 (en) * 2017-04-14 2018-10-18 广州视源电子科技股份有限公司 Sight correction method and device, intelligent conference terminal and storage medium
CN108470328A (en) * 2018-03-28 2018-08-31 百度在线网络技术(北京)有限公司 Method and apparatus for handling image
CN108985220A (en) * 2018-07-11 2018-12-11 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Single-image three-dimensional face reconstruction based on pose estimation; Zhan Hongyan; Zhang Lei; Tao Peiya; Microelectronics & Computer (No. 09); full text *


Similar Documents

Publication Publication Date Title
JP6134446B2 (en) Image division method, image division apparatus, image division device, program, and recording medium
CN108470322B (en) Method and device for processing face image and readable storage medium
JP6348611B2 (en) Automatic focusing method, apparatus, program and recording medium
CN107944367B (en) Face key point detection method and device
CN106778773B (en) Method and device for positioning target object in picture
EP3057304A1 (en) Method and apparatus for generating image filter
US11308692B2 (en) Method and device for processing image, and storage medium
CN109840939B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
CN106503682B (en) Method and device for positioning key points in video data
CN110930336B (en) Image processing method and device, electronic equipment and storage medium
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
EP2975574B1 (en) Method, apparatus and terminal for image retargeting
CN108648280B (en) Virtual character driving method and device, electronic device and storage medium
CN114170324A (en) Calibration method and device, electronic equipment and storage medium
CN105678296B (en) Method and device for determining character inclination angle
EP3770859B1 (en) Image processing method, image processing apparatus, and storage medium
US9665925B2 (en) Method and terminal device for retargeting images
CN110876014B (en) Image processing method and device, electronic device and storage medium
KR20200135998A (en) Position posture detection method and device, electronic device and storage medium
CN108154090B (en) Face recognition method and device
CN107239758B (en) Method and device for positioning key points of human face
CN106469446B (en) Depth image segmentation method and segmentation device
CN108846321B (en) Method and device for identifying human face prosthesis and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Hu Yi; Wang Xuanran; Lin Zhehong; Du Hui
Inventor before: Hu Yi; Wang Xuanran
GR01 Patent grant