CN111986097B - Image processing method and device


Info

Publication number
CN111986097B
Authority
CN
China
Prior art keywords: face, dimensional, image, transformation matrix, feature points
Prior art date
Legal status
Active
Application number
CN201910439657.0A
Other languages
Chinese (zh)
Other versions
CN111986097A (en)
Inventor
胡毅
汪轩然
林哲弘
杜慧
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910439657.0A
Publication of CN111986097A
Application granted
Publication of CN111986097B
Legal status: Active
Anticipated expiration


Classifications

    • G06T5/80
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Abstract

The disclosure relates to an image processing method and device for reducing the global and local perspective distortion of a group photo image and improving the quality of the group photo image. The method comprises the following steps: acquiring face pose information of a first face and a second face in a two-dimensional face image to be processed and two-dimensional coordinates of face feature points; determining a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points; determining a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points; performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image; and performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image. This technical scheme can improve the quality of the group photo image.

Description

Image processing method and device
Technical Field
The disclosure relates to the technical field of internet, and in particular relates to an image processing method and device.
Background
When a mobile terminal (such as a mobile phone) takes a photograph, both the rear camera and the front camera are strongly affected by perspective distortion in scenes containing portraits, so faces in the imaging result are inevitably degraded.
For example, in mobile terminal shooting, the rear camera is often used to shoot multi-person group photos, and such scenes exhibit perspective distortion in the resulting photo; in particular, faces far from the center of the image are significantly distorted. Because faces carry the most critical information, distorted faces need to be corrected.
For another example, the front camera may be used for a single-person selfie or a group selfie of two or three persons. When shooting a single person, the face is close to the front camera of the mobile phone, which causes a certain perspective distortion and strongly affects how the face is rendered. When two or three persons shoot a group selfie, the mobile terminal is close to the photographed faces, so the faces on the left and right sides of the image suffer a large degree of perspective distortion; in the final image, the faces on both sides are deformed (such as transverse stretching, radial stretching, or a combination of the two), so correction is also necessary.
Therefore, how to process an image to mitigate perspective distortion is a technical problem to be solved.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide an image processing method and apparatus, which are used to reduce global perspective distortion and local perspective distortion of a group photo image and improve quality of the group photo image.
According to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including:
acquiring face pose information of a first face and a second face in a two-dimensional face image to be processed and two-dimensional coordinates of face feature points;
determining a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
determining a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
and performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image.
In one embodiment, before the acquiring the face pose information of the first face and the second face in the two-dimensional face image to be processed and the two-dimensional coordinates of the face feature points, the method further includes:
determining respective areas of at least one face in the two-dimensional face image;
determining at least one face with the largest area according to the respective areas of the at least one face to obtain the first face; or
determining respective positions of at least one face in the two-dimensional face image;
and determining at least one face nearest to the center of the two-dimensional face image according to the respective positions of the at least one face to obtain the first face.
In one embodiment, before the acquiring the face pose information of the first face and the second face in the two-dimensional face image to be processed and the two-dimensional coordinates of the face feature points, the method further includes:
determining an edge region in the two-dimensional face image according to preset region parameters;
and determining at least one face in the edge area as the second face.
In one embodiment, the determining the first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points includes:
according to the face pose information of the first face and the face pose information of a preset two-dimensional face model, adjusting the two-dimensional coordinates of the face feature points of the first face; the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model;
registering the adjusted face feature points of the first face with the face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix.
In one embodiment, the determining the second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points includes:
according to the face pose information of the second face and the face pose information of a preset two-dimensional face model, adjusting the two-dimensional coordinates of the face feature points of the second face; the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model;
registering the adjusted face feature points of the second face with the face feature points in the preset two-dimensional face model to obtain the second perspective transformation matrix.
In one embodiment, the performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image includes:
determining an inverse of the first perspective transformation matrix;
and performing perspective distortion correction on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: face pose information of a first face and a second face in a two-dimensional face image to be processed, and the two-dimensional coordinates of their face feature points, are obtained; a first perspective transformation matrix of the two-dimensional face image is determined according to the face pose information of the first face and the two-dimensional coordinates of its face feature points; a second perspective transformation matrix of the second face is determined according to the face pose information of the second face and the two-dimensional coordinates of its face feature points; and perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Perspective distortion correction is then performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, thereby correcting the local perspective distortion of the group photo image. Thus, both the global and the local perspective distortion of the group photo image can be reduced, and the quality of the group photo image can be improved.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing method including:
acquiring face pose information of a first face and two-dimensional coordinates of face feature points in a two-dimensional face image to be processed;
determining a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
acquiring face pose information of a second face in the first intermediate image and two-dimensional coordinates of face feature points;
determining a third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
and performing perspective distortion correction on the second face in the first intermediate image according to the third perspective transformation matrix to obtain a corrected two-dimensional face image.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: by acquiring the face pose information of the first face and the two-dimensional coordinates of its face feature points in the two-dimensional face image to be processed, a first perspective transformation matrix of the two-dimensional face image can be determined, and perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Then, the face pose information of a second face in the first intermediate image and the two-dimensional coordinates of its face feature points are acquired, a third perspective transformation matrix of the second face is determined from them, and perspective distortion correction is performed on the second face in the first intermediate image according to the third perspective transformation matrix, thereby correcting the local perspective distortion of the group photo image. Thus, both the global and the local perspective distortion of the group photo image can be reduced, and the quality of the group photo image can be improved.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the first acquisition module is configured to acquire face pose information of each of a first face and a second face and two-dimensional coordinates of face feature points in a two-dimensional face image to be processed;
the first determining module is configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
the second determining module is configured to determine a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
the first correcting module is configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
and the second correcting module is configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
the second acquisition module is configured to acquire face pose information of a first face and two-dimensional coordinates of face feature points in the two-dimensional face image to be processed;
the third determining module is configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
the third correcting module is configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
the third acquisition module is configured to acquire face pose information of a second face in the first intermediate image and two-dimensional coordinates of face feature points;
a fourth determining module configured to determine a third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
and the fourth correction module is configured to perform perspective distortion correction on the second face in the first intermediate image according to the third perspective transformation matrix to obtain a corrected two-dimensional face image.
According to a fifth aspect of embodiments of the present disclosure, there is provided a terminal device comprising a processor and a memory; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory, and implement the method steps described above.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method steps.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: performing perspective distortion correction on the whole two-dimensional face image corrects the perspective distortion of the group photo image as a whole, and performing perspective distortion correction on the second face in the two-dimensional face image corrects the local perspective distortion of a designated face in the group photo image. In this way, the technical scheme provided by the embodiments of the disclosure can reduce not only the global perspective distortion of the group photo image but also the local perspective distortion, improving the quality of the group photo image.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 6 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 7 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 8 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram of an image processing apparatus according to another exemplary embodiment.
Fig. 11 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. The image processing method can be applied to terminal equipment with an image processing function, such as a smart phone, a tablet personal computer (PAD), a video camera and the like. As shown in fig. 1, the image processing method includes the following steps S101 to S105:
in step S101, two-dimensional coordinates of face pose information and face feature points of each of the first face and the second face in the two-dimensional face image to be processed are obtained.
In one embodiment, the two-dimensional face image acquired by the image capturing device of the terminal device may be used as the two-dimensional face image to be processed. Further, it may be detected whether a plurality of faces exist in the two-dimensional face image acquired by the camera device of the terminal device, and when a plurality of faces exist, the acquired two-dimensional face image may be used as the two-dimensional face image to be processed.
In one embodiment, the number of the first faces is at least one, and may be one, two or three, for example. The number of the second faces is at least one, and may be one, two, three or more, for example. The first face may be a face with a large area or a face centered in a two-dimensional face image. The second face may be a face located in an edge region of the two-dimensional face image.
In one embodiment, the area of the first face is larger than the area of the other faces in the two-dimensional face image. In this embodiment, as shown in fig. 2, before step S101, the following steps S201 to S202 may be further included:
in step S201, an area of each of at least one face in the two-dimensional face image is determined.
In step S202, at least one face with the largest area is determined according to the respective areas of the at least one face, so as to obtain the first face.
In one embodiment, the terminal device may mark each detected face with a rectangular face recognition frame and use the area of each face recognition frame as the area of the corresponding face. Then, the at least one largest area can be determined from among the areas of the at least one face, and the face corresponding to each such area is determined as the first face. For example, the two largest areas may be determined from the areas of the at least one face, and the two faces corresponding to those areas may be determined as the first faces.
In one embodiment, before step S202, the following steps may further be included: determining a maximum area value and a minimum area value among the respective areas of the at least one face, and determining that the difference between the maximum area value and the minimum area value is larger than a first preset threshold. Thus, when the areas of the faces in the two-dimensional face image differ significantly, at least one face with the largest area can be selected as the first face. The face with the largest area carries relatively more information, which improves the accuracy of the image processing.
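As an illustration, the area-based selection of steps S201 to S202 might be sketched as follows in Python; the (x, y, w, h) box format, the helper name, and the threshold value are assumptions for the example, not part of the disclosure:

```python
# Minimal sketch of area-based first-face selection (steps S201-S202).
# Boxes are assumed to be (x, y, w, h) face recognition frames; the
# threshold value is illustrative only.
def select_first_faces_by_area(face_boxes, k=1, first_threshold=2000):
    areas = [w * h for (_, _, w, h) in face_boxes]
    # Mirror the first-preset-threshold check: only prefer the largest
    # faces when the spread between face areas is significant.
    if max(areas) - min(areas) <= first_threshold:
        return None  # caller may fall back to the center-based strategy
    order = sorted(range(len(face_boxes)), key=lambda i: areas[i], reverse=True)
    return [face_boxes[i] for i in order[:k]]
```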
In another embodiment, the first face is located closest to the center of the two-dimensional face image. In this embodiment, as shown in fig. 3, before step S101, the following steps S301 to S302 may be further included:
in step S301, respective positions of at least one face in the two-dimensional face image are determined.
In step S302, at least one face nearest to the center of the two-dimensional face image is determined according to the respective positions of the at least one face, and the first face is obtained.
In one embodiment, the terminal device may mark each detected face with a rectangular face recognition frame and take the center position of each face recognition frame as the position of the corresponding face. Then, a distance value between the position of each face and the center of the two-dimensional face image is determined, yielding at least one distance value. The at least one smallest distance value is then determined, and the face corresponding to it is determined as the first face. For example, the two smallest distance values may be determined, and the two faces corresponding to them may be determined as the first faces.
In one embodiment, before step S301, the following steps may further be included: determining respective areas of at least one face in the two-dimensional face image, then determining a maximum area value and a minimum area value among those areas, and determining that the difference between the maximum area value and the minimum area value is smaller than a second preset threshold, where the second preset threshold is smaller than the first preset threshold. Thus, when the areas of the faces in the two-dimensional face image do not differ much, a face close to the center of the two-dimensional face image can be selected as the first face. A face close to the center of the two-dimensional face image suffers little distortion from other causes, which improves the accuracy of the image processing.
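A corresponding sketch of the center-distance strategy of steps S301 to S302, under the same assumed box format:

```python
import math

# Sketch of center-based first-face selection (steps S301-S302): rank faces
# by the distance from the center of their recognition frame to the center
# of the two-dimensional face image.
def select_first_faces_by_center(face_boxes, image_width, image_height, k=1):
    cx, cy = image_width / 2.0, image_height / 2.0
    def center_distance(box):
        x, y, w, h = box
        return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)
    return sorted(face_boxes, key=center_distance)[:k]
```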
In one embodiment, as shown in fig. 4, before step S101, the following steps S401 to S402 may be further included:
in step S401, an edge region in the two-dimensional face image is determined according to a preset region parameter.
In step S402, at least one face in the edge region is determined as the second face.
In one embodiment, the region parameters may be pre-stored in the terminal device. The region parameters may include start coordinates and end coordinates of one, two, or more regions in the two-dimensional face image, and are used to determine the edge region of the two-dimensional face image. The terminal device may determine the edge region in the two-dimensional face image according to the preset region parameters and determine at least one face in the edge region as the second face. Determining the second face in the edge region of the two-dimensional face image through the region parameters is easy to implement and highly accurate.
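A sketch of the edge-region test of steps S401 to S402; modelling the region parameters as a single margin ratio is a simplifying assumption, since the disclosure only requires start and end coordinates of one or more regions:

```python
# Sketch of second-face selection (steps S401-S402): a face whose frame
# center falls inside the edge region is treated as a second face.
def select_second_faces(face_boxes, image_width, image_height, margin=0.2):
    def in_edge_region(box):
        x, y, w, h = box
        fx, fy = x + w / 2.0, y + h / 2.0
        return (fx < margin * image_width or fx > (1 - margin) * image_width or
                fy < margin * image_height or fy > (1 - margin) * image_height)
    return [box for box in face_boxes if in_edge_region(box)]
```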
In one embodiment, the two-dimensional coordinates of the face feature points of the first face and the second face may be obtained through face feature point detection. The face feature points may include the outer corner of the left eye, the outer corner of the right eye, the tip of the nose, the left corner of the mouth, the right corner of the mouth, and so on. The number of face feature points per face may be 21 points, 106 points, or another number. Face feature point detection methods may include, but are not limited to, model-based methods such as the ASM (Active Shape Model) and AAM (Active Appearance Model), cascaded shape regression (CPR, Cascaded Pose Regression) based methods, and deep-learning-based methods.
In one embodiment, the face pose estimation may be performed on the first face and the second face, so as to obtain the face pose information of each of the first face and the second face. The face pose information may include a horizontal rotation angle (yaw), a pitch angle (pitch), and a rotation angle (roll) of the face, among others. The horizontal rotation angle (yaw), the pitch angle (pitch), and the rotation angle (roll) may be angles rotated around three coordinate axes of a space rectangular coordinate system established with a certain point on the face as an origin, respectively.
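One common way to implement the pose-estimation step is a PnP solve against generic 3D landmark positions; the disclosure does not prescribe an algorithm, so the following Python sketch, its helper name, and the Euler-angle convention are assumptions:

```python
import cv2
import numpy as np

# Sketch: recover (yaw, pitch, roll) for one face from matched 3D model
# landmarks and detected 2D feature points via cv2.solvePnP.
def estimate_face_pose(model_points_3d, image_points_2d, focal_length, center):
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, _tvec = cv2.solvePnP(np.asarray(model_points_3d, np.float64),
                                   np.asarray(image_points_2d, np.float64),
                                   camera_matrix, distCoeffs=None)
    rot, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    # Euler angles under one common axis convention; conventions vary.
    pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
    yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[2, 1], rot[2, 2])))
    roll = np.degrees(np.arctan2(rot[1, 0], rot[0, 0]))
    return yaw, pitch, roll
```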
In step S102, a first perspective transformation matrix of the two-dimensional face image is determined according to the face pose information of the first face and the two-dimensional coordinates of the face feature points.
In one embodiment, as shown in fig. 5, step S102 may include the following steps S501 to S502:
in step S501, according to the face pose information of the first face and the face pose information of a preset two-dimensional face model, two-dimensional coordinates of face feature points of the first face are adjusted; the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model.
In step S502, the adjusted face feature points of the first face and the face feature points in the preset two-dimensional face model are registered, so as to obtain the first perspective transformation matrix.
In this embodiment, the terminal device may store a trained preset two-dimensional face model. In face pose information of a preset two-dimensional face model, the horizontal rotation angle (yaw), the pitch angle (pitch) and the rotation angle (roll) may be zero. That is, the preset two-dimensional face model is a model built from the front face.
In this embodiment, the terminal device may adjust the two-dimensional coordinates of the face feature points of the first face according to the face pose information of the first face and the face pose information of the preset two-dimensional face model, so that the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model. Specifically, the two-dimensional coordinates of the face feature points of the first face can be adjusted so that the distances and proportional relationships between the face feature points of the first face are the same as, or in a fixed proportion to, the distances and proportional relationships between the face feature points of the preset two-dimensional face model.
Then, the terminal equipment can register the adjusted face feature points of the first face with the face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix. Illustratively, the adjusted face feature point of the first face is denoted as X, the face feature point in the preset two-dimensional face model is denoted as X', and the first perspective transformation matrix is denoted as a. Then there is
X’=AX (1)
By solving the above equation (1), the value of the first perspective transformation matrix a can be obtained.
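With more than four point correspondences, equation (1) is over-determined, so a least-squares or RANSAC homography fit is the usual way to solve it; the helper below is an illustrative sketch, not the disclosed implementation:

```python
import cv2
import numpy as np

# Sketch of solving X' = AX: register the adjusted feature points X of the
# first face against the model feature points X' to obtain the 3x3 matrix A.
def estimate_perspective_matrix(adjusted_points, model_points):
    src = np.asarray(adjusted_points, dtype=np.float32)  # X
    dst = np.asarray(model_points, dtype=np.float32)     # X'
    # At least four correspondences are required; RANSAC down-weights
    # mismatched feature points.
    matrix, _mask = cv2.findHomography(src, dst, method=cv2.RANSAC)
    return matrix  # first perspective transformation matrix A
```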
In step S103, a second perspective transformation matrix of the second face is determined according to the face pose information of the second face and the two-dimensional coordinates of the face feature points.
In one embodiment, as shown in fig. 6, step S103 may include the following steps S601 to S602:
in step S601, according to the face pose information of the second face and the face pose information of the preset two-dimensional face model, two-dimensional coordinates of face feature points of the second face are adjusted; the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model.
In step S602, the adjusted face feature points of the second face are registered with the face feature points in the preset two-dimensional face model, so as to obtain the second perspective transformation matrix.
In this embodiment, the terminal device may store a trained preset two-dimensional face model. In face pose information of a preset two-dimensional face model, the horizontal rotation angle (yaw), the pitch angle (pitch) and the rotation angle (roll) may be zero. That is, the preset two-dimensional face model is a model built from the front face.
In this embodiment, the terminal device may adjust the two-dimensional coordinates of the face feature points of the second face according to the face pose information of the second face and the face pose information of the preset two-dimensional face model, so that the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model. Specifically, the two-dimensional coordinates of the face feature points of the second face can be adjusted so that the distances and proportional relationships between the face feature points of the second face are the same as, or in a fixed proportion to, the distances and proportional relationships between the face feature points of the preset two-dimensional face model.
And then, the terminal equipment can register the adjusted face characteristic points of the second face with the face characteristic points in the preset two-dimensional face model to obtain the second perspective transformation matrix. Illustratively, the adjusted face feature point of the second face is denoted as Y, the face feature point in the preset two-dimensional face model is denoted as Y', and the above-mentioned second perspective transformation matrix is denoted as B. Then there is
Y’=BY (2)
By solving the above equation (2), the value of the second perspective transformation matrix B can be obtained.
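Under the same assumptions, the illustrative helper sketched for equation (1) can simply be reused for equation (2):

```python
# Register the adjusted second-face points Y against the model points Y'
# to obtain B (using the assumed helper sketched for equation (1)).
B = estimate_perspective_matrix(Y_adjusted, Y_model)
```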
In the embodiment of the present disclosure, step S102 and step S103 may be performed in parallel, and thus, the time of image processing may be reduced, and the efficiency of image processing may be improved.
In step S104, perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix, so as to obtain a first intermediate image.
In this embodiment, the two-dimensional face image may be denoted as Z, the first intermediate image may be denoted as M, and M may be obtained by the following calculation formula (3):
M=AZ (3)
in this embodiment, the first perspective transformation matrix is used to perform overall perspective distortion correction on the two-dimensional face image, so as to obtain a first intermediate image.
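In pixel terms, equation (3) means applying A to every pixel coordinate of Z, which might be sketched as follows; the function name is illustrative:

```python
import cv2

# Sketch of the global correction (step S104): M = AZ, i.e. warp the whole
# two-dimensional face image Z with the first perspective matrix A.
def correct_global_distortion(image_z, matrix_a):
    h, w = image_z.shape[:2]
    return cv2.warpPerspective(image_z, matrix_a, (w, h))  # intermediate image M
```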
In step S105, perspective distortion correction is performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, so as to obtain a corrected two-dimensional face image.
In this embodiment, since step S104 performs perspective distortion correction on the two-dimensional face image as a whole using the first perspective transformation matrix, perspective distortion correction is applied not only to the first face in the two-dimensional face image but also to the second face. Therefore, when performing perspective distortion correction on the second face, the correction already applied to the second face by the first perspective transformation matrix must first be undone, and perspective distortion correction then performed on the second face using the second perspective transformation matrix.
In one embodiment, as shown in FIG. 7, step S105 may include the following steps S701-S702:
in step S701, an inverse of the first perspective transformation matrix is determined.
In step S702, perspective distortion correction is performed on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix, so as to obtain a corrected two-dimensional face image.
In the present embodiment, the inverse matrix A⁻¹ of the first perspective transformation matrix is first determined; then, perspective distortion correction is performed on the second face in the first intermediate image M according to the inverse matrix A⁻¹ and the second perspective transformation matrix B.
In this embodiment, the second face in the first intermediate image M may be denoted as F, and the second face in the corrected two-dimensional face image may be denoted as F'; then F' may be calculated by the following formula (4):
F' = A⁻¹BF (4)
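A sketch of steps S701 to S702, composing the matrices literally as in equation (4); how the correction is confined to the second face's region (and how the seam is blended) is left open by the disclosure, so the crop handling below is a simplified assumption:

```python
import cv2
import numpy as np

# Sketch of the local correction: F' = A^-1 B F per equation (4).
def correct_second_face(intermediate_m, matrix_a, matrix_b, face_box):
    composed = np.linalg.inv(matrix_a) @ matrix_b  # A^-1 B
    x, y, w, h = face_box
    region_f = intermediate_m[y:y + h, x:x + w]
    # A full implementation would re-express the homography in the crop's
    # coordinate frame and blend the seam with the surrounding pixels.
    corrected = cv2.warpPerspective(region_f, composed, (w, h))
    out = intermediate_m.copy()
    out[y:y + h, x:x + w] = corrected
    return out
```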
in the embodiment of the disclosure, when a group photo image is shot, a perspective transformation matrix of the whole group photo image is determined according to the face pose information of a relatively large face or a face centered in the position in the group photo image and the two-dimensional coordinates of the face feature points, the perspective distortion of the whole group photo image is corrected, and the perspective transformation matrix of the appointed face is determined according to the face pose information of the appointed face in the group photo image and the two-dimensional coordinates of the face feature points, so that the perspective distortion of the appointed face in the group photo image is corrected. Therefore, the problem of face deformation caused by perspective distortion can be automatically reduced, and the imaging effect is improved.
In the embodiments of the disclosure, the face pose information of the first face and the second face in the two-dimensional face image to be processed and the two-dimensional coordinates of their face feature points are obtained; the first perspective transformation matrix of the two-dimensional face image is determined according to the face pose information of the first face and the two-dimensional coordinates of its face feature points; the second perspective transformation matrix of the second face is determined according to the face pose information of the second face and the two-dimensional coordinates of its face feature points; and perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Perspective distortion correction is then performed on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, thereby correcting the local perspective distortion of the group photo image. Thus, both the global and the local perspective distortion of the group photo image can be reduced, and the quality of the group photo image can be improved.
Fig. 8 is a flowchart illustrating an image processing method according to another exemplary embodiment. The image processing method can be applied to terminal equipment with an image processing function, such as a smart phone, a tablet personal computer (PAD), a video camera and the like. As shown in fig. 8, the image processing method includes the following steps S801 to S806:
In step S801, face pose information of a first face and two-dimensional coordinates of face feature points in a two-dimensional face image to be processed are acquired.
In this embodiment, step S801 is similar to the method for acquiring the two-dimensional coordinates of the face feature point and the face pose information of the first face in the two-dimensional face image in step S101, and will not be described herein.
In this embodiment, the two-dimensional face image to be processed may be denoted as Z.
In step S802, a first perspective transformation matrix of the two-dimensional face image is determined according to the face pose information of the first face and the two-dimensional coordinates of the face feature points.
In this embodiment, step S802 is similar to the method for determining the first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points in step S102, and is not described herein.
In this embodiment, the determined first perspective transformation matrix is also a.
In step S803, perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix, so as to obtain a first intermediate image.
In this embodiment, step S803 is similar to the method for obtaining the first intermediate image by performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix in the above step S104, and will not be described herein.
In the present embodiment, the first intermediate image M can be obtained by the following expression (3):
M=AZ (3)
in this embodiment, the first perspective transformation matrix a is used to perform overall perspective distortion correction on the two-dimensional face image Z, so as to obtain a first intermediate image M.
In step S804, two-dimensional coordinates of face pose information of the second face and face feature points in the first intermediate image are obtained.
In this embodiment, the method for acquiring the face pose information of the second face in the first intermediate image and the two-dimensional coordinates of the face feature points is similar to the method for acquiring the face pose information of the second face in the two-dimensional face image to be processed and the two-dimensional coordinates of the face feature points in the above-mentioned step S101, and will not be described herein.
In step S805, a third perspective transformation matrix of the second face is determined according to the face pose information of the second face and the two-dimensional coordinates of the face feature points.
In this embodiment, the method for determining the third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points is similar to the method for determining the second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points in step S103, and is not described herein.
In this embodiment, the determined third perspective transformation matrix may be denoted as C.
In step S806, perspective distortion correction is performed on the second face in the first intermediate image according to the third perspective transformation matrix, so as to obtain a corrected two-dimensional face image.
In this embodiment, the second face in the first intermediate image M is denoted as F, and the second face in the corrected two-dimensional face image is denoted as F ', and then F' may be calculated by the following calculation formula (5):
F’=CF (5)
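Because C is estimated from feature points re-detected in the intermediate image M itself, no inverse of A is needed in this variant; a sketch under the same simplified crop-handling assumption as before:

```python
import cv2

# Sketch of the sequential variant (steps S804-S806): F' = CF per equation (5).
def correct_second_face_sequential(intermediate_m, matrix_c, face_box):
    x, y, w, h = face_box
    region_f = intermediate_m[y:y + h, x:x + w]
    out = intermediate_m.copy()
    out[y:y + h, x:x + w] = cv2.warpPerspective(region_f, matrix_c, (w, h))
    return out
```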
In the embodiments of the disclosure, by acquiring the face pose information of the first face and the two-dimensional coordinates of its face feature points in the two-dimensional face image to be processed, the first perspective transformation matrix of the two-dimensional face image can be determined, and perspective distortion correction is performed on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image, thereby correcting the global perspective distortion of the group photo image. Then, the face pose information of the second face in the first intermediate image and the two-dimensional coordinates of its face feature points are acquired, the third perspective transformation matrix of the second face is determined from them, and perspective distortion correction is performed on the second face in the first intermediate image according to the third perspective transformation matrix, thereby correcting the local perspective distortion of the group photo image. Thus, both the global and the local perspective distortion of the group photo image can be reduced, and the quality of the group photo image can be improved.
Fig. 9 is a block diagram of an image processing apparatus according to an exemplary embodiment. In this embodiment, the apparatus includes:
a first obtaining module 91, configured to obtain face pose information of each of a first face and a second face in a two-dimensional face image to be processed and two-dimensional coordinates of face feature points;
a first determining module 92 configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
a second determining module 93 configured to determine a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
a first correction module 94, configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix, so as to obtain a first intermediate image;
and the second correction module 95 is configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix, so as to obtain a corrected two-dimensional face image.
Fig. 10 is a block diagram of an image processing apparatus according to another exemplary embodiment. In this embodiment, the apparatus includes:
a second obtaining module 1001, configured to obtain face pose information of a first face and two-dimensional coordinates of face feature points in a two-dimensional face image to be processed;
a third determining module 1002, configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
a third correction module 1003, configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix, so as to obtain a first intermediate image;
a third obtaining module 1004, configured to obtain face pose information of a second face in the first intermediate image and two-dimensional coordinates of face feature points;
a fourth determining module 1005 configured to determine a third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
and a fourth correction module 1006, configured to perform perspective distortion correction on the second face in the first intermediate image according to the third perspective transformation matrix, so as to obtain a corrected two-dimensional face image.
The specific manner in which the processor performs the operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 11 is a block diagram of a terminal device according to an exemplary embodiment. For example, device 1100 may be a mobile phone, computer, digital broadcast terminal, messaging device, tablet device, personal digital assistant, or the like.
Referring to fig. 11, device 1100 may include one or more of the following components: a processing component 1102, a memory 1104, a power component 1106, a multimedia component 1108, an audio component 1110, an input/output (I/O) interface 1112, a sensor component 1114, and a communication component 1116.
The processing component 1102 generally controls overall operation of the device 1100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1102 may include one or more processors 1120 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1102 can include one or more modules that facilitate interactions between the processing component 1102 and other components. For example, the processing component 1102 may include a multimedia module to facilitate interaction between the multimedia component 1108 and the processing component 1102.
The memory 1104 is configured to store various types of data to support operation of the device 1100. Examples of such data include instructions for any application or method operating on the device 1100, contact data, phonebook data, messages, pictures, video, and the like. The memory 1104 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 1106 provides power to the various components of the device 1100. The power components 1106 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1100.
The multimedia component 1108 includes a screen that provides an output interface between the device 1100 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 1108 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 1100 is in an operational mode, such as a photographing mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 1110 is configured to output and/or input an audio signal. For example, the audio component 1110 includes a Microphone (MIC) configured to receive external audio signals when the device 1100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 1104 or transmitted via the communication component 1116. In some embodiments, the audio component 1110 further comprises a speaker for outputting audio signals.
The I/O interface 1112 provides an interface between the processing component 1102 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1114 includes one or more sensors for providing status assessments of various aspects of the device 1100. For example, the sensor assembly 1114 may detect the on/off state of the device 1100 and the relative positioning of components, such as the display and keypad of the device 1100; it may also detect a change in position of the device 1100 or a component of the device 1100, the presence or absence of user contact with the device 1100, the orientation or acceleration/deceleration of the device 1100, and a change in temperature of the device 1100. The sensor assembly 1114 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1114 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1114 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1116 is configured to facilitate wired or wireless communication between the device 1100 and other devices. The device 1100 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR, or a combination thereof. In one exemplary embodiment, the communication component 1116 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1116 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1104 including instructions executable by the processor 1120 of the device 1100 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. An image processing method, the method comprising:
acquiring face pose information of a first face and a second face in a two-dimensional face image to be processed and two-dimensional coordinates of face feature points;
determining a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
determining a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
and performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image.
2. The method according to claim 1, wherein before the step of obtaining the two-dimensional coordinates of the face feature points and the face pose information of the first face and the second face in the two-dimensional face image to be processed, the method further comprises:
determining respective areas of at least one face in the two-dimensional face image;
determining at least one face with the largest area according to the respective areas of the at least one face to obtain the first face; or
determining respective positions of at least one face in the two-dimensional face image;
and determining at least one face nearest to the center of the two-dimensional face image according to the respective positions of the at least one face to obtain the first face.
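A minimal sketch of the two selection rules in claim 2, assuming faces are given as (x, y, w, h) boxes from some detector (the detector itself is outside the claim):

```python
def pick_first_face(face_boxes, image_size, by="area"):
    """Select the first face either by largest area or by proximity to the
    image center, mirroring the two alternatives of claim 2."""
    if by == "area":
        # Rule 1: the face with the largest box area.
        return max(face_boxes, key=lambda b: b[2] * b[3])
    # Rule 2: the face whose box center is nearest the image center.
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    def center_dist_sq(b):
        x, y, w, h = b
        return (x + w / 2.0 - cx) ** 2 + (y + h / 2.0 - cy) ** 2
    return min(face_boxes, key=center_dist_sq)
```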
3. The method according to claim 1, wherein before acquiring the face pose information and the two-dimensional coordinates of the face feature points of the first face and the second face in the two-dimensional face image to be processed, the method further comprises:
determining an edge region in the two-dimensional face image according to preset region parameters; and
determining at least one face in the edge region as the second face.
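Claim 3 leaves the preset region parameters open; one plausible reading, sketched below, treats them as a fractional margin defining an edge band around the frame (the 0.2 default is an assumption, not from the patent):

```python
def select_second_faces(face_boxes, image_size, margin=0.2):
    """Mark faces whose boxes reach into the edge band of the image as
    second faces; `margin` stands in for the preset region parameters."""
    w, h = image_size
    left, top = w * margin, h * margin
    right, bottom = w * (1.0 - margin), h * (1.0 - margin)
    def in_edge_region(b):
        x, y, bw, bh = b
        return x < left or y < top or x + bw > right or y + bh > bottom
    return [b for b in face_boxes if in_edge_region(b)]
```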
4. The method according to claim 1, wherein determining the first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points comprises:
adjusting the two-dimensional coordinates of the face feature points of the first face according to the face pose information of the first face and face pose information of a preset two-dimensional face model, such that the adjusted face pose of the first face is the same as the face pose of the preset two-dimensional face model; and
registering the adjusted face feature points of the first face with the face feature points in the preset two-dimensional face model to obtain the first perspective transformation matrix.
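Claim 4 does not name a registration algorithm. The sketch below uses OpenCV's `findHomography` as one common way to register two 2-D landmark sets, and `undo_roll` is a hypothetical stand-in for the full pose adjustment (a complete alignment would also account for yaw and pitch):

```python
import cv2
import numpy as np

def undo_roll(points, roll):
    """Hypothetical pose adjustment: rotate the landmarks about their
    centroid so the in-plane roll matches the frontal model."""
    c, s = np.cos(-roll), np.sin(-roll)
    rot = np.array([[c, -s], [s, c]])
    center = points.mean(axis=0)
    return (points - center) @ rot.T + center

def first_perspective_matrix(face_points, roll, model_points):
    """Adjust the first face's landmarks toward the preset model's pose,
    then register them against the model landmarks to estimate the matrix."""
    adjusted = undo_roll(np.asarray(face_points, dtype=np.float32), roll)
    h1, _ = cv2.findHomography(adjusted,
                               np.asarray(model_points, dtype=np.float32),
                               cv2.RANSAC)
    return h1
```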
5. The method according to claim 1, wherein determining the second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points comprises:
adjusting the two-dimensional coordinates of the face feature points of the second face according to the face pose information of the second face and face pose information of a preset two-dimensional face model, such that the adjusted face pose of the second face is the same as the face pose of the preset two-dimensional face model; and
registering the adjusted face feature points of the second face with the face feature points in the preset two-dimensional face model to obtain the second perspective transformation matrix.
6. The method according to claim 1, wherein performing perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain the corrected two-dimensional face image comprises:
determining an inverse matrix of the first perspective transformation matrix; and
performing perspective distortion correction on the second face in the first intermediate image according to the inverse matrix and the second perspective transformation matrix to obtain the corrected two-dimensional face image.
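In matrix terms, the composition in claim 6 could look like the following sketch (numpy assumed); normalizing by the bottom-right entry keeps the homography in canonical scale:

```python
import numpy as np

def local_correction_matrix(h1, h2):
    """Claim-6 composition: undo the global warp already applied to the
    second face (inverse of h1), then apply its own correction h2."""
    m = h2 @ np.linalg.inv(h1)
    return m / m[2, 2]  # normalize so the homogeneous scale is 1
```

Applied to the first intermediate image, this single composed matrix corrects the second face without re-detecting its feature points.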
7. An image processing method, the method comprising:
acquiring face pose information of a first face and two-dimensional coordinates of face feature points of the first face in a two-dimensional face image to be processed;
determining a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
performing perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
acquiring face pose information of a second face and two-dimensional coordinates of face feature points of the second face in the first intermediate image;
determining a third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points; and
performing perspective distortion correction on the second face in the first intermediate image according to the third perspective transformation matrix to obtain a corrected two-dimensional face image.
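A sketch of the claim-7 variant, in which pose and landmarks are re-acquired from the already-corrected intermediate image so a single third matrix suffices; `detect_face` and `estimate_matrix` are hypothetical callables standing in for steps the claim describes abstractly:

```python
import cv2

def correct_sequentially(image, h1, detect_face, estimate_matrix):
    """Two sequential warps: global correction by h1, then local correction
    of the second face by a third matrix estimated on the intermediate image."""
    h, w = image.shape[:2]
    intermediate = cv2.warpPerspective(image, h1, (w, h))
    # Re-detection on the intermediate image: these landmarks already include
    # the effect of h1, so no inverse composition is needed here.
    pose, landmarks, box = detect_face(intermediate)
    h3 = estimate_matrix(landmarks, pose)
    warped = cv2.warpPerspective(intermediate, h3, (w, h))
    out = intermediate.copy()
    x, y, bw, bh = box
    out[y:y + bh, x:x + bw] = warped[y:y + bh, x:x + bw]
    return out
```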
8. An image processing apparatus, the apparatus comprising:
a first acquisition module configured to acquire face pose information and two-dimensional coordinates of face feature points of each of a first face and a second face in a two-dimensional face image to be processed;
a first determining module configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
a second determining module configured to determine a second perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points;
a first correction module configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image; and
a second correction module configured to perform perspective distortion correction on the second face in the first intermediate image according to the first perspective transformation matrix and the second perspective transformation matrix to obtain a corrected two-dimensional face image.
9. An image processing apparatus, the apparatus comprising:
a second acquisition module configured to acquire face pose information of a first face and two-dimensional coordinates of face feature points of the first face in a two-dimensional face image to be processed;
a third determining module configured to determine a first perspective transformation matrix of the two-dimensional face image according to the face pose information of the first face and the two-dimensional coordinates of the face feature points;
a third correction module configured to perform perspective distortion correction on the two-dimensional face image according to the first perspective transformation matrix to obtain a first intermediate image;
a third acquisition module configured to acquire face pose information of a second face and two-dimensional coordinates of face feature points of the second face in the first intermediate image;
a fourth determining module configured to determine a third perspective transformation matrix of the second face according to the face pose information of the second face and the two-dimensional coordinates of the face feature points; and
a fourth correction module configured to perform perspective distortion correction on the second face in the first intermediate image according to the third perspective transformation matrix to obtain a corrected two-dimensional face image.
10. A terminal device, comprising a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to implement the method steps of any one of claims 1-7.
11. A computer-readable storage medium, having stored therein a computer program which, when executed by a processor, implements the method steps of any one of claims 1-7.
CN201910439657.0A 2019-05-24 2019-05-24 Image processing method and device Active CN111986097B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910439657.0A CN111986097B (en) 2019-05-24 2019-05-24 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910439657.0A CN111986097B (en) 2019-05-24 2019-05-24 Image processing method and device

Publications (2)

Publication Number Publication Date
CN111986097A CN111986097A (en) 2020-11-24
CN111986097B (en) 2024-02-09

Family

ID=73437550

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910439657.0A Active CN111986097B (en) 2019-05-24 2019-05-24 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111986097B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3804916B2 * 2001-02-09 2006-08-02 Sharp Corporation Imaging system, program used for controlling image data thereof, method for correcting distortion of captured image in imaging system, and storage medium storing procedure thereof
US9881203B2 * 2013-08-29 2018-01-30 NEC Corporation Image processing device, image processing method, and program

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016053961A * 2015-09-28 2016-04-14 National Institute for Land and Infrastructure Management, Ministry of Land, Infrastructure, Transport and Tourism Information processing device, information processing method, and program
CN108369653A * 2015-10-16 2018-08-03 Magic Leap, Inc. Eye pose identification using eye features
KR20170080116A * 2015-12-31 2017-07-10 Dong-eui University Industry-Academic Cooperation Foundation Face recognition system using depth information
CN106295530A * 2016-07-29 2017-01-04 Beijing Xiaomi Mobile Software Co., Ltd. Face recognition method and device
CN107748887A * 2017-09-30 2018-03-02 Wuyi University Planar document perspective image correction method based on explicit and implicit line segment detection
CN108171744A * 2017-12-26 2018-06-15 Nubia Technology Co., Ltd. Method for determining a disparity map in binocular bokeh, mobile terminal, and storage medium
CN108898043A * 2018-02-09 2018-11-27 Megvii Technology Co., Ltd. Image processing method, image processing apparatus, and storage medium
CN108470328A * 2018-03-28 2018-08-31 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for processing image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Correcting radial and perspective distortion by using face shape information; T.-Y. Lee et al.; 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG); pp. 1-8 *
Multi-pose face correction and recognition based on affine transformation; Li Haiyan et al.; Application Research of Computers; Vol. 31, No. 04; pp. 1215-1219, 1228 *

Also Published As

Publication number Publication date
CN111986097A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
US9674395B2 (en) Methods and apparatuses for generating photograph
KR101694643B1 (en) Method, apparatus, device, program, and recording medium for image segmentation
CN106778773B (en) Method and device for positioning target object in picture
CN108470322B (en) Method and device for processing face image and readable storage medium
CN107944367B (en) Face key point detection method and device
CN110930336B (en) Image processing method and device, electronic equipment and storage medium
CN109840939B (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and storage medium
CN109325908B (en) Image processing method and device, electronic equipment and storage medium
CN106503682B (en) Method and device for positioning key points in video data
EP2975574B1 (en) Method, apparatus and terminal for image retargeting
CN107967459B (en) Convolution processing method, convolution processing device and storage medium
EP3173978A1 (en) Method and device for characteristic extraction
CN105678296B (en) Method and device for determining character inclination angle
CN110876014B (en) Image processing method and device, electronic device and storage medium
US9665925B2 (en) Method and terminal device for retargeting images
CN108154090B (en) Face recognition method and device
CN107730443B (en) Image processing method and device and user equipment
CN107992894B (en) Image recognition method, image recognition device and computer-readable storage medium
CN107239758B (en) Method and device for positioning key points of human face
CN111986097B (en) Image processing method and device
CN111985280B (en) Image processing method and device
CN112070681B (en) Image processing method and device
CN113920083A (en) Image-based size measurement method and device, electronic equipment and storage medium
CN115760585A (en) Image correction method, image correction device, storage medium and electronic equipment
CN114418865A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Hu Yi

Inventor after: Wang Xuanran

Inventor after: Lin Zhehong

Inventor after: Du Hui

Inventor before: Hu Yi

Inventor before: Wang Xuanran

GR01 Patent grant