WO2020252910A1 - Image distortion correction method, apparatus, electronic device and readable storage medium - Google Patents
- Publication number
- WO2020252910A1 (PCT/CN2019/102870; CN2019102870W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- image
- corrected
- distortion correction
- relative distance
- Prior art date: 2019-06-17
Links
- 238000012937 correction Methods 0.000 title claims abstract description 171
- 238000000034 method Methods 0.000 title claims abstract description 76
- 238000004364 calculation method Methods 0.000 claims description 23
- 238000012549 training Methods 0.000 claims description 16
- 238000013528 artificial neural network Methods 0.000 claims description 7
- 239000011159 matrix material Substances 0.000 claims description 7
- 238000013135 deep learning Methods 0.000 claims description 4
- 238000004891 communication Methods 0.000 claims description 3
- 230000000694 effects Effects 0.000 abstract description 11
- 238000010586 diagram Methods 0.000 description 12
- 238000004590 computer program Methods 0.000 description 8
- 230000006870 function Effects 0.000 description 7
- 238000012545 processing Methods 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 3
- 238000011160 research Methods 0.000 description 3
- 230000003068 static effect Effects 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
- G06T 3/04: Context-preserving transformations, e.g. by using an importance map (under G06T 3/00, Geometric image transformations in the plane of the image)
- G06T 5/80: Geometric correction (under G06T 5/00, Image enhancement or restoration)
- G06T 2207/10004: Still image; Photographic image (indexing scheme for image analysis or enhancement; image acquisition modality)
- G06T 2207/20081: Training; Learning (indexing scheme; special algorithmic details)
- G06T 2207/20084: Artificial neural networks [ANN] (indexing scheme; special algorithmic details)
- G06T 2207/30201: Face (indexing scheme; subject of image: Human being; Person)
- All of the above codes fall under G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general.
Definitions
- This application relates to the technical field of graphics and image processing, and in particular to an image distortion correction method, device, electronic equipment, and readable storage medium.
- With electronic devices such as smartphones and tablet computers, using the front camera to take a self-portrait of multiple people is an important photographing scenario.
- The physical distance between the photographer and the lens usually cannot exceed the length of an arm, while the other people in the group photo are not constrained by holding the electronic device and can freely choose a suitable position for the shot.
- The photographer in the group photo is therefore usually the person closest to the lens, which causes the face of the photographer holding the electronic device to be distorted when multiple people take a group selfie, and the distortion is all the more obvious in such a group photo.
- As a result, the photographer's face looks abnormal compared with the other people in the photo, which affects the shooting effect.
- One of the objectives of the embodiments of the present application is to provide an image distortion correction method, apparatus, electronic device, and readable storage medium that can automatically recognize the face to be corrected and perform distortion correction on it in real time when multiple people take a selfie together, so as to optimize the shooting effect in the multi-person selfie scene.
- the present application provides an electronic device, which may include one or more storage media and one or more processors in communication with the storage media.
- One or more storage media stores machine executable instructions executable by the processor.
- When the electronic device runs, the processor executes the machine executable instructions to perform the image distortion correction method.
- The present application also provides an image distortion correction method applied to an electronic device, and the method includes:
- performing face recognition on the image to be corrected to obtain face frame information and face key points corresponding to at least two faces in the image to be corrected;
- determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and calculating the relative distance coefficient between the face to be corrected and the camera lens;
- performing distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- The step of performing face recognition on the image to be corrected to obtain face frame information and face key points corresponding to at least two faces in the image to be corrected includes:
- after a camera start instruction is detected, turning on the camera and entering the shooting preview interface;
- for each frame of the image to be corrected in the shooting preview interface, performing face recognition on that frame through a pre-trained face recognition model to obtain the face frame information and face key points corresponding to each face in that frame;
- wherein the face recognition model is obtained by training a deep-learning neural network with multiple training samples and the labeled data of each training sample, and the labeled data of each training sample includes the face frame information and face key points corresponding to each face in that training sample.
- Before the step of determining the face to be corrected and calculating the relative distance coefficient, the method further includes:
- for each face, cropping out a corresponding face image according to the face frame information corresponding to the face;
- rotating the face image to a set position using an affine matrix according to the face key points corresponding to the face.
- The method further includes:
- for each rotated face image, using a pre-trained age estimation model to recognize the face image and obtain the face age in the face image;
- judging whether the face age in the face image is greater than a set age;
- if the face age in the face image is less than the set age, correcting the size of the face frame of the face image according to the face age in the face image.
- The electronic device pre-stores the median face circumference corresponding to each face age, and the step of correcting the size of the face frame of the face image according to the face age in the face image includes:
- obtaining the first median of the face circumference corresponding to the face age in the face image and the second median of the face circumference corresponding to the set age;
- calculating a face frame correction coefficient according to the first median and the second median;
- correcting the size of the face frame of the face image according to the face frame correction coefficient.
- The step of determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face and calculating the relative distance coefficient between the face to be corrected and the camera lens includes:
- determining, according to the face frame information corresponding to each face, the face with the largest face frame area as the face to be corrected;
- calculating the relative distance coefficient between the face to be corrected and the camera lens.
- The step of calculating the relative distance coefficient between the face to be corrected and the camera lens includes:
- calculating, according to the face frame information corresponding to each face, the average area of the face frames corresponding to all recognized faces;
- calculating the sum of the squared differences between the area of the face frame corresponding to each face and the average area;
- calculating the relative distance coefficient between the face to be corrected and the camera lens according to the sum of the squared differences;
- in the specific calculation formula, d is the relative distance coefficient between the face to be corrected and the camera lens, N is the number of faces, x_i is the area of the i-th face frame, and r is the average area of the face frames corresponding to all faces.
- The step of calculating the relative distance coefficient between the face to be corrected and the camera lens includes:
- obtaining the median area and the maximum area of the face frames of the faces other than the face to be corrected;
- calculating a first ratio of the median area to the face frame area of the face to be corrected and a second ratio of the maximum area to the face frame area of the face to be corrected;
- calculating the relative distance coefficient between the face to be corrected and the camera lens according to preset weight coefficients respectively corresponding to the first ratio and the second ratio, the first ratio, and the second ratio;
- in the specific calculation formula, d is the relative distance coefficient between the face to be corrected and the camera lens, a_max is the maximum area of the face frames of the faces other than the face to be corrected, a_mid is the median area of the face frames of the faces other than the face to be corrected, K is a constant between 0 and 1, and y is the face frame area of the face to be corrected.
- The electronic device pre-stores a plurality of distortion correction parameters corresponding to preset distance coefficients, and the step of performing distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction includes:
- obtaining the preset distance coefficient range in which the relative distance coefficient is located, the preset distance coefficient range including a first endpoint value and a second endpoint value, the first endpoint value being smaller than the second endpoint value;
- calculating a first difference between the relative distance coefficient and the first endpoint value, a second difference between the second endpoint value and the first endpoint value, and a third difference between the distortion correction parameter corresponding to the second endpoint value and the distortion correction parameter corresponding to the first endpoint value;
- calculating a corresponding target distortion correction parameter according to the first endpoint value, the first difference, the second difference, and the third difference;
- performing distortion correction on the face to be corrected according to the target distortion correction parameter to obtain the target image after distortion correction;
- in the calculation formula for the target distortion correction parameter, z is the target distortion correction parameter, d is the relative distance coefficient between the face to be corrected and the camera lens, d1 is the first endpoint value, d2 is the second endpoint value, c1 is the distortion correction parameter corresponding to the first endpoint value, and c2 is the distortion correction parameter corresponding to the second endpoint value.
- The step of performing distortion correction on the face to be corrected according to the target distortion correction parameter to obtain a target image after distortion correction includes:
- establishing a face grid of the face to be corrected, and determining each constraint point in the face grid;
- calculating the constraint deformation of each constraint point in the face grid according to the target distortion correction parameter;
- adjusting the coordinates of each constraint point according to the calculated constraint deformation to obtain an adjusted face grid;
- mapping the face to be corrected onto the adjusted face grid to obtain the target image after distortion correction.
- The method further includes:
- displaying each target image after distortion correction in the shooting preview interface in real time, and when a shooting instruction is detected, taking the target image currently displayed in the shooting preview interface as the shooting image and storing it in the electronic device.
- The step of determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face includes:
- determining, according to the face frame information corresponding to each face, the face with the largest face frame area as the face to be corrected.
- the step of calculating the relative distance coefficient between the face to be corrected and the camera lens includes:
- the relative distance coefficient between the face to be corrected and the camera is calculated according to the area of the face frame of each face.
- Before the face with the largest face frame area is determined as the face to be corrected, the method further includes:
- identifying the face age corresponding to each face image in the image to be corrected;
- correcting the size of the face frame of each face image according to the face age corresponding to each face image.
- Before the step of determining the face to be corrected and calculating the relative distance coefficient, the method further includes:
- for each face, cropping out a corresponding face image according to the face frame information corresponding to the face;
- rotating the face image to a set position using an affine matrix according to the face key points corresponding to the face.
- The present application also provides an image distortion correction apparatus applied to an electronic device, and the apparatus includes:
- a recognition module configured to perform face recognition on the image to be corrected, and obtain face frame information and face key points corresponding to at least two faces in the image to be corrected;
- a calculation module configured to determine the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and calculate the relative distance coefficient between the face to be corrected and the camera lens;
- the distortion correction module is configured to perform distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- The present application also provides a readable storage medium storing machine executable instructions which, when run by a processor, perform the steps of the above-mentioned image distortion correction method.
- The embodiment of the present application performs face recognition on the image to be corrected to obtain the face frame information and face key points corresponding to at least two faces in the image to be corrected, then determines the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face and calculates the relative distance coefficient between the face to be corrected and the camera lens, and finally performs distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- FIG. 1 shows one of the schematic flowcharts of the image distortion correction method provided by the embodiment of the present application
- FIG. 2 shows a schematic diagram of a face frame corresponding to a recognized face provided by an embodiment of the present application
- FIG. 3 shows the second schematic flowchart of the image distortion correction method provided by the embodiment of the present application
- FIG. 4 shows a schematic diagram of a photographing preview interface of an electronic device before image distortion correction provided by an embodiment of the present application
- FIG. 5 shows a schematic diagram of a photographing preview interface of an electronic device after image distortion correction provided by an embodiment of the present application
- FIG. 6 shows one of the schematic block diagrams of the functional modules of the image distortion correction device included in the electronic device provided by the embodiment of the present application
- FIG. 7 shows the second schematic block diagram of the functional modules of the image distortion correction apparatus included in the electronic device provided by the embodiment of the present application.
- The inventor of the present application has found through research that, in a multi-person group-photo scene, the photographer's face is distorted and the face area enlarged because the photographer is close to the lens, while the other people in the group photo are usually farther from the lens than the photographer, which makes the enlargement and distortion of the photographer's face area even more noticeable by contrast.
- Current distortion correction methods mostly use general face-lifting or head-shrinking schemes to reduce distortion, but they cannot automatically perform distortion correction on the photographer in the group photo in real time in multi-person selfie scenes.
- In view of this, the inventor proposes the following technical solutions to solve or improve the above problems. It should be noted that the defects of the above prior-art solutions are the result of the inventor's practice and careful study; therefore, the discovery process of the above problems and the solutions proposed in the following embodiments of the present application to address them should be regarded as contributions made by the inventor to the present application in the process of invention and creation, and should not be understood as technical content already known to those skilled in the art.
- Figure 1 shows a schematic flowchart of an image distortion correction method provided by an embodiment of the present application. It should be understood that, in other embodiments, the order of some steps of the image distortion correction method of this embodiment is not limited to the order shown in Figure 1 and described below; for example, steps can be exchanged according to actual needs, and some steps can also be omitted or deleted. The detailed steps of the image distortion correction method are introduced as follows.
- Step S110: Perform face recognition on the image to be corrected to obtain face frame information and face key points corresponding to at least two human faces in the image to be corrected.
- Step S120: Determine the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and calculate the relative distance coefficient between the face to be corrected and the camera lens.
- Step S130: Perform distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- This embodiment can automatically recognize the face to be corrected in real time when multiple people take a selfie together and correct its distortion based on the calculated relative distance coefficient between the face to be corrected and the camera lens, thereby optimizing the shooting effect in the multi-person selfie scene.
- In step S110, after a camera start instruction is detected, the camera is turned on and the shooting preview interface is entered.
- In different application scenarios, the manner of detecting the camera start instruction may differ.
- For example, the camera can be turned on when a camera control triggered by the photographer on the interactive interface is detected; the camera can also be turned on when a camera voice command issued by the photographer is received; or a camera start instruction can be obtained when it is detected that the photographer's action is consistent with a preset picture-taking action.
- When the camera start instruction is detected, the camera is turned on and enters the shooting preview interface.
- In the shooting preview interface, the image of the current shooting scene acquired by the camera can be displayed in real time.
- The image to be corrected may be each frame of the image displayed in the shooting preview interface.
- For each frame of the image to be corrected, face recognition can be performed on that frame through a pre-trained face recognition model to obtain the face frame information and face key points corresponding to each face in that frame.
- the face recognition model can be obtained by training based on a deep learning neural network (for example, YOLO neural network, Fast-RCNN neural network, MTCNN neural network, etc.) using multiple training samples and label data of each training sample.
- the labeled data of each training sample may include face frame information and face key points corresponding to each face in the training sample.
- the face frame information corresponding to each face may include face ID information, coordinate information of the vertices of the face frame, width information and height information of the face frame, etc.
- The face key points may include feature points of various parts of the face and the geometric relationships between these feature points.
- FIG. 2 shows a schematic diagram of a face frame F corresponding to a recognized face.
- The face frame F covers the face area.
- The face frame information may include the width W, the height H, and the vertex coordinate Q of the face frame shown in Figure 2.
- The area of the face frame F is the product of its width W and height H.
- The width W and height H of the face frame F are adaptively adjusted during the recognition process.
- The vertex coordinate Q of the face frame F can be selected according to actual needs.
- The vertex coordinate Q shown in FIG. 2 is the vertex coordinate of the upper right corner of the face frame; the vertex coordinate of the upper left corner, the lower left corner, or the lower right corner, or any combination of the foregoing, can also be selected, and this embodiment does not impose any limitation on this.
- In this way, the face recognition model obtained through deep-learning-based neural network training can identify the face frame information and face key points corresponding to each face in the image to be corrected, which facilitates the subsequent distortion correction performed using the face frame information and face key points corresponding to each face.
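As a rough illustration of this step rather than the patent's own model, the sketch below uses the off-the-shelf MTCNN detector from the facenet-pytorch package as a stand-in face recognition model; the FaceFrame structure and the detect_faces helper are assumptions introduced here and reused by the later examples.

```python
# Hedged sketch: obtain face frames and key points per frame with a generic detector.
from dataclasses import dataclass
import numpy as np
from facenet_pytorch import MTCNN  # any detector returning boxes + landmarks would do

@dataclass
class FaceFrame:
    face_id: int
    x: float               # vertex coordinate Q (here: top-left corner)
    y: float
    width: float
    height: float
    keypoints: np.ndarray  # five landmark points returned by the detector

    @property
    def area(self) -> float:
        # The face frame area is the product of width and height, as in the description.
        return self.width * self.height

detector = MTCNN(keep_all=True)

def detect_faces(frame_rgb: np.ndarray) -> list[FaceFrame]:
    boxes, _, landmarks = detector.detect(frame_rgb, landmarks=True)
    faces: list[FaceFrame] = []
    if boxes is None:
        return faces
    for i, (box, pts) in enumerate(zip(boxes, landmarks)):
        x1, y1, x2, y2 = box
        faces.append(FaceFrame(i, x1, y1, x2 - x1, y2 - y1, pts))
    return faces
```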
- The inventor also found in the course of research that, in the video stream of the above shooting preview interface, faces may be skewed due to deflection of the subject's head, camera shake, offset of the relative position of the lens, and so on, which affects the subsequent determination of the face image area and therefore the distortion correction effect of the image.
- For each face, a corresponding face image can therefore be cropped out according to the face frame information corresponding to that face.
- For example, the cropping area of the face can be determined according to the coordinate information of the vertices of the face frame corresponding to the face and the width and height information of the face frame, and the corresponding face image can then be cropped according to the determined cropping area.
- Next, according to the face key points corresponding to the face, the face image is rotated to a set position using an affine matrix.
- For example, the differences between the face key template points in a pre-stored standard face template and the face key points corresponding to the face can be used to calculate the rotation parameters of the affine matrix used to rotate the face image, and the face key points of the face image are rotated to the corresponding positions determined by these rotation parameters to obtain the straightened face image.
- In this way, the skew that faces may exhibit due to deflection of the subject's head, camera shake, or offset of the relative position of the lens can be avoided.
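A minimal sketch of the crop-and-straighten step using OpenCV, assuming five-point face key points and a hypothetical 112x112 standard face template; the template coordinates below are illustrative values, not taken from the patent.

```python
import cv2
import numpy as np

# Assumed standard face template key points for a 112x112 aligned crop (illustrative only):
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
TEMPLATE_112 = np.float32([
    [38.3, 51.7], [73.5, 51.5],
    [56.0, 71.7],
    [41.5, 92.4], [70.7, 92.2],
])

def crop_face(image: np.ndarray, face) -> np.ndarray:
    # Crop using the face frame's vertex coordinate, width, and height.
    x, y = int(face.x), int(face.y)
    w, h = int(face.width), int(face.height)
    return image[max(y, 0):y + h, max(x, 0):x + w]

def align_face(image: np.ndarray, keypoints: np.ndarray, size: int = 112) -> np.ndarray:
    # Estimate a partial affine transform (rotation + scale + translation) that maps the
    # detected key points onto the template, then warp the image to the set position.
    matrix, _ = cv2.estimateAffinePartial2D(keypoints.astype(np.float32), TEMPLATE_112)
    return cv2.warpAffine(image, matrix, (size, size))
```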
- The inventor also found in the course of research that, when there are children in a group photo, the video stream in the above shooting preview interface may show the children with some distortion relative to the adults because of the camera, so that the captured image of a child differs from the child's real appearance, which affects the shooting effect.
- Therefore, for each rotated face image, a pre-trained age estimation model is used to recognize the face image and obtain the face age in the face image, and it is judged whether the face age in the face image is greater than a set age; if the face age in the face image is less than the set age, the size of the face frame of the face image is corrected according to the face age in the face image.
- In a possible implementation, the electronic device may pre-store the median face circumference corresponding to each face age, and the size of the face frame of the face image may then be corrected according to the face age in the face image as follows: first, the first median of the face circumference corresponding to the face age in the face image and the second median of the face circumference corresponding to the set age are obtained; then, the face frame correction coefficient is calculated from the first median and the second median; finally, the size of the face frame of the face image is corrected according to the face frame correction coefficient.
- Taking a face age of 10 years in the face image as an example, with a set age of 18 years, the first median of the face circumference corresponding to 10 years is b1 and the second median of the face circumference corresponding to 18 years is b2, so the calculated face frame correction coefficient is b1/b2.
- The size of the face frame of the face image can then be corrected according to the face frame correction coefficient, for example by using the coefficient b1/b2 to correct the side lengths and the area of the face frame of the face image respectively.
- the median face circumference corresponding to the age of each face can be obtained by collecting a large number of samples of different ages.
- the specific data can be fine-tuned according to actual needs, which is not specifically limited in this embodiment.
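A sketch of the age-based face frame correction described above. The median face-circumference table, the nearest-age lookup, and the scaling direction are assumptions for illustration; the description only specifies that the pre-stored medians b1 and b2 yield the correction coefficient b1/b2.

```python
SET_AGE = 18
# Hypothetical pre-stored medians of face circumference (cm) per face age.
FACE_CIRCUMFERENCE_MEDIAN = {5: 51.0, 10: 53.5, 15: 55.5, 18: 56.5}

def correction_coefficient(face_age: int, set_age: int = SET_AGE) -> float:
    nearest_age = min(FACE_CIRCUMFERENCE_MEDIAN, key=lambda a: abs(a - face_age))
    b1 = FACE_CIRCUMFERENCE_MEDIAN[nearest_age]  # first median (face age in the image)
    b2 = FACE_CIRCUMFERENCE_MEDIAN[set_age]      # second median (set age)
    return b1 / b2                               # face frame correction coefficient b1/b2

def correct_face_frame(face, face_age: int) -> None:
    # One reading of the description: scale the side lengths of a child's face frame by
    # the coefficient so later area comparisons are fair; the exact direction of the
    # scaling depends on how the medians are defined and is an assumption here.
    if face_age < SET_AGE:
        k = correction_coefficient(face_age)
        face.width *= k
        face.height *= k
```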
- In step S120, as a possible implementation, the face with the largest face frame area may be determined as the face to be corrected according to the face frame information corresponding to each face, and the relative distance coefficient between the face to be corrected and the camera lens may then be calculated.
- For example, the average area of the face frames corresponding to all recognized faces can be calculated first, then the sum of the squared differences between the area of the face frame corresponding to each face and the average area can be calculated, and finally the relative distance coefficient between the face to be corrected and the camera lens can be calculated from that sum of squared differences.
- In the specific calculation formula of the relative distance coefficient, d is the relative distance coefficient between the face to be corrected and the camera lens, N is the number of faces, x_i is the area of the i-th face frame, and r is the average area of the face frames corresponding to all faces.
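Since the formula itself is not reproduced in this text, the sketch below is only one plausible reading of a coefficient built from the sum of squared differences between each face frame area and the average area; the normalisation by the average area is an added assumption.

```python
import math

def relative_distance_coefficient_v1(areas: list[float]) -> float:
    n = len(areas)                               # N: number of faces
    r = sum(areas) / n                           # average face frame area
    spread = sum((x - r) ** 2 for x in areas)    # sum of squared differences
    return math.sqrt(spread / n) / r             # assumption: normalise so d is scale-free
```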
- As another possible implementation, the median area and the maximum area of the face frames of the faces other than the face to be corrected can be obtained, the first ratio of the median area to the face frame area of the face to be corrected and the second ratio of the maximum area to the face frame area of the face to be corrected can then be calculated, and finally the relative distance coefficient between the face to be corrected and the camera lens is calculated from the preset weight coefficients of the first ratio and the second ratio together with the first ratio and the second ratio.
- In the specific calculation formula of the relative distance coefficient, d is the relative distance coefficient between the face to be corrected and the camera lens, a_max is the maximum area of the face frames of the faces other than the face to be corrected, a_mid is the median area of the face frames of the faces other than the face to be corrected, K is a constant between 0 and 1, and y is the face frame area of the face to be corrected.
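The exact combination is likewise not reproduced here; the sketch below assumes a simple weighted blend of the two ratios with the weight constant K, which is an assumption rather than the patent's own formula.

```python
import statistics

def relative_distance_coefficient_v2(target_area: float, other_areas: list[float], k: float = 0.5) -> float:
    a_mid = statistics.median(other_areas)  # median area of the other face frames
    a_max = max(other_areas)                # maximum area of the other face frames
    ratio_mid = a_mid / target_area         # first ratio
    ratio_max = a_max / target_area         # second ratio
    return k * ratio_mid + (1.0 - k) * ratio_max
```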
- In step S130, the electronic device may pre-store a plurality of distortion correction parameters corresponding to preset distance coefficients, so that when the relative distance coefficient obtained in step S120 equals one of the pre-stored preset distance coefficients, the distortion correction parameter corresponding to that relative distance coefficient can be obtained directly.
- In a real scene, however, the relative distance coefficient obtained in step S120 rarely matches a preset distance coefficient exactly, it is impractical to collect distortion correction parameters for every possible distance coefficient, and simply using the distortion correction parameter of the nearest preset distance coefficient would introduce a correction error.
- the preset distance coefficient range may include a first endpoint value and a second endpoint value, and the first endpoint value is smaller than the second endpoint value.
- the corresponding target distortion correction parameter is calculated according to the first endpoint value, the first difference value, the second difference value, and the third difference value.
- In the calculation formula for the target distortion correction parameter, z is the target distortion correction parameter, d is the relative distance coefficient between the face to be corrected and the camera lens, d1 is the first endpoint value, d2 is the second endpoint value, c1 is the distortion correction parameter corresponding to the first endpoint value, and c2 is the distortion correction parameter corresponding to the second endpoint value.
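The three named differences (d - d1, d2 - d1, and c2 - c1) suggest a linear interpolation between the endpoint correction parameters. Since the formula itself is not reproduced in this text, the sketch below should be read as one plausible interpretation rather than the patent's own equation.

```python
def target_correction_parameter(d: float, d1: float, d2: float, c1: float, c2: float) -> float:
    first_diff = d - d1     # relative distance coefficient minus the first endpoint value
    second_diff = d2 - d1   # width of the preset distance coefficient range
    third_diff = c2 - c1    # spread of the endpoint correction parameters
    return c1 + first_diff * third_diff / second_diff
```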
- After that, distortion correction may be performed on the face to be corrected according to the target distortion correction parameter to obtain the target image after distortion correction.
- For example, the face grid of the face to be corrected can first be established and each constraint point in the face grid determined; the constraint deformation of each constraint point in the face grid is then calculated according to the target distortion correction parameter; the coordinates of each constraint point are adjusted according to the calculated constraint deformation to obtain the adjusted face grid; and the face to be corrected is finally mapped onto the adjusted face grid to obtain the target image after distortion correction.
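A compact sketch of this grid-and-remap idea follows. The deformation rule (pulling each grid point toward the face centre in proportion to the target correction parameter z, assumed to lie in [0, 1)) and the grid density are assumptions made for illustration, since the constraint-deformation formula is not reproduced in this text.

```python
import cv2
import numpy as np

def correct_face_region(image: np.ndarray, box: tuple[int, int, int, int], z: float, n: int = 8) -> np.ndarray:
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0

    # Constraint points: an (n+1) x (n+1) grid over the face frame.
    gy, gx = np.meshgrid(np.linspace(y, y + h, n + 1), np.linspace(x, x + w, n + 1), indexing="ij")

    # Constraint deformation: each output grid point samples from a location pushed away
    # from the centre, which shrinks the face in the corrected image.
    src_x = cx + (gx - cx) / (1.0 - z)
    src_y = cy + (gy - cy) / (1.0 - z)

    # Densify the sparse grid into a per-pixel map and remap only the face region.
    map_x = cv2.resize(src_x.astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    map_y = cv2.resize(src_y.astype(np.float32), (w, h), interpolation=cv2.INTER_LINEAR)
    out = image.copy()
    out[y:y + h, x:x + w] = cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
    return out
```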
- the image distortion correction method provided in this embodiment may further include the following steps:
- Step S140: Display each target image after distortion correction in the shooting preview interface in real time, and when a shooting instruction is detected, take the target image currently displayed in the shooting preview interface as the shooting image and store it in the electronic device.
- In this embodiment, each target image after distortion correction is displayed in the shooting preview interface in real time, so that what the photographer sees before taking a picture is already the distortion-corrected target image rather than the distorted one. When a picture needs to be taken, the shooting button can be pressed; when the shooting instruction is detected, the target image currently displayed in the shooting preview interface is taken as the shooting image and stored in the electronic device.
- face recognition can be performed on the image to be corrected first to obtain face frame information and face key points corresponding to at least two faces in the image to be corrected. Then, according to the face frame information corresponding to each recognized face, the face to be corrected in the image to be corrected is determined, and the relative distance coefficient between the face to be corrected and the camera lens is calculated. Then, distortion correction is performed on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- Since the photographer holding the electronic device is usually the person closest to the lens, the face frame area corresponding to this user is the largest, and the face image of this user is the one that needs to be corrected. Therefore, in this embodiment, the face with the largest face frame area can be determined as the face to be corrected according to the face frame information corresponding to each face, and the relative distance coefficient between the face to be corrected and the camera lens can then be calculated.
- When the face to be corrected is corrected, it needs to be corrected toward the size of the other faces in the image to be corrected. Therefore, in this embodiment, the relative distance coefficient between the face to be corrected and the camera can be calculated according to the area of the face frame of each face; for the specific calculation method of the distance coefficient, please refer to the preceding text, which is not repeated here.
- In addition, the face age corresponding to each face image in the image to be corrected can be identified first, the face frame size of each face image corrected according to the face age corresponding to each face image, and the determination of the face to be corrected or the calculation of the relative distance coefficient performed afterwards. In this way, the influence of a child's face size on the subsequent determination of the face to be corrected and the calculation of the relative distance can be reduced.
- For the specific correction method, please refer to the preceding text, which is not repeated here.
- Furthermore, a skewed face image may make the calculation of the face frame size inaccurate, thereby affecting the subsequent calculations.
- Therefore, for each face, the corresponding face image can be cropped out according to the face frame information corresponding to the face, and the face image rotated to a set position using an affine matrix according to the face key points corresponding to the face. In this way, the results of the subsequent calculation steps based on the face frame size can be made more accurate.
- In a multi-person photo scene in daily life, such as taking a group photo while traveling, the photographer usually holds the electronic device while the rest of the people choose a suitable position for the photo.
- the electronic device turns on the camera B4 and displays the shooting preview interface L.
- the shooting preview interface L displays the real-time video stream of the group photo in the travel scene obtained by the camera B4 .
- The photographer can also select the shooting mode button B2 to choose the front camera or the rear camera; but whether the front or the rear camera is used, the photographer is the person closest to the camera B4, which causes the face of the photographer (the face to be corrected), Face0, to be distorted in the group selfie. It can be seen from Figure 4 that this distortion is quite obvious in the group photo.
- After the image distortion correction method provided in the embodiment of the present application is adopted, as shown in FIG. 5, the distortion of the photographer's face is obviously greatly improved after correction. The shooting preview interface L therefore displays the distortion-corrected video stream, thereby optimizing the shooting effect in the multi-person selfie scene.
- When a shooting instruction is detected, the image currently displayed in the shooting preview interface L can be taken as the shooting image and stored, and the just-stored shooting image can also be viewed through the shot-image preview frame B3.
- FIG. 6 shows a schematic diagram of an electronic device 100 provided by an embodiment of the present application.
- the electronic device 100 may include a storage medium 110, a processor 120, and an image distortion correction device 130.
- The processor 120 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling program execution of the image distortion correction method provided by the foregoing method embodiments.
- The storage medium 110 may be a ROM or another type of static storage device that can store static information and instructions, a RAM or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
- the storage medium 110 may exist independently and is connected to the processor 120 through a communication bus.
- the storage medium 110 may also be integrated with the processor.
- The storage medium 110 is configured to store application program codes for executing the solution of the present application, such as the image distortion correction device 130 shown in FIG. 6, and their execution is controlled by the processor 120.
- the processor 120 is configured to execute application program codes stored in the storage medium 110, such as the image distortion correction device 130, to execute the image distortion correction method of the foregoing method embodiment.
- the present application may divide the image distortion correction device 130 into functional modules according to the foregoing method embodiments.
- each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
- the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in this application is illustrative and only a logical function division, and there may be other division methods in actual implementation.
- the image distortion correction device 130 shown in FIG. 6 is only a schematic diagram of the device.
- the image distortion correction device 130 shown in FIG. 6 may include an identification module 131, a calculation module 132, and a distortion correction module 133. The functions of each functional module of the image distortion correction device 130 are respectively described in detail below.
- the recognition module 131 is configured to perform face recognition on the image to be corrected to obtain face frame information and face key points corresponding to at least two human faces in the image to be corrected. It can be understood that the identification module 131 may be configured to execute the above step S110, and for the detailed implementation of the identification module 131, please refer to the content related to the above step S110.
- the calculation module 132 is configured to determine the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and to calculate the relative distance coefficient between the face to be corrected and the camera lens. It can be understood that the calculation module 132 may be configured to execute the foregoing step S120, and for the detailed implementation of the calculation module 132, refer to the foregoing content related to the step S120.
- the distortion correction module 133 is configured to perform distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction. It can be understood that the distortion correction module 133 may be configured to execute the above-mentioned step S130, and for the detailed implementation of the distortion correction module 133, please refer to the above-mentioned content related to the step S130.
- the image distortion correction device 130 may further include:
- the display storage module 134 is configured to display each target image after distortion correction in the shooting preview interface in real time, and when a shooting instruction is detected, use the target image currently displayed in the shooting preview interface as the shooting image And stored in the electronic device 100. It can be understood that the display storage module 134 can be configured to execute the above step S140, and for the detailed implementation of the display storage module 134, please refer to the content related to the above step S140.
- The image distortion correction device 130 provided by the embodiment of the present application is another implementation form of the image distortion correction method shown in FIG. 1 or FIG. 3, and the image distortion correction device 130 can be configured to execute the image distortion correction method provided by the embodiment shown in FIG. 1 or FIG. 3; therefore, for the technical effects that can be obtained, reference can be made to the above method embodiments, which are not repeated here.
- An embodiment of the present application further provides a computer-readable storage medium having a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above-mentioned image distortion correction method are performed.
- the storage medium can be a general storage medium, such as a portable disk, a hard disk, etc., and when the computer program on the storage medium is executed, the above-mentioned image distortion correction method can be executed.
- These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce an apparatus that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
- The embodiment of this application performs face recognition on the image to be corrected to obtain the face frame information and face key points corresponding to at least two faces in the image to be corrected, then determines the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face and calculates the relative distance coefficient between the face to be corrected and the camera lens, and finally performs distortion correction on the face to be corrected according to the relative distance coefficient to obtain the target image after distortion correction. In this way, the face to be corrected can be automatically recognized and corrected in real time when multiple people take a selfie together, thereby optimizing the shooting effect in a multi-person selfie scene.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims (18)
- An image distortion correction method, characterized in that it is applied to an electronic device, and the method comprises: performing face recognition on an image to be corrected to obtain face frame information and face key points corresponding to at least two faces in the image to be corrected; determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and calculating a relative distance coefficient between the face to be corrected and the camera lens; and performing distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- The image distortion correction method according to claim 1, wherein the step of performing face recognition on the image to be corrected to obtain face frame information and face key points corresponding to at least two faces in the image to be corrected comprises: after a camera start instruction is detected, turning on the camera and entering a shooting preview interface; and for each frame of the image to be corrected in the shooting preview interface, performing face recognition on that frame through a pre-trained face recognition model to obtain the face frame information and face key points corresponding to each face in that frame; wherein the face recognition model is obtained by training a deep-learning neural network with multiple training samples and the labeled data of each training sample, and the labeled data of each training sample includes the face frame information and face key points corresponding to each face in that training sample.
- The image distortion correction method according to claim 1 or 2, wherein before the step of determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face and calculating the relative distance coefficient between the face to be corrected and the camera lens, the method further comprises: for each face, cropping out a corresponding face image according to the face frame information corresponding to the face; and rotating the face image to a set position using an affine matrix according to the face key points corresponding to the face.
- The image distortion correction method according to claim 3, wherein the method further comprises: for each rotated face image, using a pre-trained age estimation model to recognize the face image and obtain the face age in the face image; judging whether the face age in the face image is greater than a set age; and if the face age in the face image is less than the set age, correcting the size of the face frame of the face image according to the face age in the face image.
- The image distortion correction method according to claim 4, wherein the electronic device pre-stores the median face circumference corresponding to each face age, and the step of correcting the size of the face frame of the face image according to the face age in the face image comprises: obtaining a first median of the face circumference corresponding to the face age in the face image and a second median of the face circumference corresponding to the set age; calculating a face frame correction coefficient according to the first median and the second median; and correcting the size of the face frame of the face image according to the face frame correction coefficient.
- The image distortion correction method according to any one of claims 1-5, wherein the step of determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face and calculating the relative distance coefficient between the face to be corrected and the camera lens comprises: determining, according to the face frame information corresponding to each face, the face with the largest face frame area as the face to be corrected; and calculating the relative distance coefficient between the face to be corrected and the camera lens.
- The image distortion correction method according to claim 6, wherein the step of calculating the relative distance coefficient between the face to be corrected and the camera lens comprises: calculating, according to the face frame information corresponding to each face, the average area of the face frames corresponding to all recognized faces; calculating the sum of the squared differences between the area of the face frame corresponding to each face and the average area; and calculating the relative distance coefficient between the face to be corrected and the camera lens according to the sum of the squared differences; in the specific calculation formula, d is the relative distance coefficient between the face to be corrected and the camera lens, N is the number of faces, x_i is the area of the i-th face frame, and r is the average area of the face frames corresponding to all faces.
- The image distortion correction method according to claim 6, wherein the step of calculating the relative distance coefficient between the face to be corrected and the camera lens comprises: obtaining the median area and the maximum area of the face frames of the faces other than the face to be corrected; calculating a first ratio of the median area to the face frame area of the face to be corrected and a second ratio of the maximum area to the face frame area of the face to be corrected; and calculating the relative distance coefficient between the face to be corrected and the camera lens according to preset weight coefficients respectively corresponding to the first ratio and the second ratio, the first ratio, and the second ratio; in the specific calculation formula, d is the relative distance coefficient between the face to be corrected and the camera lens, a_max is the maximum area of the face frames of the faces other than the face to be corrected, a_mid is the median area of the face frames of the faces other than the face to be corrected, K is a constant between 0 and 1, and y is the face frame area of the face to be corrected.
- The image distortion correction method according to any one of claims 1-8, wherein the electronic device pre-stores a plurality of distortion correction parameters corresponding to preset distance coefficients, and the step of performing distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction comprises: obtaining a preset distance coefficient range in which the relative distance coefficient is located, the preset distance coefficient range including a first endpoint value and a second endpoint value, the first endpoint value being smaller than the second endpoint value; calculating a first difference between the relative distance coefficient and the first endpoint value, a second difference between the second endpoint value and the first endpoint value, and a third difference between the distortion correction parameter corresponding to the second endpoint value and the distortion correction parameter corresponding to the first endpoint value; calculating a corresponding target distortion correction parameter according to the first endpoint value, the first difference, the second difference, and the third difference; and performing distortion correction on the face to be corrected according to the target distortion correction parameter to obtain the target image after distortion correction; in the calculation formula for the target distortion correction parameter, z is the target distortion correction parameter, d is the relative distance coefficient between the face to be corrected and the camera lens, d1 is the first endpoint value, d2 is the second endpoint value, c1 is the distortion correction parameter corresponding to the first endpoint value, and c2 is the distortion correction parameter corresponding to the second endpoint value.
- 根据权利要求9所述的图像畸变修正方法,其特征在于,所述根据所述目标畸变修正参数对所述待修正人脸进行畸变修正,得到畸变修正后的目标图像的步骤,包括:9. The image distortion correction method according to claim 9, wherein the step of performing distortion correction on the face to be corrected according to the target distortion correction parameter to obtain a target image after the distortion correction comprises:建立所述待修正人脸的人脸网格,并确定所述人脸网格中的各个约束点;Establishing a face grid of the face to be corrected, and determining each constraint point in the face grid;根据所述目标畸变修正参数计算所述人脸网格中的各个约束点的约束形变量;Calculating the constraint deformation variables of each constraint point in the face grid according to the target distortion correction parameter;根据计算的各个约束点的约束形变量对各个约束点的坐标进行调整,得到调整后的人脸网格;Adjust the coordinates of each constraint point according to the calculated constraint deformation variables of each constraint point to obtain an adjusted face grid;将所述待修正人脸映射到所述调整后的人脸网格,得到畸变修正后的目标图像。The face to be corrected is mapped to the adjusted face grid to obtain a target image after distortion correction.
- 根据权利要求2-10所述的图像畸变修正方法,其特征在于,所述方法还包括:11. The image distortion correction method according to claims 2-10, wherein the method further comprises:将畸变修正后的每张目标图像实时显示在所述拍摄预览界面中,并在检测到拍摄指令时,将当前显示在所述拍摄预览界面中的目标图像作为拍摄图像并存储在所述电子设备中。Display each target image after distortion correction in the shooting preview interface in real time, and when a shooting instruction is detected, use the target image currently displayed in the shooting preview interface as a shooting image and store it in the electronic device in.
- 根据权利要求1项所述的图像畸变修正方法,其特征在于,所述根据识别到的每个人脸对应的人脸框信息确定所述待修正图像中的待修正人脸的步骤,包括:The image distortion correction method according to claim 1, wherein the step of determining the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face comprises:根据所述每个人脸对应的人脸框信息,确定人脸框面积最大的人脸为待修正人脸。According to the face frame information corresponding to each face, it is determined that the face with the largest face frame area is the face to be corrected.
- 根据权利要求12所述的图像畸变矫正方法,其特征在于,所述计算所述待修正人脸与摄像镜头的相对距离系数的步骤,包括:The image distortion correction method according to claim 12, wherein the step of calculating the relative distance coefficient between the face to be corrected and the camera lens comprises:根据各人脸的人脸框的面积计算所述待修正人脸与摄像头的相对距离系数。The relative distance coefficient between the face to be corrected and the camera is calculated according to the area of the face frame of each face.
- 根据权利要求12或13所述的图像畸变修正方法,其特征在于,所述确定人脸框面积最大的人脸为待修正人脸之前,所述方法还包括:The image distortion correction method according to claim 12 or 13, wherein before the determining that the face with the largest face frame area is the face to be corrected, the method further comprises:识别所述待修正图像中各人脸图像对应的人脸年龄;Identifying the age of the face corresponding to each face image in the image to be corrected;根据各人脸图像对应的人脸年龄对各人脸图像的人脸框大小进行矫正。The size of the face frame of each face image is corrected according to the face age corresponding to each face image.
- 根据权利要求12-14任意一项所述的图像畸变修正方法,其特征在于,所述根据识别到的每个人脸对应的人脸框信息确定所述待修正图像中的待修正人脸,并计算所述待修正人脸与摄像镜头的相对距离系数的步骤之前,所述方法还包括:The image distortion correction method according to any one of claims 12-14, wherein the face frame information corresponding to each recognized face is determined to determine the face to be corrected in the image to be corrected, and Before the step of calculating the relative distance coefficient between the face to be corrected and the camera lens, the method further includes:针对每个人脸,根据该人脸对应的人脸框信息裁剪出对应的人脸图像;For each face, crop out the corresponding face image according to the face frame information corresponding to the face;根据该人脸对应的人脸关键点,利用仿射矩阵将该人脸图像旋转到设定位置。According to the key points of the face corresponding to the face, the face image is rotated to the set position using the affine matrix.
- 一种图像畸变修正装置,其特征在于,应用于电子设备,所述装置包括:An image distortion correction device, characterized in that it is applied to electronic equipment, and the device includes:识别模块,配置成对待修正图像进行人脸识别,得到所述待修正图像中至少两个人脸对应的人脸框信息和人脸关键点;A recognition module configured to perform face recognition on the image to be corrected, and obtain face frame information and face key points corresponding to at least two faces in the image to be corrected;计算模块,配置成根据识别到的每个人脸对应的人脸框信息确定所述待修正图像中的待修正人脸,并计算所述待修正人脸与摄像镜头的相对距离系数;A calculation module, configured to determine the face to be corrected in the image to be corrected according to the face frame information corresponding to each recognized face, and calculate the relative distance coefficient between the face to be corrected and the camera lens;畸变修正模块,配置成根据所述相对距离系数对所述待修正人脸进行畸变修正,得到畸变修正后的目标图像。The distortion correction module is configured to perform distortion correction on the face to be corrected according to the relative distance coefficient to obtain a target image after distortion correction.
- 17. An electronic device, comprising one or more storage media and one or more processors in communication with the storage media, the one or more storage media storing machine-executable instructions executable by the processors; when the electronic device runs, the processors execute the machine-executable instructions to implement the image distortion correction method according to any one of claims 1-15.
- 18. A readable storage medium storing machine-executable instructions which, when executed, implement the image distortion correction method according to any one of claims 1-15.
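The preview-and-capture flow recited in claim 11 can be illustrated with a short sketch. This is a hedged, editorial illustration in Python/OpenCV, not the applicant's implementation: `correct_distortion` is a hypothetical placeholder for the face-aware correction of claims 1-10, and mapping the space bar to the "shooting instruction" is an assumption made only for the example.

```python
# Minimal sketch of the claim-11 flow, assuming OpenCV and a placeholder
# correct_distortion() standing in for the correction of claims 1-10.
import cv2

def correct_distortion(frame):
    # Placeholder: the patent's face-aware distortion correction would go here.
    return frame

def preview_and_capture(save_path="capture.jpg"):
    cap = cv2.VideoCapture(0)  # default camera stands in for the "camera lens"
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            corrected = correct_distortion(frame)
            cv2.imshow("preview", corrected)        # show the corrected frame in real time
            key = cv2.waitKey(1) & 0xFF
            if key == ord(" "):                     # SPACE assumed as the "shooting instruction"
                cv2.imwrite(save_path, corrected)   # store the currently displayed target image
            elif key == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```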
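Claims 12 and 13 recite selecting the face with the largest face frame as the face to be corrected and deriving a relative distance coefficient from the face-frame areas. The claims do not fix a formula for the coefficient, so the area-to-mean-area ratio below is an assumed stand-in chosen only to show the shape of the computation.

```python
# Sketch of face selection (claim 12) and an assumed relative distance
# coefficient derived from face-frame areas (claim 13).
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) face frame

def box_area(box: Box) -> int:
    _, _, w, h = box
    return w * h

def pick_face_to_correct(face_boxes: List[Box]) -> int:
    # Claim 12: the face with the largest face-frame area is the face to be corrected.
    return max(range(len(face_boxes)), key=lambda i: box_area(face_boxes[i]))

def relative_distance_coefficient(face_boxes: List[Box], target_idx: int) -> float:
    # Claim 13 only says the coefficient is computed from the areas of all face
    # frames; the ratio to the mean area is an assumption, where a value > 1
    # suggests the target face is closer to the camera lens than average.
    areas = [box_area(b) for b in face_boxes]
    return areas[target_idx] / (sum(areas) / len(areas))
```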
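Claim 14 adjusts each face frame according to an estimated face age before the largest-frame selection, so that naturally smaller faces (e.g. children) are not ranked as farther away than they are. The claim specifies neither the age estimator nor the adjustment rule; the piecewise factors and centre-preserving rescale below are purely illustrative assumptions.

```python
# Assumed age-aware frame adjustment sketch for claim 14.
from typing import Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def age_scale_factor(age: int) -> float:
    # Hypothetical factors; a trained model could replace this lookup.
    if age < 12:
        return 1.3
    if age < 18:
        return 1.1
    return 1.0

def correct_frame_for_age(box: Box, age: int) -> Box:
    x, y, w, h = box
    s = age_scale_factor(age)
    # Rescale the frame about its centre so the corrected area reflects the factor.
    new_w, new_h = int(w * s), int(h * s)
    new_x = x - (new_w - w) // 2
    new_y = y - (new_h - h) // 2
    return (new_x, new_y, new_w, new_h)
```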
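Claim 15 crops each face by its frame and rotates the crop to a set position with an affine matrix built from the face key points. The sketch below assumes OpenCV for the warp and assumes that the "set position" means levelling the line between the two eye key points; both are illustrative choices, not details recited in the claim.

```python
# Sketch of the claim-15 pre-processing: crop by face frame, then align with an
# affine rotation derived from two eye key points (assumed interpretation).
import math
import cv2
import numpy as np

def crop_and_align(image: np.ndarray, box, left_eye, right_eye) -> np.ndarray:
    x, y, w, h = box
    face = image[y:y + h, x:x + w].copy()      # crop by the face frame

    # Eye coordinates relative to the crop.
    lx, ly = left_eye[0] - x, left_eye[1] - y
    rx, ry = right_eye[0] - x, right_eye[1] - y

    # Angle that makes the inter-ocular line horizontal (the assumed set position).
    angle = math.degrees(math.atan2(ry - ly, rx - lx))
    center = (w / 2.0, h / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)   # 2x3 affine matrix
    return cv2.warpAffine(face, M, (w, h))
```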
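Claim 16 defines the apparatus as three cooperating modules (recognition, calculation, distortion correction). The structural sketch below mirrors those interfaces; the method bodies are stubs or assumed placeholders, since the claim defines responsibilities rather than algorithms.

```python
# Structural sketch of the claim-16 apparatus: three modules with the recited roles.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]

@dataclass
class FaceInfo:
    box: Box                             # face frame information
    keypoints: List[Tuple[int, int]]     # face key points

class RecognitionModule:
    def recognize(self, image) -> List[FaceInfo]:
        """Detect at least two faces; return frame info and key points for each."""
        raise NotImplementedError        # backed by a face detector on a real device

class CalculationModule:
    def select_and_measure(self, faces: List[FaceInfo]) -> Tuple[int, float]:
        """Pick the face to be corrected and its relative distance coefficient."""
        areas = [f.box[2] * f.box[3] for f in faces]
        idx = max(range(len(areas)), key=areas.__getitem__)
        return idx, areas[idx] / (sum(areas) / len(areas))   # assumed coefficient
 
class DistortionCorrectionModule:
    def correct(self, image, face: FaceInfo, coefficient: float):
        """Warp the face region according to the coefficient; identity stub here."""
        return image
```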
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910521767.1A CN110232667B (en) | 2019-06-17 | 2019-06-17 | Image distortion correction method, device, electronic equipment and readable storage medium |
CN201910521767.1 | 2019-06-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020252910A1 (en) | 2020-12-24 |
Family
ID=67860030
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/102870 WO2020252910A1 (en) | 2019-06-17 | 2019-08-27 | Image distortion correction method, apparatus, electronic device and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110232667B (en) |
WO (1) | WO2020252910A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114120391A (en) * | 2021-10-19 | 2022-03-01 | 哈尔滨理工大学 | Multi-pose face recognition system and method thereof |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110751602B (en) * | 2019-09-20 | 2022-09-30 | 北京迈格威科技有限公司 | Conformal distortion correction method and device based on face detection |
CN111028161B (en) * | 2019-11-22 | 2024-04-05 | 维沃移动通信有限公司 | Image correction method and electronic equipment |
CN111008947B (en) * | 2019-12-09 | 2024-05-07 | Oppo广东移动通信有限公司 | Image processing method and device, terminal equipment and storage medium |
CN111080545B (en) * | 2019-12-09 | 2024-03-12 | Oppo广东移动通信有限公司 | Face distortion correction method, device, terminal equipment and storage medium |
CN111105367B (en) * | 2019-12-09 | 2023-07-18 | Oppo广东移动通信有限公司 | Face distortion correction method and device, electronic equipment and storage medium |
CN111158563A (en) * | 2019-12-11 | 2020-05-15 | 青岛海信移动通信技术股份有限公司 | Electronic terminal and picture correction method |
CN111325691B (en) * | 2020-02-20 | 2023-11-10 | Oppo广东移动通信有限公司 | Image correction method, apparatus, electronic device, and computer-readable storage medium |
CN111337142A (en) * | 2020-04-07 | 2020-06-26 | 北京迈格威科技有限公司 | Body temperature correction method and device and electronic equipment |
CN113850726A (en) * | 2020-06-28 | 2021-12-28 | 华为技术有限公司 | Image transformation method and device |
WO2022001630A1 (en) * | 2020-06-29 | 2022-01-06 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and system for capturing at least one smart media |
CN112070021B (en) * | 2020-09-09 | 2024-08-13 | 深圳数联天下智能科技有限公司 | Ranging method, ranging system, equipment and storage medium based on face detection |
CN112927183A (en) * | 2021-01-13 | 2021-06-08 | 上海商米科技集团股份有限公司 | Lens module detection method and system of specific image recognition equipment |
CN113971827A (en) * | 2021-10-27 | 2022-01-25 | 中国银行股份有限公司 | Face recognition method and device |
CN115937010B (en) * | 2022-08-17 | 2023-10-27 | 北京字跳网络技术有限公司 | Image processing method, device, equipment and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103426149A (en) * | 2013-07-24 | 2013-12-04 | 玉振明 | Large-viewing-angle image distortion correction and processing method |
US20160283780A1 (en) * | 2015-03-25 | 2016-09-29 | Alibaba Group Holding Limited | Positioning feature points of human face edge |
CN107358207A (en) * | 2017-07-14 | 2017-11-17 | 重庆大学 | A kind of method for correcting facial image |
CN108470322A (en) * | 2018-03-09 | 2018-08-31 | 北京小米移动软件有限公司 | Method, apparatus and readable storage medium for processing facial image |
CN109543495A (en) * | 2017-09-22 | 2019-03-29 | 中国移动通信有限公司研究院 | A kind of face key point mask method, device, electronic equipment and storage medium |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103208133B (en) * | 2013-04-02 | 2015-08-19 | 浙江大学 | Method for adjusting the fatness or thinness of a face in an image |
JP6389888B2 (en) * | 2013-08-04 | 2018-09-12 | アイズマッチ エルティーディー.EyesMatch Ltd. | Virtualization device, system and method in mirror |
CN105574006A (en) * | 2014-10-10 | 2016-05-11 | 阿里巴巴集团控股有限公司 | Method and device for establishing photographing template database and providing photographing recommendation information |
KR102290301B1 (en) * | 2015-05-06 | 2021-08-17 | 엘지전자 주식회사 | Mobile terminal and method of controlling the same |
CN105046657B (en) * | 2015-06-23 | 2018-02-09 | 浙江大学 | A kind of image stretch distortion self-adapting correction method |
CN105550671A (en) * | 2016-01-28 | 2016-05-04 | 北京麦芯科技有限公司 | Face recognition method and device |
CN105554403B (en) * | 2016-02-29 | 2018-12-04 | 广东欧珀移动通信有限公司 | Control method, control device and electronic device |
CN106131409B (en) * | 2016-07-12 | 2019-01-25 | 京东方科技集团股份有限公司 | Image processing method and device |
CN108021852A (en) * | 2016-11-04 | 2018-05-11 | 株式会社理光 | A kind of demographic method, passenger number statistical system and electronic equipment |
CN107124543B (en) * | 2017-02-20 | 2020-05-29 | 维沃移动通信有限公司 | Shooting method and mobile terminal |
CN107506693B (en) * | 2017-07-24 | 2019-09-20 | 深圳市智美达科技股份有限公司 | Distorted face image correction method, device, computer equipment and storage medium |
CN108357269B (en) * | 2018-04-12 | 2023-05-02 | 电子科技大学中山学院 | Intelligent pen rack |
CN109447072A (en) * | 2018-11-08 | 2019-03-08 | 北京金山安全软件有限公司 | Thumbnail clipping method and device, electronic equipment and readable storage medium |
- 2019
- 2019-06-17 CN CN201910521767.1A patent/CN110232667B/en active Active
- 2019-08-27 WO PCT/CN2019/102870 patent/WO2020252910A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110232667B (en) | 2021-06-04 |
CN110232667A (en) | 2019-09-13 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
WO2020252910A1 (en) | Image distortion correction method, apparatus, electronic device and readable storage medium | |
WO2019085786A1 (en) | Image processing method and device, electronic device and computer-readable storage medium | |
US10674069B2 (en) | Method and apparatus for blurring preview picture and storage medium | |
CN101216883B (en) | A photography method and device | |
US7356254B2 (en) | Image processing method, apparatus, and program | |
CN105046657B (en) | A kind of image stretch distortion self-adapting correction method | |
US8350955B2 (en) | Digital photographing apparatus, method of controlling the digital photographing apparatus, and recording medium having recorded thereon a program for executing the method | |
US8577099B2 (en) | Method, apparatus, and program for detecting facial characteristic points | |
US20180359415A1 (en) | Panoramic video processing method and device and non-transitory computer-readable medium | |
US11283987B2 (en) | Focus region display method and apparatus, and storage medium | |
US20050243350A1 (en) | Image processing method, apparatus, and program | |
CN108810406B (en) | Portrait light effect processing method, device, terminal and computer readable storage medium | |
JP2005303991A (en) | Imaging device, imaging method, and imaging program | |
JP4515208B2 (en) | Image processing method, apparatus, and program | |
WO2021008205A1 (en) | Image processing | |
JP2006139369A (en) | Image processing method and apparatus, and program | |
CN110781712B (en) | Human head space positioning method based on human face detection and recognition | |
WO2022042669A1 (en) | Image processing method, apparatus, device, and storage medium | |
CN116152121B (en) | Curved surface screen generating method and correcting method based on distortion parameters | |
JP7110899B2 (en) | Image processing device, image processing method, and image processing program | |
JP2009251634A (en) | Image processor, image processing method, and program | |
WO2022127491A1 (en) | Image processing method and device, and storage medium and terminal | |
US11763509B2 (en) | Frame calibration for robust video synthesis | |
CN114049250B (en) | Method, device and medium for correcting face pose of certificate photo | |
WO2024146165A1 (en) | Human eye positioning method and apparatus, and computing device and storage medium |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19933452; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19933452; Country of ref document: EP; Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.06.2022) |