CN113486714A - Image processing method and electronic equipment

Info

Publication number
CN113486714A
Authority
CN
China
Prior art keywords: image, area, region, electronic device, face
Legal status
Granted
Application number
CN202110621419.9A
Other languages
Chinese (zh)
Other versions
CN113486714B (en)
Inventor
丁大钧
乔晓磊
朱聪超
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Application filed by Honor Device Co Ltd
Priority claimed from application CN202110621419.9A
Publication of CN113486714A
Application granted
Publication of CN113486714B
Legal status: Active

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/25 Fusion techniques
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04M TELEPHONIC COMMUNICATION
                • H04M 1/00 Substation equipment, e.g. for use by subscribers
                    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
                        • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
                            • H04M 1/72403 User interfaces with means for local support of applications that increase the functionality
                                • H04M 1/7243 User interfaces with interactive means for internal management of messages
                                    • H04M 1/72439 User interfaces for image or video messaging
                            • H04M 1/72448 User interfaces with means for adapting the functionality of the device according to specific conditions
                                • H04M 1/72454 User interfaces adapting according to context-related or environment-related conditions
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/80 Camera processing pipelines; Components thereof

Abstract

An image processing method and an electronic device relate to the technical field of electronic devices and can improve image quality. The specific scheme includes the following steps: the electronic device detects a first operation. In response to the first operation of a user, the electronic device may capture a first image through the camera. Thereafter, the electronic device may determine that the first image includes a face region image, where the face region image includes an image of a first region and an image of a second region. Then, the electronic device can determine that the face region image has an image of a third region, where the third region characterizes the region of the face deformed by the refraction of the glasses. Then, the electronic device may process the image of the third region to obtain a second image. The second image includes the processed image of the third region, and the difference between the color of the processed image of the third region and the color of the image of the first region is smaller than the difference between the color of the unprocessed image of the third region and the color of the image of the first region.

Description

Image processing method and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of electronic equipment, in particular to an image processing method and electronic equipment.
Background
With the development of electronic technology, electronic devices (such as mobile phones, tablet computers or smart watches) have more and more functions. For example, a camera may be installed in most electronic devices, so that the electronic devices have a function of capturing images. Taking a mobile phone as an example, the mobile phone may acquire an image of the target object through the camera. For example, the mobile phone can acquire a face image through a camera.
However, as the number of people with myopia increases, more and more people wear myopia glasses to reduce the inconvenience caused by myopia. The lens of myopia glasses is a concave lens, which diverges light. Therefore, when the face wears myopia glasses, in the face image captured by the mobile phone, a region outside the face (for example, the background behind the face) may appear within the lens area of the glasses. As a result, the face image acquired by the mobile phone may be incomplete or distorted, which affects image quality.
Disclosure of Invention
The application provides an image processing method and electronic equipment, which can guarantee the authenticity of a face image and improve the image quality.
In a first aspect, the present application provides a method for processing an image, where the method may be applied to an electronic device, and the electronic device may include a camera.
In the method, an electronic device detects a first operation. In response to a first operation of a user, the electronic device may capture a first image through the camera. Then, the electronic device may determine that the first image includes a face region image, where the face region image includes an image of a first region and an image of a second region, the first region is used to represent a region where the face skin is located, and the second region is used to represent a region where the glasses are located. Then, the electronic device can determine that the face region image has an image of a third region, and the third region is used for representing a deformation region of the face due to the refraction of the glasses.
It is understood that the image of the third region can exist in the first image only when the image of the first region and the image of the second region exist in the face region image. Therefore, the electronic device determines whether the face region image has an image of the third region only after it has determined that the first image includes a face region image. In this way, the electronic device avoids checking whether the image of the third region exists when the first image does not include a face region image, which reduces wasted resources.
Then, the electronic device may process the image of the third region to obtain a second image. The second image includes the processed image of the third region, and the difference between the color of the processed image of the third region and the color of the image of the first region is smaller than the difference between the color of the unprocessed image of the third region and the color of the image of the first region.
That is, after the electronic device determines that the face region image has the image of the third region, the electronic device may process the image of the third region to reduce a difference between a color of the image of the third region and a color of the image of the first region.
In summary, in the technical solution of the present application, the electronic device can reduce the difference between the color of the image of the third region and the color of the image of the first region. Therefore, the authenticity of the face image can be preserved, the image quality is improved, and the image is more pleasant to view.
With reference to the first aspect, in one possible design manner, the electronic device may determine that the image of the third region lies within the image of a fourth region, where the fourth region is the overlapping region of the first region and the second region.
It can be understood that, when the fourth region exists, the region where the glasses are located intersects the region where the face skin is located. Thus, the image of the region where the face skin is located may contain an image of the third region.
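For illustration, the overlap test in this design can be sketched as follows. This is a minimal sketch rather than the patented implementation: it assumes the first region (face skin) and the second region (glasses) are each represented by an axis-aligned bounding box in image coordinates, which the claims do not mandate, and all coordinate values are hypothetical.

```python
def intersect(box_a, box_b):
    """Return the overlap of two (x0, y0, x1, y1) boxes, or None if disjoint."""
    x0, y0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x1, y1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    if x0 >= x1 or y0 >= y1:
        return None  # no overlap: no fourth region, hence no third region
    return (x0, y0, x1, y1)

skin_box = (120, 80, 420, 460)      # hypothetical first region (face skin)
glasses_box = (150, 160, 390, 250)  # hypothetical second region (glasses)
fourth_region = intersect(skin_box, glasses_box)  # the fourth region, if any
```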
With reference to the first aspect, in another possible design manner, the method further includes: the electronic device may acquire first position information indicating a position of the image of the second region relative to the first image and second position information indicating a position of the face contour image in the face region image relative to the first image. The electronic device may determine whether the image of the second region is within a range of the face contour image in the face region image according to the first position information and the second position information. And the electronic equipment detects whether the image of the third area exists in the face area image according to whether the image of the second area is in the range of the face outline image.
In this way, the electronic device can determine the relationship between the second area and the face contour according to the first position information and the second position information. Therefore, the electronic equipment can detect whether the image of the third area exists in the face area image according to the relation between the second area and the face outline, and further process the image of the third area, so that the image quality is improved.
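Under the same hypothetical bounding-box representation (the patent only requires positions relative to the first image, not any particular data structure), the containment check between the second region and the face contour might look like this:

```python
def within_face_contour(glasses_box, contour_box):
    """True if the glasses image (first position information) lies entirely
    within the face contour image (second position information)."""
    return (contour_box[0] <= glasses_box[0] and contour_box[1] <= glasses_box[1]
            and glasses_box[2] <= contour_box[2] and glasses_box[3] <= contour_box[3])
```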
With reference to the first aspect, in another possible design manner, if the image of the second region is within the range of the face contour image, the electronic device may determine an image of a fifth region according to the first position information, where the fifth region is the part of the fourth region other than the eye region and the region covered by the glasses frame. The electronic device acquires color information of each pixel point in the image of the fifth region, where the color information includes an RGB value or an HSV value. The electronic device can detect whether the face region image has an image of the third region according to the color information of each pixel point in the image of the fifth region.
It can be understood that when the image of the second region is within the range of the face contour image, the refraction of the glasses can deform the part of the first region inside the face contour but cannot deform the face contour itself. Therefore, the electronic device can directly detect whether the face region image has an image of the third region according to the color information of the pixel points in the image of the fifth region, and then process the image of the third region, thereby improving image quality.
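A minimal sketch of deriving the fifth region, assuming each region is held as a boolean mask over the first image (the patent leaves the representation open, and all extents below are hypothetical):

```python
import numpy as np

H, W = 640, 480                            # hypothetical image size
fourth_mask = np.zeros((H, W), dtype=bool)
fourth_mask[160:250, 150:390] = True       # hypothetical fourth region
eye_mask = np.zeros((H, W), dtype=bool)
eye_mask[180:220, 170:240] = True          # hypothetical eye region
frame_mask = np.zeros((H, W), dtype=bool)  # hypothetical glasses-frame pixels

# Fifth region: the fourth region with the eye and frame pixels removed.
fifth_mask = fourth_mask & ~eye_mask & ~frame_mask
```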
With reference to the first aspect, in another possible design manner, if the image of the second region is outside the range of the face contour image, the electronic device determines an actual face contour image according to the second position information, and the face region image is an image within the range of the actual face contour image.
It should be noted that if the image of the second region is outside the range of the face contour image, the refraction of the glasses may cause the deformation of the face contour. In this way, the electronic device can determine the actual face contour image by the second position information, i.e., modify the deformed face contour to determine an image within the range of the actual face contour image.
Then, the electronic device may determine an image of a fifth region according to the actual face contour image and the first position information, where the fifth region is the part of the fourth region other than the eye region and the region covered by the glasses frame. Next, the electronic device may obtain color information of each pixel point in the image of the fifth region. Then, the electronic device may detect whether the face region image has an image of the third region according to the color information of each pixel point in the image of the fifth region.
Therefore, the electronic equipment can detect whether the image of the face area has the image of the third area according to the color information of the pixel points in the image of the fifth area, and then process the image of the third area, so that the image quality is improved.
With reference to the first aspect, in another possible design manner, the color information includes RGB values. The electronic device may calculate a first variance according to the RGB value of each pixel point in the image of the fifth region, where the first variance is a variance of the RGB values of the pixel points in the image of the fifth region. If the first variance is larger than a first preset variance threshold, the electronic equipment determines that an image of a third area exists in the face area image. If the first variance is smaller than a first preset variance threshold, the electronic device determines that the image of the third area does not exist in the face area image.
It will be appreciated that the image of the third region shows content that does not belong to the first region; that is, its color generally differs from the color of the skin region. Therefore, when the dispersion of the RGB values of the pixel points in the image of the fifth region is large, the color of the image of the fifth region is not uniform, which indicates that content other than the face (a distorted region) exists in the image of the fifth region; that is, the image of the third region exists in the face region image. When the dispersion of the RGB values of the pixel points in the image of the fifth region is small, the color of the image of the fifth region is uniform and no such content exists; that is, the image of the third region does not exist in the face region image.
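A sketch of this RGB-variance test. The patent does not say whether the first variance is computed per channel or jointly; summing per-channel variances is one plausible reading, and the threshold value below is a hypothetical placeholder.

```python
import numpy as np

def has_third_region_rgb(image, fifth_mask, first_preset_threshold=900.0):
    """Detect a third region from the dispersion of RGB values in the fifth
    region. `image` is an (H, W, 3) RGB array; the threshold is hypothetical."""
    pixels = image[fifth_mask].astype(np.float64)  # (N, 3) RGB values
    first_variance = pixels.var(axis=0).sum()      # per-channel variances, summed
    return first_variance > first_preset_threshold
```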
With reference to the first aspect, in another possible design manner, the color information includes HSV values. The electronic device may calculate, according to the RGB value of each pixel point in the image of the fifth region, an HSV value of each pixel point in the image of the fifth region. And the electronic equipment calculates a second variance according to the HSV value of each pixel point in the image of the fifth area, wherein the second variance is the variance of the HSV value of the pixel point in the image of the fifth area. And if the second variance is larger than a second preset variance threshold, the electronic equipment determines that the image of the third area exists in the face area image. If the second variance is smaller than a second preset variance threshold, the electronic device determines that the image of the third area does not exist in the face area image.
It will be appreciated that the image of the third region shows content that does not belong to the first region; that is, its color generally differs from the color of the skin region. Therefore, when the dispersion of the HSV values of the pixel points in the image of the fifth region is large, the color of the image of the fifth region is not uniform, which indicates that content other than the face exists in the image of the fifth region; that is, the image of the third region exists in the face region image. When the dispersion of the HSV values of the pixel points in the image of the fifth region is small, the color of the image of the fifth region is uniform; that is, the image of the third region does not exist in the face region image.
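The HSV variant can be sketched in the same way; again the combined-variance measure and the threshold are assumptions, not values taken from the patent.

```python
import colorsys
import numpy as np

def has_third_region_hsv(image, fifth_mask, second_preset_threshold=0.05):
    """HSV variant: convert each fifth-region pixel from RGB to HSV, then
    threshold the dispersion of the HSV values (threshold is hypothetical)."""
    rgb = image[fifth_mask].astype(np.float64) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb])
    second_variance = hsv.var(axis=0).sum()
    return second_variance > second_preset_threshold
```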
With reference to the first aspect, in another possible design manner, the electronic device may acquire third location information, where the third location information is used to indicate a location of an image of the third area relative to the first image. And the electronic equipment carries out image filling on the image of the third area according to the third position information to obtain a second image.
It is to be understood that after the electronic device acquires the third position information, it can image-fill the image of the third region based on the position of the image of the third region relative to the first image. Therefore, the authenticity of the face image can be preserved, the image quality is improved, and the image is more pleasant to view.
With reference to the first aspect, in another possible design manner, the electronic device determines whether each pixel point in the image of the fifth region is a pixel point in the image of the first region according to the color information of each pixel point in the image of the fifth region and preset skin color information; wherein the preset skin color information is used to indicate a color of the image of the first region. If the pixel point in the image of the fifth area is not the pixel point of the image of the first area, the electronic equipment determines that the pixel point is the pixel point in the image of the third area, and obtains the position of the pixel point relative to the first image. The electronic equipment acquires the position of each pixel point in the image of the third area relative to the first image, and the third position information consists of the position of each pixel point in the image of the third area relative to the first image.
That is, the electronic device may determine whether each pixel point in the image of the fifth region is a pixel point of the face skin image. Then, the electronic device may obtain the position information (i.e., the third position information) of those pixel points in the image of the fifth region that do not belong to the face skin image (i.e., the pixel points of the image of the third region). In this way, the image of the third region can be image-filled based on its position relative to the first image. Therefore, the authenticity of the face image can be preserved, the image quality is improved, and the image is more pleasant to view.
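A sketch of collecting the third position information. The per-channel tolerance is a hypothetical stand-in; the patent specifies comparison against preset skin color information but not the exact metric.

```python
import numpy as np

def third_position_information(image, fifth_mask, preset_skin_rgb, tol=30):
    """Return the (row, col) position of every fifth-region pixel whose color
    deviates from the preset skin color by more than `tol` in any channel."""
    positions = []
    for y, x in zip(*np.nonzero(fifth_mask)):
        if np.any(np.abs(image[y, x].astype(int) - np.asarray(preset_skin_rgb)) > tol):
            positions.append((y, x))  # a pixel of the image of the third region
    return positions
```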
With reference to the first aspect, in another possible design manner, the electronic device may obtain an RGB value of each pixel point in the image of the third area. And the electronic equipment calculates a target average value, wherein the target average value is the average value of the RGB values of all the pixel points in the image of the first area. And the electronic equipment adjusts the RGB value of each pixel point in the image of the third area into a target average value according to the third position information to obtain a second image.
It can be understood that adjusting the RGB value of each pixel point in the image of the third region to the target average value reduces the difference between the color of the image of the third region and the color of the image of the first region. Therefore, the authenticity of the face image can be preserved, the image quality is improved, and the image is more pleasant to view.
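Putting the last two designs together, the fill step might be sketched as below; `first_mask`, a boolean mask selecting the face skin pixels, is an assumed input.

```python
import numpy as np

def fill_third_region(image, first_mask, third_positions):
    """Produce the second image: set every third-region pixel to the target
    average, i.e., the mean RGB value over the first region (face skin)."""
    second_image = image.copy()
    target_average = image[first_mask].mean(axis=0).astype(image.dtype)
    for y, x in third_positions:
        second_image[y, x] = target_average
    return second_image
```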
With reference to the first aspect, in another possible design manner, the electronic device determines that the first image includes an image of the first region and an image of the second region; that is, the electronic device may determine that both a glasses image and a face image are present in the first image. The electronic device may then acquire the first position information and fourth position information, where the fourth position information indicates the position of the eye region image relative to the first image. The electronic device determines whether the image of the fourth region exists in the first image according to the first position information and the fourth position information.
It is understood that, having acquired the first position information and the fourth position information, the electronic device can determine whether the eye region overlaps the region where the glasses are located. Since the eye region is within the first region, an overlap between the eye region and the region where the glasses are located implies an overlapping region between the first region and the second region. In this way, the electronic device may determine whether the image of the third region is present in the image of the fourth region, and then process the image of the third region, which preserves the authenticity of the face image and improves image quality.
With reference to the first aspect, in another possible design manner, a degree of difference between a color of the processed image of the third area and a color of the image of the first area is smaller than a preset difference threshold.
With reference to the first aspect, in another possible design manner, the method further includes: the electronic device may determine a first edge region image and a second edge region image, where the first edge region image is an image of the edge region of the processed third-region image in the second image, and the second edge region image is a region image adjacent to the processed third-region image. The electronic device performs image fusion on the first edge region image and the second edge region image to obtain a third image.
After image fusion, the degree of difference between the color information of the first edge region image and the color information of the second edge region image is smaller than the corresponding degree of difference before image fusion.

It can be understood that when the post-fusion difference is smaller than the pre-fusion difference, the color information of the first edge region image is more similar to the color information of the second edge region image. Therefore, the seam between the fused third-region image and the second edge region image is smoother, which improves image quality.
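The claims do not name a fusion algorithm. The sketch below uses simple local averaging along the boundary of the filled region as a stand-in that reduces the color difference across the seam in the way this design describes.

```python
import numpy as np

def fuse_edges(second_image, third_mask, radius=3):
    """Smooth the seam between the processed third region and the adjacent
    skin by replacing each boundary pixel with its neighbourhood mean."""
    out = second_image.astype(np.float64)
    h, w = third_mask.shape
    for y, x in zip(*np.nonzero(third_mask)):
        # A boundary pixel has at least one 4-neighbour outside the region.
        if any(not (0 <= ny < h and 0 <= nx < w and third_mask[ny, nx])
               for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))):
            patch = out[max(y - radius, 0):y + radius + 1,
                        max(x - radius, 0):x + radius + 1]
            out[y, x] = patch.reshape(-1, patch.shape[-1]).mean(axis=0)
    return out.astype(second_image.dtype)
```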
In a second aspect, the present application provides an electronic device comprising: a memory, a display screen, and one or more processors, the memory, display screen, and the processors coupled; the memory is for storing computer program code, the computer program code comprising computer instructions; the processor is configured to detect a first operation when the computer instructions are executed by the one or more processors. The processor is further configured to acquire a first image through the camera in response to a first operation of the user. The processor is further configured to determine that the first image includes a face region image, where the face region image includes an image of a first region and an image of a second region, the first region is used to represent a region where a face skin is located, and the second region is used to represent a region where glasses are located. The processor is further configured to determine that the face region image has an image of a third region, where the third region is used to characterize a deformation region of the face due to refraction of the glasses. The processor is further configured to process the image of the third area to obtain a second image, where the second image includes the processed image of the third area, and a difference between a color of the processed image of the third area and a color of the image of the first area is smaller than a difference between the color of the image of the third area and the color of the image of the first area.
With reference to the second aspect, in a possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to determine that the image of the third region lies within the image of the fourth region, where the fourth region is the overlapping region of the first region and the second region.
In combination with the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to obtain first position information and second position information, where the first position information is used to indicate a position of an image of the second region relative to the first image, and the second position information is used to indicate a position of the face contour image in the face region image relative to the first image. The processor is further configured to determine whether the image of the second region is within a range of the face contour image in the face region image according to the first position information and the second position information. The processor is further configured to detect whether the image of the third area exists in the face area image according to whether the image of the second area is within the range of the face contour image.
With reference to the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to determine an image of a fifth region according to the first position information if the image of the second region is within the range of the face contour image, where the fifth region is the part of the fourth region other than the eye region and the region covered by the glasses frame. The processor is further configured to obtain color information of each pixel point in the image of the fifth region, where the color information includes an RGB value or an HSV value. The processor is further configured to detect whether the face region image has an image of the third region according to the color information of each pixel point in the image of the fifth region.
With reference to the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to determine an actual face contour image according to the second position information if the image of the second region is outside the range of the face contour image, where the face region image is an image within the range of the actual face contour image. The processor is further configured to determine an image of a fifth region according to the actual face contour image and the first position information, where the fifth region is the part of the fourth region other than the eye region and the region covered by the glasses frame. The processor is further configured to obtain color information of each pixel point in the image of the fifth region. The processor is further configured to detect whether the face region image has an image of the third region according to the color information of each pixel point in the image of the fifth region.
With reference to the second aspect, in another possible design manner, the color information includes RGB values, and when the computer instructions are executed by the one or more processors, the processors are further configured to calculate a first variance according to the RGB values of each pixel point in the image of the fifth region, where the first variance is a variance of the RGB values of the pixel points in the image of the fifth region. The processor is further configured to determine that a distorted region image exists in the face region image if the first variance is greater than a first preset variance threshold. The processor is further configured to determine that no distorted region image exists in the face region image if the first variance is smaller than a first preset variance threshold.
With reference to the second aspect, in another possible design manner, the color information includes HSV values, and when the computer instructions are executed by the one or more processors, the processors are further configured to calculate, according to the RGB values of each pixel point in the image of the fifth region, an HSV value of each pixel point in the image of the fifth region. The processor is further configured to calculate a second variance according to the HSV value of each pixel in the image of the fifth area, where the second variance is a variance of the HSV values of the pixels in the image of the fifth area. The processor is further configured to determine that a distorted region image exists in the face region image if the second variance is greater than a second preset variance threshold. The processor is further configured to determine that no distorted region image exists in the face region image if the second variance is smaller than a second preset variance threshold.
With reference to the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to obtain third position information indicating the position of the image of the third region relative to the first image. The processor is further configured to perform image filling on the image of the third region according to the third position information to obtain the second image.
With reference to the second aspect, in another possible design manner, when the computer instructions are executed by the one or more processors, the processors are further configured to determine whether each pixel point in the image of the fifth region is a pixel point of the image of the first region according to the color information of each pixel point in the image of the fifth region and preset skin color information, where the preset skin color information indicates the color of the image of the first region. The processor is further configured to, if a pixel point in the image of the fifth region is not a pixel point of the image of the first region, determine that the pixel point is a pixel point of the image of the third region and obtain the position of the pixel point relative to the first image. The processor is further configured to obtain the position of each pixel point in the image of the third region relative to the first image; the third position information consists of these positions.
With reference to the second aspect, in another possible design manner, when the computer instructions are executed by the one or more processors, the processors are further configured to obtain the RGB value of each pixel point in the image of the third region. The processor is further configured to calculate a target average value, where the target average value is the average of the RGB values of all pixel points in the image of the first region. The processor is further configured to adjust the RGB value of each pixel point in the image of the third region to the target average value according to the third position information, so as to obtain the second image.
With reference to the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to determine that the first image includes an image of the first region and an image of the second region. The processor is further configured to obtain the first position information and fourth position information, where the fourth position information indicates the position of the eye region image relative to the first image. The processor is further configured to determine whether an image of the fourth region exists in the first image according to the first position information and the fourth position information.
With reference to the second aspect, in another possible design manner, a degree of difference between the color of the processed image of the third area and the color of the image of the first area is smaller than a preset difference threshold.
With reference to the second aspect, in another possible design, when the computer instructions are executed by the one or more processors, the processors are further configured to determine a first edge region image and a second edge region image, where the first edge region image is an image of the edge region of the processed third-region image (the processed distorted region image) in the second image, and the second edge region image is a region image adjacent to the processed third-region image. The processor is further configured to perform image fusion on the first edge region image and the second edge region image to obtain a third image.
After image fusion, the degree of difference between the color information of the first edge region image and the color information of the second edge region image is smaller than the corresponding degree of difference before image fusion.
In a third aspect, the present application provides an electronic device comprising: a memory, a display screen, and one or more processors, the memory, display screen, and the processors coupled; the memory is for storing computer program code, the computer program code comprising computer instructions; the computer instructions, when executed by the one or more processors described above, cause the electronic device to perform the method as described in the first aspect and any possible design thereof.
In a fourth aspect, the present application provides a chip system, which is applied to an electronic device. The system-on-chip includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected by a line. The interface circuit is configured to receive a signal from a memory of the electronic device and to transmit the signal to the processor, the signal including computer instructions stored in the memory. When executed by a processor, the computer instructions cause an electronic device to perform the method according to the first aspect and any of its possible designs.
In a fifth aspect, the present application provides a computer-readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method according to the first aspect and any one of its possible designs.
In a sixth aspect, the present application provides a computer program product for causing a computer to perform the method according to the first aspect and any one of its possible designs when the computer program product runs on the computer.
It should be understood that, for the beneficial effects achievable by the electronic device of the second aspect and any of its possible designs, the electronic device of the third aspect, the chip system of the fourth aspect, the computer-readable storage medium of the fifth aspect, and the computer program product of the sixth aspect, reference may be made to the beneficial effects of the first aspect and any of its possible designs; details are not repeated here.
Drawings
Fig. 1 is a schematic diagram illustrating an example of a distorted region image according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an example of an image provided by an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an example of a display interface provided in an embodiment of the present application;
fig. 5 is a flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 6A is a schematic view illustrating recognition of a face image according to an embodiment of the present application;
FIG. 6B is a schematic diagram of an example of an image coordinate system according to an embodiment of the present disclosure;
FIG. 6C is a schematic diagram of an example of another image coordinate system provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a positional relationship between a glasses image and a face image according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of an example of a face contour image according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a face region image within a scope of an image of glasses according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of an image of a fifth area according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an example of another image provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of an example of another image provided by an embodiment of the present application;
FIG. 13 is a flowchart of another image processing method provided in the embodiments of the present application;
fig. 14 is a schematic diagram of an edge area image according to an embodiment of the present disclosure;
FIG. 15 is a schematic diagram of an example of another image coordinate system provided in an embodiment of the present application;
FIG. 16 is a schematic diagram of another edge region image provided in the embodiments of the present application;
FIG. 17 is a schematic diagram of an example of another display interface provided by an embodiment of the present application;
fig. 18 is a schematic structural component diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The "/" character in this application generally indicates that the former and latter associated objects are in an "or" relationship. For example, A/B may be understood as A or B.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present embodiment, "a plurality" means two or more unless otherwise specified.
Furthermore, the terms "including" and "having," and any variations thereof, as referred to in the description of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to the listed steps or modules but may alternatively include other steps or modules not listed or inherent to such process, method, article, or apparatus.
In addition, in the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "e.g.," is intended to present concepts in a concrete fashion.
To facilitate understanding of the technical solutions of the present application, before describing the image processing method in the embodiments of the present application in detail, terms mentioned in the embodiments of the present application are described.
1. RAW formatted image
An image in the RAW format records the raw information of the camera sensor, together with some metadata generated when the image was captured (ISO setting, shutter speed, aperture value, white balance, etc.); such an image has not been processed by the ISP module. ISO is an abbreviation of International Organization for Standardization.
2. An Image Signal Processing (ISP) module.
After the camera captures the RAW image (i.e., the image in RAW format), the electronic device can transmit the RAW image to the ISP module. The RAW format is an unprocessed or uncompressed format. The ISP module may then analyze the original image to check for density gaps between adjacent pixels in the image. Then, the ISP module can use a preset adjustment algorithm in the ISP module to appropriately process the original image so as to improve the quality of the image acquired by the camera.
3. HSV color model
The HSV color model is a model created from the intuitive nature of colors. The parameters of a color in the HSV color model are hue (H), saturation (S), and value (V).
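For reference, the standard conversion from an RGB triple (each component scaled to [0, 1]) to HSV, which is the same conversion the detection step described later relies on:

$$V = \max(R, G, B), \qquad S = \begin{cases} 0, & V = 0 \\ \dfrac{V - \min(R, G, B)}{V}, & \text{otherwise} \end{cases}$$

and, writing $\Delta = V - \min(R, G, B)$,

$$H = \begin{cases} 0^\circ, & \Delta = 0 \\ \left(60^\circ \cdot \dfrac{G - B}{\Delta}\right) \bmod 360^\circ, & V = R \\ 60^\circ \cdot \left(\dfrac{B - R}{\Delta} + 2\right), & V = G \\ 60^\circ \cdot \left(\dfrac{R - G}{\Delta} + 4\right), & V = B \end{cases}$$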
At present, a camera can be installed in most electronic devices (such as mobile phones, tablet computers, or smart watches), so that the electronic devices have the function of shooting images. Taking a mobile phone as an example, the mobile phone may acquire an image of a target object through the camera; for example, it can acquire a face image. However, as the number of people with myopia increases, more and more people wear myopia glasses to reduce the inconvenience caused by myopia. The lens of myopia glasses is a concave lens, which diverges light. Illustratively, as shown in (a) of fig. 1, the light rays a, b, and c are all parallel to the principal axis of the concave lens 101, and point o is the optical center of the concave lens 101. Due to the refraction effect of the concave lens 101, light rays a and c diverge toward the periphery of the concave lens 101 after passing through it. Since light ray b passes through the optical center, it remains parallel to the principal axis of the concave lens 101 after passing through the lens. Therefore, when the face wears glasses, in the face image collected by the mobile phone, the region seen through the glasses may be distorted by the refraction effect of the lenses, which affects image quality. Illustratively, as shown in (b) of fig. 1, the image 102 includes a distorted region image 103. The color of the distorted region image 103 is not consistent with the color of the face region image 104, so the face region image 104 displays poorly.
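As background (this optics note is not part of the claims): under the Cartesian sign convention, a thin lens satisfies

$$\frac{1}{v} - \frac{1}{u} = \frac{1}{f},$$

with focal length $f < 0$ for a concave lens, so a real object always produces a virtual, upright, diminished image. This is why the area seen through a myopia lens appears shrunken and can expose background from outside the face contour.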
In some technical solutions, the electronic device may eliminate the glasses image in the face image through a deep learning algorithm. However, in this scheme, the distorted area image existing in the face image is not corrected, and the image quality cannot be improved.
Therefore, the embodiment of the application provides an image processing method. In the method, the electronic equipment can respond to the photographing operation of a user, and a first image is acquired through a camera, wherein the first image comprises a face region image, a distorted region image exists in the face region image, and the distorted region image is an image of a region refracted by glasses worn on a face. Then, the electronic device can adjust the color of the distorted region image, so that the color of the distorted region image after adjustment is the same as the color of the human face skin in the human face region image. It is understood that, when the color of the distorted region image is the same as the color of the face region image, the difference between the distorted region image and the face image can be reduced. Therefore, the authenticity of the face image can be guaranteed, the image quality is improved, and the ornamental value of the image is improved.
Illustratively, in conjunction with (b) of fig. 1, after the distorted region image 103 is image-filled, an image 201 as shown in fig. 2 may be displayed, where the color of the image-filled region 202 in the image 201 is the same as (or similar to) the color of the face region image 203.
It should be noted that, in the embodiments of the present application, the eyeglasses are all myopia eyeglasses, and an image of a refractive area of the eyeglasses is an image of a refractive area of a lens (i.e., a concave lens) of the eyeglasses.
It should be noted that, in the embodiment of the present application, an image acquired by the electronic device through the camera may be an image obtained after the ISP module processes the original image captured by the camera. That is to say, the first image acquired by the electronic device through the camera may be the result of processing, by the ISP module, the original image captured by the camera. Optionally, an image acquired by the electronic device through the camera in the embodiment of the present application may also be the original image itself (i.e., an image in RAW format), which is not limited in the embodiment of the present application.
For example, the electronic device in the embodiment of the present application may be a tablet computer, a mobile phone, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) \ Virtual Reality (VR) device, a vehicle-mounted device, and the like, and the embodiment of the present application does not particularly limit the specific form of the electronic device.
The execution subject of the image processing method provided by the present application may be an image processing apparatus, and this apparatus may be the electronic device shown in fig. 3. It may also be the central processing unit (CPU) of the electronic device, or a control module in the electronic device for processing images. In the embodiments of the present application, the image processing method is described by taking the case where the electronic device performs the method as an example.
Referring to fig. 3, the electronic device provided in the present application is described herein by taking the electronic device as the mobile phone 200 shown in fig. 3 as an example. Therein, the cell phone 200 shown in fig. 3 is only one example of an electronic device, and the cell phone 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
As shown in fig. 3, the handset 200 may include: the mobile communication device includes a processor 210, an external memory interface 220, an internal memory 221, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, a camera 293, a display 294, and a Subscriber Identity Module (SIM) card interface 295.
The sensor module 280 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
Processor 210 may include one or more processing units, such as: the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be the neural center and command center of the cell phone 200. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache memory. The memory may hold instructions or data that have just been used or recycled by processor 210. If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 210, thereby increasing the efficiency of the system.
In some embodiments, processor 210 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
It should be understood that the connection relationship between the modules illustrated in this embodiment is only an exemplary illustration, and does not constitute a limitation to the structure of the mobile phone 200. In other embodiments, the mobile phone 200 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charge management module 240 is configured to receive a charging input from a charger. The charger may be a wireless charger or a wired charger. The charging management module 240 may also supply power to the electronic device through the power management module 241 while charging the battery 242.
The power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240, and provides power to the processor 210, the internal memory 221, the external memory, the display 294, the camera 293, and the wireless communication module 260. In some embodiments, the power management module 241 and the charging management module 240 may also be disposed in the same device.
The wireless communication function of the mobile phone 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like. In some embodiments, antenna 1 of handset 200 is coupled to mobile communication module 250 and antenna 2 is coupled to wireless communication module 260, such that handset 200 may communicate with networks and other devices via wireless communication techniques. For example, in the embodiment of the present application, the mobile phone 200 may transmit the facial image to other devices through a wireless communication technology.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the handset 200 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied to the handset 200. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 250 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation.
The mobile communication module 250 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the processor 210. In some embodiments, at least some of the functional modules of the mobile communication module 250 may be disposed in the same device as at least some of the modules of the processor 210.
The wireless communication module 260 may provide solutions for wireless communication applied to the mobile phone 200, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity, Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. For example, in the embodiment of the present application, the mobile phone 200 may access a Wi-Fi network through the wireless communication module 260.
The wireless communication module 260 may be one or more devices integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 210. The wireless communication module 260 may also receive a signal to be transmitted from the processor 210, frequency-modulate and amplify the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
The mobile phone 200 implements the display function through the GPU, the display screen 294, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 294 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 294 is used to display images, video, and the like. The display screen 294 includes a display panel. For example, in the embodiment of the present application, the display screen 294 may be used to display an application interface of a photographing application, such as an image preview interface.
The mobile phone 200 may implement a shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, and the application processor. The ISP is used to process the data fed back by the camera 293. The camera 293 is used to capture still images or video. In some embodiments, handset 200 may include 1 or N cameras 293, N being a positive integer greater than 1.
The external memory interface 220 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the mobile phone 200. The external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 221 may be used to store computer-executable program code, including instructions. The processor 210 executes various functional applications and data processing of the cellular phone 200 by executing instructions stored in the internal memory 221. For example, in the present embodiment, the processor 210 may execute instructions stored in the internal memory 221, and the internal memory 221 may include a program storage area and a data storage area.
The storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like. The data storage area may store data (such as audio data, a phone book, etc.) created during use of the mobile phone 200, and the like. In addition, the internal memory 221 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The mobile phone 200 can implement an audio function through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, and the application processor. Such as music playing, recording, etc.
The keys 290 include a power-on key, a volume key, etc. The keys 290 may be mechanical keys or touch keys. The motor 291 may generate a vibration cue, and can be used for both incoming call vibration prompting and touch vibration feedback. The indicator 292 may be an indicator light used to indicate the charging state and changes in battery level, or to indicate messages, missed calls, notifications, etc. The SIM card interface 295 is used to connect a SIM card. The SIM card can be attached to or detached from the mobile phone 200 by being inserted into or pulled out of the SIM card interface 295. The handset 200 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 295 may support a Nano SIM card, a Micro SIM card, a SIM card, etc.
Although not shown in fig. 3, the mobile phone 200 may also include a flash, a micro-projector, a Near Field Communication (NFC) device, etc., which are not described herein.
It is to be understood that the structure illustrated in the present embodiment does not specifically limit the mobile phone 200. In other embodiments, handset 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The methods in the following embodiments may be implemented in an electronic device having the above hardware structure and the above system architecture. In the following embodiments, the method of the embodiments of the present application is described by taking the above electronic device as an example.
Before executing the method provided by the embodiment of the application, the user may control the electronic device to start a photographing application, i.e., an application program for capturing images, such as a camera application. In some embodiments, in response to an operation of the user starting the photographing application, the electronic device may start the camera to capture an image. Illustratively, as shown in fig. 4 (a), the electronic device may receive an operation a (e.g., an operation of clicking a camera icon 401) input by a user. In response to the operation a of the user, the electronic device may start a camera (e.g., a main camera) to capture an image.
In addition, in response to the operation of the user starting the photographing application, the electronic device may display an image preview interface including a viewfinder, a preview image, and the like. Illustratively, as shown in fig. 4 (b), the image preview interface 402 may include: a viewfinder 403, a camera conversion key 404, a shooting key 405, an album key 406, a preview image 407, a "take picture" option, a "record" option, a flash option, a "portrait" option, etc.
The viewfinder 403 shown in fig. 4 (b) is used to display a preview image (for example, the preview image 407) captured by the camera. The preview image 407 is an image corresponding to a preset area, where the preset area is the area in which the electronic device acquires an image through the camera. That is, the user may point the camera of the electronic device toward the preset area; the camera then captures an image of the preset area and displays it in the viewfinder 403. The camera conversion key 404 is used to trigger the electronic device to switch between the front-facing camera and the rear-facing camera for capturing images. The shooting key 405 is used to control the electronic device to save the image captured by the camera. The album key 406 is used to view images saved in the electronic device. The flash option is used to trigger the electronic device to turn the flash on or off when taking a picture. The "record" option is used to trigger the electronic device to display a viewfinder interface for recording (not shown in the figures). The "take picture" option is used to trigger the electronic device to display a viewfinder interface for taking a picture (e.g., the image preview interface 402 shown in fig. 4 (b)). The "panorama" option is used to trigger the electronic device to display a viewfinder interface (not shown in the figures) for taking a panoramic photograph.
The embodiment of the application provides an image processing method, which may include S501-S505 shown in fig. 5.
S501, the electronic device detects a first operation.
The first operation is used to trigger the electronic device to start the photographing function. That is, upon receiving one operation from the user, the electronic device can capture an image. The embodiment of the present application does not limit the first operation. For example, the first operation may be a click operation of the user, such as an operation of clicking a photographing button (the shooting key 405 shown in (b) of fig. 4). For another example, the first operation may be a voice operation of the user, such as the user issuing a voice instruction of "take a picture".
S502, in response to the first operation of the user, the electronic device acquires a first image through a camera.
The first image comprises a face area image, and the face area image comprises an image of a first area and an image of a second area. The first area is used for representing the area where the skin of the human face is located, and the second area is used for representing the area where the glasses are located.
In one possible design, the first region may be a region of the facial skin that includes the ears, i.e., the first region may include the ears, nose, mouth, etc. Alternatively, the first region may be a region of the facial skin that does not include the ears. The second region may be the region where the spectacle lenses are located, i.e., the image of the second region only includes an image of the region where the spectacle lenses are located (excluding the spectacle frame, the temples, etc.). Alternatively, the second region may be the region where components such as the spectacle lenses, the spectacle frame, and the temples are located, i.e., the image of the second region may include an image of the region where these components are located. The embodiment of the present application does not limit the first region and the second region. For convenience of description, in the following embodiments, the first region is the region of the facial skin that does not include the ears, and the second region is the region where the spectacle lenses are located.
Optionally, the face region image is an image of the region surrounded by the actual face contour (i.e., the undeformed face contour), including the face contour image. The face contour may be complete, such as the face contour image 106 shown in (b) of fig. 1, or incomplete, such as the face contour partially occluded by glasses shown in (a) of fig. 12. Alternatively, it may be understood that the face region image includes an image of the largest region enclosed by the face contour and the glasses contour, such as the face contour image 106 shown in (b) of fig. 1 and the face contour image 1205 shown in (a) of fig. 12.
Optionally, when the electronic device determines whether the first image includes a face region image, the determination may be performed according to the actual complete contour of the face (the undeformed face contour), or according to an actual local contour of the face (such as the lower half of the face or the lower jaw). The determination may also be made in other ways; for example, the electronic device may determine through facial feature detection that the first image includes a mouth and a nose, and thereby determine that the first image includes a face. Any method by which the electronic device can determine that the first image includes an image of the face region may be used, which is not limited in this application.
Illustratively, as shown in fig. 1 (b), the image 102 includes a face region image 104, the face region image 104 is an image of a region surrounded by a face contour image 106, and the face region image 104 includes an eye region image 105 and the face contour image 106.
The camera is not limited in the embodiments of the present application. For example, the camera may be a main camera. For another example, the camera may be a tele camera. For another example, the camera may be a wide-angle camera. For another example, the camera is a front camera. That is to say, the method provided by the embodiment of the application is suitable for the image acquired by the electronic device in any shooting mode (such as a wide-angle shooting mode, a portrait shooting mode, etc.).
In some embodiments, after the electronic device acquires the first image through the camera, the electronic device may detect whether an image of the first region exists in the first image through a preset face detection algorithm. For example, the electronic device may detect whether a face exists in the first image through the preset face detection algorithm. In the case where a face is present in the first image, the electronic device may determine the image of the first region. The preset face detection algorithm is not limited in the embodiment of the application. For example, the preset face detection algorithm may be a face matching algorithm. For another example, the preset face detection algorithm may be a facial landmark detection algorithm.
It should be noted that, for the way the electronic device detects whether the image of the first region exists in the first image through the preset face detection algorithm, reference may be made to the method in the conventional technology by which an electronic device detects whether a face image exists in an image, and details are not repeated here.
If the image of the first region does not exist in the first image, the electronic device does not process the first image. If the image of the first region exists in the first image, the electronic device may obtain fourth position information through the preset face detection algorithm, where the fourth position information is used to indicate the position of the eye region image relative to the first image.
It should be noted that, after the electronic device acquires the first image through the camera, the electronic device may store the position information of each pixel point in the first image.
Illustratively, as shown in fig. 6A, the electronic device may parse the face image 601 through a face matching algorithm. The electronic device may then analyze the face image 601 according to the plurality of feature points in fig. 6A to obtain images of the respective parts, such as an eye image 602, an eyebrow image 603, a nose image 604, a lip image 605, and a face contour image 606.
Also, the position of the eye region image relative to the first image may be represented by a coordinate system. For example, the origin of the coordinate system of the first image may be any corner (e.g., the upper left corner or the lower left corner) of the first image, and the x-axis and the y-axis are the two adjacent edges. As shown in fig. 6B, point o is the origin of coordinates, the x-axis is the lower side of the first image 606, and the y-axis is the left side of the first image 606. The eye region image 607 includes a plurality of pixel points (for example, a pixel point A1, a pixel point A2, a pixel point A3, and the like). The two-dimensional coordinates of the pixel point A1, the pixel point A2, and the pixel point A3 in the xoy coordinate system shown in fig. 6B are A1(x1, y1), A2(x2, y2), and A3(x3, y3), respectively, and x1 < x2 < x3. The pixel point A1 is the pixel point with the smallest abscissa in the eye region image 607, the pixel point A2 is the pixel point located at the center of the eye region image 607, and the pixel point A3 is the pixel point with the largest abscissa in the eye region image 607. That is, the pixel point A1 is the pixel point in the eye region image 607 closest to the face contour image 608.
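As an illustration only, the following Python sketch shows one way such extreme pixel points could be located from the stored pixel coordinates; the function name, the array layout, and the use of NumPy are assumptions made for this sketch and are not specified in the embodiment.

```python
import numpy as np

def eye_region_extremes(eye_pixels):
    """Locate the pixel with the smallest abscissa (A1), the pixel nearest
    the center of the eye region (A2), and the pixel with the largest
    abscissa (A3), given an (N, 2) array of (x, y) coordinates whose
    origin is the lower-left corner of the first image."""
    pts = np.asarray(eye_pixels, dtype=float)
    a1 = pts[np.argmin(pts[:, 0])]     # smallest abscissa
    a3 = pts[np.argmax(pts[:, 0])]     # largest abscissa
    center = pts.mean(axis=0)          # geometric center of the region
    a2 = pts[np.argmin(np.linalg.norm(pts - center, axis=1))]
    return a1, a2, a3
```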
S503, the electronic device detects whether an image of a third region exists in the first image.
The third region is used to represent the region of the face within the second region that is deformed due to the refraction of the glasses. Illustratively, as shown in connection with (b) of fig. 1, the first image 501 includes an image 503 of the third region.
It is understood that an image of the third region (which may also be referred to as a distorted region image) may exist in the first image only in the case where an image of the second region (which may also be referred to as a glasses image) exists in the face region image. If the glasses image does not exist in the first image, the distorted region image does not exist in the first image either. In that case, if the electronic device were to continue to detect whether the distorted region image exists in the face region image, resources of the electronic device would be wasted and its energy consumption increased.
In some embodiments, before the electronic device detects whether the distorted region image is present in the face region image, the electronic device may detect whether the glasses image is present in the first image. Specifically, the electronic device may detect whether the first image has the glasses image through a preset object detection algorithm. Optionally, before the electronic device detects whether the distorted region image exists in the face region image, the electronic device may also detect whether a face exists in the first image through the preset face detection algorithm. If the first image does not have the glasses image and/or the face, the electronic device does not detect whether the face region image has the distorted region image. If the first image has both the glasses image and the face, the electronic device may detect whether the face region image has the distorted region image.
In this embodiment, the order in which the electronic device detects whether the glasses image exists in the first image and the electronic device detects whether the face exists in the first image is not limited. For example, the electronic device may detect whether a glasses image exists in the first image, and then detect whether a human face exists in the first image. For another example, the electronic device may first detect whether a human face exists in the first image, and then detect whether a glasses image exists in the first image. For another example, the electronic device may simultaneously detect whether a human face exists in the first image and whether a glasses image exists in the first image.
It should be noted that, in the embodiment of the present application, the preset object detection algorithm is not limited. For example, the preset object detection algorithm may be a Selective Search (Selective Search) algorithm. For another example, the predetermined object detection algorithm may be a Spatial Pyramid Pooling (Spatial Pyramid Pooling) algorithm. Optionally, the preset object detection algorithm may be a preset object detection model. For example, the preset object detection model may be an Adaboost classifier. For another example, the preset object detection model may be an SVM classifier. For the way that the electronic device detects whether the face exists in the first image through the preset face detection algorithm, reference may be made to the description in the foregoing embodiments, which is not described herein again.
It is understood that if the glasses image does not exist in the first image, the distorted region image cannot exist in the first image. Therefore, if the first image does not have the glasses image, the electronic device neither needs to detect whether the face region image has the distorted region image nor needs to process the first image. In this way, the resource utilization rate of the electronic device is improved, and the energy consumption of the electronic device is reduced.
It should be noted that in some scenarios (e.g., in a glasses shop), there may be multiple pairs of glasses in the first image. The distorted region image, however, is an image of a region refracted by glasses worn on a face. If none of the glasses are worn on a face, the distorted region image does not exist in the first image. Therefore, if the electronic device were to detect whether the distorted region image exists in the face region image when the glasses are not worn on a face, resources of the electronic device would be wasted, resulting in a low resource utilization rate.
In some embodiments, before the electronic device detects whether the distorted region image is present in the face region image (or after the electronic device determines that the glasses image is present in the face region image), the electronic device may determine the positional relationship between the eye region image and the glasses image. Specifically, the electronic device may acquire first position information indicating the position of the glasses image relative to the first image. The electronic device may determine whether the eye region image is within the range of the glasses image according to the fourth position information and the first position information. The eye region image being within the range of the glasses image means that the eye region image is surrounded by the glasses image.
For example, the electronic device may obtain the first position information through a semantic segmentation algorithm. The semantic segmentation algorithm is not limited in the embodiment of the present application. For example, the semantic segmentation algorithm may be a Fully Convolutional Network (FCN). For another example, the semantic segmentation algorithm may be a Pyramid Scene Parsing Network (PSPNet). For another example, the semantic segmentation algorithm may be a DeepLab algorithm.
Note that the face region image includes the eye region image. If the eye region image is within the range of the glasses image, the glasses image overlaps the face region image. That is, if the eye region image is within the range of the glasses image, it indicates that the glasses are worn on the face.
Illustratively, the position of the glasses image relative to the first image may also be represented by a coordinate system. For example, in conjunction with the xoy coordinate system shown in fig. 6B, as shown in fig. 6C, point o is the origin of coordinates, the x-axis is the lower side of the first image 606, and the y-axis is the left side of the first image 606. The glasses image 607 includes a plurality of pixel points, which may include: a pixel point A4, a pixel point A5, a pixel point A6, and a pixel point A7, whose two-dimensional coordinates in the xoy coordinate system shown in fig. 6C are A4(x4, y4), A5(x5, y5), A6(x6, y6), and A7(x7, y7), respectively. The pixel point A4 is the pixel point with the smallest abscissa in the glasses image 607 (i.e., x4 < x5, x4 < x6, and x4 < x7), the pixel point A5 is the pixel point with the largest abscissa in the glasses image 607 (i.e., x4 < x5, x6 < x5, and x7 < x5), the pixel point A6 is the pixel point with the largest ordinate in the glasses image 607 (i.e., y4 < y6, y5 < y6, and y7 < y6), and the pixel point A7 is the pixel point with the smallest ordinate in the glasses image 607 (i.e., y7 < y4, y7 < y5, and y7 < y6).
Then, the electronic device may determine whether the eye region image is within the range of the glasses image according to the two-dimensional coordinates of the pixel point A1, the pixel point A2, and the pixel point A3 in the xoy coordinate system shown in fig. 6B, and the two-dimensional coordinates of the pixel point A4, the pixel point A5, the pixel point A6, and the pixel point A7 in the xoy coordinate system shown in fig. 6C.
For example, the electronic device may determine whether the eye region image is within the range of the glasses image by determining whether the most marginal pixel points in the eye region image (e.g., the pixel point A1 and/or the pixel point A3) are within the glasses image. Specifically, taking the pixel point A1 as an example, if x4 < x1 < x5 and y7 < y1 < y6, the electronic device may determine that the eye region image is within the range of the glasses image. For another example, the electronic device may determine whether the eye region image is within the range of the glasses image by determining whether the center pixel point of the eye region image (e.g., the pixel point A2) is within the glasses image. Specifically, taking the pixel point A2 as an example, if x4 < x2 < x5 and y7 < y2 < y6, the electronic device may determine that the eye region image is within the range of the glasses image.
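Purely as an illustrative sketch, the containment test might be written as follows; the helper name and the reduction of the glasses image to its extreme coordinates are assumptions, not part of the embodiment.

```python
def eye_within_glasses(eye_pt, glasses_pts):
    """Test whether an eye-region pixel (e.g., A1 or A2) satisfies
    x4 < x < x5 and y7 < y < y6, where x4/x5 are the smallest/largest
    abscissas and y7/y6 the smallest/largest ordinates among the
    glasses-image pixel points."""
    xs = [p[0] for p in glasses_pts]
    ys = [p[1] for p in glasses_pts]
    x4, x5 = min(xs), max(xs)
    y7, y6 = min(ys), max(ys)
    x, y = eye_pt
    return x4 < x < x5 and y7 < y < y6
```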
In this embodiment, the electronic device may determine the image of a fourth region, where the fourth region is the overlapping region of the first region and the second region. That is, there is an intersection between the region where the facial skin is located and the region where the glasses are located.
The region where the face skin is located refers to a region where the actual face skin is located. For example, as can be seen from fig. 7 (c), the image of the fourth area includes: glasses image 706, eyebrow region image 707, eye region image 708, and warp region image 710, and the like. For another example, as can be seen from fig. 10 (b), the image of the fourth region includes: a skin region image 1001 and a warp region image 1002.
If the eye region image is not within the range of the glasses image, the electronic device does not need to detect whether a distorted region image exists in the face region image.
It is understood that the eye region image not being within the range of the glasses image indicates that the glasses are not worn on the face. The distorted region image is an image of a region refracted by glasses worn on the face. Therefore, when the glasses are not worn on the face, the distorted region image does not appear in the first image, and the electronic device does not need to detect whether the distorted region image exists in the face region image. In this way, resource waste of the electronic device is reduced, and the resource utilization rate of the electronic device is improved.
If the eye region image is within the range of the glasses image, the electronic device may detect whether a distorted region image exists in the face region image. Specifically, the electronic device may detect whether a distorted region image exists in the image of the fourth region. The degree of difference between the color of the distorted region image and the color of the image of the first region (which may also be referred to as the skin region image) is greater than a preset difference threshold.
It should be noted that, the preset difference threshold is not limited in the embodiment of the present application. For example, the preset difference threshold may be 50%. For another example, the preset difference threshold may be 40%. For example, the preset difference threshold may be 35.5%. The skin area image is not limited in the embodiments of the present application. For example, the skin area image may be a skin image around the eye area image. For another example, the skin region image is an image of skin in the face region image.
It should be noted that, in the embodiment of the present application, the difference degree between the color of the distorted region image and the color of the skin region image being greater than the preset difference threshold means that the difference degree between the color information of the pixel points in the distorted region image and the color information of the pixel points in the skin region image is greater than the preset difference threshold (for example, 50%, 40%, 35.5%, and the like). The color information includes RGB values or HSV values. The following describes embodiments of the present application by taking the color information as RGB values as an example.
It should be noted that, in the embodiment of the present application, the difference degree between the RGB values of the pixel points in the distorted region image and the RGB values of the pixel points in the skin region image is not limited. For example, the difference between the RGB values of the pixels in the distorted region image and the RGB values of the pixels in the skin region image may be the difference between the RGB value of each pixel in the distorted region image and the average of the RGB values of all the pixels in the skin region image. For another example, the difference between the RGB values of the pixels in the distorted region image and the RGB values of the pixels in the skin region image may be the difference between the average of the RGB values of all the pixels in the distorted region image and the average of the RGB values of the skin region image.
Illustratively, suppose the average RGB values of all the pixel points in the distorted region image are R1: 20, G1: 20, B1: 50, and the average RGB values of all the pixel points in the skin region image are R2: 100, G2: 40, B2: 100. Then the difference degree of the R value is 80%, the difference degree of the G value is 50%, and the difference degree of the B value is 50%, and the electronic device determines that the difference degree between the RGB values of the pixel points in the distorted region image and the RGB values of the pixel points in the skin region image is greater than the preset difference threshold.
That is, if the preset difference threshold is 50%, the difference degree of the R value (80%) exceeds the threshold. The electronic device may therefore determine that the degree of difference between the color of the distorted region image and the color of the skin region image is greater than the preset difference threshold.
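A minimal Python sketch of this per-channel comparison follows; the function name and the dictionary representation of the averages are assumptions made for illustration, and the numbers are those of the example above.

```python
def channel_difference_degree(warped_avg, skin_avg):
    """Difference degree per channel: |skin - warped| / skin,
    e.g. |100 - 20| / 100 = 80% for the R channel above."""
    return {ch: abs(skin_avg[ch] - warped_avg[ch]) / skin_avg[ch]
            for ch in ("R", "G", "B")}

diff = channel_difference_degree({"R": 20, "G": 20, "B": 50},
                                 {"R": 100, "G": 40, "B": 100})
# diff == {"R": 0.8, "G": 0.5, "B": 0.5}; the R channel exceeds
# a preset difference threshold of 50%
exceeds = any(d > 0.5 for d in diff.values())
```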
Note that the distorted region image includes content other than the first region (for example, a background image). That is, the color of the distorted region image is different from the color of the skin region image. Therefore, when an image of a region other than the face is present within the image of the fifth region (described below) in the face region image, the color of the image of the fifth region becomes uneven.
In this embodiment, the electronic device may determine whether the distorted region image exists in the first image according to whether the color of the image of the fifth region is uniform. The image of the fourth region includes the image of a fifth region, where the fifth region is the portion of the fourth region other than the second region and the eye region. That is, the fifth region is the skin region within the fourth region. Specifically, the electronic device may determine the image of the fifth region.
The manner in which the electronic device determines the image of the fifth region is related to the positional relationship between the glasses image and the face contour image. Different types of glasses come in different sizes, and faces also differ in size from user to user. Therefore, when the glasses are worn on the face, there may be two positional relationships between the glasses image and the face contour image: positional relationship one and positional relationship two. Positional relationship one: the glasses image is within the range of the face contour image, i.e., the glasses image is surrounded by the face contour image. Illustratively, as shown in fig. 7 (a), the glasses image 702 is within the range of the face contour image 701. Positional relationship two: the glasses image (or a partial region image of the glasses image) is not within the range of the face contour image (i.e., is outside the range of the face contour image), i.e., the glasses image is not surrounded by the face contour image. Illustratively, as shown in fig. 7 (b), a partial region image of the glasses image 704 is outside the range of the face contour image 703.
In the embodiment of the application, the electronic device can determine whether the glasses image is in the range of the face contour image according to the first position information and the second position information. Wherein the second position information is used for indicating the position of the face contour image relative to the first image.
It should be noted that, in the embodiment of the present application, the face contour image is an actual face contour image (i.e., an image of an undistorted face contour). Before the electronic device determines whether the eyeglass image is within the range of the face contour image, the electronic device may determine an actual face contour image (refer to the description of the electronic device determining the actual face contour image in the embodiments described below). The face contour image is an image formed by the most marginal pixel points in the face region image. That is, the range of the face contour image is the same as the range of the face region image. Therefore, in the embodiment of the present application, whether the glasses image is within the range of the face contour image may also be expressed as whether the glasses image is within the range of the face region image.
For example, in conjunction with fig. 6C, the electronic device may determine whether the glasses image is within the range of the face contour image according to the two-dimensional coordinates of the pixel point A4, the pixel point A5, the pixel point A6, and the pixel point A7 in the xoy coordinate system shown in fig. 6C, and the two-dimensional coordinates of the face contour image 608 in the xoy coordinate system. If the pixel point A4, the pixel point A5, the pixel point A6, and the pixel point A7 are all within the range of the face contour image 608, the electronic device may determine that the glasses image is within the range of the face contour image. For example, take a pixel point A8 and a pixel point A9 in the face contour image 608, whose two-dimensional coordinates in the xoy coordinate system shown in fig. 6C are A8(x8, y8) and A9(x9, y9), respectively, with y4 = y8 = y9 and x8 < x9. If x8 < x4 < x9, the electronic device may determine that the pixel point A4 is within the face contour image 608. If any one of the pixel point A4, the pixel point A5, the pixel point A6, and the pixel point A7 is not within the range of the face contour image 608, the electronic device may determine that the glasses image is outside the range of the face contour image. For example, if x4 < x8 < x9, the electronic device may determine that the pixel point A4 is outside the range of the face contour image 608.
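As a sketch only, this row-wise test might look like the following; the helper names, the tolerance parameter, and the point-list representation of the contour are illustrative assumptions.

```python
def point_within_contour(pt, contour_pts, tol=1.0):
    """Find the contour pixels on (approximately) the same row as the
    tested glasses pixel, e.g. A8 and A9 for A4, and check that the
    pixel's abscissa falls strictly between theirs (x8 < x4 < x9)."""
    x, y = pt
    row_xs = [p[0] for p in contour_pts if abs(p[1] - y) <= tol]
    if len(row_xs) < 2:
        return False   # no contour span on this row
    return min(row_xs) < x < max(row_xs)

def glasses_within_contour(extreme_pts, contour_pts):
    """The glasses image is within the face contour only if all four
    extreme pixel points (A4, A5, A6, A7) pass the row-wise test."""
    return all(point_within_contour(p, contour_pts) for p in extreme_pts)
```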
In some embodiments, in the case that the glasses image is within the range of the face contour image (i.e., positional relationship one), the electronic device may determine the image of the fifth region in the following manner. Specifically, the electronic device may obtain the first position information (i.e., the position of the glasses image relative to the first image, as shown in fig. 6C). The electronic device may then acquire each feature region image, such as the eye region image and the eyebrow region image, in the face region image within the range of the glasses image (including the glasses image itself). The electronic device may then determine that the image within the range of the glasses image, other than the glasses image and the images of the respective parts, is the image of the fifth region. For example, as shown in (c) of fig. 7, the face region image within the range of the glasses image 706 includes the eye region image 708 and an image 709 of the fifth region, where the image 709 of the fifth region is the image within the range of the glasses image 706 other than the eye region image 708 and the glasses image 706; that is, the image 709 of the fifth region does not include the glasses image 706 and the eye region image 708. Optionally, the face region image within the range of the glasses image 706 includes: the eyebrow region image 707, the eye region image 708, and the image 709 of the fifth region, where the image 709 of the fifth region is the image within the range of the glasses image 706 other than the glasses image 706, the eye region image 708, and the eyebrow region image 707; that is, the image 709 of the fifth region does not include the glasses image 706, the eyebrow region image 707, and the eye region image 708. Optionally, the image of the fifth region may include the distorted region image 710.
For another example, suppose that the electronic device can obtain the position information of 100 pixel points B1-B100 in the glasses image range. The pixel points of the glasses images comprise B1-B10, the pixel points of the eye area images comprise B11-B20, and the pixel points of the eyebrow area images comprise B21-B30. That is, B31-B100 are the pixel points of the image of the fifth region.
It should be noted that, in the case that the glasses image is outside the range of the face contour image (i.e., positional relationship two), the face contour image may be changed. Illustratively, in conjunction with (b) in fig. 7, the face contour image 705 within the glasses image 704 is shifted, which causes the face contour of the face region image 703 to appear broken and discontinuous.
In other embodiments, in the case that the glasses image is outside the range of the face contour image (i.e., positional relationship two), the electronic device may determine the image of the fifth region in the following manner. Specifically, the electronic device may determine a face contour image according to the second position information and a preset curve fitting algorithm, where the face contour image is the actual face contour image. The preset curve fitting algorithm is not limited in the embodiment of the present application. For example, the preset curve fitting algorithm may be a polynomial curve fitting algorithm. For another example, the preset curve fitting algorithm may be a least squares method. Illustratively, take the polynomial curve fitting algorithm as an example. As shown in fig. 8, the face region image 801 includes: a distorted face contour image 802 and a partial actual face contour image 804. The electronic device can obtain the position (e.g., the two-dimensional coordinates) of the partial face contour image 804 relative to the first image. The electronic device then obtains the actual face contour image 803 according to the position of the face contour image 804 relative to the first image and the polynomial curve fitting algorithm.
It should be noted that the face contour images in the embodiments of the present application are all used to represent actual face contour images. Specifically, for the way that the electronic device obtains the actual face contour image 803 according to the position of the face contour image 804 relative to the first image and the polynomial curve fitting algorithm, a method of performing curve fitting by the electronic device according to the curve fitting algorithm and the two-dimensional coordinates of a plurality of points in the coordinate system in the conventional technology may be referred to, and details are not repeated here.
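A minimal sketch of such a fit, assuming NumPy and an illustrative choice to fit the contour's x-coordinate as a polynomial in y with an assumed degree of 3 (neither choice is specified in the embodiment), could be:

```python
import numpy as np

def fit_face_contour(contour_pts, degree=3):
    """Fit a polynomial through the coordinates of the undistorted part of
    the face contour (e.g., image 804) and evaluate it over the full
    vertical span to reconstruct the actual contour (e.g., image 803)."""
    pts = np.asarray(contour_pts, dtype=float)
    ys, xs = pts[:, 1], pts[:, 0]
    coeffs = np.polyfit(ys, xs, degree)            # least-squares fit
    y_full = np.linspace(ys.min(), ys.max(), 200)  # sample the fitted curve
    return np.column_stack([np.polyval(coeffs, y_full), y_full])
```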
Thereafter, the electronic device may acquire the first location information. Then, the electronic device can acquire each feature region image in the face region image within the range of the glasses image (including the glasses image). The face region image is an image (including a face contour image) within an actual face contour image range. The electronic device may determine that an image other than the glasses image and the respective part images in the face region image within the glasses image range is an image of a fifth region.
It can be understood that after the face contour image in the face region image is corrected, the face region image within the range of the glasses image also changes. For example, referring to fig. 8, as shown in (a) of fig. 9, when the electronic device does not correct the face contour image, the face region image within the range of the glasses image is the region image a901. As shown in fig. 9 (b), when the electronic device corrects the face contour image, the face region image within the range of the glasses image consists of the region image a901 and the region image b902.
In the case where the face region image within the range of the glasses image is changed, the image of the fifth region also changes. Illustratively, as shown in fig. 10 (a), when the electronic device does not correct the face contour image, the face region image within the range of the glasses image includes an eyebrow region image 1003 and an eye region image 1004, and the image of the fifth region is a skin region image 1001, which does not include the eyebrow region image 1003 and the eye region image 1004. As shown in fig. 10 (b), when the electronic device corrects the face contour image, the face region image within the range of the glasses image consists of the skin region image 1001 and the distorted region image 1002.
It should be noted that, for the method for determining the image of the fifth area by the electronic device, reference may be made to a manner that the electronic device determines the image of the fifth area when the glasses image is within the range of the face contour image (i.e., the first positional relationship), which is not described herein again.
Then, the electronic device may obtain color information of each pixel point in the image of the fifth region, where the color information includes an RGB value or an HSV value. The electronic device can detect whether a distorted region image exists in the face region image according to the color information of each pixel point in the image of the fifth region.
For example, in a case that the color information includes RGB values, the electronic device may obtain the RGB values of each pixel point in the image of the fifth region, and calculate a first variance, which is a variance of the RGB values of the pixel points in the image of the fifth region. Illustratively, the first variance may be calculated by formula one.
$$S^2 = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 \qquad \text{(formula one)}$$

where $S^2$ represents the first variance, $x_i$ represents the RGB value of the $i$-th pixel point in the image of the fifth region, $\bar{x}$ represents the average value of the RGB values of the skin region image, and $n$ represents the number of pixel points in the image of the fifth region.
It should be noted that, after the electronic device acquires the first image, the RGB value of each pixel point in the first image may be stored. The position of the RGB value of the pixel point stored by the electronic equipment is not limited. For example, the electronic device can store the RGB values for each pixel point in the first image in a memory.
If the first variance is greater than a first preset variance threshold, the electronic device may determine that the color of the image of the fifth region is not uniform, and a distorted region image exists in the face region image. If the first variance is smaller than a first preset variance threshold, the electronic device may determine that the color of the image of the fifth region is uniform, and the first image does not have a distorted region image. The first preset variance threshold may be 10, 15, 15.5, and the like, which is not limited in this embodiment of the application.
It is understood that the distorted region image includes content other than the facial skin region. That is, the color of the distorted region image may be different from the color of the skin region image. Therefore, when the dispersion degree of the RGB values of the pixel points in the image of the fifth region is high, it indicates that the color of the image of the fifth region is not uniform and that an image of a region other than the face exists in the image of the fifth region, i.e., a distorted region image exists in the face region image. When the dispersion degree of the RGB values of the pixel points in the image of the fifth region is low, it indicates that the color of the image of the fifth region is uniform and that no image of a region other than the face exists in the image of the fifth region, i.e., no distorted region image exists in the first image.
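A compact sketch of this check follows, applying formula one per channel; NumPy, the function name, and the example threshold of 15 (one of the example values above) are assumptions.

```python
import numpy as np

def has_warped_region(fifth_region_rgb, skin_avg_rgb, threshold=15.0):
    """Formula one, evaluated per channel: the mean squared deviation of
    the fifth-region pixels' RGB values from the skin region's average
    RGB value. A variance above the first preset variance threshold on
    any channel marks the color as non-uniform."""
    x = np.asarray(fifth_region_rgb, dtype=float)      # shape (n, 3)
    diff = x - np.asarray(skin_avg_rgb, dtype=float)   # x_i - x_bar
    s2 = (diff ** 2).mean(axis=0)                      # S^2 for R, G, B
    return bool((s2 > threshold).any())
```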
In the case where the color information includes HSV values, the electronic device may calculate a second variance, which is the variance of the HSV values of the pixel points in the image of the fifth region. For example, the electronic device may obtain the RGB value of each pixel point in the image of the fifth region, and then obtain the HSV value of each pixel point according to the RGB value of each pixel point and a preset conversion algorithm. If the second variance is greater than a second preset variance threshold, the electronic device may determine that the color of the image of the fifth region is not uniform, and a distorted region image exists in the face region image. If the second variance is smaller than the second preset variance threshold, the electronic device may determine that the color of the image of the fifth region is uniform, and no distorted region image exists in the first image. The second preset variance threshold may be 10, 15, 15.5, and the like, which is not limited in this embodiment of the application.
It should be noted that, the electronic device obtains the HSV value of each pixel point according to the RGB value of each pixel point and the preset conversion algorithm, and a method for obtaining the HSV value by the electronic device according to the RGB value in the conventional technology may be referred to, which is not described herein again.
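For reference, a sketch of one standard RGB-to-HSV conversion is shown below; the scaling of H to [0, 180) and of S, V to [0, 255] mirrors a common convention consistent with the preset skin HSV values used later in this embodiment, but the exact conversion algorithm is not specified in the source and is an assumption here.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert an 8-bit RGB value to HSV, scaling H to [0, 180)
    and S, V to [0, 255]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 180.0, s * 255.0, v * 255.0
```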
In other embodiments, the electronic device may determine whether the distorted region image exists in the first image through a preset image recognition model, which is used to recognize whether a distorted region image exists in an image. Specifically, the electronic device may establish the preset image recognition model. The preset image recognition model may be established based on a semantic segmentation algorithm; for example, it may be built based on the FCN, or based on PSPNet. The electronic device may then train the preset image recognition model. For example, an annotator may annotate the distorted region image and the undistorted region image in a plurality of images (e.g., 10, 100, 1000, etc.). The electronic device may then input the plurality of images into the preset image recognition model, which outputs the recognized distorted region image in each image. The electronic device can then compare the distorted region image annotated in each image with the distorted region image recognized by the preset image recognition model, and determine the accuracy with which the preset image recognition model recognizes distorted region images. For example, if the resolution of the annotated distorted region image in image a is 300 × 400, and the resolution of the distorted region image in image a recognized by the preset image recognition model is 150 × 200, the accuracy of the preset image recognition model in recognizing the distorted region image is 50%.
If the accuracy of the preset image recognition model in recognizing the distorted region image is smaller than a preset recognition threshold, the electronic device may adjust the parameters of the preset image recognition model and continue to train it. If the accuracy of the preset image recognition model in recognizing the distorted region image is greater than the preset recognition threshold, the electronic device may determine that the preset image recognition model is trained. The electronic device may then input the first image into the trained preset image recognition model to determine whether the first image has a distorted region image.
It should be noted that, in the embodiment of the present application, the preset recognition threshold is not limited. For example, the preset recognition threshold may be 90%. For another example, it may be 85%. For another example, it may be 85.5%.
It can be understood that the higher the preset recognition threshold, the higher the accuracy of the trained preset image recognition model in recognizing the distorted region image. Therefore, the accuracy with which the electronic device recognizes the distorted region image through the preset image recognition model can be improved.
In some embodiments, if the distorted region image exists in the face region image, the electronic device may perform S504. If the face region image does not have the distorted region image, the electronic device executes S505.
S504, the electronic device processes the image of the third region to obtain a second image.
The second image includes a processed distorted region image, and the difference between the color of the processed distorted region image and the color of the skin region image is smaller than the difference between the color of the original distorted region image and the color of the skin region image.
In one possible design, the degree of difference between the color of the processed distorted region image and the color of the skin region image is less than the preset difference threshold.
It should be noted that, in this embodiment of the application, that the difference degree between the color of the processed distorted region image and the color of the skin region image is smaller than the preset difference threshold means that the difference degree between the color information of the pixel point in the processed distorted region image and the color information of the pixel point in the skin region image is smaller than the preset difference threshold.
It should be noted that, in the embodiment of the present application, the difference degree between the RGB values of the pixel points in the processed distorted region image and the RGB values of the pixel points in the skin region image is not limited. For example, it may be the difference degree between the RGB value of each pixel point in the processed distorted region image and the average of the RGB values of all the pixel points in the skin region image. For another example, it may be the difference degree between the average of the RGB values of all the pixel points in the processed distorted region image and the average of the RGB values of the skin region image.
Illustratively, suppose the average RGB values of all the pixel points in the processed distorted region image are R1: 80, G1: 30, B1: 60, and the average RGB values of all the pixel points in the skin region image are R2: 100, G2: 40, B2: 100. Then the difference degree of the R value is 20%, the difference degree of the G value is 25%, and the difference degree of the B value is 40%. Since the difference degree of the R value, the difference degree of the G value, and the difference degree of the B value are all smaller than the preset difference threshold, the electronic device determines that the difference degree between the RGB values of the pixel points in the processed distorted region image and the RGB values of the pixel points in the skin region image is smaller than the preset difference threshold.
That is, if the preset difference threshold is 50%, the difference degrees of the R, G, and B values (20%, 25%, and 40%) are all below the threshold. The electronic device may determine that the degree of difference between the color of the processed distorted region image and the color of the skin region image is less than the preset difference threshold.
It will be appreciated that the color of the processed distorted region image differs little from the color of the skin region image, i.e., the processed distorted region image is the same as or similar to the skin region image. Therefore, the distorted region image can be processed into an image that is the same as or similar to the skin region image, ensuring that the face image acquired by the electronic device is complete and undistorted, which improves image quality.
In this embodiment, after the electronic device determines that the distorted region image exists in the face region image, the electronic device may acquire third location information indicating a location of the distorted region image relative to the first image. Specifically, the electronic device may obtain color information of each pixel point in the image of the fifth region. And then, the electronic equipment can determine whether the pixel points are pixel points in the skin area image according to the color information of each pixel point and the preset skin color information. The preset skin color information is used for indicating the color of the skin area image, and the electronic device can save the preset skin color information in advance.
In a possible implementation manner, the color information is an RGB value, the preset skin color information is a preset skin RGB value, and the electronic device may obtain an RGB value of each pixel point in the image of the fifth region. Then, the electronic device may compare the RGB value of each pixel point in the image of the fifth region with the preset skin RGB value. If the RGB value of the pixel point is larger than the preset skin RGB value, the electronic equipment can determine that the pixel point is the pixel point in the skin area image. If the RGB value of the pixel point is smaller than the preset skin RGB value, the electronic equipment can determine that the pixel point is not the pixel point in the skin area image.
Wherein, the preset skin RGB values comprise: a preset R value, a preset G value and a preset B value. The RGB value of the pixel point is greater than the preset skin RGB value, which means that the R value of the pixel point is greater than the preset R value, the G value of the pixel point is greater than the preset G value, and the B value of the pixel point is greater than the preset B value. The RGB value of the pixel point is smaller than the preset skin RGB value, namely, the R value of the pixel point is smaller than the preset R value, and/or the G value of the pixel point is smaller than the preset G value, and/or the B value of the pixel point is smaller than the preset B value.
Note that, the preset skin RGB values are not limited in the embodiments of the present application. For example, the preset R value may be 95, the preset G value may be 40, and the preset B value may be 20. For another example, the preset R value may be 220, the preset G value may be 210, and the preset B value may be 170.
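As an illustrative sketch only, the RGB skin test can be expressed as follows; the function name is an assumption, and the default preset is the first example preset given above.

```python
def is_skin_pixel_rgb(r, g, b, preset=(95, 40, 20)):
    """A pixel point is treated as a skin pixel only if its R, G, and B
    values all exceed the corresponding preset skin RGB values."""
    preset_r, preset_g, preset_b = preset
    return r > preset_r and g > preset_g and b > preset_b
```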
In another possible implementation manner, the color information is an HSV value, and the preset skin color information is a preset skin HSV value. Wherein, presetting the skin HSV value comprises the following steps: the first preset H value, the second preset H value, the preset S value and the preset V value. The electronic device may obtain an HSV value of each pixel point in the image of the fifth region. Then, the electronic device may compare the HSV value of each pixel point in the image of the fifth region with a preset skin HSV value. If the H value of the pixel point is greater than the first preset H value, the H value of the pixel point is less than the second preset H value, the S value of the pixel point is greater than the preset S value, and the V value of the pixel point is greater than the preset V value, the electronic device may determine that the pixel point is a pixel point in the skin region image. If the HSV value of the pixel point is smaller than the first preset H value, and/or the H value of the pixel point is larger than the second preset H value, and/or the S value of the pixel point is smaller than the preset S value, and/or the V value of the pixel point is smaller than the preset V value, the electronic device may determine that the pixel point is not a pixel point in the skin region image.
It should be noted that, the preset skin HSV value is not limited in the embodiments of the present application. For example, the first preset H value may be 0, the second preset H value may be 21, the preset S value may be 49, and the preset V value may be 51. For another example, the first preset H value may be 1, the second preset H value may be 22, the preset S value may be 50, and the preset V value may be 52.
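The HSV counterpart, again only as a sketch with an assumed function name and the first example preset values as defaults:

```python
def is_skin_pixel_hsv(h, s, v, h1=0, h2=21, s_min=49, v_min=51):
    """A pixel point is treated as a skin pixel only if H lies strictly
    between the first and second preset H values and S and V exceed
    their respective presets."""
    return h1 < h < h2 and s > s_min and v > v_min
```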
If a pixel point in the image of the fifth region is a pixel point in the skin region image, the electronic device does not process the pixel point. If a pixel point in the image of the fifth region is not a pixel point in the skin region image, the electronic device may determine that the pixel point is a pixel point in the distorted region image and obtain the position of the pixel point relative to the first image. The electronic device may thus obtain the position of each pixel point in the distorted region image relative to the first image, and these positions may constitute the third position information.
It is understood that, after the electronic device acquires the position of the distorted area image relative to the first image (i.e., the third position information), the electronic device may process the distorted area image so that the color of the processed distorted area image is the same as or similar to the color of the skin area image. In this way, the distorted area image is processed into an image that is the same as or similar to the skin area image, which ensures that the face image captured by the electronic device is complete and undistorted, and improves the image quality.
In this embodiment of the application, after the electronic device obtains the third position information, the electronic device may process the distorted area image according to the third position information to obtain the second image. Specifically, the electronic device may perform image filling on the distorted area image to obtain the second image, for example in manner (a) or manner (b). Manner (a) adjusts the RGB value of each pixel point in the distorted area image. Manner (b) processes the distorted area image through a preset image filling model, where the preset image filling model is used to fill the distorted area image in an image.
In manner (a), the electronic device may adjust the RGB value of each pixel point in the distorted region image to obtain the second image. Specifically, the electronic device may obtain the RGB value of each pixel point in the skin region image, and then calculate a target average value, which is the average of the RGB values of all the pixel points in the skin region image. Here, the skin region image may be, for example, the part of the image of the fifth region other than the distorted region image. For example, suppose the skin region image includes pixel point A, pixel point B and pixel point C. The RGB values of pixel point A are R: 100, G: 80, B: 150; the RGB values of pixel point B are R: 99, G: 85, B: 150; and the RGB values of pixel point C are R: 101, G: 90, B: 153. The target average values are then R: 100, G: 85, B: 151.
The electronic device may adjust the RGB values of all the pixel points in the distorted region image to the target average value to obtain the second image. For example, the R value and G value of pixel point A1 in the distorted region image are 200 and 30, and the target average values are R: 100, G: 85, B: 151. The electronic device adjusts the RGB values of pixel point A1 in the distorted region image to R: 100, G: 85, B: 151.
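A minimal sketch of manner (a), assuming the first image is an H x W x 3 RGB array and that boolean masks for the skin region and the distorted region have been derived as above; all names are illustrative.

import numpy as np

def fill_with_skin_average(first_image: np.ndarray,
                           skin_mask: np.ndarray,
                           distort_mask: np.ndarray) -> np.ndarray:
    """Adjust every distorted-region pixel to the target average value, i.e.
    the mean RGB value over all skin-region pixels (e.g. R: 100, G: 85, B: 151)."""
    second_image = first_image.copy()
    target_average = first_image[skin_mask].mean(axis=0)
    second_image[distort_mask] = target_average.round().astype(first_image.dtype)
    return second_image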
For example, in the case where the positional relationship between the glasses image and the face contour image is the first positional relationship, that is, the glasses image is within the range of the face contour image, the first image captured by the electronic device may be a first image 1101 as shown in (a) of fig. 11, and the first image 1101 may include a distorted area image 1102. After the electronic device processes the first image (i.e., the electronic device performs S503 and S504), the electronic device may obtain a second image 1103 as shown in (b) of fig. 11, and the second image 1103 may include a processed distorted area image 1104. For another example, in the case where the positional relationship between the glasses image and the face contour image is the second positional relationship, that is, the glasses image is out of the range of the face contour image, the first image captured by the electronic device may be a first image 1201 as shown in (a) of fig. 12, and the first image 1201 may include a distorted area image 1202. After the electronic device processes the first image (i.e., the electronic device performs S503 and S504), the electronic device may obtain a second image 1203 as shown in (b) of fig. 12, where the second image 1203 may include a processed distorted area image 1204.
Optionally, the electronic device may adjust the RGB value of each pixel point in the distorted region image through a preset stretching algorithm. The preset stretching algorithm is not limited in the embodiment of the present application. For example, the preset stretching algorithm may be an affine transformation algorithm. For another example, the preset stretching algorithm may be a partial margin scaling algorithm. For another example, the preset stretching algorithm may be a global fitting algorithm.
It should be noted that, for the way that the electronic device adjusts the RGB value of each pixel point in the image of the distortion area through the preset stretching algorithm, reference may be made to a method for stretching an image by the electronic device in the conventional art, which is not described herein again.
It can be understood that adjusting the RGB values of all the pixel points in the distorted region image to the target average value means adjusting them to the average of the RGB values of all the pixel points in the skin region image. In this way, the RGB values of the distorted region image are made close to the RGB values of the skin region image, that is, the color of the distorted region image is made close to the color of the skin. This reduces the difference between the distorted region image and the face image, improves the authenticity of the face image, and improves the image quality.
In manner (b), the electronic device may perform image filling on the distorted region image through a preset image filling model to obtain the second image. Specifically, the electronic device may establish a preset image filling model, which may be based on a generative adversarial network (GAN). The electronic device may then train the preset image filling model. For example, an image processing person may create a training set that includes a plurality of groups of images (e.g., 100 groups, 1000 groups, or 3000 groups), each group including an image in which a distorted area image exists and an image in which no distorted area image exists. For example, a group of images in the training set may be the image 102 shown in (b) of fig. 1 (i.e., an image in which a distorted area image exists) and the image 201 shown in fig. 2.
The embodiment of the present application does not limit the way of creating the training set. For example, an image processing person may capture an image in which a distorted area image exists via an electronic device, and then process (e.g., fill) the distorted area image through image processing software to obtain an image in which no distorted area image exists. As another example, an image processing person may capture an image without a distorted area image via an electronic device, and then process (e.g., degrade) the image with image processing software to obtain an image in which a distorted area image exists.
The electronic device may input the training set (i.e., the plurality of groups of images) into the preset image filling model, perform image filling on the distorted area image in each image in which a distorted area image exists, and output the filled image. Then, the electronic device may compare the output filled image with the corresponding image without a distorted area image through the preset image filling model, and determine a filling accuracy rate, which indicates how accurately the preset image filling model fills the distorted area image. For example, in a group of images, the resolution of the distorted area image in the image in which it exists is 400 × 400. The electronic device performs image filling on the distorted area image through the preset image filling model; if the filled distorted area image is 300 × 300, the accuracy rate of the preset image filling model for filling the distorted area image is 75%.
If the accuracy of the preset image filling model for filling the distorted region image is smaller than the preset filling threshold, the electronic device may adjust parameters in the preset image filling model and continue to train the preset image filling model. If the accuracy of the preset image filling model for filling the distorted region image is greater than the preset filling threshold, the electronic device may determine that the preset image filling model has been trained. Then, the electronic device may input the first image into the trained preset image filling model, and perform image filling on the image of the distorted region in the first image to obtain a second image.
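The accuracy figure in the example reads as a per-dimension ratio (300 / 400 = 75%). Under that reading, which is an assumption rather than something the embodiment formalizes, the threshold check can be sketched as follows, with all names illustrative.

PRESET_FILL_THRESHOLD = 0.90  # e.g. a 90% preset filling threshold

def fill_accuracy(filled_side: int, original_side: int) -> float:
    """Filling accuracy on one image pair, read as the ratio of the filled
    distorted-region side length to the original side length."""
    return filled_side / original_side

accuracy = fill_accuracy(300, 400)  # 0.75 in the example above
if accuracy > PRESET_FILL_THRESHOLD:
    print("training complete; fill the first image with the trained model")
else:
    print("adjust the model parameters and continue training")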
It should be noted that, the preset filling threshold is not limited in the embodiment of the present application. For example, the preset fill threshold may be 90%. For another example, the predetermined fill threshold may be 85%. For another example, the preset fill threshold may be 85.5%.
It can be understood that the higher the preset filling threshold is, the higher the accuracy of the trained preset image filling model for filling the warped area image is. Therefore, the accuracy of the electronic equipment for filling the distorted region image through the preset image filling model can be improved, and the image quality of the face region image is improved.
And S505, the electronic equipment does not process the first image.
It can be understood that, if there is no distorted area image in the first image, the face region image in the first image is complete and undistorted; the electronic device does not need to process the first image, which saves the resources of the electronic device. Moreover, by not processing the first image, the electronic device preserves the image quality of the first image.
It should be noted that, after the electronic device obtains the second image, the color information of the processed distorted region image in the second image may still differ significantly from the color information of the region image adjacent to it. As a result, the transition between the processed distorted region image and the adjacent region image may be unsmooth and abrupt, which may affect the image quality.
For this reason, the embodiment of the present application further provides an image processing method, as shown in fig. 13, after the electronic device obtains the second image (i.e., S504), the image processing method may include S1301-S1302.
S1301, the electronic equipment determines a first edge area image and a second edge area image.
The first edge area image is an image of an edge area of the processed distorted area image in the second image, and the second edge area image is an area image adjacent to the processed distorted area image.
In the embodiments of the present application, the edge area image is not limited. For example, the first edge region image may be composed of the outermost group of pixel points in the processed distorted region image. As shown in fig. 14 (a), the width of the processed distorted region image 1401 is 10 pixels, the height of the processed distorted region image 1401 is 19 pixels, and the first edge region image 1402 is composed of the outermost group of pixel points (54 pixel points) of the processed distorted region image 1401. For another example, the first edge region image may be composed of the two outermost groups of pixel points in the processed distorted region image. As shown in (b) of fig. 14, the first edge region image 1403 is composed of the two outermost groups of pixel points (100 pixel points) of the processed distorted region image 1401.
In an embodiment of the application, the electronic device may determine the first edge area image according to the third position information, for example according to the two-dimensional coordinates of each pixel point of the distorted area image in the coordinate system of the first image. As shown in fig. 15, point o is the origin of coordinates, the x-axis runs along the lower side of the first image 1501, and the y-axis runs along the left side of the first image 1501. The distorted area image 1502 includes a plurality of edge pixel points: the pixel points with the minimum abscissa in the distorted area image (such as edge pixel point A1 and edge pixel point A2), the pixel points with the maximum abscissa (such as edge pixel point A3 and edge pixel point A4), the pixel points with the minimum ordinate (such as edge pixel point A2 and edge pixel point A5), and the pixel points with the maximum ordinate (such as edge pixel point A3 and edge pixel point A6). Here, x1 is the minimum abscissa over all pixel points of the distorted area image 1502, x3 is the maximum abscissa, y2 is the minimum ordinate, and y3 is the maximum ordinate, with x1 = x2, x3 = x4, y2 = y5, and y3 = y6.
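For illustration, with the processed distorted region given as a boolean mask (derivable from the third position information), the outermost group of pixel points can be extracted as below; treating "outermost" as having a 4-neighbour outside the mask is an assumption, since the embodiment does not specify a neighbourhood.

import numpy as np

def first_edge_region(mask: np.ndarray) -> np.ndarray:
    """Boolean mask of the outermost group of pixel points: mask pixels with
    at least one of their four neighbours outside the mask."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# A solid region 10 pixels wide and 19 pixels high yields the 54 outermost
# pixel points of the example in fig. 14 (a).
assert first_edge_region(np.ones((19, 10), dtype=bool)).sum() == 54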
For example, the second edge region image may be composed of the group of pixel points adjacent to the processed distorted region image. As shown in fig. 16 (a), the width of the processed distorted region image 1601 is 8 pixels, the height of the processed distorted region image 1601 is 7 pixels, and the second edge region image 1602 is composed of the group of pixel points (38 pixel points) adjacent to the processed distorted region image 1601. For another example, the second edge region image may be composed of the two groups of pixel points adjacent to the processed distorted region image. As shown in fig. 16 (b), the width of the processed distorted region image 1603 is 6 pixels, the height of the processed distorted region image 1603 is 5 pixels, and the second edge region image 1604 is composed of the two groups of pixel points (60 pixel points) adjacent to the processed distorted region image 1603.
S1302, the electronic device performs image fusion on the first edge area image and the second edge area image to obtain a third image.
In an embodiment of the application, the first degree of color difference is smaller than the second degree of color difference. The first color difference degree is the difference degree between the color information of the first edge region image after image fusion and the color information of the second edge region image after image fusion, and the second color difference degree is the difference degree between the color information of the first edge region image without image fusion and the color information of the second edge region image without image fusion.
Illustratively, the first degree of color difference includes: the first R value difference degree is 15%, the first G value difference degree is 12%, the first B value difference degree is 13%, and the second color difference degree includes: the second R value difference is 30%, the second G value difference is 28%, and the second B value difference is 32%. That is, the first R value difference degree is smaller than the second R value difference degree, the first G value difference degree is smaller than the second G value difference degree, and the first B value difference degree is smaller than the second B value difference degree.
In some embodiments, the electronic device may perform image fusion on the first edge region image and the second edge region image through a preset fusion algorithm to obtain a third image. The preset fusion algorithm is not limited in the embodiment of the present application. For example, the preset fusion algorithm may be a logic filter method. For another example, the preset fusion algorithm may be a pyramid decomposition method. For another example, the preset fusion algorithm may be a simple combined image fusion method. As another example, the preset fusion algorithm may be feathering. An example is described below.
For example, in conjunction with (b) in fig. 16, the electronic device may obtain the RGB value of each pixel point in the first edge region image 1605 (i.e., the shaded region of the processed distorted region image 1603) and the RGB value of each pixel point in the second edge region image 1604. Thereafter, the electronic device can adjust the RGB values of each pixel point in the first edge region image 1605 and the RGB values of each pixel point in the second edge region image 1604.
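A feathering-style fusion consistent with the requirement above can be sketched as follows: each edge color is pulled toward the local mean, so the post-fusion color difference is (1 - alpha) times the pre-fusion difference. The pixel pairing and the blend weight are illustrative assumptions.

import numpy as np

def feather_edge_colors(first_edge_rgb: np.ndarray,
                        second_edge_rgb: np.ndarray,
                        alpha: float = 0.5):
    """Blend paired first/second edge-region RGB values toward their mean,
    reducing the difference degree between the two edge region images."""
    mean = (first_edge_rgb.astype(float) + second_edge_rgb.astype(float)) / 2.0
    fused_first = (1.0 - alpha) * first_edge_rgb + alpha * mean
    fused_second = (1.0 - alpha) * second_edge_rgb + alpha * mean
    return fused_first, fused_second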
It should be noted that, the method for the electronic device to perform image fusion on the first edge region image through the preset fusion algorithm may refer to a method for the electronic device to perform image fusion on an image in the conventional technology, which is not described herein again.
It can be understood that the first color difference degree is smaller than the second color difference degree, which indicates that the color information of the first edge area image after image fusion is more similar to the color information of the second edge area image. Therefore, the spliced part between the distorted region image and the second edge region image after image fusion can be smoother, and the image quality is improved.
As can be seen from fig. 4 (b), the preview image 407 displayed on the electronic device also includes a distorted area image. This may result in low image quality of the preview image and affect the user's experience of viewing it.
In some embodiments, the electronic device may process the distorted area image in the preview image upon launching the photographing application. Specifically, in response to the operation of starting the photographing application by the user, the electronic device may capture an image to obtain a preview image. Thereafter, the electronic device can detect whether a distorted area image is present in the preview image. If so, the electronic device may process the distorted area image and display the processed preview image. Illustratively, the electronic device may display a processed preview image 1701 as shown in fig. 17, the processed preview image 1701 not including a distorted area image.
It should be noted that, specifically, for the description of the process of the electronic device processing the warped area image in the preview image, reference may be made to the description of the process of the electronic device processing the warped area image in the first image in the foregoing embodiment (i.e., S503-S505 or S503-S1302), which is not repeated herein.
It can be understood that the electronic device processes the preview image, which can ensure that the preview image displayed by the electronic device is complete and not distorted. Therefore, the image quality of the preview image can be improved, and the viewing experience of the user for viewing the preview image is improved.
The scheme provided by the embodiment of the application is mainly introduced from the perspective of the electronic device. It is understood that the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the steps of the image processing method examples described in connection with the embodiments disclosed herein may be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by software-driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the functional modules or functional units may be divided according to the method example described above, for example, each functional module or functional unit may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Other embodiments of the present application provide an electronic device (e.g., a handset 200 as shown in fig. 3). The electronic device may include: a memory and one or more processors. The memory is coupled to the processor. The electronic device may further include a camera. Or, the electronic device may be externally connected with a camera. The memory is for storing computer program code comprising computer instructions. When the processor executes the computer instructions, the electronic device may perform various functions or steps performed by the mobile phone in the above-described method embodiments. The structure of the electronic device may refer to the structure of the cellular phone 200 shown in fig. 3.
An embodiment of the present application further provides a chip system, as shown in fig. 18, where the chip system includes at least one processor 1801 and at least one interface circuit 1802. The processor 1801 and the interface circuit 1802 may be interconnected by wires. For example, the interface circuit 1802 may be used to receive signals from other devices (e.g., a memory of an electronic device). Also for example, the interface circuit 1802 may be used to send signals to other devices, such as the processor 1801. Illustratively, the interface circuit 1802 may read instructions stored in the memory and send the instructions to the processor 1801. The instructions, when executed by the processor 1801, may cause an electronic device (e.g., the cell phone 200 shown in fig. 3) to perform the steps of the above-described embodiments. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes computer instructions, and when the computer instructions are executed on the electronic device (such as the mobile phone 200 shown in fig. 3), the electronic device is caused to perform various functions or steps performed by the mobile phone in the foregoing method embodiment.
The embodiment of the present application further provides a computer program product, which when running on a computer, causes the computer to execute each function or step executed by the mobile phone in the above method embodiments.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (16)

1. An image processing method is applied to an electronic device, wherein the electronic device comprises a camera, and the method comprises the following steps:
the electronic device detects a first operation;
responding to a first operation of a user, and acquiring a first image by the electronic equipment through the camera;
the electronic equipment determines that the first image comprises a face area image, wherein the face area image comprises an image of a first area and an image of a second area, the first area is used for representing the area where the face skin is located, and the second area is used for representing the area where the glasses are located;
the electronic equipment determines that the face region image has an image of a third region, wherein the third region is used for representing a deformation region of a face caused by refraction of glasses;
the electronic equipment processes the image of the third area to obtain a second image, wherein the second image comprises the processed image of the third area, and the difference between the color of the processed image of the third area and the color of the image of the first area is smaller than the difference between the color of the image of the third area and the color of the image of the first area.
2. The method of claim 1, wherein the electronic device determining that the face region image has an image of a third region comprises:
the electronic device determining that the image of the third area exists in an image of a fourth area, wherein the fourth area is an overlapping area of the first area and the second area.
3. The method of claim 2, wherein before the electronic device determines that the image of the third region is present in the image of the fourth region, the method further comprises:
the electronic equipment acquires first position information and second position information, wherein the first position information is used for indicating the position of the image of the second area relative to the first image, and the second position information is used for indicating the position of a face contour image in the face area image relative to the first image;
the electronic equipment determines whether the image of the second area is in the range of the face contour image in the face area image according to the first position information and the second position information;
and the electronic equipment detects whether the image of the third area exists in the face area image according to whether the image of the second area is in the range of the face outline image.
4. The method according to claim 3, wherein the electronic device detects whether the image of the third region exists in the face region image according to whether the image of the second region is within the range of the face contour image, and the method comprises:
if the image of the second area is in the range of the face contour image, the electronic equipment determines an image of a fifth area according to the first position information, wherein the fifth area comprises areas except the second area and the eye area in the fourth area;
the electronic equipment acquires color information of each pixel point in the image of the fifth area, wherein the color information comprises an RGB value or an HSV value;
and the electronic equipment detects whether the face region image has the image of the third region according to the color information of each pixel point in the image of the fifth region.
5. The method according to claim 3, wherein the electronic device detects whether the image of the third region exists in the face region image according to whether the image of the second region is within the range of the face contour image, and the method comprises:
if the image of the second area is out of the range of the face contour image, the electronic equipment determines an actual face contour image according to the second position information, and the face area image is an image in the range of the actual face contour image;
the electronic equipment determines an image of a fifth region according to the actual face contour image and the first position information, wherein the image of the fifth region comprises regions except the second region and the eye region in the fourth region;
the electronic equipment acquires color information of each pixel point in the image of the fifth area;
and the electronic equipment detects whether the face region image has the image of the third region according to the color information of each pixel point in the image of the fifth region.
6. The method according to claim 4 or 5, wherein the color information includes RGB values, and the electronic device detecting whether the image of the third region exists in the face region image according to the color information of each pixel point in the image of the fifth region includes:
the electronic equipment calculates a first variance according to the RGB value of each pixel point in the image of the fifth area, wherein the first variance is the variance of the RGB value of the pixel point in the image of the fifth area;
if the first variance is larger than the first preset variance threshold, the electronic equipment determines that the image of the third area exists in the face area image;
if the first variance is smaller than the first preset variance threshold, the electronic device determines that the image of the third area does not exist in the face area image.
7. The method according to claim 4 or 5, wherein the color information includes HSV values, and the electronic device detecting whether the image of the third region exists in the face region image according to the color information of each pixel point in the image of the fifth region includes:
the electronic equipment calculates the HSV value of each pixel point in the image of the fifth area according to the RGB value of each pixel point in the image of the fifth area;
the electronic equipment calculates a second variance according to the HSV value of each pixel point in the image of the fifth area, wherein the second variance is the variance of the HSV value of the pixel point in the image of the fifth area;
if the second variance is larger than the second preset variance threshold, the electronic equipment determines that the image of the third area exists in the face area image;
if the second variance is smaller than the second preset variance threshold, the electronic device determines that the image of the third area does not exist in the face area image.
8. The method according to any one of claims 4-7, wherein the electronic device processes the image of the third area to obtain a second image, comprising:
the electronic equipment acquires third position information, wherein the third position information is used for indicating the position of the image of the third area relative to the first image;
and the electronic equipment performs image filling on the image of the third area according to the third position information to obtain the second image.
9. The method of claim 8, wherein the electronic device obtaining the third position information comprises:
the electronic equipment determines whether each pixel point in the image of the fifth area is a pixel point in the image of the first area or not according to the color information of each pixel point in the image of the fifth area and preset skin color information; wherein the preset skin color information is used to indicate a color of an image of the first region;
if the pixel point in the image of the fifth area is not the pixel point of the image of the first area, the electronic equipment determines that the pixel point is the pixel point in the image of the third area and obtains the position of the pixel point relative to the first image;
the electronic equipment acquires the position of each pixel point in the image of the third area relative to the first image, and the third position information is composed of the position of each pixel point in the image of the third area relative to the first image.
10. The method according to claim 8 or 9, wherein the electronic device performs image filling on the image of the third area according to the third position information to obtain the second image, and the method comprises:
the electronic equipment acquires the RGB value of each pixel point in the image of the third area;
the electronic equipment calculates a target average value, wherein the target average value is an average value of RGB values of all pixel points in the image of the first area;
and the electronic equipment adjusts the RGB value of each pixel point in the image of the third area to the target average value according to the third position information to obtain the second image.
11. The method of any of claims 3-10, wherein the electronic device determining that the first image comprises a face region image comprises:
the electronic device determining that the first image comprises an image of the first region;
the electronic device determining that the first image comprises an image of the second region;
before the electronic device determines that the image of the third area exists in the image of the fourth area, the method further comprises:
the electronic equipment acquires first position information and fourth position information, wherein the fourth position information is used for indicating the position of the eye area image relative to the first image;
and the electronic equipment determines whether the image of the fourth area exists in the first image according to the first position information and the fourth position information.
12. The method according to any one of claims 1-11, wherein the difference between the color of the processed image of the third area and the color of the image of the first area being smaller than the difference between the color of the image of the third area and the color of the image of the first area comprises:
the difference degree between the color of the processed image of the third area and the color of the image of the first area is smaller than a preset difference threshold value.
13. The method according to any one of claims 1-12, wherein after the electronic device processes the image of the third area to obtain a second image, the method further comprises:
the electronic equipment determines a first edge area image and a second edge area image, wherein the first edge area image is an image of an edge area of the processed image of the third area in the second image, and the second edge area image is an area image adjacent to the processed image of the third area;
the electronic equipment performs image fusion on the first edge area image and the second edge area image to obtain a third image;
the difference degree between the color information of the first edge area image subjected to image fusion and the color information of the second edge area image subjected to image fusion is smaller than the difference degree between the color information of the first edge area image not subjected to image fusion and the color information of the second edge area image not subjected to image fusion.
14. An electronic device, characterized in that the electronic device comprises: a memory, a display screen, and one or more processors; the memory, the display screen, and the processor are coupled, the memory for storing computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method of any of claims 1-13.
15. A computer readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-13.
16. A computer program product, characterized in that the computer program product, when run on a computer, causes the computer to perform the method according to any of claims 1-13.
CN202110621419.9A 2021-06-03 2021-06-03 Image processing method and electronic equipment Active CN113486714B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110621419.9A CN113486714B (en) 2021-06-03 2021-06-03 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110621419.9A CN113486714B (en) 2021-06-03 2021-06-03 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113486714A true CN113486714A (en) 2021-10-08
CN113486714B CN113486714B (en) 2022-09-02

Family

ID=77934631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110621419.9A Active CN113486714B (en) 2021-06-03 2021-06-03 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN113486714B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150065803A1 (en) * 2013-09-05 2015-03-05 Erik Scott DOUGLAS Apparatuses and methods for mobile imaging and analysis
US20160202483A1 (en) * 2015-01-09 2016-07-14 Samsung Display Co., Ltd. Head-mounted display device
CN108513668A (en) * 2016-12-29 2018-09-07 华为技术有限公司 Image processing method and device
CN107368806A (en) * 2017-07-18 2017-11-21 广东欧珀移动通信有限公司 Image correction method, device, computer-readable recording medium and computer equipment
CN108564540A (en) * 2018-03-05 2018-09-21 广东欧珀移动通信有限公司 Remove image processing method, device and the terminal device that eyeglass is reflective in image
CN111327814A (en) * 2018-12-17 2020-06-23 华为技术有限公司 Image processing method and electronic equipment
CN109618098A (en) * 2019-01-04 2019-04-12 Oppo广东移动通信有限公司 A kind of portrait face method of adjustment, device, storage medium and terminal
CN110084763A (en) * 2019-04-29 2019-08-02 北京达佳互联信息技术有限公司 Image repair method, device, computer equipment and storage medium
US20210088811A1 (en) * 2019-09-24 2021-03-25 Bespoke, Inc. d/b/a Topology Eyewear. Systems and methods for adjusting stock eyewear frames using a 3d scan of facial features
CN112235483A (en) * 2020-08-24 2021-01-15 深圳市雄帝科技股份有限公司 Automatic photographing light adjusting method and system and self-service photographing equipment thereof
CN112532891A (en) * 2020-11-25 2021-03-19 维沃移动通信有限公司 Photographing method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DONG EUN LEE et al.: "Fuzzy-system-based detection of pupil center and corneal specular reflection for a driver-gaze tracking system based on the symmetrical characteristics of face and facial feature points", Symmetry
PIERRE-YVES LAFFONT et al.: "Adaptive Dynamic Refocusing: Toward Solving Discomfort in Virtual Reality", IEEE
刘小路 et al.: "Design research and practice of a comfort evaluation system for myopia sports protective eyewear", Packaging Engineering (包装工程)
陈岳林 et al.: "Glasses detection based on edge information of face images", Software Guide (软件导刊)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115187588A (en) * 2022-09-07 2022-10-14 合肥金星智控科技股份有限公司 Foreign matter detection method, foreign matter detection device, storage medium, and electronic apparatus

Also Published As

Publication number Publication date
CN113486714B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN107767333B (en) Method and equipment for beautifying and photographing and computer storage medium
CN111443884A (en) Screen projection method and device and electronic equipment
WO2021057277A1 (en) Photographing method in dark light and electronic device
CN110471606B (en) Input method and electronic equipment
WO2021078001A1 (en) Image enhancement method and apparatus
WO2021036715A1 (en) Image-text fusion method and apparatus, and electronic device
CN111669462B (en) Method and related device for displaying image
WO2021008551A1 (en) Fingerprint anti-counterfeiting method, and electronic device
CN112529784A (en) Image distortion correction method and device
CN112085647B (en) Face correction method and electronic equipment
WO2022001806A1 (en) Image transformation method and apparatus
US20230188831A1 (en) Electronic device and method for generating image by applying effect to subject and background
WO2022206177A1 (en) Fingerprint identification method and electronic device
CN111028144A (en) Video face changing method and device and storage medium
CN112672053A (en) Photographing method, photographing device, terminal equipment and computer-readable storage medium
CN110138999B (en) Certificate scanning method and device for mobile terminal
CN113723144A (en) Face watching unlocking method and electronic equipment
CN113486714B (en) Image processing method and electronic equipment
CN113364975B (en) Image fusion method and electronic equipment
WO2021238351A1 (en) Image correction method and electronic apparatus
CN113850709A (en) Image transformation method and device
CN111612723B (en) Image restoration method and device
CN113033341A (en) Image processing method, image processing device, electronic equipment and storage medium
EP4145343A1 (en) Fingerprint liveness detection method and device, and storage medium
CN111417982A (en) Color spot detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant