WO2022127738A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium Download PDF

Info

Publication number
WO2022127738A1
WO2022127738A1 PCT/CN2021/137491 CN2021137491W WO2022127738A1 WO 2022127738 A1 WO2022127738 A1 WO 2022127738A1 CN 2021137491 W CN2021137491 W CN 2021137491W WO 2022127738 A1 WO2022127738 A1 WO 2022127738A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
point spread
glare
spread function
display screen
Prior art date
Application number
PCT/CN2021/137491
Other languages
French (fr)
Chinese (zh)
Inventor
张华�
王津男
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2022127738A1 publication Critical patent/WO2022127738A1/en

Links

Images

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • G06T5/60
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, apparatus, electronic device, and storage medium.
  • the existing mobile phone adopts a full-screen mobile phone composed of a lifting front lens, and a sliding-type full-screen mobile phone. These designs make the visual experience of the mobile phone close to 100% full screen, but its complex mechanical structure will increase the thickness and weight of the mobile phone, occupying too much internal space, and the repeated use of the mechanical mechanism is also difficult to meet the needs of mobile phone users.
  • the mobile phone can increase the screen ratio of the display.
  • the camera can be set under the display screen, but due to the occlusion of the display screen, the images captured by the camera set under the display screen have glare, especially when shooting bright objects, the glare is serious, which greatly reduces the imaging effect of the camera and affects the user. experience.
  • an embodiment of the present application provides an image processing method, the method includes: acquiring an image to be processed; the to-be-processed image is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display a screen and a camera arranged under the display screen; the to-be-processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a specified color and/or brightness in the to-be-processed image photographing an object; generating a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system; for the first point spread function
  • the function and the image to be processed are subjected to deconvolution processing to obtain the image after the glare has been removed.
  • the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated.
  • the point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
  • the generating a first point spread function according to the spectral components of the highlighted object includes: according to the spectrum of the highlighted object components and the model of the display screen to generate a second point spread function; wherein, the second point spread function is the point spread function corresponding to different wavelengths of the highlighted object after passing through the under-screen imaging system; according to the The second point spread function and the photosensitive characteristic curve of the camera are used to generate the first point spread function.
  • the point spread function of the different wavelengths of the highlighted object passing through the under-screen imaging system can be generated, and then combined with the photosensitive characteristic curve of the camera to obtain the under-screen imaging system.
  • the point spread function of at least one channel In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated.
  • especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
  • the generating a first point spread function according to the spectral components of the highlighted object includes: according to the spectrum of the highlighted object composition and a preset third point spread function to generate the first point spread function; wherein, the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
  • the point spread function of at least one channel in the under-screen imaging system can be quickly generated through the preset point spread functions of different wavelengths after passing through the under-screen imaging system, combined with the spectral components of the above-mentioned high-brightness objects.
  • the calculation efficiency is improved, and the processing speed of anti-glare is improved, and it can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements such as video shooting.
  • the method further includes: acquiring the spectral components of the high-brightness object collected by the spectral measurement device.
  • the spectral components of high-brightness objects can be acquired in real time during shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment;
  • the spectral components can be obtained to obtain the spectral components of bright objects, which saves the cost of electronic equipment.
  • the method further includes: performing image recognition on the to-be-processed image to determine the light source type of the highlighted object; The spectral components of different light source types are determined to determine the spectral components of the highlighted object.
  • the light source type of the highlighted object in the photographed object can be identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the electronic device does not need to be additionally configured with a spectral measurement device , the spectral composition of the bright object can be obtained, and the cost of electronic equipment can be saved.
  • the method further includes: dividing the to-be-processed image into a first area and a second area; wherein the first area is the area where the bright object and its glare are located; the second area is the area other than the first area in the image to be processed; the pair of the first point spread function and the image to be processed Performing deconvolution processing to obtain the image after removing the glare, comprising: performing deconvolution processing on the first point spread function and the first area to obtain the first area after removing the glare; The glare-removed first area is fused with the second area to obtain the glare-removed image.
  • the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it. Further anti-glare processing is performed in the first area where it is located, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the first area that does not contain bright objects and the glare generated by them in the image to be processed during the anti-glare processing. The effect of the second area.
  • the generation of the The two point spread function includes: taking the model of the display screen as the transmittance function, combining the light wave propagation factor, and taking the spectral component of the highlighted object as the weight to obtain the second point spread function; wherein, the The model of the display screen includes: an amplitude modulation function and a phase modulation characteristic function of the display screen; the light wave propagation factor is determined according to the focal length when the camera captures the to-be-processed image and the wavelength of the highlighted object.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transparent
  • the over-rate function is used to determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object.
  • the spectral components of the highlighted object are used as weights to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system.
  • the obtained point spread function can more realistically express the process of glare generated by bright objects through under-screen imaging.
  • an embodiment of the present application provides an image processing apparatus, the apparatus includes: an acquisition module, configured to acquire an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the screen
  • the lower imaging system includes a display screen and a camera arranged under the display screen;
  • the to-be-processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a specified color in the to-be-processed image and/or a photographic object of brightness;
  • a generating module configured to generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point of at least one channel in the under-screen imaging system a spread function;
  • a processing module configured to perform deconvolution processing on the first point spread function and the to-be-processed image to obtain the image after the glare has been removed.
  • the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated.
  • the point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
  • the generating module is further configured to: generate a second point spread function; wherein, the second point spread function is the point spread function corresponding to the light of different wavelengths of the highlighted object after passing through the under-screen imaging system; according to the second point spread function and the camera The photosensitive characteristic curve is used to generate the first point spread function.
  • the point spread functions of the different wavelengths of the highlighted objects passing through the under-screen imaging system can be generated, and then combined with the photosensitive characteristic curve of the camera, the under-screen imaging system can be obtained.
  • the point spread function of at least one channel In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated.
  • especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
  • the generation module is further configured to: generate the generation module according to the spectral components of the highlighted object and a preset third point spread function The first point spread function; wherein, the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
  • the point spread function of at least one channel in the under-screen imaging system can be quickly generated by using the corresponding point spread functions of different preset wavelengths after passing through the under-screen imaging system, combined with the spectral components of the above-mentioned high-brightness objects,
  • the calculation efficiency is significantly improved, and the processing speed of anti-glare is improved, which can be applied to the anti-glare processing scene with large amount of processing data and high processing speed requirements such as video shooting.
  • the apparatus further includes: a spectral measurement device, configured to collect spectral components of the highlighted object.
  • the spectral components of high-brightness objects can be acquired in real time during shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment;
  • the spectral components can be obtained to obtain the spectral components of bright objects, which saves the cost of electronic equipment.
  • the device further includes: an acquisition module, configured to: perform image recognition on the to-be-processed image, and determine the size of the highlighted object.
  • an acquisition module configured to: perform image recognition on the to-be-processed image, and determine the size of the highlighted object.
  • Light source type according to preset spectral components of different light source types, determine the spectral components of the highlighted object.
  • the light source type of the highlighted object in the photographed object can be identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the electronic device does not need to be additionally configured with a spectral measurement device , the spectral composition of the bright object can be obtained, and the cost of electronic equipment can be saved.
  • the apparatus further includes: a segmentation module, configured to: segment the to-be-processed image into a first area and a second area; wherein , the first area is the area where the bright object and the glare generated by it are located; the second area is the area other than the first area in the image to be processed; the processing module is also used for: by Perform deconvolution processing on the first point spread function and the first region to obtain the first region after removing the glare; deconvolute the first region and the second region after removing the glare Fusion to obtain the image after removing the glare.
  • a segmentation module configured to: segment the to-be-processed image into a first area and a second area; wherein , the first area is the area where the bright object and the glare generated by it are located; the second area is the area other than the first area in the image to be processed; the processing module is also used for: by Perform deconvolution processing on the first point spread function and the first region to obtain the first region after removing the
  • the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it. Further anti-glare processing is performed in the first area where it is located, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the first area that does not contain bright objects and the glare generated by them in the image to be processed during the anti-glare processing. The effect of the second area.
  • the generating module is further configured to: use the model of the display screen as a transmittance function , combined with the light wave propagation factor, and using the spectral components of the highlighted object as the weight, the second point spread function is obtained; wherein, the model of the display screen includes: the amplitude modulation function and phase modulation characteristics of the display screen function; the light wave propagation factor is determined according to the focal length when the camera captures the to-be-processed image and the wavelength of the highlighted object.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transparent
  • the over-rate function is used to determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object.
  • the spectral components of the highlighted object are used as weights to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system.
  • the obtained point spread function can more realistically express the process of glare generated by bright objects through under-screen imaging.
  • embodiments of the present application provide an electronic device, including: a display screen, a camera below the display screen, a processor, and a memory; wherein, the camera is used to collect images to be processed through the display screen;
  • the display screen is used to display the to-be-processed image and the image after glare removal;
  • the memory is used to store processor-executable instructions;
  • the processor is configured to implement the above-mentioned first aspect or One or more image processing methods in multiple possible implementation manners of the first aspect.
  • the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated.
  • the point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
  • embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that, when the computer program instructions are executed by a processor, the above-mentioned first aspect is implemented Or one or more image processing methods in multiple possible implementation manners of the first aspect.
  • the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated.
  • the point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
  • embodiments of the present application provide a computer program product, comprising computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, when the computer-readable codes are stored in an electronic
  • the processor in the electronic device executes the first aspect or one or more of the image processing methods in multiple possible implementations of the first aspect.
  • the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated.
  • the point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
  • FIG. 1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application
  • FIGS. 2A-2B show schematic diagrams of displaying images through the mobile phone 10 according to an embodiment of the present application
  • FIG. 3 shows a schematic diagram of a shooting interface according to an embodiment of the present application
  • FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application
  • FIG. 5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in the above-mentioned FIG. 3 according to an embodiment of the present application;
  • FIG. 6 shows the spectral components of a strong light source collected by a color temperature sensor according to an embodiment of the present application
  • FIG. 7 shows a schematic diagram of a broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application
  • FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of a filter transmittance curve according to an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of the point spread function of the generated RBG three-channel according to an embodiment of the present application
  • FIG. 11 shows a schematic diagram of the point spread function corresponding to the pre-calibrated light of different wavelengths passing through the under-screen imaging system according to an embodiment of the present application
  • FIG. 12 shows a schematic diagram of a partial image after deglare according to an embodiment of the present application.
  • FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 16 shows a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • Point spread function for an optical system, the light field distribution of the output image when the input object is a point light source is called the point spread function, also known as the point spread function.
  • a point light source can be represented by a delta function (point pulse), and the light field distribution of the output image is called an impulse response, so the point spread function is also the impulse response function of the optical system.
  • the imaging performance of an optical system can be expressed by the point spread function of the optical system.
  • Non-blind deconvolution also known as non-blind deconvolution
  • the image formed by the optical system can be understood as the result of the convolution of the original image and the point spread function of the optical system.
  • the point spread function of the optical system, the process of restoring the original image is called non-blind deconvolution.
  • OLED The principle of OLED is the phenomenon that organic semiconductor materials and light-emitting materials are driven by an electric field and lead to light emission through carrier injection and recombination.
  • OLED is a device that uses a multilayer organic thin film structure to generate electroluminescence. It is easy to fabricate and only requires low driving voltage. These main features make OLED very prominent in the application of flat panel displays. OLED displays are lighter and thinner than LCDs, with high brightness, low power consumption, fast response, high definition, good flexibility and high luminous efficiency.
  • the embodiment of the present application provides an image processing method, and the method can be applied to an electronic device, and the electronic device can include a display screen and a camera under the display screen.
  • the display screen can be a touch screen or a non-touch screen.
  • the touch screen can control the electronic device by clicking or sliding on the screen with a finger, a stylus, etc.
  • the non-touch screen device can be connected to Input devices such as mouse, keyboard, touch panel, etc., through which the electronic device is controlled;
  • the camera may include a front camera and/or a rear camera.
  • the electronic device of the present application may be a smart phone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart bracelet, a smart watch, etc.), a car device, a TV, a virtual reality device, an audio system, and an electronic ink. , etc., the embodiments of the present application do not limit the specific type of the electronic device.
  • FIG. 1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application
  • the mobile phone 10 may include a display screen 101 and a camera 102 ; wherein, the display screen 101 For displaying images or videos, etc., the camera 102 is arranged below the display screen 101, and is used to shoot images or videos through the display screen 101; One), the display screen 101 and the camera 102 constitute the under-screen imaging system of the mobile phone 10 .
  • FIG. 2A is a schematic diagram of the original image displayed by the mobile phone 10 before being processed by the image processing method according to the embodiment of the present application.
  • FIG. 2B is an image displayed by the mobile phone 10
  • the photographed sun 204 appears in the original image 201 in the form of rainbow glare (not shown in the figure).
  • the main reasons for glare are: (1)
  • the display screen used to display images and other content consists of many pixels representing red, green and blue colors, and these pixels are periodically arranged; when the camera located under the display screen passes through the display Diffraction effect occurs when external light passes through the monitor when shooting with a screen.
  • the diffraction effect of the display screen is sensitive to the wavelength, that is, the diffraction effect is different for light of different wavelengths, and the diffraction broadening of the light of different wavelengths after passing through the display screen and the lens of the camera is different, which is called dispersion.
  • Phenomenon this phenomenon of chromatic dispersion gives rise to rainbow glare in captured images.
  • an anti-glare solution is: increasing the lens aperture of the camera to increase the amount of light entering; using a large-sized image sensor to improve the image sensor's ability to sense dark light; adding a new type of high transmittance light Materials, improve the light transmittance of the display screen; adopt the high dynamic range image (High Dynamic Range, HDR) data acquisition mode to improve the dynamic range of the image obtained by the camera and reduce the glare characteristics.
  • HDR High Dynamic Range
  • there is another anti-glare solution optimizing the pixel arrangement, pixel structure, and pixel driving circuit design of the OLED display screen.
  • optimizing the pixel arrangement, pixel structure, and pixel driving circuit design of the OLED display screen By improving the local characteristics of the periodic pixel structure of the display screen, the energy distribution of the point spread function of the under-screen imaging system and the distribution of glare can be optimized.
  • this solution can only improve the glare effect to a certain extent, and the improvement effect is limited; This solution cannot eliminate the rainbow glare caused by the imaging of the camera under the display.
  • the image processing method can generate a point spread function suitable for different highlighted objects according to the spectral components of the highlighted objects in the image, and then solve the problem by solving
  • the convolution process obtains a high-definition image after glare removal; thus, the glare generated by the image captured by the camera under the display screen can be effectively improved or eliminated, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
  • the electronic device when there is a bright object in the captured image, the electronic device can turn on the anti-glare function, so as to improve or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
  • the highlighted objects may include strong light sources, such as the sun, the moon, lights, display screens, etc., and may also include objects whose surfaces reflect strong light, such as glass, metal, and the like.
  • the front camera of the mobile phone is set under the display screen, the user takes a selfie through the front camera of the mobile phone outdoors, and the user turns on the front camera function of the mobile phone, so that the display screen enters the shooting mode. interface, the user can observe the captured picture in real time through the preview image displayed in the shooting interface; exemplarily, the preview image is the original image 201 shown in the above-mentioned FIG.
  • the sun appears as rainbow glare, and at the same time, the rainbow glare blocks some objects (such as mountain peaks), and the mobile phone turns on the anti-glare function, performs real-time anti-glare processing, and obtains a high-definition image with no glare removed.
  • the high-definition image is shown in Figure 2B above.
  • the glare is eliminated in the high-definition image displayed on the shooting interface, and the sun and some mountain peaks originally blocked by the rainbow glare of the sun can be seen; Preview the glare of the sun (rainbow glare) in the image so that you can get a high-definition image with no glare.
  • the user can trigger an instruction to enter the shooting, the mobile phone responds to the instruction, turns on the camera, and at the same time the display screen enters the shooting interface; for example, the user can click the main page in the display screen, or the boot page, or take a photo in other applications. icon to make the display screen enter the shooting interface; the user can also press the physical button for shooting set on the mobile phone to make the display screen enter the shooting interface; the user can also make the display screen enter the shooting interface through voice commands; the user can also The display screen is brought into the shooting interface by means of a shortcut gesture; in practical applications, the user can also make the display screen enter the shooting interface by other means, which is not limited in this embodiment of the present application.
  • the shooting interface may also include aperture, night scene, portrait, photo, video, professional, etc.
  • the shooting interface can also include options for functions such as flash, HDR, AI, settings, and color tone.
  • the user can select different functions, turn on or off the flash, adjust the color tone, etc.
  • the shooting interface can also include focal length adjustment options, so that the preview image can be enlarged or reduced accordingly by adjusting the focal length; in addition, the shooting interface It may also include a shooting button, an album button, a camera switching button, and the like.
  • Method 1 The mobile phone detects whether there is a bright object in the preview image. When it detects that there is a bright object in the preview image, the mobile phone automatically turns on the anti-glare function; when no bright object is detected in the preview image, the mobile phone does not. Turn on the anti-glare function.
  • the highlighted object includes a photographic object exhibiting a specified color and/or brightness in the preview image.
  • the specified color may be white
  • the specified brightness may be the luminance value of white in YUV color coding, or the gray value of white in RGB color mode, and so on.
  • the mobile phone can detect whether there are highlighted objects in the preview image on the condition that the gray values of most of the pixels in a certain area in the preview image are not less than the preset gray value threshold.
  • the grayscale values of most of the pixels are not less than the preset grayscale threshold, it is determined that there are bright objects in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function.
  • the grayscale value range of each pixel in the preview image is 0-255, white is 255, and black is 0. Considering that the area where the highlighted object is usually white, you can set the preset grayscale threshold equal to 255 or a value around 255.
  • the preset grayscale threshold can be 255, a window with a size of 100*100 can be selected, and the window can be sequentially slid in the preview image, so that the entire preview image can be spread; each time the window is slid, each pixel in the window is calculated. If the gray value of more than half of the pixels in the window is 255, it is determined that there is a bright object in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function.
  • the mobile phone may continue to execute the current shooting mode or other preset shooting modes (eg, a preset image enhancement mode).
  • the mobile phone can automatically detect whether there is a highlight object in the preview image, and automatically turn on the anti-glare function when a highlight object is detected; the anti-glare function can be turned on without the user's manual operation, which improves the performance of the mobile phone. How intelligent the operation is.
  • Manner 2 The mobile phone receives an instruction to enable the anti-glare function triggered by the user, and in response to the instruction, enables the anti-glare function.
  • the user can observe and determine whether there is a highlight object by previewing the image or the photographed object, and when the user determines that there is a highlight object, the user can click the preset on the shooting interface to enable the anti-glare function or enter the anti-glare mode.
  • pictures trigger the command to enable the anti-glare function; the command to enable the anti-glare function may also be triggered by means of preset voice commands, shortcut gestures, physical buttons, etc. of the mobile phone, which is not limited in this embodiment of the present application.
  • the camera interface of the mobile phone 10 may include an icon 301 of the anti-glare function. The user can click the icon 301 of the anti-glare function to trigger an instruction to enable the anti-glare function. Anti-glare function.
  • the user independently judges whether there is a bright object when taking a photo, and selects an instruction to trigger the anti-glare function according to the judgment result, so that the mobile phone can turn on the anti-glare function and meet the user's demand for autonomous control of the mobile phone.
  • Method 3 The mobile phone detects whether there is a highlight object in the preview image. When it detects that there is a highlight object in the preview image, the mobile phone can send a prompt message to the user to prompt the user whether to enable the anti-glare function; if the user triggers the anti-glare function function command, the mobile phone will turn on the anti-glare function in response to the command.
  • the prompt information that the mobile phone can send to the user can include: voice prompt information, vibration, light flashing, icon flashing, etc.; for example, as shown in FIG. 3 above, the mobile phone 10 can detect that there is a highlighted object in the preview image.
  • the icon 301 of the anti-glare function in the control shooting interface flashes, thereby prompting the user to select and click the icon 301 of the anti-glare function.
  • the mobile phone will enable the anti-glare function.
  • the mobile phone may display the prompt message "Enable the anti-glare function" in the preview image area of the shooting interface, thereby prompting the user to enable the anti-glare mode, wherein, the specific implementation of enabling the anti-glare mode by the user may refer to the above-mentioned method for enabling the anti-glare function on the mobile phone.
  • the related descriptions in Section 2 will not be repeated here; if the user triggers an instruction to enable the anti-glare function, the mobile phone responds to the instruction to enable the anti-glare function.
  • the user may forget the anti-glare function of the mobile phone or be unfamiliar with the anti-glare function when taking pictures, and the user can be prompted to turn on the anti-glare function through the prompt information, thereby improving the user experience.
  • the mobile phone can also display a prompt box around the detected image area where there are highlighted objects to remind that there are highlighted objects in this area.
  • the user can select the prompt box (for example, click the inner area of the prompt box) to choose to perform glare removal.
  • Feature image area For example, click the inner area of the prompt box.
  • the mobile phone may also enable the anti-glare function in other ways, which is not limited in this embodiment of the present application.
  • FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 4 , the method may include the following steps:
  • Step 400 The mobile phone determines the glare area where the highlighted object in the preview image is located.
  • the number of highlighted objects may be one or more, and each glare area includes at least one highlighted object.
  • each glare area may include one highlighted object.
  • the number of glare areas and the number of highlighted objects same.
  • the highlighted object occupies a plurality of connected pixels whose gray value is not less than a preset threshold (eg, 255), and the glare area where the highlighted object is located at least includes these connected pixels occupied by the highlighted object. of pixels.
  • the glare area may be in the shape of a rectangle, a circle, or the like.
  • the glare area may be a rectangular area, and a rectangular window with a size of m*n may be constructed, where m and n are both integers greater than 1.
  • the initial values of m and n may be 100. Sliding on the preview image, when a pixel with a gray value exceeding 255 appears in the rectangular window, adjust the values of m and n, so that multiple connected pixels, including the pixel, whose gray value exceeds 255 are connected to each other. When the pixels all fall on the rectangular window, the values of m and n are fixed at this time, and the rectangular window is the glare area. Multiple pixels connected to each other with a gray value of more than 255 in the rectangular window are a highlight object.
  • the glare area 302 is a rectangular area, and the glare area includes the pixels occupied by the sun in the preview image.
  • this step 400 is an optional step, that is, after the anti-glare function is enabled on the mobile phone, this step 400 can be executed, so that the processing efficiency of the following steps can be improved and processing resources can be saved; at the same time, the processing process can be reduced or eliminated.
  • the mobile phone after the mobile phone enables the anti-glare function, the mobile phone can also directly perform the following step 401.
  • Step 401 the mobile phone identifies the shape of the highlighted object, and extracts the glare starburst area.
  • the mobile phone can identify the shape of the highlighted object in the glare area where the highlighted object is located in the preview image determined in the above step 400, and extract the glare starburst area in the glare area; Identify the shape of the highlighted object, and extract the glare starburst area in the preview image, where the glare starburst area is the area where glare appears and is in the shape of a starburst, and the area other than the glare starburst area in the preview image is non-glare Astral region.
  • the mobile phone can automatically identify the shape of the highlighted object through the trained neural network.
  • the shape of the strong light source may include a ring light source, a strip light source, a point light source, and the like.
  • Pre-select photos of light sources of different shapes to train the neural network input the glare area or preview image into the trained neural network, and then output the glare area or the shape of the highlighted object in the preview image; the mobile phone can also use conventional overexposure Detection and other methods to identify the shape of the highlighted object.
  • the mobile phone can detect the glare area or the glare starburst area in the preview image based on the position of the highlighted object.
  • the center of the highlighted object can be used as the cross star.
  • the cross star At the center, continuously adjust the size of the cross star, so that the pixels occupied by the highlighted object all fall within the cross star.
  • the pixels included in the cross star are the glare star area.
  • the glare starburst region or the glare starburst region in the preview image is cropped to extract the glare starburst region; in this way, further anti-glare processing can be performed on the extracted glare starburst region to improve the This improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the impact of processing on other areas in the image that do not contain bright objects and their glare.
  • this step 401 is an optional step, that is, after the anti-glare function is enabled on the mobile phone, this step 401 may be executed, and the following step 402 may also be directly executed.
  • the following is an exemplary description of performing step 402 after performing step 401 .
  • FIG. 5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in FIG. 3 according to an embodiment of the present application.
  • the shape of the sun is extracted 501, and at the same time, the glare starburst region 502 is extracted.
  • Step 402 the mobile phone acquires the spectral component Ispe ( ⁇ ) of the highlighted object, where ⁇ represents the wavelength.
  • the mobile phone is equipped with a spectral measurement device (for example, a spectral sensor, a color temperature sensor, a spectral camera, etc.), and the spectral components of a bright object (such as a strong light source) are collected by the spectral sensor.
  • a spectral measurement device for example, a spectral sensor, a color temperature sensor, a spectral camera, etc.
  • the spectral components of a bright object such as a strong light source
  • the mobile phone 10 in FIG. 1 may be configured with a color temperature sensor, and FIG. 6 shows the spectral components of the strong light source collected by the color temperature sensor according to an embodiment of the present application.
  • the spectral component Ispe ( ⁇ ) of the sun 204 in the photographed object in FIG. 2B shows the relative intensities corresponding to different wavelengths ⁇ .
  • the mobile phone may also be configured with a hyperspectral detection device, and the hyperspectral detection device collects the spectral components of the bright object, thereby further improving the accuracy of the collected spectral components.
  • the spectral components of the highlighted objects can be obtained in real time during shooting through the color temperature sensor or hyperspectral detection equipment built in the mobile phone, without relying on external equipment, which improves the functional integrity of the mobile phone.
  • the mobile phone can determine the spectral components of the identified highlighted objects by identifying the light source types of the highlighted objects in the photographed object, and according to preset spectral components corresponding to different light source types.
  • the spectral components of different strong light sources are fixed.
  • the spectral components are stored in the mobile phone; when shooting, the mobile phone can identify the light source type of the highlighted object in the subject through computer vision and other methods. From the pre-stored spectral components of a plurality of light source types, the spectral components of the sun are determined.
  • the spectral components corresponding to different light source types are stored in advance, and the light source type of the highlighted object in the subject is identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the mobile phone does not need to be additionally equipped with a spectral sensor or hyperspectral spectrum.
  • Spectral measurement equipment such as detection equipment can obtain the spectral components of bright objects, saving the cost of mobile phones.
  • Method 3 The mobile phone can receive the spectral components of the bright object input from the outside.
  • a spectral measurement device (such as a hyperspectral detection device, a color temperature sensor, a spectral camera, etc.) disposed in the external environment collects the spectral components of the highlighted objects, and inputs the collected spectral components of the highlighted objects into the mobile phone. , the mobile phone receives the inputted spectral components, so as to obtain the spectral components of the bright objects.
  • the mobile phone can obtain the spectral components of the bright object by using the spectral components collected by the external device, which saves the cost of the mobile phone.
  • Step 403 The mobile phone generates a point spread function of three RGB channels according to the spectral components of the highlighted object.
  • FIG. 7 shows a schematic diagram of the broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application, as shown in FIG.
  • the distribution of the point spread function of the under-screen imaging system will expand horizontally and vertically along the display screen, resulting in a serious smearing effect, which greatly reduces the imaging quality of the camera under the display screen. Therefore, in order to improve the imaging quality, in this step, the mobile phone uses the above-mentioned acquired spectral components of the bright object to generate the point spread function of the RGB three channels of the under-screen imaging system.
  • the mobile phone generates the point spread function of the RGB three channels of the under-screen imaging system through the display screen model, the spectral components of the highlighted objects obtained above, and the photosensitive characteristic curve (transmittance curve) of the filter of the image sensor of the camera. .
  • the display screen model can be pre-stored in the mobile phone, and the display screen model can include: amplitude modulation function A(m,n) and phase modulation function P(m,n), etc., m,n represent the pixels on the display screen
  • the amplitude modulation function and phase modulation function can be determined according to factors such as the pixel point distribution on the display screen, the wiring method of the display screen and the material of the display screen.
  • FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application. As shown in FIG. 8 , the distribution of pixel points on the display screen can be obtained, wherein each pixel point includes a red photon pixel, a green photon pixel, and a blue photon pixel. subpixels.
  • the photosensitive characteristic curve of the filter of the image sensor of the camera can be determined according to the relevant parameters of the camera configured on the mobile phone, and the filter of the image sensor can include: red filter, green filter, blue filter;
  • a schematic diagram of the filter transmittance curve of an embodiment of the application, as shown in FIG. 9 , the transmittance curve Fr, g, b ( ⁇ ) of the filter includes: the transmittance curve Fr of the red filter ( ⁇ ), the transmittance curve F g ( ⁇ ) of the green filter, and the transmittance curve F b ( ⁇ ) of the blue filter, where ⁇ is the wavelength.
  • the mobile phone can use the spectral composition I spe ( ⁇ ) of the sun in the photographed object collected by the color temperature sensor in the above-mentioned FIG. 6 and the preset display model (A(m,n) and P(m,n)) , generate the point spread function corresponding to the different wavelengths of the bright object (the sun) after passing through the under-screen imaging system, and then pass the transmittance curve F r,g,b ( ⁇ ) of the filter of the image sensor to generate the under-screen imaging The point spread function of the RGB three channels of the system.
  • the mobile phone can also generate the point spread function corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system through the spectral components of the highlighted object collected by the hyperspectral detection device and the preset display screen model, so that the The point spread function of different wavelengths of the generated highlight objects after passing through the under-screen imaging system is closer to the actual situation, thereby improving the effect of anti-glare.
  • the display screen model can be used as the transmittance function, combined with the light wave propagation factor, and the spectral components of the highlighted object obtained above are used as weights to obtain the points corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system.
  • Spread function wherein, the model of the display screen may include: the amplitude modulation function and the phase modulation characteristic function of the display screen; the light wave propagation factor may be determined according to the focal length of the camera to capture the image to be processed and the wavelength of the highlighted object.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function.
  • determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object and at the same time use the spectral component of the highlighted object as the weight to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system.
  • Spread function the obtained point spread function can more realistically express the glare process of bright objects through under-screen imaging.
  • the point spread function I(u, v; ⁇ ) corresponding to different wavelengths of the highlighted object after passing through the under-screen imaging system is shown in the following formula (1):
  • represents the wavelength
  • I spe ( ⁇ ) represents the spectral component of the highlighted object
  • f represents the actual focal length of the camera when shooting
  • m, n represent the coordinates of the pixels on the display screen
  • u, v Represents the coordinates of the pixel on the image sensor
  • A(m,n) represents the amplitude modulation function
  • P(m,n) represents the phase modulation function.
  • PSF r represents the point spread function of the red channel
  • PSF g represents the point spread function of the green channel
  • PSF b represents the point spread function of the blue channel
  • I(u, v; ⁇ ) represents the highlight
  • represents the wavelength
  • u, v represent the coordinates of the pixel points on the image sensor
  • F r ( ⁇ ) represents the transmittance curve of the red filter of the image sensor
  • F g ( ⁇ ) represents the transmittance curve of the green filter of the image sensor
  • F b ( ⁇ ) represents the transmittance curve of the blue filter of the image sensor.
  • FIG. 10 shows a schematic diagram of point spread functions of three RBG channels generated according to an embodiment of the present application.
  • the point spread functions of the red light channel, the point spread function of the green light channel, and the point spread function wherein the point spread function of the red light channel of the same pixel and the point spread function of the green light channel of the pixel point are separated by a sub-pixel point, and the point spread function of the red light channel of the same pixel point is the same as the pixel point.
  • the point spread function of the blue light channel is separated by one sub-pixel point
  • the point spread function of the green light channel of the same pixel point and the point spread function of the blue light channel of the pixel point are separated by one sub-pixel point.
  • the display screen model can be stored in the mobile phone in advance, and combined with the acquired spectral components of the highlighted object, different wavelengths of the highlighted object can be generated and imaged under the screen.
  • the point spread functions corresponding to different wavelengths of different wavelengths of the highlighted object passing through the display screen and the lens of the camera under the display screen, and then the point spread function of the three RGB channels of the under-screen imaging system is obtained.
  • the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated.
  • the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
  • Method 2 The mobile phone generates the point spread function of the RGB three channels of the under-screen imaging system according to the pre-calibrated point spread functions of different wavelengths after passing through the under-screen imaging system and the spectral components of the highlighted objects obtained above.
  • the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system can be pre-calibrated in the laboratory and stored in the mobile phone in advance;
  • the schematic diagram of the point spread function corresponding to the wavelength after passing through the under-screen imaging system, as shown in Figure 11, is the point spread function corresponding to different wavelengths in the visible light band (400nm-780nm) after passing through the under-screen imaging system, Wherein, the interval between wavelengths may be 6 nm, 3 nm, or the like.
  • the point spread function corresponding to the different wavelengths stored in the mobile phone after passing through the under-screen imaging system can be directly called, and integrated with the spectral components of the highlighted objects obtained above to obtain the under-screen imaging system.
  • RGB three-channel point spread function RGB three-channel point spread function.
  • the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system are pre-stored in the mobile phone, and the point spread functions of the RGB three channels of the under-screen imaging system can be quickly generated in combination with the above-mentioned acquisition of the spectral components of the highlighted objects.
  • the computing efficiency is significantly improved, and the processing speed of the mobile phone's anti-glare is improved. It can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements such as video shooting.
  • step 402 and step 403 can be executed before the above step 400 or step 401, can also be executed after step 400 or step 401, and can also be executed simultaneously with step 400 or step 401. Not limited.
  • Step 404 using the generated point spread function of the three RGB channels and the above-mentioned glare starburst area, and adopting a preset non-blind deconvolution algorithm to remove the glare in the glare starburst area.
  • the preset non-blind deconvolution algorithm may include: neural network learning, convex optimization deconvolution, non-convex optimization deconvolution and other existing non-blind deconvolution algorithms, combined with the above-identified highlighted objects.
  • the deconvolution results of the RGB three channels are obtained, so as to obtain the local area after the glare removal. image.
  • FIG. 12 shows a schematic diagram of a partial image after glare removal according to an embodiment of the present application.
  • the partial image 1201 after glare removal is generated by removing the glare in the glare starburst area 502 in FIG. 5 above.
  • Step 405 Perform fusion processing on the de-glare partial image and the preview image to obtain a de-glare high-definition image.
  • the glare area and the non-glare area where the partial image after glare removal is located may be spliced, so as to obtain a final high-definition image without glare.
  • the high-definition image after de-glare can be displayed in real time on the shooting interface of the display screen.
  • FIG. 2B in the high-definition image after de-glare, there is no glare or there is less glare, especially it can be eliminated.
  • Rainbow glare so that the brightness in the image can be smoothly transitioned to more realistically reflect the captured subject.
  • this glare is sensitive to the wavelength.
  • This generates a point spread function suitable for different bright objects, and then obtains a high-definition image with no glare through non-blind deconvolution processing, thereby effectively improving or even eliminating the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
  • which can achieve the glare-free imaging effect of the camera under the display screen when the display screen is blocked, so that the user cannot perceive the effect of rainbow glare on the final image when using the camera shooting function under the display screen.
  • the user performs anti-glare processing on the image or video that has been captured by the electronic device.
  • the images or videos that have been captured are images or videos captured by the camera under the display screen.
  • the electronic device can turn on the glare removal function, so as to improve or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
  • the front camera of the mobile phone is set under the display screen, and the user takes a selfie through the front camera of the mobile phone outdoors.
  • the captured image is the one shown in FIG. 2A above.
  • the original image 201 after the self-timer is completed, the captured image is stored in the mobile phone.
  • the user turns on the image editing function of the mobile phone, so that the display screen enters the image editing interface.
  • Glare that is, the sun appears as rainbow glare, and at the same time, the rainbow glare blocks some objects (such as mountain peaks), and the mobile phone turns on the anti-glare function and performs anti-glare processing, thereby obtaining a high-definition image with no glare.
  • the high-definition image with no glare is shown in Figure 2B above.
  • the glare has been eliminated, and the sun and some mountain peaks originally blocked by the rainbow glare of the sun can be seen;
  • the glare of the sun (rainbow glare) in the captured image so that a glare-removed high-definition image can be obtained.
  • the user when viewing an image, the user can make the display screen enter the image editing interface by clicking an editing button, or by using a voice command, or the like.
  • the anti-glare processing is performed to obtain the high-definition image with the glare removed.
  • the glare generated in the captured image is sensitive to the wavelength. Therefore, by obtaining the spectrum of the highlighted object components, and thus generate a point spread function suitable for different bright objects, and then obtain a high-definition image with no glare through non-blind deconvolution processing, so as to effectively improve or even eliminate the glare generated by the image captured by the camera under the display screen. Especially the rainbow glare improves the clarity of the image.
  • FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application.
  • the execution subject of the method may be an electronic device, for example, the mobile phone in FIG. 1 , and the method may be Include the following steps:
  • Step 1301 Acquire an image to be processed.
  • the image to be processed is an image collected by the under-screen imaging system.
  • the under-screen imaging system may include a display screen and a camera disposed below the display screen; for example, the image to be processed may be the user in the above embodiment.
  • the real-time preview image when an image or video is captured by an electronic device as shown in FIG. 3 , can also be the image or video that has been captured in the above embodiment and stored in the electronic device, and can also be an image captured by other external devices, etc. , which is not limited in the embodiments of the present application.
  • the image to be processed may include: a highlight object and the glare generated by it, wherein the highlight object may be a photographic object showing a specified color and/or brightness in the image to be processed; glare is the light of the highlight object passing through the display screen and the The dazzling light from the camera under the display, which can be a rainbow-shaped glare.
  • the specified color can be white
  • the specified brightness can be the brightness value of white in YUV color coding, or the gray value of white in RGB color mode.
  • it may be detected whether there is a bright object in the image to be processed under the condition that the grayscale values of most of the pixels in a certain area of the image to be processed are not less than a preset grayscale threshold.
  • a preset grayscale threshold For a specific implementation process, reference may be made to the relevant expressions in the foregoing embodiments, and details are not repeated here.
  • Step 1302 Generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system.
  • the spectral components of the highlighted objects can be collected by the spectral measurement device configured in the electronic device or the external spectral measurement device, or the spectral components corresponding to different light source types can be pre-stored in the electronic device, so as to identify the high brightness in the image to be processed.
  • the light source type of the bright object determines the spectral composition of the bright object.
  • the spectral composition of the bright object is shown in Figure 6.
  • the step may further include: acquiring the spectral components of the bright object collected by the spectral measurement device.
  • the spectral measurement device may be a configured spectral measurement device of the electronic device or an external spectral measurement device.
  • reference may be made to the relevant expressions about "the first and third modes of the method for obtaining the spectral components of the highlighted object" in step 402 in FIG.
  • the spectral components of the high-brightness objects can be acquired in real time when shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment; or, with the help of spectral components collected by external equipment, Thus, the spectral components of the bright object are obtained, and the cost of electronic equipment is saved.
  • this step may further include: performing image recognition on the image to be processed to determine the light source type of the highlighted object; and determining the spectral composition of the highlighted object according to preset spectral components of different light source types.
  • this step may further include: performing image recognition on the image to be processed to determine the light source type of the highlighted object; and determining the spectral composition of the highlighted object according to preset spectral components of different light source types.
  • the electronic device can obtain the spectral composition of the bright object without additional configuration of the spectral measurement device, which saves the cost of the electronic device.
  • the first point spread function may be the point spread function of the red light channel, the point spread function of the green light channel, or the point spread function of the blue light channel, or the point spread function of the blue light channel. It may be the point spread function of any two channels, or may be the point spread function of three channels, which is not limited in this embodiment of the present application. For example, it may be the point spread function of three channels as shown in FIG. 10 .
  • step 1302 may include: generating a second point spread function according to the spectral components of the highlighted object and the model of the display screen; The corresponding point spread function behind the under-screen imaging system; the first point spread function is generated according to the second point spread function and the photosensitive characteristic curve of the camera.
  • the model of the display screen may be pre-stored in the electronic device, and the display screen model may be generated while performing step 1302; the display screen model may include: an amplitude modulation function A(m,n) and a phase modulation function P(m,n) ), etc., m, n represent the coordinates of the pixels on the display screen.
  • a display screen model can be stored in the electronic device in advance, and then a highlight can be generated according to the model of the display screen in combination with the spectral components of the highlighted object.
  • Different wavelengths of the object pass through the point spread function of the under-screen imaging system, and then combine with the photosensitive characteristic curve of the camera to obtain the point spread function of at least one channel of the under-screen imaging system.
  • the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated.
  • especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
  • generating the second point spread function according to the spectral components of the highlighted object and the model of the display screen may include: taking the model of the display screen as the transmittance function, combining the light wave propagation factor, and using The spectral components of the highlighted objects are used as weights to obtain the second point spread function; wherein, the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; The wavelength of bright objects is determined.
  • the above formula (1) can be used to determine the actual focal length of the camera when shooting, the coordinates of the pixels on the display screen, the coordinates of the pixels on the image sensor of the camera, the spectral components of the highlighted objects, and the coordinates of the display screen.
  • the phase modulation function and the amplitude modulation function determine the second point spread function.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function.
  • determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object and at the same time use the spectral component of the highlighted object as the weight to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system.
  • Spread function the obtained point spread function can more realistically express the glare process of bright objects through under-screen imaging.
  • step 1302 may include: generating the first point spread function according to the spectral components of the highlighted object and a preset third point spread function; wherein the third point spread function is different The point spread function corresponding to the wavelength after passing through the under-screen imaging system.
  • the third point spread function can be obtained by pre-calibrating in the laboratory and stored in the electronic device, for example, the third point spread function can be shown in FIG.
  • the point spread function corresponding to the under-screen imaging system, combined with the spectral components of the above-mentioned bright objects, can quickly generate the point spread function of at least one channel in the under-screen imaging system, which significantly improves the computational efficiency and improves the anti-glare effect.
  • the processing speed can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements, such as video shooting.
  • Step 1303 Perform deconvolution processing on the first point spread function and the to-be-processed image to obtain a glare-removed image.
  • non-blind deconvolution algorithms such as neural network learning, convex optimization deconvolution, and non-convex optimization deconvolution can be used to perform deconvolution operations on the first point spread function and the image to be processed, thereby obtaining glare removal.
  • the resulting image for example, after removing the glare is shown in Figure 2B.
  • the image to be processed may also be divided into a first area and a second area; wherein the first area is the area where the highlighted object and the glare generated by it are located; the second area The area is the area other than the first area in the image to be processed.
  • step 1303 may include: performing deconvolution processing on the first point spread function and the first region to obtain the first region after removing glare; and merging the first region and the second region after removing the glare to obtain Image after removing glare.
  • the first area may be the glare area where the highlighted object is located in the preview image in the foregoing embodiment, such as the glare area 302 shown in FIG. 3 .
  • the second area may be the glare area except for the preview image in the foregoing embodiment.
  • the first area after removing the glare may be as shown in FIG. 12 .
  • the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it, and then the first area where the highlighted object and the glare generated by it are located. Further anti-glare processing is performed on one area, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the second area in the image to be processed that does not contain bright objects and the glare generated by them during the anti-glare processing. influences.
  • step 400 For the specific description of the implementation manner, reference may be made to the relevant expressions in step 400, step 401, step 404, and step 405 in FIG. 4 in the foregoing embodiment, which will not be repeated here.
  • the image to be processed includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the image to be processed, different highlight objects are generated.
  • the corresponding point spread function of at least one channel in the under-screen imaging system is used, and then the first point spread function and the image to be processed are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve Or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
  • the embodiments of the present application further provide an image processing apparatus, and the image processing is used to execute the technical solutions described in the above method embodiments.
  • FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application.
  • the apparatus may include: an acquisition module 1401 for acquiring an image to be processed; the image to be processed is off-screen imaging
  • the under-screen imaging system includes a display screen and a camera set below the display screen; the images to be processed include: highlighted objects and the glare generated by them, wherein the highlighted objects are the specified colors and colors in the image to be processed.
  • the generating module 1402 is used to generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system; the processing module 1403 , which is used to perform deconvolution processing on the first point spread function and the image to be processed to obtain an image after glare removal.
  • the generation module is further configured to: generate a second point spread function according to the spectral components of the highlighted object and the model of the display screen; wherein the second point spread function is different wavelengths of the highlighted object
  • the point spread function corresponding to the light passing through the under-screen imaging system; the first point spread function is generated according to the second point spread function and the photosensitive characteristic curve of the camera.
  • the generating module is further configured to: generate a first point spread function according to the spectral components of the highlighted object and a preset third point spread function; wherein the third point spread function is a different wavelength The corresponding point spread function after passing through the under-screen imaging system.
  • the apparatus may further include: a spectral measurement device, configured to collect spectral components of the highlighted object.
  • the device may further include: a collection module, configured to perform image recognition on the image to be processed, and determine the light source type of the highlighted object; and determine the highlighted object according to preset spectral components of different light source types spectral composition.
  • a collection module configured to perform image recognition on the image to be processed, and determine the light source type of the highlighted object; and determine the highlighted object according to preset spectral components of different light source types spectral composition.
  • the apparatus may further include: a segmentation module, configured to: segment the to-be-processed image into a first area and a second area; wherein the first area is the highlighted object and its generated the area where the glare is located; the second area is an area other than the first area in the image to be processed; the processing module is also used for: deconvolution processing the first point spread function and the first area to obtain the glare-removed image The first area; the glare-removed first area is fused with the second area to obtain the glare-removed image.
  • a segmentation module configured to: segment the to-be-processed image into a first area and a second area; wherein the first area is the highlighted object and its generated the area where the glare is located; the second area is an area other than the first area in the image to be processed; the processing module is also used for: deconvolution processing the first point spread function and the first area to obtain the glare-removed image The first area; the
  • the generation module is further used to: take the model of the display screen as the transmittance function, combine the light wave propagation factor, and take the spectral component of the highlighted object as the weight to obtain the second point spread function;
  • the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; the light wave propagation factor is determined according to the focal length when the camera captures the image to be processed and the wavelength of the highlighted object.
  • the image to be processed includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the image to be processed, different highlight objects are generated.
  • the corresponding point spread function of at least one channel in the under-screen imaging system is used, and then the first point spread function and the image to be processed are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve Or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
  • An embodiment of the present application provides an electronic device, which may include: a display screen, a camera below the display screen, a processor, and a memory; wherein, the camera is used to collect images to be processed through the display screen; the display screen is used for Display the image to be processed and the image after deglare; the memory is used to store the processor-executable instructions; the processor is configured to implement the image processing method of the above embodiment when the instruction is executed.
  • FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. Taking the electronic device as a mobile phone as an example, FIG. 15 shows a schematic structural diagram of the mobile phone 200 .
  • the mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, Audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone jack 270D, sensor module 280, buttons 290, motor 291, indicator 292, camera 293, display screen 294, SIM card interface 295, etc.
  • a processor 210 an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, Audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone jack 270D, sensor module 280, buttons 290, motor 291, indicator 292, camera 293, display screen 294, SIM card interface 295, etc.
  • the sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a color temperature sensor, a temperature sensor, a pressure sensor, and a distance sensor. , magnetic sensor, ambient light sensor, air pressure sensor, bone conduction sensor, etc., not shown in the figure).
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the mobile phone 200 .
  • the mobile phone 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 210 may include one or more processing units, for example, the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or Neural-network Processing Unit (NPU) Wait. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 200 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is cache memory.
  • the memory may hold instructions or data that have just been used or recycled by the processor 210 . If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided, and the waiting time of the processor 210 is reduced, thereby improving the efficiency of the system.
  • the processor 210 can run the image processing method provided by the embodiments of the present application, so as to effectively improve or eliminate glare generated by the image captured by the camera under the display screen, especially rainbow glare, which improves the clarity of the image and improves the user experience.
  • the processor 210 may include different devices. For example, when the CPU and the GPU/NPU are integrated, the CPU and the GPU/NPU may cooperate to execute the image processing method provided by the embodiments of the present application. For example, some algorithms in the image processing method are executed by the CPU, and another part of the algorithms Executed by GPU/NPU for faster processing efficiency.
  • Display screen 294 is used to display images, videos, and the like.
  • Display screen 294 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode or an active-matrix organic light-emitting diode (active-matrix organic light).
  • LED organic light-emitting diode
  • AMOLED organic light-emitting diode
  • FLED flexible light-emitting diode
  • Miniled MicroLed, Micro-oLed, quantum dot light-emitting diode (quantum dot light emitting diodes, QLED) and so on.
  • cell phone 200 may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the display screen 294 may be used to display information entered by or provided to the user as well as various graphical user interfaces (GUIs). For example, display 294 may display photos, videos, web pages, or documents, and the like.
  • GUIs graphical user interfaces
  • display 294 may display photos, videos, web pages, or documents, and the like.
  • the display screen 294 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • the camera 293 (a front camera or a rear camera, or a camera can be used as both a front camera and a rear camera) is used to capture still images or videos, and the camera 293 can be arranged below the display screen 294 .
  • the camera 293 may include a photosensitive element such as a lens group and an image sensor, wherein the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed, and transmitting the collected light signal to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes various functional applications and data processing of the mobile phone 200 by executing the instructions stored in the internal memory 221 .
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area may store the operating system, the code of the application (such as a camera application, etc.), and the like.
  • the storage data area may store data created during the use of the mobile phone 200 (such as images and videos collected by the camera application) and the like.
  • the internal memory 221 may also store one or more computer programs corresponding to the image processing methods provided in the embodiments of the present application.
  • the one or more computer programs are stored in the above-mentioned memory 221 and configured to be executed by the one or more processors 210, and the one or more computer programs include instructions that can be used to perform the corresponding embodiments described above.
  • the computer program may include an acquisition module 1401 for acquiring an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display screen and a camera set below the display screen;
  • the processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a photographic object exhibiting a specified color and/or brightness in the image to be processed;
  • the generating module 1402 is configured to generate a first a point spread function; the first point spread function is the point spread function of at least one channel in the under-screen imaging system;
  • the processing module 1403 is used to perform deconvolution processing on the first point spread function and the image to be processed to obtain a glare-removed image Image.
  • the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • non-volatile memory such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the code of the image processing method provided by the embodiment of the present application may also be stored in an external memory.
  • the processor 210 may execute the code of the image processing method stored in the external memory through the external memory interface 220 .
  • the display screen 294 of the mobile phone 200 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, etc.).
  • Display screen 294 displays an interface of a camera application, such as a capture interface.
  • the mobile communication module 251 can provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the mobile phone 200 .
  • the mobile communication module 251 may also be used for information interaction with other devices (eg, acquiring spectral components, images to be processed, etc.).
  • the mobile phone 200 can implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, and an application processor. Such as music playback, recording, etc.
  • the cell phone 200 can receive key 290 input and generate key signal input related to user settings and function control of the cell phone 200 .
  • the mobile phone 200 can use the motor 291 to generate a vibration prompt (for example, a prompt for turning on the anti-glare function).
  • the indicator 292 in the mobile phone 200 can be an indicator light, which can be used to indicate a charging state, a change in power, or a message, a prompt message, a missed call, a notification, and the like.
  • the mobile phone 200 may include more or less components than those shown in FIG. 13 , which are not limited in this embodiment of the present application.
  • the illustrated handset 200 is merely an example, and the handset 200 may have more or fewer components than those shown, two or more components may be combined, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of an electronic device.
  • FIG. 16 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one is the function functions that the java language needs to call, and the other is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • surface manager surface manager
  • media library Media Libraries
  • 3D graphics processing library eg: OpenGL ES
  • 2D graphics engine eg: SGL
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • An embodiment of the present application provides an image processing apparatus, including: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the above method when executing the instructions.
  • Embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, implement the above method.
  • Embodiments of the present application provide a computer program product, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, when the computer-readable codes are stored in a processor of an electronic device When running in the electronic device, the processor in the electronic device executes the above method.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (Electrically Programmable Read-Only-Memory, EPROM or flash memory), static random access memory (Static Random-Access Memory, SRAM), portable compact disk read-only memory (Compact Disc Read-Only Memory, CD - ROM), Digital Video Disc (DVD), memory sticks, floppy disks, mechanically encoded devices, such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the foregoing .
  • RAM random access memory
  • ROM read only memory
  • EPROM erasable programmable read-only memory
  • EPROM Errically Programmable Read-Only-Memory
  • SRAM static random access memory
  • portable compact disk read-only memory Compact Disc Read-Only Memory
  • CD - ROM Compact Disc Read-Only Memory
  • DVD Digital Video Disc
  • memory sticks floppy disks
  • Computer readable program instructions or code described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • the computer program instructions used to perform the operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or in one or more source or object code written in any combination of programming languages, including object-oriented programming languages such as Smalltalk, C++, etc., and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement.
  • the remote computer may be connected to the user's computer through any kind of network—including a Local Area Network (LAN) or a Wide Area Network (WAN)—or, may be connected to an external computer (eg, use an internet service provider to connect via the internet).
  • electronic circuits such as programmable logic circuits, Field-Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (Programmable Logic Arrays), are personalized by utilizing state information of computer-readable program instructions.
  • Logic Array, PLA the electronic circuit can execute computer readable program instructions to implement various aspects of the present application.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine that causes the instructions when executed by the processor of the computer or other programmable data processing apparatus , resulting in means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • These computer readable program instructions can also be stored in a computer readable storage medium, these instructions cause a computer, programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium on which the instructions are stored includes An article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
  • Computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process , thereby causing instructions executing on a computer, other programmable data processing apparatus, or other device to implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more functions for implementing the specified logical function(s) executable instructions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations can be implemented in hardware (eg, circuits or ASICs (Application) that perform the corresponding functions or actions. Specific Integrated Circuit, application-specific integrated circuit)), or can be implemented by a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Studio Devices (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a storage medium, the method comprising: acquiring an image to be processed; the image to be processed is an image collected by an under-screen imaging system, the under-screen imaging system comprising a display screen and a camera arranged below the display screen; the image to be processed comprises a highlight object and glare produced thereby, the highlight object being a photographed object of a specified colour or brightness in the image to be processed; on the basis of the spectral composition of the highlight object, generating a first point spread function; the first point spread function is a point spread function of at least one channel in the under-screen imaging system; and performing deconvolution processing on the first point spread function and the image to be processed to obtain a de-glared image. In the present application, the glare produced by an image photographed by a camera under a display screen can be improved or eliminated, in particular rainbow glare, thus increasing the clarity of the image and enhancing the user experience.

Description

一种图像处理方法、装置、电子设备及存储介质An image processing method, device, electronic device and storage medium
本申请要求于2020年12月14日提交中国专利局、申请号为202011474431.3、发明名称为“一种图像处理方法、装置、电子设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application filed on December 14, 2020 with the application number 202011474431.3 and the invention titled "An image processing method, device, electronic device and storage medium", the entire contents of which are approved by Reference is incorporated in this application.
技术领域technical field
本申请涉及图像处理技术领域,尤其涉及一种图像处理方法、装置、电子设备及存储介质。The present application relates to the technical field of image processing, and in particular, to an image processing method, apparatus, electronic device, and storage medium.
背景技术Background technique
现有的手机采用升降前置镜头构成的全面屏手机,以及滑盖式的全面屏手机。这些设计使得手机在视觉体验上接近了100%全面屏,但是其复杂的机械结构会使得手机厚度、重量增加,占用太多内部空间,同时机械机构的重复使用次数也难以满足手机用户需求。The existing mobile phone adopts a full-screen mobile phone composed of a lifting front lens, and a sliding-type full-screen mobile phone. These designs make the visual experience of the mobile phone close to 100% full screen, but its complex mechanical structure will increase the thickness and weight of the mobile phone, occupying too much internal space, and the repeated use of the mechanical mechanism is also difficult to meet the needs of mobile phone users.
基于利用有机发光二极管(Organic Light Emitting Diode,OLED)的显示屏设计,手机可以提高显示屏的屏占比。在显示屏下可以设置摄像头,但是由于显示屏的遮挡,设置于显示屏下的摄像头所拍摄的图像存在眩光,尤其是拍摄高亮物体时,眩光严重,大大降低了摄像头的成像效果,影响用户体验。Based on the display design using organic light-emitting diodes (Organic Light Emitting Diode, OLED), the mobile phone can increase the screen ratio of the display. The camera can be set under the display screen, but due to the occlusion of the display screen, the images captured by the camera set under the display screen have glare, especially when shooting bright objects, the glare is serious, which greatly reduces the imaging effect of the camera and affects the user. experience.
发明内容SUMMARY OF THE INVENTION
有鉴于此,提出了一种图像处理方法、装置、电子设备及存储介质。In view of this, an image processing method, apparatus, electronic device and storage medium are proposed.
第一方面,本申请的实施例提供了一种图像处理方法,所述方法包括:获取待处理图像;所述待处理图像为屏下成像系统所采集的图像,所述屏下成像系统包括显示屏及设置在所述显示屏下方的摄像头;所述待处理图像包括:高亮物体及其产生的眩光,其中,所述高亮物体为所述待处理图像中呈现指定颜色和/或亮度的拍摄对象;根据所述高亮物体的光谱成分,生成第一点扩散函数;所述第一点扩散函数为所述屏下成像系统中至少一个通道的点扩散函数;对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像。In a first aspect, an embodiment of the present application provides an image processing method, the method includes: acquiring an image to be processed; the to-be-processed image is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display a screen and a camera arranged under the display screen; the to-be-processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a specified color and/or brightness in the to-be-processed image photographing an object; generating a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system; for the first point spread function The function and the image to be processed are subjected to deconvolution processing to obtain the image after the glare has been removed.
基于上述技术方案,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。Based on the above technical solution, for the to-be-processed image collected by the under-screen imaging system, the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated. The point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
根据第一方面,在所述第一方面的第一种可能的实现方式中,所述根据所述高亮物体的光谱成分,生成第一点扩散函数,包括:根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数;其中,所述第二点扩散函数为所述高亮物体的不同波长经过所述屏下成像系统后对应的点扩散函数;根据所述第二点扩散函数及所述摄像头的感光特性曲线,生成所述第一点扩散函数。According to the first aspect, in a first possible implementation manner of the first aspect, the generating a first point spread function according to the spectral components of the highlighted object includes: according to the spectrum of the highlighted object components and the model of the display screen to generate a second point spread function; wherein, the second point spread function is the point spread function corresponding to different wavelengths of the highlighted object after passing through the under-screen imaging system; according to the The second point spread function and the photosensitive characteristic curve of the camera are used to generate the first point spread function.
基于上述技术方案,可以根据显示屏的模型,结合高亮物体的光谱成分,生成高亮物体 的不同波长经过屏下成像系统的点扩散函数,进而结合摄像头的感光特性曲线得到屏下成像系统的至少一个通道的点扩散函数。这样,利用高亮物体的光谱成分,通过物理建模的方式获得真实眩光所对应的点扩散函数,从而可以有效消除由于显示屏周期性结构的色散效应,从而对于显示屏周期性结构引起的眩光,尤其是彩虹眩光具有优异的抑制作用,提升屏下成像系统的成像质量。Based on the above technical solution, according to the model of the display screen, combined with the spectral components of the highlighted object, the point spread function of the different wavelengths of the highlighted object passing through the under-screen imaging system can be generated, and then combined with the photosensitive characteristic curve of the camera to obtain the under-screen imaging system. The point spread function of at least one channel. In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated. , especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
根据第一方面,在所述第一方面的第二种可能的实现方式中,所述根据所述高亮物体的光谱成分,生成第一点扩散函数,包括:根据所述高亮物体的光谱成分及预设的第三点扩散函数,生成所述第一点扩散函数;其中,所述第三点扩散函数为不同波长经过所述屏下成像系统后对应的点扩散函数。According to the first aspect, in a second possible implementation manner of the first aspect, the generating a first point spread function according to the spectral components of the highlighted object includes: according to the spectrum of the highlighted object composition and a preset third point spread function to generate the first point spread function; wherein, the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
基于上述技术方案,通过预设的不同波长经过屏下成像系统后所对应的点扩散函数,结合上述高亮物体的光谱成分,可以快速生成屏下成像系统中至少一个通道的点扩散函数,显著提升了计算效率,进而提升了去眩光的处理速度,可以适用于视频拍摄等处理数据量大且对处理速度要求高的去眩光处理场景。Based on the above technical solution, the point spread function of at least one channel in the under-screen imaging system can be quickly generated through the preset point spread functions of different wavelengths after passing through the under-screen imaging system, combined with the spectral components of the above-mentioned high-brightness objects. The calculation efficiency is improved, and the processing speed of anti-glare is improved, and it can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements such as video shooting.
根据第一方面,在所述第一方面的第三种可能的实现方式中,所述方法还包括:获取光谱测量设备所采集的所述高亮物体的光谱成分。According to the first aspect, in a third possible implementation manner of the first aspect, the method further includes: acquiring the spectral components of the high-brightness object collected by the spectral measurement device.
基于上述技术方案,通过电子设备自带的光谱测量设备,可以在拍摄时实时获取高亮物体的光谱成分,无需依赖外部设备,提高了电子设备的功能完整性;或者,借助外部设备所采集的光谱成分,从而获取高亮物体的光谱成分,节约了电子设备的成本。Based on the above technical solutions, the spectral components of high-brightness objects can be acquired in real time during shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment; The spectral components can be obtained to obtain the spectral components of bright objects, which saves the cost of electronic equipment.
根据第一方面,在所述第一方面的第四种可能的实现方式中,所述方法还包括:对所述待处理图像进行图像识别,确定所述高亮物体的光源类型;根据预设的不同光源类型的光谱成分,确定所述高亮物体的光谱成分。According to the first aspect, in a fourth possible implementation manner of the first aspect, the method further includes: performing image recognition on the to-be-processed image to determine the light source type of the highlighted object; The spectral components of different light source types are determined to determine the spectral components of the highlighted object.
基于上述技术方案,通过预先存储不同光源类型对应的光谱成分,可以在拍摄时识别出拍摄对象中高亮物体的光源类型,从而确定高亮物体的光谱成分;这样,电子设备无需额外配置光谱测量设备,即可获取高亮物体的光谱成分,节约了电子设备的成本。Based on the above technical solution, by pre-storing the spectral components corresponding to different light source types, the light source type of the highlighted object in the photographed object can be identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the electronic device does not need to be additionally configured with a spectral measurement device , the spectral composition of the bright object can be obtained, and the cost of electronic equipment can be saved.
根据第一方面,在所述第一方面的第五种可能的实现方式中,所述方法还包括:将所述待处理图像分割为第一区域及第二区域;其中,所述第一区域为所述高亮物体及其产生的眩光所在的区域;所述第二区域为所述待处理图像中第一区域以外的区域;所述对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像,包括:通过对所述第一点扩散函数及所述第一区域,进行解卷积处理,得到去除所述眩光后的第一区域;将所述去除所述眩光后的第一区域与所述第二区域融合,得到所述去除所述眩光后的图像。According to the first aspect, in a fifth possible implementation manner of the first aspect, the method further includes: dividing the to-be-processed image into a first area and a second area; wherein the first area is the area where the bright object and its glare are located; the second area is the area other than the first area in the image to be processed; the pair of the first point spread function and the image to be processed Performing deconvolution processing to obtain the image after removing the glare, comprising: performing deconvolution processing on the first point spread function and the first area to obtain the first area after removing the glare; The glare-removed first area is fused with the second area to obtain the glare-removed image.
基于上述技术方案,将待处理图像分割为高亮物体及其产生的眩光所在的第一区域及不包含高亮物体及其产生的眩光的第二区域,进而针对高亮物体及其产生的眩光所在的第一区域执行进一步的去眩光处理,提高了处理效率,节约了处理资源;同时,可以减轻或消除在去眩光处理过程中对待处理图像中不包含高亮物体及其产生的眩光的第二区域的影响。Based on the above technical solution, the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it. Further anti-glare processing is performed in the first area where it is located, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the first area that does not contain bright objects and the glare generated by them in the image to be processed during the anti-glare processing. The effect of the second area.
根据第一方面的第一种可能的实现方式,在所述第一方面的第六种可能的实现方式中,所述根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数,包括:将所述显示屏的模型作为透过率函数,结合光波传播因子,并以所述高亮物体的光谱成分作为权重,得到所述第二点扩散函数;其中,所述显示屏的模型包括:所述显示屏的振幅调制函数和相位调制特性函数;所述光波传播因子根据所述摄像头拍摄所述待处理图像时的焦距及所 述高亮物体的波长确定。According to a first possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the generation of the The two point spread function includes: taking the model of the display screen as the transmittance function, combining the light wave propagation factor, and taking the spectral component of the highlighted object as the weight to obtain the second point spread function; wherein, the The model of the display screen includes: an amplitude modulation function and a phase modulation characteristic function of the display screen; the light wave propagation factor is determined according to the focal length when the camera captures the to-be-processed image and the wavelength of the highlighted object.
基于上述技术方案,从眩光产生的根本原因出发,即显示屏周期性的像素排布导致的衍射效应是对波长敏感的,将通过振幅调制函数和相位调制特性函数所构建的显示屏模型作为透过率函数,并根据摄像头拍摄待处理图像时的焦距及高亮物体的实际波长确定光波传播因子,同时以高亮物体的光谱成分作为权重,得到高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,所得到的点扩散函数能够更加真实地表达高亮物体通过屏下成像产生眩光过程。Based on the above technical solutions, starting from the root cause of glare, that is, the diffraction effect caused by the periodic pixel arrangement of the display screen is sensitive to wavelength, the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transparent The over-rate function is used to determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object. At the same time, the spectral components of the highlighted object are used as weights to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system. Corresponding point spread function, the obtained point spread function can more realistically express the process of glare generated by bright objects through under-screen imaging.
第二方面,本申请的实施例提供了一种图像处理装置,所述装置包括:获取模块,用于获取待处理图像;所述待处理图像为屏下成像系统所采集的图像,所述屏下成像系统包括显示屏及设置在所述显示屏下方的摄像头;所述待处理图像包括:高亮物体及其产生的眩光,其中,所述高亮物体为所述待处理图像中呈现指定颜色和/或亮度的拍摄对象;生成模块,用于根据所述高亮物体的光谱成分,生成第一点扩散函数;所述第一点扩散函数为所述屏下成像系统中至少一个通道的点扩散函数;处理模块,用于对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像。In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus includes: an acquisition module, configured to acquire an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the screen The lower imaging system includes a display screen and a camera arranged under the display screen; the to-be-processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a specified color in the to-be-processed image and/or a photographic object of brightness; a generating module, configured to generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point of at least one channel in the under-screen imaging system a spread function; a processing module configured to perform deconvolution processing on the first point spread function and the to-be-processed image to obtain the image after the glare has been removed.
基于上述技术方案,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。Based on the above technical solution, for the to-be-processed image collected by the under-screen imaging system, the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated. The point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
根据第二方面,在所述第二方面的第一种可能的实现方式中,所述生成模块,还用于:根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数;其中,所述第二点扩散函数为所述高亮物体的不同波长光线经过所述屏下成像系统后对应的点扩散函数;根据所述第二点扩散函数与所述摄像头的感光特性曲线,生成所述第一点扩散函数。According to the second aspect, in a first possible implementation manner of the second aspect, the generating module is further configured to: generate a second point spread function; wherein, the second point spread function is the point spread function corresponding to the light of different wavelengths of the highlighted object after passing through the under-screen imaging system; according to the second point spread function and the camera The photosensitive characteristic curve is used to generate the first point spread function.
基于上述技术方案,可以根据显示的屏模型,结合高亮物体的光谱成分,生成高亮物体的不同波长经过屏下成像系统的点扩散函数,进而结合摄像头的感光特性曲线得到屏下成像系统的至少一个通道的点扩散函数。这样,利用高亮物体的光谱成分,通过物理建模的方式获得真实眩光所对应的点扩散函数,从而可以有效消除由于显示屏周期性结构的色散效应,从而对于显示屏周期性结构引起的眩光,尤其是彩虹眩光具有优异的抑制作用,提升屏下成像系统的成像质量。Based on the above technical solutions, according to the displayed screen model, combined with the spectral components of the highlighted objects, the point spread functions of the different wavelengths of the highlighted objects passing through the under-screen imaging system can be generated, and then combined with the photosensitive characteristic curve of the camera, the under-screen imaging system can be obtained. The point spread function of at least one channel. In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated. , especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
根据第二方面,在所述第二方面的第二种可能的实现方式中,所述生成模块,还用于:根据所述高亮物体的光谱成分及预设的第三点扩散函数,生成所述第一点扩散函数;其中,所述第三点扩散函数为不同波长经过所述屏下成像系统后对应的点扩散函数。According to the second aspect, in a second possible implementation manner of the second aspect, the generation module is further configured to: generate the generation module according to the spectral components of the highlighted object and a preset third point spread function The first point spread function; wherein, the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
基于上述技术方案,通过在预设的不同波长经过屏下成像系统后所对应的点扩散函数,结合上述高亮物体的光谱成分,可以快速生成屏下成像系统中至少一个通道的点扩散函数,显著提升了计算效率,进而提升了去眩光的处理速度,可以适用于视频拍摄等处理数据量大且对处理速度要求高的去眩光处理场景。Based on the above technical solution, the point spread function of at least one channel in the under-screen imaging system can be quickly generated by using the corresponding point spread functions of different preset wavelengths after passing through the under-screen imaging system, combined with the spectral components of the above-mentioned high-brightness objects, The calculation efficiency is significantly improved, and the processing speed of anti-glare is improved, which can be applied to the anti-glare processing scene with large amount of processing data and high processing speed requirements such as video shooting.
根据第二方面,在所述第二方面的第三种可能的实现方式中,所述装置还包括:光谱测量设备,用于采集所述高亮物体的光谱成分。According to the second aspect, in a third possible implementation manner of the second aspect, the apparatus further includes: a spectral measurement device, configured to collect spectral components of the highlighted object.
基于上述技术方案,通过电子设备自带的光谱测量设备,可以在拍摄时实时获取高亮物 体的光谱成分,无需依赖外部设备,提高了电子设备的功能完整性;或者,借助外部设备所采集的光谱成分,从而获取高亮物体的光谱成分,节约了电子设备的成本。Based on the above technical solutions, the spectral components of high-brightness objects can be acquired in real time during shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment; The spectral components can be obtained to obtain the spectral components of bright objects, which saves the cost of electronic equipment.
根据第二方面,在所述第二方面的第四种可能的实现方式中,所述装置还包括:采集模块,用于:对所述待处理图像进行图像识别,确定所述高亮物体的光源类型;根据预设的不同光源类型的光谱成分,确定所述高亮物体的光谱成分。According to the second aspect, in a fourth possible implementation manner of the second aspect, the device further includes: an acquisition module, configured to: perform image recognition on the to-be-processed image, and determine the size of the highlighted object. Light source type; according to preset spectral components of different light source types, determine the spectral components of the highlighted object.
基于上述技术方案,通过预先存储不同光源类型对应的光谱成分,可以在拍摄时识别出拍摄对象中高亮物体的光源类型,从而确定高亮物体的光谱成分;这样,电子设备无需额外配置光谱测量设备,即可获取高亮物体的光谱成分,节约了电子设备的成本。Based on the above technical solution, by pre-storing the spectral components corresponding to different light source types, the light source type of the highlighted object in the photographed object can be identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the electronic device does not need to be additionally configured with a spectral measurement device , the spectral composition of the bright object can be obtained, and the cost of electronic equipment can be saved.
根据第二方面,在所述第二方面的第五种可能的实现方式中,所述装置还包括:分割模块,用于:将所述待处理图像分割为第一区域及第二区域;其中,所述第一区域为所述高亮物体及其产生的眩光所在的区域;所述第二区域为所述待处理图像中第一区域以外的区域;所述处理模块,还用于:通过对所述第一点扩散函数及所述第一区域,进行解卷积处理,得到去除所述眩光后的第一区域;将所述去除所述眩光后的第一区域与所述第二区域融合,得到所述去除所述眩光后的图像。According to the second aspect, in a fifth possible implementation manner of the second aspect, the apparatus further includes: a segmentation module, configured to: segment the to-be-processed image into a first area and a second area; wherein , the first area is the area where the bright object and the glare generated by it are located; the second area is the area other than the first area in the image to be processed; the processing module is also used for: by Perform deconvolution processing on the first point spread function and the first region to obtain the first region after removing the glare; deconvolute the first region and the second region after removing the glare Fusion to obtain the image after removing the glare.
基于上述技术方案,将待处理图像分割为高亮物体及其产生的眩光所在的第一区域及不包含高亮物体及其产生的眩光的第二区域,进而针对高亮物体及其产生的眩光所在的第一区域执行进一步的去眩光处理,提高了处理效率,节约了处理资源;同时,可以减轻或消除在去眩光处理过程中对待处理图像中不包含高亮物体及其产生的眩光的第二区域的影响。Based on the above technical solution, the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it. Further anti-glare processing is performed in the first area where it is located, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the first area that does not contain bright objects and the glare generated by them in the image to be processed during the anti-glare processing. The effect of the second area.
根据第二方面的第一种可能的实现方式,在所述第二方面的第六种可能的实现方式中,所述生成模块,还用于:将所述显示屏的模型作为透过率函数,结合光波传播因子,并以所述高亮物体的光谱成分作为权重,得到所述第二点扩散函数;其中,所述显示屏的模型包括:所述显示屏的振幅调制函数和相位调制特性函数;所述光波传播因子根据所述摄像头拍摄所述待处理图像时的焦距及所述高亮物体的波长确定。According to a first possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, the generating module is further configured to: use the model of the display screen as a transmittance function , combined with the light wave propagation factor, and using the spectral components of the highlighted object as the weight, the second point spread function is obtained; wherein, the model of the display screen includes: the amplitude modulation function and phase modulation characteristics of the display screen function; the light wave propagation factor is determined according to the focal length when the camera captures the to-be-processed image and the wavelength of the highlighted object.
基于上述技术方案,从眩光产生的根本原因出发,即显示屏周期性的像素排布导致的衍射效应是对波长敏感的,将通过振幅调制函数和相位调制特性函数所构建的显示屏模型作为透过率函数,并根据摄像头拍摄待处理图像时的焦距及高亮物体的实际波长确定光波传播因子,同时以高亮物体的光谱成分作为权重,得到高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,所得到的点扩散函数能够更加真实地表达高亮物体通过屏下成像产生眩光过程。Based on the above technical solutions, starting from the root cause of glare, that is, the diffraction effect caused by the periodic pixel arrangement of the display screen is sensitive to wavelength, the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transparent The over-rate function is used to determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object. At the same time, the spectral components of the highlighted object are used as weights to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system. Corresponding point spread function, the obtained point spread function can more realistically express the process of glare generated by bright objects through under-screen imaging.
第三方面,本申请的实施例提供了一种电子设备,包括:显示屏、显示屏下方的摄像头、处理器和存储器;其中,所述摄像头用于透过所述显示屏采集待处理图像;所述显示屏用于显示所述待处理图像和去眩光后的图像;所述存储器,用于存储处理器可执行指令;所述处理器被配置为执行所述指令时实现上述第一方面或者第一方面的多种可能的实现方式中的一种或几种的图像处理方法。In a third aspect, embodiments of the present application provide an electronic device, including: a display screen, a camera below the display screen, a processor, and a memory; wherein, the camera is used to collect images to be processed through the display screen; The display screen is used to display the to-be-processed image and the image after glare removal; the memory is used to store processor-executable instructions; the processor is configured to implement the above-mentioned first aspect or One or more image processing methods in multiple possible implementation manners of the first aspect.
基于上述技术方案,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。Based on the above technical solution, for the to-be-processed image collected by the under-screen imaging system, the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated. The point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
第四方面,本申请的实施例提供了一种非易失性计算机可读存储介质,其上存储有计算机程序指令,其特征在于,所述计算机程序指令被处理器执行时实现上述第一方面或者第一方面的多种可能的实现方式中的一种或几种的图像处理方法。In a fourth aspect, embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that, when the computer program instructions are executed by a processor, the above-mentioned first aspect is implemented Or one or more image processing methods in multiple possible implementation manners of the first aspect.
基于上述技术方案,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。Based on the above technical solution, for the to-be-processed image collected by the under-screen imaging system, the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated. The point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
第五方面,本申请的实施例提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行上述第一方面或者第一方面的多种可能的实现方式中的一种或几种的图像处理方法。In a fifth aspect, embodiments of the present application provide a computer program product, comprising computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, when the computer-readable codes are stored in an electronic When running in the device, the processor in the electronic device executes the first aspect or one or more of the image processing methods in multiple possible implementations of the first aspect.
基于上述技术方案,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。Based on the above technical solution, for the to-be-processed image collected by the under-screen imaging system, the to-be-processed image includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the to-be-processed image, a different highlight object is generated. The point spread function of at least one channel in the adaptive off-screen imaging system, and then the first point spread function and the to-be-processed image are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve or Eliminates glare, especially rainbow glare, generated by images captured by cameras under the display screen, improving image clarity and user experience.
本申请的这些和其他方面在以下(多个)实施例的描述中会更加简明易懂。These and other aspects of the present application will be more clearly understood in the following description of the embodiment(s).
附图说明Description of drawings
包含在说明书中并且构成说明书的一部分的附图与说明书一起示出了本申请的示例性实施例、特征和方面,并且用于解释本申请的原理。The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features and aspects of the application and together with the description, serve to explain the principles of the application.
图1示出根据本申请一实施例的配置有屏下摄像头的手机示意图;1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application;
图2A-2B示出根据本申请一实施例的通过手机10显示图像的示意图;2A-2B show schematic diagrams of displaying images through the mobile phone 10 according to an embodiment of the present application;
图3示出根据本申请一实施例的拍摄界面的示意图;FIG. 3 shows a schematic diagram of a shooting interface according to an embodiment of the present application;
图4示出了本申请实施例的一种图像处理方法的流程图;FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application;
图5示出了根据本申请一实施例的提取上述图3中眩光区域302中高亮物体的形状及眩光星芒区域的示意图;5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in the above-mentioned FIG. 3 according to an embodiment of the present application;
图6示出了根据本申请一实施例的通过色温传感器采集的强光源的光谱成分;FIG. 6 shows the spectral components of a strong light source collected by a color temperature sensor according to an embodiment of the present application;
图7示出根据本申请一实施例的屏下成像系统的点扩散函数的展宽示意图;FIG. 7 shows a schematic diagram of a broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application;
图8示出根据本申请一实施例的显示屏的局部示意图;FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application;
图9示出根据本申请一实施例的滤波片透过率曲线的示意图;9 shows a schematic diagram of a filter transmittance curve according to an embodiment of the present application;
图10示出根据本申请一实施例的生成的RBG三通道的点扩散函数的示意图;10 shows a schematic diagram of the point spread function of the generated RBG three-channel according to an embodiment of the present application;
图11示出根据本申请一实施例的预先标定的不同波长的光线透过屏下成像系统所对应的点扩散函数的示意图;11 shows a schematic diagram of the point spread function corresponding to the pre-calibrated light of different wavelengths passing through the under-screen imaging system according to an embodiment of the present application;
图12示出了根据本申请一实施例的去眩光后的局部图像的示意图;FIG. 12 shows a schematic diagram of a partial image after deglare according to an embodiment of the present application;
图13示出了根据本申请一实施例的一种图像处理方法的流程图;FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application;
图14示出了根据本申请一实施例的一种图像处理装置的结构图;FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application;
图15示出根据本申请一实施例的电子设备的结构示意图;FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application;
图16示出根据本申请一实施例的电子设备的软件结构框图。FIG. 16 shows a block diagram of a software structure of an electronic device according to an embodiment of the present application.
具体实施方式Detailed ways
以下将参考附图详细说明本申请的各种示例性实施例、特征和方面。附图中相同的附图标记表示功能相同或相似的元件。尽管在附图中示出了实施例的各种方面,但是除非特别指出,不必按比例绘制附图。Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. The same reference numbers in the figures denote elements that have the same or similar functions. While various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated.
在这里专用的词“示例性”意为“用作例子、实施例或说明性”。这里作为“示例性”所说明的任何实施例不必解释为优于或好于其它实施例。The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
另外,为了更好的说明本申请,在下文的具体实施方式中给出了众多的具体细节。本领域技术人员应当理解,没有某些具体细节,本申请同样可以实施。在一些实例中,对于本领域技术人员熟知的方法、手段、元件和电路未作详细描述,以便于凸显本申请的主旨。In addition, in order to better illustrate the present application, numerous specific details are given in the following detailed description. It should be understood by those skilled in the art that the present application may be practiced without certain specific details. In some instances, methods, means, components and circuits well known to those skilled in the art have not been described in detail so as not to obscure the subject matter of the present application.
为便于理解,将本申请实施例所涉及的部分用语进行解释说明。For ease of understanding, some terms involved in the embodiments of the present application are explained.
点扩散函数(point spread function,PSF),对光学系统来讲,输入物为一点光源时其输出像的光场分布,称为点扩散函数,也称点扩展函数。在数学上点光源可用δ函数(点脉冲)代表,输出像的光场分布叫做脉冲响应,所以点扩散函数也就是光学系统的脉冲响应函数。光学系统的成像性能可以通过该光学系统的点扩散函数来表述。Point spread function (PSF), for an optical system, the light field distribution of the output image when the input object is a point light source is called the point spread function, also known as the point spread function. Mathematically, a point light source can be represented by a delta function (point pulse), and the light field distribution of the output image is called an impulse response, so the point spread function is also the impulse response function of the optical system. The imaging performance of an optical system can be expressed by the point spread function of the optical system.
非盲解卷积(Non-blind deconvolution),又称非盲去卷积;光学系统所成的像可以理解成原始图像与光学系统的点扩散函数卷积的结果,根据已知拍摄的图像以及光学系统的点扩散函数,恢复出原始图像的过程称为非盲解卷积。Non-blind deconvolution (Non-blind deconvolution), also known as non-blind deconvolution; the image formed by the optical system can be understood as the result of the convolution of the original image and the point spread function of the optical system. The point spread function of the optical system, the process of restoring the original image is called non-blind deconvolution.
OLED的原理是有机半导体材料和发光材料在电场驱动下,通过载流子注入和复合导致发光的现象。OLED是一种利用多层有机薄膜结构产生电致发光的器件,它很容易制作,而且只需要低的驱动电压,这些主要的特征使得OLED在满足平面显示器的应用上显得非常突出。OLED显示屏比LCD更轻薄、亮度高、功耗低、响应快、清晰度高、柔性好、发光效率高。The principle of OLED is the phenomenon that organic semiconductor materials and light-emitting materials are driven by an electric field and lead to light emission through carrier injection and recombination. OLED is a device that uses a multilayer organic thin film structure to generate electroluminescence. It is easy to fabricate and only requires low driving voltage. These main features make OLED very prominent in the application of flat panel displays. OLED displays are lighter and thinner than LCDs, with high brightness, low power consumption, fast response, high definition, good flexibility and high luminous efficiency.
本申请实施例提供了一种图像处理方法,该方法可以应用于电子设备,该电子设备可以包括显示屏及该显示屏下的摄像头。其中,显示屏可以是触屏的、也可以是非触屏的,触屏的可以通过手指、触控笔等在显示屏上点击、滑动等方式对电子设备进行控制,非触屏的设备可以连接鼠标、键盘、触控面板等输入设备,通过输入设备对电子设备进行控制;摄像头可以包括前置摄像头和/或后置摄像头。The embodiment of the present application provides an image processing method, and the method can be applied to an electronic device, and the electronic device can include a display screen and a camera under the display screen. Among them, the display screen can be a touch screen or a non-touch screen. The touch screen can control the electronic device by clicking or sliding on the screen with a finger, a stylus, etc., and the non-touch screen device can be connected to Input devices such as mouse, keyboard, touch panel, etc., through which the electronic device is controlled; the camera may include a front camera and/or a rear camera.
示例性地,本申请的电子设备可以是智能手机、上网本、平板电脑、笔记本电脑、可穿戴电子设备(如智能手环、智能手表等)、车载设备、TV、虚拟现实设备、音响、电子墨水,等等,本申请实施例对电子设备的具体类型不作限制。Exemplarily, the electronic device of the present application may be a smart phone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart bracelet, a smart watch, etc.), a car device, a TV, a virtual reality device, an audio system, and an electronic ink. , etc., the embodiments of the present application do not limit the specific type of the electronic device.
以电子设备为手机为例,图1示出根据本申请一实施例的配置有屏下摄像头的手机示意图;如图1所示,手机10可以包括显示屏101及摄像头102;其中,显示屏101用于显示图像或视频等,摄像头102设置于显示屏101的下方,用于透过显示屏101拍摄图像或视频等;摄像头102的数量可以为一个或多个(图1中仅示例性示出了一个),显示屏101及摄像头102构成手机10的屏下成像系统。Taking the electronic device as a mobile phone as an example, FIG. 1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application; as shown in FIG. 1 , the mobile phone 10 may include a display screen 101 and a camera 102 ; wherein, the display screen 101 For displaying images or videos, etc., the camera 102 is arranged below the display screen 101, and is used to shoot images or videos through the display screen 101; One), the display screen 101 and the camera 102 constitute the under-screen imaging system of the mobile phone 10 .
由于摄像头设置于显示屏之下,当摄像头透过显示屏拍摄时,所拍摄的图像或视频会出现眩光,尤其是拍摄高亮物体时,眩光严重;示例性地,图2A-2B示出根据本申请一实施例的通过手机10显示图像的示意图。其中,图2A为手机10所显示的本申请实施例图像处理方 法处理前的原始图像的示意图,如图2A所示,原始图像201中出现了明显的眩光202;图2B为手机10所显示的利用本申请实施例图像处理方法处理后的高清图像的示意图,如图2B所示,高清图像203中,可以清晰看到高亮物体-太阳204;对比原始图像201及高清图像203,可以看到所拍摄的太阳204在原始图像201中呈现出的形态为彩虹眩光(图中未示出)。Since the camera is set under the display screen, when the camera shoots through the display screen, the captured image or video will have glare, especially when shooting bright objects, the glare is serious; A schematic diagram of displaying an image through the mobile phone 10 according to an embodiment of the present application. 2A is a schematic diagram of the original image displayed by the mobile phone 10 before being processed by the image processing method according to the embodiment of the present application. As shown in FIG. 2A , obvious glare 202 appears in the original image 201 ; FIG. 2B is an image displayed by the mobile phone 10 A schematic diagram of a high-definition image processed by the image processing method according to the embodiment of the present application, as shown in FIG. 2B , in the high-definition image 203 , a bright object - the sun 204 can be clearly seen; The photographed sun 204 appears in the original image 201 in the form of rainbow glare (not shown in the figure).
出现眩光的主要原因为:(1)用于显示图像等内容的显示屏由许多表示红绿蓝颜色的像素组成,且这些像素是周期性排布的;当位于显示屏下的摄像头透过显示屏拍摄时,外部光线通过显示屏时会产生衍射效应。The main reasons for glare are: (1) The display screen used to display images and other content consists of many pixels representing red, green and blue colors, and these pixels are periodically arranged; when the camera located under the display screen passes through the display Diffraction effect occurs when external light passes through the monitor when shooting with a screen.
(2)在所拍摄的景象中存在高亮物体,例如强光源时,在强光源的照射下,显示屏的衍射效应会进一步加强,会将强光源中心区域的光衍射至周围区域,造成摄像头的成像效果下降;同时,摄像头的图像传感器有限的动态范围,无法捕捉到衍射光的变化规律。(2) When there is a bright object in the shot scene, such as a strong light source, under the illumination of the strong light source, the diffraction effect of the display screen will be further strengthened, and the light from the central area of the strong light source will be diffracted to the surrounding area, causing the camera At the same time, the limited dynamic range of the image sensor of the camera cannot capture the changing law of diffracted light.
(3)显示屏的衍射效应是对波长敏感的,即衍射效应对不同波长的光线是不同的,表现在不同波长的光线经过显示屏和摄像头的镜头后的衍射展宽是不同的,称为色散现象;这种色散现象在所拍摄的图像中呈现出彩虹眩光。(3) The diffraction effect of the display screen is sensitive to the wavelength, that is, the diffraction effect is different for light of different wavelengths, and the diffraction broadening of the light of different wavelengths after passing through the display screen and the lens of the camera is different, which is called dispersion. Phenomenon; this phenomenon of chromatic dispersion gives rise to rainbow glare in captured images.
在一些实施例中,一种去眩光的方案为:增大摄像头的镜头光圈,提高进光量;使用大尺寸的图像传感器,提高图像传感器对暗光的感应能力;添加新型的高透过率发光材料,提升显示屏的透光率;采用高动态范围图像(High Dynamic Range,HDR)数据采集模式,提升摄像头获得图像的动态范围,降低眩光特点。在这些实施例中仅考虑位于显示屏下的摄像头成像通光量低的问题,而增加摄像头的镜头的进光亮、图像传感器的感光能力或者显示屏的透光特性;或者仅优化显示屏的像素结构,更改眩光的位置的能量分布。这种方案消除眩光的效果是有限,由于没有直面眩光产生的根本原因:显示屏周期性的像素排布导致的衍射效应是对波长敏感的,这种方案无法消除显示屏下摄像头成像产生的彩虹眩光。In some embodiments, an anti-glare solution is: increasing the lens aperture of the camera to increase the amount of light entering; using a large-sized image sensor to improve the image sensor's ability to sense dark light; adding a new type of high transmittance light Materials, improve the light transmittance of the display screen; adopt the high dynamic range image (High Dynamic Range, HDR) data acquisition mode to improve the dynamic range of the image obtained by the camera and reduce the glare characteristics. In these embodiments, only the problem of low imaging throughput of the camera located under the display screen is considered, and the incoming light of the lens of the camera, the photosensitive ability of the image sensor or the light transmission characteristics of the display screen are increased; or only the pixel structure of the display screen is optimized. , which changes the energy distribution at the location of the glare. The effect of this solution in eliminating glare is limited, because it does not directly face the root cause of glare: the diffraction effect caused by the periodic pixel arrangement of the display screen is sensitive to wavelengths, and this solution cannot eliminate the rainbow generated by the camera imaging under the display screen. glare.
在另外一些实施例中,有另一种去眩光的方案为:优化OLED显示屏的像素排布、像素结构、像素驱动电路设计。通过对显示屏周期性的像素结构局部特征的改进,可以优化屏下成像系统的点扩散函数的能量分布和眩光的分布形态。然而,受限于显示屏发光区域大小的限制,这种方案只能在一定程度上改善眩光的效果,改善效果有限;同时,针对不同波长显示屏的衍射效应所表现的色散现象依旧无法改变,这种方案无法消除显示屏下摄像头成像产生的彩虹眩光。In other embodiments, there is another anti-glare solution: optimizing the pixel arrangement, pixel structure, and pixel driving circuit design of the OLED display screen. By improving the local characteristics of the periodic pixel structure of the display screen, the energy distribution of the point spread function of the under-screen imaging system and the distribution of glare can be optimized. However, due to the limitation of the size of the light-emitting area of the display screen, this solution can only improve the glare effect to a certain extent, and the improvement effect is limited; This solution cannot eliminate the rainbow glare caused by the imaging of the camera under the display.
为了解决上述技术问题,本申请一些实施例提供了一种图像处理方法,该图像处理方法能够实现根据图像中高亮物体的光谱成分,生成与不同高亮物体相适应的点扩散函数,进而通过解卷积处理获得去眩光后的高清图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。In order to solve the above technical problems, some embodiments of the present application provide an image processing method, the image processing method can generate a point spread function suitable for different highlighted objects according to the spectral components of the highlighted objects in the image, and then solve the problem by solving The convolution process obtains a high-definition image after glare removal; thus, the glare generated by the image captured by the camera under the display screen can be effectively improved or eliminated, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
下面结合应用场景,具体介绍本申请实施例提供的一种图像处理方法。In the following, an image processing method provided by an embodiment of the present application is specifically introduced in conjunction with an application scenario.
在一些场景中,用户通过电子设备拍摄时,对所拍摄的图像进行实时去眩光处理。In some scenarios, when a user shoots through an electronic device, real-time anti-glare processing is performed on the captured image.
示例性地,当所拍摄的图像中存在高亮物体时,电子设备可以开启去眩光功能,从而改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光。其中,高亮物体可以包括强光源,如:太阳、月亮、照明灯、显示屏等等,还可以包括:玻璃、金属等等表面反射强光的物体。Exemplarily, when there is a bright object in the captured image, the electronic device can turn on the anti-glare function, so as to improve or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare. The highlighted objects may include strong light sources, such as the sun, the moon, lights, display screens, etc., and may also include objects whose surfaces reflect strong light, such as glass, metal, and the like.
以上述图1中的手机为例,该手机的前置摄像头设置于显示屏之下,用户在户外通过手机的前置摄像头进行自拍,用户打开手机的前置拍照功能,从而使显示屏进入拍摄界面,用 户可以通过拍摄界面中所显示的预览图像,实时观察所拍摄的画面;示例性地,预览图像为上述图2A中所示的原始图像201,在预览图像中可以看到存在眩光,即太阳呈现为彩虹眩光,同时该彩虹眩光遮挡了部分物体(如山峰),手机开启去眩光功能,进行实时去眩光处理,得到去除眩光的高清图像,去除眩光的高清图像为上述图2B中所示的高清图像203,在拍摄界面所显示的高清图像中消除了眩光,可以看到太阳,以及原本被太阳的彩虹眩光所遮挡的部分山峰;这样,通过本申请实施例的图像处理方法,去除了预览图像中太阳的眩光(彩虹眩光),从而可以得到去除眩光的高清图像。Taking the mobile phone in Figure 1 above as an example, the front camera of the mobile phone is set under the display screen, the user takes a selfie through the front camera of the mobile phone outdoors, and the user turns on the front camera function of the mobile phone, so that the display screen enters the shooting mode. interface, the user can observe the captured picture in real time through the preview image displayed in the shooting interface; exemplarily, the preview image is the original image 201 shown in the above-mentioned FIG. 2A, and glare can be seen in the preview image, that is The sun appears as rainbow glare, and at the same time, the rainbow glare blocks some objects (such as mountain peaks), and the mobile phone turns on the anti-glare function, performs real-time anti-glare processing, and obtains a high-definition image with no glare removed. The high-definition image is shown in Figure 2B above. In the high-definition image 203, the glare is eliminated in the high-definition image displayed on the shooting interface, and the sun and some mountain peaks originally blocked by the rainbow glare of the sun can be seen; Preview the glare of the sun (rainbow glare) in the image so that you can get a high-definition image with no glare.
下面对手机的显示屏进入拍摄界面的方式进行举例说明:The following is an example of how the display screen of the mobile phone enters the shooting interface:
用户可以通过触发进入拍摄的指令,手机响应于该指令,打开摄像头,同时显示屏进入拍摄界面;示例性地,用户可以通过点击显示屏中主页面、或开机页面、或其他应用程序中的拍照图标,使显示屏进入拍摄界面;用户还可以通过按压手机设置的用于拍摄的物理按钮,使显示屏进入拍摄界面;用户还可以通过语音指令的方式,使显示屏进入拍摄界面;用户还可以通过快捷手势的方式,使显示屏进入拍摄界面;在实际应用中,用户还还可以通过其它方式使显示屏进入拍摄界面,本申请实施例对此不作限定。The user can trigger an instruction to enter the shooting, the mobile phone responds to the instruction, turns on the camera, and at the same time the display screen enters the shooting interface; for example, the user can click the main page in the display screen, or the boot page, or take a photo in other applications. icon to make the display screen enter the shooting interface; the user can also press the physical button for shooting set on the mobile phone to make the display screen enter the shooting interface; the user can also make the display screen enter the shooting interface through voice commands; the user can also The display screen is brought into the shooting interface by means of a shortcut gesture; in practical applications, the user can also make the display screen enter the shooting interface by other means, which is not limited in this embodiment of the present application.
图3示出根据本申请一实施例的拍摄界面的示意图;如图3所示,拍摄界面中除显示预览图像之外;拍摄界面还可以包括光圈、夜景、人像、拍照、录像、专业、更多等等多种操作模式的选项,用户可以通过切换不同的操作模式,进入拍摄、录像、夜景等等特定的操作模式;拍摄界面还可以包括闪光、HDR、AI、设置、色调等功能的选项,用户可以通过选择去不同功能,开启或关闭闪光,进行色调调节等等功能,拍摄界面还可以包括焦距调节选项,用于可以通过调节焦距,使预览图像相应的放大或缩小;此外,拍摄界面还可以包括拍摄按钮、相册按钮及摄像头切换按钮等。3 shows a schematic diagram of a shooting interface according to an embodiment of the present application; as shown in FIG. 3, in addition to displaying a preview image in the shooting interface; the shooting interface may also include aperture, night scene, portrait, photo, video, professional, etc. There are options for multiple operation modes, and users can switch between different operation modes to enter specific operation modes such as shooting, video recording, night scene, etc.; the shooting interface can also include options for functions such as flash, HDR, AI, settings, and color tone. , the user can select different functions, turn on or off the flash, adjust the color tone, etc. The shooting interface can also include focal length adjustment options, so that the preview image can be enlarged or reduced accordingly by adjusting the focal length; in addition, the shooting interface It may also include a shooting button, an album button, a camera switching button, and the like.
下面对手机开启去眩光功能的方式进行举例说明:The following is an example of how to enable the anti-glare function on a mobile phone:
方式一、手机检测预览图像中是否存在高亮物体,当检测到预览图像中存在高亮物体时,则手机自动开启去眩光功能;当未检测到预览图像中存在高亮物体时,则手机不开启去眩光功能。其中,高亮物体包括在预览图像中呈现指定颜色和/或亮度的拍摄对象。示例性地,指定颜色可以为白色,指定亮度可以为白色在YUV颜色编码的亮度值,或者白色在RGB色彩模式中的灰度值,等等。 Method 1. The mobile phone detects whether there is a bright object in the preview image. When it detects that there is a bright object in the preview image, the mobile phone automatically turns on the anti-glare function; when no bright object is detected in the preview image, the mobile phone does not. Turn on the anti-glare function. Wherein, the highlighted object includes a photographic object exhibiting a specified color and/or brightness in the preview image. Illustratively, the specified color may be white, the specified brightness may be the luminance value of white in YUV color coding, or the gray value of white in RGB color mode, and so on.
示例性地,手机可以以预览图像中某一区域内大部分像素点的灰度值不小于预设灰度阈值作为条件,检测预览图像是否存在高亮物体,当预览图像中某一区域内的大部分像素点的灰度值不小于该预设灰度阈值时,则确定该预览图像中存在高亮物体,此时,手机自动开启去眩光功能。预览图像中各像素点的灰度值范围是0-255,白色为255,黑色为0,考虑到高亮物体所在区域通常为白色,可以设置预设灰度阈值等于255或255左右的值,例如,预设灰度阈值可以为255,可以选取大小为100*100的窗口,在预览图像中顺序滑动该窗口,从而使遍布整个预览图像;每滑动一次该窗口则计算该窗口中各像素点的灰度值,若该窗口中超过一半数量的像素点的灰度值为255时,则确定预览图像中存在高亮物体,此时,手机自动开启去眩光功能。Exemplarily, the mobile phone can detect whether there are highlighted objects in the preview image on the condition that the gray values of most of the pixels in a certain area in the preview image are not less than the preset gray value threshold. When the grayscale values of most of the pixels are not less than the preset grayscale threshold, it is determined that there are bright objects in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function. The grayscale value range of each pixel in the preview image is 0-255, white is 255, and black is 0. Considering that the area where the highlighted object is usually white, you can set the preset grayscale threshold equal to 255 or a value around 255. For example, the preset grayscale threshold can be 255, a window with a size of 100*100 can be selected, and the window can be sequentially slid in the preview image, so that the entire preview image can be spread; each time the window is slid, each pixel in the window is calculated. If the gray value of more than half of the pixels in the window is 255, it is determined that there is a bright object in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function.
示例性地,当手机未检测到预览图像中存在高亮物体时,手机可以继续执行当前拍摄模式或者其他预设的拍摄模式(例如,预设的图像增强模式)。Exemplarily, when the mobile phone does not detect that there is a bright object in the preview image, the mobile phone may continue to execute the current shooting mode or other preset shooting modes (eg, a preset image enhancement mode).
该方式中,手机可以自动检测预览图像中是否存在高亮物体,并在检测到高亮物体的情 况下自动开启去眩光功能;不需要用户手动操作,即可得到开启去眩光功能,提高了手机操作的智能程度。In this method, the mobile phone can automatically detect whether there is a highlight object in the preview image, and automatically turn on the anti-glare function when a highlight object is detected; the anti-glare function can be turned on without the user's manual operation, which improves the performance of the mobile phone. How intelligent the operation is.
方式二、手机接收用户触发的开启去眩光功能的指令,响应于该指令,开启去眩光功能。Manner 2: The mobile phone receives an instruction to enable the anti-glare function triggered by the user, and in response to the instruction, enables the anti-glare function.
示例性地,用户可以通过预览图像或所拍摄的对象,观察判断是否存在高亮物体,当用户判断存在高亮物体时,则可以通过点击拍摄界面预设的开启去眩光功能或者进入去眩光模式的图片,触发开启去眩光功能的指令;也可以通过手机预设的语音指令、快捷手势、物理按钮等等方式触发开启去眩光功能的指令,本申请实施例对此不作限定。例如,如图3所示,手机10的拍摄界面中可以包括去眩光功能的图标301,用户可以通过点击该去眩光功能的图标301,触发开启去眩光功能的指令,手机响应于该指令,开启去眩光功能。Exemplarily, the user can observe and determine whether there is a highlight object by previewing the image or the photographed object, and when the user determines that there is a highlight object, the user can click the preset on the shooting interface to enable the anti-glare function or enter the anti-glare mode. pictures, trigger the command to enable the anti-glare function; the command to enable the anti-glare function may also be triggered by means of preset voice commands, shortcut gestures, physical buttons, etc. of the mobile phone, which is not limited in this embodiment of the present application. For example, as shown in FIG. 3 , the camera interface of the mobile phone 10 may include an icon 301 of the anti-glare function. The user can click the icon 301 of the anti-glare function to trigger an instruction to enable the anti-glare function. Anti-glare function.
该方式中,用户拍照时自主判断是否存在高亮物体,并根据判断结果,选择触发开启去眩光功能的指令,从而使手机开启去眩光功能,满足用户对自主控制手机的需求。In this method, the user independently judges whether there is a bright object when taking a photo, and selects an instruction to trigger the anti-glare function according to the judgment result, so that the mobile phone can turn on the anti-glare function and meet the user's demand for autonomous control of the mobile phone.
方式三、手机检测预览图像中是否存在高亮物体,当检测到预览图像中存在高亮物体时,则手机可以向用户发出提示信息,从而提示用户是否开启去眩光功能;若用户触发开启去眩光功能的指令,则手机响应于该指令,开启去眩光功能。Method 3: The mobile phone detects whether there is a highlight object in the preview image. When it detects that there is a highlight object in the preview image, the mobile phone can send a prompt message to the user to prompt the user whether to enable the anti-glare function; if the user triggers the anti-glare function function command, the mobile phone will turn on the anti-glare function in response to the command.
示例性地,手机检测预览图像是否存在高亮物体的具体实现可以参照上述手机开启去眩光功能的方式一中相关描述,在此不再赘述。手机可以向用户发出的提示信息可以包括:语音提示信息、振动、灯光闪烁、图标闪烁等等多种形式;例如,如上述图3所示,手机10可以在检测到预览图像中存在高亮物体时,控制拍摄界面中的去眩光功能的图标301闪烁,从而提示用户选择点击该去眩光功能的图标301,若用户点击该去眩光功能的图标301,则手机开启去眩光功能。再例如,手机可以在拍摄界面的预览图像区域显示提示信息“开启去眩光功能”,从而提示用户开启去眩光模式,其中,用户开启去眩光模式的具体实现可以参照上述手机开启去眩光功能的方式二中相关描述,在此不再赘述;若用户触发开启去眩光功能的指令,则手机响应于该指令,开启去眩光功能。Exemplarily, for the specific implementation of the mobile phone detecting whether there is a highlighted object in the preview image, reference may be made to the relevant description in the above-mentioned method 1 for enabling the anti-glare function on the mobile phone, which will not be repeated here. The prompt information that the mobile phone can send to the user can include: voice prompt information, vibration, light flashing, icon flashing, etc.; for example, as shown in FIG. 3 above, the mobile phone 10 can detect that there is a highlighted object in the preview image. At the time, the icon 301 of the anti-glare function in the control shooting interface flashes, thereby prompting the user to select and click the icon 301 of the anti-glare function. If the user clicks the icon 301 of the anti-glare function, the mobile phone will enable the anti-glare function. For another example, the mobile phone may display the prompt message "Enable the anti-glare function" in the preview image area of the shooting interface, thereby prompting the user to enable the anti-glare mode, wherein, the specific implementation of enabling the anti-glare mode by the user may refer to the above-mentioned method for enabling the anti-glare function on the mobile phone. The related descriptions in Section 2 will not be repeated here; if the user triggers an instruction to enable the anti-glare function, the mobile phone responds to the instruction to enable the anti-glare function.
该方式中,用户拍照时可能会忘记手机的去眩光功能或者不熟悉去眩光功能,通过提示信息,可以提示用户开启去眩光功能,从而提升用户体验。In this method, the user may forget the anti-glare function of the mobile phone or be unfamiliar with the anti-glare function when taking pictures, and the user can be prompted to turn on the anti-glare function through the prompt information, thereby improving the user experience.
此外,手机还可在检测到的存在高亮物体的图像区域周围显示提示框,以提示此区域存在高亮物体,用户可以通过选择提示框(例如点击提示框内部区域)来选择要执行去眩光功能的图像区域。In addition, the mobile phone can also display a prompt box around the detected image area where there are highlighted objects to remind that there are highlighted objects in this area. The user can select the prompt box (for example, click the inner area of the prompt box) to choose to perform glare removal. Feature image area.
在实际应用中,手机还可以通过其它方式开启去眩光功能,本申请实施例对此不作限定。In practical applications, the mobile phone may also enable the anti-glare function in other ways, which is not limited in this embodiment of the present application.
下面对手机开启去眩光功能后,进行实时去眩光处理,得到去除眩光的高清图像进行举例说明:The following is an example of how to perform real-time anti-glare processing after the mobile phone has the anti-glare function turned on to obtain a high-definition image with no glare:
图4示出了本申请实施例的一种图像处理方法的流程图,如图4所示,该方法可以包括以下步骤:FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 4 , the method may include the following steps:
步骤400、手机确定预览图像中高亮物体所在的眩光区域。Step 400: The mobile phone determines the glare area where the highlighted object in the preview image is located.
其中,高亮物体的数量可以为一个或多个,每一眩光区域至少包括一个高亮物体,例如,每一眩光区域可以包括一个高亮物体,此时眩光区域的数量与高亮物体的数量相同。在预览图像中,高亮物体占据灰度值不小于预设阈值(如255)的多个彼此相连的像素点,该高亮物体所在的眩光区域至少包括该高亮物体所占据的这些彼此相连的像素点。其中,眩光区域可以为矩形、圆形等形状。The number of highlighted objects may be one or more, and each glare area includes at least one highlighted object. For example, each glare area may include one highlighted object. In this case, the number of glare areas and the number of highlighted objects same. In the preview image, the highlighted object occupies a plurality of connected pixels whose gray value is not less than a preset threshold (eg, 255), and the glare area where the highlighted object is located at least includes these connected pixels occupied by the highlighted object. of pixels. Wherein, the glare area may be in the shape of a rectangle, a circle, or the like.
示例性地,眩光区域可以为矩形区域,可以构建大小为m*n矩形窗口,其中,m、n均为大于1的整数,例如,m、n的初始值可以为100,利用该矩形窗口在预览图像上滑动,当该矩形窗口中出现灰度值超过255的像素点时,则调整m、n的值,从而使得包括该像素点在内的,灰度值超过255的彼此相连的多个像素点均落在该矩形窗口时,此时固定m、n的值,该矩形窗口即为眩光区域,该矩形窗口内灰度值超过255的彼此相连的多个像素点即为一个高亮物体,从而确定了该高亮物体所在的眩光区域;遍布整个预览图像,可以得到该预览图像中的各高亮物体所在的眩光区域,相应的,预览图像中除眩光区域以外的区域即为非眩光区域。示例性地,如上述图3所示,眩光区域302为矩形区域,在该眩光区域内,包括了太阳在预览图像中所占据的像素点。Exemplarily, the glare area may be a rectangular area, and a rectangular window with a size of m*n may be constructed, where m and n are both integers greater than 1. For example, the initial values of m and n may be 100. Sliding on the preview image, when a pixel with a gray value exceeding 255 appears in the rectangular window, adjust the values of m and n, so that multiple connected pixels, including the pixel, whose gray value exceeds 255 are connected to each other. When the pixels all fall on the rectangular window, the values of m and n are fixed at this time, and the rectangular window is the glare area. Multiple pixels connected to each other with a gray value of more than 255 in the rectangular window are a highlight object. , thereby determining the glare area where the highlighted object is located; covering the entire preview image, the glare area where each highlighted object in the preview image is located can be obtained. Correspondingly, the area other than the glare area in the preview image is non-glare area. Exemplarily, as shown in FIG. 3 above, the glare area 302 is a rectangular area, and the glare area includes the pixels occupied by the sun in the preview image.
需要说明的是,该步骤400为可选步骤,即在手机开启去眩光功能后,可以执行该步骤400,从而可以提高下述步骤的处理效率,节约处理资源;同时,可以减轻或消除处理过程中对非眩光区域的影响;另外,在手机开启去眩光功能后,手机也可以直接执行下述步骤401。It should be noted that this step 400 is an optional step, that is, after the anti-glare function is enabled on the mobile phone, this step 400 can be executed, so that the processing efficiency of the following steps can be improved and processing resources can be saved; at the same time, the processing process can be reduced or eliminated. In addition, after the mobile phone enables the anti-glare function, the mobile phone can also directly perform the following step 401.
步骤401、手机识别高亮物体的形状,并提取眩光星芒区域。 Step 401 , the mobile phone identifies the shape of the highlighted object, and extracts the glare starburst area.
该步骤中,手机可以在上述步骤400中所确定的预览图像中高亮物体所在的眩光区域中识别高亮物体的形状,并提取眩光区域中的眩光星芒区域;手机也可以在预览图像中直接识别高亮物体的形状,并提取预览图像中的眩光星芒区域,其中眩光星芒区域是出现眩光且为星芒形状的区域,预览图像中除眩光星芒区域以外的区域,即为非眩光星芒区域。In this step, the mobile phone can identify the shape of the highlighted object in the glare area where the highlighted object is located in the preview image determined in the above step 400, and extract the glare starburst area in the glare area; Identify the shape of the highlighted object, and extract the glare starburst area in the preview image, where the glare starburst area is the area where glare appears and is in the shape of a starburst, and the area other than the glare starburst area in the preview image is non-glare Astral region.
示例性地,手机可以通过训练好的神经网络,自动识别高亮物体的形状,例如,高亮物体为强光源时,强光源的形状可以包括环形光源、条形光源、点光源等等。预先选取不同形状光源的照片对神经网络进行训练,将眩光区域或预览图像输入到训练好的神经网络中,即可输出该眩光区域或预览图像中高亮物体的形状;手机还可以通过常规过曝检测等方式,识别高亮物体的形状。Exemplarily, the mobile phone can automatically identify the shape of the highlighted object through the trained neural network. For example, when the highlighted object is a strong light source, the shape of the strong light source may include a ring light source, a strip light source, a point light source, and the like. Pre-select photos of light sources of different shapes to train the neural network, input the glare area or preview image into the trained neural network, and then output the glare area or the shape of the highlighted object in the preview image; the mobile phone can also use conventional overexposure Detection and other methods to identify the shape of the highlighted object.
进一步地,手机可以在识别出高亮物体的形状之后,基于该高亮物体的位置,检测眩光区域或预览图像中的眩光星芒区域,例如,可以将高亮物体的中心作为十字星芒的中心,不断调整十字星芒的大小,使得该高亮物体所占据的像素点均落在该十字星芒内,此时,该十字星芒的所包括的像素点即为眩光星芒区域。在确定眩光星芒区域之后,对眩光区域或预览图像中的眩光星芒区域进行裁剪,从而提取出眩光星芒区域;这样,可以针对所提取的眩光星芒区域执行进一步的去眩光处理,提高了处理效率,节约了处理资源;同时,可以减轻或消除处理过程中对图像中其他不包含高亮物体及其眩光的区域的影响。Further, after recognizing the shape of the highlighted object, the mobile phone can detect the glare area or the glare starburst area in the preview image based on the position of the highlighted object. For example, the center of the highlighted object can be used as the cross star. At the center, continuously adjust the size of the cross star, so that the pixels occupied by the highlighted object all fall within the cross star. At this time, the pixels included in the cross star are the glare star area. After the glare starburst region is determined, the glare starburst region or the glare starburst region in the preview image is cropped to extract the glare starburst region; in this way, further anti-glare processing can be performed on the extracted glare starburst region to improve the This improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the impact of processing on other areas in the image that do not contain bright objects and their glare.
需要说明的是,该步骤401为可选步骤,即在手机开启去眩光功能后,可以执行该步骤401,还可以直接执行下述步骤402。下面以执行完步骤401后再执行步骤402进行示例性说明。It should be noted that this step 401 is an optional step, that is, after the anti-glare function is enabled on the mobile phone, this step 401 may be executed, and the following step 402 may also be directly executed. The following is an exemplary description of performing step 402 after performing step 401 .
图5示出了根据本申请一实施例的提取上述图3中眩光区域302中高亮物体的形状及眩光星芒区域的示意图,如图5所示,在眩光区域302中,提取出太阳的形态501,同时,提取出眩光星芒区域502。FIG. 5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in FIG. 3 according to an embodiment of the present application. As shown in FIG. 5 , in the glare area 302 , the shape of the sun is extracted 501, and at the same time, the glare starburst region 502 is extracted.
步骤402、手机获取高亮物体的光谱成分I spe(λ),λ表示波长。 Step 402, the mobile phone acquires the spectral component Ispe (λ) of the highlighted object, where λ represents the wavelength.
下面对该步骤中获取高亮物体的光谱成分的方式进行举例说明。The manner of acquiring the spectral components of the highlighted object in this step will be illustrated below with an example.
方式一、手机配置有光谱测量设备(例如,光谱传感器、色温传感器、光谱相机等),通过光谱传感器采集高亮物体(如强光源)的光谱成分。Mode 1: The mobile phone is equipped with a spectral measurement device (for example, a spectral sensor, a color temperature sensor, a spectral camera, etc.), and the spectral components of a bright object (such as a strong light source) are collected by the spectral sensor.
举例来说,上述图1中手机10可以配置有色温传感器,图6示出了根据本申请一实施例的通过色温传感器采集的强光源的光谱成分,如图6所示,通过色温传感器采集上述图2B中拍摄对象中太阳204的光谱成分I spe(λ),可以看出不同波长λ所对应的相对强度。 For example, the mobile phone 10 in FIG. 1 may be configured with a color temperature sensor, and FIG. 6 shows the spectral components of the strong light source collected by the color temperature sensor according to an embodiment of the present application. As shown in FIG. The spectral component Ispe (λ) of the sun 204 in the photographed object in FIG. 2B shows the relative intensities corresponding to different wavelengths λ.
示例性地,手机还可以配置高光谱检测设备,通过高光谱检测设备采集高亮物体的光谱成分,从而进一步提高所采集的光谱成分的精度。Exemplarily, the mobile phone may also be configured with a hyperspectral detection device, and the hyperspectral detection device collects the spectral components of the bright object, thereby further improving the accuracy of the collected spectral components.
该方式中,通过手机自带的色温传感器或高光谱检测设备等光谱测量设备,可以在拍摄时实时获取高亮物体的光谱成分,无需依赖外部设备,提高了手机的功能完整性。In this method, the spectral components of the highlighted objects can be obtained in real time during shooting through the color temperature sensor or hyperspectral detection equipment built in the mobile phone, without relying on external equipment, which improves the functional integrity of the mobile phone.
方式二、手机可以通过识别拍摄对象中高亮物体的光源类型,并根据预设的不同光源类型对应的光谱成分,确定所识别出的高亮物体的光谱成分。In a second way, the mobile phone can determine the spectral components of the identified highlighted objects by identifying the light source types of the highlighted objects in the photographed object, and according to preset spectral components corresponding to different light source types.
示例性地,对于实际中常出现的强光源,如太阳、月亮、荧光、白炽灯、日光灯等等,不同强光源的光谱成分是固定不变的,因此,可以通过预先测量,确定不同光源类型的光谱成分,并存储在手机中;在拍摄时,手机可以通过计算机视觉等方式,识别出拍摄对象中高亮物体的光源类型,例如,识别出上述2B中高亮物体的光源类型为太阳,则可以根据预先存储的多个光源类型的光谱成分中,确定太阳的光谱成分。Exemplarily, for the strong light sources that often appear in practice, such as the sun, the moon, fluorescent lamps, incandescent lamps, fluorescent lamps, etc., the spectral components of different strong light sources are fixed. The spectral components are stored in the mobile phone; when shooting, the mobile phone can identify the light source type of the highlighted object in the subject through computer vision and other methods. From the pre-stored spectral components of a plurality of light source types, the spectral components of the sun are determined.
该方式中,通过预先存储不同光源类型对应的光谱成分,并在拍摄时识别出拍摄对象中高亮物体的光源类型,从而确定高亮物体的光谱成分;这样,手机无需额外配置光谱传感器或高光谱检测设备等光谱测量设备,即可获取高亮物体的光谱成分,节约了手机的成本。In this method, the spectral components corresponding to different light source types are stored in advance, and the light source type of the highlighted object in the subject is identified during shooting, so as to determine the spectral composition of the highlighted object; in this way, the mobile phone does not need to be additionally equipped with a spectral sensor or hyperspectral spectrum. Spectral measurement equipment such as detection equipment can obtain the spectral components of bright objects, saving the cost of mobile phones.
方式三、手机可以接收外部输入的高亮物体的光谱成分。Method 3: The mobile phone can receive the spectral components of the bright object input from the outside.
示例性地,设置于外部环境中的光谱测量设备(如,高光谱检测设备、色温传感器、光谱相机等)采集高亮物体的光谱成分,并将所采集的高亮物体的光谱成分输入到手机中,手机接收所输入的光谱成分,从而实现获取高亮物体的光谱成分。Exemplarily, a spectral measurement device (such as a hyperspectral detection device, a color temperature sensor, a spectral camera, etc.) disposed in the external environment collects the spectral components of the highlighted objects, and inputs the collected spectral components of the highlighted objects into the mobile phone. , the mobile phone receives the inputted spectral components, so as to obtain the spectral components of the bright objects.
该方式中,手机可以借助外部设备所采集的光谱成分,从而获取高亮物体的光谱成分,节约了手机的成本。In this way, the mobile phone can obtain the spectral components of the bright object by using the spectral components collected by the external device, which saves the cost of the mobile phone.
步骤403、手机根据高亮物体的光谱成分,生成RGB三通道的点扩散函数。Step 403: The mobile phone generates a point spread function of three RGB channels according to the spectral components of the highlighted object.
考虑到显示屏的周期性像素排布对于显示屏下摄像头的成像影响表现为点扩散函数的展宽,图7示出根据本申请一实施例的屏下成像系统的点扩散函数的展宽示意图,如图7所示,屏下成像系统的点扩散函数分布会沿着显示屏横向和纵向扩展,形成严重的拖尾效应,这种拖尾效应使得显示屏下的摄像头的成像质量大大降低。因此,为了提高成像质量,该步骤中,手机利用上述获取高亮物体的光谱成分,生成屏下成像系统的RGB三通道的点扩散函数。Considering that the influence of the periodic pixel arrangement of the display screen on the imaging of the camera under the display screen is manifested as the broadening of the point spread function, FIG. 7 shows a schematic diagram of the broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application, as shown in FIG. As shown in Figure 7, the distribution of the point spread function of the under-screen imaging system will expand horizontally and vertically along the display screen, resulting in a serious smearing effect, which greatly reduces the imaging quality of the camera under the display screen. Therefore, in order to improve the imaging quality, in this step, the mobile phone uses the above-mentioned acquired spectral components of the bright object to generate the point spread function of the RGB three channels of the under-screen imaging system.
下面对该步骤中根据高亮物体的光谱成分,生成RGB三通道的点扩散函数的方式进行举例说明:The following is an example of how to generate the point spread function of the RGB three-channel according to the spectral components of the highlighted object in this step:
方式一、手机通过显示屏模型、上述获取的高亮物体的光谱成分及摄像头的图像传感器的滤波片的感光特性曲线(透过率曲线),生成屏下成像系统的RGB三通道的点扩散函数。 Method 1. The mobile phone generates the point spread function of the RGB three channels of the under-screen imaging system through the display screen model, the spectral components of the highlighted objects obtained above, and the photosensitive characteristic curve (transmittance curve) of the filter of the image sensor of the camera. .
其中,可以将显示屏模型预先存储在手机中,显示屏模型可以包括:振幅调制函数A(m,n)和相位调制函数P(m,n)等,m,n表示显示屏上的像素点的坐标;振幅调制函数和相位调制函数可以根据显示屏上的像素点分布、显示屏的排线方法及显示屏的材料等因素确定。图8示出根据本申请一实施例的显示屏的局部示意图,如图8所示,可以得到显示屏上的像素点的分布,其中,每个像素点包括红光子像素、绿光子像素、蓝光子像素。摄像头的图像传感器的滤波片的感光特性曲线可以根据手机所配置的摄像头的相关参数确定,图像传感器的滤波 片可以包括:红色滤波片、绿色滤波片、蓝色滤波片;图9示出根据本申请一实施例的滤波片透过率曲线的示意图,如图9所示,滤波片的透过率曲线F r,g,b(λ),包括:红色滤波片的透过率曲线F r(λ)、绿色滤波片的透过率曲线F g(λ)、蓝色滤波片的透过率曲线F b(λ),其中,λ为表示波长。 Among them, the display screen model can be pre-stored in the mobile phone, and the display screen model can include: amplitude modulation function A(m,n) and phase modulation function P(m,n), etc., m,n represent the pixels on the display screen The amplitude modulation function and phase modulation function can be determined according to factors such as the pixel point distribution on the display screen, the wiring method of the display screen and the material of the display screen. FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application. As shown in FIG. 8 , the distribution of pixel points on the display screen can be obtained, wherein each pixel point includes a red photon pixel, a green photon pixel, and a blue photon pixel. subpixels. The photosensitive characteristic curve of the filter of the image sensor of the camera can be determined according to the relevant parameters of the camera configured on the mobile phone, and the filter of the image sensor can include: red filter, green filter, blue filter; A schematic diagram of the filter transmittance curve of an embodiment of the application, as shown in FIG. 9 , the transmittance curve Fr, g, b (λ) of the filter includes: the transmittance curve Fr of the red filter ( λ), the transmittance curve F g (λ) of the green filter, and the transmittance curve F b (λ) of the blue filter, where λ is the wavelength.
示例性地,手机可以利用上述图6中通过色温传感器采集的拍摄对象中太阳的光谱成分I spe(λ)以及预设的显示屏模型(A(m,n)和P(m,n)),生成高亮物体(太阳)的不同波长经过屏下成像系统后对应的的点扩散函数,进而通过图像传感器的滤波片的透过率曲线F r,g,b(λ),生成屏下成像系统的RGB三通道的点扩散函数。 Exemplarily, the mobile phone can use the spectral composition I spe (λ) of the sun in the photographed object collected by the color temperature sensor in the above-mentioned FIG. 6 and the preset display model (A(m,n) and P(m,n)) , generate the point spread function corresponding to the different wavelengths of the bright object (the sun) after passing through the under-screen imaging system, and then pass the transmittance curve F r,g,b (λ) of the filter of the image sensor to generate the under-screen imaging The point spread function of the RGB three channels of the system.
示例性地,手机还可以通过高光谱检测设备采集的高亮物体的光谱成分,以及预设的显示屏模型,生成高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,从而使得生成的高亮物体的不同波长经过屏下成像系统后对的点扩散函数更接近实际情况,进而提高去眩光的效果。Exemplarily, the mobile phone can also generate the point spread function corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system through the spectral components of the highlighted object collected by the hyperspectral detection device and the preset display screen model, so that the The point spread function of different wavelengths of the generated highlight objects after passing through the under-screen imaging system is closer to the actual situation, thereby improving the effect of anti-glare.
示例性地,可以将显示屏模型作为透过率函数,结合光波传播因子,并以上述获取的高亮物体的光谱成分作为权重,得到高亮物体的不同波长经过屏下成像系统后对应的点扩散函数;其中,显示屏的模型可以包括:所述显示屏的振幅调制函数和相位调制特性函数;光波传播因子可以根据摄像头拍摄待处理图像时的焦距及高亮物体的波长确定。Exemplarily, the display screen model can be used as the transmittance function, combined with the light wave propagation factor, and the spectral components of the highlighted object obtained above are used as weights to obtain the points corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system. Spread function; wherein, the model of the display screen may include: the amplitude modulation function and the phase modulation characteristic function of the display screen; the light wave propagation factor may be determined according to the focal length of the camera to capture the image to be processed and the wavelength of the highlighted object.
这样,从眩光产生的根本原因出发,即显示屏周期性的像素排布导致的衍射效应是对波长敏感的,将通过振幅调制函数和相位调制特性函数所构建的显示屏模型作为透过率函数,并根据摄像头拍摄待处理图像时的焦距及高亮物体的实际波长确定光波传播因子,同时以高亮物体的光谱成分作为权重,得到高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,所得到的点扩散函数能够更加真实地表达高亮物体通过屏下成像产生眩光过程。In this way, starting from the root cause of glare, that is, the diffraction effect caused by the periodic pixel arrangement of the display screen is sensitive to wavelength, the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function. , and determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object, and at the same time use the spectral component of the highlighted object as the weight to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system. Spread function, the obtained point spread function can more realistically express the glare process of bright objects through under-screen imaging.
示例性地,高亮物体的不同波长经过屏下成像系统后对应的点扩散函数I(u,v;λ)如下述公式(1)所示:Exemplarily, the point spread function I(u, v; λ) corresponding to different wavelengths of the highlighted object after passing through the under-screen imaging system is shown in the following formula (1):
Figure PCTCN2021137491-appb-000001
Figure PCTCN2021137491-appb-000001
在公式(1)中,λ表示波长,I spe(λ)表示高亮物体的光谱成分,f表示摄像头拍摄时的实际焦距,m,n为表示显示屏上的像素点的坐标,u,v表示图像传感器上的像素点的坐标;A(m,n)表示振幅调制函数,P(m,n)表示相位调制函数。 In formula (1), λ represents the wavelength, I spe (λ) represents the spectral component of the highlighted object, f represents the actual focal length of the camera when shooting, m, n represent the coordinates of the pixels on the display screen, u, v Represents the coordinates of the pixel on the image sensor; A(m,n) represents the amplitude modulation function, and P(m,n) represents the phase modulation function.
生成的屏下成像系统的RGB三通道的点扩散函数如下述公式(2)所示:The generated point spread function of the three-channel RGB of the under-screen imaging system is shown in the following formula (2):
Figure PCTCN2021137491-appb-000002
Figure PCTCN2021137491-appb-000002
在公式(2)中,PSF r表示红光通道的点扩散函数,PSF g表示绿光通道的点扩散函数,PSF b表示蓝光通道的点扩散函数,I(u,v;λ)表示高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,λ表示波长,u,v表示图像传感器上的像素点的坐标,F r(λ)表示图像传感器的红色滤波片的透过率曲线、F g(λ)表示图像传感器的绿色滤波片的透过率曲线、F b(λ)表示图像传感器的蓝色滤波片的透过率曲线。 In formula (2), PSF r represents the point spread function of the red channel, PSF g represents the point spread function of the green channel, PSF b represents the point spread function of the blue channel, and I(u, v; λ) represents the highlight The point spread function corresponding to different wavelengths of the object after passing through the under-screen imaging system, λ represents the wavelength, u, v represent the coordinates of the pixel points on the image sensor, F r (λ) represents the transmittance curve of the red filter of the image sensor , F g (λ) represents the transmittance curve of the green filter of the image sensor, and F b (λ) represents the transmittance curve of the blue filter of the image sensor.
图10示出根据本申请一实施例的生成的RBG三通道的点扩散函数的示意图,如图10所示,分别为红光通道的点扩散函数、绿光通道的点扩散函数、蓝光通道的点扩散函数,其中,同一像素点的红光通道的点扩散函数与该像素点的绿光通道的点扩散函数间隔一个子像素点, 同一像素点的红光通道的点扩散函数与该像素点的蓝光通道的点扩散函数间隔一个子像素点,同一像素点的绿光通道的点扩散函数与该像素点的蓝光通道的点扩散函数间隔一个子像素点。FIG. 10 shows a schematic diagram of point spread functions of three RBG channels generated according to an embodiment of the present application. As shown in FIG. 10 , the point spread functions of the red light channel, the point spread function of the green light channel, and the point spread function, wherein the point spread function of the red light channel of the same pixel and the point spread function of the green light channel of the pixel point are separated by a sub-pixel point, and the point spread function of the red light channel of the same pixel point is the same as the pixel point. The point spread function of the blue light channel is separated by one sub-pixel point, and the point spread function of the green light channel of the same pixel point and the point spread function of the blue light channel of the pixel point are separated by one sub-pixel point.
该方式中,可以在手机的计算能力和存储能力满足需求的情况下,预先在手机中存储显示屏模型,结合所获取的高亮物体的光谱成分,生成高亮物体的不同波长经过屏下成像系统后,即高亮物体的不同波长经过显示屏及显示屏下摄像头的镜头后不同波长的所对应的点扩散函数,进而得到屏下成像系统的RGB三通道的点扩散函数。这样,利用高亮物体的光谱成分,通过物理建模的方式获得真实眩光所对应的点扩散函数,从而可以有效消除由于显示屏周期性结构的色散效应,从而对于显示屏周期性结构引起的眩光,尤其是彩虹眩光具有优异的抑制作用,提升屏下成像系统的成像质量。In this method, when the computing power and storage capacity of the mobile phone meet the requirements, the display screen model can be stored in the mobile phone in advance, and combined with the acquired spectral components of the highlighted object, different wavelengths of the highlighted object can be generated and imaged under the screen. After the system, that is, the point spread functions corresponding to different wavelengths of different wavelengths of the highlighted object passing through the display screen and the lens of the camera under the display screen, and then the point spread function of the three RGB channels of the under-screen imaging system is obtained. In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated. , especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
方式二、手机根据预先标定的不同波长经过所述屏下成像系统后所对应的点扩散函数及上述获取的高亮物体的光谱成分,生成屏下成像系统的RGB三通道的点扩散函数。Method 2: The mobile phone generates the point spread function of the RGB three channels of the under-screen imaging system according to the pre-calibrated point spread functions of different wavelengths after passing through the under-screen imaging system and the spectral components of the highlighted objects obtained above.
示例性地,可以在实验室中预先标定不同波长经过所述屏下成像系统后所对应的点扩散函数,并预先存储在手机中;图11示出根据本申请一实施例的预先标定的不同波长经过所述屏下成像系统后所对应的点扩散函数的示意图,如图11所示,为可见光波段(400nm-780nm)的不同波长经过所述屏下成像系统后所对应的点扩散函数,其中,波长之间的间隔可以为6nm、3nm等。Exemplarily, the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system can be pre-calibrated in the laboratory and stored in the mobile phone in advance; The schematic diagram of the point spread function corresponding to the wavelength after passing through the under-screen imaging system, as shown in Figure 11, is the point spread function corresponding to different wavelengths in the visible light band (400nm-780nm) after passing through the under-screen imaging system, Wherein, the interval between wavelengths may be 6 nm, 3 nm, or the like.
手机开启去眩光功能时,可以直接调用手机中存储的不同波长经过屏下成像系统后所对应的点扩散函数,并与上述获取的高亮物体的光谱成分做积分运算,从而得到屏下成像系统的RGB三通道的点扩散函数。When the anti-glare function is enabled on the mobile phone, the point spread function corresponding to the different wavelengths stored in the mobile phone after passing through the under-screen imaging system can be directly called, and integrated with the spectral components of the highlighted objects obtained above to obtain the under-screen imaging system. RGB three-channel point spread function.
该方式中,通过在手机中预存不同波长经过屏下成像系统后所对应的点扩散函数,结合上述获取高亮物体的光谱成分,可以快速生成屏下成像系统的RGB三通道的点扩散函数,显著提升了计算效率,进而提升了手机去眩光的处理速度,可以适用于视频拍摄等处理数据量大且对处理速度要求高的去眩光处理场景。In this method, the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system are pre-stored in the mobile phone, and the point spread functions of the RGB three channels of the under-screen imaging system can be quickly generated in combination with the above-mentioned acquisition of the spectral components of the highlighted objects. The computing efficiency is significantly improved, and the processing speed of the mobile phone's anti-glare is improved. It can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements such as video shooting.
需要说明的是,上述步骤402及步骤403可以在上述步骤400或步骤401之前执行,也可以在步骤400或步骤401之后执行,还可以与步骤400或步骤401同时执行,本申请实施例对此不作限定。It should be noted that the above step 402 and step 403 can be executed before the above step 400 or step 401, can also be executed after step 400 or step 401, and can also be executed simultaneously with step 400 or step 401. Not limited.
步骤404、利用生成的RGB三通道的点扩散函数和上述眩光星芒区域,采用预设的非盲解卷积算法,去除眩光星芒区域中的眩光。 Step 404 , using the generated point spread function of the three RGB channels and the above-mentioned glare starburst area, and adopting a preset non-blind deconvolution algorithm to remove the glare in the glare starburst area.
示例性地,预设的非盲解卷积算法可以包括:神经网络学习、凸优化解卷积、非凸优化解卷积等现有的非盲解卷积算法,结合上述识别的高亮物体的形状,利用生成的屏下成像系统的RGB三通道的点扩散函数分别和上述眩光星芒区域进行非盲解卷积处理,得到RGB三通道的解卷积结果,从而得到去眩光后的局部图像。Exemplarily, the preset non-blind deconvolution algorithm may include: neural network learning, convex optimization deconvolution, non-convex optimization deconvolution and other existing non-blind deconvolution algorithms, combined with the above-identified highlighted objects. Using the point spread function of the RGB three channels of the generated under-screen imaging system to perform non-blind deconvolution processing with the above-mentioned glare starburst area, the deconvolution results of the RGB three channels are obtained, so as to obtain the local area after the glare removal. image.
图12示出了根据本申请一实施例的去眩光后的局部图像的示意图,如图12所示,去眩光后的局部图像1201为去除上述图5中眩光星芒区域502中眩光所生成的图像,该去眩光后的局部图像1202中没有眩光,可以清楚看到太阳形态1202。FIG. 12 shows a schematic diagram of a partial image after glare removal according to an embodiment of the present application. As shown in FIG. 12 , the partial image 1201 after glare removal is generated by removing the glare in the glare starburst area 502 in FIG. 5 above. The image, there is no glare in the deglared partial image 1202, and the sun shape 1202 can be clearly seen.
步骤405、将去眩光后的局部图像与预览图像进行融合处理,得到去眩光的高清图像。Step 405: Perform fusion processing on the de-glare partial image and the preview image to obtain a de-glare high-definition image.
示例性地,可以将去眩光后的局部图像所在的眩光区域与非眩光区域进行拼接,从而得到最终去眩光的高清图像。Exemplarily, the glare area and the non-glare area where the partial image after glare removal is located may be spliced, so as to obtain a final high-definition image without glare.
示例性地,可以在显示屏的拍摄界面实时显示该去眩光后的高清图像,如上述图2B所示, 去眩光后的高清图像中,不存在眩光或存在较少的眩光,尤其是可以消除彩虹眩光,从而使得图像中的亮度可以平滑过渡,能够更真实的反映出所拍摄的拍摄对象。Exemplarily, the high-definition image after de-glare can be displayed in real time on the shooting interface of the display screen. As shown in FIG. 2B above, in the high-definition image after de-glare, there is no glare or there is less glare, especially it can be eliminated. Rainbow glare, so that the brightness in the image can be smoothly transitioned to more realistically reflect the captured subject.
本申请实施例中,针对用户在通过显示屏下摄像头拍摄时,所拍摄图像中产生的眩光,尤其是彩虹眩光,这种眩光对于波长是敏感的,因此通过获取高亮物体的光谱成分,并由此生成与不同高亮物体相适应的点扩散函数,进而通过非盲解卷积处理获得去眩光的高清图像,从而有效改善甚至消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,能够达到在有显示屏遮挡的情况下,实现显示屏下摄像头无眩光成像效果,使得用户在使用显示屏下摄像头拍摄功能时感知不到彩虹眩光对最终成像的影响。In the embodiment of the present application, for the glare generated in the captured image when the user shoots through the camera under the display screen, especially the rainbow glare, this glare is sensitive to the wavelength. This generates a point spread function suitable for different bright objects, and then obtains a high-definition image with no glare through non-blind deconvolution processing, thereby effectively improving or even eliminating the glare generated by the image captured by the camera under the display screen, especially the rainbow glare. , which can achieve the glare-free imaging effect of the camera under the display screen when the display screen is blocked, so that the user cannot perceive the effect of rainbow glare on the final image when using the camera shooting function under the display screen.
在一些场景中,用户通过电子设备对已经拍摄的图像或视频,进行去眩光处理。其中,已经拍摄的图像或视频为通过显示屏下摄像头所拍摄的图像或视频。In some scenarios, the user performs anti-glare processing on the image or video that has been captured by the electronic device. The images or videos that have been captured are images or videos captured by the camera under the display screen.
示例性地,当电子设备已经拍摄的图像或视频中存在高亮物体时,电子设备可以开启去眩光功能,从而改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光。Exemplarily, when there is a bright object in the image or video already captured by the electronic device, the electronic device can turn on the glare removal function, so as to improve or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
以上述图1中的手机为例,该手机的前置摄像头设置于显示屏之下,用户在户外通过手机的前置摄像头进行自拍,示例性地,所拍摄的图像为上述图2A所示的原始图像201,完成自拍后,该拍摄的图像存储在手机中,用户在查看该图像时,打开手机的图像编辑功能,从而使显示屏进入图像编辑界面,在图像编辑界面可以看到图像中存在眩光,即太阳呈现为彩虹眩光,同时该彩虹眩光遮挡了部分物体(如山峰),手机开启去眩光功能,进行去眩光处理,从而得到去除眩光的高清图像,去除眩光的高清图像为上述图2B中所示的高清图像203,在编辑后的图像中消除了眩光,可以看到太阳,以及原本被太阳的彩虹眩光所遮挡的部分山峰;这样,通过本申请实施例的图像处理方法,去除了所拍摄的图像中太阳的眩光(彩虹眩光),从而可以得到去除眩光的高清图像。Taking the mobile phone in FIG. 1 above as an example, the front camera of the mobile phone is set under the display screen, and the user takes a selfie through the front camera of the mobile phone outdoors. Exemplarily, the captured image is the one shown in FIG. 2A above. The original image 201, after the self-timer is completed, the captured image is stored in the mobile phone. When viewing the image, the user turns on the image editing function of the mobile phone, so that the display screen enters the image editing interface. Glare, that is, the sun appears as rainbow glare, and at the same time, the rainbow glare blocks some objects (such as mountain peaks), and the mobile phone turns on the anti-glare function and performs anti-glare processing, thereby obtaining a high-definition image with no glare. The high-definition image with no glare is shown in Figure 2B above. In the high-definition image 203 shown in the edited image, the glare has been eliminated, and the sun and some mountain peaks originally blocked by the rainbow glare of the sun can be seen; The glare of the sun (rainbow glare) in the captured image, so that a glare-removed high-definition image can be obtained.
示例性地,用户可以在查看图像时,通过点击编辑按钮,或者通过语音指令,等等方式,使显示屏进入图像编辑界面。Exemplarily, when viewing an image, the user can make the display screen enter the image editing interface by clicking an editing button, or by using a voice command, or the like.
示例性地,在显示屏进入图像编辑界面之后,手机开启去眩光功能的具体方式可以参照前文关于手机开启去眩光功能的相关表述,在此不再赘述。Exemplarily, after the display screen enters the image editing interface, for the specific manner of enabling the anti-glare function of the mobile phone, reference may be made to the foregoing description about enabling the anti-glare function of the mobile phone, which will not be repeated here.
示例性地,手机开启去眩光功能后,进行去眩光处理,从而得到去除眩光的高清图像的具体实现方式可以参照上述图4中相关表述,在此不再赘述。Exemplarily, after the anti-glare function is enabled on the mobile phone, the anti-glare processing is performed to obtain the high-definition image with the glare removed. Reference may be made to the relevant description in FIG. 4 above, which will not be repeated here.
本申请实施例中,针对用户编辑通过显示屏下摄像头拍摄好的图像时,所拍摄图像中产生的眩光,尤其是彩虹眩光,这种眩光对于波长是敏感的,因此通过获取高亮物体的光谱成分,并由此生成与不同高亮物体相适应的点扩散函数,进而通过非盲解卷积处理获得去眩光的高清图像,从而有效改善甚至消除显示屏下摄像头所拍摄的图像产生的眩光,尤其是彩虹眩光,提高图像的清晰度。In the embodiment of the present application, when the user edits the image captured by the camera under the display screen, the glare generated in the captured image, especially the rainbow glare, is sensitive to the wavelength. Therefore, by obtaining the spectrum of the highlighted object components, and thus generate a point spread function suitable for different bright objects, and then obtain a high-definition image with no glare through non-blind deconvolution processing, so as to effectively improve or even eliminate the glare generated by the image captured by the camera under the display screen. Especially the rainbow glare improves the clarity of the image.
图13示出了根据本申请一实施例的一种图像处理方法的流程图,如图13所示,该方法的执行主体可以为电子设备,例如,可以为图1中的手机,该方法可以包括以下步骤:FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 13 , the execution subject of the method may be an electronic device, for example, the mobile phone in FIG. 1 , and the method may be Include the following steps:
步骤1301、获取待处理图像。Step 1301: Acquire an image to be processed.
其中,待处理图像为屏下成像系统所采集的图像,如图2A所示,屏下成像系统可以包括显示屏及设置在显示屏下方的摄像头;例如,待处理图像可以为上述实施例中用户通过电子设备拍摄图像或视频时实时的预览图像,如图3所示,还可以为上述实施例中已经拍摄的存储在电子设备中的图像或视频,还可以为其它外部设备拍摄的图像等等,本申请实施例对此 不作限定。The image to be processed is an image collected by the under-screen imaging system. As shown in FIG. 2A , the under-screen imaging system may include a display screen and a camera disposed below the display screen; for example, the image to be processed may be the user in the above embodiment. The real-time preview image when an image or video is captured by an electronic device, as shown in FIG. 3 , can also be the image or video that has been captured in the above embodiment and stored in the electronic device, and can also be an image captured by other external devices, etc. , which is not limited in the embodiments of the present application.
待处理图像可以包括:高亮物体及其产生的眩光,其中,高亮物体可以为待处理图像中呈现指定颜色和/或亮度的拍摄对象;眩光为高亮物体的光线透过显示屏及该显示屏下的摄像头所呈现出的耀眼光芒,该眩光可以为彩虹形态的眩光。指定颜色可以为白色,指定亮度可以为白色在YUV颜色编码的亮度值,或者白色在RGB色彩模式中的灰度值。The image to be processed may include: a highlight object and the glare generated by it, wherein the highlight object may be a photographic object showing a specified color and/or brightness in the image to be processed; glare is the light of the highlight object passing through the display screen and the The dazzling light from the camera under the display, which can be a rainbow-shaped glare. The specified color can be white, and the specified brightness can be the brightness value of white in YUV color coding, or the gray value of white in RGB color mode.
示例性的,可以以待处理图像中某一区域内大部分像素点的灰度值不小于预设灰度阈值作为条件,检测到待处理图像中是否存在高亮物体。具体实现过程可以参照上述实施例中的相关表述,此处不再赘述。Exemplarily, it may be detected whether there is a bright object in the image to be processed under the condition that the grayscale values of most of the pixels in a certain area of the image to be processed are not less than a preset grayscale threshold. For a specific implementation process, reference may be made to the relevant expressions in the foregoing embodiments, and details are not repeated here.
步骤1302、根据高亮物体的光谱成分,生成第一点扩散函数;第一点扩散函数为屏下成像系统中至少一个通道的点扩散函数。Step 1302: Generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system.
其中,可以通过电子设备的所配置的光谱测量设备或者外部的光谱测量设备采集高亮物体的光谱成分,或者可以将不同光源类型对应的光谱成分预存在电子设备中,从而通过识别待处理图像中高亮物体的光源类型,确定该高亮物体的光谱成分,例如,高亮物体的光谱成分如图6所示。Among them, the spectral components of the highlighted objects can be collected by the spectral measurement device configured in the electronic device or the external spectral measurement device, or the spectral components corresponding to different light source types can be pre-stored in the electronic device, so as to identify the high brightness in the image to be processed. The light source type of the bright object determines the spectral composition of the bright object. For example, the spectral composition of the bright object is shown in Figure 6.
在一种可能的实现方式中,该步骤还可以包括:获取光谱测量设备所采集的高亮物体的光谱成分。其中,光谱测量设备可以为电子设备的所配置的光谱测量设备或者外部的光谱测量设备。该实现方式的具体说明,可以参照上述实施例中图4中步骤402中关于“获取高亮物体的光谱成分的方式中方式一及方式三”的相关表述,此处不再赘述。In a possible implementation manner, the step may further include: acquiring the spectral components of the bright object collected by the spectral measurement device. The spectral measurement device may be a configured spectral measurement device of the electronic device or an external spectral measurement device. For the specific description of this implementation, reference may be made to the relevant expressions about "the first and third modes of the method for obtaining the spectral components of the highlighted object" in step 402 in FIG.
这样,通过电子设备自带的光谱测量设备,可以在拍摄时实时获取高亮物体的光谱成分,无需依赖外部设备,提高了电子设备的功能完整性;或者,借助外部设备所采集的光谱成分,从而获取高亮物体的光谱成分,节约了电子设备的成本。In this way, the spectral components of the high-brightness objects can be acquired in real time when shooting through the spectral measurement equipment built into the electronic equipment, without relying on external equipment, which improves the functional integrity of the electronic equipment; or, with the help of spectral components collected by external equipment, Thus, the spectral components of the bright object are obtained, and the cost of electronic equipment is saved.
在一种可能的实现方式中,该步骤还可以包括:对待处理图像进行图像识别,确定高亮物体的光源类型;根据预设的不同光源类型的光谱成分,确定高亮物体的光谱成分。该实现方式的具体说明可以参照上述实施例中图4中步骤402中关于“获取高亮物体的光谱成分的方式中方式二”的相关表述,此处不再赘述。In a possible implementation manner, this step may further include: performing image recognition on the image to be processed to determine the light source type of the highlighted object; and determining the spectral composition of the highlighted object according to preset spectral components of different light source types. For the specific description of this implementation manner, reference may be made to the relevant expression about "the second manner in the manner of acquiring the spectral component of the highlighted object" in step 402 in FIG. 4 in the foregoing embodiment, which will not be repeated here.
这样,考虑不同光源的光谱成分是固定不变的,通过预先存储不同光源类型对应的光谱成分,可以在拍摄时识别出拍摄对象中高亮物体的光源类型,从而确定高亮物体的光谱成分;这样,电子设备无需额外配置光谱测量设备,即可获取高亮物体的光谱成分,节约了电子设备的成本。In this way, considering that the spectral components of different light sources are fixed, by pre-storing the spectral components corresponding to different light source types, the light source type of the highlighted object in the subject can be identified during shooting, thereby determining the spectral composition of the highlighted object; , the electronic device can obtain the spectral composition of the bright object without additional configuration of the spectral measurement device, which saves the cost of the electronic device.
其中,当屏下成像系统中包括RGB三通道,第一点扩散函数可以为红光通道的点扩散函数,还可以为绿光通道的点扩散函数,还可以为蓝光通道的点扩散函数,还可以为任意两个通道的点扩散函数,还可以为三通道的点扩散函数,本申请实施例对此不作限定,例如,可以为如图10所示的三通道的点扩散函数。Wherein, when the under-screen imaging system includes three RGB channels, the first point spread function may be the point spread function of the red light channel, the point spread function of the green light channel, or the point spread function of the blue light channel, or the point spread function of the blue light channel. It may be the point spread function of any two channels, or may be the point spread function of three channels, which is not limited in this embodiment of the present application. For example, it may be the point spread function of three channels as shown in FIG. 10 .
在一种可能的实现方式中,该步骤1302可以包括:根据高亮物体的光谱成分及显示屏的模型,生成第二点扩散函数;其中,第二点扩散函数为高亮物体的不同波长经过屏下成像系统后对应的点扩散函数;根据第二点扩散函数及摄像头的感光特性曲线,生成第一点扩散函数。In a possible implementation manner, step 1302 may include: generating a second point spread function according to the spectral components of the highlighted object and the model of the display screen; The corresponding point spread function behind the under-screen imaging system; the first point spread function is generated according to the second point spread function and the photosensitive characteristic curve of the camera.
其中,显示屏的模型可以预存在电子设备中,可以在执行步骤1302的同时,生成显示屏模型;显示屏模型可以包括:振幅调制函数A(m,n)和相位调制函数P(m,n)等,m,n表示显示 屏上的像素点的坐标。The model of the display screen may be pre-stored in the electronic device, and the display screen model may be generated while performing step 1302; the display screen model may include: an amplitude modulation function A(m,n) and a phase modulation function P(m,n) ), etc., m, n represent the coordinates of the pixels on the display screen.
该实现方式的具体说明可以参照上述实施例中图4中步骤403中关于“根据高亮物体的光谱成分,生成RGB三通道的点扩散函数的方式一”的相关表述,此处不再赘述。For the specific description of this implementation, reference may be made to the relevant statement in step 403 in FIG. 4 in the above-mentioned embodiment about “method 1 of generating the point spread function of three RGB channels according to the spectral components of the highlighted object”, which will not be repeated here.
示例性地,可以在电子设备的计算能力和存储能力满足需求的情况下,预先在电子设备中存储显示屏模型,进而可以根据该显示屏的模型,结合高亮物体的光谱成分,生成高亮物体的不同波长经过屏下成像系统的点扩散函数,进而结合摄像头的感光特性曲线得到屏下成像系统的至少一个通道的点扩散函数。这样,利用高亮物体的光谱成分,通过物理建模的方式获得真实眩光所对应的点扩散函数,从而可以有效消除由于显示屏周期性结构的色散效应,从而对于显示屏周期性结构引起的眩光,尤其是彩虹眩光具有优异的抑制作用,提升屏下成像系统的成像质量。Exemplarily, when the computing power and storage capacity of the electronic device meet the requirements, a display screen model can be stored in the electronic device in advance, and then a highlight can be generated according to the model of the display screen in combination with the spectral components of the highlighted object. Different wavelengths of the object pass through the point spread function of the under-screen imaging system, and then combine with the photosensitive characteristic curve of the camera to obtain the point spread function of at least one channel of the under-screen imaging system. In this way, the point spread function corresponding to the real glare can be obtained through physical modeling by using the spectral components of the high-brightness objects, so that the dispersion effect due to the periodic structure of the display screen can be effectively eliminated, and the glare caused by the periodic structure of the display screen can be effectively eliminated. , especially the rainbow glare has an excellent suppression effect, which improves the imaging quality of the under-screen imaging system.
在一种可能的实现方式中,根据高亮物体的光谱成分及显示屏的模型,生成第二点扩散函数,可以包括:将显示屏的模型作为透过率函数,结合光波传播因子,并以高亮物体的光谱成分作为权重,得到第二点扩散函数;其中,显示屏的模型包括:显示屏的振幅调制函数和相位调制特性函数;光波传播因子根据摄像头拍摄待处理图像时的焦距及高亮物体的波长确定。In a possible implementation manner, generating the second point spread function according to the spectral components of the highlighted object and the model of the display screen may include: taking the model of the display screen as the transmittance function, combining the light wave propagation factor, and using The spectral components of the highlighted objects are used as weights to obtain the second point spread function; wherein, the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; The wavelength of bright objects is determined.
示例性地,可以通过上述公式(1),根据摄像头拍摄时的实际焦距、显示屏上的像素点的坐标、摄像头的图像传感器上的像素点的坐标、高亮物体的光谱成分、显示屏的相位调制函数及振幅调制函数,确定第二点扩散函数。Exemplarily, the above formula (1) can be used to determine the actual focal length of the camera when shooting, the coordinates of the pixels on the display screen, the coordinates of the pixels on the image sensor of the camera, the spectral components of the highlighted objects, and the coordinates of the display screen. The phase modulation function and the amplitude modulation function determine the second point spread function.
进一步地,可以通过上述公式(2),根据第二点扩散函数、图像传感器的红色滤波片的透过率曲线、绿色滤波片的透过率曲线及蓝色滤波片的透过率曲线,进行积分运算,得到第一点扩散函数,其中,图像传感器的红色滤波片的透过率曲线、绿色滤波片的透过率曲线及蓝色滤波片的透过率曲线可以如图9所示。Further, by the above formula (2), according to the second point spread function, the transmittance curve of the red filter of the image sensor, the transmittance curve of the green filter and the transmittance curve of the blue filter, the Integrate operation to obtain the first point spread function, wherein the transmittance curve of the red filter, the transmittance curve of the green filter and the transmittance curve of the blue filter of the image sensor can be shown in FIG. 9 .
这样,从眩光产生的根本原因出发,即显示屏周期性的像素排布导致的衍射效应是对波长敏感的,将通过振幅调制函数和相位调制特性函数所构建的显示屏模型作为透过率函数,并根据摄像头拍摄待处理图像时的焦距及高亮物体的实际波长确定光波传播因子,同时以高亮物体的光谱成分作为权重,得到高亮物体的不同波长经过屏下成像系统后对应的点扩散函数,所得到的点扩散函数能够更加真实地表达高亮物体通过屏下成像产生眩光过程。In this way, starting from the root cause of glare, that is, the diffraction effect caused by the periodic pixel arrangement of the display screen is sensitive to wavelength, the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function. , and determine the light wave propagation factor according to the focal length of the camera to capture the image to be processed and the actual wavelength of the highlighted object, and at the same time use the spectral component of the highlighted object as the weight to obtain the different wavelengths of the highlighted object after passing through the under-screen imaging system. Spread function, the obtained point spread function can more realistically express the glare process of bright objects through under-screen imaging.
在一种可能的实现方式中,该步骤1302可以包括:根据高亮物体的光谱成分及预设的第三点扩散函数,生成所述第一点扩散函数;其中,第三点扩散函数为不同波长经过屏下成像系统后对应的点扩散函数。In a possible implementation manner, step 1302 may include: generating the first point spread function according to the spectral components of the highlighted object and a preset third point spread function; wherein the third point spread function is different The point spread function corresponding to the wavelength after passing through the under-screen imaging system.
示例性地,可以通过在实验室预先标定的方式,得到第三点扩散函数,并存储在电子设备中,例如,第三点扩散函数可以如图11所示,这样,通过在存储的不同波长经过屏下成像系统后所对应的点扩散函数,结合上述高亮物体的光谱成分,可以快速生成屏下成像系统中至少一个通道的点扩散函数,显著提升了计算效率,进而提升了去眩光的处理速度,可以适用于视频拍摄等处理数据量大且对处理速度要求高的去眩光处理场景。Exemplarily, the third point spread function can be obtained by pre-calibrating in the laboratory and stored in the electronic device, for example, the third point spread function can be shown in FIG. The point spread function corresponding to the under-screen imaging system, combined with the spectral components of the above-mentioned bright objects, can quickly generate the point spread function of at least one channel in the under-screen imaging system, which significantly improves the computational efficiency and improves the anti-glare effect. The processing speed can be applied to the anti-glare processing scene with a large amount of processing data and high processing speed requirements, such as video shooting.
该实现方式的具体说明可以参照上述实施例中图4中步骤403中关于“根据高亮物体的光谱成分,生成RGB三通道的点扩散函数的方式二”的相关表述,此处不再赘述。For the specific description of this implementation, refer to the relevant expression in step 403 in FIG. 4 in the above-mentioned embodiment about “Method 2 of generating the point spread function of three RGB channels according to the spectral components of the highlighted object”, which will not be repeated here.
步骤1303、对第一点扩散函数及待处理图像进行解卷积处理,得到去除眩光后的图像。Step 1303: Perform deconvolution processing on the first point spread function and the to-be-processed image to obtain a glare-removed image.
示例性地,可以通过神经网络学习、凸优化解卷积、非凸优化解卷积等非盲解卷积算法, 对第一点扩散函数及待处理图像进行解卷积运算,从而得到去除眩光后的图像,例如,去除眩光后的图像如图2B所示。Exemplarily, non-blind deconvolution algorithms such as neural network learning, convex optimization deconvolution, and non-convex optimization deconvolution can be used to perform deconvolution operations on the first point spread function and the image to be processed, thereby obtaining glare removal. The resulting image, for example, after removing the glare is shown in Figure 2B.
在一种可能的实现方式中,在步骤1303之前,还可以将待处理图像分割为第一区域及第二区域;其中,第一区域为高亮物体及其产生的眩光所在的区域;第二区域为待处理图像中第一区域以外的区域。相应的,步骤1303可以包括:通过对第一点扩散函数及第一区域,进行解卷积处理,得到去除眩光后的第一区域;将去除眩光后的第一区域与第二区域融合,得到去除眩光后的图像。In a possible implementation, before step 1303, the image to be processed may also be divided into a first area and a second area; wherein the first area is the area where the highlighted object and the glare generated by it are located; the second area The area is the area other than the first area in the image to be processed. Correspondingly, step 1303 may include: performing deconvolution processing on the first point spread function and the first region to obtain the first region after removing glare; and merging the first region and the second region after removing the glare to obtain Image after removing glare.
例如,第一区域可以为上述实施例中的预览图像中高亮物体所在的眩光区域,如图3中所示的眩光区域302,相应的,第二区域可以为上述实施例中的预览图像中除眩光区域以外的非眩光区域;再例如,第一区域可以为上述实施例中的眩光星芒区域,如图5中所示的眩光区域502,相应的,第二区域可以为上述实施例中的除眩光星芒区域以外的非眩光星芒区域。当第一区域为上述实施例中的眩光星芒区域,去除眩光后的第一区域可以如图12中所示。For example, the first area may be the glare area where the highlighted object is located in the preview image in the foregoing embodiment, such as the glare area 302 shown in FIG. 3 . Correspondingly, the second area may be the glare area except for the preview image in the foregoing embodiment. A non-glare area other than the glare area; for another example, the first area may be the glare starburst area in the above-mentioned embodiment, such as the glare area 502 shown in FIG. 5 , and correspondingly, the second area may be the above-mentioned embodiment. A non-glare starburst area other than the glare starburst area. When the first area is the glare starburst area in the above embodiment, the first area after removing the glare may be as shown in FIG. 12 .
这样,将待处理图像分割为高亮物体及其产生的眩光所在的第一区域及不包含高亮物体及其产生的眩光的第二区域,进而针对高亮物体及其产生的眩光所在的第一区域执行进一步的去眩光处理,提高了处理效率,节约了处理资源;同时,可以减轻或消除在去眩光处理过程中对待处理图像中不包含高亮物体及其产生的眩光的第二区域的影响。In this way, the image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not include the highlighted object and the glare generated by it, and then the first area where the highlighted object and the glare generated by it are located. Further anti-glare processing is performed on one area, which improves processing efficiency and saves processing resources; at the same time, it can reduce or eliminate the second area in the image to be processed that does not contain bright objects and the glare generated by them during the anti-glare processing. influences.
该实现方式的具体说明可以参照上述实施例中图4中步骤400、步骤401、步骤404及步骤405中的相关表述,此处不再赘述。For the specific description of the implementation manner, reference may be made to the relevant expressions in step 400, step 401, step 404, and step 405 in FIG. 4 in the foregoing embodiment, which will not be repeated here.
本申请实施例中,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。In the embodiment of the present application, for the image to be processed collected by the under-screen imaging system, the image to be processed includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the image to be processed, different highlight objects are generated. The corresponding point spread function of at least one channel in the under-screen imaging system is used, and then the first point spread function and the image to be processed are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve Or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
上述实施例的各种可能的实现方式或说明参见上文,此处不再赘述。Various possible implementation manners or descriptions of the foregoing embodiments are referred to above, and details are not repeated here.
基于上述方法实施例的同一发明构思,本申请的实施例还提供了一种图像处理装置,该图像处理用于执行上述方法实施例所描述的技术方案。Based on the same inventive concept of the above method embodiments, the embodiments of the present application further provide an image processing apparatus, and the image processing is used to execute the technical solutions described in the above method embodiments.
图14示出了根据本申请一实施例的一种图像处理装置的结构图,如图14所示,该装置可以包括:获取模块1401,用于获取待处理图像;待处理图像为屏下成像系统所采集的图像,屏下成像系统包括显示屏及设置在显示屏下方的摄像头;待处理图像包括:高亮物体及其产生的眩光,其中,高亮物体为待处理图像中呈现指定颜色和/或亮度的拍摄对象;生成模块1402,用于根据高亮物体的光谱成分,生成第一点扩散函数;第一点扩散函数为屏下成像系统中至少一个通道的点扩散函数;处理模块1403,用于对第一点扩散函数及待处理图像进行解卷积处理,得到去除眩光后的图像。FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 14 , the apparatus may include: an acquisition module 1401 for acquiring an image to be processed; the image to be processed is off-screen imaging For the images collected by the system, the under-screen imaging system includes a display screen and a camera set below the display screen; the images to be processed include: highlighted objects and the glare generated by them, wherein the highlighted objects are the specified colors and colors in the image to be processed. / or a photographic object of brightness; the generating module 1402 is used to generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system; the processing module 1403 , which is used to perform deconvolution processing on the first point spread function and the image to be processed to obtain an image after glare removal.
在一种可能的实现方式中,生成模块,还用于:根据高亮物体的光谱成分及显示屏的模型,生成第二点扩散函数;其中,第二点扩散函数为高亮物体的不同波长光线经过屏下成像系统后对应的点扩散函数;根据第二点扩散函数与摄像头的感光特性曲线,生成第一点扩散函数。In a possible implementation manner, the generation module is further configured to: generate a second point spread function according to the spectral components of the highlighted object and the model of the display screen; wherein the second point spread function is different wavelengths of the highlighted object The point spread function corresponding to the light passing through the under-screen imaging system; the first point spread function is generated according to the second point spread function and the photosensitive characteristic curve of the camera.
在一种可能的实现方式中,生成模块,还用于:根据高亮物体的光谱成分及预设的第三点扩散函数,生成第一点扩散函数;其中,第三点扩散函数为不同波长经过屏下成像系统后 对应的点扩散函数。In a possible implementation manner, the generating module is further configured to: generate a first point spread function according to the spectral components of the highlighted object and a preset third point spread function; wherein the third point spread function is a different wavelength The corresponding point spread function after passing through the under-screen imaging system.
在一种可能的实现方式中,该装置还可以包括:光谱测量设备,用于采集高亮物体的光谱成分。In a possible implementation manner, the apparatus may further include: a spectral measurement device, configured to collect spectral components of the highlighted object.
在一种可能的实现方式中,该装置还可以包括:采集模块,用于对待处理图像进行图像识别,确定高亮物体的光源类型;根据预设的不同光源类型的光谱成分,确定高亮物体的光谱成分。In a possible implementation manner, the device may further include: a collection module, configured to perform image recognition on the image to be processed, and determine the light source type of the highlighted object; and determine the highlighted object according to preset spectral components of different light source types spectral composition.
在一种可能的实现方式中,该装置还可以包括:分割模块,用于:将待处理图像分割为第一区域及第二区域;其中,第一区域为所述高亮物体及其产生的眩光所在的区域;第二区域为待处理图像中第一区域以外的区域;处理模块,还用于:通过对第一点扩散函数及第一区域,进行解卷积处理,得到去除眩光后的第一区域;将去除所述眩光后的第一区域与第二区域融合,得到去除所述眩光后的图像。In a possible implementation manner, the apparatus may further include: a segmentation module, configured to: segment the to-be-processed image into a first area and a second area; wherein the first area is the highlighted object and its generated the area where the glare is located; the second area is an area other than the first area in the image to be processed; the processing module is also used for: deconvolution processing the first point spread function and the first area to obtain the glare-removed image The first area; the glare-removed first area is fused with the second area to obtain the glare-removed image.
在一种可能的实现方式中,生成模块,还用于:将显示屏的模型作为透过率函数,结合光波传播因子,并以高亮物体的光谱成分作为权重,得到第二点扩散函数;其中,显示屏的模型包括:显示屏的振幅调制函数和相位调制特性函数;光波传播因子根据摄像头拍摄待处理图像时的焦距及所述高亮物体的波长确定。In a possible implementation manner, the generation module is further used to: take the model of the display screen as the transmittance function, combine the light wave propagation factor, and take the spectral component of the highlighted object as the weight to obtain the second point spread function; Wherein, the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; the light wave propagation factor is determined according to the focal length when the camera captures the image to be processed and the wavelength of the highlighted object.
本申请实施例中,针对屏下成像系统所采集的待处理图像,该待处理图像包括:高亮物体及其产生的眩光,根据待处理图像中高亮物体的光谱成分,生成与不同高亮物体相适应的屏下成像系统中至少一个通道的点扩散函数,进而对第一点扩散函数及待处理图像进行解卷积处理,得到去除高亮物体所产生的眩光后的图像;从而可以有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。In the embodiment of the present application, for the image to be processed collected by the under-screen imaging system, the image to be processed includes: a highlight object and the glare generated by it, and according to the spectral components of the highlight object in the image to be processed, different highlight objects are generated. The corresponding point spread function of at least one channel in the under-screen imaging system is used, and then the first point spread function and the image to be processed are deconvolved to obtain an image after removing the glare generated by the bright object; thus, it can effectively improve Or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare, which improves the clarity of the image and enhances the user experience.
上述实施例的各种可能的实现方式或说明参见上文,此处不再赘述。Various possible implementation manners or descriptions of the foregoing embodiments are referred to above, and details are not repeated here.
本申请的实施例提供了一种电子设备,该电子设备可以包括:显示屏、显示屏下方的摄像头、处理器和存储器;其中,摄像头用于透过显示屏采集待处理图像;显示屏用于显示待处理图像和去眩光后的图像;存储器,用于存储处理器可执行指令;处理器被配置为执行指令时实现上述实施例的图像处理方法。An embodiment of the present application provides an electronic device, which may include: a display screen, a camera below the display screen, a processor, and a memory; wherein, the camera is used to collect images to be processed through the display screen; the display screen is used for Display the image to be processed and the image after deglare; the memory is used to store the processor-executable instructions; the processor is configured to implement the image processing method of the above embodiment when the instruction is executed.
图15示出根据本申请一实施例的电子设备的结构示意图。以电子设备是手机为例,图15示出了手机200的结构示意图。FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. Taking the electronic device as a mobile phone as an example, FIG. 15 shows a schematic structural diagram of the mobile phone 200 .
手机200可以包括处理器210,外部存储器接口220,内部存储器221,USB接口230,充电管理模块240,电源管理模块241,电池242,天线1,天线2,移动通信模块251,无线通信模块252,音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,传感器模块280,按键290,马达291,指示器292,摄像头293,显示屏294,以及SIM卡接口295等。其中传感器模块280可以包括陀螺仪传感器280A,加速度传感器280B,接近光传感器280G、指纹传感器280H,触摸传感器280K(当然,手机200还可以包括其它传感器,比如色温传感器、温度传感器、压力传感器、距离传感器、磁传感器、环境光传感器、气压传感器、骨传导传感器等,图中未示出)。The mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, Audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone jack 270D, sensor module 280, buttons 290, motor 291, indicator 292, camera 293, display screen 294, SIM card interface 295, etc. The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a color temperature sensor, a temperature sensor, a pressure sensor, and a distance sensor. , magnetic sensor, ambient light sensor, air pressure sensor, bone conduction sensor, etc., not shown in the figure).
可以理解的是,本申请实施例示意的结构并不构成对手机200的具体限定。在本申请另一些实施例中,手机200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the mobile phone 200 . In other embodiments of the present application, the mobile phone 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括应用处理器 (application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是手机200的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The processor 210 may include one or more processing units, for example, the processor 210 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, memory, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or Neural-network Processing Unit (NPU) Wait. Wherein, different processing units may be independent devices, or may be integrated in one or more processors. The controller may be the nerve center and command center of the mobile phone 200 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
处理器210中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器210中的存储器为高速缓冲存储器。该存储器可以保存处理器210刚用过或循环使用的指令或数据。如果处理器210需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器210的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in processor 210 is cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 210 . If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided, and the waiting time of the processor 210 is reduced, thereby improving the efficiency of the system.
处理器210可以运行本申请实施例提供的图像处理方法,以便于有效改善或消除显示屏下摄像头拍摄的图像产生的眩光,尤其是彩虹眩光,提高了图像的清晰度,提升了用户体验。处理器210可以包括不同的器件,比如集成CPU和GPU/NPU时,CPU和GPU/NPU可以配合执行本申请实施例提供的图像处理方法,比如图像处理方法中部分算法由CPU执行,另一部分算法由GPU/NPU执行,以得到较快的处理效率。The processor 210 can run the image processing method provided by the embodiments of the present application, so as to effectively improve or eliminate glare generated by the image captured by the camera under the display screen, especially rainbow glare, which improves the clarity of the image and improves the user experience. The processor 210 may include different devices. For example, when the CPU and the GPU/NPU are integrated, the CPU and the GPU/NPU may cooperate to execute the image processing method provided by the embodiments of the present application. For example, some algorithms in the image processing method are executed by the CPU, and another part of the algorithms Executed by GPU/NPU for faster processing efficiency.
显示屏294用于显示图像,视频等。显示屏294包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,手机200可以包括1个或N个显示屏294,N为大于1的正整数。显示屏294可用于显示由用户输入的信息或提供给用户的信息以及各种图形用户界面(graphical user interface,GUI)。例如,显示器294可以显示照片、视频、网页、或者文件等。当处理器210检测到用户的手指(或触控笔等)针对某一应用图标的触摸事件后,响应于该触摸事件,打开与该应用图标对应的应用的用户界面,并在显示器294上显示该应用的用户界面。 Display screen 294 is used to display images, videos, and the like. Display screen 294 includes a display panel. The display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode or an active-matrix organic light-emitting diode (active-matrix organic light). emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light-emitting diode (quantum dot light emitting diodes, QLED) and so on. In some embodiments, cell phone 200 may include 1 or N display screens 294, where N is a positive integer greater than 1. The display screen 294 may be used to display information entered by or provided to the user as well as various graphical user interfaces (GUIs). For example, display 294 may display photos, videos, web pages, or documents, and the like. After the processor 210 detects a touch event of the user's finger (or stylus, etc.) on an application icon, in response to the touch event, the user interface of the application corresponding to the application icon is opened and displayed on the display 294 The user interface of the application.
在本申请实施例中,显示屏294可以是一个一体的柔性显示屏,也可以采用两个刚性屏以及位于两个刚性屏之间的一个柔性屏组成的拼接显示屏。In this embodiment of the present application, the display screen 294 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
摄像头293(前置摄像头或者后置摄像头,或者一个摄像头既可作为前置摄像头,也可作为后置摄像头)用于捕获静态图像或视频,摄像头293可以设置在显示屏294的下方。通常,摄像头293可以包括感光元件比如镜头组和图像传感器,其中,镜头组包括多个透镜(凸透镜或凹透镜),用于采集待拍摄物体反射的光信号,并将采集的光信号传递给图像传感器。图像传感器根据所述光信号生成待拍摄物体的原始图像。The camera 293 (a front camera or a rear camera, or a camera can be used as both a front camera and a rear camera) is used to capture still images or videos, and the camera 293 can be arranged below the display screen 294 . Generally, the camera 293 may include a photosensitive element such as a lens group and an image sensor, wherein the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed, and transmitting the collected light signal to the image sensor . The image sensor generates an original image of the object to be photographed according to the light signal.
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器210通过运行存储在内部存储器221的指令,从而执行手机200的各种功能应用以及数据处理。内部存储器221可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,应用程序(比如相机应用等)的代码等。存储数据区可存储手机200使用过程中所创建的数据(比如相机应用采集的图像、视频等)等。 Internal memory 221 may be used to store computer executable program code, which includes instructions. The processor 210 executes various functional applications and data processing of the mobile phone 200 by executing the instructions stored in the internal memory 221 . The internal memory 221 may include a storage program area and a storage data area. Wherein, the storage program area may store the operating system, the code of the application (such as a camera application, etc.), and the like. The storage data area may store data created during the use of the mobile phone 200 (such as images and videos collected by the camera application) and the like.
内部存储器221还可以存储本申请实施例提供的图像处理方法对应的一个或多个计算机程序。该一个或多个计算机程序被存储在上述存储器221中并被配置为被该一个或多个处理 器210执行,该一个或多个计算机程序包括指令,上述指令可以用于执行上述相应实施例中的各个步骤,该计算机程序可以包括获取模块1401,用于获取待处理图像;待处理图像为屏下成像系统所采集的图像,屏下成像系统包括显示屏及设置在显示屏下方的摄像头;待处理图像包括:高亮物体及其产生的眩光,其中,高亮物体为待处理图像中呈现指定颜色和/或亮度的拍摄对象;生成模块1402,用于根据高亮物体的光谱成分,生成第一点扩散函数;第一点扩散函数为屏下成像系统中至少一个通道的点扩散函数;处理模块1403,用于对第一点扩散函数及待处理图像进行解卷积处理,得到去除眩光后的图像。The internal memory 221 may also store one or more computer programs corresponding to the image processing methods provided in the embodiments of the present application. The one or more computer programs are stored in the above-mentioned memory 221 and configured to be executed by the one or more processors 210, and the one or more computer programs include instructions that can be used to perform the corresponding embodiments described above. The computer program may include an acquisition module 1401 for acquiring an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display screen and a camera set below the display screen; The processed image includes: a highlight object and the glare generated by it, wherein the highlight object is a photographic object exhibiting a specified color and/or brightness in the image to be processed; the generating module 1402 is configured to generate a first a point spread function; the first point spread function is the point spread function of at least one channel in the under-screen imaging system; the processing module 1403 is used to perform deconvolution processing on the first point spread function and the image to be processed to obtain a glare-removed image Image.
此外,内部存储器221可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。In addition, the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
当然,本申请实施例提供的图像处理方法的代码还可以存储在外部存储器中。这种情况下,处理器210可以通过外部存储器接口220运行存储在外部存储器中的图像处理方法的代码。Certainly, the code of the image processing method provided by the embodiment of the present application may also be stored in an external memory. In this case, the processor 210 may execute the code of the image processing method stored in the external memory through the external memory interface 220 .
示例性的,手机200的显示屏294显示主界面,主界面中包括多个应用(比如相机应用等)的图标。用户通过触摸传感器280K点击主界面中相机应用的图标,触发处理器210启动相机应用,打开摄像头293。显示屏294显示相机应用的界面,例如拍摄界面。Exemplarily, the display screen 294 of the mobile phone 200 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, etc.). The user clicks the icon of the camera application in the main interface through the touch sensor 280K, which triggers the processor 210 to start the camera application and turn on the camera 293 . Display screen 294 displays an interface of a camera application, such as a capture interface.
移动通信模块251可以提供应用在手机200上的包括2G/3G/4G/5G等无线通信的解决方案。在本申请实施例中,移动通信模块251还可以用于与其它设备进行信息交互(如:获取光谱成分、待处理图像等)。The mobile communication module 251 can provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the mobile phone 200 . In this embodiment of the present application, the mobile communication module 251 may also be used for information interaction with other devices (eg, acquiring spectral components, images to be processed, etc.).
手机200可以通过音频模块270,扬声器270A,受话器270B,麦克风270C,耳机接口270D,以及应用处理器等实现音频功能。例如音乐播放,录音等。手机200可以接收按键290输入,产生与手机200的用户设置以及功能控制有关的键信号输入。手机200可以利用马达291产生振动提示(比如开启去眩光功能提示)。手机200中的指示器292可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,提示信息、未接来电,通知等。The mobile phone 200 can implement audio functions through an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone interface 270D, and an application processor. Such as music playback, recording, etc. The cell phone 200 can receive key 290 input and generate key signal input related to user settings and function control of the cell phone 200 . The mobile phone 200 can use the motor 291 to generate a vibration prompt (for example, a prompt for turning on the anti-glare function). The indicator 292 in the mobile phone 200 can be an indicator light, which can be used to indicate a charging state, a change in power, or a message, a prompt message, a missed call, a notification, and the like.
应理解,在实际应用中,手机200可以包括比图13所示的更多或更少的部件,本申请实施例不作限定。图示手机200仅是一个范例,并且手机200可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。It should be understood that, in practical applications, the mobile phone 200 may include more or less components than those shown in FIG. 13 , which are not limited in this embodiment of the present application. The illustrated handset 200 is merely an example, and the handset 200 may have more or fewer components than those shown, two or more components may be combined, or may have different component configurations. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
电子设备的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备的软件结构。The software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of an electronic device.
图16是本申请实施例的电子设备的软件结构框图。FIG. 16 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
应用程序层可以包括一系列应用程序包。The application layer can include a series of application packages.
如图16所示,应用程序包可以包括电话、相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。As shown in Figure 16, the application package may include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer. The application framework layer includes some predefined functions.
如图16所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。As shown in Figure 16, the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。A window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data and make these data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications. A display interface can consist of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。The notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。The core library consists of two parts: one is the function functions that the java language needs to call, and the other is the core library of Android.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and the application framework layer run in virtual machines. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。A system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。The Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
2D图形引擎是2D绘图的绘图引擎。2D graphics engine is a drawing engine for 2D drawing.
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。The kernel layer is the layer between hardware and software. The kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
本申请的实施例提供了一种图像处理装置,包括:处理器以及用于存储处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令时实现上述方法。An embodiment of the present application provides an image processing apparatus, including: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the above method when executing the instructions.
本申请的实施例提供了一种非易失性计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现上述方法。Embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, implement the above method.
本申请的实施例提供了一种计算机程序产品,包括计算机可读代码,或者承载有计算机可读代码的非易失性计算机可读存储介质,当所述计算机可读代码在电子设备的处理器中运行时,所述电子设备中的处理器执行上述方法。Embodiments of the present application provide a computer program product, including computer-readable codes, or a non-volatile computer-readable storage medium carrying computer-readable codes, when the computer-readable codes are stored in a processor of an electronic device When running in the electronic device, the processor in the electronic device executes the above method.
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的 例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(Random Access Memory,RAM)、只读存储器(Read Only Memory,ROM)、可擦式可编程只读存储器(Electrically Programmable Read-Only-Memory,EPROM或闪存)、静态随机存取存储器(Static Random-Access Memory,SRAM)、便携式压缩盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能盘(Digital Video Disc,DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。A computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (Electrically Programmable Read-Only-Memory, EPROM or flash memory), static random access memory (Static Random-Access Memory, SRAM), portable compact disk read-only memory (Compact Disc Read-Only Memory, CD - ROM), Digital Video Disc (DVD), memory sticks, floppy disks, mechanically encoded devices, such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the foregoing .
这里所描述的计算机可读程序指令或代码可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。Computer readable program instructions or code described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
用于执行本申请操作的计算机程序指令可以是汇编指令、指令集架构(Instruction Set Architecture,ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(Local Area Network,LAN)或广域网(Wide Area Network,WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(Field-Programmable Gate Array,FPGA)或可编程逻辑阵列(Programmable Logic Array,PLA),该电子电路可以执行计算机可读程序指令,从而实现本申请的各个方面。The computer program instructions used to perform the operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state setting data, or in one or more source or object code written in any combination of programming languages, including object-oriented programming languages such as Smalltalk, C++, etc., and conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server implement. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network—including a Local Area Network (LAN) or a Wide Area Network (WAN)—or, may be connected to an external computer (eg, use an internet service provider to connect via the internet). In some embodiments, electronic circuits, such as programmable logic circuits, Field-Programmable Gate Arrays (FPGA), or Programmable Logic Arrays (Programmable Logic Arrays), are personalized by utilizing state information of computer-readable program instructions. Logic Array, PLA), the electronic circuit can execute computer readable program instructions to implement various aspects of the present application.
这里参照根据本申请实施例的方法、装置(系统)和计算机程序产品的流程图和/或框图描述了本申请的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer or other programmable data processing apparatus to produce a machine that causes the instructions when executed by the processor of the computer or other programmable data processing apparatus , resulting in means for implementing the functions/acts specified in one or more blocks of the flowchart and/or block diagrams. These computer readable program instructions can also be stored in a computer readable storage medium, these instructions cause a computer, programmable data processing apparatus and/or other equipment to operate in a specific manner, so that the computer readable medium on which the instructions are stored includes An article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowchart and/or block diagrams.
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。Computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other equipment to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process , thereby causing instructions executing on a computer, other programmable data processing apparatus, or other device to implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
附图中的流程图和框图显示了根据本申请的多个实施例的装置、系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more functions for implementing the specified logical function(s) executable instructions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行相应的功能或动作的硬件(例如电路或ASIC(Application Specific Integrated Circuit,专用集成电路))来实现,或者可以用硬件和软件的组合,如固件等来实现。It is also noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in hardware (eg, circuits or ASICs (Application) that perform the corresponding functions or actions. Specific Integrated Circuit, application-specific integrated circuit)), or can be implemented by a combination of hardware and software, such as firmware.
尽管在此结合各实施例对本发明进行了描述,然而,在实施所要求保护的本发明过程中,本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书,可理解并实现所述公开实施例的其它变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其它单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。While the invention has been described herein in connection with various embodiments, those skilled in the art will understand and understand from a review of the drawings, the disclosure, and the appended claims in practicing the claimed invention. Other variations of the disclosed embodiments are implemented. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that these measures cannot be combined to advantage.
以上已经描述了本申请的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中的技术的改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。Various embodiments of the present application have been described above, and the foregoing descriptions are exemplary, not exhaustive, and not limiting of the disclosed embodiments. Numerous modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or improvement over the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (16)

  1. 一种图像处理方法,其特征在于,所述方法包括:An image processing method, characterized in that the method comprises:
    获取待处理图像;所述待处理图像为屏下成像系统所采集的图像,所述屏下成像系统包括显示屏及设置在所述显示屏下方的摄像头;所述待处理图像包括:高亮物体及其产生的眩光,其中,所述高亮物体为所述待处理图像中呈现指定颜色或亮度的拍摄对象;Obtain an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display screen and a camera disposed below the display screen; the image to be processed includes: a highlighted object and the resulting glare, wherein the highlighted object is a photographic object showing a specified color or brightness in the to-be-processed image;
    根据所述高亮物体的光谱成分,生成第一点扩散函数;所述第一点扩散函数为所述屏下成像系统中至少一个通道的点扩散函数;generating a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system;
    对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像。Perform deconvolution processing on the first point spread function and the to-be-processed image to obtain an image after removing the glare.
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述高亮物体的光谱成分,生成第一点扩散函数,包括:The method according to claim 1, wherein the generating the first point spread function according to the spectral components of the highlighted object comprises:
    根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数;其中,所述第二点扩散函数为所述高亮物体的不同波长经过所述屏下成像系统后对应的点扩散函数;According to the spectral components of the highlighted object and the model of the display screen, a second point spread function is generated; wherein, the second point spread function is obtained after the different wavelengths of the highlighted object pass through the under-screen imaging system The corresponding point spread function;
    根据所述第二点扩散函数及所述摄像头的感光特性曲线,生成所述第一点扩散函数。The first point spread function is generated according to the second point spread function and the photosensitive characteristic curve of the camera.
  3. 根据权利要求1所述的方法,其特征在于,所述根据所述高亮物体的光谱成分,生成第一点扩散函数,包括:The method according to claim 1, wherein the generating the first point spread function according to the spectral components of the highlighted object comprises:
    根据所述高亮物体的光谱成分及预设的第三点扩散函数,生成所述第一点扩散函数;其中,所述第三点扩散函数为不同波长经过所述屏下成像系统后对应的点扩散函数。The first point spread function is generated according to the spectral components of the highlighted object and a preset third point spread function; wherein, the third point spread function is corresponding to different wavelengths after passing through the under-screen imaging system point spread function.
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    获取光谱测量设备所采集的所述高亮物体的光谱成分。Obtain the spectral components of the bright object collected by the spectral measuring device.
  5. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    对所述待处理图像进行图像识别,确定所述高亮物体的光源类型;Perform image recognition on the to-be-processed image to determine the light source type of the highlighted object;
    根据预设的不同光源类型的光谱成分,确定所述高亮物体的光谱成分。According to preset spectral components of different light source types, the spectral components of the highlighted object are determined.
  6. 根据权利要求1所述的方法,其特征在于,所述方法还包括:The method according to claim 1, wherein the method further comprises:
    将所述待处理图像分割为第一区域及第二区域;其中,所述第一区域为所述高亮物体及其产生的眩光所在的区域;所述第二区域为所述待处理图像中第一区域以外的区域;Divide the image to be processed into a first area and a second area; wherein, the first area is the area where the bright object and the glare generated by it are located; the second area is the area in the image to be processed areas other than the first area;
    所述对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像,包括:The performing deconvolution processing on the first point spread function and the to-be-processed image to obtain the image after removing the glare, including:
    通过对所述第一点扩散函数及所述第一区域,进行解卷积处理,得到去除所述眩光后的 第一区域;By performing deconvolution processing on the first point spread function and the first region, the first region after removing the glare is obtained;
    将所述去除所述眩光后的第一区域与所述第二区域融合,得到所述去除所述眩光后的图像。The glare-removed first area is fused with the second area to obtain the glare-removed image.
  7. 根据权利要求2所述的方法,其特征在于,所述根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数,包括:The method according to claim 2, wherein the generating a second point spread function according to the spectral components of the highlighted object and the model of the display screen, comprising:
    将所述显示屏的模型作为透过率函数,结合光波传播因子,并以所述高亮物体的光谱成分作为权重,得到所述第二点扩散函数;Taking the model of the display screen as a transmittance function, combining the light wave propagation factor, and taking the spectral components of the highlighted object as a weight, the second point spread function is obtained;
    其中,所述显示屏的模型包括:所述显示屏的振幅调制函数和相位调制特性函数;所述光波传播因子根据所述摄像头拍摄所述待处理图像时的焦距及所述高亮物体的波长确定。Wherein, the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; the light wave propagation factor is based on the focal length when the camera captures the to-be-processed image and the wavelength of the highlighted object Sure.
  8. 一种图像处理装置,其特征在于,所述装置包括:An image processing device, characterized in that the device comprises:
    获取模块,用于获取待处理图像;所述待处理图像为屏下成像系统所采集的图像,所述屏下成像系统包括显示屏及设置在所述显示屏下方的摄像头;所述待处理图像包括:高亮物体及其产生的眩光,其中,所述高亮物体为所述待处理图像中呈现指定颜色或亮度的拍摄对象;an acquisition module, configured to acquire an image to be processed; the image to be processed is an image collected by an under-screen imaging system, and the under-screen imaging system includes a display screen and a camera arranged under the display screen; the image to be processed Including: a highlight object and the glare generated by it, wherein the highlight object is a shooting object showing a specified color or brightness in the image to be processed;
    生成模块,用于根据所述高亮物体的光谱成分,生成第一点扩散函数;所述第一点扩散函数为所述屏下成像系统中至少一个通道的点扩散函数;a generating module, configured to generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel in the under-screen imaging system;
    处理模块,用于对所述第一点扩散函数及所述待处理图像进行解卷积处理,得到去除所述眩光后的图像。A processing module, configured to perform deconvolution processing on the first point spread function and the to-be-processed image to obtain an image after removing the glare.
  9. 根据权利要求8所述的装置,其特征在于,所述生成模块,还用于:根据所述高亮物体的光谱成分及所述显示屏的模型,生成第二点扩散函数;其中,所述第二点扩散函数为所述高亮物体的不同波长光线经过所述屏下成像系统后对应的点扩散函数;根据所述第二点扩散函数与所述摄像头的感光特性曲线,生成所述第一点扩散函数。The device according to claim 8, wherein the generating module is further configured to: generate a second point spread function according to the spectral components of the highlighted object and the model of the display screen; wherein, the The second point spread function is the point spread function corresponding to the light of different wavelengths of the highlighted object after passing through the under-screen imaging system; according to the second point spread function and the photosensitive characteristic curve of the camera, the first point spread function is generated. A little spread function.
  10. 根据权利要求8所述的装置,其特征在于,所述生成模块,还用于:根据所述高亮物体的光谱成分及预设的第三点扩散函数,生成所述第一点扩散函数;其中,所述第三点扩散函数为不同波长经过所述屏下成像系统后对应的点扩散函数。The device according to claim 8, wherein the generating module is further configured to: generate the first point spread function according to the spectral components of the highlighted object and a preset third point spread function; Wherein, the third point spread function is a point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
  11. 根据权利要求8所述的装置,其特征在于,所述装置还包括:光谱测量设备,用于采集所述高亮物体的光谱成分。The apparatus according to claim 8, characterized in that, the apparatus further comprises: a spectrum measuring device, configured to collect the spectral components of the highlighted object.
  12. 根据权利要求8所述的装置,其特征在于,所述装置还包括:采集模块,用于:对所述待处理图像进行图像识别,确定所述高亮物体的光源类型;根据预设的不同光源类型的光谱成分,确定所述高亮物体的光谱成分。The device according to claim 8, wherein the device further comprises: an acquisition module, configured to: perform image recognition on the to-be-processed image, and determine the light source type of the highlighted object; The spectral composition of the light source type, to determine the spectral composition of the highlighted object.
  13. 根据权利要求8所述的装置,其特征在于,所述装置还包括:分割模块,用于:将 所述待处理图像分割为第一区域及第二区域;其中,所述第一区域为所述高亮物体及其产生的眩光所在的区域;所述第二区域为所述待处理图像中第一区域以外的区域;The device according to claim 8, wherein the device further comprises: a segmentation module, configured to: segment the to-be-processed image into a first area and a second area; wherein the first area is the The area where the highlighted object and the glare generated by it are located; the second area is the area other than the first area in the image to be processed;
    所述处理模块,还用于:通过对所述第一点扩散函数及所述第一区域,进行解卷积处理,得到去除所述眩光后的第一区域;将所述去除所述眩光后的第一区域与所述第二区域融合,得到所述去除所述眩光后的图像。The processing module is further configured to: perform deconvolution processing on the first point spread function and the first region to obtain the first region after removing the glare; The first area of is fused with the second area to obtain the image after removing the glare.
  14. 根据权利要求9所述的装置,其特征在于,所述生成模块,还用于:将所述显示屏的模型作为透过率函数,结合光波传播因子,并以所述高亮物体的光谱成分作为权重,得到所述第二点扩散函数;其中,所述显示屏的模型包括:所述显示屏的振幅调制函数和相位调制特性函数;所述光波传播因子根据所述摄像头拍摄所述待处理图像时的焦距及所述高亮物体的波长确定。The device according to claim 9, wherein the generating module is further configured to: take the model of the display screen as a transmittance function, combine with a light wave propagation factor, and use the spectral component of the highlighted object As the weight, the second point spread function is obtained; wherein, the model of the display screen includes: the amplitude modulation function and the phase modulation characteristic function of the display screen; The focal length of the image and the wavelength of the highlighted object are determined.
  15. 一种电子设备,其特征在于,包括:显示屏、显示屏下方的摄像头、处理器和存储器;其中,An electronic device, characterized by comprising: a display screen, a camera below the display screen, a processor and a memory; wherein,
    所述摄像头用于透过所述显示屏采集待处理图像;The camera is used to collect images to be processed through the display screen;
    所述显示屏用于显示所述待处理图像和去眩光后的图像;the display screen is used for displaying the to-be-processed image and the image after glare removal;
    所述存储器,用于存储处理器可执行指令;the memory for storing processor executable instructions;
    所述处理器被配置为执行所述指令时实现权利要求1-7任意一项所述的方法。The processor is configured to implement the method of any of claims 1-7 when executing the instructions.
  16. 一种非易失性计算机可读存储介质,其上存储有计算机程序指令,其特征在于,所述计算机程序指令被处理器执行时实现权利要求1-7中任意一项所述的方法。A non-volatile computer-readable storage medium on which computer program instructions are stored, characterized in that, when the computer program instructions are executed by a processor, the method described in any one of claims 1-7 is implemented.
PCT/CN2021/137491 2020-12-14 2021-12-13 Image processing method and apparatus, electronic device, and storage medium WO2022127738A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011474431.3A CN114626995A (en) 2020-12-14 2020-12-14 Image processing method and device, electronic equipment and storage medium
CN202011474431.3 2020-12-14

Publications (1)

Publication Number Publication Date
WO2022127738A1 true WO2022127738A1 (en) 2022-06-23

Family

ID=81897184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137491 WO2022127738A1 (en) 2020-12-14 2021-12-13 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114626995A (en)
WO (1) WO2022127738A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050083517A1 (en) * 2003-10-03 2005-04-21 Abu-Tarif Asad Methods, system, and program product for the detection and correction of spherical aberration
US20180091705A1 (en) * 2016-09-26 2018-03-29 Rambus Inc. Methods and Systems for Reducing Image Artifacts
CN107907483A (en) * 2017-08-14 2018-04-13 西安电子科技大学 A kind of super-resolution spectrum imaging system and method based on scattering medium
CN110087051A (en) * 2019-04-19 2019-08-02 清华大学 Cromogram dazzle minimizing technology and system based on HSV color space
CN111123538A (en) * 2019-09-17 2020-05-08 印象认知(北京)科技有限公司 Image processing method and method for adjusting diffraction screen structure based on point spread function
CN111246053A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Image processing method and electronic device
CN111812758A (en) * 2020-07-21 2020-10-23 欧菲微电子技术有限公司 Diffractive optical element, manufacturing method thereof, optical system under screen and electronic equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050083517A1 (en) * 2003-10-03 2005-04-21 Abu-Tarif Asad Methods, system, and program product for the detection and correction of spherical aberration
US20180091705A1 (en) * 2016-09-26 2018-03-29 Rambus Inc. Methods and Systems for Reducing Image Artifacts
CN107907483A (en) * 2017-08-14 2018-04-13 西安电子科技大学 A kind of super-resolution spectrum imaging system and method based on scattering medium
CN110087051A (en) * 2019-04-19 2019-08-02 清华大学 Cromogram dazzle minimizing technology and system based on HSV color space
CN111123538A (en) * 2019-09-17 2020-05-08 印象认知(北京)科技有限公司 Image processing method and method for adjusting diffraction screen structure based on point spread function
CN111246053A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Image processing method and electronic device
CN111812758A (en) * 2020-07-21 2020-10-23 欧菲微电子技术有限公司 Diffractive optical element, manufacturing method thereof, optical system under screen and electronic equipment

Also Published As

Publication number Publication date
CN114626995A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US9571739B2 (en) Camera timer
WO2018072271A1 (en) Image display optimization method and device
US20220350470A1 (en) User Profile Picture Generation Method and Electronic Device
CN114586008A (en) Method and electronic equipment for displaying page elements
EP4120183A1 (en) Image enhancement method and electronic device
US20230043815A1 (en) Image Processing Method and Electronic Device
US20230345113A1 (en) Display control method and apparatus, electronic device, and medium
CN114640783B (en) Photographing method and related equipment
CN113938602B (en) Image processing method, electronic device, chip and readable storage medium
CN114115619A (en) Application program interface display method and electronic equipment
US20230245441A9 (en) Image detection method and apparatus, and electronic device
CN114926351B (en) Image processing method, electronic device, and computer storage medium
CN113452969B (en) Image processing method and device
EP4109879A1 (en) Image color retention method and device
US20230353864A1 (en) Photographing method and apparatus for intelligent framing recommendation
CN115589539B (en) Image adjustment method, device and storage medium
WO2022127738A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN116152123A (en) Image processing method, electronic device, and readable storage medium
CN114640798B (en) Image processing method, electronic device, and computer storage medium
US20240143077A1 (en) Machine Learning Based Forecasting of Human Gaze
EP4296840A1 (en) Method and apparatus for scrolling to capture screenshot
US20240062392A1 (en) Method for determining tracking target and electronic device
WO2016044983A1 (en) Image processing method and apparatus and electronic device
CN116708996B (en) Photographing method, image optimization model training method and electronic equipment
US20240040235A1 (en) Video shooting method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905667

Country of ref document: EP

Kind code of ref document: A1