WO2022127738A1 - Image processing method and apparatus, electronic device, and storage medium - Google Patents

Image processing method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022127738A1
WO2022127738A1 (application PCT/CN2021/137491, CN2021137491W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
point spread
glare
spread function
display screen
Prior art date
Application number
PCT/CN2021/137491
Other languages
English (en)
Chinese (zh)
Inventor
张华�
王津男
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022127738A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 - Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 - Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/42 - Diffraction optics, i.e. systems including a diffractive element being designed for providing a diffractive effect
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method, apparatus, electronic device, and storage medium.
  • Some existing full-screen mobile phones use a pop-up (lifting) front camera, and others use a sliding full-screen design. These designs bring the visual experience close to a 100% full screen, but their complex mechanical structures increase the thickness and weight of the phone and occupy too much internal space, and repeated use of the mechanism makes it difficult to meet the reliability needs of mobile phone users.
  • By instead placing the camera under the display, the mobile phone can increase the screen-to-body ratio of the display.
  • However, because the display screen occludes the camera, images captured by a camera set under the display screen exhibit glare. The glare is especially severe when shooting bright objects, which greatly reduces the imaging quality of the camera and degrades the user experience.
  • In a first aspect, an embodiment of the present application provides an image processing method. The method includes: acquiring an image to be processed, where the image to be processed is an image collected by an under-screen imaging system that includes a display screen and a camera arranged under the display screen, and the image to be processed includes a bright object and the glare it generates, the bright object being a photographed object of a specified color and/or brightness in the image to be processed; generating a first point spread function according to the spectral components of the bright object, where the first point spread function is the point spread function of at least one channel of the under-screen imaging system; and performing deconvolution processing on the first point spread function and the image to be processed to obtain an image with the glare removed.
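The deconvolution step above can be sketched as non-blind deconvolution in the frequency domain. The following NumPy sketch uses Wiener regularization (the patent does not specify the deconvolution algorithm; the `snr` term and function name are illustrative assumptions):

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Non-blind deconvolution of one channel with a known PSF.

    Plain inverse filtering amplifies noise where the PSF's spectrum
    is small, so a Wiener-style regularization term (1/snr) is added.
    """
    # Pad the PSF to the image size and center it at the origin.
    h = np.zeros_like(image, dtype=np.float64)
    ph, pw = psf.shape
    h[:ph, :pw] = psf
    h = np.roll(h, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(h)
    Y = np.fft.fft2(image.astype(np.float64))
    # Wiener filter: H* / (|H|^2 + 1/SNR)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    x = np.real(np.fft.ifft2(G * Y))
    return np.clip(x, 0.0, None)
```

For an RGB image, this would be applied per channel with that channel's first point spread function.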
  • Because the image to be processed contains a bright object and the glare it generates, a point spread function of at least one channel of the under-screen imaging system is generated adaptively for different bright objects according to the spectral components of the bright object; the first point spread function and the image to be processed are then deconvolved to obtain an image with the glare generated by the bright object removed. In this way, the glare, and especially rainbow glare, in images captured by a camera under the display screen can be effectively reduced or eliminated, improving image clarity and user experience.
  • In a possible implementation manner, generating a first point spread function according to the spectral components of the bright object includes: generating a second point spread function according to the spectral components of the bright object and a model of the display screen, where the second point spread function is the point spread function corresponding to light of different wavelengths of the bright object after passing through the under-screen imaging system; and generating the first point spread function according to the second point spread function and the photosensitive characteristic curve of the camera.
  • In this way, the point spread functions of the different wavelengths of the bright object after passing through the under-screen imaging system can be generated and then combined with the photosensitive characteristic curve of the camera to obtain the point spread function of at least one channel of the under-screen imaging system. The point spread function corresponding to the real glare can thus be obtained by physically modeling the spectral components of the bright object, so that the dispersion effect caused by the periodic structure of the display screen can be effectively suppressed. This has an excellent suppression effect on the glare, especially the rainbow glare, and improves the imaging quality of the under-screen imaging system.
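The combination of per-wavelength point spread functions with the camera's photosensitive characteristic curve can be sketched as a weighted sum (the linear mixing model, normalization, and names below are illustrative assumptions, not the patent's exact formulation):

```python
import numpy as np

def channel_psf(spectrum, sensor_response, psf_per_wavelength):
    """Combine per-wavelength PSFs into a single channel PSF.

    The PSF of one color channel is modeled as the sum of the
    monochromatic PSFs, each weighted by the bright object's spectral
    power at that wavelength and by the channel's sensitivity there.
    """
    weights = np.asarray(spectrum, dtype=float) * np.asarray(sensor_response, dtype=float)
    weights = weights / weights.sum()          # normalize total energy
    # Contract the wavelength axis: sum_k w[k] * psf[k]
    psf = np.tensordot(weights, np.asarray(psf_per_wavelength, dtype=float), axes=1)
    return psf / psf.sum()                     # normalized channel PSF
```

Running this once per channel with that channel's sensitivity curve would yield the first point spread function for each of the R, G, and B channels.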
  • In a possible implementation manner, generating a first point spread function according to the spectral components of the bright object includes: generating the first point spread function according to the spectral components of the bright object and a preset third point spread function, where the third point spread function is the point spread function corresponding to light of different wavelengths after passing through the under-screen imaging system.
  • In this way, the point spread function of at least one channel of the under-screen imaging system can be generated quickly from the preset per-wavelength point spread functions combined with the spectral components of the bright object. This improves calculation efficiency and the speed of anti-glare processing, so the approach can be applied to anti-glare scenarios with large amounts of data and high processing-speed requirements, such as video shooting.
  • In a possible implementation manner, the method further includes: acquiring the spectral components of the bright object collected by a spectral measurement device.
  • In this way, the spectral components of the bright object can be acquired in real time during shooting through a spectral measurement device built into the electronic device, without relying on external equipment, which improves the functional completeness of the electronic device; alternatively, the spectral components of the bright object can be obtained from an external spectral measurement device, which saves cost on the electronic device.
  • In a possible implementation manner, the method further includes: performing image recognition on the image to be processed to determine the light source type of the bright object, and determining the spectral components of the bright object according to preset spectral components of different light source types.
  • In this way, the light source type of the bright object among the photographed objects can be identified during shooting so as to determine its spectral components; the electronic device therefore does not need an additional spectral measurement device to obtain the spectral components of the bright object, which saves cost.
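The preset-spectrum lookup could be as simple as a table keyed by the recognized light source type. The table contents and names here are hypothetical placeholders, not values from the patent:

```python
# Hypothetical preset table: relative spectral power at a few sample
# wavelengths (nm) for common light source types. Values are
# illustrative only, not calibrated data.
PRESET_SPECTRA = {
    "incandescent": {450: 0.3, 550: 0.7, 650: 1.0},
    "led_white":    {450: 1.0, 550: 0.8, 650: 0.4},
    "sunlight":     {450: 0.9, 550: 1.0, 650: 0.9},
}

def spectrum_for(light_source_type):
    """Look up the preset spectral components for a recognized source type."""
    try:
        return PRESET_SPECTRA[light_source_type]
    except KeyError:
        raise ValueError(f"no preset spectrum for {light_source_type!r}")
```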
  • In a possible implementation manner, the method further includes: dividing the image to be processed into a first area and a second area, where the first area is the area where the bright object and its glare are located and the second area is the area of the image to be processed other than the first area. Performing deconvolution processing on the first point spread function and the image to be processed to obtain the image with the glare removed then includes: performing deconvolution processing on the first point spread function and the first area to obtain the first area with the glare removed, and fusing the glare-removed first area with the second area to obtain the image with the glare removed.
  • In this way, the image to be processed is divided into a first area containing the bright object and its glare and a second area that does not, and the anti-glare processing is performed only on the first area. This improves processing efficiency and saves processing resources, and at the same time reduces or eliminates the effect of the anti-glare processing on the second area, which does not contain the bright object or its glare.
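The split/deconvolve/fuse flow can be sketched as follows. The threshold-based segmentation, the padding margin, and the `deconvolve` callback are illustrative assumptions; the patent does not specify how the first area is segmented:

```python
import numpy as np

def deglare_region(image, psf, deconvolve, brightness_thresh=0.9, pad=8):
    """Deconvolve only the glare region, then fuse it back.

    1. Find the bright object by thresholding against the peak value.
    2. Crop a padded bounding box around it (the "first area").
    3. Deconvolve only that crop with the channel PSF.
    4. Paste the result over the untouched "second area".
    """
    mask = image >= brightness_thresh * image.max()
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return image.copy()                      # no bright object found
    y0 = max(ys.min() - pad, 0); y1 = min(ys.max() + pad + 1, image.shape[0])
    x0 = max(xs.min() - pad, 0); x1 = min(xs.max() + pad + 1, image.shape[1])
    out = image.copy()
    out[y0:y1, x0:x1] = deconvolve(image[y0:y1, x0:x1], psf)
    return out
```

Processing only the cropped first area keeps the deconvolution cost proportional to the glare region rather than the whole frame, and leaves the second area bit-identical to the input.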
  • In a possible implementation manner, generating the second point spread function includes: taking the model of the display screen as a transmittance function, combining it with a light-wave propagation factor, and taking the spectral components of the bright object as weights to obtain the second point spread function. The model of the display screen includes an amplitude modulation function and a phase modulation characteristic function of the display screen, and the light-wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the wavelengths of the bright object.
  • In this way, the display screen model constructed from the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function, the light-wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the actual wavelengths of the bright object, and the spectral components of the bright object are used as weights to obtain the point spread functions of the different wavelengths of the bright object after passing through the under-screen imaging system. The resulting point spread function can more realistically express the process by which the bright object generates glare through under-screen imaging.
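A monochromatic point spread function of this kind can be sketched with scalar Fourier optics: the display model acts as an amplitude-and-phase transmittance mask, a quadratic propagation phase depends on the focal length and the wavelength, and the PSF is the squared magnitude of the Fourier transform of the masked field. The sampling, parameter names, and the Fraunhofer-style approximation here are assumptions:

```python
import numpy as np

def screen_psf(amplitude, phase, wavelength, focal_length, pitch):
    """Monochromatic PSF of the under-screen imaging system (sketch).

    amplitude, phase : the display's amplitude modulation function and
        phase modulation characteristic function, sampled on a grid.
    wavelength, focal_length, pitch : wavelength of the light, camera
        focal length, and grid sample spacing, all in meters.
    """
    n = amplitude.shape[0]
    coords = (np.arange(n) - n / 2) * pitch
    xx, yy = np.meshgrid(coords, coords)
    # Light-wave propagation factor: quadratic phase over the aperture,
    # determined by the focal length and the wavelength.
    prop = np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * focal_length))
    pupil = amplitude * np.exp(1j * phase) * prop
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()
```

Summing `screen_psf` outputs over the bright object's wavelengths, weighted by its spectral components, would give the second point spread function described above.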
  • In a second aspect, an embodiment of the present application provides an image processing apparatus. The apparatus includes: an acquisition module configured to acquire an image to be processed, where the image to be processed is an image collected by an under-screen imaging system that includes a display screen and a camera arranged under the display screen, and the image to be processed includes a bright object and the glare it generates, the bright object being a photographed object of a specified color and/or brightness in the image to be processed;
  • a generating module configured to generate a first point spread function according to the spectral components of the bright object, where the first point spread function is the point spread function of at least one channel of the under-screen imaging system; and
  • a processing module configured to perform deconvolution processing on the first point spread function and the image to be processed to obtain the image with the glare removed.
  • Because the image to be processed contains a bright object and the glare it generates, a point spread function of at least one channel of the under-screen imaging system is generated adaptively for different bright objects according to the spectral components of the bright object; the first point spread function and the image to be processed are then deconvolved to obtain an image with the glare generated by the bright object removed. In this way, the glare, and especially rainbow glare, in images captured by a camera under the display screen can be effectively reduced or eliminated, improving image clarity and user experience.
  • In a possible implementation manner, the generating module is further configured to: generate a second point spread function according to the spectral components of the bright object and a model of the display screen, where the second point spread function is the point spread function corresponding to light of different wavelengths of the bright object after passing through the under-screen imaging system; and generate the first point spread function according to the second point spread function and the photosensitive characteristic curve of the camera.
  • In this way, the point spread functions of the different wavelengths of the bright object after passing through the under-screen imaging system can be generated and then combined with the photosensitive characteristic curve of the camera to obtain the point spread function of at least one channel of the under-screen imaging system. The point spread function corresponding to the real glare can thus be obtained by physically modeling the spectral components of the bright object, so that the dispersion effect caused by the periodic structure of the display screen can be effectively suppressed. This has an excellent suppression effect on the glare, especially the rainbow glare, and improves the imaging quality of the under-screen imaging system.
  • In a possible implementation manner, the generating module is further configured to: generate the first point spread function according to the spectral components of the bright object and a preset third point spread function, where the third point spread function is the point spread function corresponding to light of different wavelengths after passing through the under-screen imaging system.
  • In this way, the point spread function of at least one channel of the under-screen imaging system can be generated quickly from the preset per-wavelength point spread functions combined with the spectral components of the bright object. This significantly improves calculation efficiency and the speed of anti-glare processing, so the approach can be applied to anti-glare scenarios with large amounts of data and high processing-speed requirements, such as video shooting.
  • In a possible implementation manner, the apparatus further includes: a spectral measurement device configured to collect the spectral components of the bright object.
  • In this way, the spectral components of the bright object can be acquired in real time during shooting through a spectral measurement device built into the electronic device, without relying on external equipment, which improves the functional completeness of the electronic device; alternatively, the spectral components of the bright object can be obtained from an external spectral measurement device, which saves cost on the electronic device.
  • In a possible implementation manner, the apparatus further includes: an acquisition module configured to: perform image recognition on the image to be processed to determine the light source type of the bright object, and determine the spectral components of the bright object according to preset spectral components of different light source types.
  • In this way, the light source type of the bright object among the photographed objects can be identified during shooting so as to determine its spectral components; the electronic device therefore does not need an additional spectral measurement device to obtain the spectral components of the bright object, which saves cost.
  • In a possible implementation manner, the apparatus further includes: a segmentation module configured to divide the image to be processed into a first area and a second area, where the first area is the area where the bright object and its glare are located and the second area is the area of the image to be processed other than the first area; and the processing module is further configured to: perform deconvolution processing on the first point spread function and the first area to obtain the first area with the glare removed, and fuse the glare-removed first area with the second area to obtain the image with the glare removed.
  • In this way, the image to be processed is divided into a first area containing the bright object and its glare and a second area that does not, and the anti-glare processing is performed only on the first area. This improves processing efficiency and saves processing resources, and at the same time reduces or eliminates the effect of the anti-glare processing on the second area, which does not contain the bright object or its glare.
  • In a possible implementation manner, the generating module is further configured to: take the model of the display screen as a transmittance function, combine it with a light-wave propagation factor, and take the spectral components of the bright object as weights to obtain the second point spread function, where the model of the display screen includes an amplitude modulation function and a phase modulation characteristic function of the display screen, and the light-wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the wavelengths of the bright object.
  • In this way, the display screen model constructed from the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function, the light-wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the actual wavelengths of the bright object, and the spectral components of the bright object are used as weights to obtain the point spread functions of the different wavelengths of the bright object after passing through the under-screen imaging system. The resulting point spread function can more realistically express the process by which the bright object generates glare through under-screen imaging.
  • Embodiments of the present application further provide an electronic device, including: a display screen, a camera under the display screen, a processor, and a memory, where the camera is configured to collect an image to be processed through the display screen;
  • the display screen is configured to display the image to be processed and the image after glare removal;
  • the memory is configured to store processor-executable instructions; and
  • the processor is configured to implement the image processing method of the above-mentioned first aspect or of one or more of the multiple possible implementation manners of the first aspect.
  • Because the image to be processed contains a bright object and the glare it generates, a point spread function of at least one channel of the under-screen imaging system is generated adaptively for different bright objects according to the spectral components of the bright object; the first point spread function and the image to be processed are then deconvolved to obtain an image with the glare generated by the bright object removed. In this way, the glare, and especially rainbow glare, in images captured by a camera under the display screen can be effectively reduced or eliminated, improving image clarity and user experience.
  • Embodiments of the present application further provide a non-volatile computer-readable storage medium on which computer program instructions are stored, where, when the computer program instructions are executed by a processor, the image processing method of the above-mentioned first aspect or of one or more of the multiple possible implementation manners of the first aspect is implemented.
  • Because the image to be processed contains a bright object and the glare it generates, a point spread function of at least one channel of the under-screen imaging system is generated adaptively for different bright objects according to the spectral components of the bright object; the first point spread function and the image to be processed are then deconvolved to obtain an image with the glare generated by the bright object removed. In this way, the glare, and especially rainbow glare, in images captured by a camera under the display screen can be effectively reduced or eliminated, improving image clarity and user experience.
  • Embodiments of the present application further provide a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code, where, when the computer-readable code runs in an electronic device, the processor in the electronic device executes the image processing method of the above-mentioned first aspect or of one or more of the multiple possible implementation manners of the first aspect.
  • Because the image to be processed contains a bright object and the glare it generates, a point spread function of at least one channel of the under-screen imaging system is generated adaptively for different bright objects according to the spectral components of the bright object; the first point spread function and the image to be processed are then deconvolved to obtain an image with the glare generated by the bright object removed. In this way, the glare, and especially rainbow glare, in images captured by a camera under the display screen can be effectively reduced or eliminated, improving image clarity and user experience.
  • FIG. 1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application
  • FIGS. 2A-2B show schematic diagrams of displaying images through the mobile phone 10 according to an embodiment of the present application
  • FIG. 3 shows a schematic diagram of a shooting interface according to an embodiment of the present application
  • FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application
  • FIG. 5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in the above-mentioned FIG. 3 according to an embodiment of the present application;
  • FIG. 6 shows the spectral components of a strong light source collected by a color temperature sensor according to an embodiment of the present application
  • FIG. 7 shows a schematic diagram of a broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application
  • FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application.
  • FIG. 9 shows a schematic diagram of a filter transmittance curve according to an embodiment of the present application.
  • FIG. 10 shows a schematic diagram of the generated point spread functions of the RGB three channels according to an embodiment of the present application
  • FIG. 11 shows a schematic diagram of the point spread function corresponding to the pre-calibrated light of different wavelengths passing through the under-screen imaging system according to an embodiment of the present application
  • FIG. 12 shows a schematic diagram of a partial image after deglare according to an embodiment of the present application.
  • FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application.
  • FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 16 shows a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • Point spread function: for an optical system, the light-field distribution of the output image when the input object is a point light source is called the point spread function (PSF).
  • A point light source can be represented by a delta function (a point impulse), and the light-field distribution of the output image is then called the impulse response; the point spread function is therefore also the impulse response function of the optical system.
  • The imaging performance of an optical system can be characterized by its point spread function.
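The definition can be illustrated directly: imaging a point source (a discrete delta) through a linear system reproduces the system's point spread function. A minimal NumPy sketch with an arbitrary example kernel:

```python
import numpy as np

def convolve2d_same(img, kernel):
    """Minimal 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    out = np.zeros_like(img, dtype=float)
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            # Correlation with the flipped kernel == convolution.
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# Imaging a point source: the output light-field distribution *is* the PSF.
point = np.zeros((9, 9)); point[4, 4] = 1.0
psf = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]]) / 8.0
image = convolve2d_same(point, psf)   # equals `psf`, centered at (4, 4)
```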
  • Non-blind deconvolution: the image formed by an optical system can be understood as the result of convolving the original image with the point spread function of the optical system. Given the point spread function of the optical system, the process of restoring the original image is called non-blind deconvolution.
  • OLED: the operating principle of an OLED is electroluminescence, in which organic semiconductor and light-emitting materials driven by an electric field emit light through carrier injection and recombination.
  • An OLED is a device that uses a multilayer organic thin-film structure to generate electroluminescence. It is easy to fabricate and requires only a low driving voltage, which makes OLEDs prominent in flat-panel display applications. OLED displays are lighter and thinner than LCDs, with high brightness, low power consumption, fast response, high definition, good flexibility, and high luminous efficiency.
  • the embodiment of the present application provides an image processing method, and the method can be applied to an electronic device, and the electronic device can include a display screen and a camera under the display screen.
  • The display screen may be a touch screen or a non-touch screen. With a touch screen, the electronic device can be controlled by clicking or sliding on the screen with a finger, a stylus, or the like; with a non-touch screen, the electronic device can be connected to input devices such as a mouse, a keyboard, or a touch panel and controlled through them.
  • the camera may include a front camera and/or a rear camera.
  • The electronic device of the present application may be a smartphone, a netbook, a tablet computer, a notebook computer, a wearable electronic device (such as a smart band or a smart watch), a vehicle-mounted device, a TV, a virtual reality device, an audio system, an electronic ink screen, or the like; the embodiments of the present application do not limit the specific type of the electronic device.
  • FIG. 1 shows a schematic diagram of a mobile phone configured with an under-screen camera according to an embodiment of the present application
  • The mobile phone 10 may include a display screen 101 and a camera 102, where the display screen 101 is used for displaying images, videos, and the like, and the camera 102 is arranged below the display screen 101 and is used to shoot images or videos through the display screen 101. The display screen 101 and the camera 102 constitute the under-screen imaging system of the mobile phone 10.
  • FIG. 2A is a schematic diagram of the original image displayed by the mobile phone 10 before being processed by the image processing method according to the embodiment of the present application, and FIG. 2B shows the image displayed by the mobile phone 10 after processing.
  • The photographed sun 204 appears in the original image 201 in the form of rainbow glare (not shown in the figure).
  • The main causes of the glare are as follows. (1) The display screen used to display images and other content consists of many pixels representing red, green, and blue, and these pixels are arranged periodically; when the camera located under the display screen shoots through it, external light passing through the display screen undergoes diffraction.
  • (2) The diffraction effect of the display screen is sensitive to wavelength: the diffraction differs for light of different wavelengths, so the diffraction broadening of light of different wavelengths after passing through the display screen and the lens of the camera is different, a phenomenon called dispersion. This dispersion gives rise to rainbow glare in captured images.
• an anti-glare solution is: increasing the lens aperture of the camera to increase the amount of light entering; using a large-sized image sensor to improve the image sensor's ability to sense dim light; introducing new high-transmittance materials to improve the light transmittance of the display screen; and adopting a high dynamic range (HDR) data acquisition mode to improve the dynamic range of the image obtained by the camera and reduce the glare characteristics.
• there is another anti-glare solution: optimizing the pixel arrangement, pixel structure, and pixel driving circuit design of the OLED display screen.
• By improving the local characteristics of the periodic pixel structure of the display screen, the energy distribution of the point spread function of the under-screen imaging system and the distribution of glare can be optimized.
• however, this solution can only improve the glare effect to a certain extent, and the improvement is limited; it cannot eliminate the rainbow glare caused by the camera imaging through the display.
• the image processing method can generate a point spread function suited to different highlighted objects according to the spectral components of the highlighted objects in the image, and then
• obtain a glare-free high-definition image through deconvolution processing; thus, the glare produced in images captured by the under-screen camera, especially rainbow glare, can be effectively reduced or eliminated, which improves the clarity of the image and enhances the user experience.
• when there is a bright object in the captured image, the electronic device can turn on the anti-glare function, so as to improve or eliminate the glare produced in the image captured by the under-screen camera, especially the rainbow glare.
  • the highlighted objects may include strong light sources, such as the sun, the moon, lights, display screens, etc., and may also include objects whose surfaces reflect strong light, such as glass, metal, and the like.
• the front camera of the mobile phone is set under the display screen, and the user takes a selfie outdoors through the front camera. The user turns on the front camera function of the mobile phone, so that the display screen enters the shooting interface, and the user can observe the captured picture in real time through the preview image displayed in the shooting interface; exemplarily, the preview image is the original image 201 shown in FIG. 2A above, in which
• the sun appears as rainbow glare and the rainbow glare blocks some objects (such as mountain peaks); the mobile phone turns on the anti-glare function, performs real-time anti-glare processing, and obtains a glare-free high-definition image.
  • the high-definition image is shown in Figure 2B above.
• in the high-definition image displayed on the shooting interface, the glare is eliminated, and the sun and the mountain peaks originally blocked by the sun's rainbow glare can be seen; that is, the glare of the sun (rainbow glare) in the preview image is removed, so that a glare-free high-definition image is obtained.
• the user can trigger an instruction to enter shooting, and the mobile phone responds to the instruction by turning on the camera while the display screen enters the shooting interface. For example, the user can click the camera icon on the home page, the lock screen, or in other applications to make the display screen enter the shooting interface; the user can also press a physical button for shooting set on the mobile phone to make the display screen enter the shooting interface; the user can also make the display screen enter the shooting interface through voice commands or shortcut gestures. In practical applications, the user can also make the display screen enter the shooting interface by other means, which is not limited in this embodiment of the present application.
• the shooting interface may also include options for shooting modes such as aperture, night scene, portrait, photo, video, and professional.
• the shooting interface can also include options for functions such as flash, HDR, AI, settings, and color tone.
• the user can select different functions, turn the flash on or off, adjust the color tone, and so on.
• the shooting interface can also include a focal length adjustment option, so that the preview image can be enlarged or reduced accordingly by adjusting the focal length; in addition, the shooting interface may also include a shooting button, an album button, a camera switching button, and the like.
• Method 1: the mobile phone detects whether there is a bright object in the preview image. When it detects a bright object in the preview image, the mobile phone automatically turns on the anti-glare function; when no bright object is detected in the preview image, the mobile phone does not turn on the anti-glare function.
  • the highlighted object includes a photographic object exhibiting a specified color and/or brightness in the preview image.
  • the specified color may be white
  • the specified brightness may be the luminance value of white in YUV color coding, or the gray value of white in RGB color mode, and so on.
• the mobile phone can detect whether there is a highlighted object in the preview image according to whether the gray values of most of the pixels in a certain area of the preview image are not less than a preset grayscale threshold.
• if the grayscale values of most of the pixels are not less than the preset grayscale threshold, it is determined that there is a bright object in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function.
• the grayscale value of each pixel in the preview image ranges from 0 to 255, where white is 255 and black is 0. Considering that the area where a highlighted object is located is usually white, the preset grayscale threshold can be set to 255 or a value close to 255.
• for example, the preset grayscale threshold can be 255; a window with a size of 100*100 can be selected and slid over the preview image in sequence so that the entire preview image is covered. Each time the window slides, the gray value of each pixel in the window is calculated; if the gray values of more than half of the pixels in the window are 255, it is determined that there is a bright object in the preview image, and at this time, the mobile phone automatically turns on the anti-glare function.
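The sliding-window check described above can be sketched as follows (the window size, threshold, and majority ratio are the illustrative values from this example; the function name is hypothetical):

```python
import numpy as np

def has_bright_object(gray, window=100, threshold=255, ratio=0.5):
    """Slide a window over a grayscale preview image; report a bright
    object if more than `ratio` of the pixels in any window reach
    `threshold` (255 corresponds to white)."""
    h, w = gray.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            patch = gray[y:y + window, x:x + window]
            # Fraction of pixels at or above the threshold in this window.
            if np.mean(patch >= threshold) > ratio:
                return True
    return False
```

A True result would be the trigger for automatically enabling the anti-glare function.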
  • the mobile phone may continue to execute the current shooting mode or other preset shooting modes (eg, a preset image enhancement mode).
• In the above manner, the mobile phone can automatically detect whether there is a highlighted object in the preview image and automatically turn on the anti-glare function when one is detected; the anti-glare function can be turned on without the user's manual operation, which improves the intelligence of the mobile phone's operation.
• Method 2: the mobile phone receives an instruction, triggered by the user, to enable the anti-glare function, and enables the anti-glare function in response to the instruction.
• the user can observe the preview image or the photographed object to determine whether there is a highlighted object; when the user determines that there is one, the user can click a preset icon on the shooting interface to enable the anti-glare function or enter the anti-glare mode, thereby
• triggering the command to enable the anti-glare function; the command to enable the anti-glare function may also be triggered by means of preset voice commands, shortcut gestures, physical buttons, etc. of the mobile phone, which is not limited in this embodiment of the present application.
• the camera interface of the mobile phone 10 may include an icon 301 of the anti-glare function; the user can click the icon 301 to trigger an instruction to enable the anti-glare function, and the mobile phone enables the anti-glare function in response.
• In the above manner, the user independently judges whether there is a bright object when taking a photo and triggers the instruction to enable the anti-glare function according to the judgment result, so that the mobile phone turns on the anti-glare function; this meets the user's demand for autonomous control of the mobile phone.
• Method 3: the mobile phone detects whether there is a highlighted object in the preview image. When it detects one, the mobile phone can send a prompt message to the user asking whether to enable the anti-glare function; if the user triggers the instruction to enable the anti-glare function, the mobile phone turns it on in response to the instruction.
• the prompt information that the mobile phone sends to the user can include: voice prompt information, vibration, light flashing, icon flashing, etc.; for example, as shown in FIG. 3 above, when the mobile phone 10 detects that there is a highlighted object in the preview image,
• it can control the icon 301 of the anti-glare function in the shooting interface to flash, thereby prompting the user to click the icon 301 of the anti-glare function.
• When the user clicks the icon 301, the mobile phone enables the anti-glare function.
• alternatively, the mobile phone may display the prompt message "Enable the anti-glare function" in the preview image area of the shooting interface, thereby prompting the user to enable the anti-glare mode; for the specific implementation of enabling the anti-glare mode by the user, reference may be made to
• the related descriptions in Method 2 above, which are not repeated here. If the user triggers the instruction to enable the anti-glare function, the mobile phone enables the anti-glare function in response to the instruction.
  • the user may forget the anti-glare function of the mobile phone or be unfamiliar with the anti-glare function when taking pictures, and the user can be prompted to turn on the anti-glare function through the prompt information, thereby improving the user experience.
  • the mobile phone can also display a prompt box around the detected image area where there are highlighted objects to remind that there are highlighted objects in this area.
  • the user can select the prompt box (for example, click the inner area of the prompt box) to choose to perform glare removal.
  • the mobile phone may also enable the anti-glare function in other ways, which is not limited in this embodiment of the present application.
  • FIG. 4 shows a flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 4 , the method may include the following steps:
• Step 400: the mobile phone determines the glare area where the highlighted object in the preview image is located.
  • the number of highlighted objects may be one or more, and each glare area includes at least one highlighted object.
• each glare area may include one highlighted object;
• in this case, the number of glare areas is the same as the number of highlighted objects.
• the highlighted object occupies a plurality of connected pixels whose gray values are not less than a preset threshold (eg, 255), and the glare area where the highlighted object is located at least includes these connected pixels occupied by the highlighted object.
  • the glare area may be in the shape of a rectangle, a circle, or the like.
  • the glare area may be a rectangular area, and a rectangular window with a size of m*n may be constructed, where m and n are both integers greater than 1.
• the initial values of m and n may be 100. The rectangular window is slid on the preview image; when a pixel whose gray value reaches the preset threshold (eg, 255) appears in the window, the values of m and n are adjusted until all of the mutually connected pixels (including that pixel) whose gray values reach the threshold fall within the rectangular window. The values of m and n are then fixed, and the rectangular window is the glare area; the mutually connected pixels within it whose gray values reach the threshold constitute a highlighted object.
  • the glare area 302 is a rectangular area, and the glare area includes the pixels occupied by the sun in the preview image.
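The construction of the rectangular glare area described above amounts to finding a bounding box around each group of connected bright pixels; a minimal pure-Python sketch, assuming 4-connectivity and the illustrative threshold of 255 (function name hypothetical):

```python
import numpy as np
from collections import deque

def glare_bounding_boxes(gray, threshold=255):
    """Find 4-connected components of bright pixels and return one
    rectangular bounding box (top, left, bottom, right) per highlighted
    object; each box is a candidate glare area."""
    h, w = gray.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if gray[y, x] >= threshold and not seen[y, x]:
                # Grow one connected component of bright pixels with BFS.
                top, left, bottom, right = y, x, y, x
                queue = deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    top, bottom = min(top, cy), max(bottom, cy)
                    left, right = min(left, cx), max(right, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny, nx]
                                and gray[ny, nx] >= threshold):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each returned box corresponds to one highlighted object, matching the one-object-per-glare-area case described above.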
• It should be noted that this step 400 is optional; after the anti-glare function is enabled on the mobile phone, executing step 400 can improve the processing efficiency of the following steps and save processing resources, and at the same time reduce or eliminate the impact of processing on areas of the image that do not contain bright objects and their glare.
• Alternatively, after the mobile phone enables the anti-glare function, it can also directly perform the following step 401.
• Step 401: the mobile phone identifies the shape of the highlighted object and extracts the glare starburst area.
• the mobile phone can identify the shape of the highlighted object in the glare area determined in the above step 400 and extract the glare starburst area from that glare area; alternatively, the mobile phone can identify the shape of the highlighted object directly in the preview image and extract the glare starburst area from the preview image. The glare starburst area is the star-shaped area where glare appears, and the area of the preview image outside the glare starburst area is the non-glare-starburst area.
  • the mobile phone can automatically identify the shape of the highlighted object through the trained neural network.
  • the shape of the strong light source may include a ring light source, a strip light source, a point light source, and the like.
• Photos of light sources of different shapes can be selected in advance to train the neural network; the glare area or the preview image is then input into the trained neural network, which outputs the shape of the highlighted object. The mobile phone can also use conventional methods such as overexposure detection to identify the shape of the highlighted object.
• the mobile phone can detect the glare starburst area in the glare area or in the preview image based on the position of the highlighted object.
• For example, the center of the highlighted object can be used as the center of a cross star,
• and the size of the cross star is continuously adjusted until all the pixels occupied by the highlighted object fall within the cross star.
• The pixels included in the cross star then constitute the glare starburst area.
• the glare starburst region is cropped out of the glare area or the preview image to extract it; in this way, further anti-glare processing can be performed on the extracted glare starburst region alone, which improves processing efficiency and saves processing resources, while reducing or eliminating the impact of processing on other areas of the image that do not contain bright objects and their glare.
  • this step 401 is an optional step, that is, after the anti-glare function is enabled on the mobile phone, this step 401 may be executed, and the following step 402 may also be directly executed.
  • the following is an exemplary description of performing step 402 after performing step 401 .
  • FIG. 5 shows a schematic diagram of extracting the shape of the highlighted object and the glare starburst area in the glare area 302 in FIG. 3 according to an embodiment of the present application.
• the shape 501 of the sun is extracted, and at the same time, the glare starburst region 502 is extracted.
• Step 402: the mobile phone acquires the spectral component I_spe(λ) of the highlighted object, where λ represents the wavelength.
• Method 1: the mobile phone is equipped with a spectral measurement device (for example, a spectral sensor, a color temperature sensor, a spectral camera, etc.), and the spectral components of the bright object (such as a strong light source) are collected by the spectral measurement device.
  • the mobile phone 10 in FIG. 1 may be configured with a color temperature sensor, and FIG. 6 shows the spectral components of the strong light source collected by the color temperature sensor according to an embodiment of the present application.
• the spectral component I_spe(λ) of the sun 204 in the photographed object of FIG. 2B shows the relative intensities corresponding to different wavelengths λ.
  • the mobile phone may also be configured with a hyperspectral detection device, and the hyperspectral detection device collects the spectral components of the bright object, thereby further improving the accuracy of the collected spectral components.
  • the spectral components of the highlighted objects can be obtained in real time during shooting through the color temperature sensor or hyperspectral detection equipment built in the mobile phone, without relying on external equipment, which improves the functional integrity of the mobile phone.
• Method 2: the mobile phone can determine the spectral components of the identified highlighted object by identifying the light source type of the highlighted object in the photographed object, according to preset spectral components corresponding to different light source types.
• the spectral components of different strong light sources are fixed, so
• the spectral components can be stored in the mobile phone in advance; when shooting, the mobile phone can identify the light source type of the highlighted object in the subject through computer vision and other methods (for example, identifying that the highlighted object is the sun), and determine its spectral components from the pre-stored spectral components of the plurality of light source types.
• In the above manner, the spectral components corresponding to different light source types are stored in advance, and the light source type of the highlighted object in the subject is identified during shooting, so as to determine the spectral components of the highlighted object; in this way, the mobile phone does not need to be additionally equipped with
• spectral measurement equipment such as a spectral sensor or hyperspectral detection device to obtain the spectral components of bright objects, which saves the cost of the mobile phone.
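A minimal sketch of the pre-stored lookup described in this method; the spectra below are invented placeholder curves for illustration only, not measured light source data, and all names are hypothetical:

```python
import numpy as np

# Sampling grid for the pre-stored relative spectra (nm).
WAVELENGTHS_NM = np.arange(400, 781, 10)

# Hypothetical pre-stored spectral curves per light source type.
PRESET_SPECTRA = {
    "sun": np.exp(-((WAVELENGTHS_NM - 550) / 150.0) ** 2),         # broad curve
    "led": np.exp(-((WAVELENGTHS_NM - 450) / 20.0) ** 2)
           + 0.6 * np.exp(-((WAVELENGTHS_NM - 560) / 50.0) ** 2),  # blue peak + phosphor
}

def spectral_components(light_source_type):
    """Return the pre-stored relative spectral curve I_spe(lambda) for a
    light source type recognized in the photographed scene."""
    return PRESET_SPECTRA[light_source_type]
```

The recognizer (computer vision classifier) would supply the `light_source_type` key at shooting time.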
• Method 3: the mobile phone can receive the spectral components of the bright object input from an external device.
• a spectral measurement device (such as a hyperspectral detection device, a color temperature sensor, a spectral camera, etc.) disposed in the external environment collects the spectral components of the highlighted object and inputs them into the mobile phone; the mobile phone receives the inputted spectral components, thereby obtaining the spectral components of the bright object.
  • the mobile phone can obtain the spectral components of the bright object by using the spectral components collected by the external device, which saves the cost of the mobile phone.
• Step 403: the mobile phone generates the point spread functions of the three RGB channels according to the spectral components of the highlighted object.
• FIG. 7 shows a schematic diagram of the broadening of the point spread function of the under-screen imaging system according to an embodiment of the present application. As shown in FIG. 7,
• the distribution of the point spread function of the under-screen imaging system expands horizontally and vertically along the display screen, resulting in a severe smearing effect that greatly reduces the imaging quality of the under-screen camera. Therefore, in order to improve the imaging quality, in this step the mobile phone uses the acquired spectral components of the bright object to generate the point spread functions of the three RGB channels of the under-screen imaging system.
• Method 1: the mobile phone generates the point spread functions of the three RGB channels of the under-screen imaging system through the display screen model, the spectral components of the highlighted object obtained above, and the photosensitive characteristic curve (transmittance curve) of the filter of the image sensor of the camera.
• the display screen model can be pre-stored in the mobile phone, and may include: an amplitude modulation function A(m, n) and a phase modulation function P(m, n), where m, n represent the coordinates of the pixels on the display screen;
• the amplitude modulation function and phase modulation function can be determined according to factors such as the pixel point distribution on the display screen, the wiring method of the display screen, and the material of the display screen.
• FIG. 8 shows a partial schematic diagram of a display screen according to an embodiment of the present application. As shown in FIG. 8, the distribution of pixel points on the display screen can be obtained, wherein each pixel point includes a red sub-pixel, a green sub-pixel, and a blue sub-pixel.
  • the photosensitive characteristic curve of the filter of the image sensor of the camera can be determined according to the relevant parameters of the camera configured on the mobile phone, and the filter of the image sensor can include: red filter, green filter, blue filter;
• FIG. 9 shows a schematic diagram of the filter transmittance curves according to an embodiment of the application. As shown in FIG. 9, the transmittance curves F_{r,g,b}(λ) of the filters include: the transmittance curve F_r(λ) of the red filter, the transmittance curve F_g(λ) of the green filter, and the transmittance curve F_b(λ) of the blue filter, where λ is the wavelength.
• the mobile phone can use the spectral component I_spe(λ) of the sun in the photographed object collected by the color temperature sensor in FIG. 6 above and the preset display screen model (A(m, n) and P(m, n)) to generate the point spread functions corresponding to the different wavelengths of the bright object (the sun) after passing through the under-screen imaging system, and then use the transmittance curves F_{r,g,b}(λ) of the filters of the image sensor to generate the point spread functions of the three RGB channels of the under-screen imaging system.
  • the mobile phone can also generate the point spread function corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system through the spectral components of the highlighted object collected by the hyperspectral detection device and the preset display screen model, so that the The point spread function of different wavelengths of the generated highlight objects after passing through the under-screen imaging system is closer to the actual situation, thereby improving the effect of anti-glare.
• Specifically, the display screen model can be used as the transmittance function and combined with the light wave propagation factor, with the spectral components of the highlighted object obtained above used as weights, to obtain the point spread functions corresponding to the different wavelengths of the highlighted object after passing through the under-screen imaging system.
• The display screen model may include: the amplitude modulation function and the phase modulation function of the display screen; the light wave propagation factor may be determined according to the focal length at which the camera captures the image to be processed and the wavelength of the light of the highlighted object.
• By using the display screen model constructed from the amplitude modulation function and the phase modulation function as the transmittance function, determining the light wave propagation factor from the actual focal length and wavelength, and at the same time using the spectral components of the highlighted object as weights,
• the obtained point spread function can more realistically express the glare process of a bright object imaged through the under-screen imaging system.
• the point spread function I(u, v; λ) corresponding to different wavelengths of the highlighted object after passing through the under-screen imaging system is shown in the following formula (1):
• I(u, v; λ) = I_spe(λ) · |FT{A(m, n) · P(m, n) · exp(jπ(m² + n²)/(λf))}|²    (1)
• wherein λ represents the wavelength; I_spe(λ) represents the spectral component of the highlighted object; f represents the actual focal length of the camera when shooting; m, n represent the coordinates of the pixels on the display screen; u, v represent the coordinates of the pixels on the image sensor; A(m, n) represents the amplitude modulation function; P(m, n) represents the phase modulation function; FT denotes the two-dimensional Fourier transform evaluated at the sensor coordinates (u, v); and exp(jπ(m² + n²)/(λf)) is the light wave propagation factor determined by the focal length and the wavelength.
• the point spread functions of the three RGB channels are then obtained by weighting I(u, v; λ) with the filter transmittance curves, as shown in the following formula (2):
• PSF_r(u, v) = ∫ I(u, v; λ) · F_r(λ) dλ
• PSF_g(u, v) = ∫ I(u, v; λ) · F_g(λ) dλ
• PSF_b(u, v) = ∫ I(u, v; λ) · F_b(λ) dλ    (2)
• wherein PSF_r represents the point spread function of the red channel; PSF_g represents the point spread function of the green channel; PSF_b represents the point spread function of the blue channel; I(u, v; λ) represents the point spread function corresponding to wavelength λ of the highlighted object after passing through the under-screen imaging system; u, v represent the coordinates of the pixel points on the image sensor; F_r(λ) represents the transmittance curve of the red filter of the image sensor; F_g(λ) represents the transmittance curve of the green filter of the image sensor; and F_b(λ) represents the transmittance curve of the blue filter of the image sensor.
• FIG. 10 shows a schematic diagram of point spread functions of the three RGB channels generated according to an embodiment of the present application.
• As shown in FIG. 10, the point spread functions of the red, green, and blue channels of the same pixel point are offset from one another: the red-channel and green-channel point spread functions of the same pixel point are separated by one sub-pixel point, the red-channel and blue-channel point spread functions of the same pixel point are separated by one sub-pixel point, and the green-channel and blue-channel point spread functions of the same pixel point are separated by one sub-pixel point.
• In the above solution, the display screen model can be stored in the mobile phone in advance and, combined with the acquired spectral components of the highlighted object, used to generate
• the point spread functions corresponding to the different wavelengths of the highlighted object after passing through the display screen and the lens of the under-screen camera, from which the point spread functions of the three RGB channels of the under-screen imaging system are obtained.
• In this way, the point spread function corresponding to the real glare can be obtained through physical modeling using the spectral components of the bright object, so that the dispersion effect caused by the periodic structure of the display screen can be effectively eliminated;
• this has an excellent suppression effect on rainbow glare and improves the imaging quality of the under-screen imaging system.
• Method 2: the mobile phone generates the point spread functions of the three RGB channels of the under-screen imaging system according to pre-calibrated point spread functions of different wavelengths after passing through the under-screen imaging system and the spectral components of the highlighted object obtained above.
• the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system can be pre-calibrated in the laboratory and stored in the mobile phone in advance;
• FIG. 11 shows a schematic diagram of the point spread functions corresponding to different wavelengths in the visible band (400 nm-780 nm) after passing through the under-screen imaging system, where the interval between adjacent wavelengths may be 6 nm, 3 nm, or the like.
• In this way, the point spread functions corresponding to different wavelengths stored in the mobile phone can be directly called and integrated with the spectral components of the highlighted object obtained above to obtain the RGB three-channel point spread functions of the under-screen imaging system.
• Because the point spread functions corresponding to different wavelengths after passing through the under-screen imaging system are pre-stored in the mobile phone, the RGB three-channel point spread functions can be quickly generated in combination with the acquired spectral components of the highlighted object;
• the computing efficiency is significantly improved and the anti-glare processing speed of the mobile phone is improved, so this method is applicable to anti-glare processing scenarios with a large amount of data and high processing speed requirements, such as video shooting.
• It should be noted that step 402 and step 403 can be executed before step 400 or step 401, after step 400 or step 401, or simultaneously with step 400 or step 401, which is not limited in this embodiment of the present application.
• Step 404: the mobile phone uses the generated point spread functions of the three RGB channels and the above glare starburst area, and adopts a preset non-blind deconvolution algorithm to remove the glare in the glare starburst area.
• the preset non-blind deconvolution algorithm may include: neural network learning, convex optimization deconvolution, non-convex optimization deconvolution, and other existing non-blind deconvolution algorithms; combined with the shape of the highlighted object identified above,
• the deconvolution results of the three RGB channels are obtained, so as to obtain the local image after glare removal.
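As one concrete instance of a preset non-blind deconvolution algorithm, a per-channel Wiener filter could look like the following; the regularizer `k`, the data layout, and the function names are assumptions of this sketch, not requirements of the embodiment:

```python
import numpy as np

def wiener_deconvolve(channel, psf, k=0.01):
    """Non-blind deconvolution of one color channel by Wiener filtering
    in the frequency domain; `k` is a noise-to-signal regularizer."""
    H = np.fft.fft2(psf, s=channel.shape)  # PSF zero-padded to image size
    G = np.fft.fft2(channel)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

def remove_glare(region_rgb, psf_rgb, k=0.01):
    """Deconvolve each RGB channel of the glare starburst region with its
    own point spread function (one PSF per channel, per formula (2))."""
    return np.stack(
        [wiener_deconvolve(region_rgb[..., c], psf_rgb[c], k) for c in range(3)],
        axis=-1,
    )
```

Using a separate PSF per channel is what counteracts the per-wavelength (dispersive) broadening that produces rainbow glare.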
  • FIG. 12 shows a schematic diagram of a partial image after glare removal according to an embodiment of the present application.
  • the partial image 1201 after glare removal is generated by removing the glare in the glare starburst area 502 in FIG. 5 above.
• Step 405: the mobile phone performs fusion processing on the glare-removed partial image and the preview image to obtain a glare-free high-definition image.
• the glare-removed partial image may be spliced with the non-glare area of the preview image, so as to obtain the final glare-free high-definition image.
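The splicing of step 405 can be as simple as writing the deglared patch back into its bounding box; the coordinate convention and function name are assumptions of this sketch:

```python
import numpy as np

def fuse(preview, deglared_patch, box):
    """Splice the glare-removed local image back into the preview image;
    `box` is (top, left, bottom, right) of the glare starburst area,
    inclusive on both ends."""
    top, left, bottom, right = box
    out = preview.copy()
    out[top:bottom + 1, left:right + 1] = deglared_patch
    return out
```

A production implementation would typically also blend the patch boundary (for example with a feathered mask) so the seam between the two areas is not visible.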
  • the high-definition image after de-glare can be displayed in real time on the shooting interface of the display screen.
• As shown in FIG. 2B, in the high-definition image after glare removal there is no glare, or there is much less glare; in particular, the
• rainbow glare can be eliminated, so that the brightness in the image transitions smoothly and the captured subject is reflected more realistically.
  • this glare is sensitive to the wavelength.
  • This generates a point spread function suitable for different bright objects, and then obtains a high-definition image with no glare through non-blind deconvolution processing, thereby effectively improving or even eliminating the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
  • which can achieve the glare-free imaging effect of the camera under the display screen when the display screen is blocked, so that the user cannot perceive the effect of rainbow glare on the final image when using the camera shooting function under the display screen.
  • the user performs anti-glare processing on the image or video that has been captured by the electronic device.
  • the images or videos that have been captured are images or videos captured by the camera under the display screen.
  • the electronic device can turn on the glare removal function, so as to improve or eliminate the glare generated by the image captured by the camera under the display screen, especially the rainbow glare.
  • the front camera of the mobile phone is set under the display screen, and the user takes a selfie through the front camera of the mobile phone outdoors.
  • The captured image is the original image 201 shown in FIG. 2A above; after the selfie is completed, the captured image is stored in the mobile phone.
  • the user turns on the image editing function of the mobile phone, so that the display screen enters the image editing interface.
  • In the image, the sun appears as rainbow glare, and the rainbow glare blocks some objects (such as mountain peaks); the mobile phone turns on the anti-glare function and performs anti-glare processing, thereby obtaining a glare-free high-definition image.
  • the high-definition image with no glare is shown in Figure 2B above.
  • the glare has been eliminated, and the sun and some mountain peaks originally blocked by the rainbow glare of the sun can be seen;
  • The glare of the sun (rainbow glare) in the captured image is removed, so that a de-glared high-definition image can be obtained.
  • When viewing an image, the user can make the display screen enter the image editing interface by clicking an editing button, by a voice command, or the like.
  • the anti-glare processing is performed to obtain the high-definition image with the glare removed.
  • The glare generated in the captured image is sensitive to the wavelength. Therefore, by obtaining the spectral components of the highlighted object, a point spread function suited to different bright objects is generated, and a glare-free high-definition image is then obtained through non-blind deconvolution processing, so as to effectively improve or even eliminate the glare generated in the image captured by the camera under the display screen, especially the rainbow glare, and improve the clarity of the image.
  • FIG. 13 shows a flowchart of an image processing method according to an embodiment of the present application.
  • The execution subject of the method may be an electronic device, for example, the mobile phone in FIG. 1, and the method may include the following steps:
  • Step 1301: Acquire an image to be processed.
  • the image to be processed is an image collected by the under-screen imaging system.
  • The under-screen imaging system may include a display screen and a camera disposed below the display screen. For example, the image to be processed may be the real-time preview image captured by the electronic device when the user shoots an image or video in the above embodiment, as shown in FIG. 3; it may also be an image or video that has already been captured and stored in the electronic device in the above embodiment, or an image captured by another external device, which is not limited in the embodiments of the present application.
  • The image to be processed may include a highlighted object and the glare generated by it, where the highlighted object may be a photographic object showing a specified color and/or brightness in the image to be processed, and the glare is the dazzling light produced when the light of the highlighted object passes through the display screen and the camera under the display screen; the glare may be rainbow-shaped.
  • The specified color may be white; the specified brightness may be the brightness value of white in YUV color coding, or the grayscale value of white in the RGB color mode.
  • It may be determined that a bright object exists in the image to be processed when the grayscale values of most of the pixels in a certain area of the image to be processed are not less than a preset grayscale threshold. For the specific implementation process, reference may be made to the relevant descriptions in the foregoing embodiments, and details are not repeated here.
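A minimal sketch of this threshold test, assuming BT.601 luma as the grayscale conversion and block-wise voting; the threshold, fraction, and block size are illustrative tuning parameters, not values from the patent:

```python
import numpy as np

def find_bright_regions(img_rgb, gray_thresh=240, frac=0.8, block=16):
    """Flag blocks of the image where most pixels exceed a grayscale threshold.

    Returns a boolean grid: True where a block likely contains a bright object.
    """
    # BT.601 luma approximation of grayscale from RGB.
    gray = (0.299 * img_rgb[..., 0] + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    h, w = gray.shape
    gh, gw = h // block, w // block
    # Tile the image into (block x block) cells and vote within each cell.
    blocks = gray[:gh * block, :gw * block].reshape(gh, block, gw, block)
    bright_frac = (blocks >= gray_thresh).mean(axis=(1, 3))
    return bright_frac >= frac
```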
  • Step 1302: Generate a first point spread function according to the spectral components of the highlighted object; the first point spread function is the point spread function of at least one channel of the under-screen imaging system.
  • The spectral components of the highlighted object can be collected by a spectral measurement device configured in the electronic device or by an external spectral measurement device; alternatively, the spectral components corresponding to different light source types can be pre-stored in the electronic device, so that the light source type of the highlighted object in the image to be processed is identified and the spectral composition of the highlighted object is determined accordingly.
  • The spectral composition of a bright object is shown, for example, in FIG. 6.
  • the step may further include: acquiring the spectral components of the bright object collected by the spectral measurement device.
  • the spectral measurement device may be a configured spectral measurement device of the electronic device or an external spectral measurement device.
  • For a specific implementation, reference may be made to the relevant descriptions of "the first and third modes of the method for obtaining the spectral components of the highlighted object" in step 402 in FIG. 4, and details are not repeated here.
  • In this way, the spectral components of the highlighted object can be acquired in real time during shooting by the spectral measurement device built into the electronic device, without relying on external equipment, which improves the functional completeness of the electronic device; alternatively, spectral components collected by an external device can be used, which saves the cost of the electronic device.
  • this step may further include: performing image recognition on the image to be processed to determine the light source type of the highlighted object; and determining the spectral composition of the highlighted object according to preset spectral components of different light source types.
  • the electronic device can obtain the spectral composition of the bright object without additional configuration of the spectral measurement device, which saves the cost of the electronic device.
  • The first point spread function may be the point spread function of the red light channel, the green light channel, or the blue light channel; it may also be the point spread function of any two of the channels, or of all three channels, which is not limited in this embodiment of the present application. For example, it may be the point spread functions of the three channels shown in FIG. 10.
  • In a possible implementation, step 1302 may include: generating a second point spread function according to the spectral components of the highlighted object and the model of the display screen, where the second point spread function is the point spread function corresponding to light of different wavelengths of the highlighted object after passing through the under-screen imaging system; and generating the first point spread function according to the second point spread function and the photosensitive characteristic curve of the camera.
  • The model of the display screen may be pre-stored in the electronic device, or may be generated while performing step 1302. The display screen model may include an amplitude modulation function A(m, n) and a phase modulation function P(m, n), where m and n represent the coordinates of pixels on the display screen.
  • In other words, a display screen model can be stored in the electronic device in advance; the point spread functions of light of different wavelengths of the highlighted object after passing through the under-screen imaging system are then generated from the model of the display screen in combination with the spectral components of the highlighted object, and are combined with the photosensitive characteristic curve of the camera to obtain the point spread function of at least one channel of the under-screen imaging system.
  • In this way, the point spread function corresponding to the real glare can be obtained through physical modeling using the spectral components of the highlighted object, so that the dispersion effect caused by the periodic structure of the display screen can be effectively eliminated; this has an excellent suppression effect on the glare caused by the periodic structure of the display screen, especially the rainbow glare, and improves the imaging quality of the under-screen imaging system.
  • In a possible implementation, generating the second point spread function according to the spectral components of the highlighted object and the model of the display screen may include: taking the model of the display screen as the transmittance function, combining it with the light wave propagation factor, and using the spectral components of the highlighted object as weights to obtain the second point spread function; where the model of the display screen includes the amplitude modulation function and the phase modulation characteristic function of the display screen, and the light wave propagation factor is determined according to the focal length of the camera and the wavelengths of the highlighted object.
  • The above formula (1) can be used to determine the second point spread function from the actual focal length of the camera when shooting, the coordinates of the pixels on the display screen, the coordinates of the pixels on the image sensor of the camera, the spectral components of the highlighted object, and the phase modulation function and amplitude modulation function of the display screen.
  • the display screen model constructed by the amplitude modulation function and the phase modulation characteristic function is used as the transmittance function.
  • The light wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the actual wavelengths of the highlighted object; at the same time, the spectral components of the highlighted object are used as weights to obtain the point spread functions of light of different wavelengths of the highlighted object after passing through the under-screen imaging system. The point spread function obtained in this way can more realistically express how a bright object produces glare through under-screen imaging.
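A simplified sketch of this physical model: the screen model A·exp(iP) is used as the pupil transmittance, a thin-lens quadratic phase stands in for the light wave propagation factor, and the far-field intensity gives a single-wavelength PSF; a spectrally weighted sum of such PSFs then approximates the second point spread function. This is not the patent's exact formula (1), and all parameter values and names are illustrative:

```python
import numpy as np

def psf_for_wavelength(A, P, wavelength, focal_len, pitch):
    """Single-wavelength PSF of the under-screen imaging system (sketch).

    A, P: amplitude and phase modulation arrays sampled over the screen
    aperture; pitch: sample spacing in meters.
    """
    n = A.shape[0]
    coords = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    # Screen transmittance times a quadratic lens phase (propagation factor).
    field = A * np.exp(1j * P) * np.exp(-1j * k * (X**2 + Y**2) / (2 * focal_len))
    # Far-field intensity at the sensor: |FFT of the pupil field|^2.
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

# Spectrally weighted combination over wavelengths (hypothetical variables):
# psf_total = sum(s * psf_for_wavelength(A, P, lam, f, pitch)
#                 for s, lam in zip(spectrum, wavelengths))
```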
  • In a possible implementation, step 1302 may include: generating the first point spread function according to the spectral components of the highlighted object and a preset third point spread function, where the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
  • The third point spread function can be obtained by pre-calibration in the laboratory and stored in the electronic device; for example, the third point spread function can be as shown in FIG.
  • By combining the pre-stored point spread function corresponding to the under-screen imaging system with the spectral components of the above-mentioned bright object, the point spread function of at least one channel of the under-screen imaging system can be generated quickly, which significantly improves the computational efficiency and the anti-glare processing speed, and can be applied to anti-glare processing scenarios with a large amount of data and high processing speed requirements, such as video shooting.
  • Step 1303: Perform deconvolution processing on the first point spread function and the image to be processed to obtain a glare-removed image.
  • Non-blind deconvolution algorithms such as neural network learning, convex optimization deconvolution, and non-convex optimization deconvolution can be used to perform the deconvolution operation on the first point spread function and the image to be processed, thereby obtaining the glare-removed image; for example, the image after removing the glare is shown in FIG. 2B.
  • In a possible implementation, the image to be processed may also be divided into a first area and a second area, where the first area is the area where the highlighted object and the glare generated by it are located, and the second area is the area of the image to be processed other than the first area.
  • Correspondingly, step 1303 may include: performing deconvolution processing on the first point spread function and the first area to obtain the de-glared first area; and fusing the de-glared first area with the second area to obtain the glare-removed image.
  • the first area may be the glare area where the highlighted object is located in the preview image in the foregoing embodiment, such as the glare area 302 shown in FIG. 3 .
  • The second area may be the area of the preview image other than the glare area in the foregoing embodiment.
  • the first area after removing the glare may be as shown in FIG. 12 .
  • The image to be processed is divided into a first area where the highlighted object and the glare generated by it are located, and a second area that does not contain the highlighted object or its glare; further anti-glare processing is then performed only on the first area, which improves processing efficiency and saves processing resources. At the same time, this reduces or eliminates the influence of the anti-glare processing on the second area, which does not contain bright objects or the glare generated by them.
  • For the specific description of the implementation, reference may be made to the relevant descriptions of step 400, step 401, step 404, and step 405 in FIG. 4 in the foregoing embodiment, which are not repeated here.
  • In the embodiment of the present application, the image to be processed includes a highlighted object and the glare generated by it. According to the spectral components of the highlighted object in the image to be processed, the first point spread function, i.e., the point spread function of at least one channel of the under-screen imaging system corresponding to different highlighted objects, is generated; the first point spread function and the image to be processed are then deconvolved to obtain an image from which the glare generated by the bright object has been removed. This can effectively improve or eliminate the glare generated in the image captured by the camera under the display screen, especially the rainbow glare, improving the clarity of the image and enhancing the user experience.
  • The embodiments of the present application further provide an image processing apparatus, and the image processing apparatus is used to execute the technical solutions described in the above method embodiments.
  • FIG. 14 shows a structural diagram of an image processing apparatus according to an embodiment of the present application.
  • The apparatus may include: an acquisition module 1401 for acquiring an image to be processed, the image to be processed being an image collected by an under-screen imaging system, where the under-screen imaging system includes a display screen and a camera set below the display screen, and the image to be processed includes a highlighted object and the glare generated by it, the highlighted object being a photographic object showing a specified color and/or brightness in the image to be processed;
  • the generating module 1402 is used to generate a first point spread function according to the spectral components of the highlighted object, the first point spread function being the point spread function of at least one channel of the under-screen imaging system; and the processing module 1403 is used to perform deconvolution processing on the first point spread function and the image to be processed to obtain a glare-removed image.
  • In a possible implementation, the generation module is further configured to: generate a second point spread function according to the spectral components of the highlighted object and the model of the display screen, where the second point spread function is the point spread function corresponding to light of different wavelengths of the highlighted object after passing through the under-screen imaging system; and generate the first point spread function according to the second point spread function and the photosensitive characteristic curve of the camera.
  • In a possible implementation, the generating module is further configured to: generate the first point spread function according to the spectral components of the highlighted object and a preset third point spread function, where the third point spread function is the point spread function corresponding to different wavelengths after passing through the under-screen imaging system.
  • the apparatus may further include: a spectral measurement device, configured to collect spectral components of the highlighted object.
  • In a possible implementation, the device may further include a collection module configured to perform image recognition on the image to be processed to determine the light source type of the highlighted object, and to determine the spectral composition of the highlighted object according to preset spectral components of different light source types.
  • In a possible implementation, the apparatus may further include a segmentation module configured to segment the image to be processed into a first area and a second area, where the first area is the area where the highlighted object and the glare generated by it are located, and the second area is the area of the image to be processed other than the first area. The processing module is further configured to: perform deconvolution processing on the first point spread function and the first area to obtain the glare-removed first area; and fuse the glare-removed first area with the second area to obtain the glare-removed image.
  • In a possible implementation, the generation module is further configured to: take the model of the display screen as the transmittance function, combine it with the light wave propagation factor, and take the spectral components of the highlighted object as weights to obtain the second point spread function; where the model of the display screen includes the amplitude modulation function and the phase modulation characteristic function of the display screen, and the light wave propagation factor is determined according to the focal length at which the camera captures the image to be processed and the wavelengths of the highlighted object.
  • In the embodiment of the present application, the image to be processed includes a highlighted object and the glare generated by it. According to the spectral components of the highlighted object in the image to be processed, the point spread function of at least one channel of the under-screen imaging system corresponding to different highlighted objects is generated; the first point spread function and the image to be processed are then deconvolved to obtain an image from which the glare generated by the bright object has been removed. This can effectively improve or eliminate the glare generated in the image captured by the camera under the display screen, especially the rainbow glare, improving the clarity of the image and enhancing the user experience.
  • An embodiment of the present application provides an electronic device, which may include: a display screen, a camera below the display screen, a processor, and a memory; wherein, the camera is used to collect images to be processed through the display screen; the display screen is used for Display the image to be processed and the image after deglare; the memory is used to store the processor-executable instructions; the processor is configured to implement the image processing method of the above embodiment when the instruction is executed.
  • FIG. 15 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. Taking the electronic device as a mobile phone as an example, FIG. 15 shows a schematic structural diagram of the mobile phone 200 .
  • the mobile phone 200 may include a processor 210, an external memory interface 220, an internal memory 221, a USB interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 251, a wireless communication module 252, Audio module 270, speaker 270A, receiver 270B, microphone 270C, headphone jack 270D, sensor module 280, buttons 290, motor 291, indicator 292, camera 293, display screen 294, SIM card interface 295, etc.
  • The sensor module 280 may include a gyroscope sensor 280A, an acceleration sensor 280B, a proximity light sensor 280G, a fingerprint sensor 280H, and a touch sensor 280K (of course, the mobile phone 200 may also include other sensors, such as a color temperature sensor, a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., not shown in the figure).
  • the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the mobile phone 200 .
  • The mobile phone 200 may include more or fewer components than shown, may combine some components or split some components, or may have a different component arrangement.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • The processor 210 may include one or more processing units; for example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 200 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is cache memory.
  • the memory may hold instructions or data that have just been used or recycled by the processor 210 . If the processor 210 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided, and the waiting time of the processor 210 is reduced, thereby improving the efficiency of the system.
  • the processor 210 can run the image processing method provided by the embodiments of the present application, so as to effectively improve or eliminate glare generated by the image captured by the camera under the display screen, especially rainbow glare, which improves the clarity of the image and improves the user experience.
  • The processor 210 may include different devices. For example, when a CPU and a GPU/NPU are integrated, the CPU and the GPU/NPU may cooperate to execute the image processing method provided by the embodiments of the present application; for example, some algorithms of the image processing method are executed by the CPU and others by the GPU/NPU, to obtain higher processing efficiency.
  • Display screen 294 is used to display images, videos, and the like.
  • Display screen 294 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • cell phone 200 may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the display screen 294 may be used to display information entered by or provided to the user as well as various graphical user interfaces (GUIs). For example, display 294 may display photos, videos, web pages, or documents, and the like.
  • the display screen 294 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • The camera 293 (a front camera or a rear camera, or a camera that can serve as both a front camera and a rear camera) is used to capture still images or videos, and the camera 293 can be arranged below the display screen 294.
  • the camera 293 may include a photosensitive element such as a lens group and an image sensor, wherein the lens group includes a plurality of lenses (convex or concave) for collecting the light signal reflected by the object to be photographed, and transmitting the collected light signal to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • Internal memory 221 may be used to store computer executable program code, which includes instructions.
  • the processor 210 executes various functional applications and data processing of the mobile phone 200 by executing the instructions stored in the internal memory 221 .
  • the internal memory 221 may include a storage program area and a storage data area.
  • the storage program area may store the operating system, the code of the application (such as a camera application, etc.), and the like.
  • the storage data area may store data created during the use of the mobile phone 200 (such as images and videos collected by the camera application) and the like.
  • the internal memory 221 may also store one or more computer programs corresponding to the image processing methods provided in the embodiments of the present application.
  • the one or more computer programs are stored in the above-mentioned memory 221 and configured to be executed by the one or more processors 210, and the one or more computer programs include instructions that can be used to perform the corresponding embodiments described above.
  • For example, the computer program may include: an acquisition module 1401 for acquiring an image to be processed, where the image to be processed is an image collected by an under-screen imaging system, the under-screen imaging system includes a display screen and a camera set below the display screen, and the image to be processed includes a highlighted object and the glare generated by it, the highlighted object being a photographic object exhibiting a specified color and/or brightness in the image to be processed; a generating module 1402 configured to generate a first point spread function according to the spectral components of the highlighted object, the first point spread function being the point spread function of at least one channel of the under-screen imaging system; and a processing module 1403 used to perform deconvolution processing on the first point spread function and the image to be processed to obtain a glare-removed image.
  • the internal memory 221 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the code of the image processing method provided by the embodiment of the present application may also be stored in an external memory.
  • the processor 210 may execute the code of the image processing method stored in the external memory through the external memory interface 220 .
  • the display screen 294 of the mobile phone 200 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, etc.).
  • Display screen 294 displays an interface of a camera application, such as a capture interface.
  • the mobile communication module 251 can provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the mobile phone 200 .
  • the mobile communication module 251 may also be used for information interaction with other devices (eg, acquiring spectral components, images to be processed, etc.).
  • The mobile phone 200 can implement audio functions, such as music playback and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, and the application processor.
  • the cell phone 200 can receive key 290 input and generate key signal input related to user settings and function control of the cell phone 200 .
  • the mobile phone 200 can use the motor 291 to generate a vibration prompt (for example, a prompt for turning on the anti-glare function).
  • the indicator 292 in the mobile phone 200 can be an indicator light, which can be used to indicate a charging state, a change in power, or a message, a prompt message, a missed call, a notification, and the like.
  • The mobile phone 200 may include more or fewer components than those shown in FIG. 13, which is not limited in this embodiment of the present application.
  • the illustrated handset 200 is merely an example: the handset 200 may have more or fewer components than those shown, may combine two or more components, or may have a different configuration of components.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take an Android system with a layered architecture as an example to exemplarily describe the software structure of an electronic device.
  • FIG. 16 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a brief pause without user interaction. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one is the function library that the Java language needs to call, and the other is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • An embodiment of the present application provides an image processing apparatus, including: a processor and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the above method when executing the instructions.
  • Embodiments of the present application provide a non-volatile computer-readable storage medium on which computer program instructions are stored, and when the computer program instructions are executed by a processor, implement the above method.
  • Embodiments of the present application provide a computer program product, including computer-readable code, or a non-volatile computer-readable storage medium carrying computer-readable code; when the computer-readable code runs in a processor of an electronic device, the processor in the electronic device executes the above method.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital video discs (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves on which instructions are stored, and any suitable combination of the foregoing.
  • Computer readable program instructions or code described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
  • the computer program instructions used to perform the operations of the present application may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present application.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions that comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by hardware that performs the corresponding functions or actions (e.g., circuits or application-specific integrated circuits (ASICs)), or can be implemented by a combination of hardware and software, such as firmware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Optics & Photonics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Studio Devices (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring an image to be processed, the image to be processed being an image captured by an under-display imaging system comprising a display screen and a camera disposed under the display screen, and the image to be processed comprising a highlighted object and the glare produced by it, the highlighted object being a photographed object of a specified color or brightness in the image to be processed; generating a first point spread function according to the spectral composition of the highlighted object, the first point spread function being a point spread function of at least one channel of the under-display imaging system; and performing deconvolution processing on the first point spread function and the image to be processed to obtain a glare-free image. In the present application, the glare produced in an image captured by a camera under a display screen, in particular rainbow glare, can be reduced or eliminated, which increases the sharpness of the image and improves the user experience.
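The abstract describes removing glare by deconvolving the captured image with a point spread function (PSF) of the under-display imaging system. The following is a minimal numpy sketch of that idea only: it assumes a synthetic Gaussian PSF and Wiener-regularized deconvolution, whereas the patent derives the actual per-channel PSF from the spectral composition of the highlighted object. All function names and parameters below are illustrative, not taken from the patent.

```python
import numpy as np

def gaussian_psf(size: int, sigma: float) -> np.ndarray:
    """A normalized 2D Gaussian as a stand-in for one channel's PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def psf_to_otf(psf: np.ndarray, shape) -> np.ndarray:
    """Zero-pad the PSF to the image shape, shift its peak to the origin,
    and return its 2D FFT (the optical transfer function)."""
    padded = np.zeros(shape)
    h, w = psf.shape
    padded[:h, :w] = psf
    padded = np.roll(padded, (-(h // 2), -(w // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, k: float = 1e-3) -> np.ndarray:
    """Frequency-domain Wiener deconvolution of `image` by `psf`;
    `k` regularizes frequencies where the OTF is close to zero."""
    otf = psf_to_otf(psf, image.shape)
    spec = np.fft.fft2(image)
    restored = spec * np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(restored))

# Simulate a bright point source (the "highlighted object") whose light is
# spread into glare by the PSF, then undo the spread by deconvolution.
scene = np.zeros((64, 64))
scene[32, 32] = 1.0
psf = gaussian_psf(9, sigma=2.0)
glared = np.real(np.fft.ifft2(np.fft.fft2(scene) * psf_to_otf(psf, scene.shape)))
restored = wiener_deconvolve(glared, psf)
```

Here `k` trades noise amplification against sharpness; in the patent's scheme, the synthetic Gaussian would be replaced by the first point spread function generated from the highlighted object's spectral components, applied per channel.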
PCT/CN2021/137491 2020-12-14 2021-12-13 Image processing method and apparatus, electronic device and storage medium WO2022127738A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011474431.3A CN114626995A (zh) 2020-12-14 2020-12-14 Image processing method and apparatus, electronic device and storage medium
CN202011474431.3 2020-12-14

Publications (1)

Publication Number Publication Date
WO2022127738A1 true WO2022127738A1 (fr) 2022-06-23

Family

ID=81897184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/137491 WO2022127738A1 (fr) 2020-12-14 2021-12-13 Procédé et appareil de traitement d'image, dispositif électronique et support de stockage

Country Status (2)

Country Link
CN (1) CN114626995A (fr)
WO (1) WO2022127738A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050083517A1 (en) * 2003-10-03 2005-04-21 Abu-Tarif Asad Methods, system, and program product for the detection and correction of spherical aberration
US20180091705A1 (en) * 2016-09-26 2018-03-29 Rambus Inc. Methods and Systems for Reducing Image Artifacts
CN107907483A (zh) * 2017-08-14 2018-04-13 西安电子科技大学 Super-resolution spectral imaging system and method based on a scattering medium
CN110087051A (zh) * 2019-04-19 2019-08-02 清华大学 Method and system for removing glare from color images based on the HSV color space
CN111123538A (zh) * 2019-09-17 2020-05-08 印象认知(北京)科技有限公司 Image processing method and method for adjusting a diffraction screen structure based on a point spread function
CN111246053A (zh) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Image processing method and electronic device
CN111812758A (zh) * 2020-07-21 2020-10-23 欧菲微电子技术有限公司 Diffractive optical element and preparation method therefor, under-screen optical system, and electronic device


Also Published As

Publication number Publication date
CN114626995A (zh) 2022-06-14

Similar Documents

Publication Publication Date Title
US9571739B2 (en) Camera timer
WO2018072271A1 (fr) Procédé et dispositif d'optimisation d'affichage d'image
US20220350470A1 (en) User Profile Picture Generation Method and Electronic Device
CN114586008A (zh) 显示页面元素的方法和电子设备
EP4120183A1 (fr) Procédé d'amélioration d'image et dispositif électronique
US20230345113A1 (en) Display control method and apparatus, electronic device, and medium
CN113938602B (zh) 图像处理方法、电子设备、芯片及可读存储介质
US20230245441A9 (en) Image detection method and apparatus, and electronic device
CN114640783B (zh) 一种拍照方法及相关设备
CN114926351B (zh) 图像处理方法、电子设备以及计算机存储介质
CN117499779B (zh) 一种图像预览方法、设备以及存储介质
CN113452969B (zh) 图像处理方法和装置
EP4109879A1 (fr) Procédé et dispositif de conservation des couleurs des images
CN116152123B (zh) 图像处理方法、电子设备及可读存储介质
US20230353864A1 (en) Photographing method and apparatus for intelligent framing recommendation
WO2022127738A1 (fr) Image processing method and apparatus, electronic device and storage medium
CN114640798B (zh) 图像处理方法、电子设备及计算机存储介质
US11989345B1 (en) Machine learning based forecasting of human gaze
CN113518172A (zh) 图像处理方法和装置
CN115589539B (zh) 一种图像调节的方法、设备及存储介质
EP4296840A1 (fr) Procédé et appareil de défilement pour capturer une capture d'écran
US20240062392A1 (en) Method for determining tracking target and electronic device
WO2016044983A1 (fr) Procédé et appareil de traitement d'image, et dispositif électronique
CN116708996B (zh) 一种拍照方法、图像优化模型训练方法及电子设备
US20240040235A1 (en) Video shooting method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21905667

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21905667

Country of ref document: EP

Kind code of ref document: A1