WO2022198861A1 - Interference fringe correction method and under-screen system - Google Patents

Interference fringe correction method and under-screen system

Info

Publication number
WO2022198861A1
WO2022198861A1 · PCT/CN2021/107941
Authority
WO
WIPO (PCT)
Prior art keywords
corrected
image
correction parameter
correction
parameter set
Prior art date
Application number
PCT/CN2021/107941
Other languages
English (en)
French (fr)
Inventor
兰富洋
王兆民
杨鹏
黄源浩
肖振中
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2022198861A1
Priority to US18/221,662 (published as US20230370730A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/20Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/514Depth or shape recovery from specularities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/74Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Definitions

  • The present application belongs to the technical field of image processing, and in particular relates to an interference fringe correction method and an under-screen system.
  • As mobile phone manufacturers continue to optimize full-screen designs, the under-screen camera module will become standard on most mobile phones.
  • The imaging principle of the under-screen camera module is as follows: the illumination light source (for example, an infrared laser) in the under-screen camera module supplements the light in the target area through the screen, and the acquisition module (for example, a camera) photographs the illuminated object to obtain an infrared image.
  • However, owing to the physical characteristics of the display screen, the beam emitted by the illumination light source is split into multiple beams as it passes through the screen: some of the light passes directly through the display, while the rest is reflected within the display and carries different phase delays. The multiple beams meet and superpose on the target area, producing interference fringes and degrading image quality.
  • The embodiments of the present application provide an interference fringe correction method, an under-screen system, and a computer-readable storage medium, which can solve the problem that multiple beams meet and superpose on the target area, producing interference fringes that lower the quality of the collected images.
  • A first aspect of the embodiments of the present application provides an interference fringe correction method, the correction method comprising:
  • acquiring correction parameter sets for different shooting distances, each correction parameter set including different correction parameters corresponding to different coordinate positions;
  • acquiring an image to be corrected and calculating an average depth value of the image to be corrected, the average depth value being the average of the depth values corresponding to a plurality of pixels to be corrected in the image to be corrected;
  • selecting, from the correction parameter sets for the different shooting distances, a target correction parameter set corresponding to the average depth value;
  • correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain a corrected image.
  • A second aspect of the embodiments of the present application provides an under-screen system based on interference fringe correction, comprising a display screen, an illumination light source, an acquisition module, a processor, and a memory, wherein:
  • the illumination light source is configured to emit a light beam to the target area through the display screen;
  • the acquisition module is configured to receive the light signal reflected by the target area and passing through the display screen, obtain an infrared image of the target area, and transmit it to the processor;
  • the processor is configured to correct the infrared image using a preset correction parameter set and the interference fringe correction method described in any of the above embodiments;
  • the memory is configured to store the correction parameter sets and a computer program executable on the processor.
  • A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the correction method of the first aspect above.
  • The beneficial effects of the embodiments of the present application are as follows: because interference fringes alternate between bright and dark, and the fringes occupy different positions in images to be corrected with different depth values, the present application selects the target correction parameter set according to the average depth value of the image to be corrected, obtains from that set the target correction parameters corresponding to the pixels to be corrected at different coordinate positions, and corrects the first pixel values of those pixels accordingly. In this way, the first pixel values of the pixels to be corrected at different coordinate positions in the image to be corrected are adjusted based on the target correction parameter set corresponding to the average depth value, reducing the degradation of the collected image caused by interference fringes.
  • FIG. 1 shows a schematic flowchart of a method for correcting interference fringes provided by the present application
  • FIG. 2 shows a specific schematic flowchart of step 101 in a method for correcting interference fringes provided by the present application
  • Fig. 3 shows the schematic diagram of the interference fringe translation variation curve provided by the present application
  • FIG. 4 shows a schematic diagram of the division strategy of the shooting distance provided by the present application
  • FIG. 5 shows a specific schematic flowchart of step 1012 in a method for correcting interference fringes provided by the present application
  • FIG. 6 shows a specific schematic flowchart of step 103 in a method for correcting interference fringes provided by the present application
  • FIG. 7 shows a specific schematic flowchart of step 103 in a method for correcting interference fringes provided by the present application
  • FIG. 8 shows a schematic diagram of the correction process provided by the present application.
  • FIG. 9 shows a specific schematic flowchart of step 104 in a method for correcting interference fringes provided by the present application.
  • Figure 10a shows a schematic diagram of light reflection provided by the present application
  • Figure 10b shows a schematic diagram of light reflection provided by the present application
  • FIG. 11 is a schematic diagram of an under-screen system provided by an embodiment of the present invention.
  • FIG. 12 shows a schematic diagram of a functional architecture of a processor provided by the present application.
  • Interference refers to the phenomenon in which two (or more) waves of the same amplitude and frequency and with a fixed phase relationship superpose, redistributing the vibration intensity. In the superposition region, the amplitude increases in some places and decreases in others, and the vibration intensity takes on a fixed spatial distribution of alternating strong and weak regions, forming interference fringes.
  • In the prior art, during imaging with an under-screen camera module (which includes an illumination light source and an acquisition module), the light from the illumination light source is split into a first beam and a second beam as it passes through the display screen, with a fixed phase difference between them.
  • The first beam and the second beam therefore interfere stably on the receiving plane (i.e., the target area), producing interference fringes.
  • The interference fringes scale as the shooting distance between the target area and the under-screen camera module changes, and at different shooting distances the fringes in the infrared images collected by the acquisition module shift along the parallax direction.
  • the embodiments of the present application provide a method for correcting interference fringes, an under-screen system, and a computer-readable storage medium, which can solve the above-mentioned technical problems.
  • FIG. 1 shows a schematic flowchart of a method for correcting interference fringes provided by the present application.
  • the calibration method includes the following steps:
  • Step 101: Acquire correction parameter sets for different shooting distances; each correction parameter set includes different correction parameters corresponding to different coordinate positions.
  • Interference fringes at different shooting distances differ only in where they are distributed in the image; their size does not change, and fringes at the same shooting distance occupy the same positions in the image. If the distribution of the fringes in the image can be determined, the fringes can be eliminated. Based on this rule, the present application acquires correction parameter sets for different shooting distances so as to adaptively correct the pixels to be corrected in the image to be corrected.
  • The division strategy for the different shooting distances may be uniform or non-uniform, and the interval between shooting distances may be preset according to the desired correction fineness.
  • A correction parameter set consists of the correction parameters corresponding to the different coordinate positions in the image of a single target plane.
  • A correction parameter either enhances a pixel value, attenuates it, or leaves it unchanged (i.e., the correction parameter is 1).
  • Since interference fringes alternate between bright and dark, pixels located in dark-fringe regions need their values enhanced, while pixels located in bright-fringe regions need their values attenuated.
  • The correction parameter sets may be pre-stored data; in that case, step 101 only needs to retrieve the correction parameter sets pre-stored in the memory.
  • The correction parameter sets may also be obtained through the following optional embodiments (a pre-stored correction parameter set likewise needs to be pre-computed through these optional embodiments):
  • As an optional embodiment of the present application, step 101 includes the following steps 1011 to 1012. Please refer to FIG. 2, which shows a specific schematic flowchart of step 101 in a method for correcting interference fringes provided by the present application.
  • Step 1011: Collect a plurality of interference fringe images at different shooting distances.
  • An interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the light source illuminates the target plane through the display screen.
  • The shooting distance is the distance between the acquisition module and a plane perpendicular to the optical axis of the acquisition module.
  • The interference fringe image reflects the second pixel values of the interference fringes at different coordinate positions.
  • A target plane perpendicular to the optical axis of the acquisition module or of the illumination light source is placed in front of the under-screen camera module. The target plane displays the interference fringes produced by the illumination light source shining through the screen (the illumination light source includes but is not limited to lasers, LEDs, and other light sources), and the target plane is larger than the acquisition module's field of view at the different shooting distances.
  • The acquisition module captures the interference fringes appearing on the target plane to obtain an interference fringe image.
  • To better reflect the second pixel values of the fringes at different coordinate positions, a plane with a white background and a uniform surface texture is preferred as the target plane.
  • The division strategy for the different shooting distances may be uniform or non-uniform, and the interval may be preset according to the desired correction fineness.
  • It is easy to overlook that when the target plane moves back and forth along the optical axis in a region close to the acquisition module, the translation of the interference fringes on the target plane changes by a large amount.
  • When the target plane moves back and forth along the optical axis in a region far from the acquisition module, the translation of the interference fringes on the target plane changes by only a small amount.
  • FIG. 3 shows a schematic diagram of the interference fringe translation variation curve provided by the present application. As shown in FIG. 3, the fringe translation variation decreases with the shooting distance along a falling curve.
  • Based on this rule, the present application divides the shooting distances densely at near distances and sparsely at far distances. FIG. 4 shows a schematic diagram of the shooting distance division strategy provided by the present application. As shown in FIG. 4, as the target plane moves away from the acquisition module, the sampling positions of the target plane become progressively sparser, so that the distribution density of the shooting distances is divided scientifically.
  • At the same shooting distance, one or more original images may be collected as the interference fringe image.
  • When multiple original images are collected, multi-frame averaging can be performed to obtain the interference fringe image.
  • the interference fringe image may be preprocessed to improve the image quality of the interference fringe image, thereby obtaining a finer correction effect.
  • the preprocessing includes image processing means such as noise reduction or gray value adjustment.
  • Step 1012: Normalize the second pixel values of the initial pixels in the interference fringe image one by one to obtain the correction parameter set corresponding to each of the different shooting distances.
  • Since interference fringes alternate between bright and dark, pixel values are high in bright-fringe regions and low in dark-fringe regions. The present application therefore normalizes the second pixel value of each initial pixel in the interference fringe image to obtain the correction parameter set for each shooting distance.
  • the specific process is as follows:
  • As an optional embodiment of the present application, step 1012 includes the following steps A1 to A2. Please refer to FIG. 5.
  • FIG. 5 shows a specific schematic flowchart of step 1012 in a method for correcting interference fringes provided by the present application.
  • Step A1: Acquire the maximum second pixel value in the interference fringe image.
  • Step A2: Divide the second pixel value of each initial pixel in the interference fringe image by the maximum second pixel value to obtain the correction parameter set.
  • Each initial pixel in the interference fringe image uses the following formula to obtain the correction parameter set:
  • Ia = Ib / M
  • where M represents the maximum second pixel value, Ia represents the correction parameter in the correction parameter set, and Ib represents the second pixel value of the initial pixel.
  • The correction parameters at the different coordinate positions in the correction parameter set obtained by the above formula lie in the range [0, 1].
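  • The normalization of steps A1 to A2 can be summarized in a short sketch. The snippet below is a minimal illustration rather than code from the patent; the function and array names, and the optional multi-frame averaging, are assumptions based on the optional embodiments above.

```python
import numpy as np

def build_correction_set(fringe_frames):
    """Steps A1-A2 sketch: build one correction parameter set from a list of
    one or more interference fringe images (2-D arrays) taken at a single
    shooting distance."""
    # Optional multi-frame averaging to suppress noise (optional embodiment above).
    fringe = np.mean(np.asarray(fringe_frames, dtype=np.float64), axis=0)
    m = fringe.max()   # Step A1: the maximum second pixel value
    return fringe / m  # Step A2: every correction parameter now lies in [0, 1]
```

One such set would be built per calibrated shooting distance, spaced densely near the acquisition module and sparsely far from it, as described above.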
  • Step 102: Acquire the image to be corrected and calculate its average depth value; the average depth value is the average of the depth values corresponding to the plurality of pixels to be corrected in the image to be corrected.
  • The image to be corrected of the target area is acquired, the depth values of the plurality of pixels to be corrected in it are calculated, and those depth values are averaged to obtain the average depth value. The depth values can be calculated in the following three ways:
  • Method 1: The illumination light source projects a structured light beam onto the target area, and the acquisition module receives the beam reflected back by the target area and forms an electrical signal.
  • The electrical signal is transmitted to the processor, which processes it to compute the intensity information of the reflected beam and form a structured light pattern, and finally performs matching computation or triangulation on the structured light pattern to obtain the depth values of the plurality of pixels to be corrected.
  • Method 2: The illumination light source projects an infrared beam onto the target area, and the acquisition module receives the beam reflected back by the target area and forms an electrical signal.
  • The electrical signal is transmitted to the processor, which processes it to compute a phase difference, indirectly computes from that phase difference the time of flight from emission by the illumination light source to reception by the acquisition module, and further computes the depth values of the plurality of pixels to be corrected from the time of flight.
  • The infrared beam may be of the pulsed or continuous-wave type, which is not limited here.
  • Method 3: The illumination light source projects an infrared pulsed beam onto the target object, and the acquisition module receives the beam reflected back by the target object and forms an electrical signal.
  • The electrical signal is transmitted to the processor, which counts the signal to obtain a waveform histogram, directly computes from the histogram the time of flight from emission by the illumination light source to reception by the acquisition module, and further computes the depth values of the plurality of pixels to be corrected from the time of flight.
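  • As a concrete illustration of step 102, the sketch below derives per-pixel depth from the phase difference of a continuous-wave beam (Method 2) via the standard indirect-ToF relation depth = c·Δφ/(4π·f_mod), then averages the valid depths. The modulation frequency, the validity mask, and all names are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def average_depth_from_phase(phase, f_mod=100e6):
    """Step 102 sketch: per-pixel depth from a CW-ToF phase map, then the
    average depth value over the pixels to be corrected."""
    # Indirect ToF: round-trip time t = phase / (2*pi*f_mod); depth = c*t / 2.
    depth = C * phase / (4.0 * np.pi * f_mod)
    valid = depth > 0  # assumed validity mask; the patent does not specify one
    return depth, float(depth[valid].mean())
```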
  • Step 103: From the correction parameter sets for the different shooting distances, select the target correction parameter set corresponding to the average depth value.
  • If the distribution interval of the different shooting distances is small enough, every average depth value can correspond to an equal shooting distance. If the interval is not small enough, the average depth value and the shooting distance may not be exactly equal.
  • When one of the different shooting distances equals the average depth value, the target correction parameter set corresponding to that shooting distance is selected. When none does, the following optional embodiment is executed:
  • As an optional embodiment of the present application, step 103 includes the following steps 1031 to 1032. Please refer to FIG. 6.
  • FIG. 6 shows a specific schematic flowchart of step 103 in a method for correcting interference fringes provided by the present application.
  • Step 1031: Among the different shooting distances, select the first shooting distance whose difference from the average depth value is smallest.
  • Step 1032: Use the correction parameter set corresponding to the first shooting distance as the target correction parameter set corresponding to the average depth value.
  • The case above in which the distribution interval of the shooting distances is small enough also applies to steps 1031 and 1032; under that condition the difference is simply 0.
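  • Steps 1031 to 1032 amount to a nearest-neighbour lookup over the calibrated shooting distances. A minimal sketch, with the mapping from shooting distance to correction parameter set assumed to be a dictionary:

```python
def select_target_set(correction_sets, average_depth):
    """Steps 1031-1032 sketch: pick the correction parameter set whose
    calibrated shooting distance is closest to the average depth value
    (the difference is 0 when an exactly equal distance exists)."""
    nearest = min(correction_sets, key=lambda dist: abs(dist - average_depth))
    return nearest, correction_sets[nearest]
```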
  • Step 104: According to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
  • Because the depth values of different pixels to be corrected differ while the correction parameters in a set share a single shooting distance (i.e., depth value), the depth values of the pixels to be corrected and of their corresponding correction parameters may not be consistent. To make them consistent, the present application converts each target correction parameter; the conversion process is shown in the following embodiments:
  • As an optional embodiment of the present application, step 104 includes the following steps B1 to B2.
  • FIG. 7 shows a specific schematic flowchart of step 103 in a method for correcting interference fringes provided by the present application.
  • Step B1: Substitute the target correction parameter set into a first preset formula to obtain a first correction parameter set.
  • Since there is a difference between the average depth value and the shooting distance, the target correction parameter set needs to be corrected by the first preset formula.
  • The first preset formula computes each first correction parameter from the corresponding target correction parameter and the two distances (the formula itself appears only as an image in the original and is not reproduced here), where:
  • Ia represents the target correction parameter in the target correction parameter set
  • Ib represents the first correction parameter in the first correction parameter set
  • La represents the shooting distance corresponding to the first correction parameter set
  • Lb represents the average depth value
  • Step B2: According to the different first correction parameters corresponding to the different coordinate positions in the first correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
  • In step B2, the corrected image may be obtained by dividing the first pixel values of the plurality of pixels to be corrected by their respective first correction parameters.
  • The first pixel values may also be divided by the respective first correction parameters and then multiplied by a preset adjustment coefficient (the adjustment coefficient adjusts the correction strength and can be preset according to the actual application scenario) to obtain the corrected image.
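  • Step B2 then reduces to an element-wise division, optionally scaled by the adjustment coefficient. A minimal sketch (the distance compensation of step B1 is omitted because the first preset formula is available only as an image):

```python
import numpy as np

def correct_image(image, first_params, adjustment=1.0):
    """Step B2 sketch: divide each first pixel value by the first correction
    parameter at the same coordinate position, then apply the adjustment
    coefficient that tunes the correction strength."""
    eps = 1e-6  # assumed guard against division by zero in dark-fringe regions
    return image.astype(np.float64) / np.maximum(first_params, eps) * adjustment
```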
  • To better illustrate the correction process of step B2, FIG. 8 shows a schematic diagram of the correction process provided by the present application.
  • As shown in FIG. 8, take the pixels a, b, and c of the image to be corrected as an example.
  • The image to be corrected corresponds to a unique correction parameter set (i.e., the target correction parameter set).
  • A correction parameter set is itself an image, except that the second pixel values at its different coordinate positions are the correction parameters.
  • the coordinate positions of the pixel point a to be corrected and the pixel point d1 are the same, the coordinate position of the pixel point b to be corrected and the pixel point d2 are the same, and the coordinate position of the pixel point c to be corrected and the pixel point d3 are the same.
  • the corrected pixel point A is obtained by dividing the first pixel value of the pixel point a to be corrected by the second pixel value of the pixel point d1 (ie, the correction parameter).
  • the corrected pixel point B is obtained by dividing the first pixel value of the pixel point b to be corrected by the second pixel value of the pixel point d2 (ie, the correction parameter).
  • the corrected pixel point C is obtained by dividing the first pixel value of the pixel point c to be corrected by the second pixel value of the pixel point d3 (ie, the correction parameter).
  • FIG. 8 is only an example; no limitation is placed on the number of pixels in the correction parameter set in FIG. 8, or on the number and positions of the pixels to be corrected in the image to be corrected.
  • As an optional embodiment of the present application, step 104 includes the following steps C1 to C4. Please refer to FIG. 9.
  • FIG. 9 shows a specific schematic flowchart of step 104 in a method for correcting interference fringes provided by the present application.
  • Step C1 Calculate the parallax between the interference fringe image corresponding to the target correction parameter set and the to-be-corrected image according to a second preset formula.
  • In the under-screen camera module, the acquisition module and the illumination light source are usually placed in alignment, i.e., at different positions. FIG. 10a shows a schematic diagram of light reflection provided by the present application.
  • In FIG. 10a, the straight line represents the target plane. When the illumination light source emits light onto the target plane, the light is reflected from the target plane to the acquisition module. Because the acquisition module and the illumination light source are at different positions, the emitted light and the reflected light follow different paths.
  • However, since the average depth value may not match an exactly equal shooting distance, if the average depth value and the shooting distance are unequal, there is a certain discrepancy in the coordinate positions of the correction parameters. FIG. 10b shows a schematic diagram of light reflection provided by the present application.
  • In FIG. 10b, the irregular shape is the target area (i.e., the foreground) in the image to be corrected, and point a maps to a pixel in the image to be corrected; the straight line represents the target plane. Because the emitted and reflected light follow different paths, if the camera corrects the first pixel value of point a with the correction parameter of point p2, which matches the reflected light, the result is wrong (the correction parameter of point p1 should be used to correct the first pixel value of point a).
  • Based on this consideration, the present application reconstructs the target correction parameter set (i.e., rebuilds the mapping between correction parameters and coordinate positions in the target correction parameter set to correct the positional deviation, so that the correction parameter at point p1 can be used to correct the first pixel value of point a).
  • First, the parallax produced by the positions of the illumination light source and the camera is calculated according to the second preset formula (the formula itself appears only as an image in the original and is not reproduced here), where:
  • d represents the parallax
  • La represents the shooting distance corresponding to the target correction parameter set
  • Lb represents the average depth value
  • b represents the distance between the optical axes of the camera and the illumination light source in the camera module
  • f represents the focal length of the camera module.
  • Step C2: Add the parallax to the coordinate position value of each second pixel in the target correction parameter set to obtain a second correction parameter set.
  • If the parallax is along the X axis, the parallax is added to the X value of the coordinate position to obtain the second correction parameter set.
  • If the parallax is along the Y axis, the parallax is added to the Y value of the coordinate position to obtain the second correction parameter set.
  • When the acquisition module and the illumination light source are at the same position, steps C1 to C2 do not need to be performed.
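  • A sketch of steps C1 to C2 follows. The disparity expression d = b·f·(1/Lb - 1/La) is the standard pinhole-stereo relation and is only an assumption here, since the second preset formula is available only as an image; the whole-pixel shift with np.roll is likewise a simplification of rebuilding the coordinate mapping.

```python
import numpy as np

def shift_params_by_parallax(target_params, la, lb, baseline, focal_px, axis=1):
    """Steps C1-C2 sketch: estimate the parallax between the fringe image
    calibrated at shooting distance la and the image to be corrected at
    average depth lb, then shift the correction parameters along the
    parallax direction (axis=1 assumes an X-direction parallax)."""
    d = baseline * focal_px * (1.0 / lb - 1.0 / la)  # assumed disparity formula
    return np.roll(target_params, int(round(d)), axis=axis)
```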
  • Step C3: Substitute the second correction parameter set into a third preset formula to obtain the third correction parameter set.
  • The third preset formula (which appears only as an image in the original and is not reproduced here) uses the following quantities:
  • Ic represents the second correction parameter set
  • Id represents the third correction parameter set
  • La represents the shooting distance corresponding to the target correction parameter set
  • Lb represents the average depth value
  • Step C4: According to the different second correction parameters corresponding to the different coordinate positions in the third correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
  • Steps C2 to C4 proceed in the same way as steps B1 to B2 in the foregoing optional embodiment; for details, refer to steps B1 to B2 above, which are not repeated here.
  • In this embodiment, the target correction parameter set is obtained from the average depth value of the first pixels, the target correction parameters in that set are obtained according to the second pixel positions of the first pixels, and the first pixel values of the first pixels are corrected according to the target correction parameters.
  • In this way, the first pixel values of the first pixels at different positions are adjusted based on the average depth value, reducing the poor image quality caused by interference fringes.
  • FIG. 11 is a schematic diagram of an under-screen system according to an embodiment of the present invention.
  • The under-screen system 110 of this embodiment includes an illumination light source 111, an acquisition module 112, a processor 113, a memory 114, and a display screen 115, wherein:
  • the illumination light source 111 is used to emit infrared light beams to the target area 116 through the display screen 115;
  • the acquisition module 112 is configured to receive the optical signal reflected by the target area and passing through the display screen 115, obtain the infrared image of the target area 116, and transmit it to the processor 113;
  • the processor 113 is configured to correct the infrared image by using the preset correction parameter set and the interference fringe correction method described in any of the above embodiments;
  • a memory 114 for storing a set of correction parameters and a computer program executable on the processor.
  • It should be noted that when either the illumination light source 111 or the acquisition module 112 is under the display screen 115, if the infrared image collected by the acquisition module 112 contains interference fringes, it can still be corrected with the above interference fringe correction method; no limitation is placed here.
  • In one embodiment, if the illumination light source 111 emits a structured light beam to the target area 116 through the display screen 115, the under-screen system 110 further includes a flood module 117, which projects a flood beam onto the target area 116 through the display screen 115.
  • The acquisition module 112, on the one hand, receives the structured light signal reflected by the target area and transmits it to the processor 113 to obtain the depth values of the target area; on the other hand, it receives the flood signal reflected by the target area to form an infrared image, which is then corrected according to the above method.
  • If the illumination light source 111 emits an infrared beam to the target area 116 through the display screen 115, the under-screen system 110 does not need supplementary flood illumination; the acquisition module can collect the infrared image directly, which is then corrected according to the above method.
  • In one embodiment, the processor 113 executes the steps in the embodiments of the interference fringe correction method. More specifically, these steps may be executed by one or more units in the processor 113 (see FIG. 12, which shows a schematic diagram of a processor functional architecture provided by the present application).
  • The units into which the processor may be divided, and their functions, are as follows:
  • an acquisition unit 121, configured to obtain the infrared image of the target area collected by the acquisition module;
  • a depth calculation unit 122, configured to calculate the average depth value of the infrared image, the average depth value being the average of the depth values corresponding to the plurality of pixels to be corrected in the infrared image;
  • a selection unit 123, configured to select, from the correction parameter sets for the different shooting distances stored in the memory, the target correction parameter set corresponding to the average depth value;
  • a correction unit 124, configured to correct, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the infrared image, to obtain the corrected image.
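  • Putting the four units together, the processor's per-frame flow could look like the sketch below; it reuses the illustrative functions from the earlier snippets, and everything else is an assumption for illustration.

```python
def process_frame(infrared_image, phase, correction_sets):
    """Sketch of the FIG. 12 pipeline: the acquisition unit (121) supplies
    infrared_image; depth calculation (122), selection (123), and
    correction (124) follow."""
    _, avg_depth = average_depth_from_phase(phase)                    # unit 122
    _, target_params = select_target_set(correction_sets, avg_depth)  # unit 123
    return correct_image(infrared_image, target_params)               # unit 124
```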
  • the present application provides an under-screen system based on interference fringe correction, which selects a target correction parameter set according to the average depth value of the image to be corrected. And according to the target correction parameter set, the target correction parameters corresponding to the pixels to be corrected at different coordinate positions are obtained, and the first pixel value of the pixels to be corrected is corrected according to the target correction parameters. In the above manner, the first pixel values of pixel points to be corrected at different coordinate positions in the image to be corrected are adjusted based on the average depth value corresponding to the target correction parameter set, so as to reduce the defect of poor quality of the collected image caused by interference fringes.
  • Those skilled in the art will understand that the under-screen system includes but is not limited to the above modules and combinations thereof; it may include more or fewer components than shown, combine certain components, or use different components.
  • the camera module includes a collection module and an illumination light source.
  • the illumination light source includes a light source and an optical component (the optical component may include a diffractive optical element, etc.) and the like.
  • the light source can be an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL), or a light source array composed of multiple light sources.
  • the light beam emitted by the light source can be visible light, infrared light, or ultraviolet light.
  • the light beam emitted by the light source can form a uniform, random or specially designed intensity distribution projection pattern on the reference plane.
  • the acquisition module includes modules such as an image sensor and a lens unit, and the lens unit receives part of the light beam reflected back by the object and images it on the image sensor.
  • The image sensor can be a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, an avalanche diode (AD), a single-photon avalanche diode (SPAD), or another image sensor.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory may be an internal storage unit of the under-screen system, such as a hard disk or a memory of an under-screen system.
  • The memory can also be an external storage device of the under-screen system, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the under-screen system.
  • the memory may also include both an internal storage unit of the under-screen system and an external storage device.
  • The memory is used for storing the computer program and the other programs and data required by the roaming control device.
  • the memory may also be used to temporarily store data that has been output or is to be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • The embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the foregoing method embodiments.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium.
  • the present application realizes all or part of the processes in the methods of the above embodiments, which can be completed by instructing the relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/living-body detection device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium.
  • In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
  • the disclosed apparatus/network device and method may be implemented in other manners.
  • the apparatus/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units.
  • the term “if” may be contextually interpreted as “when” or “once” or “in response to determining” or “in response to detecting”.
  • the phrases “if it is determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, to mean “once it is determined” or “in response to determining” or “once [the described condition or event] is detected” or “in response to detecting [the described condition or event]”.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • Appearances of the phrases “in one embodiment”, “in some embodiments”, “in other embodiments”, etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • The terms “comprising”, “including”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An interference fringe correction method and an under-screen system. The correction method includes: acquiring correction parameter sets for different shooting distances (step 101); acquiring an image to be corrected and calculating an average depth value of the image to be corrected (step 102); selecting, from the correction parameter sets for the different shooting distances, a target correction parameter set corresponding to the average depth value (step 103); and correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain a corrected image (step 104). In this way, the first pixel values of the pixels to be corrected at different coordinate positions are adjusted based on the average depth value of the image to be corrected, reducing the degradation of the collected image caused by interference fringes.

Description

Interference fringe correction method and under-screen system
Technical Field
This application belongs to the technical field of image processing, and in particular relates to an interference fringe correction method and an under-screen system.
Background
As mobile phone manufacturers continue to optimize full-screen designs, the under-screen camera module will become standard on most mobile phones. The imaging principle of the under-screen camera module is as follows: the illumination light source (for example, an infrared laser) in the under-screen camera module supplements the light in the target area through the screen, and the acquisition module (for example, a camera) in the under-screen camera module photographs the illuminated object to obtain an infrared image.
However, owing to the physical characteristics of the display screen, the beam emitted by the illumination light source is split into multiple beams as it passes through the screen: some of the light passes directly through the display, while the rest is reflected within the display and carries different phase delays. The multiple beams meet and superpose on the target area, producing interference fringes and degrading the quality of the collected images.
Summary
In view of this, the embodiments of the present application provide an interference fringe correction method, an under-screen system, and a computer-readable storage medium, which can solve the technical problem that multiple beams meet and superpose on the target area, producing interference fringes that lower the quality of the collected images.
A first aspect of the embodiments of the present application provides an interference fringe correction method, the correction method comprising:
acquiring correction parameter sets for different shooting distances, each correction parameter set including different correction parameters corresponding to different coordinate positions;
acquiring an image to be corrected and calculating an average depth value of the image to be corrected, the average depth value being the average of the depth values corresponding to a plurality of pixels to be corrected in the image to be corrected;
selecting, from the correction parameter sets for the different shooting distances, a target correction parameter set corresponding to the average depth value;
correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain a corrected image.
A second aspect of the embodiments of the present application provides an under-screen system based on interference fringe correction, comprising a display screen, an illumination light source, an acquisition module, a processor, and a memory, wherein:
the illumination light source is configured to emit a light beam to the target area through the display screen;
the acquisition module is configured to receive the light signal reflected by the target area and passing through the display screen, obtain an infrared image of the target area, and transmit it to the processor;
the processor is configured to correct the infrared image using a preset correction parameter set and the interference fringe correction method described in any of the above embodiments;
the memory is configured to store the correction parameter sets and a computer program executable on the processor.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the correction method of the first aspect above.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: because interference fringes alternate between bright and dark, and the fringes occupy different positions in images to be corrected with different depth values, the present application selects the target correction parameter set according to the average depth value of the image to be corrected, obtains from that set the target correction parameters corresponding to the pixels to be corrected at different coordinate positions, and corrects the first pixel values of those pixels accordingly. In this way, the first pixel values of the pixels to be corrected at different coordinate positions in the image to be corrected are adjusted based on the target correction parameter set corresponding to the average depth value, reducing the degradation of the collected image caused by interference fringes.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic flowchart of an interference fringe correction method provided by the present application;
FIG. 2 is a specific schematic flowchart of step 101 in an interference fringe correction method provided by the present application;
FIG. 3 is a schematic diagram of the interference fringe translation variation curve provided by the present application;
FIG. 4 is a schematic diagram of the shooting distance division strategy provided by the present application;
FIG. 5 is a specific schematic flowchart of step 1012 in an interference fringe correction method provided by the present application;
FIG. 6 is a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application;
FIG. 7 is a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application;
FIG. 8 is a schematic diagram of the correction process provided by the present application;
FIG. 9 is a specific schematic flowchart of step 104 in an interference fringe correction method provided by the present application;
FIG. 10a is a schematic diagram of light reflection provided by the present application;
FIG. 10b is a schematic diagram of light reflection provided by the present application;
FIG. 11 is a schematic diagram of an under-screen system provided by an embodiment of the present invention;
FIG. 12 is a schematic diagram of a processor functional architecture provided by the present application.
Detailed Description
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and technologies are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
To better understand the technical problem solved by the present application, the above background is further explained here:
Interference refers to the phenomenon in which two (or more) waves of the same amplitude and frequency and with a fixed phase relationship superpose, redistributing the vibration intensity. In the superposition region, the amplitude increases in some places and decreases in others, and the vibration intensity takes on a fixed spatial distribution of alternating strong and weak regions, forming interference fringes.
In the prior art, during imaging with an under-screen camera module (which includes an illumination light source and an acquisition module), the light from the illumination light source is split into a first beam and a second beam as it passes through the display screen, with a fixed phase difference between them; the first and second beams therefore interfere stably on the receiving plane (i.e., the target area), producing interference fringes, which scale as the shooting distance between the target area and the under-screen camera module changes. At different shooting distances, the fringes in the infrared images collected by the acquisition module shift along the parallax direction.
In view of this, the embodiments of the present application provide an interference fringe correction method, an under-screen system, and a computer-readable storage medium, which can solve the above technical problems.
Please refer to FIG. 1, which is a schematic flowchart of an interference fringe correction method provided by the present application. As shown in FIG. 1, the correction method includes the following steps:
Step 101: Acquire correction parameter sets for different shooting distances; each correction parameter set includes different correction parameters corresponding to different coordinate positions.
As is well known, interference fringes at different shooting distances differ only in where they are distributed in the image; their size does not change, and fringes at the same shooting distance occupy the same positions in the image. If the distribution of the fringes in the image can be determined, the fringes can be eliminated. Based on this rule, the present application acquires correction parameter sets for different shooting distances so as to adaptively correct the pixels to be corrected in the image to be corrected. The division strategy for the different shooting distances may be uniform or non-uniform, and the interval between shooting distances may be preset according to the desired correction fineness.
A correction parameter set consists of the correction parameters corresponding to the different coordinate positions in the image of a single target plane. A correction parameter either enhances a pixel value, attenuates it, or leaves it unchanged (i.e., the correction parameter is 1). Since interference fringes alternate between bright and dark, pixels in dark-fringe regions need their values enhanced, while pixels in bright-fringe regions need their values attenuated.
In step 101, the correction parameter sets may be pre-stored data; in that case, step 101 only needs to retrieve the correction parameter sets pre-stored in the memory.
In step 101, the correction parameter sets may also be obtained through the following optional embodiments (a pre-stored correction parameter set likewise needs to be pre-computed through these optional embodiments):
As an optional embodiment of the present application, step 101 includes the following steps 1011 to 1012. Please refer to FIG. 2, which is a specific schematic flowchart of step 101 in an interference fringe correction method provided by the present application.
Step 1011: Collect a plurality of interference fringe images at different shooting distances. An interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the light source illuminates the target plane through the display screen; the shooting distance is the distance between the acquisition module and a plane perpendicular to its optical axis; the interference fringe image reflects the second pixel values of the interference fringes at different coordinate positions.
A target plane perpendicular to the optical axis of the acquisition module or of the illumination light source is placed in front of the under-screen camera module. The target plane can display the interference fringes produced by the illumination light source shining through the screen (the illumination light source includes but is not limited to lasers, LEDs, and other light sources), and the target plane is larger than the acquisition module's field of view at the different shooting distances. The acquisition module captures the fringes appearing on the target plane to obtain an interference fringe image.
To better reflect the second pixel values of the fringes at different coordinate positions, a plane with a white background and a uniform surface texture is preferred as the target plane.
The division strategy for the different shooting distances may be uniform or non-uniform, and the interval may be preset according to the desired correction fineness. It is easy to overlook that when the target plane moves back and forth along the optical axis in a region close to the acquisition module, the translation of the fringes on the target plane changes by a large amount, whereas when it moves back and forth in a region far from the acquisition module, the translation changes by only a small amount. Please refer to FIG. 3, which is a schematic diagram of the interference fringe translation variation curve provided by the present application. As shown in FIG. 3, the fringe translation variation decreases with the shooting distance along a falling curve.
Based on this rule, the present application divides the shooting distances densely at near distances and sparsely at far distances. Please refer to FIG. 4, which is a schematic diagram of the shooting distance division strategy provided by the present application. As shown in FIG. 4, as the target plane moves away from the acquisition module, the sampling positions of the target plane become progressively sparser, so that the distribution density of the shooting distances is divided scientifically.
As an optional embodiment of the present application, at the same shooting distance, one or more original images may be collected as the interference fringe image. When there are multiple images, multi-frame averaging may be performed to obtain the interference fringe image.
As an optional embodiment of the present application, after the interference fringe image is obtained, it may be preprocessed to improve its image quality and thereby obtain a finer correction effect. The preprocessing includes image processing means such as noise reduction or gray value adjustment.
Step 1012: Normalize the second pixel value of each initial pixel in the interference fringe image one by one to obtain the correction parameter set corresponding to each of the different shooting distances.
Since interference fringes alternate between bright and dark, pixel values are high in bright-fringe regions and low in dark-fringe regions. The present application therefore normalizes the second pixel value of each initial pixel in the interference fringe image to obtain the correction parameter set for each shooting distance. The specific process is as follows:
As an optional embodiment of the present application, step 1012 includes the following steps A1 to A2. Please refer to FIG. 5, which is a specific schematic flowchart of step 1012 in an interference fringe correction method provided by the present application.
The following steps are performed on the interference fringe image for each shooting distance to obtain the correction parameter set corresponding to each of the different shooting distances:
Step A1: Obtain the maximum second pixel value in the interference fringe image.
Step A2: Divide the second pixel value of each initial pixel in the interference fringe image by the maximum second pixel value to obtain the correction parameter set.
Each initial pixel in the interference fringe image uses the following formula to obtain the correction parameter set:
Ia = Ib / M
where M represents the maximum second pixel value, Ia represents the correction parameter in the correction parameter set, and Ib represents the second pixel value of the initial pixel.
The correction parameters at the different coordinate positions in the correction parameter set obtained by the above formula lie in the range [0, 1].
Step 102: Acquire the image to be corrected and calculate its average depth value; the average depth value is the average of the depth values corresponding to the plurality of pixels to be corrected in the image to be corrected.
The image to be corrected of the target area is acquired, the depth values of the plurality of pixels to be corrected in it are calculated, and those depth values are averaged to obtain the average depth value.
The depth values of the pixels to be corrected can be calculated in the following three ways:
Method 1: The illumination light source projects a structured light beam onto the target area, and the acquisition module receives the beam reflected back by the target area and forms an electrical signal. The electrical signal is transmitted to the processor, which processes it to compute the intensity information of the reflected beam and form a structured light pattern, and finally performs matching computation or triangulation on the structured light pattern to obtain the depth values of the plurality of pixels to be corrected.
Method 2: The illumination light source projects an infrared beam onto the target area, and the acquisition module receives the beam reflected back by the target area and forms an electrical signal. The electrical signal is transmitted to the processor, which processes it to compute a phase difference, indirectly computes from that phase difference the time of flight from emission by the illumination light source to reception by the acquisition module, and further computes the depth values of the plurality of pixels to be corrected from the time of flight. It should be understood that the infrared beam may be of the pulsed or continuous-wave type, which is not limited here.
Method 3: The illumination light source projects an infrared pulsed beam onto the target object, and the acquisition module receives the beam reflected back by the target object and forms an electrical signal. The electrical signal is transmitted to the processor, which counts the signal to obtain a waveform histogram, directly computes from the histogram the time of flight from emission by the illumination light source to reception by the acquisition module, and further computes the depth values of the plurality of pixels to be corrected from the time of flight.
Step 103: From the correction parameter sets for the different shooting distances, select the target correction parameter set corresponding to the average depth value.
It can be understood that if the distribution interval of the different shooting distances is small enough, every average depth value can correspond to an equal shooting distance; if it is not small enough, the average depth value and the shooting distance may not be exactly equal. When one of the different shooting distances equals the average depth value, the target correction parameter set corresponding to that shooting distance is selected.
When none of the different shooting distances equals the average depth value, the following optional embodiment is executed:
As an optional embodiment of the present application, step 103 includes the following steps 1031 to 1032. Please refer to FIG. 6, which is a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application.
Step 1031: Among the different shooting distances, select the first shooting distance whose difference from the average depth value is smallest.
Step 1032: Use the correction parameter set corresponding to the first shooting distance as the target correction parameter set corresponding to the average depth value.
It can be understood that the case above in which the distribution interval of the different shooting distances is small enough also applies to steps 1031 and 1032; under that condition the difference is simply 0.
Step 104: According to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
Because the depth values of different pixels to be corrected differ while the correction parameters in a correction parameter set share a single shooting distance (i.e., depth value), the depth values of the pixels to be corrected and of their corresponding correction parameters may not be consistent. To make them consistent, the present application converts each target correction parameter; the conversion process is shown in the following embodiments:
As an optional embodiment of the present application, step 104 includes the following steps B1 to B2. Please refer to FIG. 7, which is a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application.
Step B1: Substitute the target correction parameter set into a first preset formula to obtain a first correction parameter set.
Since there is a difference between the average depth value and the shooting distance, the target correction parameter set needs to be corrected by the first preset formula.
The first preset formula (which appears only as an image in the original and is not reproduced here) computes each first correction parameter from the corresponding target correction parameter and the two distances, where Ia represents the target correction parameter in the target correction parameter set, Ib represents the first correction parameter in the first correction parameter set, La represents the shooting distance corresponding to the first correction parameter set, and Lb represents the average depth value.
Step B2: According to the different first correction parameters corresponding to the different coordinate positions in the first correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
In step B2, the corrected image may be obtained by dividing the first pixel values of the plurality of pixels to be corrected by their respective first correction parameters.
In step B2, the first pixel values may also be divided by the respective first correction parameters and then multiplied by a preset adjustment coefficient (the adjustment coefficient adjusts the correction strength and can be preset according to the actual application scenario) to obtain the corrected image.
To better explain the correction process of step B2, please refer to FIG. 8, which is a schematic diagram of the correction process provided by the present application. As shown in FIG. 8, take the pixels a, b, and c of the image to be corrected as an example. The image to be corrected corresponds to a unique correction parameter set (i.e., the target correction parameter set). It can be understood that a correction parameter set is itself an image, except that the second pixel values at its different coordinate positions are the correction parameters. Pixel a shares its coordinate position with pixel d1, pixel b with pixel d2, and pixel c with pixel d3. Dividing the first pixel value of pixel a by the second pixel value (i.e., the correction parameter) of pixel d1 gives the corrected pixel A; dividing that of pixel b by the second pixel value of d2 gives the corrected pixel B; and dividing that of pixel c by the second pixel value of d3 gives the corrected pixel C.
It should be noted that FIG. 8 is only an example; no limitation is placed on the number of pixels in the correction parameter set in FIG. 8, or on the number and positions of the pixels to be corrected in the image to be corrected.
As an optional embodiment of the present application, step 104 includes the following steps C1 to C4. Please refer to FIG. 9, which is a specific schematic flowchart of step 104 in an interference fringe correction method provided by the present application.
Step C1: Calculate, according to a second preset formula, the parallax between the interference fringe image corresponding to the target correction parameter set and the image to be corrected.
In the under-screen camera module, the acquisition module and the illumination light source are usually placed in alignment, i.e., at different positions. Please refer to FIG. 10a, which is a schematic diagram of light reflection provided by the present application. As shown in FIG. 10a, the straight line represents the target plane; when the illumination light source emits light onto the target plane, the light is reflected from the target plane to the acquisition module. Because the acquisition module and the illumination light source are at different positions, the emitted light and the reflected light follow different paths. However, since the average depth value may not match an exactly equal shooting distance, if the average depth value and the shooting distance are unequal, there is a certain discrepancy in the coordinate positions of the correction parameters. Please refer to FIG. 10b, which is a schematic diagram of light reflection provided by the present application. As shown in FIG. 10b, the irregular shape is the target area (i.e., the foreground) in the image to be corrected, and point a maps to a pixel in the image to be corrected; the straight line represents the target plane. Because the emitted and reflected light follow different paths, if the camera corrects the first pixel value of point a with the correction parameter of point p2, which matches the reflected light, the result is wrong (the correction parameter of point p1 should be used to correct the first pixel value of point a).
Based on this consideration, the present application reconstructs the target correction parameter set (i.e., rebuilds the mapping between correction parameters and coordinate positions in the target correction parameter set to correct the positional deviation, so that the correction parameter at point p1 can be used to correct the first pixel value of point a).
First, the parallax produced by the positions of the illumination light source and the camera is calculated according to the second preset formula (which appears only as an image in the original and is not reproduced here), where d represents the parallax, La represents the shooting distance corresponding to the target correction parameter set, Lb represents the average depth value, b represents the distance between the optical axes of the camera and the illumination light source in the camera module, and f represents the focal length of the camera module.
Step C2: Add the parallax to the coordinate position value of each second pixel in the target correction parameter set to obtain a second correction parameter set.
If the parallax is along the X axis, the parallax is added to the X value of the coordinate position to obtain the second correction parameter set.
If the parallax is along the Y axis, the parallax is added to the Y value of the coordinate position to obtain the second correction parameter set.
It is worth noting that when the acquisition module and the illumination light source are at the same position, steps C1 to C2 need not be performed.
Step C3: Substitute the second correction parameter set into a third preset formula to obtain the third correction parameter set.
The third preset formula (which appears only as an image in the original and is not reproduced here) uses the following quantities: Ic represents the second correction parameter set, Id represents the third correction parameter set, La represents the shooting distance corresponding to the target correction parameter set, and Lb represents the average depth value.
Step C4: According to the different second correction parameters corresponding to the different coordinate positions in the third correction parameter set, correct the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
Steps C2 to C4 are the same as steps B1 to B2 in the foregoing optional embodiment; for details, refer to steps B1 to B2 above, which are not repeated here.
In this embodiment, the target correction parameter set is obtained from the average depth value of the first pixels, the target correction parameters in the target correction parameter set are obtained according to the second pixel positions of the first pixels, and the first pixel values of the first pixels are corrected according to the target correction parameters. In this way, the first pixel values of the first pixels at different positions are adjusted based on the average depth value, reducing the poor quality of the captured image caused by interference fringes.
FIG. 11 is a schematic diagram of an under-screen system provided by an embodiment of the present invention. As shown in FIG. 11, the under-screen system 110 of this embodiment includes an illumination light source 111, an acquisition module 112, a processor 113, a memory 114, and a display screen 115, wherein:
the illumination light source 111 is configured to emit an infrared beam to the target area 116 through the display screen 115;
the acquisition module 112 is configured to receive the light signal reflected by the target area and passing through the display screen 115, obtain an infrared image of the target area 116, and transmit it to the processor 113;
the processor 113 is configured to correct the infrared image using a preset correction parameter set and the interference fringe correction method described in any of the above embodiments;
the memory 114 is configured to store the correction parameter sets and a computer program executable on the processor.
It should be noted that when either the illumination light source 111 or the acquisition module 112 is under the display screen 115, if the infrared image collected by the acquisition module 112 contains interference fringes, it can still be corrected with the above interference fringe correction method; no limitation is placed here.
In one embodiment, if the illumination light source 111 emits a structured light beam to the target area 116 through the display screen 115, the under-screen system 110 further includes a flood module 117, which projects a flood beam onto the target area 116 through the display screen 115. The acquisition module 112, on the one hand, receives the structured light signal reflected by the target area and transmits it to the processor 113 to obtain the depth values of the target area; on the other hand, it receives the flood signal reflected by the target area to form an infrared image, which is further corrected according to the above method.
It should be understood that if the illumination light source 111 emits an infrared beam to the target area 116 through the display screen 115, the under-screen system 110 does not need supplementary flood illumination; the acquisition module can collect the infrared image directly, which is further corrected according to the above method.
In one embodiment, the processor 113 executes the steps in the embodiments of the interference fringe correction method. More specifically, these steps may be executed by one or more units in the processor 113 (please refer to FIG. 12, which is a schematic diagram of a processor functional architecture provided by the present application). The units into which the processor may be divided, and their specific functions, are as follows:
an acquisition unit 121, configured to obtain the infrared image of the target area collected by the acquisition module;
a depth calculation unit 122, configured to calculate the average depth value of the infrared image, the average depth value being the average of the depth values corresponding to the plurality of pixels to be corrected in the infrared image;
a selection unit 123, configured to select, from the correction parameter sets for the different shooting distances stored in the memory, the target correction parameter set corresponding to the average depth value;
a correction unit 124, configured to correct, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the infrared image, to obtain the corrected image.
The present application provides an under-screen system based on interference fringe correction, which selects the target correction parameter set according to the average depth value of the image to be corrected, obtains from that set the target correction parameters corresponding to the pixels to be corrected at different coordinate positions, and corrects the first pixel values of those pixels accordingly. In this way, the first pixel values of the pixels to be corrected at different coordinate positions in the image to be corrected are adjusted based on the target correction parameter set corresponding to the average depth value, reducing the degradation of the collected image caused by interference fringes.
Those skilled in the art will understand that the under-screen system includes but is not limited to the above modules and combinations thereof; FIG. 11 is merely an example of an under-screen system and does not constitute a limitation. It may include more or fewer components than shown, combine certain components, or use different components.
The camera module includes an acquisition module and an illumination light source. The illumination light source includes a light source and optical components (the optical components may include diffractive optical elements, etc.). The light source may be an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL), or a similar light source, or a light source array composed of multiple light sources; the beam emitted by the light source may be visible, infrared, or ultraviolet light. The beam emitted by the light source can form a uniform, random, or specially designed intensity-distribution projection pattern on the reference plane. The acquisition module includes an image sensor, a lens unit, and other modules; the lens unit receives part of the beam reflected back by the object and images it on the image sensor. The image sensor may be composed of a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) sensor, an avalanche diode (AD), a single-photon avalanche diode (SPAD), or the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be an internal storage unit of the under-screen system, such as a hard disk or internal memory of the under-screen system. The memory may also be an external storage device of the under-screen system, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the under-screen system. Further, the memory may include both an internal storage unit of the under-screen system and an external storage device. The memory is used to store the computer program and the other programs and data required by the roaming control device. The memory may also be used to temporarily store data that has been output or will be output.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It should be noted that, because the information exchange and execution processes between the above devices/units are based on the same conception as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment sections and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, i.e., the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in each of the above method embodiments.
The embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in each of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/living-body detection device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative; for example, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network units.
It should be understood that, when used in the specification and the appended claims of the present application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of the present application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims of the present application, the term "if" may be interpreted, depending on the context, as "when" or "once" or "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined" or "in response to determining" or "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and the appended claims of the present application, the terms "first", "second", "third", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
References in this specification to "one embodiment" or "some embodiments" and the like mean that a particular feature, structure, or characteristic described in connection with that embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", etc. appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "comprising", "including", "having", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (10)

  1. An interference fringe correction method, characterized in that the correction method comprises:
    acquiring correction parameter sets for different shooting distances, each correction parameter set including different correction parameters corresponding to different coordinate positions;
    acquiring an image to be corrected and calculating an average depth value of the image to be corrected, the average depth value being the average of the depth values corresponding to a plurality of pixels to be corrected in the image to be corrected;
    selecting, from the correction parameter sets for the different shooting distances, a target correction parameter set corresponding to the average depth value;
    correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain a corrected image.
  2. The correction method of claim 1, characterized in that acquiring the correction parameter sets for different shooting distances comprises:
    collecting a plurality of interference fringe images at different shooting distances, an interference fringe image being an image, collected by the acquisition module, of the interference fringes formed when the light source illuminates the target plane through the display screen, the shooting distance being the distance between the acquisition module and a plane perpendicular to the optical axis of the acquisition module, and the interference fringe image reflecting the second pixel values of the interference fringes at different coordinate positions;
    normalizing, one by one, the second pixel value of each initial pixel in the interference fringe image to obtain the correction parameter set corresponding to each of the different shooting distances.
  3. The correction method of claim 2, characterized in that normalizing, one by one, the second pixel value of each initial pixel in the interference fringe image to obtain the correction parameter set corresponding to each of the different shooting distances comprises:
    obtaining the maximum second pixel value in the interference fringe image;
    dividing the second pixel value of each initial pixel in the interference fringe image by the maximum second pixel value to obtain the correction parameter set.
  4. The correction method of claim 1, characterized in that selecting, from the correction parameter sets for the different shooting distances, the target correction parameter set corresponding to the average depth value comprises:
    among the different shooting distances, selecting the first shooting distance whose difference from the average depth value is smallest;
    using the correction parameter set corresponding to the first shooting distance as the target correction parameter set corresponding to the average depth value.
  5. The correction method of claim 1, characterized in that correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image, comprises:
    substituting the target correction parameter set into a first preset formula to obtain a first correction parameter set;
    the first preset formula being as follows (the formula appears only as an image in the original and is not reproduced here):
    where Ia represents the target correction parameter in the target correction parameter set, Ib represents the first correction parameter in the first correction parameter set, La represents the shooting distance corresponding to the first correction parameter set, and Lb represents the average depth value;
    correcting, according to the different first correction parameters corresponding to the different coordinate positions in the first correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
  6. The correction method of claim 1, characterized in that correcting, according to the different target correction parameters corresponding to the different coordinate positions in the target correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image, comprises:
    calculating, according to a second preset formula, the parallax between the interference fringe image corresponding to the target correction parameter set and the image to be corrected;
    the second preset formula being as follows (the formula appears only as an image in the original and is not reproduced here):
    where d represents the parallax, La represents the shooting distance corresponding to the target correction parameter set, Lb represents the average depth value, b represents the distance between the optical axes of the camera and the illumination light source in the camera module, and f represents the focal length of the camera module;
    adding the parallax to the coordinate position value of each second pixel in the target correction parameter set to obtain a second correction parameter set;
    substituting the second correction parameter set into a third preset formula to obtain a third correction parameter set;
    the third preset formula being as follows (the formula appears only as an image in the original and is not reproduced here):
    where Ic represents the second correction parameter set, Id represents the third correction parameter set, La represents the shooting distance corresponding to the target correction parameter set, and Lb represents the average depth value of the first pixels;
    correcting, according to the different second correction parameters corresponding to the different coordinate positions in the third correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image.
  7. The correction method of claim 6, characterized in that correcting, according to the different second correction parameters corresponding to the different coordinate positions in the third correction parameter set, the first pixel values of the pixels to be corrected located at those coordinate positions in the image to be corrected, to obtain the corrected image, comprises:
    dividing the first pixel value of the pixel to be corrected by the second correction parameter corresponding to the pixel to be corrected to obtain the corrected image.
  8. An under-screen system based on interference fringe correction, characterized in that it comprises a display screen, an illumination light source, an acquisition module, a processor, and a memory, wherein:
    the illumination light source is configured to emit an infrared beam to a target area through the display screen;
    the acquisition module is configured to receive the light signal reflected by the target area and passing through the display screen, obtain an infrared image of the target area, and transmit it to the processor;
    the processor is configured to correct the infrared image using a preset correction parameter set and the correction method of any one of claims 1 to 7;
    the memory is configured to store the correction parameter sets and a computer program executable on the processor.
  9. The under-screen system of claim 8, characterized in that, if the beam emitted by the illumination light source to the target area through the display screen is a structured light beam, the under-screen system further comprises a flood module configured to project a flood beam onto the target area, and the acquisition module collects the flood light signal reflected by the target area and obtains the infrared image of the target area.
  10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the correction method of any one of claims 1 to 7 are implemented.
PCT/CN2021/107941 2021-03-24 2021-07-22 Interference fringe correction method and under-screen system WO2022198861A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/221,662 US20230370730A1 (en) 2021-03-24 2023-07-13 Interference fringe correction method and under-screen system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110316128.9 2021-03-24
CN202110316128.9A CN115131216A (zh) 2021-03-24 2021-03-24 Interference fringe correction method and under-screen system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/221,662 Continuation US20230370730A1 (en) 2021-03-24 2023-07-13 Interference fringe correction method and under-screen system

Publications (1)

Publication Number Publication Date
WO2022198861A1 (zh)

Family

ID=83373794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107941 WO2022198861A1 (zh) Interference fringe correction method and under-screen system 2021-03-24 2021-07-22

Country Status (3)

Country Link
US (1) US20230370730A1 (zh)
CN (1) CN115131216A (zh)
WO (1) WO2022198861A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120288175A1 (en) * 2011-05-10 2012-11-15 Canon Kabushiki Kaisha Image processing apparatus and method
CN103778887A (zh) 2013-03-21 2014-05-07 西安电子科技大学 Brightness correction method and device for an LED display device
CN110024020A (zh) 2016-11-23 2019-07-16 三星电子株式会社 Display device, calibration device, and calibration method thereof
CN111489694A (zh) 2020-04-17 2020-08-04 苏州佳智彩光电科技有限公司 Method and system for external optical compensation of an AMOLED under-screen camera screen
CN112202986A (zh) 2020-09-30 2021-01-08 安谋科技(中国)有限公司 Image processing method, image processing apparatus, readable medium, and electronic device
CN112511761A (zh) 2020-11-23 2021-03-16 Oppo广东移动通信有限公司 Curved-screen display compensation method, apparatus, device, storage medium, and photographing device


Also Published As

Publication number Publication date
CN115131216A (zh) 2022-09-30
US20230370730A1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
US11575843B2 (en) Image sensor modules including primary high-resolution imagers and secondary imagers
WO2021008209A1 (zh) Depth measurement apparatus and distance measurement method
US8830227B2 (en) Depth-based gain control
US8938099B2 (en) Image processing apparatus, method of controlling the same, distance measurement apparatus, and storage medium
WO2021120402A1 (zh) Fused depth measurement apparatus and measurement method
TW201709724A (zh) Method and apparatus for determining a depth map of an image
CN111751808A (zh) Method and apparatus for compensating for light reflection from a cover of a time-of-flight camera
WO2022198862A1 (zh) Image correction method and under-screen system
WO2020038255A1 (en) Image processing method, electronic apparatus, and computer-readable storage medium
CN111538024B (zh) Filtered ToF depth measurement method and apparatus
CN110378944B (zh) Depth map processing method and apparatus, and electronic device
JP2013198041A (ja) Image processing apparatus
JP6540885B2 (ja) Color calibration apparatus, color calibration system, color calibration hologram, color calibration method, and program
WO2023273094A1 (zh) Spectral reflectance determination method, apparatus, and device
US20230102878A1 (en) Projector and projection method
US20240020883A1 (en) Method, apparatus, and device for determining spectral reflection
CN102609152B (zh) Image acquisition method and apparatus for a wide-viewing-angle image-sensing electronic whiteboard
CN110456380B (zh) Time-of-flight sensing camera and depth detection method thereof
WO2022198861A1 (zh) Interference fringe correction method and under-screen system
CN212992427U (zh) Image acquisition module
CN111325691B (zh) Image correction method and apparatus, electronic device, and computer-readable storage medium
CN107392955B (zh) Brightness-based depth-of-field estimation apparatus and method
KR20050026949A (ko) Active three-dimensional range imaging apparatus using an infrared flash
CN113238250B (zh) Method and apparatus for eliminating under-screen stray light, under-screen system, and storage medium
WO2020195755A1 (ja) Ranging imaging system, ranging imaging method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932473

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE