WO2022198862A1 - Image correction method and under-screen system - Google Patents

Image correction method and under-screen system Download PDF

Info

Publication number
WO2022198862A1
Authority
WO
WIPO (PCT)
Prior art keywords
corrected
pixel
image
interference fringe
target
Prior art date
Application number
PCT/CN2021/107950
Other languages
English (en)
French (fr)
Inventor
兰富洋
杨鹏
王兆民
黄源浩
肖振中
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2022198862A1
Priority to US18/209,696 (published as US20230325979A1)

Links

Images

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/80
    • G06T 7/00 — Image analysis › G06T 7/50 — Depth or shape recovery
    • G06T 7/514 — Depth or shape recovery from specularities
    • G06T 7/521 — Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • H ELECTRICITY › H04 ELECTRIC COMMUNICATION TECHNIQUE › H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof › H04N 23/56 — provided with illuminating means
    • H04N 23/80 — Camera processing pipelines; Components thereof › H04N 23/81 — for suppressing or minimising disturbance in the image signal generation
    • H04N 5/30 — Transforming light or analogous information into electric information › H04N 5/33 — Transforming infrared radiation
    • H04N 9/64 — Circuits for processing colour signals
    • H04N 9/646 — Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement › G06T 2207/10 — Image acquisition modality › G06T 2207/10048 — Infrared image

Definitions

  • The present application belongs to the technical field of image processing, and in particular relates to an image correction method and an under-screen system.
  • With mobile phone manufacturers' continuous optimization of the full screen, the under-screen camera has become standard on most mobile phones. The under-screen camera usually uses an illumination light source to supplement light on the target object through the display screen, so as to capture high-quality images.
  • However, because of the physical characteristics of the display screen, the light beam emitted by the illumination light source is split into two beams as it passes through the display: one beam exits directly through the display screen, while the other is reflected within the display screen and exits after multiple reflections.
  • The two beams meet and superpose on the target area, producing interference fringes and degrading the quality of the images captured by the under-screen camera.
  • The embodiments of the present application provide an image correction method, an under-screen system, and a computer-readable storage medium, which can solve the problem that two beams meet and superpose on a target area, producing interference fringes that degrade the quality of the images captured by the under-screen camera.
  • A first aspect of the embodiments of the present application provides an image correction method. The correction method includes:
  • acquiring interference fringe images at different shooting distances, where an interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the illumination light source irradiates the target plane through the display screen, and is used to reflect the first pixel values of the interference fringes at different coordinate positions;
  • acquiring the image to be corrected, and calculating the depth value of each pixel to be corrected in it;
  • from the interference fringe images at the different shooting distances, selecting the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel;
  • extracting the first pixel value at the target coordinate position in the target interference fringe image, where the target coordinate position is the coordinate position, in the image to be corrected, of the pixel to be corrected corresponding to the target interference fringe image;
  • correcting the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image.
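  • Read together, the five steps amount to a per-pixel look-up-and-divide pipeline. The following Python sketch illustrates the control flow only, under stated assumptions (pre-captured fringe images held in a dictionary, and the equal-distance correction case of step B3 described later); it is not a definitive implementation of the patented method:

```python
import numpy as np

def correct_image(image, depth_map, fringe_images):
    """Illustrative end-to-end flow of the five-step method.

    image:         2-D array of second pixel values (image to be corrected)
    depth_map:     2-D array of per-pixel depth values, same shape
    fringe_images: dict mapping shooting distance -> 2-D interference
                   fringe image of first pixel values, captured in advance
    """
    # Normalise each fringe image by its maximum (steps A1-A2, later),
    # so the correction parameter lies in [0, 1].
    params = {d: img / img.max() for d, img in fringe_images.items()}
    distances = np.array(sorted(fringe_images))
    corrected = np.empty_like(image, dtype=np.float64)
    for (row, col), depth in np.ndenumerate(depth_map):
        # Step 103: pick the fringe image whose shooting distance best
        # matches this pixel's depth value.
        d = distances[np.argmin(np.abs(distances - depth))]
        # Step 104: first pixel value at the same coordinate position;
        # step 105 (equal-distance case): divide the second pixel value
        # by the normalised correction parameter.
        corrected[row, col] = image[row, col] / max(params[d][row, col], 1e-6)
    return corrected
```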
  • A second aspect of the embodiments of the present application provides an under-screen system, which includes a display screen, an illumination light source, an acquisition module, a processor and a memory, wherein:
  • the illumination light source is used to emit an infrared beam to the target area through the display screen;
  • the target area includes a target plane;
  • the acquisition module is used to receive the light signal reflected by the target area and passing through the display screen, acquire the infrared image of the target area, and transmit it to the processor; the infrared image includes the image to be corrected and the interference fringe images;
  • the processor is configured to correct the infrared image by using the preset interference fringe images and the correction method according to any one of claims 1-7;
  • the memory is used for storing the interference fringe images at different shooting distances and a computer program that can be run on the processor.
  • A third aspect of the embodiments of the present application provides a computer-readable storage medium which stores a computer program; when the computer program is executed by a processor, the steps of the correction method of the first aspect are implemented.
  • Compared with the prior art, the embodiments of the present application have the following beneficial effect: interference fringes usually alternate between bright and dark, and fringes at different depths lie at different positions. The present application therefore matches, for each pixel to be corrected in the image to be corrected, the target interference fringe image corresponding to that pixel, obtains the first pixel value from the target interference fringe image according to the pixel's target coordinate position, and corrects the pixel according to the first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
  • FIG. 1 is a schematic flowchart of the interference fringe correction method provided by the present application;
  • FIG. 2 is a schematic diagram of the interference fringe translation variation curve provided by the present application;
  • FIG. 3 is a schematic diagram of the shooting distance division strategy provided by the present application;
  • FIG. 4 is a specific schematic flowchart of step 103 in the interference fringe correction method provided by the present application;
  • FIG. 5 is a schematic diagram of the target coordinate positions provided by the present application;
  • FIG. 6 is a specific schematic flowchart of step 105 in the interference fringe correction method provided by the present application;
  • FIG. 7 is a specific schematic flowchart of step 1051 in the interference fringe correction method provided by the present application;
  • FIG. 8 is a specific schematic flowchart of step 1052 in the interference fringe correction method provided by the present application;
  • FIG. 9 is a schematic diagram of an under-screen system provided by an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of a functional architecture of a processor provided by the present application.
  • Interference refers to the phenomenon in which the superposition of two (or more) waves of the same amplitude and frequency and a fixed phase relationship redistributes the vibration intensity. In the superposition region of the waves, the amplitude increases in some places and decreases in others, and the vibration intensity takes on a fixed spatial distribution of strong and weak regions, forming interference fringes.
  • In the prior art, during imaging by an under-screen camera, the illumination light source passing through the display screen produces a first beam and a second beam with a fixed phase difference between them; the two beams interfere stably on the target area, producing interference fringes, and the fringes scale as the shooting distance changes. At different shooting distances, the interference fringes in the images collected by the acquisition module (i.e., the under-screen camera) translate in the parallax direction.
  • In view of this, the embodiments of the present application provide an interference fringe correction method, an under-screen system, and a computer-readable storage medium, which can solve the above technical problems.
  • FIG. 1 shows a schematic flowchart of the interference fringe correction method provided by the present application.
  • The correction method is applied to a terminal device with an under-screen camera.
  • The correction method includes the following steps:
  • Step 101: Acquire interference fringe images at different shooting distances.
  • An interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the illumination light source irradiates the target plane through the display screen.
  • The interference fringe images are used to reflect the first pixel values of the interference fringes at different coordinate positions.
  • It is well known that interference fringes at different shooting distances differ only in their distribution position in the image, their size unchanged, while interference fringes at the same shooting distance share the same distribution position. If the distribution position of the fringes in the image (that is, the first pixel values at different coordinate positions) can be determined, the fringes can be eliminated. Based on this rule, the present application acquires interference fringe images at different shooting distances so as to adaptively correct pixels at different depth values in the image to be corrected.
  • In one embodiment, to better reflect the first pixel values of the interference fringes at different coordinate positions, a plane with a white background and a uniform surface texture is preferably used as the target plane.
  • The target plane is used to reflect the interference fringes generated by the illumination light source passing through the display screen (illumination light sources include, but are not limited to, infrared or laser sources), and the target plane is larger than the viewing range of the acquisition module (the module used to collect light information, such as a camera) at each of the different shooting distances.
  • The target plane is placed in front of the acquisition module, perpendicular to its optical axis.
  • The acquisition module collects the interference fringes appearing on the target plane to obtain an interference fringe image.
  • In one embodiment, the division strategy for the different shooting distances may be uniform or non-uniform, and the interval between shooting distances may be preset according to the desired correction fineness.
  • Please refer to FIG. 2. As FIG. 2 shows, the fringe translation variation decreases with shooting distance along a falling curve: when the target plane moves back and forth along the optical axis in an area close to the acquisition module, the translation variation of the interference fringes on the target plane is large; when it moves back and forth in an area far from the acquisition module, the translation variation is small.
  • Based on this rule, the division strategy of the present application is "dense near, sparse far". As shown in FIG. 3, the sampling positions of the target plane become progressively sparser as the target plane moves away from the acquisition module, so that the distribution density of the shooting distances is divided sensibly.
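  • As a concrete illustration, geometrically spaced shooting distances have exactly this "dense near, sparse far" property; the endpoints and sample count below are illustrative assumptions, not values taken from the application:

```python
import numpy as np

# One possible "dense near, sparse far" division: geometrically spaced
# shooting distances between 0.2 m and 3.0 m (assumed endpoints).
near, far, n = 0.2, 3.0, 12
distances = near * (far / near) ** (np.arange(n) / (n - 1))
print(np.round(distances, 3))
# The gap between consecutive samples grows with distance, so sampling
# is dense close to the acquisition module and sparse far from it.
```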
  • As an optional embodiment of the present application, one or more raw images may be collected at the same shooting distance as the interference fringe image.
  • When multiple raw images are collected, multi-frame averaging may be performed to obtain the interference fringe image.
  • As an optional embodiment of the present application, after the interference fringe image is acquired, it may be preprocessed to improve its image quality and thereby obtain a finer correction effect.
  • The preprocessing includes image processing means such as noise reduction or gray-value adjustment.
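  • A minimal sketch of this acquisition step, assuming several raw frames captured at one shooting distance; the hypothetical helper name and the simple 3×3 box filter standing in for the unspecified noise-reduction preprocessing are both assumptions:

```python
import numpy as np

def capture_fringe_image(frames):
    """Average several raw frames taken at one shooting distance to get
    a low-noise interference fringe image. `frames` is a list of 2-D
    arrays of identical shape."""
    avg = np.mean(np.stack(frames), axis=0)
    # 3x3 box filter as a minimal placeholder for 'noise reduction'.
    pad = np.pad(avg, 1, mode="edge")
    h, w = avg.shape
    return sum(pad[i:i + h, j:j + w]
               for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0
```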
  • Step 102: Acquire the image to be corrected, and calculate the depth value of each pixel to be corrected in the image to be corrected.
  • The image to be corrected of the target area is acquired, and the depth values of the pixels to be corrected in it are calculated, so that the interference fringe image corresponding to each pixel to be corrected can be obtained according to its depth value.
  • (It can be understood that, as noted above, the positions of the interference fringes in the image differ at different shooting distances, and different pixels to be corrected have different depth values, so a corresponding interference fringe image must be matched for each pixel to be corrected individually.)
  • The ways of obtaining the depth values include, but are not limited to, the following three:
  • Method 1: The illumination light source projects a structured light beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor.
  • The processor processes the electrical signal, computes the intensity information of the reflected beam to form a structured light pattern, and finally performs matching or triangulation on the structured light pattern to obtain the depth values of the pixels to be corrected.
  • Method 2: The illumination light source projects an infrared beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor.
  • The processor processes the electrical signal to compute a phase difference, from which it indirectly computes the time of flight of the beam from emission by the illumination light source to reception by the camera, and then computes the depth values of the pixels to be corrected from that time of flight.
  • It should be understood that the infrared beam may be of the pulsed or continuous-wave type, which is not limited here.
  • Method 3: The illumination light source projects an infrared pulse beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor.
  • The processor counts the electrical signals to obtain a waveform histogram, directly computes from the histogram the time of flight from emission by the illumination light source to reception by the camera, and computes the depth values of the pixels to be corrected from that time of flight.
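  • Method 2 rests on the standard continuous-wave time-of-flight relation: the beam travels out and back, so depth = c·t/2 with t = Δφ/(2πf). A minimal sketch (the modulation frequency and phase value are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase, f_mod):
    """Indirect ToF: per-pixel phase difference `phase` (radians) at
    modulation frequency `f_mod` (Hz) gives the round-trip time of
    flight; halving the resulting path length yields depth."""
    time_of_flight = phase / (2.0 * np.pi * f_mod)
    return C * time_of_flight / 2.0

# Example: a pi/2 phase shift at 100 MHz modulation ~ 0.375 m.
print(depth_from_phase(np.pi / 2, 100e6))
```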
  • As an optional embodiment of the present application, after the image to be corrected is acquired, it may be preprocessed to improve its image quality and thereby obtain a finer correction effect.
  • The preprocessing includes image processing means such as noise reduction or gray-value adjustment.
  • Step 103: From the interference fringe images at the different shooting distances, select the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel.
  • Because of the distribution interval of the shooting distances, a shooting distance and a depth value may or may not be equal. It can be understood that if the distribution intervals of the shooting distances are sufficiently small, every depth value can correspond to an equal shooting distance.
  • In one embodiment, when a shooting distance equal to the depth value exists, the interference fringe image corresponding to that shooting distance is selected as the target interference fringe image; when none exists, the following optional embodiment is executed.
  • As an optional embodiment of the present application, step 103 includes the following steps 1031 to 1032.
  • Please refer to FIG. 4, which shows a specific schematic flowchart of step 103 in the interference fringe correction method provided by the present application.
  • Step 1031: Among the different shooting distances, select the target shooting distance with the smallest difference from the depth value.
  • As an optional embodiment of the application, there may be two shooting distances with the same smallest difference, i.e. the depth value lies exactly midway between two adjacent shooting distances. As discussed above and shown in FIG. 2, the longer the shooting distance, the smaller the fringe translation variation; therefore, when two candidate shooting distances are equally close, the larger of the two is selected as the target shooting distance.
  • Step 1032: Use the interference fringe image corresponding to the target shooting distance as the target interference fringe image.
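  • Steps 1031-1032 reduce to a nearest-neighbour search with a "prefer the larger distance" tie-break; a minimal sketch:

```python
def select_target_distance(depth, distances):
    """Step 1031: pick the shooting distance closest to the pixel's
    depth value; on an exact tie (depth midway between two adjacent
    distances), prefer the larger one, whose fringes shift less
    (per FIG. 2)."""
    return min(distances, key=lambda d: (abs(d - depth), -d))

# Example: 1.5 is midway between 1.0 and 2.0, so 2.0 is chosen.
assert select_target_distance(1.5, [1.0, 2.0, 3.0]) == 2.0
```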
  • Step 104: Extract the first pixel value at the target coordinate position in the target interference fringe image; the target coordinate position is the coordinate position, in the image to be corrected, of the pixel to be corrected that corresponds to the target interference fringe image.
  • For each pixel to be corrected, the first pixel value at its target coordinate position is extracted from its corresponding target interference fringe image. For example, if the target coordinate position of a pixel to be corrected is (156, 256), the first pixel value at coordinate position (156, 256) in the target interference fringe image is extracted.
  • To better explain step 104, please refer to FIG. 5, taking the pixels to be corrected a, b and c in the image to be corrected as an example.
  • The target interference fringe image corresponding to pixel a is the first target interference fringe image, where pixel d1 is located.
  • The target interference fringe image corresponding to pixel b is the second target interference fringe image, where pixel d2 is located.
  • The target interference fringe image corresponding to pixel c is the third target interference fringe image, where pixel d3 is located.
  • For pixel a, pixel d1 in the first target interference fringe image is extracted (pixel a and pixel d1 have the same coordinate position).
  • For pixel b, pixel d2 in the second target interference fringe image is extracted (pixel b and pixel d2 have the same coordinate position).
  • For pixel c, pixel d3 in the third target interference fringe image is extracted (pixel c and pixel d3 have the same coordinate position).
  • It should be understood that, after every pixel in the image to be corrected has been traversed, the corresponding pixels in the target interference fringe images may be integrated into a new frame of target interference fringe image and the new image used for the next operation; alternatively, the pixels in the target interference fringe images may be used directly for the next operation, which is not limited here.
  • It should be noted that FIG. 5 serves only as an example and places no limit on the number of target interference fringe images or on the number and positions of the pixels to be corrected in the image to be corrected.
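  • For a whole image, step 104 can be vectorised with NumPy; a sketch assuming the fringe images are stacked into one 3-D array aligned with an ascending vector of shooting distances (ties resolve here to the smaller distance, unlike step 1031's rule, for brevity):

```python
import numpy as np

def gather_first_pixel_values(depth_map, distances, fringe_stack):
    """Step 104 for every pixel to be corrected: read the first pixel
    value at the SAME coordinate position from that pixel's target
    interference fringe image.

    depth_map:    (H, W) per-pixel depth values
    distances:    (N,)  ascending shooting distances
    fringe_stack: (N, H, W) fringe images aligned with `distances`
    """
    diffs = np.abs(distances[:, None, None] - depth_map[None, :, :])
    idx = np.argmin(diffs, axis=0)          # target image index per pixel
    rows, cols = np.indices(depth_map.shape)
    return fringe_stack[idx, rows, cols]    # (H, W) first pixel values
```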
  • Step 105: Correct the second pixel value of each pixel to be corrected according to its corresponding first pixel value, to obtain a corrected image.
  • The interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions.
  • Pixels located in dark-fringe areas need their pixel values increased, while pixels located in bright-fringe areas need their pixel values reduced. The second pixel value of each pixel to be corrected can therefore be corrected according to the first pixel values at the different coordinate positions to obtain the corrected image.
  • The correction may be carried out in the following two ways:
  • The first correction method: compute the proportional relationship between all the first pixel values, and adjust the second pixel value of each pixel to be corrected according to that proportion to obtain the corrected image. For example, if three first pixel values are 100, 150 and 200, their proportional relationship is 2:3:4; multiplying the second pixel values of the corresponding pixels to be corrected by 1/2, 1/3 and 1/4 respectively yields the corrected image (assuming, for illustration only, that there are just three first pixel values). However, since pixels in dark-fringe areas need their values raised while pixels in bright-fringe areas need them lowered, this first method gives a poor correction effect: it can only suppress the high pixel values of the bright-fringe areas. The present application therefore provides a second, better correction method.
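  • The worked example above, in code form (the second pixel values 60, 90 and 120 are hypothetical; note how every pixel is scaled down, which is why this method can only suppress bright-fringe areas):

```python
import numpy as np

first = np.array([100, 150, 200])        # first pixel values from the text
second = np.array([60, 90, 120])         # hypothetical second pixel values
weights = first // np.gcd.reduce(first)  # proportional relationship 2:3:4
corrected = second / weights             # multiply by 1/2, 1/3, 1/4
print(corrected)                         # [30. 30. 30.]
```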
  • The second correction method is shown in the following optional embodiment:
  • As an optional embodiment of the present application, step 105 includes the following steps 1051 to 1052. Please refer to FIG. 6.
  • FIG. 6 shows a specific schematic flowchart of step 105 in the interference fringe correction method provided by the present application.
  • Step 1051: Normalize all the first pixel values to obtain the first correction parameter corresponding to each pixel to be corrected.
  • Since the interference fringes alternate between bright and dark, pixel values are high in bright-fringe areas and low in dark-fringe areas. The first pixel values in the interference fringe image are therefore normalized to obtain the first correction parameter corresponding to each pixel to be corrected, as follows:
  • As an optional embodiment of the present application, step 1051 includes the following steps A1 to A2. Please refer to FIG. 7.
  • FIG. 7 shows a specific schematic flowchart of step 1051 in the interference fringe correction method provided by the present application.
  • Step A1: Obtain the maximum first pixel value among all the first pixel values.
  • Step A2: Divide each first pixel value by the maximum first pixel value to obtain the first correction parameter corresponding to each pixel to be corrected.
  • For each first pixel value, the first correction parameter is obtained with the formula I_a = I_b / M, where M represents the maximum first pixel value, I_a represents the first correction parameter, and I_b represents the first pixel value.
  • The first correction parameter obtained by this formula lies in the range [0, 1].
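  • Steps A1-A2 in code (a one-line NumPy operation):

```python
import numpy as np

def first_correction_params(first_values):
    """Steps A1-A2: divide every first pixel value by the maximum first
    pixel value, giving first correction parameters in [0, 1]."""
    first_values = np.asarray(first_values, dtype=np.float64)
    return first_values / first_values.max()

print(first_correction_params([100, 150, 200]))  # [0.5  0.75 1.  ]
```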
  • Step 1052: Correct the second pixel value of each pixel to be corrected according to its corresponding first correction parameter, to obtain a corrected image.
  • From step 103 it is known that the depth value and the shooting distance may or may not be equal. If they are not equal, steps B1 to B2 of the following optional embodiment are performed; if they are equal, step B3 of the following optional embodiment is performed. The specific steps are shown in the following optional embodiments:
  • As an optional embodiment of the present application, step 1052 includes the following steps B1 to B2. Please refer to FIG. 8.
  • FIG. 8 shows a specific schematic flowchart of step 1052 in the interference fringe correction method provided by the present application. The following steps are performed for each pixel to be corrected to obtain the corrected image:
  • Step B1: If the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is not equal to the depth value of that pixel, substitute the first correction parameter into the first preset formula to obtain the second correction parameter.
  • Because there is a difference between the depth value and the shooting distance, the first correction parameter must be adjusted by the first preset formula.
  • The first preset formula (reproduced in the publication as an image) computes the second correction parameter from the first correction parameter, where I_a represents the first correction parameter, I_b represents the second correction parameter, L_a represents the shooting distance of the target interference fringe image corresponding to the pixel to be corrected, and L_b represents the depth value of the pixel to be corrected.
  • Step B2: Divide the second pixel value of the pixel to be corrected by the second correction parameter to obtain the corrected image.
  • In step B2, the corrected image may be obtained by dividing the second pixel value of each pixel to be corrected by its corresponding second correction parameter; alternatively, the result of that division may additionally be multiplied by a preset adjustment coefficient (the adjustment coefficient adjusts the correction strength and can be preset for the actual application scenario) to obtain the corrected image.
  • It can be understood that when the correction parameter approaches 1 the correction amplitude is low, and when it approaches 0 the correction amplitude is high. The second correction method can therefore both suppress the pixel values of bright-fringe areas and raise the pixel values of dark-fringe areas, achieving an excellent correction effect.
  • Step B3: If the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is equal to the depth value of that pixel, divide the second pixel value of that pixel by the first correction parameter to obtain the corrected image.
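  • A per-pixel sketch of steps B1-B3. The body of the first preset formula appears only as an image in the publication, so it is deliberately left as a stub here rather than guessed; the optional adjustment coefficient of step B2 is assumed to default to 1.0:

```python
def apply_correction(second, param, shoot_dist, depth, k=1.0):
    """Steps B1-B3 for one pixel to be corrected.

    second:     second pixel value of the pixel to be corrected
    param:      first correction parameter (from normalisation)
    shoot_dist: shooting distance of the pixel's target fringe image
    depth:      depth value of the pixel
    k:          optional preset adjustment coefficient of step B2
                (assumed 1.0 here)
    """
    if shoot_dist != depth:                 # step B1: distances differ
        param = first_preset_formula(param, shoot_dist, depth)
    return second / param * k               # steps B2 / B3

def first_preset_formula(i_a, l_a, l_b):
    # The publication gives this formula only as an embedded image; it
    # maps the first correction parameter I_a to the second correction
    # parameter I_b using the shooting distance L_a and depth value L_b.
    raise NotImplementedError("formula not reproduced in the text")
```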
  • In this embodiment, a target interference fringe image is matched for each pixel to be corrected in the image to be corrected, and the first pixel value is obtained from the target interference fringe image according to the target coordinate position of each pixel to be corrected. Each pixel to be corrected is then corrected according to its first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
  • FIG. 9 is a schematic diagram of an under-screen system provided by an embodiment of the present invention.
  • The under-screen system 90 of this embodiment includes an illumination light source 91, an acquisition module 92, a processor 93, a memory 94 and a display screen 95, wherein:
  • the illumination light source 91 emits an infrared beam to the target plane 96 through the display screen 95;
  • the acquisition module 92 receives the light signal reflected by the target plane and passing through the display screen 95, acquires the infrared image of the target area 96, and transmits it to the processor 93;
  • the processor 93 is configured to correct the infrared image by using the preset interference fringe images and the interference fringe correction method described in any of the above embodiments;
  • the memory 94 is used for storing the interference fringe images at different shooting distances and a computer program that can be run on the processor.
  • It should be noted that when either the illumination light source 91 or the acquisition module 92 is under the display screen 95, if the infrared image collected by the acquisition module 92 contains interference fringes, the above interference fringe correction method can still be used to correct it; no restriction is imposed here.
  • In one embodiment, if the illumination light source 91 emits a structured light beam to the target area 96 through the display screen 95, the under-screen system 90 further includes a floodlight module 97, which projects a flood beam to the target area 96 through the display screen 95.
  • The acquisition module 92 then, on the one hand, receives the structured light signal reflected by the target area and transmits it to the processor 93 to obtain the depth values of the target area and, on the other hand, receives the flood light signal reflected by the target area to form an infrared image, which is further corrected according to the above method.
  • It should be understood that if the illumination light source 91 emits an infrared beam to the target area 96 through the display screen 95, the under-screen system 90 does not need supplementary flood lighting: the acquisition module can directly collect the infrared image, which is further corrected according to the above method.
  • In one embodiment, the processor 93 executes the steps in the interference fringe correction method embodiment. More specifically, the steps may be executed by one or more units into which the processor 93 can be divided (please refer to FIG. 10, which shows a schematic diagram of a processor functional architecture provided by the present application).
  • The specific functions are as follows:
  • an acquisition unit 1001, configured to acquire the infrared image of the target area collected by the acquisition module;
  • a depth calculation unit 1002, configured to calculate the depth value of each pixel to be corrected in the infrared image;
  • a first selection unit 1003, configured to select, from the interference fringe images at different shooting distances stored in the memory, the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel;
  • a second selection unit 1004, configured to extract the first pixel value at the target coordinate position in the target interference fringe image, the target coordinate position being the coordinate position, in the infrared image, of the pixel to be corrected corresponding to the target interference fringe image;
  • a correction unit 1005, configured to correct the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image.
  • The present application provides an under-screen system that matches a target interference fringe image for each pixel to be corrected in an infrared image, obtains the first pixel value from the target interference fringe image according to the target coordinate position of each pixel to be corrected, and corrects each pixel according to its first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
  • The under-screen system includes, but is not limited to, the above modules and combinations thereof; it may include more or fewer components than shown, combine certain components, or use different components.
  • For example, the under-screen system may also include input/output devices, network access devices, buses, and the like.
  • The camera module includes an acquisition module and an illumination light source.
  • The illumination light source includes a light source and optical components (the optical components may include diffractive optical elements, etc.).
  • The light source can be an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL), or a light source array composed of multiple light sources.
  • The light beam emitted by the light source can be visible light, infrared light, or ultraviolet light.
  • The light beam emitted by the light source can form a uniform, random or specially designed intensity-distribution projection pattern on the reference plane. The camera includes modules such as an image sensor and a lens unit; the lens unit receives the part of the beam reflected back by the object and images it onto the image sensor.
  • The image sensor can be a Charge-Coupled Device (CCD), a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an Avalanche Diode (AD), a Single-Photon Avalanche Diode (SPAD), or another image sensor.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The memory may be an internal storage unit of the under-screen system, such as a hard disk or memory of the under-screen system.
  • The memory may also be an external storage device of the under-screen system, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the under-screen system.
  • Further, the memory may include both an internal storage unit and an external storage device of the under-screen system.
  • The memory is used for storing the computer program and the other programs and data required by the device.
  • The memory may also be used to temporarily store data that has been output or is to be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • The embodiments of the present application also provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the foregoing method embodiments.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • The present application implements all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program, which can be stored in a computer-readable storage medium.
  • The computer program includes computer program code.
  • The computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like.
  • The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/living-body detection device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium.
  • In some jurisdictions, under legislation and patent practice, computer-readable media may not be electrical carrier signals and telecommunication signals.
  • The disclosed apparatus/network device and method may be implemented in other ways.
  • The apparatus/network device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation there may be other division methods, e.g. multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
  • The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units.
  • The term "if" may, depending on the context, be interpreted as "when", "once", "in response to determining" or "in response to detecting".
  • Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may, depending on the context, be interpreted as "once it is determined", "in response to the determination", "once the [described condition or event] is detected" or "in response to the detection of the [described condition or event]".
  • References in this specification to "one embodiment", "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • The appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", "in still other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise.
  • The terms "comprising", "including", "having" and their variants mean "including but not limited to", unless specifically emphasized otherwise.

Abstract

The present application is applicable to the technical field of image processing, and provides an interference fringe correction method and an under-screen system. The correction method includes: acquiring interference fringe images at different shooting distances; acquiring an image to be corrected, and calculating the depth value of each pixel to be corrected in the image to be corrected; from the interference fringe images at the different shooting distances, selecting the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel; extracting the first pixel value at the target coordinate position in the target interference fringe image; and correcting the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image. By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.

Description

Image correction method and under-screen system
Technical Field
The present application belongs to the technical field of image processing, and in particular relates to an image correction method and an under-screen system.
Background
With mobile phone manufacturers' continuous optimization of the full screen, the under-screen camera has become standard on most mobile phones. The under-screen camera usually uses an illumination light source to supplement light on the target object through the display screen, so as to capture high-quality images.
However, because of the physical characteristics of the display screen, the light beam emitted by the illumination light source is split into two beams as it passes through the display screen: one beam exits directly through the display screen, while the other is reflected within the display screen and exits after multiple reflections. The two beams meet and superpose on the target area, producing interference fringes and degrading the quality of the images captured by the under-screen camera.
Summary
In view of this, the embodiments of the present application provide an image correction method, an under-screen system and a computer-readable storage medium, which can solve the problem that two beams meet and superpose on the target area, producing interference fringes that degrade the quality of the images captured by the under-screen camera.
A first aspect of the embodiments of the present application provides an image correction method. The correction method includes:
acquiring interference fringe images at different shooting distances, where an interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the illumination light source irradiates the target plane through the display screen, and is used to reflect the first pixel values of the interference fringes at different coordinate positions;
acquiring an image to be corrected, and calculating the depth value of each pixel to be corrected in the image to be corrected;
from the interference fringe images at the different shooting distances, selecting the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel;
extracting the first pixel value at the target coordinate position in the target interference fringe image, where the target coordinate position is the coordinate position, in the image to be corrected, of the pixel to be corrected corresponding to the target interference fringe image;
correcting the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image.
A second aspect of the embodiments of the present application provides an under-screen system, characterized in that it includes a display screen, an illumination light source, an acquisition module, a processor and a memory, wherein:
the illumination light source is used to emit an infrared beam to the target area through the display screen; the target area includes a target plane;
the acquisition module is used to receive the light signal reflected by the target area and passing through the display screen, acquire an infrared image of the target area, and transmit it to the processor; the infrared image includes the image to be corrected and the interference fringe images;
the processor is used to correct the infrared image by using the preset interference fringe images and the correction method according to any one of claims 1-7;
the memory is used to store the interference fringe images at different shooting distances and a computer program that can be run on the processor.
A third aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the correction method of the first aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effect: interference fringes usually alternate between bright and dark, and fringes at different depths lie at different positions. The present application therefore matches, for each pixel to be corrected in the image to be corrected, the target interference fringe image corresponding to that pixel, obtains the first pixel value from the target interference fringe image according to the pixel's target coordinate position, and corrects the pixel according to the first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the related art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an interference fringe correction method provided by the present application;
FIG. 2 is a schematic diagram of the interference fringe translation variation curve provided by the present application;
FIG. 3 is a schematic diagram of the shooting distance division strategy provided by the present application;
FIG. 4 is a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application;
FIG. 5 is a schematic diagram of target coordinate positions provided by the present application;
FIG. 6 is a specific schematic flowchart of step 105 in an interference fringe correction method provided by the present application;
FIG. 7 is a specific schematic flowchart of step 1051 in an interference fringe correction method provided by the present application;
FIG. 8 is a specific schematic flowchart of step 1052 in an interference fringe correction method provided by the present application;
FIG. 9 is a schematic diagram of an under-screen system provided by an embodiment of the present invention;
FIG. 10 is a schematic diagram of a processor functional architecture provided by the present application.
Detailed Description of the Embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
For a better understanding of the technical problem solved by the present application, the above background is further explained here:
Interference refers to the phenomenon in which the superposition of two (or more) waves of the same amplitude and frequency and a fixed phase relationship redistributes the vibration intensity. In the superposition region of the waves, the amplitude increases in some places and decreases in others, and the vibration intensity takes on a fixed spatial distribution of strong and weak regions, forming interference fringes.
In the prior art, during imaging by an under-screen camera, the illumination light source passing through the display screen produces a first beam and a second beam with a fixed phase difference between them; the two beams then interfere stably on the target area, producing interference fringes, and the fringes scale as the shooting distance changes. At different shooting distances, the interference fringes in the images collected by the acquisition module (i.e., the under-screen camera) translate in the parallax direction.
In view of this, the embodiments of the present application provide an interference fringe correction method, an under-screen system and a computer-readable storage medium, which can solve the above technical problem.
FIG. 1 shows a schematic flowchart of an interference fringe correction method provided by the present application. The correction method is applied to a terminal device with an under-screen camera and includes the following steps:
Step 101: Acquire interference fringe images at different shooting distances. An interference fringe image is an image, collected by the acquisition module, of the interference fringes formed when the illumination light source irradiates the target plane through the display screen; the interference fringe images are used to reflect the first pixel values of the interference fringes at different coordinate positions.
It is well known that interference fringes at different shooting distances differ only in their distribution position in the image, their size unchanged, while interference fringes at the same shooting distance share the same distribution position. If the distribution position of the fringes in the image (that is, the first pixel values at different coordinate positions) can be determined, the fringes can be eliminated. Based on this rule, the present application acquires interference fringe images at different shooting distances so as to adaptively correct pixels at different depth values in the image to be corrected.
In one embodiment, to better reflect the first pixel values of the interference fringes at different coordinate positions, a plane with a white background and a uniform surface texture is preferably used as the target plane. The target plane is used to reflect the interference fringes generated by the illumination light source passing through the display screen (illumination light sources include, but are not limited to, infrared or laser sources), and the target plane is larger than the viewing range of the acquisition module (the module used to collect light information, such as a camera) at each of the different shooting distances.
The target plane is placed in front of the acquisition module, perpendicular to its optical axis. The acquisition module collects the interference fringes appearing on the target plane to obtain an interference fringe image.
In one embodiment, the division strategy for the different shooting distances may be uniform or non-uniform, and the interval between shooting distances may be preset according to the desired correction fineness. Please refer to FIG. 2. As FIG. 2 shows, the fringe translation variation decreases with shooting distance along a falling curve: when the target plane moves back and forth along the optical axis in an area close to the acquisition module, the translation variation of the interference fringes on the target plane is large; when it moves back and forth in an area far from the acquisition module, the translation variation is small.
Based on this rule, the division strategy of the present application is "dense near, sparse far". As shown in FIG. 3, the sampling positions of the target plane become progressively sparser as the target plane moves away from the acquisition module, so that the distribution density of the shooting distances is divided sensibly.
As an optional embodiment of the present application, one or more raw images may be collected at the same shooting distance as the interference fringe image. When multiple raw images are collected, multi-frame averaging may be performed to obtain the interference fringe image.
As an optional embodiment of the present application, after the interference fringe image is acquired, it may be preprocessed to improve its image quality and thereby obtain a finer correction effect. The preprocessing includes image processing means such as noise reduction or gray-value adjustment.
Step 102: Acquire the image to be corrected, and calculate the depth value of each pixel to be corrected in the image to be corrected.
The image to be corrected of the target area is acquired, and the depth values of the pixels to be corrected in it are calculated, so that the interference fringe image corresponding to each pixel to be corrected can be obtained according to its depth value. (It can be understood that, as noted above, the positions of the interference fringes in the image differ at different shooting distances, and different pixels to be corrected have different depth values, so a corresponding interference fringe image must be matched for each pixel to be corrected individually.)
In one embodiment, the ways of obtaining the depth values include, but are not limited to, the following three:
Method 1: The illumination light source projects a structured light beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor. The processor processes the electrical signal, computes the intensity information of the reflected beam to form a structured light pattern, and finally performs matching or triangulation on the structured light pattern to obtain the depth values of the pixels to be corrected.
Method 2: The illumination light source projects an infrared beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor. The processor processes the electrical signal to compute a phase difference, from which it indirectly computes the time of flight of the beam from emission by the illumination light source to reception by the camera, and then computes the depth values of the pixels to be corrected from that time of flight. It should be understood that the infrared beam may be of the pulsed or continuous-wave type, which is not limited here.
Method 3: The illumination light source projects an infrared pulse beam onto the target area; the acquisition module receives the beam reflected back by the target area, forms an electrical signal, and transmits it to the processor. The processor counts the electrical signals to obtain a waveform histogram, directly computes from the histogram the time of flight from emission by the illumination light source to reception by the camera, and computes the depth values of the pixels to be corrected from that time of flight.
As an optional embodiment of the present application, after the image to be corrected is acquired, it may be preprocessed to improve its image quality and thereby obtain a finer correction effect. The preprocessing includes image processing means such as noise reduction or gray-value adjustment.
Step 103: From the interference fringe images at the different shooting distances, select the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel.
As shown in FIG. 2, because of the distribution interval of the shooting distances, a shooting distance and a depth value may or may not be equal. It can be understood that if the distribution intervals of the shooting distances are sufficiently small, every depth value can correspond to an equal shooting distance.
In one embodiment, when a shooting distance equal to the depth value exists among the different shooting distances, the interference fringe image corresponding to that shooting distance is selected as the target interference fringe image.
In another embodiment, when no shooting distance equal to the depth value exists, the following optional embodiment is executed:
As an optional embodiment of the present application, step 103 includes the following steps 1031 to 1032. Please refer to FIG. 4, which shows a specific schematic flowchart of step 103 in an interference fringe correction method provided by the present application.
The following steps are performed for each pixel to be corrected:
Step 1031: Among the different shooting distances, select the target shooting distance with the smallest difference from the depth value.
As an optional embodiment of the application, there may be two shooting distances with the same smallest difference, i.e. the depth value lies exactly midway between two adjacent shooting distances. As discussed above and shown in FIG. 2, the longer the shooting distance, the smaller the fringe translation variation; therefore, when there are two equally close shooting distances, the larger of the two is selected as the target shooting distance.
Step 1032: Use the interference fringe image corresponding to the target shooting distance as the target interference fringe image.
It can be understood that the case "when a shooting distance equal to the depth value exists among the different shooting distances" also fits steps 1031 and 1032: in that case the difference between the shooting distance and the depth value is 0.
Step 104: Extract the first pixel value at the target coordinate position in the target interference fringe image; the target coordinate position is the coordinate position, in the image to be corrected, of the pixel to be corrected corresponding to the target interference fringe image.
For each pixel to be corrected, the first pixel value at its target coordinate position is extracted from its corresponding target interference fringe image. For example, if the target coordinate position of a pixel to be corrected is (156, 256), the first pixel value at coordinate position (156, 256) in the target interference fringe image is extracted.
To better explain step 104, please refer to FIG. 5, taking the pixels to be corrected a, b and c in the image to be corrected as an example. The target interference fringe image corresponding to pixel a is the first target interference fringe image, where pixel d1 is located; that corresponding to pixel b is the second target interference fringe image, where pixel d2 is located; that corresponding to pixel c is the third target interference fringe image, where pixel d3 is located. For pixel a, pixel d1 in the first target interference fringe image is extracted (a and d1 have the same coordinate position); for pixel b, pixel d2 in the second target interference fringe image is extracted (b and d2 have the same coordinate position); for pixel c, pixel d3 in the third target interference fringe image is extracted (c and d3 have the same coordinate position). It should be understood that, after every pixel in the image to be corrected has been traversed, the corresponding pixels in the target interference fringe images may be integrated into a new frame of target interference fringe image and the new image used for the next operation; alternatively, the pixels in the target interference fringe images may be used directly for the next operation, which is not limited here.
It should be noted that FIG. 5 serves only as an example and places no limit on the number of target interference fringe images or on the number and positions of the pixels to be corrected in the image to be corrected.
Step 105: Correct the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain the corrected image.
The interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions. Pixels located in dark-fringe areas need their pixel values increased, while pixels located in bright-fringe areas need their pixel values reduced. The second pixel value of each pixel to be corrected can therefore be corrected according to the first pixel values at the different coordinate positions to obtain the corrected image. The correction may be carried out in the following two ways:
The first correction method: compute the proportional relationship between all the first pixel values, and adjust the second pixel value of each pixel to be corrected according to that proportion to obtain the corrected image. For example, if three first pixel values are 100, 150 and 200, their proportional relationship is 2:3:4; multiplying the second pixel values of the corresponding pixels to be corrected by 1/2, 1/3 and 1/4 respectively yields the corrected image (assuming, for illustration only, that there are just three first pixel values). However, since pixels in dark-fringe areas need their values raised while pixels in bright-fringe areas need them lowered, this first method gives a poor correction effect: it can only suppress the high pixel values of the bright-fringe areas. The present application therefore provides a second, better correction method.
The second correction method is shown in the following optional embodiment:
As an optional embodiment of the present application, step 105 includes the following steps 1051 to 1052. Please refer to FIG. 6, which shows a specific schematic flowchart of step 105 in an interference fringe correction method provided by the present application.
Step 1051: Normalize all the first pixel values to obtain the first correction parameter corresponding to each pixel to be corrected.
Since the interference fringes alternate between bright and dark, pixel values are high in bright-fringe areas and low in dark-fringe areas. The present application therefore normalizes the first pixel values in the interference fringe image to obtain the first correction parameter corresponding to each pixel to be corrected; the specific process is as follows:
As an optional embodiment of the present application, step 1051 includes the following steps A1 to A2. Please refer to FIG. 7, which shows a specific schematic flowchart of step 1051 in an interference fringe correction method provided by the present application.
Step A1: Obtain the maximum first pixel value among all the first pixel values.
Step A2: Divide each first pixel value by the maximum first pixel value to obtain the first correction parameter corresponding to each pixel to be corrected.
For each first pixel value, the first correction parameter corresponding to each pixel to be corrected is obtained with the following formula:
I_a = I_b / M
where M represents the maximum first pixel value, I_a represents the first correction parameter, and I_b represents the first pixel value. The first correction parameter obtained by the above formula lies in the range [0, 1].
Step 1052: Correct the second pixel value of each pixel to be corrected according to its corresponding first correction parameter to obtain the corrected image.
From step 103 it is known that the depth value and the shooting distance may or may not be equal. If they are not equal, steps B1 to B2 of the following optional embodiment are performed; if they are equal, step B3 of the following optional embodiment is performed. The specific steps are as follows:
As an optional embodiment of the present application, step 1052 includes the following steps B1 to B2. Please refer to FIG. 8, which shows a specific schematic flowchart of step 1052 in an interference fringe correction method provided by the present application.
The following steps are performed for each pixel to be corrected to obtain the corrected image:
Step B1: If the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is not equal to the depth value of that pixel, substitute the first correction parameter into the first preset formula to obtain the second correction parameter.
Because there is a difference between the depth value and the shooting distance, the first correction parameter must be adjusted by the first preset formula.
The first preset formula is as follows:
Figure PCTCN2021107950-appb-000002
where I_a represents the first correction parameter, I_b represents the second correction parameter, L_a represents the shooting distance of the target interference fringe image corresponding to the pixel to be corrected, and L_b represents the depth value of the pixel to be corrected.
Step B2: Divide the second pixel value of the pixel to be corrected by the second correction parameter to obtain the corrected image.
In step B2, the corrected image may be obtained by dividing the second pixel value of each pixel to be corrected by its corresponding second correction parameter; alternatively, the result of that division may additionally be multiplied by a preset adjustment coefficient (the adjustment coefficient adjusts the correction strength and can be preset for the actual application scenario) to obtain the corrected image.
It can be understood that when the correction parameter approaches 1 the correction amplitude is low, and when it approaches 0 the correction amplitude is high. The second correction method can therefore both suppress the pixel values of bright-fringe areas and raise the pixel values of dark-fringe areas, achieving an excellent correction effect.
Step B3: If the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is equal to the depth value of that pixel, divide the second pixel value of that pixel by the first correction parameter to obtain the corrected image.
In this embodiment, a target interference fringe image is matched for each pixel to be corrected in the image to be corrected, and the first pixel value is obtained from the target interference fringe image according to the target coordinate position of each pixel to be corrected. Each pixel to be corrected is then corrected according to its first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
FIG. 9 is a schematic diagram of an under-screen system provided by an embodiment of the present invention. As shown in FIG. 9, the under-screen system 90 of this embodiment includes an illumination light source 91, an acquisition module 92, a processor 93, a memory 94 and a display screen 95, wherein:
the illumination light source 91 emits an infrared beam to the target plane 96 through the display screen 95;
the acquisition module 92 receives the light signal reflected by the target plane and passing through the display screen 95, acquires the infrared image of the target area 96, and transmits it to the processor 93;
the processor 93 is configured to correct the infrared image by using the preset interference fringe images and the interference fringe correction method described in any of the above embodiments;
the memory 94 is used for storing the interference fringe images at different shooting distances and a computer program that can be run on the processor.
It should be noted that when either the illumination light source 91 or the acquisition module 92 is under the display screen 95, if the infrared image collected by the acquisition module 92 contains interference fringes, the above interference fringe correction method can still be used to correct it; no restriction is imposed here.
In one embodiment, if the illumination light source 91 emits a structured light beam to the target area 96 through the display screen 95, the under-screen system 90 further includes a floodlight module 97, which projects a flood beam to the target area 96 through the display screen 95. The acquisition module 92 then, on the one hand, receives the structured light signal reflected by the target area and transmits it to the processor 93 to obtain the depth values of the target area and, on the other hand, receives the flood light signal reflected by the target area to form an infrared image, which is further corrected according to the above method.
It should be understood that if the illumination light source 91 emits an infrared beam to the target area 96 through the display screen 95, the under-screen system 90 does not need supplementary flood lighting: the acquisition module can directly collect the infrared image, which is further corrected according to the above method.
In one embodiment, the processor 93 executes the steps in the interference fringe correction method embodiment. More specifically, the steps may be executed by one or more units into which the processor 93 can be divided (please refer to FIG. 10, which shows a schematic diagram of a processor functional architecture provided by the present application). The specific functions of the units are as follows:
an acquisition unit 1001, configured to acquire the infrared image of the target area collected by the acquisition module;
a depth calculation unit 1002, configured to calculate the depth value of each pixel to be corrected in the infrared image;
a first selection unit 1003, configured to select, from the interference fringe images at different shooting distances stored in the memory, the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel;
a second selection unit 1004, configured to extract the first pixel value at the target coordinate position in the target interference fringe image, the target coordinate position being the coordinate position, in the infrared image, of the pixel to be corrected corresponding to the target interference fringe image;
a correction unit 1005, configured to correct the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image.
The present application provides an under-screen system that matches a target interference fringe image for each pixel to be corrected in an infrared image, obtains the first pixel value from the target interference fringe image according to the target coordinate position of each pixel to be corrected, and corrects each pixel according to its first pixel value (since the target interference fringe image reflects the first pixel values of the interference fringes at different coordinate positions well, the pixels to be corrected can be corrected on that basis). By correcting each pixel to be corrected individually, the above scheme mitigates the degradation of captured image quality caused by interference fringes.
Those skilled in the art can understand that the under-screen system includes, but is not limited to, the above modules and combinations thereof. FIG. 9 is merely an example of an under-screen system and does not constitute a limitation; the system may include more or fewer components than shown, combine certain components, or use different components. For example, the under-screen system may also include input/output devices, network access devices, buses, and the like.
The camera module includes an acquisition module and an illumination light source. The illumination light source includes a light source and optical components (the optical components may include diffractive optical elements, etc.). The light source may be an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL) or a similar source, or a light source array composed of multiple light sources; the beam emitted by the light source may be visible, infrared or ultraviolet light. The beam emitted by the light source can form a uniform, random or specially designed intensity-distribution projection pattern on the reference plane. The camera includes modules such as an image sensor and a lens unit; the lens unit receives the part of the beam reflected back by the object and images it onto the image sensor. The image sensor may be a Charge-Coupled Device (CCD), a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an Avalanche Diode (AD), a Single-Photon Avalanche Diode (SPAD), or another image sensor.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be an internal storage unit of the under-screen system, such as a hard disk or memory of the under-screen system. The memory may also be an external storage device of the under-screen system, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card equipped on the under-screen system. Further, the memory may include both an internal storage unit and an external storage device of the under-screen system. The memory is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
It should be noted that, since the information exchange and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed, i.e. the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
The embodiments of the present application also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of each of the above method embodiments.
The embodiments of the present application also provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps of each of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application implements all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/living-body detection device, a recording medium, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunication signal and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, under legislation and patent practice, computer-readable media may not be electrical carrier signals and telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or described in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative: the division of the modules or units is only a logical function division, and in actual implementation there may be other division methods; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units.
It should be understood that, when used in the specification and appended claims of the present application, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and appended claims of the present application refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in the specification and appended claims of the present application, the term "if" may, depending on the context, be interpreted as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may, depending on the context, be interpreted as "once it is determined", "in response to the determination", "once the [described condition or event] is detected" or "in response to the detection of the [described condition or event]".
In addition, in the description of the specification and appended claims of the present application, the terms "first", "second", "third", etc. are used only to distinguish descriptions and cannot be understood as indicating or implying relative importance.
References in the specification of the present application to "one embodiment", "some embodiments" and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", "in still other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "comprising", "including", "having" and their variants mean "including but not limited to", unless specifically emphasized otherwise.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or equivalently replace some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (10)

  1. An interference fringe correction method, characterized in that the correction method comprises:
    acquiring interference fringe images at different shooting distances, wherein an interference fringe image is an image, collected by an acquisition module, of the interference fringes formed when an illumination light source irradiates a target plane through a display screen, and is used to reflect first pixel values of the interference fringes at different coordinate positions;
    acquiring an image to be corrected, and calculating a depth value of each pixel to be corrected in the image to be corrected;
    from the interference fringe images at the different shooting distances, selecting the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel;
    extracting the first pixel value at a target coordinate position in the target interference fringe image, wherein the target coordinate position is the coordinate position, in the image to be corrected, of the pixel to be corrected corresponding to the target interference fringe image;
    correcting a second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain a corrected image.
  2. The correction method according to claim 1, characterized in that correcting the second pixel value of each pixel to be corrected according to its corresponding first pixel value to obtain the corrected image comprises:
    normalizing all the first pixel values to obtain a first correction parameter corresponding to each pixel to be corrected;
    correcting the second pixel value of each pixel to be corrected according to its corresponding first correction parameter to obtain the corrected image.
  3. The correction method according to claim 2, characterized in that normalizing all the first pixel values to obtain the first correction parameter corresponding to each pixel to be corrected comprises:
    obtaining the maximum first pixel value among all the first pixel values;
    dividing each first pixel value by the maximum first pixel value to obtain the first correction parameter corresponding to each pixel to be corrected.
  4. The correction method according to claim 2, characterized in that correcting the second pixel value of each pixel to be corrected according to its corresponding first correction parameter to obtain the corrected image comprises:
    performing the following steps for each pixel to be corrected to obtain the corrected image:
    if the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is not equal to the depth value of that pixel, substituting the first correction parameter into a first preset formula to obtain a second correction parameter;
    the first preset formula being as follows:
    Figure PCTCN2021107950-appb-100001
    wherein I_a represents the first correction parameter, I_b represents the second correction parameter, L_a represents the shooting distance of the target interference fringe image corresponding to the pixel to be corrected, and L_b represents the depth value of the pixel to be corrected;
    dividing the second pixel value of the pixel to be corrected by the second correction parameter to obtain the corrected image.
  5. The correction method according to claim 2, characterized in that correcting the second pixel value of each pixel to be corrected according to its corresponding first correction parameter to obtain the corrected image comprises:
    performing the following steps for each pixel to be corrected to obtain the corrected image:
    if the shooting distance of the target interference fringe image corresponding to a pixel to be corrected is equal to the depth value of that pixel, dividing the second pixel value of that pixel by the first correction parameter to obtain the corrected image.
  6. The correction method according to claim 1, characterized in that selecting, from the interference fringe images at the different shooting distances, the interference fringe image corresponding to the depth value of each pixel to be corrected as the target interference fringe image corresponding to that pixel comprises:
    performing the following steps for each pixel to be corrected:
    among the different shooting distances, selecting the target shooting distance with the smallest difference from the depth value;
    using the interference fringe image corresponding to the target shooting distance as the target interference fringe image.
  7. The correction method according to claim 6, characterized in that selecting, among the different shooting distances, the target shooting distance with the smallest difference from the depth value comprises:
    if the depth value lies at the center of adjacent shooting distances, using the largest of the adjacent shooting distances as the target shooting distance.
  8. An under-screen system, characterized by comprising a display screen, an illumination light source, an acquisition module, a processor and a memory, wherein:
    the illumination light source is used to emit an infrared beam to a target area through the display screen; the target area includes a target plane;
    the acquisition module is used to receive the light signal reflected by the target area and passing through the display screen, acquire an infrared image of the target area, and transmit it to the processor;
    the processor is used to correct the infrared image by using preset interference fringe images and the correction method according to any one of claims 1-7;
    the memory is used to store the interference fringe images at different shooting distances and a computer program that can be run on the processor.
  9. The under-screen system according to claim 8, characterized in that, if the beam emitted by the illumination light source to the target area through the display screen is a structured light beam, the under-screen system further comprises a floodlight module for projecting a flood beam to the target area, and the acquisition module collects the flood light signal reflected by the target area and acquires the infrared image of the target plane.
  10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the steps of the correction method according to any one of claims 1 to 7 are implemented.
PCT/CN2021/107950 2021-03-24 2021-07-22 Image correction method and under-screen system WO2022198862A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/209,696 US20230325979A1 (en) 2021-03-24 2023-06-14 Image correction method, and under-screen system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110315005.3 2021-03-24
CN202110315005.3A CN115131215A (zh) 2021-03-24 2021-03-24 Image correction method and under-screen system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/209,696 Continuation US20230325979A1 (en) 2021-03-24 2023-06-14 Image correction method, and under-screen system

Publications (1)

Publication Number Publication Date
WO2022198862A1 (zh)

Family

ID=83373776

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107950 WO2022198862A1 (zh) 2021-03-24 2021-07-22 Image correction method and under-screen system

Country Status (3)

Country Link
US (1) US20230325979A1 (zh)
CN (1) CN115131215A (zh)
WO (1) WO2022198862A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021124961A1 2021-09-27 2023-03-30 ifm Electronic Gmbh Camera system with interference pattern suppression
CN117132589B * 2023-10-23 2024-04-16 深圳明锐理想科技股份有限公司 Fringe image correction method, optical detection device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150208048A1 (en) * 2014-01-21 2015-07-23 Lite-On It Corporation Image correction method and image projection apparatus using the same
CN105956530A * 2016-04-25 2016-09-21 中科院微电子研究所昆山分所 Image correction method and device
CN109087253A * 2017-06-13 2018-12-25 杭州海康威视数字技术股份有限公司 Image correction method and device
CN109242901A * 2017-07-11 2019-01-18 深圳市道通智能航空技术有限公司 Image calibration method and device applied to a three-dimensional camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN ZHI GANG, LI RUN SHUN, CUI ZHAN HUA: "The Study of the Interferogram Processing", OPTICAL TECHNIQUE, vol. 26, no. 3, 20 May 2000 (2000-05-20), pages 258 - 259+262, XP055973959, ISSN: 1002-1582, DOI: 10.13741/j.cnki.11-1879/o4.2000.03.023 *

Also Published As

Publication number Publication date
US20230325979A1 (en) 2023-10-12
CN115131215A (zh) 2022-09-30

Similar Documents

Publication Publication Date Title
US11575843B2 (en) Image sensor modules including primary high-resolution imagers and secondary imagers
US10893260B2 (en) Depth mapping with a head mounted display using stereo cameras and structured light
CN109767467B Image processing method and apparatus, electronic device, and computer-readable storage medium
US8830227B2 Depth-based gain control
CN106851124B Depth-of-field-based image processing method, processing apparatus and electronic apparatus
WO2022198862A1 Image correction method and under-screen system
WO2018161758A1 Exposure control method, exposure control apparatus and electronic apparatus
JP2013156109A Distance measurement apparatus
WO2023273094A1 Spectral reflectance determination method, apparatus and device
CN108174085A Multi-camera shooting method, shooting apparatus, mobile terminal and readable storage medium
CN102609152B Image acquisition method and apparatus for an electronic whiteboard with large-field-of-view image detection
CN107270867B Active ranging system and method
WO2023273412A1 Spectral reflectance determination method, apparatus and device
CN113362253B Image shading correction method, system and apparatus
JP3938122B2 Pseudo three-dimensional image generation apparatus and method, and program and recording medium therefor
CN111325691B Image correction method and apparatus, electronic device, and computer-readable storage medium
CN105872392A Optical ranging system with dynamic exposure time
WO2022198861A1 Interference fringe correction method and under-screen system
JPH09126758A Environment recognition device for vehicles
CN112752088B Depth image generation method and apparatus, reference image generation method, and electronic device
US20220116545A1 Depth-assisted auto focus
CN110567585B Real-time infrared image "pot-lid effect" suppression method
CN109981992B Control method and apparatus for improving ranging accuracy under large ambient light variation
CN110390689B Depth map processing method and apparatus, and electronic device
CN109565544B Position designation apparatus and position designation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932474

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE