WO2021160001A1 - Image acquisition method and device - Google Patents

Image acquisition method and device

Info

Publication number
WO2021160001A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
visible light
brightness
image
exposure
Prior art date
Application number
PCT/CN2021/075085
Other languages
English (en)
French (fr)
Inventor
涂娇姣 (Tu Jiaojiao)
蓝晶 (Lan Jing)
苏益波 (Su Yibo)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21753011.2A (EP4102817A4)
Publication of WO2021160001A1
Priority to US17/886,761 (US20220392182A1)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 - Control of illumination
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/74 - Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/75 - Circuitry for compensating brightness variation in the scene by influencing optical camera components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 - Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 - Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 - Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/131 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing infrared wavelengths
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/30 - Transforming light or analogous information into electric information
    • H04N5/33 - Transforming infrared radiation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10141 - Special mode during image acquisition
    • G06T2207/10144 - Varying exposure
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10141 - Special mode during image acquisition
    • G06T2207/10152 - Varying illumination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • This application relates to image processing technology, in particular to an image acquisition method and device.
  • In low-illumination scenes, the signal-to-noise ratio of the red-green-blue sensor (RGB Sensor) is severely reduced, detail loss is obvious, and image quality deteriorates sharply.
  • To address this, visible light and infrared dual-channel fusion technology came into being. This technology collects a visible light image and an infrared image at the same time. From the infrared image, brightness information with a higher signal-to-noise ratio and better detail can be extracted, while the visible light image contributes color information. By fusing the two, a target image with a high signal-to-noise ratio and better detail performance can be obtained.
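As a rough illustration of this fusion principle, the sketch below keeps the chroma of the visible-light image and blends in the higher-SNR infrared luminance. It is a minimal sketch, not the patented algorithm; the function name, the BT.601 conversion, and the fixed blend weight are all this example's own assumptions.

```python
import numpy as np

def fuse_visible_infrared(rgb, ir, ir_weight=0.7):
    """Toy luminance/chroma fusion: keep the visible-light color,
    borrow brightness detail from the infrared image.
    rgb: HxWx3 array in [0, 1]; ir: HxW array in [0, 1].
    ir_weight=0.7 is an arbitrary illustrative choice."""
    # BT.601 luma and chroma of the visible-light image
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = (rgb[..., 2] - y) * 0.564
    cr = (rgb[..., 0] - y) * 0.713
    # Replace the luma with a blend that favors the infrared channel,
    # which carries the higher signal-to-noise brightness information
    y_fused = (1 - ir_weight) * y + ir_weight * ir
    # Convert back to RGB
    r = y_fused + 1.403 * cr
    b = y_fused + 1.773 * cb
    g = (y_fused - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```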
  • The most commonly used solution is dual-sensor fusion: one RGB Sensor collects visible light images, a separate infrared sensor collects infrared images, and the two are fused to obtain the target image.
  • Another solution is to collect an RGBIR image with an RGB-infrared (RGBIR) Sensor, separate the RGB component and the IR component from the RGBIR image, and then fuse the two to obtain the target image.
  • The former's dual-sensor structure has parallax problems and requires calibration, which increases the complexity and design difficulty of the fusion algorithm.
  • The latter requires an additional component-splitting step, yet the splitting cannot completely separate the RGB component from the IR component, which may cause difficulty in color reproduction and may also blur the fused image.
  • the present application provides an image acquisition method and device to improve the adjustment efficiency and obtain a target image with better signal-to-noise ratio and detail performance.
  • In a first aspect, the present application provides an image acquisition method, including: acquiring first original image data, the first original image data being collected by an image sensor based on initial visible light exposure parameters and an initial light intensity of an infrared fill light, the first original image data including visible light image data and infrared image data; obtaining the brightness of the visible light image according to the first original image data; adjusting the visible light exposure parameters according to a first difference, the first difference being the difference between the brightness of the visible light image and a preset target visible light image brightness; obtaining the brightness of the infrared image according to the first original image data; adjusting the light intensity of the infrared fill light according to a second difference, the second difference being the difference between the brightness of the infrared image and a preset target infrared image brightness; acquiring second original image data, the second original image data being collected by the image sensor based on the adjusted visible light exposure parameters and light intensity of the infrared fill light; and obtaining a target image by fusing the visible light image data and the infrared image data in the second original image data.
  • This application collects the first original image data based on the initial visible light exposure parameters and the light intensity of the infrared fill light, and then adjusts the visible light exposure parameters and the light intensity of the infrared fill light separately, according to the visible light component and the infrared component in the first original image data. The two adjustment processes are independent of each other and completely unaffected by one another, as sketched below. This not only improves adjustment efficiency but also yields a target image with a better signal-to-noise ratio and detail performance, and undesirable effects in the fused target image can be eliminated.
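The two independent loops can be pictured with a minimal control sketch, assuming hypothetical `sensor` and `ir_lamp` driver objects and a simple proportional step; the patent's actual allocation strategies are richer.

```python
def auto_exposure_step(sensor, ir_lamp, target_vis, target_ir,
                       vis_deadband, ir_deadband, gain=0.1):
    """One iteration of the two independent adjustment loops.
    `sensor` and `ir_lamp` are hypothetical interfaces; the
    proportional step is illustrative, not the patent's strategy."""
    raw = sensor.capture_raw()            # first original image data
    vis_brightness = raw.visible.mean()   # brightness of the visible image
    ir_brightness = raw.infrared.mean()   # brightness of the infrared image

    # First difference: adjust visible-light exposure only when the
    # difference falls outside the preset first range (dead band).
    d1 = vis_brightness - target_vis
    if abs(d1) > vis_deadband:
        sensor.exposure_time *= (1 - gain * d1 / target_vis)

    # Second difference: adjust the fill-light intensity independently
    # of the visible-light loop, using its own dead band.
    d2 = ir_brightness - target_ir
    if abs(d2) > ir_deadband:
        ir_lamp.intensity *= (1 - gain * d2 / target_ir)
```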
  • In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: if the absolute value of the first difference is not within a preset first range, adjusting the visible light exposure parameters according to the first difference.
  • This application adjusts the visible light exposure parameters only when the brightness of the visible light image is not within the tolerable range of the expected effect, which improves adjustment efficiency.
  • In a possible implementation, before adjusting the visible light exposure parameters according to the first difference, the method further includes: determining whether an exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference. Adjusting the visible light exposure parameters according to the first difference then includes: if the set includes a strategy corresponding to the first difference, adjusting the visible light exposure parameters according to that strategy; if it does not, adding a new exposure parameter allocation strategy and adjusting the visible light exposure parameters according to the new strategy.
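The claim does not spell out how the strategy set is stored. One plausible reading, sketched below, is a table keyed by a quantized first difference with a fallback that synthesizes a new entry; every field name and value here is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class ExposureStrategy:
    # How a required brightness change is split across the three
    # visible-light exposure parameters (fields are illustrative).
    time_factor: float
    gain_factor: float
    aperture_factor: float

# Strategy set keyed by the quantized first difference (made-up values).
strategy_set = {
    -2: ExposureStrategy(1.5, 1.2, 1.1),  # image far too dark
    -1: ExposureStrategy(1.2, 1.0, 1.0),  # image slightly dark
    1: ExposureStrategy(0.8, 1.0, 1.0),   # image slightly bright
    2: ExposureStrategy(0.6, 0.9, 1.0),   # image far too bright
}

def strategy_for(first_difference, bucket=32):
    """Return the allocation strategy for this difference; if the set
    has no matching entry, add a new one, as the claim describes."""
    key = int(first_difference // bucket)
    if key not in strategy_set:
        # New strategy: a crude proportional rule as a stand-in.
        factor = max(0.5, min(2.0, 1.0 - 0.2 * key))
        strategy_set[key] = ExposureStrategy(factor, 1.0, 1.0)
    return strategy_set[key]
```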
  • In a possible implementation, the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain. Adjusting the visible light exposure parameters according to the first difference includes: when the brightness of the visible light image is lower than the target visible light image brightness, increasing one or more of the exposure time, aperture diameter, or exposure gain; when the brightness of the visible light image is higher than the target visible light image brightness, reducing one or more of the exposure time, aperture diameter, or exposure gain.
  • This application reduces one or more of the exposure time, aperture diameter, or exposure gain for visible light images whose brightness is high, and increases one or more of them for visible light images whose brightness is low, which improves the efficiency of adjusting the visible light exposure parameters so that the adjustment meets actual brightness requirements.
  • In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: if the absolute value of the second difference is not within a preset second range, adjusting the light intensity of the infrared fill light according to the second difference.
  • This application adjusts the light intensity of the infrared fill light only when the brightness of the infrared image is not within the tolerable range of the expected effect, which improves adjustment efficiency.
  • In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: increasing the light intensity of the infrared fill light when the brightness of the infrared image is lower than the target infrared image brightness, and reducing it when the brightness of the infrared image is higher than the target infrared image brightness.
  • This application reduces the intensity of the infrared fill light for infrared images whose brightness is high and increases it for infrared images whose brightness is low, which improves the light intensity adjustment efficiency of the infrared fill light so that the adjustment meets actual brightness requirements.
  • In a possible implementation, increasing the light intensity of the infrared fill light includes increasing it by reducing the pulse width modulation (PWM) duty cycle of the infrared fill light, and reducing the light intensity of the infrared fill light includes reducing it by increasing the PWM duty cycle of the infrared fill light.
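Note that the claim maps a lower PWM duty cycle to a higher lamp intensity (consistent with, for example, an active-low drive). A sketch under exactly that stated mapping, with a hypothetical `pwm` driver attribute:

```python
def set_ir_lamp_intensity(pwm, increase, step=0.05):
    """Adjust fill-light intensity via the PWM duty cycle. Per the
    claim, *reducing* the duty cycle *increases* the intensity, so we
    subtract to brighten. `pwm.duty_cycle` in [0, 1] is assumed."""
    if increase:
        pwm.duty_cycle = max(0.0, pwm.duty_cycle - step)
    else:
        pwm.duty_cycle = min(1.0, pwm.duty_cycle + step)
```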
  • In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: if the absolute value of the first difference for N consecutive frames of visible light images is not within the first range, adjusting the visible light exposure parameters according to the first difference, where N is a positive integer.
  • This application adjusts the visible light exposure parameters only when the brightness of multiple consecutive frames of visible light images is not within the tolerable range of the expected effect, which avoids single-frame jumps.
  • In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: increasing one or more of the exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is lower than the target visible light image brightness; or reducing one or more of the exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is higher than the target visible light image brightness.
  • In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: if the absolute value of the second difference for M consecutive frames of infrared images is not within the preset second range, adjusting the light intensity of the infrared fill light according to the second difference, where M is a positive integer.
  • This application adjusts the light intensity of the infrared fill light only when the brightness of multiple consecutive frames of infrared images is not within the tolerable range of the expected effect, which avoids single-frame jumps.
  • In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: increasing the light intensity of the infrared fill light when the brightness of M consecutive frames of infrared images is lower than the target infrared image brightness, and reducing it when the brightness of M consecutive frames of infrared images is higher than the target infrared image brightness.
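The N-consecutive-frames (and M-consecutive-frames) condition amounts to a debouncer; the sliding-window mechanics below are an illustrative reading, not the claim's wording.

```python
from collections import deque

class BrightnessDebouncer:
    """Signal an adjustment only when every one of the last n frames
    falls outside the tolerated range, avoiding single-frame jumps."""
    def __init__(self, n_frames, deadband):
        self.history = deque(maxlen=n_frames)
        self.deadband = deadband

    def should_adjust(self, difference):
        self.history.append(abs(difference) > self.deadband)
        return (len(self.history) == self.history.maxlen
                and all(self.history))
```

The same class would serve both loops: one instance with N for the first difference of visible light images, another with M for the second difference of infrared images.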
  • In a possible implementation, increasing one or more of the exposure time, aperture diameter, or exposure gain includes: in a static scene, first increasing the exposure time, then the exposure gain, and finally the aperture diameter; in a motion scene, first increasing the exposure gain, then the exposure time, and finally the aperture diameter. Reducing one or more of the exposure time, aperture diameter, or exposure gain includes: in a static scene, first reducing the exposure time, then the exposure gain, and finally the aperture diameter; in a motion scene, first reducing the exposure gain, then the exposure time, and finally the aperture diameter.
  • For static scenes and motion scenes, this application specifies a corresponding parameter adjustment order, which reduces the number of parameter adjustments and improves adjustment efficiency.
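The scene-dependent ordering can be encoded as priority lists. In the sketch below, the headroom check, step factor, and dictionary layout are assumptions added for illustration; the claim itself only fixes the order.

```python
# Parameter priority per scene type, as described above. Gain comes
# first in motion scenes so exposure time stays short and moving
# subjects do not blur; the same order applies when decreasing.
ADJUST_ORDER = {
    "static": ["exposure_time", "exposure_gain", "aperture_diameter"],
    "motion": ["exposure_gain", "exposure_time", "aperture_diameter"],
}

def adjust(params, limits, scene, brighten, step=1.25):
    """Walk the ordered list and adjust the first parameter that still
    has headroom; `params` and `limits` are illustrative dicts."""
    for name in ADJUST_ORDER[scene]:
        lo, hi = limits[name]
        new = params[name] * step if brighten else params[name] / step
        if lo <= new <= hi:
            params[name] = new
            return name
    return None  # every parameter is already at its limit
```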
  • In a possible implementation, an ultra-high-resolution RGBIR Sensor is used to obtain high-resolution visible light images and infrared images, and the two high-resolution images are fused to obtain a target image with a good signal-to-noise ratio and detail performance.
  • Alternatively, a high-resolution RGBIR Sensor is used to obtain a lower-resolution image, and a super-resolution algorithm is then applied to the low-resolution image to obtain a high-resolution visible light image and infrared image; the two high-resolution images are fused to obtain a target image with good signal-to-noise ratio and detail performance.
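A sketch of this lower-resolution path; nearest-neighbour upsampling stands in for the unspecified super-resolution algorithm, and the function is purely a placeholder.

```python
import numpy as np

def upscale_placeholder(img, factor=2):
    """Placeholder for the super-resolution step: nearest-neighbour
    interpolation marks where a real super-resolution model would run
    on the lower-resolution visible-light and infrared images."""
    h, w = img.shape[:2]
    rows = np.clip(np.round(np.linspace(0, h - 1, h * factor)).astype(int), 0, h - 1)
    cols = np.clip(np.round(np.linspace(0, w - 1, w * factor)).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]
```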
  • this application provides an image acquisition device, including:
  • the first acquisition module is configured to acquire first original image data, which is collected by the image sensor based on the initial visible light exposure parameters and the light intensity of the infrared fill light, the first original image data including visible light image data and infrared image data; the first processing module is configured to obtain the brightness of the visible light image according to the first original image data and adjust the visible light exposure parameters according to the first difference, the first difference being the difference between the brightness of the visible light image and the preset target visible light image brightness; the second processing module is configured to obtain the brightness of the infrared image according to the first original image data and adjust the light intensity of the infrared fill light according to the second difference, the second difference being the difference between the brightness of the infrared image and the preset target infrared image brightness; the second acquisition module is configured to acquire second original image data, which is collected by the image sensor based on the adjusted visible light exposure parameters and light intensity of the infrared fill light; and the fusion module is configured to fuse the visible light image data and the infrared image data in the second original image data to obtain the target image.
  • the first processing module is specifically configured to adjust the visible light exposure parameter according to the first difference if the absolute value of the first difference is not within the preset first range.
  • the first processing module is specifically configured to determine whether the exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference; if it does, adjust the visible light exposure parameters according to the strategy corresponding to the first difference; and if it does not, add a new exposure parameter allocation strategy and adjust the visible light exposure parameters according to the new strategy.
  • the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain; the first processing module is specifically configured to increase one or more of the exposure time, aperture diameter, or exposure gain when the brightness of the visible light image is lower than the target visible light image brightness, and to reduce one or more of them when the brightness of the visible light image is higher than the target visible light image brightness.
  • the second processing module is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute value of the second difference is not within the preset second range.
  • the second processing module is specifically configured to increase the intensity of the infrared fill light when the brightness of the infrared image is lower than the target infrared image brightness, and to reduce it when the brightness of the infrared image is higher than the target infrared image brightness.
  • the second processing module is specifically configured to increase the light intensity of the infrared fill light by reducing its pulse width modulation (PWM) duty cycle, and to reduce the light intensity of the infrared fill light by increasing its PWM duty cycle.
  • the first processing module is specifically configured to adjust the visible light exposure parameters according to the first difference if the absolute value of the first difference for N consecutive frames of visible light images is not within the first range, where N is a positive integer.
  • the first processing module is specifically configured to increase one or more of exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is lower than the target visible light image brightness, or to reduce one or more of them when the brightness of N consecutive frames of visible light images is higher than the target visible light image brightness.
  • the second processing module is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute value of the second difference for M consecutive frames of infrared images is not within the preset second range, where M is a positive integer.
  • the second processing module is specifically configured to increase the intensity of the infrared fill light when the brightness of M consecutive frames of infrared images is lower than the target infrared image brightness, and to reduce it when the brightness of M consecutive frames of infrared images is higher than the target infrared image brightness.
  • the first processing module is specifically configured to, in a static scene, first increase the exposure time, then the exposure gain, and finally the aperture diameter; in a motion scene, first increase the exposure gain, then the exposure time, and finally the aperture diameter; in a static scene, first reduce the exposure time, then the exposure gain, and finally the aperture diameter; and in a motion scene, first reduce the exposure gain, then the exposure time, and finally the aperture diameter.
  • The present application further provides an image acquisition device, including one or more processors configured to call program instructions stored in a memory to perform the following steps: acquiring first original image data through an image sensor, the first original image data being collected by the image sensor based on the initial visible light exposure parameters and the light intensity of the infrared fill light, the first original image data including visible light image data and infrared image data; obtaining the brightness of the visible light image according to the first original image data; adjusting the visible light exposure parameters according to the first difference, the first difference being the difference between the brightness of the visible light image and the preset target visible light image brightness; obtaining the brightness of the infrared image according to the first original image data; adjusting the light intensity of the infrared fill light according to the second difference, the second difference being the difference between the brightness of the infrared image and the preset target infrared image brightness; acquiring second original image data through the image sensor, the second original image data being collected by the image sensor based on the adjusted visible light exposure parameters and light intensity of the infrared fill light; and obtaining a target image by fusing the visible light image data and the infrared image data in the second original image data.
  • the processor is specifically configured to adjust the visible light exposure parameter according to the first difference if the absolute value of the first difference is not within the preset first range.
  • the processor is specifically configured to determine whether the exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference; if it does, adjust the visible light exposure parameters according to the strategy corresponding to the first difference; and if it does not, add a new exposure parameter allocation strategy and adjust the visible light exposure parameters according to the new strategy.
  • the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain; the processor is specifically configured to increase one or more of the exposure time, aperture diameter, or exposure gain when the brightness of the visible light image is lower than the target visible light image brightness, and to reduce one or more of them when the brightness of the visible light image is higher than the target visible light image brightness.
  • the processor is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute value of the second difference is not within the preset second range.
  • the processor is specifically configured to increase the intensity of the infrared fill light when the brightness of the infrared image is lower than the target infrared image brightness, and to reduce it when the brightness of the infrared image is higher than the target infrared image brightness.
  • the processor is specifically configured to increase the light intensity of the infrared fill light by reducing its pulse width modulation (PWM) duty cycle, and to reduce the light intensity of the infrared fill light by increasing its PWM duty cycle.
  • the processor is specifically configured to adjust the visible light exposure parameters according to the first difference if the absolute value of the first difference for N consecutive frames of visible light images is not within the first range, where N is a positive integer.
  • the processor is specifically configured to increase one or more of exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is lower than the target visible light image brightness, or to reduce one or more of them when the brightness of N consecutive frames of visible light images is higher than the target visible light image brightness.
  • the processor is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute value of the second difference for M consecutive frames of infrared images is not within the preset second range, where M is a positive integer.
  • the processor is specifically configured to increase the intensity of the infrared fill light when the brightness of M consecutive frames of infrared images is lower than the target infrared image brightness, and to reduce it when the brightness of M consecutive frames of infrared images is higher than the target infrared image brightness.
  • the processor is specifically configured to, in a static scene, first increase the exposure time, then the exposure gain, and finally the aperture diameter; in a motion scene, first increase the exposure gain, then the exposure time, and finally the aperture diameter; in a static scene, first reduce the exposure time, then the exposure gain, and finally the aperture diameter; and in a motion scene, first reduce the exposure gain, then the exposure time, and finally the aperture diameter.
  • The present application also provides a terminal device, including: an image sensor for collecting first original image data based on initial visible light exposure parameters and the light intensity of the infrared fill light, the first original image data including visible light image data and infrared image data; and a processor configured to call software instructions in a memory to perform the following steps: obtaining the brightness of the visible light image according to the first original image data; adjusting the visible light exposure parameters according to the first difference, the first difference being the difference between the brightness of the visible light image and the preset target visible light image brightness; obtaining the brightness of the infrared image according to the first original image data; and adjusting the light intensity of the infrared fill light according to the second difference, the second difference being the difference between the brightness of the infrared image and the preset target infrared image brightness. The image sensor is also used to collect second original image data based on the adjusted visible light exposure parameters and light intensity of the infrared fill light, and the processor is further configured to fuse the visible light image data and the infrared image data in the second original image data to obtain the target image.
  • the processor is further configured to execute the method described in any one of the possible implementations of the foregoing first aspect other than the first possible implementation.
  • the present application provides a computer-readable storage medium, including a computer program that, when executed on a computer or processor, causes the computer or processor to execute the method described in any one of the implementations of the above first aspect.
  • this application provides a computer program product which, when executed by a computer or a processor, implements the method described in any one of the above first aspect.
  • the present application provides a chip including a processor and a memory, the memory being used to store a computer program and the processor being used to call and run the computer program stored in the memory to execute the method of any one of the above first aspect.
  • Fig. 1 exemplarily shows a schematic structural diagram of a terminal device 100
  • Figures 2a and 2b show the photosensitive results of two exemplary 2X2-array-ordered RGBIR Sensors
  • Figures 3a and 3b show the photosensitivity results of two exemplary 4X4 array sorted RGBIR Sensors
  • Fig. 4 shows an exemplary structural schematic diagram of the image acquisition device
  • Figure 5 shows a schematic diagram of the hardware architecture of an independent exposure device
  • Fig. 6 is a flowchart of an exemplary image fusion method provided by an embodiment of this application.
  • FIG. 7 shows the photosensitive result of an exemplary traditional RGBIR Sensor
  • FIG. 8 is a schematic structural diagram of an embodiment of an image acquisition device of this application.
  • At least one (item) refers to one or more, and “multiple” refers to two or more.
  • “And/or” is used to describe the association relationship of associated objects, indicating that three relationships can exist; for example, “A and/or B” can mean: only A, only B, or both A and B, where A and B can be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
  • “At least one of the following items” or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
  • At least one of a, b, or c can mean: a; b; c; “a and b”; “a and c”; “b and c”; or “a and b and c”; where a, b, and c can each be single or multiple.
  • The terminal equipment of this application can also be called user equipment (UE) and can be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on water (such as on ships), or in the air (for example, on airplanes, balloons, or satellites).
  • The terminal device can be a mobile phone (such as the terminal device 100), a tablet computer (pad), a computer with a wireless transceiver function, a virtual reality (VR) device, an augmented reality (AR) device, a monitoring device, a smart screen, a smart TV, a wireless device in remote medical care, or a wireless device in a smart home; this application does not limit this.
  • Fig. 1 exemplarily shows a schematic structural diagram of a terminal device 100.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a SIM card interface 195, and so on.
  • the sensor module 180 may include a gyroscope sensor 180A, an acceleration sensor 180B, a proximity light sensor 180G, a fingerprint sensor 180H, a touch sensor 180K, and a hinge sensor 180M. (Of course, the terminal device 100 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., not shown in the figure.)
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal device 100.
  • the terminal device 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the terminal device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
  • the CPU and GPU can cooperate to execute the method provided in the embodiments of the present application; for example, part of the algorithm in the method is executed by the CPU and another part by the GPU, to obtain faster processing.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the terminal device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the camera 193 (a front camera or a rear camera, or a camera that can serve as both) is used to capture still images or videos.
  • the camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes a plurality of lenses (convex or concave) for collecting the light signals reflected by the object to be photographed and transmitting them to the image sensor.
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • the image sensor used in this application may be a new type of RGBIR Sensor.
  • the camera 193 may also include components such as an image signal processing (ISP) module, an infrared lamp drive control module, and an infrared fill light.
  • the traditional RGB Sensor can only receive light in the red, green, and blue bands.
  • the RGBIR Sensor can receive light in the red, green, blue, and infrared bands.
  • In a low-illuminance scene, if the light intensity in the red, green, and blue bands is very weak, the image quality obtained by the RGB Sensor will be poor. The RGBIR Sensor, however, can obtain not only light in the red, green, and blue bands but also light in the infrared band. The light in the infrared band provides better brightness information, while the light in the red, green, and blue bands provides a small amount of brightness information plus color information, so the image quality obtained with the RGBIR Sensor is better.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and signal processing of the terminal device 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store operating system, application program (such as camera application, WeChat application, etc.) codes and so on.
  • the storage data area can store data created during the use of the terminal device 100 (such as images and videos collected by a camera application) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • a non-volatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the code of the method provided in the embodiments of the present application may also be stored in an external memory.
  • the processor 110 may run the code stored in the external memory through the external memory interface 120.
  • the function of the sensor module 180 is described below.
  • the gyroscope sensor 180A may be used to determine the movement posture of the terminal device 100.
  • The angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyroscope sensor 180A.
  • the gyro sensor 180A can be used to detect the current motion state of the terminal device 100, such as shaking or static.
  • the acceleration sensor 180B can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally along three axes).
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the terminal device 100 emits infrared light to the outside through the light emitting diode.
  • the terminal device 100 uses a photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby.
  • the gyroscope sensor 180A (or the acceleration sensor 180B) may send the detected motion state information (such as angular velocity) to the processor 110.
  • the processor 110 determines whether it is currently in the hand-held state or the tripod state based on the motion state information (for example, when the angular velocity is not 0, it means that the terminal device 100 is in the hand-held state).
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • The touch sensor 180K is also called a “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch screen is composed of the touch sensor 180K and the display screen 194, which is also called a “touch screen”.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the terminal device 100, which is different from the position of the display screen 194.
  • the display screen 194 of the terminal device 100 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • the display screen 194 displays an interface of the camera application, such as a viewfinder interface.
  • the wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 151, the wireless communication module 152, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the terminal device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 151 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the terminal device 100.
  • the mobile communication module 151 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 151 can receive electromagnetic waves by the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 151 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 151 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 151 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 151 or other functional modules.
  • the wireless communication module 152 can provide wireless communication solutions applied to the terminal device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 152 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 152 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 152 can also receive the signal to be sent from the processor 110, perform frequency modulation and amplification, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the antenna 1 of the terminal device 100 is coupled with the mobile communication module 151, and the antenna 2 is coupled with the wireless communication module 152, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal device 100 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the terminal device 100 may receive a key 190 input, and generate a key signal input related to user settings and function control of the terminal device 100.
  • the terminal device 100 may use the motor 191 to generate a vibration notification (such as a vibration notification of an incoming call).
  • the indicator 192 in the terminal device 100 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 in the terminal device 100 is used to connect to a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal device 100.
  • the terminal device 100 may include more or fewer components than those shown in FIG. 1, which is not limited in the embodiment of the present application.
  • The present application provides a new type of RGBIR Sensor, which can realize independent light-sensing of visible light and infrared (IR) light, strip the IR signal out of the light-sensing result of the visible light signal, and improve the color accuracy of the sensor's light-sensing result.
  • Figures 2a and 2b show the light-sensing results of two exemplary 2X2-array-ordered RGBIR Sensors, and Figures 3a and 3b show the light-sensing results of two exemplary 4X4-array-ordered RGBIR Sensors.
  • In the figures, each grid represents one pixel: R represents a red pixel, G represents a green pixel, B represents a blue pixel, and IR represents an infrared pixel.
  • 2X2 array ordering means that the smallest repeating unit of the RGBIR four-component arrangement is a 2X2 array, and the 2X2 array unit contains all of the R, G, B, and IR components; 4X4 array ordering means that the smallest repeating unit of the RGBIR four-component arrangement is a 4X4 array, and the 4X4 array unit likewise contains all of the components. It should be understood that 2X2-array-ordered and 4X4-array-ordered RGBIR Sensors with other arrangements may also exist; the embodiment of the present application does not limit the arrangement of the RGBIR Sensor.
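For illustration, a mosaic with a 4X4 repeating unit can be separated into per-component planes as sketched below. The specific 4X4 layout here is invented for the example; as noted above, the application does not limit the arrangement.

```python
import numpy as np

# One possible 4X4 RGBIR repeating unit (invented for illustration).
PATTERN_4X4 = np.array([
    ["B", "G", "B", "IR"],
    ["G", "IR", "G", "R"],
    ["B", "G", "B", "IR"],
    ["G", "R", "G", "IR"],
])

def split_components(raw, pattern=PATTERN_4X4):
    """Separate a single-channel RGBIR mosaic into per-component planes
    (NaN where a component was not sampled); a demosaicing step would
    then interpolate the gaps."""
    h, w = raw.shape
    tiled = np.tile(pattern, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return {c: np.where(tiled == c, raw.astype(float), np.nan)
            for c in ("R", "G", "B", "IR")}
```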
  • Fig. 4 shows an exemplary structural diagram of an image acquisition device.
  • the image acquisition device consists of a lens 401, a new RGBIR Sensor 402, an image signal processor (ISP) 403, an image fusion module 404, an infrared lamp drive control module 405, and an infrared fill light 406.
  • the lens 401 is used to capture still images or videos, collect light signals reflected by the object to be photographed, and transmit the collected light signals to the image sensor.
  • the new RGBIR Sensor 402 generates original image data (visible light image data and infrared image data) of the object to be photographed according to the light signal.
  • the ISP module 403 is used to adjust the visible light exposure parameters and the intensity of the infrared fill light according to the original image of the object to be photographed until the convergence condition of the AE algorithm is met, and is also used to separate the visible light image and the infrared image from the original image of the object to be photographed.
  • the image fusion module 404 is used for fusing the separated visible light image and infrared image to obtain a target image.
  • the infrared lamp driving control module 405 is configured to control the infrared supplement light lamp 406 according to the intensity of the infrared supplement light lamp configured by the ISP module 403.
  • the infrared fill light 406 is used to provide infrared light.
  • the image acquisition device can adopt a single lens plus a single RGBIR Sensor, or a dual lens plus a dual RGBIR Sensor, or a single lens plus a beam splitter and a dual RGBIR Sensor structure.
  • the single-lens structure can save costs, while the single-Sensor structure can simplify the structure of the camera. This application does not specifically limit this.
  • FIG. 5 shows a schematic diagram of the hardware architecture of an independent exposure device.
  • the exposure control device includes: at least one central processing unit (Central Processing Unit, CPU), at least one memory, a microcontroller (Microcontroller Unit, MCU), a receiving interface, a transmitting interface, and the like.
  • the exposure control device 1600 further includes: a dedicated video or graphics processor, and a graphics processing unit (Graphics Processing Unit, GPU), etc.
  • the CPU may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor; optionally, the CPU may be a processor group composed of multiple processors that are coupled to each other through one or more buses.
  • the exposure control can be partly completed by software codes running on a general-purpose CPU or MCU, and partly completed by hardware logic circuits; or it can also be completely completed by software codes running on a general-purpose CPU or MCU.
  • the memory 302 may be a non-volatile memory, such as an embedded multimedia card (Embedded Multi Media Card, EMMC), universal flash storage (Universal Flash Storage, UFS), or read-only memory (Read-Only Memory, ROM), or other types of static storage devices that can store static information and instructions; it may also be a volatile memory, such as random access memory (Random Access Memory, RAM), or other types of dynamic storage devices that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other computer-readable storage medium that can be used to carry or store program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the receiving interface may be a data input interface of the processor chip.
  • the independent exposure device further includes: a pixel array.
  • the pixel array of the independent exposure device includes at least two types of pixels; that is, the independent exposure device can be a sensor that includes a control unit or a logic control circuit, or in other words, a sensor that independently controls exposure.
  • the independent exposure device may be an RGBIR sensor, an RGBW sensor, an RCCB sensor, etc., which independently control exposure.
  • In one classification, visible light pixels are regarded as one type of pixel, that is, R pixels, G pixels, and B pixels are classified as one type, while IR pixels, W pixels, or C pixels are regarded as another type of pixel.
  • RGBIR sensors include two types of pixels: visible light pixels and IR pixels
  • RGBW sensors include two types of pixels: visible light pixels and W pixels
  • RCCB sensors include two types of pixels: visible light pixels and C pixels.
  • In another classification, each pixel component is considered to be a separate type of pixel.
  • the RGBIR sensor includes four types of pixels: R, G, B, and IR
  • the RGBW sensor includes four types of pixels: R, G, B, and W
  • the RCCB sensor includes three types of pixels: R, B and C.
  • the sensor is an RGBIR sensor.
  • the RGBIR sensor can realize the independent exposure of visible light pixels and IR pixels, and can also realize independent exposure of the four components of R, G, B, and IR.
  • the at least two control units include: a first control unit and a second control unit; the first control unit is used to control the exposure start time of the visible light pixels; the second control unit Used to control the exposure start time of IR pixels.
  • At least two control units include: a first control unit, a second control unit, a third control unit, and a fourth control unit; the first control unit is used for Control the exposure start time of R pixels; the second control unit is used to control the exposure start time of G pixels; the third control unit is used to control the exposure start time of B pixels; the fourth control unit is used to control the exposure of IR pixels Start time.
  • the sensor is an RGBW sensor.
  • the RGBW sensor can realize the independent exposure of visible light pixels and W pixels, and can also realize independent exposure of the four components of R, G, B, and W.
  • the at least two control units include: a first control unit and a second control unit; the first control unit is used to control the exposure start time of the visible light pixels; the second control unit Used to control the exposure start time of W pixels.
  • At least two control units include: a first control unit, a second control unit, a third control unit, and a fourth control unit; the first control unit is used for Control the exposure start time of the R pixel; the second control unit is used to control the exposure start time of the G pixel; the third control unit is used to control the exposure start time of the B pixel; the fourth control unit is used to control the exposure of the W pixel Start time.
  • the sensor is an RCCB sensor.
  • the RCCB sensor can realize the independent exposure of visible light pixels and C pixels, and can also realize independent exposure of the three components of R, B, and C.
  • the at least two control units include: a first control unit and a second control unit; the first control unit is used to control the exposure start time of the visible light pixels; the second control unit Used to control the exposure start time of C pixels.
  • At least two control units include: a first control unit, a second control unit and a third control unit; the first control unit is used to control the start of exposure of the R pixel Time; the second control unit is used to control the exposure start time of the B pixel; the third control unit is used to control the exposure start time of the C pixel.
  • the independent exposure device can also control the exposure time of at least two types of pixels to meet a preset ratio based on at least two control units.
  • For an RGBIR sensor, the exposure times of the visible light pixels and the IR pixels are controlled to meet the preset ratio based on the first control unit and the second control unit; or the exposure times of the R, G, B, and IR pixels are controlled to meet the preset ratio based on the first, second, third, and fourth control units.
  • For an RGBW sensor, the exposure times of the visible light pixels and the W pixels are controlled to meet the preset ratio based on the first control unit and the second control unit; or the exposure times of the R, G, B, and W pixels are controlled to meet the preset ratio based on the first, second, third, and fourth control units. For an RCCB sensor, the exposure times of the visible light pixels and the C pixels are controlled to meet the preset ratio based on the first control unit and the second control unit, or the exposure times of the R, B, and C pixels are controlled to meet the preset ratio based on the first, second, and third control units.
  • the independent exposure device further includes: an exposure end control unit for uniformly controlling the exposure end time of all pixels in the pixel array.
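  • As a rough illustration of this control scheme (not part of the patent text), the following Python sketch computes per-type exposure start times so that two pixel types end exposure together at a common frame end while their exposure times meet a preset ratio; the function and parameter names are hypothetical.

```python
def plan_independent_exposure(frame_end_us: int, ir_exposure_us: int, ratio: float):
    """Plan exposure start times so visible and IR pixels end exposure together.

    ratio: preset ratio of visible-light exposure time to IR exposure time.
    Returns (visible_start_us, ir_start_us) relative to the frame start.
    A sketch only; a real sensor programs this via vendor-specific registers.
    """
    visible_exposure_us = int(ir_exposure_us * ratio)
    # The exposure end control unit ends all pixels at frame_end_us, so each
    # pixel type starts its exposure "its own duration" before that moment.
    visible_start_us = frame_end_us - visible_exposure_us
    ir_start_us = frame_end_us - ir_exposure_us
    return visible_start_us, ir_start_us

# Example: IR pixels expose for 8 ms and visible pixels for twice as long
# (ratio 2.0), both ending at a common frame end 33 ms after frame start.
print(plan_independent_exposure(33_000, 8_000, 2.0))  # (17000, 25000)
```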
  • FIG. 6 is a flowchart of an exemplary image fusion method embodiment provided by an embodiment of the application. As shown in FIG. 6, the method in this embodiment may be executed by the above-mentioned terminal device, image acquisition device, or processor.
  • the image fusion method may include:
  • Step 601 Acquire first original image data, which is collected by the image sensor based on the initial visible light exposure parameters and the light intensity of the infrared fill light.
  • the first original image data includes visible light image data and infrared image data.
  • the traditional sensor's exposure algorithm can only configure parameters for a single visible light scene or infrared scene, so the original image data it collects is three-dimensional RGB information or one-dimensional IR information.
  • An ordinary RGBIR Sensor's exposure algorithm can configure visible light exposure parameters for visible light scenes or configure infrared light intensity for infrared scenes, so the collected original image data includes four-dimensional RGBIR information, that is, three-dimensional RGB components (visible light image data) and a one-dimensional IR component (infrared image data).
  • With an ordinary RGBIR Sensor, however, the IR component is included in each pixel, and on some pixels the R component and the IR component, the G component and the IR component, or the B component and the IR component are mixed together, so the two cannot be completely separated.
  • this application uses a new type of RGBIR Sensor (as described in the embodiments shown in Figure 4 and Figure 5), whose exposure algorithm can configure visible light exposure parameters for visible light scenes and configure the intensity of the infrared fill light for infrared scenes, so the collected original image data includes four-dimensional RGBIR information, that is, three-dimensional RGB components (visible light image data) and a one-dimensional IR component (infrared image data).
  • the R component, G component, B component and IR component are completely separated on all pixels, which is the premise for the realization of the image fusion method provided by this application.
  • the resolution of the obtained visible light image and the infrared image is smaller than that of the original image.
  • Using RGBIR Sensors to obtain lower-resolution visible light and infrared images can omit the super-resolution calculation step, which not only avoids the image edge blurring caused by super-resolution algorithms but also simplifies the image processing flow. It should be noted that other methods can also be used to improve the resolution of the visible light image and the infrared image, such as a super-resolution algorithm, which is not specifically limited in this application.
  • the initial visible light exposure parameters and the intensity of the infrared fill light are usually obtained based on previous settings, historical experience data, or default settings, etc., so they may not fully meet the requirements of the current shooting scene.
  • This application will adjust the aforementioned two types of parameters through the following steps to obtain a high-quality target image.
  • Step 602 Obtain the brightness of the visible light image according to the first original image data.
  • the terminal device can separate the visible light image data from the first original image data, obtain the visible light image according to the visible light image data, and then divide the visible light image into image blocks, for example, into m×n or m×m image blocks according to a fixed size.
  • For each image block, obtain the original image data of the image block (the part of the first original image data corresponding to the image block), calculate the brightness of each pixel of the image block according to its R, Gr, Gb, and B values in the original image data (the brightness is usually calculated mainly from the Gr and Gb components), and average the brightness of these pixels to obtain the brightness of the image block.
  • the terminal device can also use other pixel granularities, for example, take the R, Gr, Gb, and B values of every pixel to calculate the brightness, or of one out of every two pixels, or of one out of every n pixels, and so on.
  • the terminal device can also obtain the brightness of each image block through the histogram of the visible light image.
  • the terminal device can set a weight for different image blocks, for example, set a higher weight for an image block of interest, and then perform a weighted average of the brightness of all image blocks (that is, multiply the brightness of each image block by the weight of that image block, add the results, and then calculate the average) to obtain the brightness of the visible light image.
  • this application can also use other methods to divide the visible light image into image blocks; other methods can also be used to calculate the brightness of each image block in the visible light image; other methods can also be used to set the weight of each image block, such as setting a higher weight for brighter image blocks; and other methods can also be used to calculate the brightness of the visible light image, such as directly averaging the brightness of each image block. This application does not specifically limit the above methods.
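  • As an illustration of this step (not part of the patent text), the following Python sketch computes the brightness of a visible light image as a weighted average of image block brightness; the block size, the weights, and the use of the green components as a brightness proxy are assumptions consistent with the description above.

```python
import numpy as np

def visible_light_brightness(green: np.ndarray, weights: np.ndarray,
                             blocks: tuple = (4, 4)) -> float:
    """Weighted-average brightness of a visible light image over its blocks.

    green: 2-D array of Gr/Gb samples (brightness approximated from the green
    components, as described above). weights: per-block weights with shape
    equal to `blocks`; a higher weight marks a block of interest.
    """
    m, n = blocks
    h, w = green.shape
    bh, bw = h // m, w // n
    block_means = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            block_means[i, j] = green[i * bh:(i + 1) * bh,
                                      j * bw:(j + 1) * bw].mean()
    # Multiply each block brightness by its weight, sum, then normalize.
    return float((block_means * weights).sum() / weights.sum())

# Example: uniform weights reduce to a plain average of the block brightness.
img = np.random.default_rng(0).integers(0, 255, (480, 640)).astype(float)
print(visible_light_brightness(img, np.ones((4, 4))))
```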
  • Step 603 Adjust the visible light exposure parameters according to the first difference.
  • the first difference is the difference between the brightness of the visible light image and the preset target visible light image brightness.
  • Setting the target visible light image brightness provides a convergent condition for the automatic exposure (AE) algorithm.
  • the target visible light image brightness is the optimal brightness that the visible light image is expected to achieve. This value can be set based on experience or through big-data statistics; there is no specific limitation on this. Generally, the greater the target visible light image brightness is set, the higher the brightness of the visible light image obtained after the adjustment of the visible light exposure parameters is completed.
  • the adjustment of the visible light exposure parameters by the terminal device can be a process repeated many times: if one adjustment of the visible light exposure parameters does not reach the convergence condition of the AE algorithm, the visible light exposure parameters need to be adjusted continuously until the convergence condition of the AE algorithm is reached.
  • the convergence condition of the AE algorithm means that the absolute value of the first difference is within the set first range.
  • for example, the absolute value of the first difference is less than a set threshold.
  • the terminal device can adjust the visible light exposure parameters by the following method: determine whether the exposure parameter allocation strategy set includes the exposure parameter allocation strategy corresponding to the first difference; if the exposure parameter allocation strategy set includes the exposure corresponding to the first difference Parameter allocation strategy, adjust the visible light exposure parameters according to the exposure parameter allocation strategy corresponding to the first difference; if the exposure parameter allocation strategy set does not include the exposure parameter allocation strategy corresponding to the first difference, add a new exposure parameter allocation Strategy, and adjust the visible light exposure parameters according to the new exposure parameter allocation strategy.
  • the AE algorithm may include two exposure parameter allocation strategy sets, one is a common exposure parameter allocation strategy set, and the other is an extended exposure parameter allocation strategy set.
  • Table 1 shows an example of a common exposure parameter allocation strategy set. As shown in Table 1, the set includes three exposure parameter allocation strategies, corresponding to three coefficients, which are the true coefficient values related to exposure. Each type of exposure parameter allocation strategy may include one or more of the three components of the visible light exposure parameters: exposure time (IntTime), system gain (SysGain), and aperture (IrisFNO).
  • ExpoTime exposure time
  • SysGain system gain
  • IrisFNO aperture
  • the terminal device compares the first difference with the three coefficients in Table 1.
  • the three coefficients are arranged in ascending order. If the first difference is greater than or equal to coefficient 1 and less than or equal to coefficient 3, it can be preliminarily determined that the exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference.
  • the terminal device queries Table 1 for the coefficient corresponding to the first difference. If the brightness of the visible light image exceeds the target visible light image brightness by more than the tolerance value, one of the visible light exposure parameters (exposure time (IntTime), system gain (SysGain), or aperture (IrisFNO)) is adjusted to the value of the same parameter in the row above the entry where that coefficient is located.
  • If the brightness of the visible light image is lower than the target visible light image brightness by more than the tolerance value, one of the visible light exposure parameters (exposure time (IntTime), system gain (SysGain), or aperture (IrisFNO)) is adjusted to the value of the same parameter in the row below the entry where that coefficient is located.
  • the terminal device needs to add a new exposure parameter allocation strategy to Table 1 according to the set rules, and then adjust the visible light exposure parameters according to the new exposure parameter allocation strategy.
  • the aforementioned set rules may include: if the first difference is less than coefficient 1, each parameter value corresponding to coefficient 1 is reduced by a certain ratio and used as the parameter values corresponding to the first difference, and the entry composed of these parameter values and the first difference is added above coefficient 1 in Table 1; if the first difference is greater than coefficient 3, each parameter value corresponding to coefficient 3 is enlarged by a certain ratio and used as the parameter values corresponding to the first difference, and the entry composed of these parameter values and the first difference is added below coefficient 3 in Table 1. For example, if the first difference is greater than coefficient 3, the terminal device adds a row below coefficient 3 in Table 1 (difference 4, 60000, 2018, 0) and uses this set of values as the adjusted exposure time (IntTime), system gain (SysGain), and aperture (IrisFNO).
  • Similarly, if the first difference is less than coefficient 1, the terminal device adds a row above coefficient 1 in Table 1 (difference 0, 100, 512, 0) and uses this set of values as the adjusted exposure time (IntTime), system gain (SysGain), and aperture (IrisFNO).
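  • The lookup-or-extend behavior described above can be sketched as follows (Python, not part of the patent; the table rows and the scaling ratio are illustrative, and the tolerance-based row stepping is simplified).

```python
# Each row: (coefficient, IntTime, SysGain, IrisFNO), ascending by coefficient.
# The numeric values below are illustrative, not the patent's Table 1.
strategies = [
    (1, 100, 512, 0),
    (2, 200, 1024, 0),
    (3, 40000, 2018, 0),
]

def adjust_exposure(strategies, first_difference, too_bright, scale=1.5):
    """Return (IntTime, SysGain, IrisFNO) for a first difference.

    too_bright: True when the visible image brightness exceeds the target by
    more than the tolerance (step to the previous row), False when it is
    below the target by more than the tolerance (step to the next row).
    """
    coeffs = [row[0] for row in strategies]
    if coeffs[0] <= first_difference <= coeffs[-1]:
        # A corresponding strategy exists: locate the nearest coefficient,
        # then step one row up (smaller parameters) or down (larger ones).
        i = min(range(len(strategies)),
                key=lambda k: abs(strategies[k][0] - first_difference))
        i = max(i - 1, 0) if too_bright else min(i + 1, len(strategies) - 1)
        return strategies[i][1:]
    # No corresponding strategy: extend the table per the set rules.
    if first_difference > coeffs[-1]:
        new = (first_difference,) + tuple(int(v * scale) for v in strategies[-1][1:])
        strategies.append(new)      # added below the largest coefficient
    else:
        new = (first_difference,) + tuple(int(v / scale) for v in strategies[0][1:])
        strategies.insert(0, new)   # added above the smallest coefficient
    return new[1:]
```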
  • an example of an extended exposure parameter allocation strategy set is shown in Table 2.
  • the set includes five exposure parameter allocation strategies, corresponding to five coefficients, which are the true coefficient values related to exposure.
  • Each exposure parameter allocation strategy can include one or more of the three components of the visible light exposure parameters: exposure time (IntTime), system gain (SysGain), and aperture (IrisFNO), where the system gain can include Again, Dgain, and ISPDgain. The physical meaning of Again is the magnification of the analog signal in the Sensor, the physical meaning of Dgain is the magnification of the digital signal in the Sensor, and the physical meaning of ISPDgain is the magnification of the digital signal outside the Sensor.
  • Table 2 (columns: IntTime, Again, Dgain, IspDgain, IrisFNO):
    Coefficient 1: 100, 1024, 1024, 0, 0
    Coefficient 2: 200, 1024, 1024, 0, 0
    Coefficient 3: 40000, 1024, 1024, 0, 0
    Coefficient 4: 40000, 2048, 1024, 0, 0
    Coefficient 5: 40000, 4096, 1024, 0, 0
  • the terminal device compares the first difference with the five coefficients in Table 2.
  • the five coefficients are arranged in ascending order. If the first difference is greater than or equal to coefficient 1 and less than or equal to coefficient 5, it can be preliminarily determined that the exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference.
  • If the brightness of the visible light image exceeds the target visible light image brightness by more than the tolerance value, one of the visible light exposure parameters (exposure time (IntTime), Again, Dgain, ISPDgain, or IrisFNO) is adjusted to the value of the same parameter in the row above the entry where the coefficient is located; if the brightness of the visible light image is lower than the target visible light image brightness by more than the tolerance value, one of these parameters is adjusted to the value of the same parameter in the row below that entry.
  • For example, if the first difference is 2, the terminal device can directly read the set of values (200, 1024, 1024, 0, 0) corresponding to difference 2 as the adjusted exposure time (IntTime), Again, Dgain, ISPDgain, and aperture (IrisFNO); if the first difference is not listed, the terminal device queries Table 2 for the coefficient closest to the first difference (that is, calculates the difference between each coefficient listed in Table 2 and the first difference and takes the one with the smallest result) and reads the corresponding set of values as the adjusted exposure time (IntTime), Again, Dgain, ISPDgain, and aperture (IrisFNO).
  • the terminal device needs to add a new exposure parameter allocation strategy to Table 2 according to the set rules, and then adjust the visible light exposure parameters according to the new exposure parameter allocation strategy.
  • the aforementioned set rules may include: if the first difference is less than coefficient 1, each parameter value corresponding to coefficient 1 is reduced by a certain ratio and used as the parameter values corresponding to the first difference, and the entry composed of these parameter values and the first difference is added above coefficient 1 in Table 2; if the first difference is greater than coefficient 5, each parameter value corresponding to coefficient 5 is enlarged by a certain ratio and used as the parameter values corresponding to the first difference, and the entry composed of these parameter values and the first difference is added below coefficient 5 in Table 2. For example, if the first difference is greater than coefficient 5, the terminal device adds a row below coefficient 5 in Table 2 (difference 6, 60000, 4096, 1024, 0, 0) and uses this set of values as the adjusted exposure time (IntTime), Again, Dgain, ISPDgain, and aperture (IrisFNO). Similarly, if the first difference is less than coefficient 1, the terminal device adds a row above coefficient 1 in Table 2 (difference 0, 100, 512, 1024, 0, 0) and uses this set of values as the adjusted exposure time (IntTime), Again, Dgain, ISPDgain, and aperture (IrisFNO).
  • the terminal device can adjust these parameters in sequence according to the actual application scenario. For example, in a static scene, first adjust the exposure time, then the exposure gain, and finally the aperture size. Specifically, the terminal device can adjust the exposure time first; if the convergence condition of the AE algorithm cannot be satisfied after the exposure time is adjusted, the system gain can be adjusted; and if the convergence condition of the AE algorithm is still not satisfied after the system gain is adjusted, the aperture can then be adjusted.
  • a static scene means that both the target and the camera are stationary during shooting, or the target and the camera are moving uniformly at the same speed and in the same direction, so that the target and the camera are relatively stationary.
  • a moving scene means that the target or the camera moves at high speed during shooting; whether a speed is high or low is judged by the user based on experience.
  • the visible light exposure parameters can also be adjusted by other methods, which are not specifically limited in this application.
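  • A minimal sketch of this scene-dependent adjustment order follows (Python, not part of the patent; `ae_converged` and `adjust_param` are hypothetical placeholders for the convergence check and the actual parameter writes).

```python
def adjustment_order(static_scene: bool):
    """Order in which the visible light exposure parameters are tried.

    The description above specifies exposure time first in a static scene
    and (per the summary) exposure gain first in a moving scene.
    """
    if static_scene:
        return ["exposure_time", "system_gain", "aperture"]
    return ["system_gain", "exposure_time", "aperture"]

def converge_ae(ae_converged, adjust_param, static_scene=True):
    """Adjust parameters in order until the AE convergence condition holds."""
    for name in adjustment_order(static_scene):
        if ae_converged():
            return True
        adjust_param(name)  # placeholder for the actual register write
    return ae_converged()
```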
  • the terminal device's adjustment of the visible light exposure parameters can go in two directions: when the brightness of the visible light image is lower than the target visible light image brightness (the first difference is negative), increase one or more of the visible light exposure parameters (exposure time, aperture diameter, or exposure gain); when the brightness of the visible light image is higher than the target visible light image brightness (the first difference is positive), decrease one or more of the exposure time, aperture diameter, or exposure gain.
  • the terminal device may adjust the visible light exposure parameters only when it is determined that the first difference of N consecutive frames of visible light images does not meet the convergence condition of the AE algorithm.
  • On the basis that none of the first differences of the consecutive N frames of visible light images satisfies the convergence condition of the AE algorithm: if the brightness of the consecutive N frames of visible light images is lower than the target visible light image brightness (the first differences are all negative), the terminal device can increase one or more of the exposure time, aperture diameter, or exposure gain; if the brightness of the consecutive N frames of visible light images is higher than the target visible light image brightness (the first differences are all positive), the terminal device can reduce one or more of the exposure time, aperture diameter, or exposure gain. Since human eyes are more sensitive to overexposure, the N used when the first difference is positive is set smaller than the N used when the first difference is negative.
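  • The consecutive-frame check with asymmetric N can be sketched as follows (Python, not part of the patent; the N values and tolerance are illustrative).

```python
from collections import deque

class AeHysteresis:
    """Trigger an adjustment only after N consecutive non-converged frames.

    A smaller N is used for overexposure (positive first difference) than
    for underexposure, since human eyes are more sensitive to overexposure.
    The N values and tolerance below are illustrative, not from the patent.
    """
    def __init__(self, n_over=2, n_under=4, tolerance=8.0):
        self.n_over, self.n_under, self.tolerance = n_over, n_under, tolerance
        self.diffs = deque(maxlen=max(n_over, n_under))

    def update(self, first_difference: float) -> str:
        self.diffs.append(first_difference)
        recent = list(self.diffs)
        if (len(recent) >= self.n_over and
                all(d > self.tolerance for d in recent[-self.n_over:])):
            return "decrease_exposure"  # N consecutive frames too bright
        if (len(recent) >= self.n_under and
                all(d < -self.tolerance for d in recent[-self.n_under:])):
            return "increase_exposure"  # N consecutive frames too dark
        return "hold"
```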
  • the terminal device can ignore the infrared image part when performing steps 602 and 603.
  • the processing object is the visible light image data in the first original image data and the visible light image obtained from it; at this time, the light intensity of the infrared fill light does not affect the visible light image. The following steps begin to process the infrared image data and the infrared image obtained from it.
  • Step 604 Obtain the brightness of the infrared image according to the first original image data.
  • the terminal device can separate the infrared image data from the first original image data, obtain the infrared image according to the infrared image data, and then divide the infrared image into image blocks, for example, into m×n or m×m image blocks according to a fixed size.
  • For each image block, obtain the original image data of the image block (the part of the first original image data corresponding to the image block), calculate the brightness of each pixel of the image block according to the original image data of the image block, and average the brightness of these pixels to obtain the brightness of the image block.
  • the terminal device can also use other pixel granularities, for example, take the brightness of every pixel, or the brightness of one out of every two pixels, or the brightness of one out of every n pixels, and so on.
  • the terminal device can also obtain the brightness of each image block through the histogram of the infrared image.
  • the terminal device can set a weight for different image blocks, for example, set a higher weight for an image block of interest, and then perform a weighted average of the brightness of all image blocks (that is, multiply the brightness of each image block by the weight of that image block, add the results, and then calculate the average) to obtain the brightness of the infrared image.
  • this application can also use other methods to divide the infrared image into image blocks; other methods can also be used to calculate the brightness of each image block in the infrared image; other methods can also be used to set the weight of each image block, such as setting a higher weight for brighter image blocks; and other methods can also be used to calculate the brightness of the infrared image, such as directly averaging the brightness of each image block. This application does not specifically limit the above methods.
  • Step 605 Adjust the light intensity of the infrared supplement light lamp according to the second difference value.
  • the second difference is the difference between the brightness of the infrared image and the preset target infrared image brightness.
  • Setting the brightness of the target infrared image provides a convergent condition for the AE algorithm.
  • the target infrared image brightness is the optimal brightness that the infrared image is expected to achieve. This value can be set based on experience or through big-data statistics; there is no specific limitation on this. Generally, the higher the target infrared image brightness is set, the higher the brightness of the infrared image obtained after the adjustment of the infrared fill light intensity is completed.
  • the adjustment of the intensity of the infrared fill light by the terminal device can be a process repeated multiple times: if one adjustment of the infrared fill light intensity does not reach the convergence condition of the AE algorithm, the light intensity of the infrared fill light needs to be adjusted continuously until the convergence condition of the AE algorithm is reached.
  • the convergence condition of the AE algorithm means that the absolute value of the second difference is within the first set range.
  • the convergence condition of the AE algorithm is the premise for the terminal device to determine whether to adjust the intensity of the infrared fill light: as long as the second difference does not meet the convergence condition of the AE algorithm, the terminal device can adjust the intensity of the infrared fill light.
  • the terminal device can adjust the intensity of the infrared fill light in two directions: when the brightness of the infrared image is lower than the target infrared image brightness, increase the light intensity of the infrared fill light; when the brightness of the infrared image is higher than the target infrared image brightness, reduce the light intensity of the infrared fill light.
  • the terminal device can adjust the intensity of the infrared fill light by adjusting the duty cycle of the pulse width modulation (PWM) of the infrared fill light: if the intensity of the infrared fill light needs to be increased, the PWM duty cycle of the infrared fill light can be reduced; if the intensity of the infrared fill light needs to be reduced, the PWM duty cycle of the infrared fill light can be increased.
  • the light intensity of the infrared supplement light lamp can also be adjusted by other methods, which is not specifically limited in this application.
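  • A minimal sketch of the PWM-based intensity adjustment (Python, not part of the patent): per the relationship stated above, intensity is increased by decreasing the duty cycle and decreased by increasing it, as with an active-low lamp driver; the step size is illustrative.

```python
def set_ir_fill_light(current_duty: float, increase: bool, step: float = 0.05) -> float:
    """Adjust the IR fill light intensity via its PWM duty cycle.

    Per the relationship described above, intensity is increased by
    decreasing the duty cycle and decreased by increasing it (as with an
    active-low lamp driver). The step size is illustrative.
    """
    duty = current_duty - step if increase else current_duty + step
    return min(max(duty, 0.0), 1.0)  # clamp to a valid duty cycle

# Example: brighten the IR fill light from a 0.50 duty cycle.
print(set_ir_fill_light(0.50, increase=True))  # 0.45
```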
  • the terminal device may adjust the light intensity of the infrared supplemental light only when it is determined that the second difference value of the continuous M frames of infrared images does not meet the convergence condition of the AE algorithm.
  • On the basis that none of the second differences of the consecutive M frames of infrared images satisfies the convergence condition of the AE algorithm: if the brightness of the consecutive M frames of infrared images is lower than the target infrared image brightness (the second differences are all negative), the terminal device can increase the intensity of the infrared fill light; if the brightness of the consecutive M frames of infrared images is higher than the target infrared image brightness (the second differences are all positive), the terminal device can reduce the light intensity of the infrared fill light. Since human eyes are more sensitive to overexposure, the M used when the second difference is positive is set smaller than the M used when the second difference is negative.
  • When the terminal device performs steps 604 and 605, its processing object is the infrared image data in the first original image data and the infrared image obtained from it. The adjustment of the intensity of the infrared fill light at this time is a completely independent process: it does not affect the already adjusted visible light exposure parameters, nor does it affect the image received by the RGB components in the original image data.
  • Step 606 Obtain second original image data, which is collected by the image sensor based on the adjusted visible light exposure parameters and the light intensity of the infrared fill light;
  • the terminal device can obtain the visible light exposure parameters and the light intensity of the infrared fill light that meet the convergence conditions of the AE algorithm.
  • the terminal device sets the RGBIR Sensor according to the visible light exposure parameters that meet the convergence condition of the AE algorithm, sets the infrared fill light according to the fill light intensity that meets the convergence condition of the AE algorithm, and then acquires the second original image data through the RGBIR Sensor.
  • Step 607 Obtain a target image by fusing the visible light image data and the infrared image data in the second original image data.
  • the terminal device separates the visible light image data and the infrared image data from the second original image data, obtains the visible light image according to the visible light image data, obtains the infrared image according to the infrared image data, and fuses the visible light image and the infrared image to obtain the target image. Since the visible light exposure parameters and the light intensity of the infrared fill light have been adaptively set through the foregoing steps, only one RGBIR image needs to be collected at this point to obtain a high-quality fused image; compared with the prior art, which needs to collect multiple RGBIR images to obtain a high-quality fused image, this simplifies the image fusion steps and improves efficiency.
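  • The patent does not prescribe a particular fusion algorithm; as one hedged illustration (not part of the patent text), the following Python sketch fuses the separated images by blending the visible image's luminance with the infrared image while keeping the visible image's color, with illustrative blend weights.

```python
import numpy as np

def fuse_images(visible_rgb: np.ndarray, infrared: np.ndarray) -> np.ndarray:
    """Fuse a visible image (H x W x 3, float 0..255) with an IR image (H x W).

    Takes color from the visible image and blends its luminance with the IR
    image, which carries the better brightness detail in low light. The
    0.5/0.5 blend weights are illustrative.
    """
    # Rec. 601 luma of the visible image.
    luma = (0.299 * visible_rgb[..., 0] + 0.587 * visible_rgb[..., 1]
            + 0.114 * visible_rgb[..., 2])
    fused_luma = 0.5 * luma + 0.5 * infrared
    # Rescale each channel by the fused-to-original luminance ratio so the
    # result keeps the visible image's chromaticity with the fused brightness.
    ratio = fused_luma / np.maximum(luma, 1e-6)
    return np.clip(visible_rgb * ratio[..., None], 0.0, 255.0)
```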
  • this application can use an ultra-high-resolution RGBIR Sensor to obtain an image and then downsample it to obtain a high-resolution visible light image and a high-resolution infrared image.
  • these two high-resolution images can be fused to obtain a target image with better signal-to-noise ratio and detail performance.
  • Alternatively, a high-resolution RGBIR Sensor can be used to obtain a lower-resolution image, and a super-resolution algorithm can then be applied to obtain a high-resolution visible light image and infrared image; fusing these two high-resolution images can also obtain a target image with better signal-to-noise ratio and detail performance.
  • this application can also use the traditional RGBIR Sensor, but it is necessary to add a light splitter between the lens and the RGBIR Sensor to separate the visible light component and the infrared component as much as possible.
  • Since the visible light image is obtained under visible light exposure parameters that satisfy the convergence condition of the AE algorithm, the infrared image is obtained under an infrared fill light intensity that satisfies the convergence condition of the AE algorithm, and the visible light image and the infrared image are obtained from original image data collected at the same time, the phenomenon of blurred images in the fused target image can be eliminated.
  • This application collects the first original image data based on the initial visible light exposure parameters and the light intensity of the infrared fill light, and adjusts the visible light exposure parameters and the light intensity of the infrared fill light respectively according to the visible light component and the infrared component in the first original image data. The two adjustment processes are independent of each other and completely unaffected by one another. Not only can the adjustment efficiency be improved, but a target image with better signal-to-noise ratio and detail performance can also be obtained, and the phenomenon of blurred images in the fused target image can be eliminated. Moreover, after the visible light exposure parameters and the light intensity of the infrared fill light have been adjusted, only one original image containing visible light and infrared light needs to be collected to obtain a higher-quality target image, which improves image acquisition efficiency.
  • FIG. 8 is a schematic structural diagram of an embodiment of an image acquisition device of this application.
  • the device of this embodiment may include: a first acquisition module 801, a first processing module 802, a second processing module 803, and a second acquisition module 804 and fusion module 805.
  • the first acquisition module 801 is configured to acquire first original image data, which is collected by the image sensor based on the initial visible light exposure parameters and the light intensity of the infrared fill light; the first original image data includes visible light image data and infrared image data;
  • the first processing module 802 is configured to obtain the brightness of the visible light image according to the first original image data; adjust the visible light exposure parameters according to the first difference, and the first A difference is the difference between the brightness of the visible light image and the preset target visible light image brightness;
  • the second processing module 803 is configured to obtain the brightness of the infrared image according to the first original image data; according to the second difference Value adjusts the light intensity of the infrared supplement light lamp, and the second difference is the difference between the brightness of the infrared image and the preset target infrared image brightness;
  • the second acquisition module 804 is configured to acquire second original image data, which is collected by the image sensor based on the adjusted visible light exposure parameters and the light intensity of the infrared fill light;
  • the fusion module 805 is configured to fuse the visible light image data and the infrared image data in the second original image data to obtain the target image.
  • the first processing module 802 is specifically configured to: if the absolute value of the first difference is not within a preset first range, adjust the visible light exposure parameters according to the first difference.
  • the first processing module 802 is specifically configured to determine whether the exposure parameter allocation strategy set includes the exposure parameter allocation strategy corresponding to the first difference; if the exposure parameter allocation strategy The set includes the exposure parameter allocation strategy corresponding to the first difference, then the visible light exposure parameter is adjusted according to the exposure parameter allocation strategy corresponding to the first difference; if the exposure parameter allocation strategy is in the set If the exposure parameter allocation strategy corresponding to the first difference is not included, a new exposure parameter allocation strategy is added, and the visible light exposure parameter is adjusted according to the new exposure parameter allocation strategy.
  • the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain; the first processing module 802 is specifically configured to: when the brightness of the visible light image is lower than the brightness of the target visible light image, increase one or more of the exposure time, the aperture diameter, or the exposure gain; when the brightness of the visible light image is higher than the brightness of the target visible light image, decrease one or more of the exposure time, the aperture diameter, or the exposure gain.
  • the second processing module 803 is specifically configured to: if the absolute value of the second difference is not within the preset second range, adjust the light intensity of the infrared fill light according to the second difference.
  • the second processing module 803 is specifically configured to increase the intensity of the infrared fill light when the brightness of the infrared image is lower than the brightness of the target infrared image; When the brightness of the infrared image is higher than the brightness of the target infrared image, the light intensity of the infrared supplement light lamp is reduced.
  • the second processing module 803 is specifically configured to increase the light intensity of the infrared supplement light lamp by reducing the duty cycle of the pulse width modulation PWM of the infrared supplement light lamp; The light intensity of the infrared supplement light lamp is reduced by increasing the PWM duty ratio of the infrared supplement light lamp.
  • the first processing module 802 is specifically configured to: if the absolute values of the first differences of N consecutive frames of visible light images are all not within the first range, adjust the visible light exposure parameters according to the first difference, where N is a positive integer.
  • the first processing module 802 is specifically configured to: when the brightness of the consecutive N frames of visible light images is lower than the brightness of the target visible light image, increase one or more of the exposure time, the aperture diameter, or the exposure gain; or, when the brightness of the consecutive N frames of visible light images is higher than the brightness of the target visible light image, reduce one or more of the exposure time, the aperture diameter, or the exposure gain.
  • the second processing module 803 is specifically configured to: if the absolute values of the second differences of the consecutive M frames of infrared images are all not within the preset second range, adjust the light intensity of the infrared fill light according to the second difference, where M is a positive integer.
  • the second processing module 803 is specifically configured to: when the brightness of the consecutive M frames of infrared images is lower than the brightness of the target infrared image, increase the light intensity of the infrared fill light; when the brightness of the consecutive M frames of infrared images is higher than the brightness of the target infrared image, reduce the light intensity of the infrared fill light.
  • the first processing module 802 is specifically configured to: in a static scene, first increase the exposure time, then increase the exposure gain, and finally increase the aperture diameter; in a moving scene, first increase the exposure gain, then increase the exposure time, and finally increase the aperture diameter; in a static scene, first decrease the exposure time, then decrease the exposure gain, and finally decrease the aperture diameter; in a moving scene, first decrease the exposure gain, then decrease the exposure time, and finally decrease the aperture diameter.
  • the image acquisition device may further include an image acquisition module.
  • the image acquisition module may be an image sensor.
  • the image acquisition module may also be a camera.
  • the device in this embodiment can be used to implement the technical solution of the method embodiment shown in FIG. 6, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the steps of the foregoing method embodiments may be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the processor can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware encoding processor, or executed and completed by a combination of hardware and software modules in the encoding processor.
  • the software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory mentioned in the above embodiments may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), and electrically available Erase programmable read-only memory (electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be random access memory (RAM), which is used as an external cache.
  • Many forms of RAM are available, such as static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synchlink DRAM, SLDRAM), and direct rambus random access memory (direct rambus RAM, DR RAM).
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to enable a computer device (a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disks, optical discs, and other media that can store program code.

Abstract

The present application provides an image acquisition method and apparatus. The image acquisition method of the present application includes: acquiring first original image data, the first original image data being collected by an image sensor based on initial visible light exposure parameters and the light intensity of an infrared fill light; obtaining the brightness of a visible light image according to the first original image data; adjusting the visible light exposure parameters according to a first difference, the first difference being the difference between the brightness of the visible light image and a preset target visible light image brightness; obtaining the brightness of an infrared image according to the first original image data; adjusting the light intensity of the infrared fill light according to a second difference, the second difference being the difference between the brightness of the infrared image and a preset target infrared image brightness; acquiring second original image data, the second original image data being collected by the image sensor based on the adjusted visible light exposure parameters and the light intensity of the infrared fill light; and obtaining a target image through fusion according to the second original image data. The present application can improve adjustment efficiency and obtain a target image with better signal-to-noise ratio and detail performance.

Description

Image acquisition method and apparatus
This application claims priority to Chinese patent application No. 202010093488.2, entitled "Image acquisition method and apparatus" and filed with the Chinese Patent Office on February 14, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to image processing technology, and in particular to an image acquisition method and apparatus.
Background
Under low-illumination conditions, the signal-to-noise ratio of a red-green-blue sensor (RGB Sensor) drops severely, details are obviously lost, and image quality deteriorates sharply. To solve this problem, visible light and infrared dual-channel fusion technology emerged. This technology collects a visible light image and an infrared image at the same time: brightness information with a higher signal-to-noise ratio and better detail can be extracted from the infrared image, while the visible light image contains the color information; fusing the two yields a target image with a better signal-to-noise ratio and better detail performance.
At present, the most commonly used solution is dual-Sensor fusion, in which one RGB Sensor collects the visible light image and another infrared Sensor collects the infrared image, and the two are fused to obtain the target image. Another solution is to collect an RGBIR image through a red-green-blue-infrared (RGB-infrared, RGBIR) Sensor, separate the RGB components and the IR component from the RGBIR image, and then fuse the two to obtain the target image.
However, in the above two solutions, the dual-Sensor structure of the former has a parallax problem and requires calibration, which increases the complexity and design difficulty of the fusion algorithm, while the latter requires additional light-splitting processing; since light splitting cannot completely separate the RGB components from the IR component, color restoration is difficult and blurred images may appear after fusion.
Summary
This application provides an image acquisition method and apparatus to improve adjustment efficiency and obtain a target image with better signal-to-noise ratio and detail performance.
In a first aspect, this application provides an image acquisition method, including: acquiring first original image data, the first original image data being collected by an image sensor based on initial visible light exposure parameters and the light intensity of an infrared fill light, and including visible light image data and infrared image data; obtaining the brightness of a visible light image according to the first original image data; adjusting the visible light exposure parameters according to a first difference, the first difference being the difference between the brightness of the visible light image and a preset target visible light image brightness; obtaining the brightness of an infrared image according to the first original image data; adjusting the light intensity of the infrared fill light according to a second difference, the second difference being the difference between the brightness of the infrared image and a preset target infrared image brightness; acquiring second original image data, the second original image data being collected by the image sensor based on the adjusted visible light exposure parameters and the light intensity of the infrared fill light; and fusing the visible light image data and the infrared image data in the second original image data to obtain a target image.
This application collects the first original image data based on the initial visible light exposure parameters and the light intensity of the infrared fill light, and adjusts the visible light exposure parameters and the light intensity of the infrared fill light respectively according to the visible light component and the infrared component in the first original image data. The two adjustment processes are independent of each other and completely unaffected by one another, which not only improves adjustment efficiency but also yields a target image with better signal-to-noise ratio and detail performance, eliminating blurred images in the fused target image. Moreover, after the visible light exposure parameters and the light intensity of the infrared fill light have been adjusted, only one original image containing visible light and infrared light needs to be collected to fuse a high-quality target image, improving the quality of the fused image while simplifying the fusion algorithm.
In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: if the absolute value of the first difference is not within a preset first range, adjusting the visible light exposure parameters according to the first difference.
This application adjusts the visible light exposure parameters only when the brightness of the visible light image is not within the tolerable range of the expected effect, which can improve adjustment efficiency.
In a possible implementation, before adjusting the visible light exposure parameters according to the first difference, the method further includes: determining whether an exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference; and adjusting the visible light exposure parameters according to the first difference includes: if the exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference, adjusting the visible light exposure parameters according to the exposure parameter allocation strategy corresponding to the first difference; if the exposure parameter allocation strategy set does not include an exposure parameter allocation strategy corresponding to the first difference, adding a new exposure parameter allocation strategy and adjusting the visible light exposure parameters according to the new exposure parameter allocation strategy.
In a possible implementation, the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain; and adjusting the visible light exposure parameters according to the first difference includes: when the brightness of the visible light image is lower than the target visible light image brightness, increasing one or more of the exposure time, aperture diameter, or exposure gain; when the brightness of the visible light image is higher than the target visible light image brightness, decreasing one or more of the exposure time, aperture diameter, or exposure gain.
For a visible light image whose brightness is too high, this application decreases one or more of its exposure time, aperture diameter, or exposure gain, and for a visible light image whose brightness is too low, it increases one or more of them, which can improve the adjustment efficiency of the visible light exposure parameters and make the adjustment match the actual brightness requirement.
In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: if the absolute value of the second difference is not within a preset second range, adjusting the light intensity of the infrared fill light according to the second difference.
This application adjusts the light intensity of the infrared fill light only when the brightness of the infrared image is not within the tolerable range of the expected effect, which can improve adjustment efficiency.
In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: when the brightness of the infrared image is lower than the target infrared image brightness, increasing the light intensity of the infrared fill light; when the brightness of the infrared image is higher than the target infrared image brightness, decreasing the light intensity of the infrared fill light.
For an infrared image whose brightness is too high, this application decreases the light intensity of the infrared fill light, and for an infrared image whose brightness is too low, it increases the light intensity of the infrared fill light, which can improve the adjustment efficiency of the infrared fill light intensity and make the adjustment match the actual brightness requirement.
In a possible implementation, increasing the light intensity of the infrared fill light includes: increasing the light intensity of the infrared fill light by decreasing the duty cycle of the pulse width modulation (PWM) of the infrared fill light; and decreasing the light intensity of the infrared fill light includes: decreasing the light intensity of the infrared fill light by increasing the PWM duty cycle of the infrared fill light.
In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: if the absolute values of the first differences of N consecutive frames of visible light images are all not within the first range, adjusting the visible light exposure parameters according to the first difference, where N is a positive integer.
This application adjusts the visible light exposure parameters only after determining that the brightness of multiple consecutive frames of visible light images is not within the tolerable range of the expected effect, which can avoid single-frame jumps.
In a possible implementation, adjusting the visible light exposure parameters according to the first difference includes: when the brightness of N consecutive frames of visible light images is all lower than the target visible light image brightness, increasing one or more of the exposure time, aperture diameter, or exposure gain; or, when the brightness of N consecutive frames of visible light images is all higher than the target visible light image brightness, decreasing one or more of the exposure time, aperture diameter, or exposure gain.
In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: if the absolute values of the second differences of M consecutive frames of infrared images are all not within the preset second range, adjusting the light intensity of the infrared fill light according to the second difference, where M is a positive integer.
This application adjusts the light intensity of the infrared fill light only after determining that the brightness of multiple consecutive frames of infrared images is not within the tolerable range of the expected effect, which can avoid single-frame jumps.
In a possible implementation, adjusting the light intensity of the infrared fill light according to the second difference includes: when the brightness of M consecutive frames of infrared images is all lower than the target infrared image brightness, increasing the light intensity of the infrared fill light; when the brightness of M consecutive frames of infrared images is all higher than the target infrared image brightness, decreasing the light intensity of the infrared fill light.
In a possible implementation, increasing one or more of the exposure time, aperture diameter, or exposure gain includes: in a static scene, first increasing the exposure time, then the exposure gain, and finally the aperture diameter; in a moving scene, first increasing the exposure gain, then the exposure time, and finally the aperture diameter; and decreasing one or more of the exposure time, aperture diameter, or exposure gain includes: in a static scene, first decreasing the exposure time, then the exposure gain, and finally the aperture diameter; in a moving scene, first decreasing the exposure gain, then the exposure time, and finally the aperture diameter.
By specifying a corresponding parameter adjustment order for different scenes, this application can reduce the number of parameter adjustments and improve adjustment efficiency.
In a possible implementation, an ultra-high-resolution RGBIR Sensor is used to obtain high-resolution visible light and infrared images, and the two high-resolution images are fused to obtain a target image with good signal-to-noise ratio and detail performance.
In a possible implementation, a high-resolution RGBIR Sensor is used to obtain a lower-resolution image, a super-resolution algorithm is then applied to the low-resolution image to obtain high-resolution visible light and infrared images, and the two high-resolution images are fused to obtain a target image with good signal-to-noise ratio and detail performance.
In a second aspect, this application provides an image acquisition apparatus, including:
a first acquisition module, configured to acquire first original image data, the first original image data being collected by an image sensor based on initial visible light exposure parameters and the light intensity of an infrared fill light, and including visible light image data and infrared image data; a first processing module, configured to obtain the brightness of a visible light image according to the first original image data and adjust the visible light exposure parameters according to a first difference, the first difference being the difference between the brightness of the visible light image and a preset target visible light image brightness; a second processing module, configured to obtain the brightness of an infrared image according to the first original image data and adjust the light intensity of the infrared fill light according to a second difference, the second difference being the difference between the brightness of the infrared image and a preset target infrared image brightness; a second acquisition module, configured to acquire second original image data, the second original image data being collected by the image sensor based on the adjusted visible light exposure parameters and the light intensity of the infrared fill light; and a fusion module, configured to fuse the visible light image data and the infrared image data in the second original image data to obtain a target image.
In a possible implementation, the first processing module is specifically configured to adjust the visible light exposure parameters according to the first difference if the absolute value of the first difference is not within a preset first range.
In a possible implementation, the first processing module is specifically configured to determine whether an exposure parameter allocation strategy set includes an exposure parameter allocation strategy corresponding to the first difference; if it does, adjust the visible light exposure parameters according to the exposure parameter allocation strategy corresponding to the first difference; and if it does not, add a new exposure parameter allocation strategy and adjust the visible light exposure parameters according to the new exposure parameter allocation strategy.
In a possible implementation, the visible light exposure parameters include one or more of exposure time, aperture diameter, or exposure gain; the first processing module is specifically configured to increase one or more of the exposure time, aperture diameter, or exposure gain when the brightness of the visible light image is lower than the target visible light image brightness, and decrease one or more of them when the brightness of the visible light image is higher than the target visible light image brightness.
In a possible implementation, the second processing module is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute value of the second difference is not within a preset second range.
In a possible implementation, the second processing module is specifically configured to increase the light intensity of the infrared fill light when the brightness of the infrared image is lower than the target infrared image brightness, and decrease it when the brightness of the infrared image is higher than the target infrared image brightness.
In a possible implementation, the second processing module is specifically configured to increase the light intensity of the infrared fill light by decreasing the duty cycle of the pulse width modulation (PWM) of the infrared fill light, and decrease the light intensity of the infrared fill light by increasing the PWM duty cycle of the infrared fill light.
In a possible implementation, the first processing module is specifically configured to adjust the visible light exposure parameters according to the first difference if the absolute values of the first differences of N consecutive frames of visible light images are all not within the first range, where N is a positive integer.
In a possible implementation, the first processing module is specifically configured to increase one or more of the exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is all lower than the target visible light image brightness; or decrease one or more of the exposure time, aperture diameter, or exposure gain when the brightness of N consecutive frames of visible light images is all higher than the target visible light image brightness.
In a possible implementation, the second processing module is specifically configured to adjust the light intensity of the infrared fill light according to the second difference if the absolute values of the second differences of M consecutive frames of infrared images are all not within the preset second range, where M is a positive integer.
In a possible implementation, the second processing module is specifically configured to increase the light intensity of the infrared fill light when the brightness of M consecutive frames of infrared images is all lower than the target infrared image brightness, and decrease it when the brightness of M consecutive frames of infrared images is all higher than the target infrared image brightness.
In a possible implementation, the first processing module is specifically configured to: in a static scene, first increase the exposure time, then the exposure gain, and finally the aperture diameter; in a moving scene, first increase the exposure gain, then the exposure time, and finally the aperture diameter; in a static scene, first decrease the exposure time, then the exposure gain, and finally the aperture diameter; and in a moving scene, first decrease the exposure gain, then the exposure time, and finally the aperture diameter.
第三方面,本申请提供一种图像获取装置,包括:一个或多个处理器,被配置为调用存储在存储器中的程序指令,以执行如下步骤:通过图像传感器获取第一原始图像数据,该第一原始图像数据为图像传感器基于初始的可见光曝光参数和红外补光灯的光强采集 得到的,该第一原始图像数据包括可见光图像数据和红外图像数据;根据第一原始图像数据获取可见光图像的亮度;根据第一差值对可见光曝光参数进行调整,该第一差值为可见光图像的亮度与预设的目标可见光图像亮度之间的差值;根据第一原始图像数据获取红外图像的亮度;根据第二差值对红外补光灯的光强进行调整,该第二差值为红外图像的亮度与预设的目标红外图像亮度之间的差值;通过图像传感器获取第二原始图像数据,该第二原始图像数据为图像传感器基于调整后的可见光曝光参数和红外补光灯的光强采集得到的;根据第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。
在一种可能的实现方式中,该处理器具体用于若第一差值的绝对值没有在预设的第一范围内,则根据第一差值对可见光曝光参数进行调整。
在一种可能的实现方式中,该处理器具体用于确定曝光参数分配策略集合中是否包括与第一差值对应的曝光参数分配策略;若曝光参数分配策略集合中包括与第一差值对应的曝光参数分配策略,则按照与第一差值对应的曝光参数分配策略调整可见光曝光参数;若曝光参数分配策略集合中不包括与第一差值对应的曝光参数分配策略,则增加新的曝光参数分配策略,并根据新的曝光参数分配策略调整可见光曝光参数。
在一种可能的实现方式中,可见光曝光参数包括曝光时间、光圈直径或曝光增益中的一个或多个;该处理器具体用于当可见光图像的亮度比目标可见光图像亮度低时,增大曝光时间、光圈直径或曝光增益中的一个或多个;当可见光图像的亮度比目标可见光图像亮度高时,减小曝光时间、光圈直径或曝光增益中的一个或多个。
在一种可能的实现方式中,该处理器具体用于若第二差值的绝对值没有在预设的第二范围内,则根据第二差值对红外补光灯的光强进行调整。
在一种可能的实现方式中,该处理器具体用于当红外图像的亮度比目标红外图像亮度低时,增大红外补光灯的光强;当红外图像的亮度比目标红外图像亮度高时,减小红外补光灯的光强。
在一种可能的实现方式中,该处理器具体用于通过减小红外补光灯的脉冲宽度调制PWM的占空比增大红外补光灯的光强;通过增大红外补光灯的PWM的占空比减小红外补光灯的光强。
在一种可能的实现方式中,该处理器具体用于若连续N帧可见光图像的第一差值的绝对值均没有在第一范围内,则根据第一差值对可见光曝光参数进行调整,N为正整数。
在一种可能的实现方式中,该处理器具体用于当连续N帧可见光图像的亮度均比目标可见光图像亮度低时,增大曝光时间、光圈直径或曝光增益中的一个或多个;或者,当连续N帧可见光图像的亮度均比目标可见光图像亮度高时,减小曝光时间、光圈直径或曝光增益中的一个或多个。
在一种可能的实现方式中,该处理器具体用于若连续M帧红外图像的第二差值的绝对值均没有在预设的第二范围内,则根据第二差值对红外补光灯的光强进行调整,M为正整数。
在一种可能的实现方式中,该处理器具体用于当连续M帧红外图像的亮度均比目标红外图像亮度低时,增大红外补光灯的光强;当连续M帧红外图像的亮度均比目标红外图像亮度高时,减小红外补光灯的光强。
在一种可能的实现方式中,该处理器具体用于在静止场景下,先增加曝光时间,再增 加曝光增益,最后增加光圈直径;在运动场景下,先增加曝光增益,再增加曝光时间,最后增加光圈直径;在静止场景下,先减小曝光时间,再减小曝光增益,最后减小光圈直径;在运动场景下,先减小曝光增益,再减小曝光时间,最后减小光圈直径。
第四方面,本申请提供一种终端设备,包括:图像传感器,用于基于初始的可见光曝光参数和红外补光灯的光强采集第一原始图像数据,所述第一原始图像数据包括可见光图像数据和红外图像数据;处理器,被配置为调用存储器中的软件指令,以执行如下步骤:根据所述第一原始图像数据获取可见光图像的亮度;根据第一差值对所述可见光曝光参数进行调整,所述第一差值为所述可见光图像的亮度与预设的目标可见光图像亮度之间的差值;根据所述第一原始图像数据获取红外图像的亮度;根据第二差值对所述红外补光灯的光强进行调整,所述第二差值为所述红外图像的亮度与预设的目标红外图像亮度之间的差值;所述图像传感器,还用于基于调整后的可见光曝光参数和红外补光灯的光强采集第二原始图像数据;所述处理器,还用于根据所述第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。
在一种可能的实现方式中,所述处理器还用于执行上述第一方面中除第一种可能的实现方式以外的其他任一项所述的方法。
第五方面,本申请提供一种计算机可读存储介质,包括计算机程序,所述计算机程序在计算机或处理器上被执行时,使得所述计算机或处理器执行上述第一方面中任一项所述的方法。
第六方面,本申请提供一种计算机程序产品,当所述计算机程序产品被计算机或处理器执行时,实现如上述第一方面中任一项所述的方法。
第七方面,本申请提供一种芯片,包括处理器和存储器,所述存储器用于存储计算机程序,所述处理器用于调用并运行所述存储器中存储的计算机程序,以执行如上述第一方面中任一项所述的方法。
附图说明
图1示例性的示出了一种终端设备100的结构示意图;
图2a和2b示出两种示例性的2X2阵列排序的RGBIR Sensor的感光结果;
图3a和3b示出两种示例性的4X4阵列排序的RGBIR Sensor的感光结果;
图4示出了图像获取装置的一个示例性的结构示意图;
图5示出了一种独立曝光的装置的硬件架构示意图;
图6为本申请实施例提供的一种示例性的图像融合方法实施例的流程图;
图7示出一种示例性的传统的RGBIR Sensor的感光结果;
图8为本申请图像获取装置实施例的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请中的附图,对本申请中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书实施例和权利要求书及附图中的术语“第一”、“第二”等仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
应当理解,在本申请中,“至少一个(项)”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,用于描述关联对象的关联关系,表示可以存在三种关系,例如,“A和/或B”可以表示:只存在A,只存在B以及同时存在A和B三种情况,其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达,是指这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b或c中的至少一项(个),可以表示:a,b,c,“a和b”,“a和c”,“b和c”,或“a和b和c”,其中a,b,c可以是单个,也可以是多个。
本申请的终端设备又可称之为用户设备(user equipment,UE),可以部署在陆地上,包括室内或室外、手持或车载;也可以部署在水面上(如轮船等);还可以部署在空中(例如飞机、气球和卫星上等)。终端设备可以是手机(mobile phone)、平板电脑(pad)、带无线收发功能的电脑、虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、监控设备、智能大屏、智能电视、远程医疗(remote medical)中的无线设备或智慧家庭(smart home)中的无线设备等,本申请对此不作限定。
图1示例性的示出了一种终端设备100的结构示意图。终端设备100可以包括处理器110,外部存储器接口120,内部存储器121,USB接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块151,无线通信模块152,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及SIM卡接口195等。其中传感器模块180可以包括陀螺仪传感器180A,加速度传感器180B,接近光传感器180G、指纹传感器180H,触摸传感器180K、转轴传感器180M(当然,终端设备100还可以包括其它传感器,比如温度传感器,压力传感器、距离传感器、磁传感器、环境光传感器、气压传感器、骨传导传感器等,图中未示出)。
可以理解的是,本申请实施例示意的结构并不构成对终端设备100的具体限定。在本申请另一些实施例中,终端设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是终端设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
当处理器110集成不同的器件,比如集成CPU和GPU时,CPU和GPU可以配合执行本申请实施例提供的方法,比如该方法中的部分算法由CPU执行,另一部分算法由GPU执行,以得到较快的处理效率。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,终端设备100可以包括1个或N个显示屏194,N为大于1的正整数。
摄像头193(前置摄像头或者后置摄像头,或者一个摄像头既可作为前置摄像头,也可作为后置摄像头)用于捕获静态图像或视频。通常,摄像头193可以包括感光元件比如镜头组和图像传感器,其中,镜头组包括多个透镜(凸透镜或凹透镜),用于采集待拍摄物体反射的光信号,并将采集的光信号传递给图像传感器。图像传感器根据所述光信号生成待拍摄物体的原始图像。本申请所采用的图像传感器可以是一种新型RGBIR Sensor。此外摄像头193还可以包括图像信号处理(image signal processing,ISP)模块、红外灯驱动控制模块以及红外补光灯等部件。传统的RGB Sensor只能接收红、绿、蓝波段的光线。而RGBIR Sensor可以接收红、绿、蓝、红外波段的光线。在低照度场景中,如果红、绿、蓝波段的光强很弱,会导致利用RGB Sensor获取的图像质量很差。而RGBIR Sensor在低照度场景中,除了可以获取红、绿、蓝波段的光线,还可以获取红外波段的光线,由红外波段的光线提供更好的亮度信息,由红、绿、蓝波段的光线提供少量的亮度信息和色彩信息,这样利用RGBIR Sensor获取的图像质量也更好。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行终端设备100的各种功能应用以及信号处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,应用程序(比如相机应用,微信应用等)的代码等。存储数据区可存储终端设备100使用过程中所创建的数据(比如相机应用采集的图像、视频等)等。
此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
当然,本申请实施例提供的方法的代码还可以存储在外部存储器中。这种情况下,处理器110可以通过外部存储器接口120运行存储在外部存储器中的代码。
下面介绍传感器模块180的功能。
陀螺仪传感器180A,可以用于确定终端设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180A确定终端设备100围绕三个轴(即,x,y和z轴)的角速度。即陀螺仪传感器180A可以用于检测终端设备100当前的运动状态,比如抖动还是静止。
加速度传感器180B可检测终端设备100在各个方向上(一般为三轴)加速度的大小。即加速度传感器180B可以用于检测终端设备100当前的运动状态,比如抖动还是静止。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。终端设备100通过发光二极管向外发射红外光。终端设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定终端设备100附近有物体。当检测到不充分的反射光时,终端设备100可以确定终端设备100附近没有物体。
陀螺仪传感器180A(或加速度传感器180B)可以将检测到的运动状态信息(比如角速度)发送给处理器110。处理器110基于运动状态信息确定当前是手持状态还是脚架状态(比如,角速度不为0时,说明终端设备100处于手持状态)。
指纹传感器180H用于采集指纹。终端设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于终端设备100的表面,与显示屏194所处的位置不同。
示例性的,终端设备100的显示屏194显示主界面,主界面中包括多个应用(比如相机应用、微信应用等)的图标。用户通过触摸传感器180K点击主界面中相机应用的图标,触发处理器110启动相机应用,打开摄像头193。显示屏194显示相机应用的界面,例如取景界面。
终端设备100的无线通信功能可以通过天线1,天线2,移动通信模块151,无线通信模块152,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。终端设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块151可以提供应用在终端设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块151可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块151可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块151还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块151的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块151的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块151或其他功能模块设置在同一个器件中。
无线通信模块152可以提供应用在终端设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块152可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块152经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块152还可以从处理器110接收待发送的信号,对其进行调频、放大,经天线2转为电磁波辐射出去。
在一些实施例中,终端设备100的天线1和移动通信模块151耦合,天线2和无线通信模块152耦合,使得终端设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
另外,终端设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。终端设备100可以接收按键190输入,产生与终端设备100的用户设置以及功能控制有关的键信号输入。终端设备100可以利用马达191产生振动提示(比如来电振动提示)。终端设备100中的指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。终端设备100中的SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和终端设备100的接触和分离。
应理解,在实际应用中,终端设备100可以包括比图1所示的更多或更少的部件,本申请实施例不作限定。
本申请提供一种新型RGBIR Sensor,可以实现可见光和红外(infrared,IR)光的独立感光,从可见光信号的感光结果中剥离出IR信号,提升传感器感光结果的色彩准确度。
图2a和2b示出两种示例性的2X2阵列排序的RGBIR Sensor的感光结果,图3a和3b示出两种示例性的4X4阵列排序的RGBIR Sensor的感光结果。图中每个格代表一个像素,R表示红色像素,G表示绿色像素,B表示蓝色像素,IR表示红外光像素,2X2阵列排序指RGBIR四分量排列的最小重复单元为一个2X2的阵列,该2X2的阵列单元内包含了R、G、B、IR所有分量;4X4阵列排序指RGBIR四分量排列的最小重复单元为一个4X4的阵列,该4X4的阵列单元内包含了所有分量。应当理解,还可以有其他排列方式的2X2阵列排序以及4X4阵列排序的RGBIR Sensor,本申请实施例对RGBIR Sensor的排列方式不做限定。
图4示出了图像获取装置的一个示例性的结构示意图,如图4所示,该图像获取装置由镜头401、新型RGBIR Sensor 402、图像信号处理器(image signal processor,ISP)403、图像融合模块404、红外灯驱动控制模块405以及红外补光灯406等模块组成。其中,镜头401用于捕获静态图像或视频,采集待拍摄物体反射的光信号,并将采集的光信号传递给图像传感器。新型RGBIR Sensor 402根据所述光信号生成待拍摄物体的原始图像数据(可见光图像数据和红外图像数据)。ISP模块403用于根据待拍摄物体的原始图像对可见光曝光参数和红外补光灯的光强进行调整直到满足AE算法的收敛条件,还用于从待拍摄物体的原始图像分离出可见光图像和红外图像。图像融合模块404用于对分离出的可见光图像和红外图像进行融合得到目标图像。红外灯驱动控制模块405用于根据ISP模块403配置的红外补光灯的光强控制红外补光灯406。红外补光灯406用于提供红外光照。
可选的,图像获取装置可以采用单镜头加单RGBIR Sensor,或者双镜头加双RGBIR Sensor,或者单镜头加分光片和双RGBIR Sensor的结构,其中单镜头的结构可以节约成本,而单RGBIR Sensor的结构可以简化摄像头的结构。本申请对此不做具体限定。
在图4所示的图像获取装置中,本申请可以采用一种独立曝光的装置,以承担新型RGBIR Sensor 402的功能。图5示出了一种独立曝光的装置的硬件架构示意图。该曝光控制装置包括:至少一个中央处理单元(Central Processing Unit,CPU)、至少一个存储器、微控制器(Microcontroller Unit,MCU)、接收接口和发送接口等。可选的,该曝光控制装置还包括:专用的视频或图形处理器,以及图形处理单元(Graphics Processing Unit,GPU)等。
可选的,CPU可以是一个单核(single-CPU)处理器或多核(multi-CPU)处理器;可选的,CPU可以是多个处理器构成的处理器组,多个处理器之间通过一个或多个总线彼此耦合。在一种可选的情况中,曝光控制可以一部分由跑在通用CPU或MCU上的软件代码完成,一部分由硬件逻辑电路完成;或者也可以全部由跑在通用CPU或MCU上的软件代码完成。可选的,存储器可以是非掉电易失性存储器,例如是嵌入式多媒体卡(Embedded Multi Media Card,EMMC)、通用闪存存储(Universal Flash Storage,UFS)或只读存储器(Read-Only Memory,ROM),或者是可存储静态信息和指令的其他类型的静态存储设备,还可以是掉电易失性存储器(volatile memory),例如随机存取存储器(Random Access Memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的程序代码并能够由计算机存取的任何其他计算机可读存储介质,但不限于此。该接收接口可以为处理器芯片的数据输入的接口。
在一种可能的实施方式中,该独立曝光的装置还包括:像素阵列。在这种情况中,该独立曝光的装置包括至少两种类型的像素,也即该独立曝光的装置可以为包含控制单元或者逻辑控制电路在内的传感器,或者说,该独立曝光的装置为可以独立控制曝光的传感器。示例性的,该独立曝光的装置可以为独立控制曝光的RGBIR传感器、RGBW传感器以及RCCB传感器等。
应当理解,在一种可选的情况中,可见光像素被归为一种类型的像素,也即R像素、G像素和B像素被归为一种类型的像素,而IR像素、W像素或者C像素被认为是另外一种类型的像素,例如,RGBIR传感器包括两种类型的像素:可见光像素和IR像素,RGBW传感器包括两种类型的像素:可见光像素和W像素,RCCB传感器包括两种类型的像素:可见光像素和C像素。
在另外一种可选的情况中,每个像素分量被认为是一种类型的像素,例如,RGBIR传感器包括:R、G、B和IR四种类型的像素,RGBW传感器包括:R、G、B和W四种类型的像素,RCCB传感器包括:R、B和C三种类型的像素。
在一种可能的实施方式中,传感器为RGBIR传感器,RGBIR传感器可以实现可见光像素和IR像素分别独立曝光,也可以实现R、G、B、IR四分量分别独立曝光。
对于可见光像素和IR像素分别独立曝光的RGBIR传感器,该至少两个控制单元包括:第一控制单元和第二控制单元;第一控制单元用于控制可见光像素的曝光起始时间;第二控制单元用于控制IR像素的曝光起始时间。
对于R、G、B、IR四分量分别独立曝光的RGBIR传感器,至少两个控制单元包括:第一控制单元、第二控制单元、第三控制单元和第四控制单元;第一控制单元用于控制R像素的曝光起始时间;第二控制单元用于控制G像素的曝光起始时间;第三控制单元用于控制B像素的曝光起始时间;第四控制单元用于控制IR像素的曝光起始时间。
在一种可能的实施方式中,传感器为RGBW传感器,RGBW传感器可以实现可见光像素和W像素分别独立曝光,也可以实现R、G、B、W四分量分别独立曝光。
对于可见光像素和W像素分别独立曝光的RGBW传感器,该至少两个控制单元包括:第一控制单元和第二控制单元;第一控制单元用于控制可见光像素的曝光起始时间;第二控制单元用于控制W像素的曝光起始时间。
对于R、G、B、W四分量分别独立曝光的RGBW传感器,至少两个控制单元包括:第一控制单元、第二控制单元、第三控制单元和第四控制单元;第一控制单元用于控制R像素的曝光起始时间;第二控制单元用于控制G像素的曝光起始时间;第三控制单元用于控制B像素的曝光起始时间;第四控制单元用于控制W像素的曝光起始时间。
在一种可能的实施方式中,传感器为RCCB传感器,RCCB传感器可以实现可见光像素和C像素分别独立曝光,也可以实现R、B、C三分量分别独立曝光。
对于可见光像素和C像素分别独立曝光的RCCB传感器,该至少两个控制单元包括:第一控制单元和第二控制单元;第一控制单元用于控制可见光像素的曝光起始时间;第二控制单元用于控制C像素的曝光起始时间。
对于R、B、C三分量分别独立曝光的RCCB传感器,至少两个控制单元包括:第一控制单元、第二控制单元和第三控制单元;第一控制单元用于控制R像素的曝光起始时间;第二控制单元用于控制B像素的曝光起始时间;第三控制单元用于控制C像素的曝光起始时间。
在一种可能的实施方式中,该独立曝光的装置还可以基于至少两个控制单元控制至少两种类型的像素的曝光时间满足预设比例。示例性的,基于第一控制单元和第二控制单元控制可见光像素和IR像素的曝光时间满足预设比例;或者基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R、G、B和IR像素的曝光时间满足预设比例。或者,基于第一控制单元和第二控制单元控制可见光像素和W像素的曝光时间满足预设比例;或者,基于第一控制单元、第二控制单元、第三控制单元和第四控制单元控制R、G、B和W像素的曝光时间满足预设比例;或者,基于第一控制单元和第二控制单元控制可见光像素和C像素的曝光时间满足预设比例;或者,基于第一控制单元、第二控制单元和第三控制单元控制R、B和C像素的曝光时间满足预设比例。
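作为示意,下面给出按预设比例为两类像素分配曝光起始时刻的一个最小Python示例(其中函数名、参数名以及“曝光结束时刻统一”的写法均为便于说明而假设的,并非对传感器真实寄存器接口的限定):

```python
def exposure_start_times(t_end_us: float, t_vis_us: float,
                         ratio_vis_to_ir: float) -> tuple[float, float]:
    """按预设比例 ratio_vis_to_ir = T_vis / T_ir 计算两类像素的曝光起始时刻(示意)。

    假设所有像素的曝光结束时刻 t_end_us 由曝光结束控制单元统一控制,
    因此曝光时间之比通过错开各类像素的曝光起始时刻实现。
    """
    t_ir_us = t_vis_us / ratio_vis_to_ir      # 由比例推出 IR 像素的曝光时间
    start_vis = t_end_us - t_vis_us           # 第一控制单元写入: 可见光像素起始时刻
    start_ir = t_end_us - t_ir_us             # 第二控制单元写入: IR 像素起始时刻
    return start_vis, start_ir


# 用法示例: 统一在 33000us 处结束曝光, 可见光曝光 20000us, 比例 T_vis:T_ir = 2:1
print(exposure_start_times(33000.0, 20000.0, 2.0))   # -> (13000.0, 23000.0)
```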
在一种可能的实施方式中,该独立曝光的装置还包括:曝光结束控制单元,用于统一控制像素阵列中的所有像素的曝光结束时间。
图6为本申请实施例提供的一种示例性的图像融合方法实施例的流程图,如图6所示,本实施例的方法可以由上述终端设备、图像获取装置或处理器执行。该图像融合方法可以包括:
步骤601、获取第一原始图像数据,第一原始图像数据为图像传感器基于初始的可见光曝光参数和红外补光灯的光强采集得到的。
第一原始图像数据包括可见光图像数据和红外图像数据。传统Sensor的曝光算法只能针对单独的可见光场景或红外场景配置参数,因此其采集到的原始图像数据为三维的RGB信息或者一维的IR信息。普通RGBIR Sensor的曝光算法既可以针对可见光场景配置可见光曝光参数,也可以针对红外场景配置红外补光灯的光强,因此采集到的原始图像数据包括四维的RGBIR信息,即三维的RGB分量(可见光图像数据)和一维的IR分量(红外图像数据)。但是如图7所示,每个像素上都包含IR分量,而且在部分像素上R分量和IR分量、G分量和IR分量或者B分量和IR分量混合在一起,无法做到对两种分量的彻底分离。而本申请使用的是新型的RGBIR Sensor(如图4和图5所示实施例中所述),其曝光算法既可以针对可见光场景配置可见光曝光参数,也可以针对红外场景配置红外补光灯的光强,因此采集到的原始图像数据包括四维的RGBIR信息,即三维的RGB分量(可见光图像数据)和一维的IR分量(红外图像数据)。而如图2a、2b、3a和3b所示,R分量、G分量、B分量分别和IR分量在所有像素上完全分离开,这正是本申请提供的图像融合方法实现的前提。
由于可见光图像数据和红外图像数据是基于原始图像数据经过降采样得到的,因此得到的可见光图像和红外图像的分辨率比原始图像的分辨率小。可以选择比目标可见光图像和红外图像分辨率大的RGBIR Sensor分辨率,这样就可以得到理想分辨率的可见光图像和红外图像。例如,使用分辨率为2H×2W的RGBIR Sensor,得到分辨率为H×W的可见光图像和红外图像。使用高分辨率的RGBIR Sensor得到较低分辨率的可见光图像和红外图像,可以省略超分计算步骤,不但避免由于利用超分算法导致的图像边缘模糊的问题,还可以简化图像处理的流程。需要说明的是,还可以采用其他方法提升可见光图像和红外图像的分辨率,例如超分算法,本申请对此不作具体限定。
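以2X2阵列排序为例,从2H×2W的原始图像数据降采样分离出H×W的可见光图像和红外图像,可以写成如下示意性的Python代码(其中假设最小重复单元的相位为(0,0)=R、(0,1)=G、(1,0)=IR、(1,1)=B,实际相位以具体Sensor的排列方式为准):

```python
import numpy as np

def split_rgbir_2x2(raw: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """从 2X2 排列的 RGBIR 原始数据中分离可见光分量和 IR 分量(示意)。

    输入 raw 形状为 (2H, 2W), 返回 (H, W, 3) 的可见光图像和 (H, W) 的红外图像,
    即分辨率为 2H x 2W 的 RGBIR Sensor 经降采样得到 H x W 的两路图像。
    """
    r = raw[0::2, 0::2].astype(np.float32)    # R 分量
    g = raw[0::2, 1::2].astype(np.float32)    # G 分量
    ir = raw[1::2, 0::2].astype(np.float32)   # IR 分量, 与 RGB 在所有像素上分离
    b = raw[1::2, 1::2].astype(np.float32)    # B 分量
    rgb = np.stack([r, g, b], axis=-1)
    return rgb, ir


# 用法示例: 模拟一帧 2H x 2W 的 10bit 原始数据
raw = np.random.randint(0, 1024, size=(1080 * 2, 1920 * 2), dtype=np.uint16)
rgb, ir = split_rgbir_2x2(raw)                # rgb: (1080, 1920, 3), ir: (1080, 1920)
```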
初始的可见光曝光参数和红外补光灯的光强通常是根据前次设置、历史经验数据或默认设置等方式得来的,因此其未必完全符合当前的拍摄场景需求。本申请将通过以下步骤调整前述两类参数,以得到高质量的目标图像。
步骤602、根据第一原始图像数据获取可见光图像的亮度。
终端设备可以从第一原始图像数据中分离出可见光图像数据,根据可见光图像数据得到可见光图像,然后对可见光图像进行图像块的划分,例如按照固定尺寸将其分成m×n或m×m个图像块。
针对每一个图像块,获取该图像块的原始图像数据(第一原始图像数据中对应于该图像块的那一部分数据),根据该图像块的原始图像数据中对应图像块的每个像素的RGrGbB计算亮度,通常是根据GrGb分量计算该亮度,对这些像素的亮度求平均值得到该图像块的亮度。除了采用每个像素计算亮度的粒度外,终端设备也可以采用其他的像素粒度,例如,每两个像素取其中之一的RGrGbB计算亮度,或者每隔n个像素取一个像素的RGrGbB计算亮度,等等。终端设备也可以通过可见光图像的直方图获取各图像块的亮度。
终端设备可以对不同的图像块设置一个权重,例如对感兴趣的图像块设置较高权重,然后对所有图像块的亮度进行加权平均(即每个图像块的亮度乘以该图像块的权重,相加后再计算平均值)得到可见光图像的亮度。
需要说明的是,本申请还可以采用其他方法对可见光图像进行图像块划分;也可以采用其他方法计算可见光图像中各个图像块的亮度;也可以采用其他方法设置各个图像块的权重,例如为亮度较高的图像块设置较高的权重;也可以采用其他方法计算可见光图像的亮度,例如直接对各个图像块的亮度求平均值得到可见光图像的亮度等。本申请对上述方法均不作具体限定。
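上述“分块统计、加权平均”的计算过程可以概括为如下示意性的Python实现(其中块数m×n、权重矩阵以及按权重之和归一化的写法均为便于说明的假设):

```python
import numpy as np

def image_brightness(luma: np.ndarray, m: int, n: int,
                     weights: np.ndarray | None = None) -> float:
    """把亮度图划分为 m x n 个图像块, 对块亮度加权平均得到整幅图像的亮度(示意)。

    luma 为逐像素亮度(可见光图像由 R/Gr/Gb/B 分量算出, 红外图像直接取 IR 分量),
    weights 为 m x n 的块权重, 缺省为全 1; 感兴趣的图像块可设置较高权重。
    """
    h, w = luma.shape
    blocks = luma[: h // m * m, : w // n * n].reshape(m, h // m, n, w // n)
    block_mean = blocks.mean(axis=(1, 3))            # 每个图像块的平均亮度
    if weights is None:
        weights = np.ones((m, n), dtype=np.float32)
    return float((block_mean * weights).sum() / weights.sum())
```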
步骤603、根据第一差值对可见光曝光参数进行调整。
第一差值为可见光图像的亮度与预设的目标可见光图像亮度之间的差值。设定目标可见光图像亮度为自动曝光(automatic exposure,AE)算法提供了收敛的条件,目标可见光图像亮度为期望可见光图像达到的最优亮度,该值可以通过经验设定,也可以通过大数据统计设定,对此不作具体限定。通常目标可见光图像亮度设置得越大,可见光曝光参数调整结束后得到的可见光图像的亮度越高。
终端设备对可见光曝光参数的调整过程可以是一个多次重复的过程,如果通过一次对可见光曝光参数的调整没有达到AE算法的收敛条件,就需要继续对可见光曝光参数进行调整,直到达到AE算法的收敛条件。AE算法的收敛条件是指第一差值的绝对值在设定的第一范围内,一种表示方法为第一差值的绝对值小于设定阈值,例如,|第一差值|<X,X为设定阈值;另一种表示方法为第一差值在设定范围内,例如,x<第一差值<y,x和y为第一范围的上下边界,x为负数,y为正数,x和y的绝对值可以相等也可以不相等。因此AE算法的收敛条件是终端设备确定是否要对可见光曝光参数进行调整的前提,只要第一差值不满足AE算法的收敛条件,终端设备就可以对可见光曝光参数进行调整。
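AE算法收敛条件的两种表示方法都可以写成很短的判断函数,下面给出一个示意(阈值X以及上下边界x、y均为假设的示例值):

```python
def ae_converged_abs(diff: float, threshold: float = 8.0) -> bool:
    """收敛条件写法一: |差值| < 设定阈值 X (阈值为假设值)。"""
    return abs(diff) < threshold


def ae_converged_range(diff: float, x: float = -10.0, y: float = 6.0) -> bool:
    """收敛条件写法二: x < 差值 < y, x 为负、y 为正, 两者绝对值可以不相等。"""
    return x < diff < y
```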
终端设备可以通过以下方法对可见光曝光参数进行调整:确定曝光参数分配策略集合中是否包括与第一差值对应的曝光参数分配策略;若曝光参数分配策略集合中包括与第一差值对应的曝光参数分配策略,则按照与第一差值对应的曝光参数分配策略调整可见光曝光参数;若曝光参数分配策略集合中不包括与第一差值对应的曝光参数分配策略,则增加新的曝光参数分配策略,并根据新的曝光参数分配策略调整可见光曝光参数。
AE算法可以包括两种曝光参数分配策略集合,一种是普通型曝光参数分配策略集合,另一种是扩展型曝光参数分配策略集合。
表1中示出了普通型曝光参数分配策略集合的一个示例,如表1所示,该集合中包括三种曝光参数分配策略,分别对应三种系数,该系数是与曝光相关的真实系数值,每种曝光参数分配策略可包括可见光曝光参数中的曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)这三个分量中的一个或多个分量。
表1
系数 IntTime SysGain IrisFNO
系数1 100 1024 0
系数2 40000 1024 0
系数3 40000 2048 0
终端设备将第一差值和表1中的三个系数进行比较,该三个系数按照从小到大的顺序排列,如果第一差值大于或等于系数1且小于或等于系数3,可以初步判定该普通型曝光参数分配策略集合中存在与第一差值对应的曝光参数分配策略。然后终端设备在表1中查询与第一差值对应的系数,若可见光图像的亮度高于目标可见光图像亮度超出容忍值,则将可见光曝光参数中的一个值(曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)的其中之一)调整为该系数所在表项的上一行表项中同一参数的值。若可见光图像的亮度低于目标可见光图像亮度超出容忍值,则将可见光曝光参数中的一个值(曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)的其中之一)调整为该系数所在表项的下一行表项中同一参数的值。
如果第一差值小于系数1,或者大于系数3,可以确定该普通型曝光参数分配策略集合中不包括与第一差值对应的曝光参数分配策略。终端设备需要根据设定的规则往表1中增加新的曝光参数分配策略,然后根据新的曝光参数分配策略调整可见光曝光参数。前述设定的规则可以包括:若第一差值小于系数1,则根据一定比例减小系数1对应的各个参数值作为第一差值对应的参数值,并将该参数值和第一差值组成的表项添加在表1中的系数1之上;若第一差值大于系数3,则根据一定比例放大系数3对应的各个参数值作为第一差值对应的参数值,并将该参数值和第一差值组成的表项添加在表1中的系数3之下。例如,第一差值大于系数3,则终端设备在表1中系数3的下方增加一行(差值4,60000,2048,0),将这一组值分别作为调整后的曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)。又例如,第一差值小于系数1,则终端设备在表1中系数1的上方增加一行(差值0,100,512,0),将这一组值分别作为调整后的曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)。
表2中示出了扩展型曝光参数分配策略集合的一个示例,如表2所示,该集合中包括五种曝光参数分配策略,分别对应五种系数,该系数是与曝光相关的真实系数值,每种曝光参数分配策略可包括可见光曝光参数中的曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)这三个分量中的一个或多个分量,其中系统增益又可以包括Again,Dgain和ISPDgain,Again的物理意义是Sensor中模拟信号的放大倍数,Dgain的物理意义是Sensor中数字信号的放大倍数,ISPDgain的物理意义是Sensor外数字信号的放大倍数。
表2
系数 IntTime Again Dgain IspDgain IrisFNO
系数1 100 1024 1024 0 0
系数2 200 1024 1024 0 0
系数3 40000 1024 1024 0 0
系数4 40000 2048 1024 0 0
系数5 40000 4096 1024 0 0
终端设备将第一差值和表2中的五个系数进行比较,该五个系数按照从小到大的顺序排列,如果第一差值大于或等于系数1且小于或等于系数5,可以初步判定该扩展型曝光参数分配策略集合中存在与第一差值对应的曝光参数分配策略。然后终端设备在表2中查询与第一差值对应的系数,若可见光图像的亮度高于目标可见光图像亮度超出容忍值,则将可见光曝光参数中的一个值(曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)的其中之一)调整为该系数所在表项的上一行表项中同一参数的值。若可见光图像的亮度低于目标可见光图像亮度超出容忍值,则将可见光曝光参数中的一个值(曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)的其中之一)调整为该系数所在表项的下一行表项中同一参数的值。
如果表2中存在与第一差值相等的系数(例如差值2),则终端设备可以直接读取与差值2对应的一组值(200,1024,1024,0,0)分别作为调整后的曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO);如果表2中不存在与第一差值相等的系数,则终端设备在表2中查询与第一差值最接近的系数(即计算表2中列出的各系数和第一差值的差,取结果最小者),读取最小者对应的一组值分别作为调整后的曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)。
如果第一差值小于系数1,或者大于系数5,可以确定该扩展型曝光参数分配策略集合中不包括与第一差值对应的曝光参数分配策略。终端设备需要根据设定的规则往表2中增加新的曝光参数分配策略,然后根据新的曝光参数分配策略调整可见光曝光参数。前述设定的规则可以包括:若第一差值小于系数1,则根据一定比例减小系数1对应的各个参数值作为第一差值对应的参数值,并将该参数值和第一差值组成的表项添加在表2中的系数1之上;若第一差值大于系数5,则根据一定比例放大系数5对应的各个参数值作为第一差值对应的参数值,并将该参数值和第一差值组成的表项添加在表2中的系数5之下。例如,第一差值大于系数5,则终端设备在表2中系数5的下方增加一行(差值6,60000,4096,1024,0,0),将这一组值分别作为调整后的曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)。又例如,第一差值小于系数1,则终端设备在表2中系数1的上方增加一行(差值0,100,512,1024,0,0),将这一组值分别作为调整后的曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)。
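上述查表和扩表的逻辑可以用如下示意性的Python代码概括(以表2的五个分量为例,系数取值与扩表比例scale均为便于说明的假设):

```python
# 扩展型曝光参数分配策略表(示意): 每行为 (系数, [IntTime, Again, Dgain, IspDgain, IrisFNO])
strategy_table = [
    (1, [100,   1024, 1024, 0, 0]),
    (2, [200,   1024, 1024, 0, 0]),
    (3, [40000, 1024, 1024, 0, 0]),
    (4, [40000, 2048, 1024, 0, 0]),
    (5, [40000, 4096, 1024, 0, 0]),
]

def lookup_or_extend(diff_coeff: float, scale: float = 1.5) -> list:
    """按与第一差值对应的系数查表; 系数超出表的范围时按比例 scale 扩表(示意)。"""
    coeffs = [c for c, _ in strategy_table]
    if coeffs[0] <= diff_coeff <= coeffs[-1]:
        # 表内: 取与 diff_coeff 最接近的系数对应的一组曝光参数
        idx = min(range(len(coeffs)), key=lambda i: abs(coeffs[i] - diff_coeff))
        return strategy_table[idx][1]
    if diff_coeff < coeffs[0]:
        # 小于系数1: 按比例减小系数1对应的各参数, 新表项添加在系数1之上
        new = [int(v / scale) for v in strategy_table[0][1]]
        strategy_table.insert(0, (diff_coeff, new))
    else:
        # 大于系数5: 按比例放大系数5对应的各参数, 新表项添加在系数5之下
        new = [int(v * scale) for v in strategy_table[-1][1]]
        strategy_table.append((diff_coeff, new))
    return new
```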
基于上述可见光曝光参数中的曝光时间(IntTime)、系统增益(SysGain)和光圈(IrisFNO)这三个分量,或者曝光时间(IntTime)、Again、Dgain、ISPDgain和光圈(IrisFNO)这五个分量,终端设备可以根据实际的应用场景,对这些参数依顺序分别调整,例如,静止场景下,先调整曝光时间、再调整曝光增益、最后调整光圈大小,具体的,终端设备可以先调曝光时间,如果曝光时间调整之后还是不能满足AE算法的收敛条件,可以再调整系统增益,如果系统增益调整之后仍然不能满足AE算法的收敛条件,可以再调整光圈。应当理解,静止场景为在拍摄时目标和摄像机均处于静止状态,或者目标和摄像机均以相同速度和方向匀速同步运动中,此时目标和摄像机相对静止。又例如,运动场景下,先调整曝光增益、再调整曝光时间、最后调整光圈大小。应当理解,运动场景为在拍摄时目标高速运动,或者摄像机高速运动。高速或低速由使用者根据经验判断。
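按场景决定参数调整顺序的做法可以用一张很小的映射表来表达,下面是一个示意性的Python片段(场景标签与参数名均为便于说明的假设):

```python
# 不同场景下可见光曝光参数的调整顺序(示意)
ADJUST_ORDER = {
    "static": ["IntTime", "SysGain", "IrisFNO"],   # 静止场景: 先曝光时间, 再增益, 最后光圈
    "motion": ["SysGain", "IntTime", "IrisFNO"],   # 运动场景: 先增益, 再曝光时间, 最后光圈
}

def next_param(scene: str, tried: set) -> str | None:
    """上一个参数调整后仍不满足收敛条件时, 返回顺序中的下一个待调参数(示意)。"""
    for p in ADJUST_ORDER[scene]:
        if p not in tried:
            return p
    return None                                    # 三个参数都调过仍不收敛


# 用法示例: 静止场景下先调 IntTime, 不收敛再调 SysGain
tried: set = set()
p = next_param("static", tried)    # -> "IntTime"
tried.add(p)
p = next_param("static", tried)    # -> "SysGain"
```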
需要说明的是,还可以通过其他方法对可见光曝光参数进行调整,本申请对此不作具体限定。
根据可见光图像的亮度和目标可见光图像亮度之间的比较,终端设备对可见光曝光参数的调整可以分为两个方向:当可见光图像的亮度比目标可见光图像亮度低(第一差值为负值)时,增大可见光曝光参数(曝光时间、光圈直径或曝光增益中的一个或多个);当可见光图像的亮度比目标可见光图像亮度高(第一差值为正值)时,减小可见光曝光参数(曝光时间、光圈直径或曝光增益中的一个或多个)。
为了防止出现单帧跳变的情况,终端设备可以在确定连续N帧可见光图像的第一差值都没有满足AE算法的收敛条件时,才对可见光曝光参数进行调整。
进一步的,在上述连续N帧可见光图像的第一差值都没有满足AE算法的收敛条件的前提下,该连续N帧可见光图像的亮度均比目标可见光图像亮度低(第一差值均为负值),此时终端设备可以增大曝光时间、光圈直径或曝光增益中的一个或多个;或者,在上述连续N帧可见光图像的第一差值都没有满足AE算法的收敛条件的前提下,该连续N帧可见光图像的亮度均比目标可见光图像亮度高(第一差值均为正值),此时终端设备可以减小曝光时间、光圈直径或曝光增益中的一个或多个。由于人眼对于过曝比较敏感,所以上述方法中会将第一差值为正值时的N设置得比第一差值为负值时的N小一些。
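“连续N帧不收敛才触发调整、且过曝方向的N更小”的防单帧跳变逻辑,可以写成如下示意性的Python类(其中n_under、n_over的取值为假设的示例值):

```python
class AeDebounce:
    """连续若干帧不收敛才触发一次调整, 防止单帧跳变(示意)。

    人眼对过曝更敏感, 因此差值为正(偏亮)时的帧数门限 n_over
    取得比差值为负(偏暗)时的门限 n_under 小一些。
    """

    def __init__(self, n_under: int = 8, n_over: int = 4):
        self.n_under, self.n_over = n_under, n_over
        self.under_cnt = self.over_cnt = 0

    def update(self, diff: float, converged: bool) -> str | None:
        """每帧调用一次; 返回 'increase'/'decrease' 表示触发调整, None 表示暂不调整。"""
        if converged:
            self.under_cnt = self.over_cnt = 0
            return None
        if diff < 0:                           # 亮度低于目标
            self.under_cnt += 1
            self.over_cnt = 0
            return "increase" if self.under_cnt >= self.n_under else None
        self.over_cnt += 1                     # 亮度高于目标
        self.under_cnt = 0
        return "decrease" if self.over_cnt >= self.n_over else None
```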
需要说明的是,基于RGBIR Sensor的特性,其可以做到对原始图像数据中的RGB分量和IR分量的完全分离,因此终端设备在执行步骤602和603时,可以不必理会红外图像部分,其处理对象是第一原始图像数据中的可见光图像数据,以及由此得到的可见光图像,此时红外补光灯的光强并不会对可见光图像造成影响。而以下步骤开始对红外图像数据,以及由此得到的红外图像进行处理。
步骤604、根据第一原始图像数据获取红外图像的亮度。
终端设备可以从第一原始图像数据中分离出红外图像数据,根据红外图像数据得到红外图像,然后对红外图像进行图像块的划分,例如按照固定尺寸将其分成m×n或m×m个图像块。
针对每一个图像块,获取该图像块的原始图像数据(第一原始图像数据中对应于该图像块的那一部分数据),根据该图像块的原始图像数据计算对应图像块的每个像素的亮度,对这些像素的亮度求平均值得到该图像块的亮度。除了采用每个像素计算亮度的粒度外,终端设备也可以采用其他的像素粒度,例如,每两个像素取其中之一的亮度,或者每隔n个像素取一个像素的亮度,等等。终端设备也可以通过红外图像的直方图获取各图像块的亮度。
终端设备可以对不同的图像块设置一个权重,例如对感兴趣的图像块设置较高权重,然后对所有图像块的亮度进行加权平均(即每个图像块的亮度乘以该图像块的权重,相加后再计算平均值)得到红外图像的亮度。
需要说明的是,本申请还可以采用其他方法对红外图像进行图像块划分;也可以采用其他方法计算红外图像中各个图像块的亮度;也可以采用其他方法设置各个图像块的权重,例如为亮度较高的图像块设置较高的权重;也可以采用其他方法计算红外图像的亮度,例如直接对各个图像块的亮度求平均值得到红外图像的亮度等。本申请对上述方法均不作具体限定。
步骤605、根据第二差值对红外补光灯的光强进行调整。
第二差值为红外图像的亮度与预设的目标红外图像亮度之间的差值。设定目标红外图像亮度为AE算法提供了收敛的条件,目标红外图像亮度为期望红外图像达到的最优亮度,该值可以通过经验设定,也可以通过大数据统计设定,对此不作具体限定。通常目标红外图像亮度设置得越大,红外补光灯的光强调整结束后得到的红外图像的亮度越高。
终端设备对红外补光灯的光强的调整过程可以是一个多次重复的过程,如果通过一次对红外补光灯的光强的调整没有达到AE算法的收敛条件,就需要继续对红外补光灯的光强进行调整,直到达到AE算法的收敛条件。AE算法的收敛条件是指第二差值的绝对值在设定的第二范围内,一种表示方法为第二差值的绝对值小于设定阈值,例如,|第二差值|<X,X为设定阈值;另一种表示方法为第二差值在设定范围内,例如,x<第二差值<y,x和y为第二范围的上下边界,x为负数,y为正数,x和y的绝对值可以相等也可以不相等。因此AE算法的收敛条件是终端设备确定是否要对红外补光灯的光强进行调整的前提,只要第二差值不满足AE算法的收敛条件,终端设备就可以对红外补光灯的光强进行调整。
根据红外图像的亮度和目标红外图像亮度之间的比较,终端设备对红外补光灯的光强的调整可以分为两个方向:当红外图像的亮度比目标红外图像亮度低时,增大红外补光灯的光强;当红外图像的亮度比目标红外图像亮度高时,减小红外补光灯的光强。
终端设备可以通过调整红外补光灯的脉冲宽度调制(pulse width modulation,PWM)的占空比调整红外补光灯的光强,如果需要增大红外补光灯的光强,可以减小红外补光灯的PWM的占空比,如果需要减小红外补光灯的光强,可以增大红外补光灯的PWM的占空比。
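按照本文对PWM占空比与光强关系的约定(减小占空比对应增大光强,例如低电平点亮的驱动方式),光强调节可以写成如下示意性的Python函数(调节步长step为假设值):

```python
def adjust_ir_pwm(duty: float, direction: str, step: float = 0.05) -> float:
    """通过 PWM 占空比调节红外补光灯光强的一个示意实现。

    按本文约定: 需要增大光强时减小占空比, 需要减小光强时增大占空比。
    """
    if direction == "increase":        # 红外图像偏暗, 需要增大光强
        duty -= step
    elif direction == "decrease":      # 红外图像偏亮, 需要减小光强
        duty += step
    return min(max(duty, 0.0), 1.0)    # 占空比限制在 [0, 1] 内
```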
需要说明的是,还可以通过其他方法调整红外补光灯的光强,本申请对此不作具体限定。
为了防止出现单帧跳变的情况,终端设备可以在确定连续M帧红外图像的第二差值都没有满足AE算法的收敛条件时,才对红外补光灯的光强进行调整。
进一步的,在上述连续M帧红外图像的第二差值都没有满足AE算法的收敛条件的前提下,该连续M帧红外图像的亮度均比目标红外图像亮度低(第二差值均为负值),此时终端设备可以增大红外补光灯的光强;或者,在上述连续M帧红外图像的第二差值都没有满足AE算法的收敛条件的前提下,该连续M帧红外图像的亮度均比目标红外图像亮度高(第二差值均为正值),此时终端设备可以减小红外补光灯的光强。由于人眼对于过曝比较敏感,所以上述方法中会将第二差值为正值时的M设置得比第二差值为负值时的M小一些。
需要说明的是,基于RGBIR Sensor的特性,其可以做到对原始图像数据中的RGB分量和IR分量的完全分离,因此终端设备在执行步骤604和605时,其处理对象是第一原始图像数据中的红外图像数据,以及由此得到的红外图像,此时对红外补光灯的光强的调整是完全独立的过程,并不会对已经调整好的可见光曝光参数造成影响,也不会受到原始图像数据中的RGB分量的影响。
步骤606、获取第二原始图像数据,第二原始图像数据为图像传感器基于调整后的可见光曝光参数和红外补光灯的光强采集得到的;
经过步骤602-605的调整,终端设备可以得到满足AE算法的收敛条件的可见光曝光参数和红外补光灯的光强,此时终端设备根据满足AE算法的收敛条件的可见光曝光参数设置RGBIR Sensor,并根据满足AE算法的收敛条件的红外补光灯的光强设置红外补光灯,再通过RGBIR Sensor采集得到第二原始图像数据。
步骤607、根据第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。
终端设备从第二原始图像数据中分离出可见光图像数据和红外图像数据,根据可见光图像数据得到可见光图像,根据红外图像数据得到红外图像,对可见光图像和红外图像进行融合得到目标图像。由于通过前述步骤自适应设置了可见光曝光参数和红外补光灯的光强,因此此时只需采集一张RGBIR图像即可得到高质量的融合图像,相较于现有技术中需要采集多张RGBIR图像才能得到高质量的融合图像,简化了图像融合的步骤,提高效率。
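本申请并未限定具体的融合算法;作为一种常见做法的示意,下面给出“用红外分量增强亮度、色彩取自可见光图像”的最小Python实现(融合权重alpha为假设值,且假设两路图像已配准、同分辨率并归一化到[0,1]):

```python
import numpy as np

def fuse_visible_ir(rgb: np.ndarray, ir: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """可见光图像与红外图像融合的一个简化示意。

    思路: 红外分量提供更好的亮度信息, 可见光图像提供色彩信息;
    rgb 形状 (H, W, 3), ir 形状 (H, W), 取值范围均为 [0, 1]。
    """
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    fused_luma = (1.0 - alpha) * luma + alpha * ir       # 亮度层按权重融合
    gain = fused_luma / np.maximum(luma, 1e-6)           # 保持色彩比例的亮度增益
    return np.clip(rgb * gain[..., None], 0.0, 1.0)     # 输出融合后的目标图像
```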
一方面,本申请可以采用超高分辨率的RGBIR Sensor得到图像,再对该图像降采样得到高分辨率的可见光图像和红外图像,对这两张高分辨率的图像进行融合处理可以得到信噪比和细节表现更好的目标图像。也可以采用高分辨率的RGBIR Sensor得到高分辨率的图像,对该图像降采样得到低分辨率的图像,再对低分辨率的图像使用超分算法得到高分辨率的可见光图像和红外图像,对这两张高分辨率的图像进行融合处理同样可以得到信噪比和细节表现更好的目标图像。除上述新型RGBIR Sensor外,本申请还可以采用传统的RGBIR Sensor,但需要在镜头和RGBIR Sensor之间加上分光片,以尽量分离出可见光分量和红外分量。
另一方面,由于可见光图像是在满足AE算法的收敛条件的可见光曝光参数设置下得到的,红外图像是在满足AE算法的收敛条件的红外补光灯的光强设置下得到的,而且可见光图像和红外图像是基于同时采集到的原始图像数据得到的,可以消除融合后的目标图像有模糊影像的现象。
本申请基于初始的可见光曝光参数和红外补光灯的光强采集第一原始图像数据,并根据第一原始图像数据中的可见光分量和红外分量分别对可见光曝光参数和红外补光灯的光强进行调整,且两个调整过程相互独立,完全不受另一个的影响,不但可以提高调整效率,还可以得到信噪比和细节表现更好的目标图像,消除融合后的目标图像有模糊影像的现象。而且在对可见光曝光参数和红外补光灯的光强调整后,只需采集一张包含可见光和红外光的原始图像就可以融合得到质量较高的目标图像,提高图像获取效率。
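把上述步骤601-607串起来,整个获取流程可以概括为如下示意性的主循环(其中capture_raw、set_exposure、get_ir_pwm、set_ir_pwm为假设的硬件接口占位,TARGET_VIS、TARGET_IR为假设的目标亮度;split_rgbir_2x2、image_brightness、ae_converged_abs、lookup_or_extend、adjust_ir_pwm、fuse_visible_ir为前文给出的示意函数):

```python
import numpy as np

TARGET_VIS, TARGET_IR = 0.45, 0.40      # 假设的目标亮度(已归一化到 [0, 1])

# 以下为假设的硬件接口占位, 真实实现依赖具体的 Sensor / ISP 驱动
def capture_raw() -> np.ndarray: ...
def set_exposure(params: list) -> None: ...
def get_ir_pwm() -> float: return 0.5
def set_ir_pwm(duty: float) -> None: ...

def acquire_target_image(max_iters: int = 30) -> np.ndarray:
    """图6所示流程的示意性主循环: 两路独立收敛后, 采集一帧并融合得到目标图像。"""
    for _ in range(max_iters):
        rgb, ir = split_rgbir_2x2(capture_raw())        # 步骤601: 采集并分离两路分量
        luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        diff_vis = image_brightness(luma, 8, 8) - TARGET_VIS   # 第一差值
        diff_ir = image_brightness(ir, 8, 8) - TARGET_IR       # 第二差值
        if ae_converged_abs(diff_vis, 0.05) and ae_converged_abs(diff_ir, 0.05):
            break                                       # 两路均满足 AE 收敛条件
        if not ae_converged_abs(diff_vis, 0.05):        # 步骤602-603: 调可见光曝光参数
            set_exposure(lookup_or_extend(diff_vis))    # 此处直接把差值当查表系数, 仅为示意
        if not ae_converged_abs(diff_ir, 0.05):         # 步骤604-605: 调红外补光灯光强
            set_ir_pwm(adjust_ir_pwm(get_ir_pwm(),
                       "increase" if diff_ir < 0 else "decrease"))
    rgb, ir = split_rgbir_2x2(capture_raw())            # 步骤606: 按收敛后的参数采集
    return fuse_visible_ir(rgb, ir)                     # 步骤607: 融合得到目标图像
```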
图8为本申请图像获取装置实施例的结构示意图,如图8所示,本实施例的装置可以包括:第一获取模块801,第一处理模块802,第二处理模块803、第二获取模块804和融合模块805。其中,第一获取模块801,用于获取第一原始图像数据,所述第一原始图像数据为图像传感器基于初始的可见光曝光参数和红外补光灯的光强采集得到的,所述第一原始图像数据包括可见光图像数据和红外图像数据;第一处理模块802,用于根据所述第一原始图像数据获取可见光图像的亮度;根据第一差值对所述可见光曝光参数进行调整,所述第一差值为所述可见光图像的亮度与预设的目标可见光图像亮度之间的差值;第二处理模块803,用于根据所述第一原始图像数据获取红外图像的亮度;根据第二差值对所述红外补光灯的光强进行调整,所述第二差值为所述红外图像的亮度与预设的目标红外图像亮度之间的差值;第二获取模块804,用于获取第二原始图像数据,所述第二原始图像数据为图像传感器基于调整后的可见光曝光参数和红外补光灯的光强采集得到的;融合模块805,用于根据所述第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。应当理解,在一种可能的实施方式中,第一获取模块和第二获取模块在物理上可以是同一个获取模块,例如,第一获取模块和第二获取模块可以均为处理器芯片的传输接口,该传输接口用于收发数据。
在一种可能的实现方式中,所述第一处理模块802,具体用于若所述第一差值的绝对值没有在预设的第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整。
在一种可能的实现方式中,所述第一处理模块802,具体用于确定曝光参数分配策略集合中是否包括与所述第一差值对应的曝光参数分配策略;若所述曝光参数分配策略集合中包括与所述第一差值对应的曝光参数分配策略,则按照所述与所述第一差值对应的曝光参数分配策略调整所述可见光曝光参数;若所述曝光参数分配策略集合中不包括与所述第一差值对应的曝光参数分配策略,则增加新的曝光参数分配策略,并根据所述新的曝光参数分配策略调整所述可见光曝光参数。
在一种可能的实现方式中,所述可见光曝光参数包括曝光时间、光圈直径或曝光增益中的一个或多个;所述第一处理模块802,具体用于当所述可见光图像的亮度比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;当所述可见光图像的亮度比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
在一种可能的实现方式中,所述第二处理模块803,具体用于若所述第二差值的绝对值没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整。
在一种可能的实现方式中,所述第二处理模块803,具体用于当所述红外图像的亮度比所述目标红外图像亮度低时,增大所述红外补光灯的光强;当所述红外图像的亮度比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
在一种可能的实现方式中,所述第二处理模块803,具体用于通过减小所述红外补光灯的脉冲宽度调制PWM的占空比增大所述红外补光灯的光强;通过增大所述红外补光灯的PWM的占空比减小所述红外补光灯的光强。
在一种可能的实现方式中,所述第一处理模块802,具体用于若连续N帧可见光图像的所述第一差值的绝对值均没有在所述第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整,N为正整数。
在一种可能的实现方式中,所述第一处理模块802,具体用于当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;或者,当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
在一种可能的实现方式中,所述第二处理模块803,具体用于若连续M帧红外图像的所述第二差值的绝对值均没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整,M为正整数。
在一种可能的实现方式中,所述第二处理模块803,具体用于当所述连续M帧红外图像的亮度均比所述目标红外图像亮度低时,增大所述红外补光灯的光强;当所述连续M帧红外图像的亮度均比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
在一种可能的实现方式中,所述第一处理模块802,具体用于在静止场景下,先增加所述曝光时间,再增加所述曝光增益,最后增加所述光圈直径;在运动场景下,先增加所述曝光增益,再增加所述曝光时间,最后增加所述光圈直径;在静止场景下,先减小所述曝光时间,再减小所述曝光增益,最后减小所述光圈直径;在运动场景下,先减小所述曝光增益,再减小所述曝光时间,最后减小所述光圈直径。在一种可能的实施方式中,该图像获取装置还可以包括图像采集模块,示例性的,该图像采集模块可以是图像传感器,例如RGBIR传感器、RGBW传感器或者RCCB传感器等;该图像采集模块也可以是摄像头。
本实施例的装置,可以用于执行图6所示方法实施例的技术方案,其实现原理和技术效果类似,此处不再赘述。
在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。处理器可以是通用处理器、数字信号处理器(digital signal processor,DSP)、特定应用集成电路(application-specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。本申请实施例公开的方法的步骤可以直接体现为硬件编码处理器执行完成,或者用编码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
上述各实施例中提及的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种图像获取方法,其特征在于,包括:
    获取第一原始图像数据,所述第一原始图像数据为图像传感器基于初始的可见光曝光参数和红外补光灯的光强采集得到的,所述第一原始图像数据包括可见光图像数据和红外图像数据;
    根据所述第一原始图像数据获取可见光图像的亮度;
    根据第一差值对所述可见光曝光参数进行调整,所述第一差值为所述可见光图像的亮度与预设的目标可见光图像亮度之间的差值;
    根据所述第一原始图像数据获取红外图像的亮度;
    根据第二差值对所述红外补光灯的光强进行调整,所述第二差值为所述红外图像的亮度与预设的目标红外图像亮度之间的差值;
    获取第二原始图像数据,所述第二原始图像数据为图像传感器基于调整后的可见光曝光参数和红外补光灯的光强采集得到的;
    根据所述第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。
  2. 根据权利要求1所述的方法,其特征在于,所述根据第一差值对所述可见光曝光参数进行调整,包括:
    若所述第一差值的绝对值没有在预设的第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整。
  3. 根据权利要求1或2所述的方法,其特征在于,在所述根据第一差值对所述可见光曝光参数进行调整之前,所述方法还包括:
    确定曝光参数分配策略集合中是否包括与所述第一差值对应的曝光参数分配策略;
    所述根据第一差值对所述可见光曝光参数进行调整,包括:
    若所述曝光参数分配策略集合中包括与所述第一差值对应的曝光参数分配策略,则按照所述与所述第一差值对应的曝光参数分配策略调整所述可见光曝光参数;
    若所述曝光参数分配策略集合中不包括与所述第一差值对应的曝光参数分配策略,则增加新的曝光参数分配策略,并根据所述新的曝光参数分配策略调整所述可见光曝光参数。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述可见光曝光参数包括曝光时间、光圈直径或曝光增益中的一个或多个;
    所述根据第一差值对所述可见光曝光参数进行调整,包括:
    当所述可见光图像的亮度比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;
    当所述可见光图像的亮度比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述根据第二差值对所述红外补光灯的光强进行调整,包括:
    若所述第二差值的绝对值没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述根据第二差值对所述红外补光灯的光强进行调整,包括:
    当所述红外图像的亮度比所述目标红外图像亮度低时,增大所述红外补光灯的光强;
    当所述红外图像的亮度比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
  7. 根据权利要求6所述的方法,其特征在于,所述增大所述红外补光灯的光强,包括:
    通过减小所述红外补光灯的脉冲宽度调制PWM的占空比增大所述红外补光灯的光强;
    所述减小所述红外补光灯的光强,包括:
    通过增大所述红外补光灯的PWM的占空比减小所述红外补光灯的光强。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,所述根据第一差值对所述可见光曝光参数进行调整,包括:
    若连续N帧可见光图像的所述第一差值的绝对值均没有在所述第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整,N为正整数。
  9. 根据权利要求8所述的方法,其特征在于,所述根据第一差值对所述可见光曝光参数进行调整,包括:
    当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;或者,
    当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述根据第二差值对所述红外补光灯的光强进行调整,包括:
    若连续M帧红外图像的所述第二差值的绝对值均没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整,M为正整数。
  11. 根据权利要求10所述的方法,其特征在于,所述根据第二差值对所述红外补光灯的光强进行调整,包括:
    当所述连续M帧红外图像的亮度均比所述目标红外图像亮度低时,增大所述红外补光灯的光强;
    当所述连续M帧红外图像的亮度均比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
  12. 根据权利要求4至11任一项所述的方法,其特征在于,所述增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个,包括:
    在静止场景下,先增加所述曝光时间,再增加所述曝光增益,最后增加所述光圈直径;在运动场景下,先增加所述曝光增益,再增加所述曝光时间,最后增加所述光圈直径;
    所述减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个,包括:
    在静止场景下,先减小所述曝光时间,再减小所述曝光增益,最后减小所述光圈直径;在运动场景下,先减小所述曝光增益,再减小所述曝光时间,最后减小所述光圈直径。
  13. 一种图像获取装置,其特征在于,包括:
    第一获取模块,用于获取第一原始图像数据,所述第一原始图像数据为图像传感器基于初始的可见光曝光参数和红外补光灯的光强采集得到的,所述第一原始图像数据包括可见光图像数据和红外图像数据;
    第一处理模块,用于根据所述第一原始图像数据获取可见光图像的亮度;根据第一差值对所述可见光曝光参数进行调整,所述第一差值为所述可见光图像的亮度与预设的目标可见光图像亮度之间的差值;
    第二处理模块,用于根据所述第一原始图像数据获取红外图像的亮度;根据第二差值对所述红外补光灯的光强进行调整,所述第二差值为所述红外图像的亮度与预设的目标红外图像亮度之间的差值;
    第二获取模块,用于获取第二原始图像数据,所述第二原始图像数据为图像传感器基于调整后的可见光曝光参数和红外补光灯的光强采集得到的;
    融合模块,用于根据所述第二原始图像数据中的可见光图像数据和红外图像数据融合得到目标图像。
  14. 根据权利要求13所述的装置,其特征在于,所述第一处理模块,具体用于:
    若所述第一差值的绝对值没有在预设的第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整。
  15. 根据权利要求13或14所述的装置,其特征在于,所述第一处理模块,具体用于:
    确定曝光参数分配策略集合中是否包括与所述第一差值对应的曝光参数分配策略;
    若所述曝光参数分配策略集合中包括与所述第一差值对应的曝光参数分配策略,则按照所述与所述第一差值对应的曝光参数分配策略调整所述可见光曝光参数;
    若所述曝光参数分配策略集合中不包括与所述第一差值对应的曝光参数分配策略,则增加新的曝光参数分配策略,并根据所述新的曝光参数分配策略调整所述可见光曝光参数。
  16. 根据权利要求13-15中任一项所述的装置,其特征在于,所述可见光曝光参数包括曝光时间、光圈直径或曝光增益中的一个或多个;所述第一处理模块,具体用于:
    当所述可见光图像的亮度比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;
    当所述可见光图像的亮度比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
  17. 根据权利要求13-16中任一项所述的装置,其特征在于,所述第二处理模块,具体用于:
    若所述第二差值的绝对值没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整。
  18. 根据权利要求13-17中任一项所述的装置,其特征在于,所述第二处理模块,具体用于:
    当所述红外图像的亮度比所述目标红外图像亮度低时,增大所述红外补光灯的光强;
    当所述红外图像的亮度比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
  19. 根据权利要求18所述的装置,其特征在于,所述第二处理模块,具体用于:
    通过减小所述红外补光灯的脉冲宽度调制PWM的占空比增大所述红外补光灯的光强;
    通过增大所述红外补光灯的PWM的占空比减小所述红外补光灯的光强。
  20. 根据权利要求13-19中任一项所述的装置,其特征在于,所述第一处理模块,具体用于:
    若连续N帧可见光图像的所述第一差值的绝对值均没有在所述第一范围内,则根据所述第一差值对所述可见光曝光参数进行调整,N为正整数。
  21. 根据权利要求20所述的装置,其特征在于,所述第一处理模块,具体用于:
    当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度低时,增大所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个;或者,
    当所述连续N帧可见光图像的亮度均比所述目标可见光图像亮度高时,减小所述曝光时间、所述光圈直径或所述曝光增益中的一个或多个。
  22. 根据权利要求13-21中任一项所述的装置,其特征在于,所述第二处理模块,具体用于:
    若连续M帧红外图像的所述第二差值的绝对值均没有在预设的第二范围内,则根据所述第二差值对所述红外补光灯的光强进行调整,M为正整数。
  23. 根据权利要求22所述的装置,其特征在于,所述第二处理模块,具体用于:
    当所述连续M帧红外图像的亮度均比所述目标红外图像亮度低时,增大所述红外补光灯的光强;
    当所述连续M帧红外图像的亮度均比所述目标红外图像亮度高时,减小所述红外补光灯的光强。
  24. 根据权利要求16至23任一项所述的装置,其特征在于,所述第一处理模块,具体用于:
    在静止场景下,先增加所述曝光时间,再增加所述曝光增益,最后增加所述光圈直径;在运动场景下,先增加所述曝光增益,再增加所述曝光时间,最后增加所述光圈直径;在静止场景下,先减小所述曝光时间,再减小所述曝光增益,最后减小所述光圈直径;在运动场景下,先减小所述曝光增益,再减小所述曝光时间,最后减小所述光圈直径。
  25. 一种图像获取装置,其特征在于,包括:
    一个或多个处理器,被配置为调用存储在存储器中的程序指令,以执行如权利要求1-12中任一项所述的方法。
  26. 一种计算机可读存储介质,其特征在于,包括计算机程序,所述计算机程序在计算机或处理器上被执行时,使得所述计算机或处理器执行权利要求1-12中任一项所述的方法。
PCT/CN2021/075085 2020-02-14 2021-02-03 图像获取方法和装置 WO2021160001A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21753011.2A EP4102817A4 (en) 2020-02-14 2021-02-03 IMAGE ACQUISITION METHOD AND DEVICE
US17/886,761 US20220392182A1 (en) 2020-02-14 2022-08-12 Image acquisition method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010093488.2A CN113271414B (zh) 2020-02-14 2020-02-14 图像获取方法和装置
CN202010093488.2 2020-02-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/886,761 Continuation US20220392182A1 (en) 2020-02-14 2022-08-12 Image acquisition method and device

Publications (1)

Publication Number Publication Date
WO2021160001A1 true WO2021160001A1 (zh) 2021-08-19

Family

ID=77227324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075085 WO2021160001A1 (zh) 2020-02-14 2021-02-03 图像获取方法和装置

Country Status (4)

Country Link
US (1) US20220392182A1 (zh)
EP (1) EP4102817A4 (zh)
CN (1) CN113271414B (zh)
WO (1) WO2021160001A1 (zh)

Also Published As

Publication number Publication date
US20220392182A1 (en) 2022-12-08
CN113271414B (zh) 2022-11-18
EP4102817A1 (en) 2022-12-14
CN113271414A (zh) 2021-08-17
EP4102817A4 (en) 2023-07-19