WO2021147804A1 - An imaging system and image processing method (一种成像系统及图像处理方法)

Info

Publication number: WO2021147804A1
Application number: PCT/CN2021/072427
Authority: WO (WIPO/PCT)
Prior art keywords: channel, type, image signal, sub, data
Other languages: English (en), French (fr)
Inventors: 聂鑫鑫, 於敏杰
Original Assignee: 杭州海康威视数字技术股份有限公司
Application filed by 杭州海康威视数字技术股份有限公司 (Hangzhou Hikvision Digital Technology Co., Ltd.)
Publication of WO2021147804A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/72 Combination of two or more compensation controls
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals

Definitions

  • This application relates to the field of computer vision technology, and in particular to an imaging system and an image processing method.
  • The image sensor uses a photoelectric conversion device, such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
  • The image sensor includes multiple types of channels, such as RGB (color) channels. Each type of channel is arranged to correspond to pixels in the pixel array: each channel responds to the light component that passes through it, and the optical signal at the corresponding pixel is converted into an image signal.
  • For example, the image sensor includes RGB channels, and the RGB channels can also respond to part of the NIR light energy, resulting in a poor imaging effect. Therefore, in the corresponding imaging system, interpolation is applied to the RGB channels, and the resulting interpolated image has a higher resolution, thereby improving the imaging effect.
  • the purpose of the embodiments of the present application is to provide an imaging system and an image processing method to improve the imaging effect of the imaging system.
  • the specific technical solutions are as follows:
  • an embodiment of the present application provides an imaging system, which includes: an image sensor, a statistical unit, and an exposure control unit; the image sensor includes multiple types of channels;
  • the image sensor is used to convert light signals into image signals, where the light signals include light components in a variety of wavelength ranges;
  • the statistical unit is used to obtain image signals; extract image data of various channels in the image signal; perform statistics on the image data of various channels to obtain statistical data of various channels; and send the statistical data of various channels to the exposure control unit;
  • the exposure control unit is used to receive the statistical data of the various channels sent by the statistical unit; for any type of channel, calculate the exposure parameter corresponding to that type of channel according to its statistical data, and, based on the exposure parameter, control the brightness adjustment of the image data of that type of channel.
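  • As a rough illustration only, the following Python sketch shows one way such a per-channel statistics and exposure-control flow could be organized; the mean-brightness statistic, the target values and the simple multiplicative gain are assumptions made for illustration, not the concrete method of this application.

        import numpy as np

        def channel_statistics(image_data_by_channel):
            """Statistical unit: one brightness statistic per channel type."""
            return {ch: float(np.mean(data)) for ch, data in image_data_by_channel.items()}

        def exposure_parameters(stats, targets):
            """Exposure control unit: an independent exposure gain per channel type (assumed model)."""
            return {ch: targets[ch] / max(stats[ch], 1e-6) for ch in stats}

        def adjust_brightness(image_data_by_channel, gains, max_value=255.0):
            """Apply each channel's exposure parameter to that channel's image data only."""
            return {ch: np.clip(data * gains[ch], 0, max_value)
                    for ch, data in image_data_by_channel.items()}

        # Usage with assumed per-channel target brightness levels:
        # stats = channel_statistics({"RGB": rgb_data, "NIR": nir_data})
        # gains = exposure_parameters(stats, {"RGB": 110.0, "NIR": 90.0})
        # adjusted = adjust_brightness({"RGB": rgb_data, "NIR": nir_data}, gains)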
  • an embodiment of the present application provides an image processing method applied to an imaging system; the method includes: obtaining the image signal obtained by the image sensor converting the light signal; extracting the image data of the various channels in the image signal; performing statistics on the image data of the various channels to obtain statistical data of the various channels; and, for any type of channel, calculating the exposure parameter corresponding to that type of channel according to its statistical data and, based on the exposure parameter, controlling the brightness adjustment of the image data of that type of channel.
  • the imaging system includes: an image sensor, a statistical unit, and an exposure control unit.
  • the statistical unit is used to obtain the image signal obtained by the image sensor converting the light signal, extract the image data of various channels in the image signal, and perform statistics on the image data of various channels to obtain statistical data of various channels.
  • The statistical data of the various channels are sent to the exposure control unit. The exposure control unit is used to receive the statistical data of the various channels sent by the statistical unit and, for any type of channel, calculate the corresponding exposure parameters according to the statistical data of that type of channel and, based on the exposure parameters, control the brightness adjustment of the image data of that type of channel.
  • a statistical unit and an exposure control unit are added to the imaging system.
  • the statistical unit performs statistics on the image data of each type of channel.
  • The exposure control unit calculates the exposure parameters corresponding to a type of channel based on the statistical data of that type of channel and, based on the calculated exposure parameters, controls the brightness adjustment of the image data of that type of channel, so that each type of channel is exposed independently in accordance with the actual situation that different types of channels respond to light components of different energy.
  • In this way, the brightness of the image data of each type of channel is controlled within a proper brightness range, thereby improving the final imaging effect.
  • FIG. 1 is a schematic structural diagram of an imaging system according to an embodiment of the application.
  • FIG. 2a is a schematic diagram of an arrangement of an image sensor according to an embodiment of the application.
  • FIG. 2b is a schematic diagram of an arrangement of an image sensor according to another embodiment of the application.
  • FIG. 2c is a schematic diagram of an arrangement of an image sensor according to still another embodiment of the application.
  • FIG. 3 is a schematic diagram of the spectral response curves of the RGB channels and the NIR channel according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of the spectral response curve of the W channel according to an embodiment of the application.
  • FIG. 5 is a schematic structural diagram of an imaging system according to another embodiment of the application.
  • FIG. 6 is a schematic diagram of a spectral pass rate curve of a filter unit according to an embodiment of the application.
  • FIG. 7 is a schematic structural diagram of an imaging system according to still another embodiment of the application.
  • FIG. 8 is a schematic structural diagram of an imaging system according to another embodiment of the application.
  • FIG. 9 is a schematic diagram of a near-infrared energy distribution curve at 850 nm according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of a flow of exposure gain adjustment according to an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of an imaging system according to another embodiment of the application.
  • FIG. 12 is a schematic diagram of an implementation flow of a processing unit according to an embodiment of the application.
  • FIG. 13 is a schematic diagram of an implementation flow of a post-processing module according to an embodiment of the application.
  • FIG. 14 is a schematic diagram of an implementation flow of a post-processing module according to another embodiment of the application.
  • FIG. 15 is a schematic diagram of an implementation flow of a post-processing module according to still another embodiment of the application.
  • FIG. 16 is a schematic diagram of an implementation flow of a processing unit according to another embodiment of the application.
  • FIG. 17 is a schematic diagram of an implementation flow of a post-processing module according to yet another embodiment of the application.
  • FIG. 18 is a schematic diagram of an implementation flow of a processing unit according to still another embodiment of the application.
  • FIG. 19 is a schematic diagram of an implementation flow of a processing unit according to yet another embodiment of the application.
  • FIG. 20 is a schematic flowchart of an image processing method according to an embodiment of the application.
  • embodiments of the present application provide an imaging system and an image processing method.
  • the imaging system includes an image sensor 11, a statistical unit 12 and an exposure control unit 13.
  • the image sensor 11 includes multiple types of channels, and each channel is used to respond to passing light components.
  • The image sensor 11 is used to convert light signals into image signals, and the light signals include light components in a variety of wavelength bands.
  • The statistical unit 12 is used to obtain the image signals, extract the image data of the various channels in the image signals, perform statistics on the image data of the various channels respectively to obtain statistical data of the various channels, and send the statistical data of the various channels to the exposure control unit 13.
  • The exposure control unit 13 is used to receive the statistical data of the various channels sent by the statistical unit 12 and, for any type of channel, calculate the exposure parameter corresponding to that type of channel according to the statistical data of that type of channel, and control the brightness adjustment of the image data of that type of channel based on the exposure parameter.
  • the statistical unit performs statistics on the image data of each type of channel
  • The exposure control unit calculates the exposure parameters corresponding to a type of channel according to the statistical data of that type of channel and, based on the calculated exposure parameters, controls the brightness adjustment of the image data of that type of channel.
  • According to the actual situation that different types of channels respond to light components of different energy, independent exposure is performed for each type of channel, so that the brightness of that type of channel is controlled within a suitable brightness range, thereby improving the final imaging effect.
  • the imaging system may be an image capturing system (such as a digital camera, a camcorder, a surveillance camera, etc.), and the imaging system may also be an imaging module installed on a computer, a multimedia player, a mobile phone, and other devices.
  • the image sensor 11 includes multiple types of channels, and each type of channel is used to respond to light components in a different wavelength range.
  • the image sensor 11 may include: a first-type channel that responds to light components in the visible light waveband, and a second-type channel that responds to light components in the near-infrared light waveband.
  • the image sensor 11 is an RGB-IR sensor, which specifically includes two types of channels.
  • the first type of channel is an RGB channel
  • the second type of channel may be an NIR channel.
  • The image sensor may adopt the arrangement shown in Fig. 2a, Fig. 2b or Fig. 2c, in which the pixel array is formed by the various channels and each channel responds to the light component passing through that channel.
  • In the arrangement of Fig. 2a, the R (red) channel and the B (blue) channel each account for 1/8 of the total number of pixels, the NIR channel accounts for 1/4 of the total number of pixels, and the G (green) channel accounts for 1/2 of the total number of pixels.
  • In the arrangement of Fig. 2b, the R and B channels account for 1/4 of the total number of pixels, the G channel accounts for 1/2 of the total number of pixels, and the NIR channel accounts for all of the pixels.
  • In the arrangement of Fig. 2c, the R and B channels together account for 3/8 of the total number of pixels, the G channel occupies 3/8 of the total number of pixels, and the NIR channel occupies 1/4 of the total number of pixels.
  • the spectral response of each channel is shown in Figure 3.
  • Within the wavelength range of 400 nm to 700 nm, the relative response of the first type of channel to the light component is not lower than its relative response in the wavelength range of 700 nm to 1000 nm; within the wavelength range of 800 nm to 1000 nm, the relative response of the second type of channel to the light component is not lower than the relative response of the first type of channel to the light component.
  • the second type of channel can also be a W channel.
  • the W channel is a channel that can respond to light components in the visible light waveband and light components in the near-infrared light waveband.
  • The spectral response of the W channel is shown in Figure 4: the W channel responds to light components throughout the wavelength range of 400 nm to 1000 nm.
  • a lens (not shown in FIG. 1) is provided on the input side of the image sensor 11 for receiving the light signal reflected by the target object.
  • The optical signal includes a visible light component, a near-infrared light component, etc., and the lens can make the visible light component and the near-infrared light component meet the confocal requirement.
  • the statistical unit 12 is used to perform image data statistics, and may be a chip with arithmetic function.
  • When counting the image data, the statistical unit mainly performs brightness statistics on the image data.
  • The image data of each type of channel consists of the pixel values at the positions in the pixel array corresponding to that type of channel, and the pixel value can reflect the pixel brightness. Therefore, when performing statistics, the pixel values in a type of channel can be counted directly, and the resulting brightness statistic is the statistical data of that type of channel.
  • the first type of channel includes a plurality of color channels
  • the second type of channel includes a near-infrared channel
  • The statistical unit 12 may be specifically configured to: calculate, based on the image data of at least one of the multiple color channels, a statistical value of the image data of the first type of channel as the statistical data of the first type of channel; and calculate, based on the image data of the near-infrared channel, a statistical value of the image data of the second type of channel as the statistical data of the second type of channel.
  • the first type of channel is the color channel, which specifically includes the red channel, the green channel and the blue channel, or the red channel, the yellow channel and the blue channel.
  • For example, the image data of each color channel can be counted, and a calculated statistical value of the image data of the first type of channel (such as the average of the image data, the sum of the image data, etc.) is used as the statistical data of the first type of channel; alternatively, the average value of the image data of the red channel or the green channel can be calculated as the statistical data of the first type of channel.
  • the second type of channel is the near-infrared channel, and the image data of the near-infrared channel is counted, and the calculated statistical value of the image data of the second type of channel is used as the statistical data of the second type of channel.
  • the statistical unit 12 may be specifically used for:
  • extract the image data of each color channel and the image data of the near-infrared channel from the image signal; calculate the average value of the image data of each color channel and the average value of the image data of the near-infrared channel respectively; perform a weighted summation of the average values of the image data of the color channels; use the weighted summation result as the statistical data of the first type of channel, and use the average value of the image data of the near-infrared channel as the statistical data of the second type of channel.
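  • An illustrative sketch of this global statistic is given below; the weights and channel names are assumptions, not values fixed by this application.

        import numpy as np

        def global_statistics(r, g, b, nir, weights=(0.299, 0.587, 0.114)):
            """r, g, b, nir are arrays of the image data extracted for each channel."""
            r_ave, g_ave, b_ave, nir_ave = (float(np.mean(x)) for x in (r, g, b, nir))
            wr, wg, wb = weights                      # assumed luminance-style weights
            y = wr * r_ave + wg * g_ave + wb * b_ave  # statistical data of the first type of channel
            return y, nir_ave                         # (first-type statistic, second-type statistic)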
  • Data statistics include three statistical methods: global statistics, block statistics, and histogram statistics.
  • The three statistical methods of the statistical unit will be introduced respectively below.
  • Histogram statistics: first, the number of bits of the input data of the R channel, G channel and B channel needs to be considered; if it is more or fewer than 8 bits, the input data is converted to 8 bits. Then the histograms of the R channel, G channel, B channel and NIR channel are calculated to obtain Rhist, Ghist, Bhist and NIRhist, where the number of gray levels of each histogram is 256.
  • The histogram of each channel is weighted and averaged over the gray levels, where w(n) is the weight of each gray level, Y is the resulting statistical data of the first type of channel, and NIRave is the statistical data of the second type of channel. An illustrative sketch of such a computation is given below.
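  • The following sketch shows one plausible per-gray-level weighted average; the exact formula of this application is not reproduced here, and the weight array w and the bit-depth handling are assumptions.

        import numpy as np

        def histogram_statistic(channel_data, w, bit_depth=8):
            """Weighted average of a 256-level histogram; channel_data is one channel's image data."""
            data = np.asarray(channel_data, dtype=np.float64)
            if bit_depth != 8:                       # convert input data to 8 bits if needed
                data = data * (255.0 / (2 ** bit_depth - 1))
            hist, _ = np.histogram(data, bins=256, range=(0, 256))   # 256 gray levels
            levels = np.arange(256)
            return float(np.sum(w * hist * levels) / max(np.sum(w * hist), 1))

        # Usage sketch (weights w are an assumption, e.g. uniform):
        # w = np.ones(256)
        # Y = (histogram_statistic(R, w) + histogram_statistic(G, w) + histogram_statistic(B, w)) / 3
        # NIRave = histogram_statistic(NIR, w)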
  • the exposure control unit 13 may be a chip with arithmetic function, which receives the statistical data of various channels sent by the statistical unit 12, and calculates the exposure parameters corresponding to the various channels according to the statistical data of the various channels.
  • The exposure parameters may include exposure time, analog gain, digital gain, etc., and may also include the aperture of the lens.
  • the aperture may be considered to be a fixed size under certain conditions.
  • When the exposure control unit 13 controls the brightness adjustment of the image data of the corresponding channels, it may send the exposure parameters to the image sensor, and the image sensor applies the exposure parameters to the various channels and adjusts the brightness of the various channels so that the brightness of the image data of the various channels is within the preset brightness range; it may also send the exposure parameters to an image processing unit, which directly adjusts the image data of the corresponding channels output by the image sensor based on the exposure parameters, so that the brightness of the image data of the various channels is within the preset brightness range.
  • For different types of channels, different preset brightness ranges can be set, or the same preset brightness range can be set; this is not specifically limited here.
  • the final output image signal can be made to conform to the brightness range.
  • Regarding the manner of controlling the brightness adjustment based on the exposure parameters, the exposure parameters are mainly sent to the image sensor or the image processing unit, and the image sensor or the image processing unit is controlled to adjust the exposure time and exposure gain. The specific adjustment method is a conventional exposure adjustment method, which is not specifically limited here.
  • The functions of the statistical unit 12 and the exposure control unit 13 may be executed by one processor (or microprocessor), or may be executed by multiple processors (or microprocessors).
  • the embodiment of the present application also provides an imaging system.
  • the imaging system includes an image sensor 11, a statistical unit 12, an exposure control unit 13 and a filter unit 14.
  • the image sensor 11 includes multiple types of channels, and the channels are used to respond to passing light components to obtain a pixel in the image signal.
  • The filter unit 14 is used to filter out the light components in the input optical signal other than the light component in the specified wavelength range (for example, to filter out the near-infrared light outside the wavelength range of 700 nm to 1000 nm), and to transmit the filtered light signal to the image sensor 11.
  • the image sensor 11 is used to convert the light signal into an image signal, and the light signal includes light components in a variety of wavelength ranges;
  • The statistical unit 12 is used to obtain the image signal, extract the image data of the various types of channels in the image signal, perform statistics on the image data of each type of channel respectively to obtain the statistical data of each type of channel, and send the statistical data of each type of channel to the exposure control unit 13.
  • The exposure control unit 13 is used to receive the statistical data of the various types of channels sent by the statistical unit 12 and, for any type of channel, calculate the exposure parameter corresponding to that type of channel according to the statistical data of that type of channel, and control the brightness adjustment of the image data of that type of channel based on the exposure parameter.
  • the statistical unit performs statistics on the image data of each type of channel
  • The exposure control unit calculates the exposure parameters corresponding to a type of channel according to the statistical data of that type of channel and, based on the calculated exposure parameters, controls the brightness adjustment of the image data of that type of channel.
  • According to the actual situation that different types of channels respond to light components of different energy, independent exposure is performed for each type of channel, so that the brightness of the image data of that type of channel is controlled within a suitable brightness range, thereby improving the final imaging effect.
  • By setting a filter unit at the front end of the image sensor to filter the input light signal, the light components in the designated wavelength range are passed through to the image sensor, while the light components in other wavelength ranges are filtered out.
  • the filter unit 14 can pass the light components of visible light and the light components of near-infrared light in the designated wavelength range, and filter out the light components in other wavelength ranges.
  • The filter unit 14 can be a filter, and different filters can pass light components in different wavelength ranges. Therefore, the filter can be selected according to the wavelength range of visible light and the specified wavelength range, such as the filter unit whose spectral pass rate curve is shown in FIG. 6.
  • For reference, the wavelength of red light is 700 nm, the wavelength of green light is 546.1 nm, the wavelength of blue light is 435.8 nm, the wavelength of near-infrared light is 780–3000 nm, and the wavelength range of the visible light considered here is 390–640 nm.
  • For the filter unit shown in FIG. 6, the pass rate of the light component within 820–880 nm can reach more than 60%, which ensures that the green light and blue light in the visible light and the near-infrared light within the wavelength range of 820–880 nm pass through, while the other light components are filtered out.
  • the filter unit may include a switching device.
  • the switching device is used to switch the filtering state of the filter unit 14.
  • The filter unit 14 can be used, when the filtering state is on, to filter out the light components in the input light signal other than the light component within the specified wavelength range and to transmit the filtered light signal to the image sensor 11; when the filtering state is off, all light components in the light signal are transmitted to the image sensor 11.
  • The switching device can be implemented by hardware. For example, the filter is installed on a rotatable robotic arm; in a scene where light component filtering is required, the filter is rotated in front of the lens of the image sensor to filter the light components; in a scene where filtering is not required, the filter in front of the lens of the image sensor is rotated away from the lens, so that it no longer filters the light components entering the lens. In this way, switching between the on and off filtering states is realized.
  • Multiple filters can also be provided, for example a filter that passes the light component of visible light and the light component of near-infrared light in a specified wavelength band, and a filter that passes only the light component of visible light; the filters can be rotated and switched between according to actual needs.
  • the filter unit 14 may have a switching device for switching the passing state of the filter unit 14.
  • When the switching device switches the filtering state of the filter unit 14 to on, the filter unit 14 filters out the light components in the input optical signal other than the light component within the specified wavelength range and transmits the filtered light signal to the image sensor 11, so that only the light components in the designated wavelength range are transmitted to the image sensor 11; when the switching device switches the filtering state of the filter unit 14 to off, all light components in the light signal are transmitted to the image sensor 11.
  • When the filtering state of the filter unit 14 is switched to on, the filter unit 14 can filter out the light components in the input optical signal other than visible light and part of the near-infrared light, and transmit the filtered optical signal to the image sensor 11, so that only the light components of visible light and part of the near-infrared light are transmitted to the image sensor 11; the filter unit 14 can also filter out the light components in the input light signal other than visible light and transmit the filtered light signal to the image sensor 11, so that only the light component of visible light is transmitted to the image sensor 11.
  • the embodiment of the present application also provides an imaging system.
  • the imaging system includes an image sensor 11, a statistical unit 12, an exposure control unit 13 and a light supplement unit 15.
  • the image sensor 11 includes multiple types of channels, and the channels are used to respond to passing light components to obtain a pixel in the image signal.
  • The light supplement unit 15 is used to perform near-infrared supplementary light on the scene so that the input light signal includes near-infrared light; the image sensor 11 is used to convert the light signal into an image signal, and the light signal includes light components in a variety of wavelength ranges.
  • The statistical unit 12 is used to obtain the image signal, extract the image data of the various channels in the image signal, perform statistics on the image data of the various channels to obtain the statistical data of the various channels, and send the statistical data of the various channels to the exposure control unit 13. The exposure control unit 13 is used to receive the statistical data of the various channels sent by the statistical unit 12 and, for any type of channel, calculate the exposure parameter corresponding to that type of channel according to the statistical data of that type of channel, and control the brightness adjustment of the image data of that type of channel based on the exposure parameter.
  • the statistical unit performs statistics on the image data of each type of channel
  • The exposure control unit calculates the exposure parameters corresponding to a type of channel according to the statistical data of that type of channel and, based on the calculated exposure parameters, controls the brightness adjustment of the image data of that type of channel.
  • According to the actual situation that different types of channels respond to light components of different energy, independent exposure is performed for each type of channel, so that the brightness of the image data of that type of channel is controlled within a suitable brightness range, thereby improving the final imaging effect.
  • By means of the light supplement unit, the near-infrared light component in the optical signal is increased, so that the brightness of the near-infrared light channel is increased, which helps to improve the near-infrared light imaging.
  • the imaging system includes an image sensor 11, a statistical unit 12, an exposure control unit 13, a filter unit 14 and a light supplement unit 15.
  • the image sensor 11 includes multiple types of channels, and the channels are used to respond to the passed light components to obtain a pixel in the image signal.
  • The light supplement unit 15 is used to perform near-infrared supplementary light on the scene so that the input light signal includes near-infrared light; the filter unit 14 is used to filter out the light components in the input light signal other than the light components in the specified wavelength range and to transmit the filtered light signal to the image sensor 11; the image sensor 11 is used to convert the light signal into an image signal, where the light signal includes light components in a variety of wavelength ranges; the statistical unit 12 is used to obtain the image signal, extract the image data of the various channels in the image signal, perform statistics on the image data of the various channels to obtain the statistical data of the various channels, and send the statistical data of the various channels to the exposure control unit 13; the exposure control unit 13 is used to receive the statistical data of the various channels sent by the statistical unit 12 and, for any type of channel, calculate the exposure parameter corresponding to that type of channel according to the statistical data of that type of channel and, based on the exposure parameter, control the brightness adjustment of the image data of that type of channel.
  • the supplementary light unit 15 is used for near-infrared supplementary light, of course, it can also generate visible supplementary light at the same time.
  • the energy of the near-infrared supplement light generated by the supplement light unit 15 is distributed in the range of 650 nm to 1000 nm. Specifically, the energy is concentrated in the range of 750 nm to 900 nm, or the range of 900 nm to 1000 nm.
  • When combined with the filter unit 14, it is required that the energy distribution range of the near-infrared light of the light-filling unit 15 is not smaller than the near-infrared pass range preset by the filter unit 14.
  • When the near-infrared supplementary light is turned on, the filter unit 14 can be turned on automatically; when the near-infrared supplementary light is turned off during the day, the filter unit 14 can be closed automatically.
  • a light supplement device in the near-infrared light band is used to supplement light.
  • an infrared lamp with a wavelength of 850nm can be used as the light supplement unit 15, or an infrared lamp with a wavelength of 750nm, 780nm, 850nm, 860nm, 940nm can be used as the light supplement unit 15.
  • Taking the 850 nm infrared lamp as an example, its energy distribution curve is as shown in FIG. 9, with the energy mainly concentrated in the range of 830 nm to 880 nm.
  • the image sensor 11 may include a second-type channel that responds to light components in the near-infrared light waveband.
  • the exposure control unit 13 can also be used to control the fill light unit 15 to adjust the fill light intensity according to the statistical data of the second type of channel.
  • The exposure control unit 13 can also control the light-filling unit 15 to adjust the light-filling intensity; after the light-filling intensity is adjusted, the image data of the second type of channel can be brought into the predetermined brightness range.
  • The exposure control unit 13 may be specifically used to: if the statistical data of the second type of channel is greater than the first preset threshold, control the light supplement unit 15 to reduce the intensity of the emitted near-infrared light; if the statistical data of the second type of channel is less than the second preset threshold, control the light supplement unit 15 to increase the intensity of the emitted near-infrared light.
  • When the exposure control unit 13 controls the light-filling unit 15 to adjust the light-filling intensity, it specifically judges whether the statistical data of the second type of channel is greater than the first preset threshold.
  • The statistical data of the second type of channel may be a statistical value (such as the average value) of the image data of the second type of channel.
  • The corresponding first preset threshold is the maximum brightness threshold: if the statistical data is greater than the first preset threshold, it means that the brightness of the second type of channel is too high and needs to be reduced, and the light supplement unit 15 is controlled to reduce the intensity of the emitted near-infrared light.
  • The corresponding second preset threshold is the minimum brightness threshold: if the statistical data is less than the second preset threshold, it means that the brightness of the second type of channel is too low and needs to be increased, and the light supplement unit 15 is controlled to increase the intensity of the emitted near-infrared light. Since the first preset threshold is the maximum brightness threshold and the second preset threshold is the minimum brightness threshold, in general, the first preset threshold is greater than the second preset threshold. A minimal sketch of this threshold logic is given below.
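  • Minimal sketch only; the threshold values and the step size are assumptions, not values given in this application.

        def adjust_fill_light(nir_statistic, intensity,
                              max_brightness_threshold=180.0,   # "first preset threshold" (assumed value)
                              min_brightness_threshold=60.0,    # "second preset threshold" (assumed value)
                              step=0.1, min_intensity=0.0, max_intensity=1.0):
            """Return the new fill-light intensity for the light supplement unit."""
            if nir_statistic > max_brightness_threshold:
                intensity = max(min_intensity, intensity - step)   # NIR channel too bright: reduce fill light
            elif nir_statistic < min_brightness_threshold:
                intensity = min(max_intensity, intensity + step)   # NIR channel too dark: increase fill light
            return intensity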
  • The exposure control unit 13 controls the fill light unit 15 to adjust the fill light intensity after the exposure control unit 13 has controlled the brightness adjustment of the image data of the various channels. That is, if the imaging effect is still poor (the image is too dark or too bright) after the exposure control unit 13 controls the brightness adjustment of the image data of the various channels, the exposure control unit 13 can control the fill light unit 15 to adjust the fill light intensity so that the image brightness meets the predetermined brightness range.
  • the image sensor may further include: a first-type channel that responds to light components in the visible light band; and the exposure control unit 13 may be specifically used for:
  • obtaining the first exposure time of the first type of channel, the first target data corresponding to the first type of channel, the second exposure time of the second type of channel, and the second target data of the second type of channel; calculating the first data offset of the first type of channel according to the statistical data of the first type of channel and the first target data, and, if the first data offset is not within the first preset range, calculating the first exposure gain according to the statistical data of the first type of channel and the first target data; calculating the second data offset of the second type of channel according to the statistical data of the second type of channel and the second target data, and, if the second data offset is not within the second preset range, calculating the second exposure gain according to the statistical data of the second type of channel and the second target data; if the first exposure time is equal to the second exposure time, controlling the light supplement unit to reduce the intensity of the emitted near-infrared light when the second exposure gain is less than the first preset gain threshold, and controlling the light supplement unit to increase the intensity of the emitted near-infrared light when the second exposure gain is greater than the second preset gain threshold; if the first exposure time is not equal to the second exposure time, reducing the second exposure time if the second exposure gain is less than the first preset gain threshold, and increasing the second exposure time if the second exposure gain is greater than the second preset gain threshold.
  • With reference to FIG. 10, the steps for calculating Gain1 and Gain2 include the following.
  • In the first step, the data offsets are calculated: for the first type of channel, the first data offset is calculated according to the statistical data of the first type of channel and the first target data; for the second type of channel, the data offset (NIRave delta) is calculated according to the statistical data of the second type of channel and the second target data.
  • In the second step, it is determined whether each data offset is within the corresponding preset range; if not, the third step is performed. For the first type of channel and the second type of channel, the preset ranges that are set may be the same or different: a first preset range is set for the first type of channel, and a second preset range is set for the second type of channel.
  • In the third step, the first exposure gain Gain1 and the second exposure gain Gain2 are calculated.
  • If it is recognized that the first exposure time T1 is equal to the second exposure time T2, then when Gain2 is less than the first preset gain threshold, the light supplement unit can be controlled to reduce the intensity of the emitted near-infrared light; it can be seen that the first preset gain threshold refers to the minimum acceptable second exposure gain. When Gain2 is greater than the second preset gain threshold, the light supplement unit can be controlled to increase the intensity of the emitted near-infrared light, thereby realizing the adjustment of the fill light intensity; the second preset gain threshold refers to the maximum acceptable second exposure gain. Therefore, in general, the first preset gain threshold is less than the second preset gain threshold.
  • If it is recognized that T1 and T2 are different, then when Gain2 is less than the first preset gain threshold, the brightness adjustment of the image data of the channel can be achieved by reducing T2; when Gain2 is greater than the second preset gain threshold, the brightness adjustment of the image data of the channel can be achieved by increasing T2. An illustrative sketch of this adjustment flow is given below.
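  • Illustrative sketch only; the gain formula, offset range, thresholds and step sizes are assumptions made for illustration and not the exact method of FIG. 10.

        def exposure_adjustment(y, nir_ave, y_target, nir_target, t1, t2, fill_light,
                                offset_range=10.0, min_gain_threshold=1.0, max_gain_threshold=8.0,
                                fill_step=0.1, time_step_ms=1.0):
            """y / nir_ave: channel statistics; y_target / nir_target: target data; t1 / t2: exposure times."""
            # Step 1: data offsets for the first- and second-type channels.
            y_delta, nir_delta = y - y_target, nir_ave - nir_target

            # Steps 2 and 3: if an offset is outside its preset range, compute the exposure gain
            # (assumed here to be the ratio of the target data to the statistical data).
            gain1 = y_target / max(y, 1e-6) if abs(y_delta) > offset_range else None
            gain2 = nir_target / max(nir_ave, 1e-6) if abs(nir_delta) > offset_range else None

            if gain2 is not None:
                if t1 == t2:
                    # Same exposure time for both channel types: adjust the fill-light intensity.
                    if gain2 < min_gain_threshold:
                        fill_light -= fill_step     # NIR too bright even at the minimum acceptable gain
                    elif gain2 > max_gain_threshold:
                        fill_light += fill_step     # NIR too dark even at the maximum acceptable gain
                else:
                    # Independent exposure times: adjust the second exposure time T2 instead.
                    if gain2 < min_gain_threshold:
                        t2 -= time_step_ms
                    elif gain2 > max_gain_threshold:
                        t2 += time_step_ms
            return gain1, gain2, t2, fill_light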
  • the embodiment of the present application also provides an imaging system.
  • the imaging system includes an image sensor 11, a statistical unit 12, an exposure control unit 13 and a processing unit 16.
  • the image sensor 11 includes multiple types of channels, and the channels are used to respond to passing light components to obtain a pixel in the image signal.
  • the image sensor 11 is used to convert light signals into image signals, and the light signals include light components in a variety of wavelength bands;
  • The processing unit 16 may be a chip with an arithmetic function, and is used to obtain the image signal output by the image sensor 11, the current exposure parameters corresponding to the various channels, and the associated information between the various channels; to determine the correlation between each two types of channels according to the current exposure parameters corresponding to the various channels and the associated information between the various channels; and, according to the correlation between each two types of channels, to remove the light component of another type of channel contained in one type of channel.
  • The statistical unit 12 is used to obtain the image signal, extract the image data of the various channels in the image signal, perform statistics on the image data of the various channels respectively to obtain the statistical data of the various channels, and send the statistical data of the various channels to the exposure control unit 13.
  • The exposure control unit 13 is used to receive the statistical data of the various channels sent by the statistical unit 12 and, for any type of channel, calculate the exposure parameter corresponding to that type of channel according to the statistical data of that type of channel, and control the brightness adjustment of the image data of that type of channel based on the exposure parameter.
  • the functions of the statistical unit 12, the exposure control unit 13, and the processing unit 16 may be executed by one processor, or may be executed by multiple processors, which are not specifically limited here.
  • the statistical unit performs statistics on the image data of each type of channel
  • The exposure control unit calculates the exposure parameters corresponding to a type of channel according to the statistical data of that type of channel and, based on the calculated exposure parameters, controls the brightness adjustment of the image data of that type of channel.
  • According to the actual situation that different types of channels respond to light components of different energy, independent exposure is performed for each type of channel, so that the brightness of the image data of that type of channel is controlled within a suitable brightness range, thereby improving the final imaging effect.
  • the processing unit analyzes the correlation between each two types of channels, and removes the light components of the other type of channels included in one type of channel. The correlation is related to the exposure parameters, which is beneficial to obtain more accurate color information.
  • the processing unit 16 is a logic platform that contains image processing algorithms or programs.
  • the platform can be a CPU (Central Processing Unit), NP (Network Processor, network processor), etc.; it can also be a DSP (Digital Signal Processor, Digital signal processor), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
  • The processing unit 16 processes the image signal to obtain the image data of the various channels, and can obtain the current exposure parameters corresponding to the various channels (which may include exposure time, exposure gain, etc.). According to the current exposure parameters corresponding to the various channels and the associated information between the various channels, the correlation between each two types of channels can be determined.
  • The associated information refers to the influence of a certain image attribute of one type of channel on a certain image attribute of another type of channel, for example, the influence of the brightness of the NIR channel on the color of the RGB channels. In actual imaging, a part of the near-infrared light energy enters the pixels corresponding to the RGB channels, causing an image deviation at the pixels corresponding to the RGB channels.
  • In this example, the associated information refers to which colors of the RGB channels are affected by the near-infrared light, while the correlation indicates the degree and magnitude of this mutual influence of the image attributes.
  • The associated information of the image attributes can be analyzed by the processing unit after acquiring the image signal, or it can be analyzed in advance based on image signals and then sent to the processing unit.
  • In this example, the correlation refers to the proportion of near-infrared light energy in the pixels corresponding to the RGB channels, which can be calculated through calibration and other means.
  • From the correlation between each two types of channels, the magnitude and extent of the mutual influence of the image data between the two channels can be clearly known. Therefore, according to the correlation between each two types of channels, the light component of another type of channel contained in one type of channel can be removed, so as to restore the original signal of that type of channel.
  • removing the light components of one type of channel from another type of channel can be achieved by using a coefficient matrix to perform weighting processing on image data of multiple types of channels, where the coefficient matrix can be a pre-calibrated matrix.
  • the image sensor 11 may include: a first type channel that responds to light components in the visible light band, and a second type channel that responds to light components in the near-infrared light band.
  • The processing unit 16 may be specifically used to: obtain the image signal output by the image sensor, the current exposure parameters corresponding to the first type of channel and the second type of channel, and the associated information between the color of the first type of channel and the brightness of the second type of channel, or obtain the image signal output by the image sensor, the current exposure parameters corresponding to the first type of channel and the second type of channel, and the associated information between the brightness of the first type of channel and the brightness of the second type of channel; normalize, according to the current exposure parameters corresponding to the first type of channel and the second type of channel, the image data of the first type of channel and the image data of the second type of channel in the image signal to the same exposure parameter; determine the weight of the first type of channel and the weight of the second type of channel based on the associated information; and, according to the weight of the first type of channel and the weight of the second type of channel and the normalized image data of the first type of channel and image data of the second type of channel, remove the light component of the second type of channel contained in the first type of channel.
  • That is, for the scene in which the image sensor includes the first type of channel and the second type of channel, the image data of the first type of channel and the image data of the second type of channel in the image signal can first be normalized to the same exposure parameter according to the current exposure parameters of the first type of channel and the second type of channel, so that the image data of the first type of channel and the image data of the second type of channel under the same exposure parameter are logically obtained. Then, the weight of the first type of channel and the weight of the second type of channel are determined based on the associated information between the color of the first type of channel and the brightness of the second type of channel.
  • The weights of the first type of channel and the weights of the second type of channel can form a coefficient matrix, which can be a 3*4 matrix calibrated based on the associated information.
  • By performing weighting processing with this coefficient matrix, the light component of the second type of channel contained in the first type of channel can be removed.
  • the above-mentioned normalization process can also be realized by adjusting the aforementioned coefficient matrix, which will not be repeated here.
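  • As an illustration of the idea, the following sketch removes the NIR component from the RGB channels with a pre-calibrated 3x4 coefficient matrix after normalizing both channel types to the same exposure parameter; the matrix values and the proportional exposure model are assumptions, not the calibration of this application.

        import numpy as np

        def remove_nir_component(rgb, nir, coeff_3x4, rgb_exposure, nir_exposure):
            """rgb: HxWx3 image data of the first-type channels; nir: HxW data of the second-type channel."""
            # Normalize the NIR data to the exposure parameter of the RGB data
            # (assumed proportional model: brightness scales with exposure time * gain).
            nir_norm = nir * (rgb_exposure / nir_exposure)
            # Stack [R, G, B, NIR] per pixel and apply the weights of each channel type.
            rgbn = np.concatenate([rgb, nir_norm[..., None]], axis=-1)          # HxWx4
            corrected = np.einsum('ij,hwj->hwi', coeff_3x4, rgbn)               # HxWx3
            return np.clip(corrected, 0, None)

        # Usage sketch with an assumed calibration (subtract the full NIR value from each color):
        # coeff = np.array([[1, 0, 0, -1.0],
        #                   [0, 1, 0, -1.0],
        #                   [0, 0, 1, -1.0]], dtype=float)
        # rgb_clean = remove_nir_component(rgb, nir, coeff, rgb_exposure=2.0, nir_exposure=1.0)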
  • the correlation information between the color of the first type of channel and the brightness of the second type of channel can be specifically obtained by pre-analyzing the correlation information between the brightness of each color channel in the first type of channel and the brightness of the second type of channel.
  • the correlation information can also directly adopt the correlation information between the brightness of each color channel in the first type of channel and the brightness of the second type of channel.
  • the image sensor 11 may include: a first type channel that responds to light components in the visible light band, and a second type channel that responds to light components in the near-infrared light band.
  • The processing unit 16 may be specifically used to: obtain the image signal output by the image sensor, the current exposure parameters corresponding to the first type of channel and the second type of channel, and the associated information between the color of the first type of channel and the brightness of the second type of channel, or obtain the image signal output by the image sensor, the current exposure parameters corresponding to the first type of channel and the second type of channel, and the associated information between the brightness of the first type of channel and the brightness of the second type of channel; determine the weight of the first type of channel and the weight of the second type of channel according to the current exposure parameters corresponding to the first type of channel and the second type of channel and the associated information; and, according to the weight of the first type of channel and the weight of the second type of channel and the image data of the first type of channel and the image data of the second type of channel, remove the light component of the second type of channel contained in the first type of channel.
  • That is, for the scene where the image sensor 11 includes the first type of channel and the second type of channel, the weight of the first type of channel and the weight of the second type of channel can also be determined directly based on the current exposure parameters of the first type of channel and the second type of channel and the pre-analyzed associated information between the color of the first type of channel and the brightness of the second type of channel; then, according to these weights and the image data of the first type of channel and the image data of the second type of channel, the light component of the second type of channel contained in the first type of channel can be removed.
  • the determined weight of the first type of channel and the weight of the second type of channel are related to the current exposure parameters, which can be used to obtain more accurate color information.
  • the above-mentioned embodiments only provide implementations for removing the light components of the second-type channels included in the first-type channels.
  • the light components of the first-type channels included in the second-type channels can also be removed in the above-mentioned manner, and then channel fusion is performed to obtain a fused image signal.
  • the processing unit 16 may also be used to fuse image data of various channels from which the light components of other types of channels have been removed to obtain a fused image signal.
  • The specific fusion process can be as follows: filter the image of the visible light channel (hereinafter referred to as the visible light image) and the image of the near-infrared light channel (hereinafter referred to as the near-infrared light image) respectively to obtain a visible light base layer image and a near-infrared light base layer image; calculate, in turn, the ratio or difference between the gray value of each pixel of the visible light image and the gray value of the corresponding pixel of the visible light base layer image, and use the resulting set as the visible light texture coefficient; calculate, in turn, the ratio or difference between the gray value of each pixel of the near-infrared light image and the gray value of the corresponding pixel of the near-infrared light base layer image, and use the resulting set as the near-infrared light texture coefficient; convolve the visible light image according to a preset edge detection operator to obtain visible light texture intensity information, and convolve the near-infrared light image to obtain near-infrared light texture intensity information; obtain the fusion texture coefficient according to the visible light texture coefficient, the near-infrared light texture coefficient, the visible light texture intensity information and the near-infrared light texture intensity information; convolve the visible light image according to a preset first convolution kernel to obtain a visible light local brightness image, and convolve the near-infrared light image according to a preset second convolution kernel to obtain a near-infrared light local brightness image; calculate the brightness offset between the visible light local brightness image and the near-infrared light local brightness image, and obtain the reflection coefficient according to the visible light local brightness image, the near-infrared light local brightness image and the brightness offset; finally, fuse the visible light image and the near-infrared light image according to the fusion texture coefficient and the reflection coefficient.
  • After the processing unit 16 removes the cross light components of the various channels, it can fuse the image data of the various channels, which enhances the color signal and improves the quality of the fused image data under low illumination. After the fused image signal is obtained, it can be sent to the statistical unit 12, which performs image data statistics to provide a basis for the exposure control of the exposure control unit 13, or the fused image signal can be output directly to the user.
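  • A much-simplified sketch of a base-layer / texture-coefficient style fusion is shown below; the filter sizes, the ratio form of the texture coefficients, the blending weight and the stand-in for the reflection coefficient are all assumptions, and this is not the exact fusion formula of this application.

        import numpy as np
        from scipy.ndimage import uniform_filter, sobel

        def fuse_visible_nir(vis, nir, base_size=15, local_size=7, eps=1e-6):
            """vis / nir: single-channel (gray) visible and near-infrared images as float arrays."""
            vis_base = uniform_filter(vis, base_size)            # visible base layer image
            nir_base = uniform_filter(nir, base_size)            # near-infrared base layer image
            vis_tex = vis / (vis_base + eps)                      # visible texture coefficient (ratio form)
            nir_tex = nir / (nir_base + eps)                      # near-infrared texture coefficient
            vis_strength = np.abs(sobel(vis))                     # visible texture intensity (edge response)
            nir_strength = np.abs(sobel(nir))                     # near-infrared texture intensity
            w = nir_strength / (vis_strength + nir_strength + eps)
            fused_tex = (1 - w) * vis_tex + w * nir_tex           # fusion texture coefficient
            vis_local = uniform_filter(vis, local_size)           # visible local brightness image
            nir_local = uniform_filter(nir, local_size)           # near-infrared local brightness image
            offset = nir_local - vis_local                        # brightness offset
            reflect = vis_local + 0.5 * offset                    # crude stand-in for the reflection coefficient
            return np.clip(fused_tex * reflect, 0, None)          # fused brightness image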
  • the image data of the first-type channel and the second-type channel may also be jointly denoised.
  • A specific joint denoising method may be to use the image data of the second type of channel to guide the image data of the first type of channel, that is, the image data of the second type of channel is used as reference data to perform noise reduction on the image data of the first type of channel, which reduces the noise while reducing the loss of effective information.
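  • As one concrete (but assumed) instance of such guided denoising, the sketch below applies a standard guided filter with the NIR image as the guide; the window size and regularization value are assumptions, and this is not claimed to be the joint denoising method of this application.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nir_guided_denoise(vis, nir, size=9, eps=1e-3):
            """vis: noisy first-type (visible) channel data; nir: second-type (NIR) reference data."""
            mean_g = uniform_filter(nir, size)                    # local means of the guide
            mean_p = uniform_filter(vis, size)                    # local means of the input
            corr_gp = uniform_filter(nir * vis, size)
            corr_gg = uniform_filter(nir * nir, size)
            cov_gp = corr_gp - mean_g * mean_p
            var_g = corr_gg - mean_g * mean_g
            a = cov_gp / (var_g + eps)                            # per-window linear coefficients
            b = mean_p - a * mean_g
            return uniform_filter(a, size) * nir + uniform_filter(b, size)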
  • the processing unit includes a signal decomposition module and a post-processing module
  • The signal decomposition module is used to obtain the image signal, decompose the visible light signal and the near-infrared light signal in the image signal, and output the decomposed first decomposed image signal and second decomposed image signal, where the first decomposed image signal is a visible light image signal and the second decomposed image signal is a near-infrared light image signal. Since the visible light signal and the near-infrared light signal have different data formats, they can be decomposed according to their different data formats.
  • The post-processing module is used to obtain the first decomposed image signal, the second decomposed image signal, the first current exposure parameter corresponding to the first type of channel, the second current exposure parameter corresponding to the second type of channel, and the associated information between the first type of channel and the second type of channel; to determine the correlation between the first type of channel and the second type of channel according to the first current exposure parameter, the second current exposure parameter and the associated information; and to determine the first output image signal and/or the second output image signal according to the correlation, wherein the first output image signal is a first decomposed image signal from which the near-infrared light component has been removed.
  • the second output image signal may be a second decomposed image signal with visible light components removed, or may be a second decomposed image signal without visible light components removed.
  • the signal decomposition module can be specifically used for:
  • obtain the image signal; respectively up-sample the color components of the visible light signal and the near-infrared light signal in the image signal to obtain the image signal of each color component and the image signal of the near-infrared light; combine the image signals of the color components into the first decomposed image signal for output, and output the image signal of the near-infrared light as the second decomposed image signal;
  • or: acquire the image signal, the first current exposure gain corresponding to the first type of channel, and the second current exposure gain corresponding to the second type of channel; if the second current exposure gain is less than the first current exposure gain, perform edge-judgment interpolation on the image data of the first type of channel in the image signal according to the image data of the second type of channel; if the second current exposure gain is greater than the first current exposure gain, perform edge-judgment interpolation on the image data of the second type of channel according to the image data of the first type of channel in the image signal; obtain the image signal of each color component of the interpolated visible light signal and the image signal of the near-infrared light; combine the image signals of the color components into the first decomposed image signal for output, and output the image signal of the near-infrared light as the second decomposed image signal. A simplified sketch of the first decomposition approach is given below.
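  • Simplified sketch of the first decomposition approach; the mosaic layout, the channel_map representation and the crude neighbor-average up-sampling are assumptions made for illustration.

        import numpy as np

        def decompose_rgbir(mosaic, channel_map):
            """mosaic: HxW raw image signal; channel_map: HxW array of labels 'R','G','B','N'
            describing which channel each pixel position belongs to (layout is an assumption)."""
            h, w = mosaic.shape
            planes = {}
            for ch in ("R", "G", "B", "N"):
                plane = np.zeros((h, w), dtype=float)
                mask = channel_map == ch
                plane[mask] = mosaic[mask]
                # crude up-sampling of the sparse samples: average of the neighboring samples
                neighbor_sum = _box_sum(plane, 3)
                neighbor_cnt = _box_sum(mask.astype(float), 3)
                planes[ch] = np.where(mask, plane, neighbor_sum / np.maximum(neighbor_cnt, 1))
            first_decomposed = np.stack([planes["R"], planes["G"], planes["B"]], axis=-1)  # visible image signal
            second_decomposed = planes["N"]                                                # NIR image signal
            return first_decomposed, second_decomposed

        def _box_sum(img, size):
            """Sum over a size x size neighborhood via padding and slicing (helper, assumed)."""
            pad = size // 2
            padded = np.pad(img, pad, mode="edge")
            out = np.zeros_like(img)
            for dy in range(size):
                for dx in range(size):
                    out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out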
  • the post-processing module may include: a first processing sub-module, a second processing sub-module, a color restoration sub-module, and a third processing sub-module;
  • the first processing sub-module is used to obtain the first decomposed image signal, and preprocess the first decomposed image signal to obtain the first sub-processed image signal;
  • the second processing sub-module is used to obtain the second decomposed image signal and preprocess the second decomposed image signal to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the color restoration sub-module is used to obtain the first current exposure parameter corresponding to the first type channel, the second current exposure parameter corresponding to the second type channel, and the associated information between the first type channel and the second type channel;
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal;
  • the third processing sub-module is used to process the first restored image signal to obtain the first output image signal.
  • the post-processing module may include: a first processing sub-module, a second processing sub-module, a color restoration sub-module, and a fourth processing sub-module;
  • the first processing sub-module is used to obtain the first decomposed image signal, and preprocess the first decomposed image signal to obtain the first sub-processed image signal;
  • the second processing sub-module is used to obtain the second decomposed image signal and preprocess the second decomposed image signal to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the color restoration sub-module is used to obtain the first current exposure parameter corresponding to the first type channel, the second current exposure parameter corresponding to the second type channel, and the associated information between the first type channel and the second type channel;
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the second restored image signal;
  • the fourth processing sub-module is used to process the second restored image signal to obtain the second output image signal.
  • the post-processing module may include: a first processing sub-module, a second processing sub-module, a color restoration sub-module, a third processing sub-module, a fourth processing sub-module, and a fifth processing sub-module;
  • the first processing sub-module is used to obtain the first decomposed image signal, and preprocess the first decomposed image signal to obtain the first sub-processed image signal;
  • the second processing sub-module is used to obtain the second decomposed image signal and preprocess the second decomposed image signal to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the color restoration sub-module is used to obtain the first current exposure parameter corresponding to the first type channel, the second current exposure parameter corresponding to the second type channel, and the associated information between the first type channel and the second type channel;
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal and the second restored image signal;
  • the third processing sub-module is used to process the first restored image signal to obtain the third sub-processed image signal
  • the fourth processing sub-module is used to process the second restored image signal to obtain the fourth sub-processed image signal
  • the fifth processing sub-module is used to process the third sub-processed image signal and the fourth sub-processed image signal to obtain the first output image signal.
  • the post-processing module may include: a first processing sub-module, a second processing sub-module, a color restoration sub-module, a third processing sub-module, a fourth processing sub-module, and a fifth processing sub-module;
  • the first processing sub-module is used to obtain the first decomposed image signal, and preprocess the first decomposed image signal to obtain the first sub-processed image signal;
  • the second processing sub-module is used to obtain the second decomposed image signal and preprocess the second decomposed image signal to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the color restoration sub-module is used to obtain the first current exposure parameter corresponding to the first type channel, the second current exposure parameter corresponding to the second type channel, and the associated information between the first type channel and the second type channel;
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal and the second restored image signal;
  • the third processing sub-module is used to process the first restored image signal to obtain the third sub-processed image signal
  • the fourth processing sub-module is used to process the second restored image signal to obtain the fourth sub-processed image signal
  • the fifth processing sub-module is used to process the third sub-processed image signal and the fourth sub-processed image signal to obtain the first output image signal and the second output image signal.
  • the processing unit may further include a preprocessing module
  • the preprocessing module is used to obtain the image signal output by the image sensor, preprocess the image signal, and send the preprocessed image signal to the signal decomposition module.
  • processing unit 16 can be implemented in a variety of ways, which will be introduced separately below.
  • The first implementation:
  • the processing unit 16 includes a signal decomposition module and a post-processing module.
  • the signal decomposition module logically decomposes the visible light signal and the near-infrared light signal of the input image signal, and decomposes the image signal into a first decomposed image signal and a second decomposed image signal;
  • the post-processing module processes the first decomposed image signal and the second decomposed image signal, and outputs the first output image signal.
  • the image signal transmitted by the image sensor to the processing unit includes both a visible light signal and a near-infrared light signal; therefore, the signal decomposition module logically decomposes these two signals and outputs the decomposed first decomposed image signal and second decomposed image signal.
  • For the signal decomposition module, one processing method is to separately up-sample the visible light R, G, B signals and the near-infrared NIR signal (bilinear up-sampling or another up-sampling method may be used) to obtain full-resolution R, G, B, and NIR image signals; the R, G, and B image signals are then output individually as the first decomposed image signal, or combined into a visible light image signal that is output as the first decomposed image signal, and the near-infrared NIR image signal is output as the second decomposed image signal.
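As a rough illustration of this decomposition, the sketch below assumes a simplified 2×2 RGB-IR tiling (the actual layouts of Figures 2a–2c differ) and bilinearly up-samples each sparsely sampled channel to full resolution with OpenCV; the channel offsets are hypothetical, and the exact procedure of the application may differ.

```python
import numpy as np
import cv2

# Hypothetical 2x2 mosaic: R at (0,0), G at (0,1), B at (1,0), NIR at (1,1).
OFFSETS = {"R": (0, 0), "G": (0, 1), "B": (1, 0), "NIR": (1, 1)}

def decompose(raw):
    """Split one RGB-IR mosaic frame into full-resolution R, G, B, NIR planes
    by bilinearly up-sampling each sparsely sampled channel."""
    h, w = raw.shape
    planes = {}
    for name, (dy, dx) in OFFSETS.items():
        sub = raw[dy::2, dx::2].astype(np.float32)               # sparse samples of this channel
        planes[name] = cv2.resize(sub, (w, h), interpolation=cv2.INTER_LINEAR)
    first = np.dstack([planes["R"], planes["G"], planes["B"]])   # first decomposed (visible) image
    second = planes["NIR"]                                       # second decomposed (NIR) image
    return first, second
```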
  • another processing method may be to perform an interpolation operation on the RGBIR image signal, and the interpolation operation may be an interpolation based on edge judgment.
  • a channel with better imaging quality can be used as a guide, and a channel with poor imaging quality can be guided to perform edge decision interpolation.
  • the imaging quality can be judged by the gain.
  • when the exposure gain of the NIR channel is less than the exposure gain of the R, G, and B channels, the NIR channel is used to guide the R, G, and B channels for edge judgment interpolation; when the exposure gain of the R, G, and B channels is less than the exposure gain of the NIR channel, the R, G, and B channels are used to guide the NIR channel to perform edge judgment interpolation.
  • After interpolation, image signals of R, G, B, and NIR are obtained; the R, G, and B image signals are combined into a visible light image signal and output as the first decomposed image signal, and the near-infrared NIR image signal is output as the second decomposed image signal.
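A minimal sketch of this gain-guided, edge-decision interpolation is given below; it assumes the less-gained (better imaged) channel is already available at full resolution as the guide, and fills each missing sample of the other channel by averaging its two neighbors along the direction in which the guide varies least. The exact decision rule and neighborhood used by the application may differ.

```python
import numpy as np

def choose_guide(gain_rgb, gain_nir):
    """Per the scheme above: the channel with the smaller exposure gain guides the other."""
    return "NIR" if gain_nir < gain_rgb else "RGB"

def edge_decision_interpolate(target, valid, guide):
    """Fill the missing samples of `target` (where `valid` is False), interpolating along
    the direction in which the full-resolution `guide` plane varies least.
    A slow but explicit sketch; border samples are left unchanged."""
    out = target.astype(np.float32)
    h, w = target.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if valid[y, x]:
                continue
            grad_h = abs(float(guide[y, x - 1]) - float(guide[y, x + 1]))
            grad_v = abs(float(guide[y - 1, x]) - float(guide[y + 1, x]))
            if grad_h <= grad_v:     # guide flatter horizontally -> interpolate along the row
                neighbors = [(target[y, x - 1], valid[y, x - 1]),
                             (target[y, x + 1], valid[y, x + 1])]
            else:                    # guide flatter vertically -> interpolate along the column
                neighbors = [(target[y - 1, x], valid[y - 1, x]),
                             (target[y + 1, x], valid[y + 1, x])]
            usable = [float(v) for v, ok in neighbors if ok]
            if usable:
                out[y, x] = sum(usable) / len(usable)
    return out
```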
  • the post-processing module is used to jointly process the first decomposed image signal and the second decomposed image signal to obtain the first output image signal.
  • the post-processing module can be implemented in a variety of ways.
  • the first implementation of the post-processing module is shown in Figure 13.
  • the first processing sub-module can perform one or more of dead pixel correction, black level correction, digital gain, and noise reduction on the first decomposed image signal to obtain the first sub-processed image signal.
  • the second processing sub-module can perform one or more of dead pixel correction, black level correction, digital gain, and noise reduction on the second decomposed image signal to obtain the second sub-processed image signal.
  • For the color restoration sub-module, one processing method is based on the gain g1 and exposure time t1 of the RGB channel and the gain g2 and exposure time t2 of the NIR channel: the second sub-processed image signal is adjusted so that the adjusted second sub-processed image signal (the NIR' image signal) is brought to the same exposure as the first sub-processed image signal (the RGB image signal); the first restored image signal is then obtained from the RGB image signal and the NIR' image signal according to the weights of the two types of channels.
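As an illustration of this adjustment, the sketch below scales the NIR plane by (g1·t1)/(g2·t2) to bring it to the RGB exposure and then subtracts a weighted NIR contribution from each color plane. The normalization factor and the weights `w` are plausible placeholders standing in for the calibrated association information; the exact restoration formula of the application may differ.

```python
import numpy as np

def restore_color(rgb, nir, g1, t1, g2, t2, w=(0.30, 0.25, 0.35)):
    """Sketch of the color restoration step: normalize the NIR plane to the RGB exposure,
    then remove a weighted NIR contribution from each color plane.
    `rgb` is (H, W, 3); `nir` is (H, W); `w` stands in for calibrated channel weights."""
    nir_norm = nir * (g1 * t1) / (g2 * t2)            # NIR' at the same exposure as RGB
    restored = np.empty_like(rgb, dtype=np.float32)
    for c in range(3):                                # R, G, B planes
        restored[..., c] = np.clip(rgb[..., c] - w[c] * nir_norm, 0, None)
    return restored, nir_norm
```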
  • the third processing sub-module performs further processing on the first restored image signal, including but not limited to digital gain, white balance, color correction, curve mapping, noise reduction, enhancement, etc., and finally obtains a color first output image signal.
  • the second implementation of the post-processing module is shown in FIG. 14.
  • the first processing sub-module and the second processing sub-module can adopt the same implementation manners as the first processing sub-module and the second processing sub-module in the embodiment shown in FIG. 13, which will not be repeated here.
  • one processing method of the color restoration sub-module is to directly output the second sub-processed image signal as the second restored image signal, or to weight the first sub-processed image signal and the second sub-processed image signal and output the result as the second restored image signal.
  • the fourth processing sub-module performs further processing on the second restored image signal, including but not limited to digital gain, white balance, color correction, curve mapping, noise reduction, enhancement, etc., and finally obtains a black and white second output image signal.
  • the third implementation of the post-processing module is shown in FIG. 15.
  • the first processing sub-module and the second processing sub-module can adopt the same implementation manners as the first processing sub-module and the second processing sub-module in the embodiment shown in FIG. 13.
  • the color restoration sub-module can output the first restored image signal in the same implementation manner as the color restoration sub-module in the embodiment shown in FIG. 13, and output the second restored image signal in the same implementation manner as the color restoration sub-module in the embodiment shown in FIG. 14.
  • the third processing sub-module may adopt the same implementation manner as the third processing sub-module in the embodiment shown in FIG. 13 to obtain the third sub-processed image signal; the fourth processing sub-module may adopt the same implementation manner as the fourth processing sub-module in the embodiment shown in FIG. 14 to obtain the fourth sub-processed image signal.
  • the fifth processing sub-module processes the third sub-processed image signal and the fourth sub-processed image signal to obtain the first output image signal.
  • the processing methods of the fifth processing sub-module include but are not limited to noise reduction, fusion, enhancement, etc.
  • The second implementation:
  • the processing unit 16 includes a signal decomposition module and a post-processing module.
  • the signal decomposition module logically decomposes the input image signal into a first decomposed image signal and a second decomposed image signal;
  • the post-processing module processes the first decomposed image signal and the second decomposed image signal, and outputs the first output image signal and the second output image signal.
  • the signal decomposition module can adopt the same implementation manner as the signal decomposition module in the first implementation manner, which will not be repeated here.
  • the implementation of the post-processing module is shown in Figure 17.
  • the first processing sub-module, the second processing sub-module, the color restoration sub-module, the third processing sub-module, and the fourth processing sub-module can adopt the same implementation manners as the corresponding modules in the embodiment shown in FIG. 15.
  • the fifth processing sub-module processes the third sub-processed image signal and the fourth sub-processed image signal, and the processing methods include but are not limited to noise reduction, fusion, enhancement, etc., to obtain a color first output image signal and a black-and-white second output image signal.
  • The third implementation mode:
  • the processing unit 16 includes a pre-processing module, a signal decomposition module, and a post-processing module.
  • the preprocessing module preprocesses the input image signal and outputs the preprocessed image signal
  • the signal decomposition module logically decomposes the preprocessed image signal into a first decomposed image signal and a second decomposed image signal
  • the post-processing module processes the first decomposed image signal and the second decomposed image signal, and outputs the first output image signal and the second output image signal.
  • the preprocessing module preprocesses the input image signal to obtain a preprocessed image, where the preprocessing includes, but is not limited to, black level correction, dead pixel correction, digital gain, noise reduction, etc.
  • the signal decomposition module can adopt the same implementation manner as the signal decomposition module in the first implementation manner, which will not be repeated here.
  • the post-processing module can adopt the same implementation manner as the post-processing module in the first implementation manner, which will not be repeated here.
  • The fourth implementation mode:
  • the processing unit 16 includes a preprocessing module, a signal decomposition module, and a post-processing module.
  • the preprocessing module preprocesses the input image signal and outputs the preprocessed image signal
  • the signal decomposition module logically decomposes the preprocessed image signal into a first decomposed image signal and a second decomposed image signal
  • the post-processing module processes the first decomposed image signal and the second decomposed image signal, and outputs the first output image signal and the second output image signal.
  • the preprocessing module can adopt the same implementation manner as the preprocessing module in the third implementation manner, which will not be repeated here.
  • the signal decomposition module can adopt the same implementation manner as the signal decomposition module in the first implementation manner, which will not be repeated here.
  • the post-processing module can adopt the same implementation manner as the post-processing module in the second implementation manner, which will not be repeated here.
  • the embodiment of the present application provides an image processing method, which is applied to an imaging system; as shown in FIG. 20, the method includes:
  • S201 Obtain an image signal from an image sensor, and extract image data of various channels in the image signal.
  • S202 Perform statistics on image data of various channels to obtain statistical data of various channels.
  • S203 For any type of channel, calculate the exposure parameter corresponding to the type of channel according to the statistical data of the type of channel, and control the brightness adjustment of the image data of the type of channel based on the exposure parameter.
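The three steps can be pictured, for a single frame, as the sketch below; the per-type sampling masks, target brightness values, and the plain ratio-based gain update are illustrative placeholders rather than interfaces defined by this application.

```python
import numpy as np

def auto_expose_step(raw, masks, targets, gains):
    """One S201-S203 iteration on a single RGB-IR frame.
    `masks` maps each channel type ('RGB', 'NIR') to a boolean sampling mask over `raw`,
    `targets` to its target brightness, and `gains` to its current digital gain.
    Returns updated per-type gains (a simplified stand-in for the exposure parameters)."""
    new_gains = {}
    for ctype, mask in masks.items():
        data = raw[mask]                        # S201: image data of this channel type
        stat = float(data.mean())               # S202: statistic (a global mean here)
        if stat > 0:
            new_gains[ctype] = gains[ctype] * targets[ctype] / stat   # S203: scale toward target
        else:
            new_gains[ctype] = gains[ctype]
    return new_gains
```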
  • various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band.
  • the first type of channel includes multiple color channels, and the second type of channel includes a near-infrared channel;
  • S202 may specifically include: calculating a statistical value of the image data of the first type of channel as the statistical data of the first type of channel based on the image data of at least one of the multiple color channels; and calculating a statistical value of the image data of the second type of channel as the statistical data of the second type of channel based on the image data of the near-infrared channel.
  • the step of extracting the image data of the various channels in the image signal in S201 can be specifically implemented by the following step: extracting the image data of each color channel and the image data of the near-infrared channel from the image signal;
  • the step of calculating the statistical value of the image data of the first type of channel as the statistical data of the first type of channel can be specifically implemented by the following steps: calculating the average value of the image data of each color channel according to the image data of that color channel; performing a weighted summation of the average values of the image data of the color channels; and using the result of the weighted summation as the statistical data of the first type of channel;
  • the step of calculating the statistical value of the image data of the second type of channel as the statistical data of the second type of channel can be specifically implemented by the following steps: calculating the average value of the image data of the near-infrared channel according to the image data of the near-infrared channel; and using the average value of the image data of the near-infrared channel as the statistical data of the second type of channel.
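A minimal sketch of this global-statistics variant follows; the color weights kr, kg, kb are example values (they only need to sum to 1), not values prescribed by the application.

```python
import numpy as np

def global_statistics(r, g, b, nir, kr=0.299, kg=0.587, kb=0.114):
    """Global statistics: average each extracted channel plane, then take a weighted sum of
    the color means as the first-type statistic; the NIR mean is the second-type statistic."""
    r_ave, g_ave, b_ave = float(r.mean()), float(g.mean()), float(b.mean())
    first_type_stat = kr * r_ave + kg * g_ave + kb * b_ave   # weighted summation of color means
    second_type_stat = float(nir.mean())                     # mean of the near-infrared channel
    return first_type_stat, second_type_stat
```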
  • Alternatively, the step of extracting the image data of the various channels in the image signal in S201 can be specifically implemented by the following steps: dividing the image signal into blocks to obtain multiple image signal blocks; and, for any image signal block, extracting the image data of each color channel and the image data of the near-infrared channel from that block;
  • the step of calculating the statistical value of the image data of the first type of channel as the statistical data of the first type of channel can be specifically implemented by the following steps: calculating the average value of the image data of each color channel according to the image data of that color channel in each image signal block; performing a weighted summation of the average values of the image data of the color channels; and using the result of the weighted summation as the statistical data of the first type of channel;
  • the step of calculating the statistical value of the image data of the second type of channel as the statistical data of the second type of channel can be specifically implemented by the following steps: calculating the average value of the image data of the near-infrared channel according to the image data of the near-infrared channel in each image signal block; and using the average value of the image data of the near-infrared channel as the statistical data of the second type of channel.
  • Alternatively, the step of extracting the image data of the various channels in the image signal in S201 can be specifically implemented by the following step: extracting the image data of each color channel and the image data of the near-infrared channel from the image signal;
  • the step of calculating the statistical value of the image data of the first type of channel as the statistical data of the first type of channel can be specifically implemented by the following steps: obtaining the histogram of each color channel according to the image data of that color channel; calculating a weighted average over the gray levels in the histogram of each color channel to obtain the average value of the image data of that color channel; performing a weighted summation of the average values of the image data of the color channels; and using the result of the weighted summation as the statistical data of the first type of channel;
  • the step of calculating the statistical value of the image data of the second type of channel as the statistical data of the second type of channel can be specifically implemented by the following steps: obtaining the histogram of the near-infrared channel according to the image data of the near-infrared channel; calculating a weighted average over the gray levels in the histogram of the near-infrared channel to obtain the average value of the image data of the near-infrared channel; and using the average value of the image data of the near-infrared channel as the statistical data of the second type of channel.
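The histogram variant can be sketched as below for one 8-bit channel plane; the per-gray-level weight w(n) is left uniform by default here because its exact form is not specified in this passage.

```python
import numpy as np

def histogram_statistic(plane, weights=None):
    """Histogram statistic for one 8-bit channel plane: a weighted average of the 256 gray
    levels, each level weighted by its histogram count and by w(n) (uniform unless a
    256-entry `weights` array is supplied)."""
    hist, _ = np.histogram(plane, bins=256, range=(0, 256))
    w = np.ones(256) if weights is None else np.asarray(weights, dtype=np.float64)
    levels = np.arange(256)
    denom = float((w * hist).sum())
    return float((w * levels * hist).sum() / denom) if denom > 0 else 0.0
```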
  • various types of channels include second type channels that respond to light components in the near-infrared light band;
  • S203 may specifically be: if it is determined that the statistical data of the second type of channel is greater than the first preset threshold, controlling the light supplement unit to reduce the intensity of the emitted near-infrared light; if it is determined that the statistical data of the second type of channel is less than the second preset threshold, controlling the light supplement unit to increase the intensity of the emitted near-infrared light, wherein the first preset threshold is greater than the second preset threshold.
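A minimal sketch of this threshold control follows; the fill-light object and its `intensity` attribute are assumed for illustration only.

```python
def adjust_fill_light(nir_stat, fill_light, t_high, t_low, step=1):
    """Threshold control of the near-infrared fill light: dim it when the NIR statistic is
    above the upper threshold, brighten it when below the lower one (t_high > t_low)."""
    if nir_stat > t_high:
        fill_light.intensity = max(fill_light.intensity - step, 0)
    elif nir_stat < t_low:
        fill_light.intensity += step
    return fill_light.intensity
```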
  • various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band;
  • S203 may specifically include: acquiring the first exposure time of the first type of channel, the first target data corresponding to the first type of channel, the second exposure time of the second type of channel, and the second target data corresponding to the second type of channel; calculating the first data offset of the first type of channel according to the statistical data of the first type of channel and the first target data; if the first data offset is not within the first preset range, calculating the first exposure gain according to the statistical data of the first type of channel and the first target data; calculating the second data offset of the second type of channel according to the statistical data of the second type of channel and the second target data; if the second data offset is not within the second preset range, calculating the second exposure gain according to the statistical data of the second type of channel and the second target data; if the first exposure time is equal to the second exposure time, controlling the light supplement unit to reduce the intensity of the emitted near-infrared light when the second exposure gain is less than the first preset gain threshold, and controlling the light supplement unit to increase the intensity of the emitted near-infrared light when the second exposure gain is greater than the second preset gain threshold, wherein the first preset gain threshold is less than the second preset gain threshold; if the first exposure time is not equal to the second exposure time, reducing the second exposure time when the second exposure gain is less than the first preset gain threshold, and increasing the second exposure time when the second exposure gain is greater than the second preset gain threshold.
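The branch for the second-type (NIR) channel can be sketched as below; the ratio-based gain formula and the simple unit-step adjustments are illustrative assumptions consistent with the description above, not the exact control law, and the fill-light interface is a placeholder.

```python
def update_second_channel(stat2, target2, t1, t2, gain_lo, gain_hi,
                          offset_range2, fill_light, time_step=1.0):
    """Sketch of the S203 branch for the second-type channel: compute the data offset,
    derive an exposure gain when the offset is out of range, then either steer the fill
    light (equal exposure times) or the second exposure time (unequal exposure times).
    gain_lo / gain_hi are the first / second preset gain thresholds (gain_lo < gain_hi)."""
    offset = abs(stat2 - target2)
    if offset <= offset_range2 or stat2 == 0:
        return t2                                      # within range: nothing to adjust
    gain2 = target2 / stat2                            # exposure gain toward the target
    if t1 == t2:                                       # same exposure time: adjust fill light
        if gain2 < gain_lo:
            fill_light.intensity = max(fill_light.intensity - 1, 0)
        elif gain2 > gain_hi:
            fill_light.intensity += 1
        return t2
    if gain2 < gain_lo:                                # different times: adjust exposure time
        t2 = max(t2 - time_step, 0.0)
    elif gain2 > gain_hi:
        t2 += time_step
    return t2
```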
  • In an implementation, the method may further include the following steps: acquiring the current exposure parameters corresponding to the various types of channels and the association information between each two types of channels; determining the correlation between each two types of channels according to the current exposure parameters and the association information; and removing, according to the correlation between each two types of channels, the light components of the other type of channel contained in one type of channel;
  • the various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band;
  • the step of removing, according to the correlation between each two types of channels, the light components of the other type of channel contained in one type of channel can specifically be achieved through the following steps: according to the current exposure parameters corresponding to the first type of channel and the second type of channel, normalizing the image data of the first type of channel and the image data of the second type of channel in the image signal to the same exposure parameter; based on the association information, determining the weight of the first type of channel and the weight of the second type of channel; and, according to the weight of the first type of channel and the weight of the second type of channel and the normalized image data of the first type of channel and the second type of channel, removing the light component of the second type of channel contained in the first type of channel.
  • In another implementation, the various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band;
  • the step of removing, according to the correlation between each two types of channels, the light components of the other type of channel contained in one type of channel can specifically be achieved through the following steps: according to the current exposure parameters corresponding to the first type of channel and the second type of channel, and the association information, determining the weight of the first type of channel and the weight of the second type of channel; and, according to the weight of the first type of channel and the weight of the second type of channel, the image data of the first type of channel, and the image data of the second type of channel, removing the light component of the second type of channel contained in the first type of channel.
  • After the steps of determining the correlation between each two types of channels according to the current exposure parameters corresponding to the various types of channels and the association information between each two types of channels, and removing, according to the correlation between each two types of channels, the light components of the other type of channel contained in one type of channel, the method may further include the following step:
  • the image data of various channels from which the light components of other types of channels have been removed are fused to obtain a fused image signal.
  • various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band;
  • the step of removing the light components of the other type of channel contained in one type of channel can be specifically implemented through the following steps:
  • according to the current exposure parameters corresponding to the first type of channel and the second type of channel, and the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, the image data of the first type of channel, and the image data of the second type of channel, remove the light component of the second type of channel contained in the first type of channel.
  • the weight of the first type of channel and the weight of the second type of channel are determined according to the current exposure parameters corresponding to the first type of channel and the second type of channel, and the associated information;
  • the step of removing the light component of the second type of channel contained in the first type of channel according to the weight of the first type of channel and the weight of the second type of channel, the image data of the first type of channel, and the image data of the second type of channel can be specifically implemented through the following steps:
  • according to the current exposure parameters corresponding to the first type of channel and the second type of channel, normalize the image data of the first type of channel and the image data of the second type of channel in the image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized image data of the first type of channel and image data of the second type of channel, remove the light component of the second type of channel contained in the first type of channel.
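One way to organize these per-channel weights is as a small calibrated coefficient matrix applied to the [R, G, B, NIR] vector of every pixel after normalization. The sketch below illustrates that; the matrix values are made-up placeholders standing in for calibrated association information, not real calibration data.

```python
import numpy as np

# Hypothetical calibrated 3x4 coefficient matrix: each row produces a corrected
# R, G, or B value from the normalized [R, G, B, NIR] vector of one pixel.
M = np.array([[1.0, 0.0, 0.0, -0.30],
              [0.0, 1.0, 0.0, -0.25],
              [0.0, 0.0, 1.0, -0.35]], dtype=np.float32)

def remove_nir_component(rgb, nir, coeff=M):
    """Apply the weighting matrix to every pixel to strip the second-type (NIR)
    light component from the first-type (color) channels."""
    h, w, _ = rgb.shape
    stacked = np.concatenate([rgb, nir[..., None]], axis=-1).reshape(-1, 4)
    corrected = stacked @ coeff.T                      # shape (h*w, 3)
    return np.clip(corrected, 0, None).reshape(h, w, 3)
```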
  • various types of channels include a first type of channel that responds to light components in the visible light band, and a second type of channel that responds to light components in the near-infrared light band;
  • the method may further include the following steps:
  • the visible light signal and the near-infrared light signal of the image signal are decomposed to obtain the decomposed first decomposed image signal and second decomposed image signal, wherein the first decomposed image signal is the visible light image signal and the second decomposed image signal is the near-infrared light image signal
  • the step of removing the light components of the other type of channel contained in one type of channel can be specifically implemented through the following steps:
  • the step of decomposing the visible light signal and the near-infrared light signal of the image signal to obtain the decomposed first decomposed image signal and the second decomposed image signal can be specifically implemented by the following steps:
  • obtain the image signal; respectively up-sample the color components of the visible light signal and the near-infrared light signal in the image signal to obtain the image signal of each color component and the image signal of the near-infrared light; combine the image signals of each color component to obtain the first decomposed image signal, and use the image signal of the near-infrared light as the second decomposed image signal;
  • or: obtain the image signal, the first current exposure gain corresponding to the first type of channel, and the second current exposure gain corresponding to the second type of channel; if the second current exposure gain is less than the first current exposure gain, perform edge judgment interpolation on the image data of the first type of channel according to the image data of the second type of channel in the image signal; if the second current exposure gain is greater than the first current exposure gain, perform edge judgment interpolation on the image data of the second type of channel according to the image data of the first type of channel in the image signal; obtain the interpolated image signals of each color component of the visible light signal and the image signal of the near-infrared light; combine the image signals of each color component to obtain the first decomposed image signal, and use the image signal of the near-infrared light as the second decomposed image signal.
  • In an implementation, the correlation between the first type of channel and the second type of channel is determined according to the first current exposure parameter corresponding to the first type of channel, the second current exposure parameter corresponding to the second type of channel, and the association information between the first type of channel and the second type of channel.
  • the method may also include the following steps:
  • the first decomposed image signal is preprocessed to obtain the first sub-processed image signal;
  • the second decomposed image signal is preprocessed to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the step of determining the first output image signal and/or the second output image signal according to the correlation can be specifically implemented through the following steps:
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal with restored color; process the first restored image signal to obtain a color first output image signal.
  • In an implementation, the correlation between the first type of channel and the second type of channel is determined according to the first current exposure parameter corresponding to the first type of channel, the second current exposure parameter corresponding to the second type of channel, and the association information between the first type of channel and the second type of channel.
  • the method may also include the following steps:
  • the first decomposed image signal is preprocessed to obtain the first sub-processed image signal;
  • the second decomposed image signal is preprocessed to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the step of determining the first output image signal and/or the second output image signal according to the correlation can be specifically implemented through the following steps:
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; use the normalized second sub-processed image signal as the black-and-white second restored image signal, or weight the normalized first sub-processed image signal and second sub-processed image signal according to the weight of the first type of channel and the weight of the second type of channel to obtain the black-and-white second restored image signal; process the second restored image signal to obtain a black-and-white second output image signal.
  • In an implementation, the correlation between the first type of channel and the second type of channel is determined according to the first current exposure parameter corresponding to the first type of channel, the second current exposure parameter corresponding to the second type of channel, and the association information between the first type of channel and the second type of channel.
  • the method may also include the following steps:
  • the first decomposed image signal is preprocessed to obtain the first sub-processed image signal;
  • the second decomposed image signal is preprocessed to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the step of determining the first output image signal and/or the second output image signal according to the correlation can be specifically implemented through the following steps:
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal with restored color and the black-and-white second restored image signal; process the first restored image signal to obtain the third sub-processed image signal; process the second restored image signal to obtain the fourth sub-processed image signal; perform at least noise reduction, fusion, and enhancement processing on the third sub-processed image signal and the fourth sub-processed image signal to obtain a color first output image signal.
  • In an implementation, the correlation between the first type of channel and the second type of channel is determined according to the first current exposure parameter corresponding to the first type of channel, the second current exposure parameter corresponding to the second type of channel, and the association information between the first type of channel and the second type of channel.
  • the method may also include the following steps:
  • the first decomposed image signal is preprocessed to obtain the first sub-processed image signal;
  • the second decomposed image signal is preprocessed to obtain the second sub-processed image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction;
  • the step of determining the first output image signal and/or the second output image signal according to the correlation can be specifically implemented through the following steps:
  • according to the first current exposure parameter and the second current exposure parameter, normalize the first sub-processed image signal and the second sub-processed image signal to the same exposure parameter; based on the association information, determine the weight of the first type of channel and the weight of the second type of channel; according to the weight of the first type of channel and the weight of the second type of channel, and the normalized first sub-processed image signal and second sub-processed image signal, obtain the first restored image signal with restored color and the black-and-white second restored image signal; process the first restored image signal to obtain the third sub-processed image signal; process the second restored image signal to obtain the fourth sub-processed image signal; perform at least noise reduction, fusion, and enhancement processing on the third sub-processed image signal and the fourth sub-processed image signal to obtain a color first output image signal, and perform at least noise reduction and enhancement processing on the fourth sub-processed image signal to obtain a black-and-white second output image signal.
  • In an implementation, before the step of decomposing the visible light signal and the near-infrared light signal of the image signal and outputting the decomposed first decomposed image signal and second decomposed image signal, the method may further include the following step: preprocessing the image signal, where the preprocessing includes at least one of dead pixel correction, black level correction, digital gain, and noise reduction.
  • In the embodiment of the present application, the image data of each type of channel is counted separately, the exposure parameter corresponding to that type of channel is calculated based on the statistical data of that type of channel, and the brightness adjustment of the image data of that type of channel is controlled based on the calculated exposure parameter. In view of the fact that the energy of the light components to which different types of channels respond differs in practice, the image data of each type of channel is exposed independently, so that the brightness of the image data of each type of channel is controlled within an appropriate brightness range, thereby improving the final imaging effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present application provides an imaging system and an image processing method. The imaging system includes an image sensor, a statistics unit, and an exposure control unit. The statistics unit separately collects statistics on the image data of each type of channel; the exposure control unit calculates, from the statistical data of a type of channel, the exposure parameter corresponding to that type of channel and, based on the calculated exposure parameter, controls brightness adjustment of the image data of that type of channel. In view of the fact that the energy of the light components to which different types of channels respond differs in practice, the image data of each type of channel is exposed independently, so that the brightness of the image data of each type of channel is controlled within an appropriate brightness range, thereby improving the final imaging effect.

Description

一种成像系统及图像处理方法
本申请要求于2020年01月22日提交中国专利局、申请号为202010073691.3、发明名称为“一种成像系统及图像处理方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机视觉技术领域,特别是涉及一种成像系统及图像处理方法。
背景技术
在当前的成像系统中,通常采用图像传感器成像。图像传感器使用光电转换器件,例如CCD(Charge-coupled Device,电荷耦合期间)、CMOS(Complementary Metal Oxide Semiconductor,互补金属氧化物半导体)等。图像传感器包括多类通道,例如RGB(彩色)通道,各类通道被分别布置为与像素阵列中的像素对应,一个通道对通过该通道的光分量进行响应,对应得到一个像素,进而能够实现将光信号转换为图像信号。
但是,由于各类通道之间的图像信号会相互影响,例如在需要红外补光等场景下,图像传感器中包括RGB通道,RGB通道除了会响应彩色光的能量以外,还能响应部分NIR光的能量,导致成像效果较差。因此,相应的成像系统中,采用插值方式对RGB通道进行插值,得到的插值图像具有较高的分辨率,从而提高了成像效果。
然而,在实际环境中,各类通道所响应的光分量的能量存在较大差异,采用上述插值方法实际的成像效果并不理想。
发明内容
本申请实施例的目的在于提供一种成像系统及图像处理方法,以提高成像系统的成像效果。具体技术方案如下:
第一方面,本申请实施例提供了一种成像系统,该系统包括:图像传感器、统计单元和曝光控制单元;图像传感器包括多类通道;
图像传感器,用于将光信号转换为图像信号,其中,光信号包括多种波段范围内的光分量;
统计单元,用于获取图像信号;提取图像信号中各类通道的图像数据;对各类通道的图像数据分别进行统计,得到各类通道的统计数据;将各类通道的统计数据发送至曝光控制单元;
曝光控制单元,用于接收统计单元发送的各类通道的统计数据;针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
第二方面,本申请实施例提供了一种图像处理方法,应用于成像系统;该方法包括:
从图像传感器获取图像信号,提取图像信号中各类通道的图像数据;
对各类通道的图像数据分别进行统计,得到各类通道的统计数据;
针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
本申请实施例提供的一种成像系统及图像处理方法,成像系统包括:图像传感器、统计单元和曝光控制单元。其中,统计单元,用于获取图像传感器对光信号转换得到的图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元;曝光控制单元,用于接收统计单元发送的各类通道的统计数据;针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
在本申请实施例中,成像系统中新增统计单元和曝光控制单元,统计单元对每一类通道的图像数据分别进行统计,曝光控制单元根据一类通道的统计数据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道的图像数据进行独立曝光,使得一类通道的图像数据的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。
附图说明
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请一实施例的成像系统的结构示意图;
图2a为本申请一实施例的图像传感器排列方式示意图;
图2b为本申请另一实施例的图像传感器排列方式示意图;
图2c为本申请再一实施例的图像传感器排列方式示意图;
图3为本申请实施例的RGB通道和NIR通道的光谱响应曲线的示意图;
图4为本申请实施例的W通道的光谱响应曲线的示意图;
图5为本申请另一实施例的成像系统的结构示意图;
图6为本申请实施例的滤光单元的光谱通过率曲线的示意图;
图7为本申请再一实施例的成像系统的结构示意图;
图8为本申请又一实施例的成像系统的结构示意图;
图9为本申请实施例的850nm近红外能量分布曲线的示意图;
图10为本申请实施例的曝光增益调整的流程示意图;
图11为本申请又一实施例的成像系统的结构示意图;
图12为本申请一实施例的处理单元的实现流程示意图;
图13为本申请一实施例的后处理模块的实现流程示意图;
图14为本申请另一实施例的后处理模块的实现流程示意图;
图15为本申请再一实施例的后处理模块的实现流程示意图;
图16为本申请另一实施例的处理单元的实现流程示意图;
图17为本申请又一实施例的后处理模块的实现流程示意图;
图18为本申请再一实施例的处理单元的实现流程示意图;
图19为本申请又一实施例的处理单元的实现流程示意图;
图20为本申请实施例的图像处理方法的流程示意图。
具体实施方式
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
为了提高成像系统的成像效果,本申请实施例提供了一种成像系统及图像处理方法。
本申请实施例提供了一种成像系统,如图1所示,该成像系统包括图像传感器11、统计单元12和曝光控制单元13。图像传感器11包括多类通道,每个通道用于对通过的光分量进行响应。
其中,图像传感器11,用于将光信号转换为图像信号,光信号包括多种波段范围内的光分量;统计单元12,用于获取图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元13;曝光控制单元13,用于接收统计单元12发送的各类通道的统计数据,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
应用本申请实施例,统计单元对每一类通道的图像数据分别进行统计,曝光控制单元根据一类通道的统计数据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道进行独立曝光,使得一类通道的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。
本申请实施例提供的成像系统可以是图像拍摄系统(例如数码相机、便携式摄像机、监控相机等),成像系统还可以是安装在计算机、多媒体播放器、手机等设备上的成像模块。
图像传感器11包括多类通道,每一类通道用于响应不同波段范围内的光分量。在一种实现方式中,图像传感器11可以包括:响应可见光波段范围内的光分量的第一类通道,以 及响应近红外光波段范围内的光分量的第二类通道。
该实现方式中,图像传感器11为RGB-IR传感器,具体包括两类通道,例如,第一类通道为RGB通道,第二类通道可以为NIR通道。通过各类通道得到如图2a、图2b或者图2c所示的图像传感器排列方式,每个通道对通过该通道的光分量进行响应。图2a中R(红色)通道和B(蓝色)通道分别占总像素数的1/8、NIR通道占总像素数的1/4、G(绿色)通道占总像素数的1/2;图2b中R通道和B通道分别占总像素数的1/4、G通道占总像素数的1/2、NIR通道占所有像素;图2c中R通道和B通道分别占总像素数的3/16、G通道占总像素数的3/8、NIR通道占总像素数的1/4。
其中,每个通道的光谱响应如图3所示,为了保证第一类通道响应可见光分量、第二类通道响应近红外光分量,第一类通道的光分量在400nm~700nm的波段范围内的相对响应不低于在700nm~1000nm的波段范围内的相对响应;在800nm~1000nm的波段范围内,第二类通道的光分量的相对响应不低于第一类通道的光分量的相对响应。
第二类通道除了可以是NIR通道以外,还可以是W通道,W通道是一种可以响应可见光波段范围内的光分量和近红外光波段范围内的光分量的通道。W通道的光谱响应如图4所示,W通道的光分量在400nm~1000nm的波段范围内全部响应。
在图像传感器11的输入侧设置有镜头(图1中未示出),用于接收目标物体反射的光信号。其中光信号包括可见光分量和近红外光分量等,且镜头能够使可见光分量和近红外光分量满足共焦要求。
本申请实施例中,统计单元12用于进行图像数据统计,可以是具有运算功能的芯片,在统计图像数据时,主要是对图像数据进行亮度统计,每类通道的图像数据即为该类通道在像素阵列中相应位置的像素值,像素值能够体现像素亮度,因此,在进行统计时,可以直接对一类通道中的像素值进行统计,得到统计亮度即为该类通道的统计数据。
在一种实现方式中,第一类通道包括多个色彩通道,第二类通道包括近红外通道。
相应的,统计单元12,具体可以用于:根据多个色彩通道中至少一个色彩通道的图像数据,计算第一类通道的图像数据统计值作为第一类通道的统计数据;根据近红外通道的图像数据,计算第二类通道的图像数据统计值作为第二类通道的统计数据。
第一类通道为色彩通道,具体包含红色通道、绿色通道和蓝色通道,或者包含红色通道、黄色通道和蓝色通道,需要对各色彩通道的图像数据进行统计,将计算出的第一类通道的图像数据统计值(例如图像数据均值、图像数据和值等)作为第一类通道的统计数据,也可以是计算红色通道或者绿色通道的图像数据均值作为第一类通道的统计数据。第二类通道为近红外通道,对近红外通道的图像数据进行统计,将计算出的第二类通道的图像数据统计值作为第二类通道的统计数据。
在本申请实施例的一种实现方式中,统计单元12,具体可以用于:
从图像信号中,提取各色彩通道的图像数据及近红外通道的图像数据;分别根据各色 彩通道的图像数据和近红外通道的图像数据,计算各色彩通道的图像数据均值和近红外通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据,将近红外通道的图像数据均值作为第二类通道的统计数据;
或者,
将图像信号进行分块,得到多个图像信号块;针对任一图像信号块,从该图像信号块中提取各色彩通道的图像数据及近红外通道的图像数据;分别根据各图像信号块中各色彩通道的图像数据和近红外通道的图像数据,计算各色彩通道的图像数据均值和近红外通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据,将近红外通道的图像数据均值作为第二类通道的统计数据;
或者,
从图像信号中,提取各色彩通道的图像数据及近红外通道的图像数据;分别根据各色彩通道的图像数据和近红外通道的图像数据,得到各色彩通道的直方图和近红外通道的直方图;分别对各色彩通道的直方图和近红外通道的直方图中的灰阶数进行加权平均计算,得到各色彩通道的图像数据均值和近红外通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据,将近红外通道的图像数据均值作为第二类通道的统计数据。
数据统计包括全局统计、分块统计和直方图统计三种统计方式,下面,以RGB-IR传感器为例,分别对统计单元的这三种统计方式进行介绍。
全局统计:对R通道、G通道、B通道、NIR通道分别计算图像数据均值,分别得到R通道的图像数据均值Rave、G通道的图像数据均值Gave、B通道的图像数据均值Bave、NIR通道的图像数据均值NIRave。将Rave、Gave和Bave按一定权重加权,得到Y=kr*Rave+kg*Gave+kb*Bave,其中,kr为R通道权重、kg为G通道权重、kb为B通道权重,且kr+kg+kb=1。Y即为第一类通道的统计数据,NIRave即为第二类通道的统计数据。
分块统计:将图像信号进行分块,分别对每个分块的R通道、G通道、B通道、NIR通道计算图像数据均值,得到R通道的图像数据均值R ijave、G通道的图像数据均值G ijave、B通道的图像数据均值B ijave、NIR通道的图像数据均值NIR ijave(i、j表示分块的坐标,0<i<M,0<j<N,M、N为垂直和水平方向的分块数);每个通道的所有块进行加权平均,得到R通道的图像数据均值
Rave、G通道的图像数据均值Gave、B通道的图像数据均值Bave、NIR通道的图像数据均值NIRave（各均值为对应通道所有分块均值的加权平均；原文此处为公式图片）。
再将Rave、Gave和Bave按一定权重加权,得到Y=kr*Rave+kg*Gave+kb*Bave,其中,kr为R通道权重、kg为G通道权重、kb为B通道权重,且kr+kg+kb=1。Y即为第一类通道的统计数据,NIRave即为第二类通道的统计数据。
直方图统计:首先需要考虑R通道、G通道和B通道输入数据的位数,若超过或不足8位,则需要将输入数据转为8位,然后对R通道、G通道、B通道、NIR通道分别计算直方图,得到Rhist、Ghist、Bhist、NIRhist;直方图的灰阶数为256;对每个通道的直方图根据灰阶数进行加权平均如下式:
（Rave、Gave、Bave、NIRave分别由各通道直方图按灰阶数加权平均得到；原文此处为公式图片）
其中,w(n)为每个灰阶的权重。再将Rave、Gave和Bave按一定权重加权,得到Y=kr*Rave+kg*Gave+kb*Bave,其中,kr为R通道权重、kg为G通道权重、kb为B通道权重,且kr+kg+kb=1。Y即为第一类通道的统计数据,NIRave即为第二类通道的统计数据。
本申请实施例中,曝光控制单元13可以是具有运算功能的芯片,接收统计单元12发送的各类通道的统计数据,根据各类通道的统计数据,计算得到各类通道对应的曝光参数,曝光参数可以包括曝光时间、模拟增益、数字增益等,曝光参数还可以包括镜头的光圈,在本申请实施例中,光圈可以认为在某种条件下固定大小。在得到各类通道对应的曝光参数后,基于曝光参数,控制对相应通道的图像数据进行亮度调整。具体的,根据各类通道的统计数据计算得到的各类通道对应的曝光参数为各类通道当前实际的曝光参数,曝光控制单元13在控制对相应通道的图像数据进行亮度调整时,可以是将曝光参数发送至图像传感器,由图像传感器将曝光参数作用于各类通道,对各类通道进行亮度调整,使得各类通道的图像数据的亮度在预设亮度范围内;也可以是将曝光参数发送至图像处理单元,由图像处理单元基于曝光参数直接对图像传感器输出的相应通道的图像数据进行调整,使得各类通道的图像数据的亮度在预设亮度范围内。针对不同通道的图像数据可以设置不同的预设亮度范围,也可以设置相同的预设亮度范围,这里不做具体限定。在对每类通道的图像数据都进行亮度调整后,可以使得最终输出的图像信号符合亮度范围。
在计算得到曝光参数后,基于曝光参数控制进行亮度调整的方式,主要将曝光参数发送给图像传感器或者图像处理单元,控制图像传感器或者图像处理单元调整曝光时间、曝光增益等方式实现,具体的调整方式为常规的曝光调整方式,这里不做具体限定。
统计单元12、曝光控制单元13的功能可以是一个处理器(或微处理器)执行,也可以是多处理器(或微处理器)完成。
基于图1所示实施例,本申请实施例还提供了一种成像系统,如图5所示,该成像系统包括图像传感器11、统计单元12、曝光控制单元13和滤光单元14。图像传感器11包括多类通道,通道用于对通过的光分量进行响应,得到图像信号中的一个像素。
其中,滤光单元14,用于过滤掉输入的光信号中除指定波段范围中的光分量以外的其他光分量,例如过滤掉除波长在700nm-1000nm的波段范围以外的其他近红外光,并透射过滤后的光信号至图像传感器11;图像传感器11,用于将光信号转换为图像信号,光信号包括多种波段范围内的光分量;统计单元12,用于获取图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元13;曝光控制单元13,用于接收统计单元12发送的各类通道的统计数据,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
应用本申请实施例,统计单元对每一类通道的图像数据分别进行统计,曝光控制单元根据一类通道的统计数据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道进行独立曝光,使得一类通道的图像数据的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。并且,通过在图像传感器的前端增加滤光单元,将输入的光信号进行过滤,能够使指定波段范围中的光分量通过,摄入到图像传感器,过滤掉其他波段范围内的光分量。
滤光单元14能够使可见光的光分量和指定波段范围内的近红外光的光分量通过,而过滤掉其他波段范围内的光分量。滤光单元14可以是滤光片,不同的滤光片可以通过不同波段范围的光分量,因此,可以根据可见光的波段范围和指定波段范围选择滤光片,如图6所示的滤光单元的光谱通过率曲线,红光的波长为700nm、绿光的波长为546.1nm、蓝光的波长为435.8nm、近红外光的波长为780~3000nm,采用该滤光单元,波段范围在390~640nm和820~880nm内的光分量的通过率可达60%以上,则可以保证可见光中的绿光、蓝光和及波段范围820~880nm内的近红外光通过,其他光分量被过滤掉。
在本申请实施例的一种实现方式中,滤光单元可以包括切换装置。其中,切换装置,用于切换滤光单元14的过滤状态。
滤光单元14,可以用于在过滤状态为开启的情况下,过滤掉输入的光信号中除指定波段范围内的光分量以外的其他光分量,并透射过滤后的光信号至图像传感器11;在过滤状态为关闭的情况下,透射光信号中的所有光分量至图像传感器11。
当滤光单元为滤光片时,切换装置可以通过硬件来实现。例如:将滤光片安装在可旋转的机械臂上,在需要进行光分量过滤的场景下,将滤光片旋转至图像传感器的镜头前, 进行光分量过滤;在不需要进行光分量过滤的场景下,将图像传感器的镜头前的滤光片旋转至除镜头之外的其他位置,滤光片不在图像传感器的镜头前,不会对进入镜头的光分量进行过滤。这样,就实现了过滤状态的开启和关闭之间的切换。
当然,在本申请的另一种实现方式下,还可以设置多个滤光片,例如设置一个可以通过可见光的光分量和指定波段近红外光的光分量的滤光片,和一个仅可以通过可见光的光分量的滤光片,根据实际需求可以对这两个滤光片进行旋转切换。
滤光单元14可以具有切换装置,用于对滤光单元14的过来状态进行切换。当切换装置将滤光单元14的过滤状态切换为开启时,滤光单元14会过滤掉输入的光信号中除指定波段范围内的光分量以外的其他光分量,并透射过滤后的光信号至图像传感器11,只通过指定波段范围内的光分量透射至图像传感器11;当切换装置将滤光单元14的过滤状态切换为关闭时,透射光信号中的所有光分量至图像传感器11。在一种情况下,滤光单元14的过滤状态切换为开启时,滤光单元14可以过滤掉输入的光信号中除可见光和部分近红外光以外的其他光分量,并透射过滤后的光信号至图像传感器11,只通过可见光和部分近红外光的光分量透射至图像传感器11;滤光单元14也可以过滤掉输入的光信号中除可见光以外的其他光分量,并透射过滤后的光信号至图像传感器11,只通过可见光的光分量透射至图像传感器11。
基于图1所示实施例,本申请实施例还提供了一种成像系统,如图7所示,该成像系统包括图像传感器11、统计单元12、曝光控制单元13和补光单元15。图像传感器11包括多类通道,通道用于对通过的光分量进行响应,得到图像信号中的一个像素。
其中,补光单元15,用于对场景进行近红外补光,以使输入的光信号包括近红外光;图像传感器11,用于将光信号转换为图像信号,光信号包括多种波段范围内的光分量;统计单元12,用于获取图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元13;曝光控制单元13,用于接收统计单元12发送的各类通道的统计数据,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
应用本申请实施例,统计单元对每一类通道的图像数据分别进行统计,曝光控制单元根据一类通道的统计数据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道进行独立曝光,使得一类通道的图像数据的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。并且,通过增加补光单元,增加光信号中的近红外光分量,使得近红外光通道的亮度增加,便于提高近红外光成像。
在另一种实施方式中,如图8所示,该成像系统包括图像传感器11、统计单元12、曝光控制单元13、滤光单元14和补光单元15。图像传感器11包括多类通道,通道用于对通 过的光分量进行响应,得到图像信号中的一个像素。
其中,补光单元15,用于对场景进行近红外补光,以使输入的光信号包括近红外光;滤光单元14,用于过滤掉输入的光信号中除指定波段范围中的光分量以外的其他光分量,并透射过滤后的光信号至图像传感器11,其中,指定波段范围中的光分量包括补光单元发射的近红外光;图像传感器11,用于将光信号转换为图像信号,光信号包括多种波段范围内的光分量;统计单元12,用于获取图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元13;曝光控制单元13,用于接收统计单元12发送的各类通道的统计数据,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
补光单元15用于近红外补光,当然也可以同时产生可见光补光。补光单元15产生的近红外补光的能量分布在650nm~1000nm的范围内,具体的,能量集中在750nm~900nm的范围内,或者900nm~1000nm的范围内。为了保证滤光单元14可以通过补光单元15的近红外补光,要求补光单元15的近红外补光的能量分布范围不小于滤光单元14预先设置的近红外光通过范围。
用户可以根据自己的实际需求开启滤光单元14,当然,在一些场景下,例如在夜间开启近红外补光的功能下,可以自动开启滤光单元14,在白天关闭近红外补光时,可以自动关闭滤光单元14。
本申请实施例中,采用近红外光波段的补光器材进行补光。具体的,可以采用波长为850nm的红外灯作为补光单元15,也可以采用750nm、780nm、850nm、860nm、940nm的红外灯作为补光单元15,以850nm红外灯为例,其能量分布曲线如图9所示,能量分布主要集中在830nm~880nm范围内。
在本申请实施例的一种实现方式中,图像传感器11,可以包括响应近红外光波段范围内的光分量的第二类通道。曝光控制单元13,还可以用于根据第二类通道的统计数据,控制补光单元15调整补光强度。
曝光控制单元13除了可以控制对各类通道的图像数据进行亮度调整之外,还可以控制补光单元15调整补光强度,经过对补光强度的调整,可以使第二类通道的图像数据符合预定亮度范围。
在本申请实施例的一种实现方式中,曝光控制单元13,具体可以用于:若第二类通道的统计数据大于第一预设阈值,则控制补光单元15降低发射近红外光的强度;若第二类通道的统计数据小于第二预设阈值,则控制补光单元15提高发射近红外光的强度。
曝光控制单元13控制补光单元15调整补光强度的方式,具体是判断第二类通道的统计数据是否大于第一预设阈值,第二类通道的统计数据可以是第二类通道的图像数据的亮 度统计结果,相应的第一预设阈值是最大亮度阈值,如果统计数据大于第一预设阈值,则说明第二类通道的亮度太高,需要降低,则会控制补光单元15降低发射近红外光的强度。相应的第二预设阈值是最小亮度阈值,如果统计数据小于第二预设阈值,则说明第二类通道的亮度太低,需要提高,则会控制补光单元15提高发射近红外光的强度。由于第一预设阈值是最大亮度阈值、第二预设阈值是最小亮度阈值,因此,一般情况下,第一预设阈值大于第二预设阈值。
在本申请实施例的一种可实现方式中,曝光控制单元13控制补光单元15调整补光强度可以在曝光控制单元13控制对各类通道的图像数据进行亮度调整之后,即如果曝光控制单元13控制对各类通道的图像数据进行亮度调整之后,成像效果仍然较差(图像太暗或者太亮),此时,曝光控制单元13可以控制补光单元15调整补光强度,以使得图像的亮度符合预定亮度范围。
在本申请实施例的一种实现方式中,图像传感器还可以包括:响应可见光波段范围内的光分量的第一类通道;曝光控制单元13,具体可以用于:
获取第一类通道的第一曝光时间、第一类通道对应的第一目标数据、第二类通道的第二曝光时间及第二类通道对应的第二目标数据;根据第一类通道的统计数据及第一目标数据,计算第一类通道的第一数据偏移量,若第一数据偏移量不在第一预设范围内,则根据第一类通道的统计数据及第一目标数据,计算第一曝光增益;根据第二类通道的统计数据及第二目标数据,计算第二类通道的第二数据偏移量,若第二数据偏移量不在第二预设范围内,则根据第二类通道的统计数据及第二目标数据,计算第二曝光增益;若第一曝光时间与第二曝光时间相等,则在第二曝光增益小于第一预设增益阈值的情况下,控制补光单元降低发射近红外光的强度,在第二曝光增益大于第二预设增益阈值的情况下,控制补光单元提高发射近红外光的强度;若第一曝光时间与第二曝光时间不相等,则在第二曝光增益小于第一预设增益阈值的情况下,减小第二曝光时间,在第二曝光增益大于第二预设增益阈值的情况下,增大第二曝光时间。
如图10所示,假设曝光时间为固定时间,例如40ms或10ms,第一类通道的曝光时间为T1、增益为Gain1,第二类通道的曝光时间为T2、增益为Gain2。计算Gain1和Gain2的步骤包括:第一步,根据通道的统计数据和目标数据,计算数据偏移量,即针对第一类通道,数据偏移量Y delta=|Y current-Y target|,针对第二类通道,数据偏移量NIRave delta=|NIRave current-NIRave target|,其中,目标数据Y target和NIRave target是预先设置的;第二步,判断数据偏移量是否在预设范围内,若否则执行第三步,其中,针对第一类通道和第二类通道,所设置的预设范围可以相同也可以不同,具体的,针对第一类通道设置有第一预设范围,针对第二类通道设置有第二预设范围;第三步,根据通道的统计数据和目标数据,计算更新的曝光增益,即Gain=Ytarget/Ycurrent。然后将计算得到的更新的曝光增益和设定的曝光时间发送至图像传感器,图像传感器识别到T1与T2相等,当Gain2小 于第一预设增益阈值时,可以控制补光单元降低发射近红外光的强度实现补光强度的调整,可见,第一预设增益阈值指的是可接受的第二曝光增益的最小值,当Gain2大于第二预设增益阈值时,可以控制补光单元提高发射近红外光的强度实现补光强度的调整,可见,第二预设增益阈值指的是可接受的第二曝光增益的最大值,因此,一般情况下,第一预设增益阈值小于第二预设增益阈值;识别到T1和T2不同,当Gain2小于第一预设增益阈值时,可以通过缩小T2实现通道的图像数据的亮度调整,当Gain2大于第二预设增益阈值时,可以通过增大T2实现通道的图像数据的亮度调整。
基于图1所示实施例,本申请实施例还提供了一种成像系统,如图11所示,该成像系统包括图像传感器11、统计单元12、曝光控制单元13和处理单元16。图像传感器11包括多类通道,通道用于对通过的光分量进行响应,得到图像信号中的一个像素。
其中,图像传感器11,用于将光信号转换为图像信号,光信号包括多种波段范围内的光分量;处理单元16可以是具有运算功能的芯片,用于获取图像传感器11输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息,根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,针对每个像素点,去除包含在一类通道中的另一类通道的光分量;统计单元12,用于获取图像信号,提取图像信号中各类通道的图像数据,对各类通道的图像数据分别进行统计,得到各类通道的统计数据,将各类通道的统计数据发送至曝光控制单元13;曝光控制单元13,用于接收统计单元12发送的各类通道的统计数据,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
统计单元12、曝光控制单元13和处理单元16的功能可以由一个处理器执行,也可以是多个处理器执行,这里不做具体限定。
应用本申请实施例,统计单元对每一类通道的图像数据分别进行统计,曝光控制单元根据一类通道的统计数据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道进行独立曝光,使得一类通道的图像数据的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。并且,处理单元通过分析每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量,相关性与曝光参数相关,有利于获得更准确的色彩信息。
处理单元16是一个包含图像处理算法或程序的逻辑平台,该平台可以是CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processor,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。
处理单元16在从图像传感器11获取到图像信号后,对图像信号进行处理得到各类通道的图像数据,并且可以获取到各类通道对应的当前曝光参数(可以包括曝光时间、曝光增益等),根据各类通道对应的当前曝光参数及各类通道之间的关联信息,可以确定出每两类通道之间的相关性,其中,关联信息是指一个通道的某一个图像属性对另一个通道的某一个图像属性的影响,例如,NIR通道的亮度与RGB通道的色彩的影响,在实际成像时,会有一部分近红外光能量进入RGB通道对应的像素点,造成RGB通道对应像素点的图像偏色,为了能够准确恢复色彩需要减去RGB通道对应像素点中所包含的红外能量,关联信息就是指哪些RGB通道的色彩受到了近红外光的影响。相关性表示了图像属性关联信息的程度和大小,图像属性关联信息可以是处理单元在获取到图像信号后分析得到的,也可以图像传感器预先根据图像信号分析好发送给处理单元的,相关性即指通过标定等手段,计算出RGB通道对应的像素点中近红外光能量的占比,根据每两类通道之间的相关性,可以明确知道两个通道之间图像数据相互影响的大小和程度,因此,根据每两类通道之间的相关性,可以去除包含在一类通道中的另一类通道的光分量,从而还原出一类通道的原始信号。具体在实现时,在一类通道中去除另一类通道的光分量,可以是通过使用系数矩阵对多类通道的图像数据进行加权处理实现,其中,系数矩阵可以是一个预先标定好的矩阵。
在本申请实施例的一种实现方式中,图像传感器11可以包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道。
处理单元16,具体可以用于:获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的色彩和第二类通道的亮度之间的关联信息,或者,获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的亮度和第二类通道的亮度之间的关联信息;根据第一类通道和第二类通道对应的当前曝光参数,将图像信号中第一类通道的图像数据和第二类通道的图像数据归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一类通道的图像数据和第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
针对于图像传感器11包括第一类通道和第二类通道的场景,可以先根据第一类通道和第二类通道的当前曝光参数,将图像信号中第一类通道的图像数据和第二类通道的图像数据归一化至同一曝光参数下,从而在逻辑上获得相同曝光参数下的第一类通道的图像数据和第二类通道的图像数据,基于第一类通道的色彩和第二类通道的亮度之间的关联信息,确定第一类通道的权重和第二类通道的权重,第一类通道的权重和第二类通道的权重可以组成一个系数矩阵,该系数矩阵可以是根据关联信息标定的3*4矩阵。利用第一类通道的权重和第二类通道的权重以及归一化后的第一类通道的图像数据和第二类通道的图像数据,则可去除包含在第一类通道中的第二类通道的光分量。上述归一化过程也可以通过调 整前述系数矩阵实现,这里不再赘述。第一类通道的色彩和第二类通道的亮度之间的关联信息,具体可以是通过预先分析第一类通道中各色彩通道的亮度和第二类通道的亮度之间的关联信息得到,当然关联信息也可以直接采用第一类通道中各色彩通道的亮度和第二类通道的亮度之间的关联信息。
在本申请实施例的一种实现方式中,图像传感器11可以包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道。
处理单元16,具体可以用于:获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的色彩和第二类通道的亮度之间的关联信息,或者,获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的亮度和第二类通道的亮度之间的关联信息;根据第一类通道和第二类通道对应的当前曝光参数,以及关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重、第一类通道的图像数据及第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
针对于图像传感器11包括第一类通道和第二类通道的场景,还可以根据第一类通道和第二类通道的当前曝光参数和预先分析得到的第一类通道的色彩和第二类通道的亮度之间的关联信息,确定第一类通道的权重和第二类通道的权重,第一类通道的权重和第二类通道的权重可以组成一个系数矩阵,该系数矩阵可以是根据关联信息标定的3*4矩阵。利用第一类通道的权重和第二类通道的权重以及第一类通道的图像数据和第二类通道的图像数据,则可去除包含在第一类通道中的第二类通道的光分量。确定出的第一类通道的权重和第二类通道的权重与当前曝光参数相关,有利用获得更准确的色彩信息。
在后续应用时,一般更关心图像信号中的彩色分量,因此,上述实施例仅给出了去除包含在第一类通道中的第二类通道的光分量的实施方式。而为了更好的成像效果,也可以按照上述方式去除包含在第二类通道中的第一类通道的光分量,然后再进行通道融合得到融合的图像信号。
在本申请实施例的一种实现方式中,处理单元16,还可以用于将已去除其他类通道的光分量的各类通道的图像数据进行融合,得到融合后的图像信号。
具体的融合过程可以为:分别对可见光通道的图像(以下称为可见光图像)和近红外光通道的图像(以下称为近红外光图像)进行滤波处理,得到可见光基础层图像和近红外光基础层图像,依次计算可见光图像每一像素点的灰度值,与可见光基础层图像对应每一像素点的灰度值的比值或差值,将计算结果集作为可见光纹理系数,依次计算近红外光图像每一像素点的灰度值,与近红外光基础层图像对应每一像素点的灰度值的比值或差值,将计算结果集作为近红外光纹理系数,根据预设的边缘检测算子,对可见光图像进行卷积处理,得到可见光纹理强度信息,并根据边缘检测算子,对近红外光图像进行卷积处理,得到近红外光纹理强度信息,根据可见光纹理系数、近红外光纹理系数、可见光纹理强度 信息、以及近红外光纹理强度信息,得到融合纹理系数;根据预设的第一卷积核,对可见光图像进行卷积处理,得到可见光局部亮度图像,并根据预设的第二卷积核,对近红外光图像进行卷积处理,得到近红外光局部亮度图像,计算可见光局部亮度图像与近红外光局部亮度图像的亮度偏移量,并根据可见光局部亮度图像、近红外光局部亮度图像、以及亮度偏移量,得到反射系数,根据融合纹理系数和反射系数,对近红外光图像进行融合处理,得到融合图像。
处理单元16在对各类通道进行光分量消除后,可以将各类通道的图像数据进行融合,能够增强彩色信号、提升低照度下融合后的图像数据的质量。在得到融合后的图像信号后,可以将该图像信号发送给统计单元12,由统计单元12进行图像数据的统计,给曝光控制单元13的曝光控制提供依据,也可以直接输出给用户。
本申请实施例中,还可以对第一类通道和第二类通道的图像数据进行联合降噪,具体的联合降噪方式可以是,利用第二类通道的图像数据对第一类通道的图像数据进行引导,也就是将第二类通道的图像数据作为基准数据,对第一类通道的图像数据进行降噪处理,降低噪声的同时减少有效信息损失。
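对于上述“以第二类通道数据作为基准引导第一类通道降噪”的联合降噪，一种常见的可选做法是引导滤波（guided filter），下面给出其标准形式的示意实现；本申请文本并未限定必须采用该算法，函数名、滤波窗口半径radius与正则项eps均为假设性取值。

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_denoise(p, I, radius=4, eps=0.01):
    """以第二类通道图像 I 作为引导，对第一类通道图像 p 做降噪的示意实现（标准引导滤波形式）。"""
    p = p.astype(float)
    I = I.astype(float)
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # 窗口均值

    mean_I, mean_p = mean(I), mean(p)
    corr_Ip = mean(I * p)
    var_I = mean(I * I) - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)        # 局部线性系数
    b = mean_p - a * mean_I

    mean_a, mean_b = mean(a), mean(b)
    return mean_a * I + mean_b        # 输出：保留引导图边缘结构、噪声被抑制的结果
```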
在本申请实施例的一种实现方式中,处理单元包括信号分解模块及后处理模块;
信号分解模块，用于获取图像信号，对图像信号的可见光信号和近红外光信号进行分解，输出分解后的第一分解图像信号和第二分解图像信号，其中，第一分解图像信号为可见光图像信号，第二分解图像信号为近红外光图像信号，由于可见光信号和近红外光信号有不同的数据格式，因此可以根据数据格式的不同进行分解；
后处理模块,用于获取第一分解图像信号、第二分解图像信号、第一类通道对应的第一当前曝光参数、第二类通道对应的第二当前曝光参数,以及第一类通道和第二类通道之间的关联信息;根据第一当前曝光参数、第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号,其中,第一输出图像信号为去除近红外光分量的第一分解图像信号。第二输出图像信号可以为去除可见光分量的第二分解图像信号,也可以为未去除可见光分量的第二分解图像信号。
在本申请实施例的一种实现方式中,信号分解模块,具体可以用于:
获取图像信号;分别对图像信号中可见光信号的各色彩分量和近红外光信号进行上采样,得到各色彩分量的图像信号以及近红外光的图像信号;将各色彩分量的图像信号进行组合,得到第一分解图像信号进行输出,并将近红外光的图像信号作为第二分解图像信号进行输出;
或者,
获取图像信号、第一类通道对应的第一当前曝光增益以及第二类通道对应的第二当前曝光增益;若第二当前曝光增益小于第一当前曝光增益,则根据图像信号中第二类通道的 图像数据,对第一类通道的图像数据进行边缘判决插值,若第二当前曝光增益大于第一当前曝光增益,则根据图像信号中第一类通道的图像数据,对第二类通道的图像数据进行边缘判决插值;获得插值后的可见光信号的各色彩分量的图像信号以及近红外光的图像信号;将各色彩分量的图像信号进行组合,得到第一分解图像信号进行输出,并将近红外光的图像信号作为第二分解图像信号进行输出。
在本申请实施例的一种实现方式中,后处理模块,可以包括:第一处理子模块、第二处理子模块、色彩恢复子模块及第三处理子模块;
第一处理子模块,用于获取第一分解图像信号,对第一分解图像信号进行预处理,得到第一子处理图像信号;
第二处理子模块,用于获取第二分解图像信号,对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
色彩恢复子模块,用于获取第一类通道对应的第一当前曝光参数、第二类通道对应的第二当前曝光参数,以及第一类通道和第二类通道之间的关联信息;根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到第一恢复图像信号;
第三处理子模块,用于对第一恢复图像信号进行处理,得到第一输出图像信号。
在本申请实施例的一种实现方式中,后处理模块,可以包括:第一处理子模块、第二处理子模块、色彩恢复子模块及第四处理子模块;
第一处理子模块,用于获取第一分解图像信号,对第一分解图像信号进行预处理,得到第一子处理图像信号;
第二处理子模块,用于获取第二分解图像信号,对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
色彩恢复子模块,用于获取第一类通道对应的第一当前曝光参数、第二类通道对应的第二当前曝光参数,以及第一类通道和第二类通道之间的关联信息;根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到第二恢复图像信号;
第四处理子模块,用于对第二恢复图像信号进行处理,得到第二输出图像信号。
在本申请实施例的一种实现方式中，后处理模块，可以包括：第一处理子模块、第二处理子模块、色彩恢复子模块、第三处理子模块、第四处理子模块及第五处理子模块；
第一处理子模块,用于获取第一分解图像信号,对第一分解图像信号进行预处理,得到第一子处理图像信号;
第二处理子模块,用于获取第二分解图像信号,对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
色彩恢复子模块,用于获取第一类通道对应的第一当前曝光参数、第二类通道对应的第二当前曝光参数,以及第一类通道和第二类通道之间的关联信息;根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到第一恢复图像信号和第二恢复图像信号;
第三处理子模块,用于对第一恢复图像信号进行处理,得到第三子处理图像信号;
第四处理子模块,用于对第二恢复图像信号进行处理,得到第四子处理图像信号;
第五处理子模块,用于对第三子处理图像信号和第四子处理图像信号进行处理,得到第一输出图像信号。
在本申请实施例的一种实现方式中,后处理模块,可以包括:第一处理子模块、第二处理子模块、色彩恢复子模块、第三处理子模块、第四处理子模块及第五处理子模块;
第一处理子模块,用于获取第一分解图像信号,对第一分解图像信号进行预处理,得到第一子处理图像信号;
第二处理子模块,用于获取第二分解图像信号,对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
色彩恢复子模块,用于获取第一类通道对应的第一当前曝光参数、第二类通道对应的第二当前曝光参数,以及第一类通道和第二类通道之间的关联信息;根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到第一恢复图像信号和第二恢复图像信号;
第三处理子模块,用于对第一恢复图像信号进行处理,得到第三子处理图像信号;
第四处理子模块,用于对第二恢复图像信号进行处理,得到第四子处理图像信号;
第五处理子模块,用于对第三子处理图像信号和第四子处理图像信号进行处理,得到第一输出图像信号和第二输出图像信号。
在本申请实施例的一种实现方式中,处理单元还可以包括预处理模块;
预处理模块,用于获取图像传感器输出的图像信号,对图像信号进行预处理,将预处理后的图像信号发送至信号分解模块。
综上所述，处理单元16具体可以采用多种方式进行实施，下面分别进行介绍。
第一种实施方式:
如图12所示，处理单元16包括信号分解模块和后处理模块。其中，信号分解模块对输入的图像信号的可见光信号和近红外光信号进行逻辑分解，将图像信号分解为第一分解图像信号和第二分解图像信号；后处理模块对第一分解图像信号和第二分解图像信号进行处理，输出第一输出图像信号。
图像传感器传输给处理单元的一路图像信号中同时包括可见光信号和近红外光信号,因此信号分解模块对这两种图像信号进行逻辑分解,输出分解后的第一分解图像信号和第二分解图像信号。
对于信号分解模块,一种处理方式可以是分别对可见光的R、G、B信号和近红外的NIR信号进行上采样(可以是双线性上采样,也可以采用其他上采样方法),获得R、G、B、NIR的图像信号,这四个图像信号为全分辨率图像信号,而后将R、G、B图像信号分别作为第一分解图像信号输出或者组合成可见光图像信号作为第一分解图像信号输出,将近红外NIR图像信号作为第二分解图像信号输出。
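下面给出与该上采样分解方式对应的一个示意性草图（仅为帮助理解的假设性示例）：其中假设传感器输出为2×2周期的RGB-IR马赛克排布（R、G、B、NIR各占一个位置，该排布方式为假设，并非本申请限定），并采用双线性方式上采样到全分辨率。

```python
import numpy as np
from scipy.ndimage import zoom

def split_and_upsample(raw):
    """示意性草图：按假设的2x2 RGB-IR排布提取各通道子采样平面，并双线性上采样到全分辨率。"""
    planes = {
        'R':   raw[0::2, 0::2],
        'G':   raw[0::2, 1::2],
        'B':   raw[1::2, 0::2],
        'NIR': raw[1::2, 1::2],
    }
    # order=1 对应双线性插值；裁剪保证与原图分辨率一致
    up = {k: zoom(v.astype(float), 2, order=1)[:raw.shape[0], :raw.shape[1]]
          for k, v in planes.items()}
    first = np.stack([up['R'], up['G'], up['B']], axis=-1)   # 第一分解图像信号（可见光）
    second = up['NIR']                                        # 第二分解图像信号（近红外）
    return first, second
```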
对于信号分解模块,另一种处理方式可以是对RGBIR图像信号进行插值运算,插值运算可以是基于边缘判断的插值。其中,进行插值时可以采用成像质量较优的通道作为引导,引导成像质量较差的通道进行边缘判决插值。成像质量可以由增益判断,例如,当NIR通道的曝光增益小于R、G、B通道的曝光增益时,采用NIR通道引导R、G、B通道进行边缘判决插值;当R、G、B通道的曝光增益小于NIR通道的曝光增益时,则采用R、G、B通道引导NIR通道进行边缘判决插值。插值后获得R、G、B、NIR的图像信号,将R、G、B图像信号组合成可见光图像信号作为第一分解图像信号输出,将近红外NIR图像信号作为第二分解图像信号输出。
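下面给出与该边缘判决插值方式对应的一个简化示意（假设性示例，非本申请限定的实现）：先按曝光增益选择成像质量较优的通道作为引导，再对每个待插值位置比较引导图在水平、垂直方向的梯度，沿较平坦的方向取有效相邻采样的平均值；其中的函数名与处理细节均为说明用的假设。

```python
import numpy as np

def choose_guide(gain_rgb, gain_nir):
    """按曝光增益选择成像质量较优（增益较小）的通道作为引导。"""
    return 'NIR' if gain_nir < gain_rgb else 'RGB'

def edge_directed_interp(target, mask, guide):
    """简化的边缘判决插值：target 仅在 mask==True 处有有效采样，guide 为引导通道。"""
    out = target.astype(float).copy()
    gp = np.pad(guide.astype(float), 1, mode='edge')
    tp = np.pad(target.astype(float), 1, mode='edge')
    mp = np.pad(mask, 1, mode='edge')
    H, W = target.shape
    for y in range(H):
        for x in range(W):
            if mask[y, x]:
                continue
            yy, xx = y + 1, x + 1                       # 填充后的坐标
            grad_h = abs(gp[yy, xx - 1] - gp[yy, xx + 1])
            grad_v = abs(gp[yy - 1, xx] - gp[yy + 1, xx])
            # 依次尝试较平坦的方向、另一方向上的有效相邻采样
            dirs = [((0, -1), (0, 1)), ((-1, 0), (1, 0))]
            if grad_v < grad_h:
                dirs.reverse()
            for (dy1, dx1), (dy2, dx2) in dirs:
                vals = [tp[yy + dy, xx + dx]
                        for dy, dx in ((dy1, dx1), (dy2, dx2))
                        if mp[yy + dy, xx + dx]]
                if vals:
                    out[y, x] = float(np.mean(vals))
                    break
    return out
```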
后处理模块用于对第一分解图像信号和第二分解图像信号进行联合处理，获得第一输出图像信号。后处理模块可以有多种实现方式。
后处理模块的第一种实现方式如图13所示，第一处理子模块可以对第一分解图像信号进行坏点校正、黑电平校正、数字增益、降噪中的一个或多个处理，得到第一子处理图像信号；第二处理子模块可以对第二分解图像信号进行坏点校正、黑电平校正、数字增益、降噪中的一个或多个处理，得到第二子处理图像信号。将第一子处理图像信号和第二子处理图像信号归一化到同一增益和曝光时间下，一种处理方式可以是根据RGB通道的增益g1、曝光时间t1及NIR通道的增益g2、曝光时间t2对第二子处理图像信号进行如下调整：
NIR′ = NIR × (g1 × t1) / (g2 × t2)
利用预先标定的系数矩阵A,对第一子处理图像信号(RGB图像信号)和调整后的第二子处理图像信号(NIR'图像信号)进行联合处理,获得恢复色彩的第一恢复图像信号,处理方式如下式所示:
[R′, G′, B′]^T = A · [R, G, B, NIR′]^T（其中A为预先标定的3×4系数矩阵）
此外,不排除使用其他方式达到对第一子处理图像信号和第二子处理图像信号归一化到同一曝光时间和增益的目的,如对第一子处理图像信号进行比例缩放,或对系数矩阵A进行比例缩放等。
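结合上面两式，下面给出一段示意性的Python/NumPy草图，展示“按曝光参数将NIR数据归一化后，用3×4系数矩阵去除RGB中近红外分量”的处理流程；其中矩阵A的具体数值、函数名与变量名均为说明用的假设，实际的系数矩阵应通过标定得到。

```python
import numpy as np

def remove_nir_component(rgb, nir, A, g1, t1, g2, t2):
    """示意性草图：按曝光参数归一化NIR后，用3x4系数矩阵A去除RGB中的近红外分量。

    rgb: (H, W, 3) 浮点数组，R/G/B色彩通道图像数据
    nir: (H, W)    浮点数组，近红外通道图像数据
    A:   (3, 4)    预先标定的系数矩阵（此处数值仅为假设）
    """
    # 1. 将NIR图像数据归一化到与RGB相同的增益和曝光时间下
    nir_norm = nir * (g1 * t1) / (g2 * t2)

    # 2. 组成 [R, G, B, NIR'] 向量，对每个像素计算 A @ [R, G, B, NIR']^T
    rgbn = np.concatenate([rgb, nir_norm[..., None]], axis=-1)
    rgb_restored = np.einsum('ij,hwj->hwi', A, rgbn)
    return np.clip(rgb_restored, 0.0, None)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    rgb = rng.uniform(0, 1, size=(4, 4, 3))
    nir = rng.uniform(0, 1, size=(4, 4))
    # 假设性的系数矩阵：对角为1，近红外列取负权重，表示从各色彩通道中减去部分NIR能量
    A = np.array([[1.0, 0.0, 0.0, -0.3],
                  [0.0, 1.0, 0.0, -0.3],
                  [0.0, 0.0, 1.0, -0.3]])
    out = remove_nir_component(rgb, nir, A, g1=2.0, t1=10.0, g2=1.0, t2=10.0)
    print(out.shape)  # (4, 4, 3)
```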
第三处理子模块对第一恢复图像信号进行进一步处理,包括但不限于数字增益、白平衡、色彩校正、曲线映射、降噪、增强等,最后获得彩色的第一输出图像信号。
后处理模块的第二种实现方式如图14所示,第一处理子模块和第二处理子模块可以采用与图13所示实施例中第一处理子模块和第二处理子模块相同的实现方式,这里不再赘述。色彩恢复子模块的一种处理方式可以将第二子处理图像信号作为第二恢复图像信号直接输出,也可以对第一子处理图像信号和第二子处理图像信号进行加权后,作为第二恢复图像信号输出。第四处理子模块对第二恢复图像信号进行进一步处理,包括但不限于数字增益、白平衡、色彩校正、曲线映射、降噪、增强等,最后获得黑白的第二输出图像信号。
后处理模块的第三种实现方式如图15所示,第一处理子模块和第二处理子模块可以采用与图13所示实施例中第一处理子模块和第二处理子模块相同的实现方式。色彩恢复子模块可以采用与图13所示实施例中色彩恢复子模块相同的实现方式输出第一恢复图像信号、采用与图14所示实施例中色彩恢复子模块相同的实现方式输出第二恢复图像信号。第三处理子模块可以采用与图13所示实施例中第三处理子模块相同的实现方式得到第三子处理图像信号,第四处理子模块可以采用与图14所示实施例中第四处理子模块相同的实现方式得到第四子处理图像信号。第五处理子模块对第三子处理图像信号和第四子处理图像信号进行处理,获得第一输出图像信号,第五处理子模块的处理方式包括但不限于降噪、融合、增强等。
第二种实施方式:
如图16所示，处理单元16包括信号分解模块和后处理模块。其中，信号分解模块对输入的图像信号进行逻辑分解，将图像信号分解为第一分解图像信号和第二分解图像信号；后处理模块对第一分解图像信号和第二分解图像信号进行处理，输出第一输出图像信号和第二输出图像信号。
信号分解模块可以采用与第一种实施方式中信号分解模块相同的实现方式,这里不再赘述。
后处理模块的实现方式如图17所示,第一处理子模块、第二处理子模块、色彩恢复子模块、第三处理子模块、第四处理子模块可以采用与图15所示实施例中的相应模块相同的实现方式。第五处理子模块对第三子处理图像信号和第四子处理图像信号进行处理,处理方式包括但不限于降噪、融合、增强等,获得彩色的第一输出图像信号和黑白的第二输出图像信号。
第三种实施方式:
如图18所示，处理单元16包括预处理模块、信号分解模块和后处理模块。其中，预处理模块对输入的图像信号进行预处理，输出预处理图像信号；信号分解模块对预处理图像信号进行逻辑分解，将预处理图像信号分解为第一分解图像信号和第二分解图像信号；后处理模块对第一分解图像信号和第二分解图像信号进行处理，输出第一输出图像信号和第二输出图像信号。
预处理模块对输入的图像信号进行预处理,获得预处理图像,其中,预处理包括但不限于黑电平校正、坏点校正、数字增益、降噪等。
信号分解模块可以采用与第一种实施方式中信号分解模块相同的实现方式,这里不再赘述。
后处理模块可以采用与第一种实施方式中后处理模块相同的实现方式,这里不再赘述。
第四种实施方式:
如图19所示，处理单元16包括预处理模块、信号分解模块和后处理模块。其中，预处理模块对输入的图像信号进行预处理，输出预处理图像信号；信号分解模块对预处理图像信号进行逻辑分解，将预处理图像信号分解为第一分解图像信号和第二分解图像信号；后处理模块对第一分解图像信号和第二分解图像信号进行处理，输出第一输出图像信号和第二输出图像信号。
预处理模块可以采用与第三种实施方式中预处理模块相同的实现方式,这里不再赘述。
信号分解模块可以采用与第一种实施方式中信号分解模块相同的实现方式,这里不再赘述。
后处理模块可以采用与第二种实施方式中后处理模块相同的实现方式,这里不再赘述。
本申请实施例提供了一种图像处理方法,应用于成像系统;如图20所示,该方法包括:
S201,从图像传感器获取图像信号,提取图像信号中各类通道的图像数据。
S202,对各类通道的图像数据分别进行统计,得到各类通道的统计数据。
S203,针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
在本申请实施例的一种实现方式中,各类通道包括响应可见光波段范围内的光分量的 第一类通道,以及响应近红外光波段范围内的光分量的第二类通道,其中,第一类通道包括多个色彩通道,第二类通道包括近红外通道;
S202具体可以为:根据多个色彩通道中至少一个色彩通道的图像数据,计算第一类通道的图像数据统计值作为第一类通道的统计数据;根据近红外通道的图像数据,计算第二类通道的图像数据统计值作为第二类通道的统计数据。
在本申请实施例的一种实现方式中,S201中提取图像信号中各类通道的图像数据的步骤,具体可以通过如下步骤实现:从图像信号中,提取各色彩通道的图像数据及近红外通道的图像数据;
根据多个色彩通道中至少一个色彩通道的图像数据,计算第一类通道的图像数据统计值作为第一类通道的统计数据的步骤,具体可以通过如下步骤实现:从图像信号中,提取各色彩通道的图像数据;根据各色彩通道的图像数据,计算各色彩通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据;
根据近红外通道的图像数据,计算第二类通道的图像数据统计值作为第二类通道的统计数据的步骤,具体可以通过如下步骤实现:根据近红外通道的图像数据,计算近红外通道的图像数据均值;将近红外通道的图像数据均值作为第二类通道的统计数据。
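为便于理解上述统计方式，下面给出一段示意性草图（假设性示例，非本申请限定的实现）：对各色彩通道分别求均值并加权求和作为第一类通道的统计数据，近红外通道的均值作为第二类通道的统计数据；函数名与加权系数的取值仅为示例。

```python
import numpy as np

def channel_statistics(color_planes, nir_plane, weights=None):
    """示意性草图：计算第一类通道与第二类通道的统计数据。

    color_planes: 各色彩通道的图像数据列表（例如 [R, G, B]）
    nir_plane:    近红外通道的图像数据
    weights:      各色彩通道均值的加权系数（默认等权，取值为假设）
    """
    means = np.array([np.mean(p) for p in color_planes])    # 各色彩通道的图像数据均值
    if weights is None:
        weights = np.full(len(color_planes), 1.0 / len(color_planes))
    stat_first = float(np.dot(weights, means))               # 加权求和 -> 第一类通道统计数据
    stat_second = float(np.mean(nir_plane))                   # 近红外均值 -> 第二类通道统计数据
    return stat_first, stat_second
```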
在本申请实施例的一种实现方式中,S201中提取图像信号中各类通道的图像数据的步骤,具体可以通过如下步骤实现:将图像信号进行分块,得到多个图像信号块;针对任一图像信号块,从该图像信号块中提取各色彩通道的图像数据及近红外通道的图像数据;
根据多个色彩通道中至少一个色彩通道的图像数据,计算第一类通道的图像数据统计值作为第一类通道的统计数据的步骤,具体可以通过如下步骤实现:根据各图像信号块中各色彩通道的图像数据,计算各色彩通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据;
根据近红外通道的图像数据,计算第二类通道的图像数据统计值作为第二类通道的统计数据的步骤,具体可以通过如下步骤实现:根据各图像信号块中近红外通道的图像数据,计算近红外通道的图像数据均值;将近红外通道的图像数据均值作为第二类通道的统计数据。
在本申请实施例的一种实现方式中,S201中提取图像信号中各类通道的图像数据的步骤,具体可以通过如下步骤实现:从图像信号中,提取各色彩通道的图像数据及近红外通道的图像数据;
根据多个色彩通道中至少一个色彩通道的图像数据,计算第一类通道的图像数据统计值作为第一类通道的统计数据的步骤,具体可以通过如下步骤实现:根据各色彩通道的图像数据,得到各色彩通道的直方图;对各色彩通道的直方图中的灰阶数进行加权平均计算, 得到各色彩通道的图像数据均值;对各色彩通道的图像数据均值进行加权求和;将加权求和的结果作为第一类通道的统计数据;
根据近红外通道的图像数据,计算第二类通道的图像数据统计值作为第二类通道的统计数据的步骤,具体可以通过如下步骤实现:统计单元根据近红外通道的图像数据,得到近红外通道的直方图;对近红外通道的直方图中的灰阶数进行加权平均计算,得到近红外通道的图像数据均值;将近红外通道的图像数据均值作为第二类通道的统计数据。
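与上述基于直方图的统计方式对应，下面给出一段示意性草图（假设性示例）：先统计通道直方图，再以各灰阶的像素占比为权重对灰阶做加权平均，得到该通道的图像数据均值；bins数量与灰阶范围均为假设性取值。

```python
import numpy as np

def mean_from_histogram(plane, bins=256, max_val=255):
    """示意性草图：由直方图的灰阶加权平均得到通道的图像数据均值。"""
    hist, edges = np.histogram(plane, bins=bins, range=(0, max_val + 1))
    gray_levels = edges[:-1]                      # 各灰阶（取每个bin的下边界）
    weights = hist / max(hist.sum(), 1)           # 各灰阶的像素占比
    return float(np.sum(gray_levels * weights))   # 灰阶加权平均 = 图像数据均值
```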
在本申请实施例的一种实现方式中,各类通道包括响应近红外光波段范围内的光分量的第二类通道;
S203具体可以为:若判断出第二类通道的统计数据大于第一预设阈值,则控制补光单元降低发射近红外光的强度;若判断出第二类通道的统计数据小于第二预设阈值,则控制补光单元提高发射近红外光的强度,其中,第一预设阈值大于第二预设阈值。
在本申请实施例的一种实现方式中,各类通道包括响应可见光波段范围内的光分量的第一类通道、响应近红外光波段范围内的光分量的第二类通道;
S203具体可以为:获取第一类通道的第一曝光时间、第一类通道对应的第一目标数据、第二类通道的第二曝光时间及第二类通道对应的第二目标数据;根据第一类通道的统计数据及第一目标数据,计算第一类通道的第一数据偏移量,若第一数据偏移量不在第一预设范围内,则根据第一类通道的统计数据及第一目标数据,计算第一曝光增益;根据第二类通道的统计数据及第二目标数据,计算第二类通道的第二数据偏移量,若第二数据偏移量不在第二预设范围内,则根据第二类通道的统计数据及第二目标数据,计算第二曝光增益;若第一曝光时间与第二曝光时间相等,则在第二曝光增益小于第一预设增益阈值的情况下,控制补光单元降低发射近红外光的强度,在第二曝光增益大于第二预设增益阈值的情况下,控制补光单元提高发射近红外光的强度,其中,第一预设增益阈值小于第二预设增益阈值;若第一曝光时间与第二曝光时间不相等,则在第二曝光增益小于第一预设增益阈值的情况下,减小第二曝光时间,在第二曝光增益大于第二预设增益阈值的情况下,增大第二曝光时间。
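为便于理解上述曝光控制逻辑，下面给出一段示意性草图（假设性示例）：其中数据偏移量与曝光增益的具体计算公式（此处用目标数据与统计数据的比值近似）、各预设范围与增益阈值，以及曝光时间的调整步长均为说明用的假设值，并非本申请限定的实现。

```python
def exposure_control_step(stat1, target1, stat2, target2, t1, t2,
                          range1=(-0.05, 0.05), range2=(-0.05, 0.05),
                          gain_thr_low=1.0, gain_thr_high=8.0):
    """示意性草图：按第一类/第二类通道的统计数据与目标数据，给出曝光/补光控制动作。"""
    actions = {}

    # 第一类通道：第一数据偏移量与第一曝光增益
    offset1 = stat1 - target1
    if not (range1[0] <= offset1 <= range1[1]):
        actions['gain1'] = target1 / max(stat1, 1e-6)

    # 第二类通道：第二数据偏移量与第二曝光增益
    offset2 = stat2 - target2
    gain2 = None
    if not (range2[0] <= offset2 <= range2[1]):
        gain2 = target2 / max(stat2, 1e-6)
        actions['gain2'] = gain2

    if gain2 is not None:
        if t1 == t2:
            # 曝光时间相同：通过补光单元调节近红外补光强度
            if gain2 < gain_thr_low:
                actions['ir_light'] = 'decrease'
            elif gain2 > gain_thr_high:
                actions['ir_light'] = 'increase'
        else:
            # 曝光时间不同：直接调节第二类通道的曝光时间（步长为假设值）
            if gain2 < gain_thr_low:
                actions['t2'] = t2 * 0.5
            elif gain2 > gain_thr_high:
                actions['t2'] = t2 * 2.0
    return actions
```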
在本申请实施例的一种实现方式中,该方法还可以包括如下步骤:
获取图像传感器输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息,根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量。
在本申请实施例的一种实现方式中,各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
获取图像传感器输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息,根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通 道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤,具体可以通过如下步骤实现:
获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的色彩和第二类通道的亮度之间的关联信息;根据第一类通道和第二类通道对应的当前曝光参数,将图像信号中第一类通道的图像数据和第二类通道的图像数据归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一类通道的图像数据和第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
可选的,各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
获取图像传感器输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息,根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤,具体可以通过如下步骤实现:
获取图像传感器输出的图像信号、第一类通道和第二类通道对应的当前曝光参数,以及第一类通道的色彩和第二类通道的亮度之间的关联信息;根据第一类通道和第二类通道对应的当前曝光参数,以及关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重、第一类通道的图像数据及第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
在本申请实施例的一种实现方式中,在获取图像传感器输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息,根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤之后,该方法还可以包括如下步骤:
将已去除其他类通道的光分量的各类通道的图像数据进行融合,得到融合后的图像信号。
在本申请实施例的一种实现方式中,各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤,具体可以通过如下步骤实现:
根据第一类通道和第二类通道对应的当前曝光参数,以及关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重、第一类通道的图像数据及第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
在本申请实施例的一种实现方式中,根据第一类通道和第二类通道对应的当前曝光参 数,以及关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重、第一类通道的图像数据及第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量的步骤,具体可以通过如下步骤实现:
根据第一类通道和第二类通道对应的当前曝光参数,将图像信号中第一类通道的图像数据和第二类通道的图像数据归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一类通道的图像数据和第二类通道的图像数据,去除包含在第一类通道中的第二类通道的光分量。
在本申请实施例的一种实现方式中,各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
在获取图像传感器输出的图像信号、各类通道对应的当前曝光参数以及各类通道之间的关联信息的步骤之后,该方法还可以包括如下步骤:
对图像信号的可见光信号和近红外光信号进行分解，得到分解后的第一分解图像信号和第二分解图像信号，其中，第一分解图像信号为可见光图像信号，第二分解图像信号为近红外光图像信号；
根据各类通道对应的当前曝光参数及各类通道之间的关联信息,确定每两类通道之间的相关性,根据每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤,具体可以通过如下步骤实现:
根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号,其中,第一输出图像信号为去除近红外光分量的第一分解图像信号。
在本申请的一种实现方式中,对图像信号的可见光信号和近红外光信号进行分解,得到分解后的第一分解图像信号和第二分解图像信号的步骤,具体可以通过如下步骤实现:
获取图像信号;分别对图像信号中可见光信号的各色彩分量和近红外光信号进行上采样,得到各色彩分量的图像信号以及近红外光的图像信号;将各色彩分量的图像信号进行组合,得到第一分解图像信号,并将近红外光的图像信号作为第二分解图像信号;
或者,
获取图像信号、第一类通道对应的第一当前曝光增益以及第二类通道对应的第二当前曝光增益;若第二当前曝光增益小于第一当前曝光增益,则根据图像信号中第二类通道的图像数据,对第一类通道的图像数据进行边缘判决插值,若第二当前曝光增益大于第一当前曝光增益,则根据图像信号中第一类通道的图像数据,对第二类通道的图像数据进行边缘判决插值;获得插值后的可见光信号的各色彩分量的图像信号以及近红外光的图像信号;将各色彩分量的图像信号进行组合,得到第一分解图像信号,并将近红外光的图像信号作 为第二分解图像信号。
在本申请实施例的一种实现方式中,在根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性的步骤之前,该方法还可以包括如下步骤:
对第一分解图像信号进行预处理,得到第一子处理图像信号;对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,具体可以通过如下步骤实现:
根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号;对第一恢复图像信号进行处理,得到彩色的第一输出图像信号。
在本申请实施例的一种实现方式中,在根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性的步骤之前,该方法还可以包括如下步骤:
对第一分解图像信号进行预处理,得到第一子处理图像信号;对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,具体可以通过如下步骤实现:
根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;将归一化后的第二子处理图像信号作为恢复黑白色彩的第二恢复图像信号,或者,根据第一类通道的权重和第二类通道的权重,对归一化后的第一子处理图像信号和第二子处理图像信号进行加权,得到恢复黑白色彩的第二恢复图像信号;对第二恢复图像信号进行处理,得到黑白的第二输出图像信号。
在本申请实施例的一种实现方式中,在根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性的步骤之前,该方法还可以包括如下步骤:
对第一分解图像信号进行预处理,得到第一子处理图像信号;对第二分解图像信号进 行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,具体可以通过如下步骤实现:
根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;对第一恢复图像信号进行处理,得到第三子处理图像信号;对第二恢复图像信号进行处理,得到第四子处理图像信号;对第三子处理图像信号和第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号。
在本申请实施例的一种实现方式中,在根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性的步骤之前,该方法还可以包括如下步骤:
对第一分解图像信号进行预处理,得到第一子处理图像信号;对第二分解图像信号进行预处理,得到第二子处理图像信号,其中,预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
根据第一类通道对应的第一当前曝光参数、第二类通道对应第二当前曝光参数及关联信息,确定第一类通道和第二类通道之间的相关性;根据相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,具体可以通过如下步骤实现:
根据第一当前曝光参数及第二当前曝光参数,将第一子处理图像信号和第二子处理图像信号归一化至同一曝光参数下;基于关联信息,确定第一类通道的权重和第二类通道的权重;根据第一类通道的权重和第二类通道的权重,以及归一化后的第一子处理图像信号和第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;对第一恢复图像信号进行处理,得到第三子处理图像信号;对第二恢复图像信号进行处理,得到第四子处理图像信号;对第三子处理图像信号和第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号,对第四子处理图像信号至少进行降噪、增强处理,得到黑白的第二输出图像信号。
在本申请实施例的一种实现方式中,在对图像信号的可见光信号和近红外光信号进行分解,输出分解后的第一分解图像信号和第二分解图像信号的步骤之前,该方法还可以包括如下步骤:
获取图像传感器输出的图像信号,对图像信号进行预处理。
应用本申请实施例,对每一类通道的图像数据分别进行统计,根据一类通道的统计数 据计算出这类通道对应的曝光参数,并基于计算出的曝光参数,控制对该类通道的图像数据进行亮度调整,针对不同类通道所响应的光分量的能量不同的实际情况,对一类通道的图像数据进行独立曝光,使得一类通道的图像数据的亮度控制在合适的亮度范围内,从而提高了最终的成像效果。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本申请保护的范围之内。

Claims (39)

  1. 一种成像系统,其特征在于,所述系统包括:图像传感器、统计单元和曝光控制单元;所述图像传感器包括多类通道;
    所述图像传感器,用于将光信号转换为图像信号,所述光信号包括多种波段范围内的光分量;
    所述统计单元,用于获取所述图像信号;提取所述图像信号中各类通道的图像数据;对所述各类通道的图像数据分别进行统计,得到所述各类通道的统计数据;将所述各类通道的统计数据发送至所述曝光控制单元;
    所述曝光控制单元,用于接收所述统计单元发送的所述各类通道的统计数据;针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
  2. 根据权利要求1所述的系统,其特征在于,所述图像传感器包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道。
  3. 根据权利要求2所述的系统,其特征在于,所述第一类通道包括多个色彩通道,所述第二类通道包括近红外通道;
    所述统计单元,具体用于:
    根据所述多个色彩通道中至少一个色彩通道的图像数据,计算所述第一类通道的图像数据统计值作为所述第一类通道的统计数据;
    根据所述近红外通道的图像数据,计算所述第二类通道的图像数据统计值作为所述第二类通道的统计数据。
  4. 根据权利要求3所述的系统,其特征在于,所述统计单元,具体用于:
    从所述图像信号中,提取各色彩通道的图像数据及所述近红外通道的图像数据;分别根据所述各色彩通道的图像数据和所述近红外通道的图像数据,计算所述各色彩通道的图像数据均值和所述近红外通道的图像数据均值;对所述各色彩通道的图像数据均值进行加权求和;将所述加权求和的结果作为所述第一类通道的统计数据,将所述近红外通道的图像数据均值作为所述第二类通道的统计数据;
    或者,
    将所述图像信号进行分块,得到多个图像信号块;针对任一图像信号块,从该图像信号块中提取各色彩通道的图像数据及所述近红外通道的图像数据;分别根据各图像信号块中所述各色彩通道的图像数据和所述近红外通道的图像数据,计算所述各色彩通道的图像数据均值和所述近红外通道的图像数据均值;对所述各色彩通道的图像数据均值进行加权求和;将所述加权求和的结果作为所述第一类通道的统计数据,将所述近红外通道的图像数据均值作为所述第二类通道的统计数据;
    或者,
    从所述图像信号中,提取各色彩通道的图像数据及所述近红外通道的图像数据;分别根据所述各色彩通道的图像数据和所述近红外通道的图像数据,得到所述各色彩通道的直方图和所述近红外通道的直方图;分别对所述各色彩通道的直方图和所述近红外通道的直方图中的灰阶数进行加权平均计算,得到所述各色彩通道的图像数据均值和所述近红外通道的图像数据均值;对所述各色彩通道的图像数据均值进行加权求和;将所述加权求和的结果作为所述第一类通道的统计数据,将所述近红外通道的图像数据均值作为所述第二类通道的统计数据。
  5. 根据权利要求1所述的系统,其特征在于,所述系统还包括:滤光单元;
    所述滤光单元,用于过滤掉输入的光信号中除指定波段范围中的光分量以外的其他光分量,并透射过滤后的所述光信号至所述图像传感器。
  6. 根据权利要求5所述的系统,其特征在于,所述滤光单元包括切换装置;
    所述切换装置,用于切换所述滤光单元的过滤状态;
    所述滤光单元,用于在所述过滤状态为开启的情况下,过滤掉输入的光信号中除指定波段范围内的光分量以外的其他光分量,并透射过滤后的所述光信号至所述图像传感器;在所述过滤状态为关闭的情况下,透射所述光信号中的所有光分量至所述图像传感器。
  7. 根据权利要求1或5所述的系统,其特征在于,所述系统还包括:补光单元;
    所述补光单元,用于对场景进行近红外补光,以使输入的光信号包括近红外光。
  8. 根据权利要求7所述的系统,其特征在于,所述图像传感器,包括响应近红外光波段范围内的光分量的第二类通道;
    所述曝光控制单元,还用于根据所述第二类通道的统计数据,控制所述补光单元调整补光强度。
  9. 根据权利要求8所述的系统,其特征在于,所述曝光控制单元,具体用于:
    若所述第二类通道的统计数据大于第一预设阈值,则控制所述补光单元降低发射所述近红外光的强度;
    若所述第二类通道的统计数据小于第二预设阈值,则控制所述补光单元提高发射所述近红外光的强度,所述第一预设阈值大于所述第二预设阈值。
  10. 根据权利要求8所述的系统,其特征在于,所述图像传感器还包括:响应可见光波段范围内的光分量的第一类通道;所述曝光控制单元,具体用于:
    获取所述第一类通道的第一曝光时间、所述第一类通道对应的第一目标数据、所述第二类通道的第二曝光时间及所述第二类通道对应的第二目标数据;
    根据所述第一类通道的统计数据及所述第一目标数据,计算所述第一类通道的第一数据偏移量,若所述第一数据偏移量不在第一预设范围内,则根据所述第一类通道的统计数据及所述第一目标数据,计算第一曝光增益;
    根据所述第二类通道的统计数据及所述第二目标数据,计算所述第二类通道的第二数 据偏移量,若所述第二数据偏移量不在第二预设范围内,则根据所述第二类通道的统计数据及所述第二目标数据,计算第二曝光增益;
    若所述第一曝光时间与所述第二曝光时间相等,则在所述第二曝光增益小于第一预设增益阈值的情况下,控制所述补光单元降低发射所述近红外光的强度,在所述第二曝光增益大于第二预设增益阈值的情况下,控制所述补光单元提高发射所述近红外光的强度,所述第一预设增益阈值小于所述第二预设增益阈值;
    若所述第一曝光时间与所述第二曝光时间不相等,则在所述第二曝光增益小于所述第一预设增益阈值的情况下,减小所述第二曝光时间,在所述第二曝光增益大于所述第二预设增益阈值的情况下,增大所述第二曝光时间。
  11. 根据权利要求1所述的系统,其特征在于,所述系统还包括:处理单元;
    所述处理单元,用于获取所述图像传感器输出的图像信号、所述各类通道对应的当前曝光参数,以及所述各类通道之间的关联信息;根据所述各类通道对应的当前曝光参数及所述各类通道之间的关联信息,确定每两类通道之间的相关性;根据所述每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量。
  12. 根据权利要求11所述的系统,其特征在于,所述图像传感器包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
    所述处理单元,具体用于:
    获取所述图像传感器输出的图像信号、所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述第一类通道的色彩和所述第二类通道的亮度之间的关联信息;或者,获取所述图像传感器输出的图像信号、所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述第一类通道的亮度和所述第二类通道的亮度之间的关联信息;
    根据所述第一类通道和所述第二类通道对应的当前曝光参数,将所述图像信号中所述第一类通道的图像数据和所述第二类通道的图像数据归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一类通道的图像数据和所述第二类通道的图像数据,去除包含在所述第一类通道中的所述第二类通道的光分量。
  13. 根据权利要求11所述的系统,其特征在于,所述图像传感器包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
    所述处理单元,具体用于:
    获取所述图像传感器输出的图像信号、所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述第一类通道的色彩和所述第二类通道的亮度之间的关联信息;或者,获取所述图像传感器输出的图像信号、所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述第一类通道的亮度和所述第二类通道的亮度之间的关联信息;
    根据所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重、所述第一类通道的图像数据及所述第二类通道的图像数据,去除包含在所述第一类通道中的所述第二类通道的光分量。
  14. 根据权利要求11所述的系统,其特征在于,所述处理单元包括信号分解模块及后处理模块;所述图像传感器包括:响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
    所述信号分解模块,用于获取图像信号,对所述图像信号的可见光信号和近红外光信号进行分解,输出分解后的第一分解图像信号和第二分解图像信号,所述第一分解图像信号为可见光图像信号,所述第二分解图像为近红外光图像信号;
    所述后处理模块,用于获取所述第一分解图像信号、所述第二分解图像信号、所述第一类通道对应的第一当前曝光参数、所述第二类通道对应的第二当前曝光参数,以及所述第一类通道和所述第二类通道之间的关联信息;根据所述第一当前曝光参数、所述第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;根据所述相关性,确定第一输出图像信号和/或第二输出图像信号,所述第一输出图像信号为去除近红外光分量的所述第一分解图像信号。
  15. 根据权利要求14所述的系统,其特征在于,所述信号分解模块,具体用于:
    获取图像信号;分别对所述图像信号中可见光信号的各色彩分量和近红外光信号进行上采样,得到所述各色彩分量的图像信号以及近红外光的图像信号;将所述各色彩分量的图像信号进行组合,得到第一分解图像信号进行输出,并将所述近红外光的图像信号作为第二分解图像信号进行输出;
    或者,
    获取图像信号、所述第一类通道对应的第一当前曝光增益以及所述第二类通道对应的第二当前曝光增益;若所述第二当前曝光增益小于所述第一当前曝光增益,则根据所述图像信号中所述第二类通道的图像数据,对所述第一类通道的图像数据进行边缘判决插值,若所述第二当前曝光增益大于所述第一当前曝光增益,则根据所述图像信号中所述第一类通道的图像数据,对所述第二类通道的图像数据进行边缘判决插值;获得插值后的可见光信号的各色彩分量的图像信号以及近红外光的图像信号;将所述各色彩分量的图像信号进行组合,得到第一分解图像信号进行输出,并将所述近红外光的图像信号作为第二分解图像信号进行输出。
  16. 根据权利要求14所述的系统,其特征在于,所述后处理模块,包括:第一处理子模块、第二处理子模块、色彩恢复子模块及第三处理子模块;
    所述第一处理子模块,用于获取所述第一分解图像信号,对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    所述第二处理子模块,用于获取所述第二分解图像信号,对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述色彩恢复子模块,用于获取所述第一类通道对应的第一当前曝光参数、所述第二类通道对应的第二当前曝光参数,以及所述第一类通道和所述第二类通道之间的关联信息;根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号;
    所述第三处理子模块,用于对所述第一恢复图像信号进行处理,得到彩色的第一输出图像信号。
  17. 根据权利要求14所述的系统,其特征在于,所述后处理模块,包括:第一处理子模块、第二处理子模块、色彩恢复子模块及第四处理子模块;
    所述第一处理子模块,用于获取所述第一分解图像信号,对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    所述第二处理子模块,用于获取所述第二分解图像信号,对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述色彩恢复子模块,用于获取所述第一类通道对应的第一当前曝光参数、所述第二类通道对应的第二当前曝光参数,以及所述第一类通道和所述第二类通道之间的关联信息;根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;将归一化后的所述第二子处理图像信号作为恢复黑白色彩的第二恢复图像信号,或者,根据所述第一类通道的权重和所述第二类通道的权重,对归一化后的所述第一子处理图像信号和所述第二子处理图像信号进行加权,得到恢复黑白色彩的第二恢复图像信号;
    所述第四处理子模块,用于对所述第二恢复图像信号进行处理,得到黑白的第二输出图像信号。
  18. 根据权利要求14所述的系统,其特征在于,所述后处理模块,包括:第一处理子模块、第二处理子模块、色彩恢复子模块、第三处理子模块、第四处理子模块及第五处理子模块;
    所述第一处理子模块,用于获取所述第一分解图像信号,对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    所述第二处理子模块,用于获取所述第二分解图像信号,对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述色彩恢复子模块,用于获取所述第一类通道对应的第一当前曝光参数、所述第二类通道对应的第二当前曝光参数,以及所述第一类通道和所述第二类通道之间的关联信息;根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;
    所述第三处理子模块,用于对所述第一恢复图像信号进行处理,得到第三子处理图像信号;
    所述第四处理子模块,用于对所述第二恢复图像信号进行处理,得到第四子处理图像信号;
    所述第五处理子模块,用于对所述第三子处理图像信号和所述第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号。
  19. 根据权利要求14所述的系统,其特征在于,所述后处理模块,包括:第一处理子模块、第二处理子模块、色彩恢复子模块、第三处理子模块、第四处理子模块及第五处理子模块;
    所述第一处理子模块,用于获取所述第一分解图像信号,对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    所述第二处理子模块,用于获取所述第二分解图像信号,对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述色彩恢复子模块,用于获取所述第一类通道对应的第一当前曝光参数、所述第二类通道对应的第二当前曝光参数,以及所述第一类通道和所述第二类通道之间的关联信息;根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;
    所述第三处理子模块,用于对所述第一恢复图像信号进行处理,得到第三子处理图像信号;
    所述第四处理子模块,用于对所述第二恢复图像信号进行处理,得到第四子处理图像 信号;
    所述第五处理子模块,用于对所述第三子处理图像信号和所述第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号,对所述第四子处理图像信号至少进行降噪、增强处理,得到黑白的第二输出图像信号。
  20. 根据权利要求14所述的系统,其特征在于,所述处理单元还包括预处理模块;
    所述预处理模块,用于获取所述图像传感器输出的图像信号,对所述图像信号进行预处理,将预处理后的所述图像信号发送至所述信号分解模块。
  21. 根据权利要求11所述的系统,其特征在于,所述处理单元,还用于将已去除其他类通道的光分量的各类通道的图像数据进行融合,得到融合后的图像信号。
  22. 一种图像处理方法,其特征在于,应用于成像系统;所述方法包括:
    从图像传感器获取图像信号,提取所述图像信号中各类通道的图像数据;
    对所述各类通道的图像数据分别进行统计,得到所述各类通道的统计数据;
    针对任一类通道,根据该类通道的统计数据,计算该类通道对应的曝光参数,并基于该曝光参数,控制对该类通道的图像数据进行亮度调整。
  23. 根据权利要求22所述的方法,其特征在于,所述各类通道包括响应可见光波段范围内的光分量的第一类通道、响应近红外光波段范围内的光分量的第二类通道;所述第一类通道包括多个色彩通道,所述第二类通道包括近红外通道;
    所述对所述各类通道的图像数据分别进行统计,得到所述各类通道的统计数据的步骤,包括:
    根据所述多个色彩通道中至少一个色彩通道的图像数据,计算所述第一类通道的图像数据统计值作为所述第一类通道的统计数据;
    根据所述近红外通道的图像数据,计算所述第二类通道的图像数据统计值作为所述第二类通道的统计数据。
  24. 根据权利要求23所述的方法,其特征在于,所述提取所述图像信号中各类通道的图像数据的步骤,包括:
    从所述图像信号中,提取各色彩通道的图像数据及所述近红外通道的图像数据;
    所述根据所述多个色彩通道中至少一个色彩通道的图像数据,计算所述第一类通道的图像数据统计值作为所述第一类通道的统计数据的步骤,包括:
    从所述图像信号中,提取各色彩通道的图像数据;
    根据所述各色彩通道的图像数据,计算所述各色彩通道的图像数据均值;
    对所述各色彩通道的图像数据均值进行加权求和;将所述加权求和的结果作为所述第一类通道的统计数据;
    所述根据所述近红外通道的图像数据,计算所述第二类通道的图像数据统计值作为所述第二类通道的统计数据的步骤,包括:
    根据所述近红外通道的图像数据,计算所述近红外通道的图像数据均值;
    将所述近红外通道的图像数据均值作为所述第二类通道的统计数据。
  25. 根据权利要求23所述的方法,其特征在于,所述提取所述图像信号中各类通道的图像数据的步骤,包括:
    将所述图像信号进行分块,得到多个图像信号块;
    针对任一图像信号块,从该图像信号块中提取各色彩通道的图像数据及所述近红外通道的图像数据;
    所述根据所述多个色彩通道中至少一个色彩通道的图像数据,计算所述第一类通道的图像数据统计值作为所述第一类通道的统计数据的步骤,包括:
    根据各图像信号块中所述各色彩通道的图像数据,计算所述各色彩通道的图像数据均值;
    对所述各色彩通道的图像数据均值进行加权求和;
    将所述加权求和的结果作为所述第一类通道的统计数据;
    所述根据所述近红外通道的图像数据,计算所述第二类通道的图像数据统计值作为所述第二类通道的统计数据的步骤,包括:
    根据各图像信号块中所述近红外通道的图像数据,计算所述近红外通道的图像数据均值;
    将所述近红外通道的图像数据均值作为所述第二类通道的统计数据。
  26. 根据权利要求23所述的方法,其特征在于,所述提取所述图像信号中各类通道的图像数据的步骤,包括:
    从所述图像信号中,提取各色彩通道的图像数据及所述近红外通道的图像数据;
    所述根据所述多个色彩通道中至少一个色彩通道的图像数据,计算所述第一类通道的图像数据统计值作为所述第一类通道的统计数据的步骤,包括:
    根据所述各色彩通道的图像数据,得到所述各色彩通道的直方图;
    对所述各色彩通道的直方图中的灰阶数进行加权平均计算,得到所述各色彩通道的图像数据均值;对所述各色彩通道的图像数据均值进行加权求和;
    将所述加权求和的结果作为所述第一类通道的统计数据;
    所述根据所述近红外通道的图像数据,计算所述第二类通道的图像数据统计值作为所述第二类通道的统计数据的步骤,包括:
    根据所述近红外通道的图像数据,得到所述近红外通道的直方图;
    对所述近红外通道的直方图中的灰阶数进行加权平均计算,得到所述近红外通道的图像数据均值;
    将所述近红外通道的图像数据均值作为所述第二类通道的统计数据。
  27. 根据权利要求22所述的方法,其特征在于,所述各类通道包括响应近红外光波 段范围内的光分量的第二类通道;
    所述基于该曝光参数,控制对该类通道的图像数据进行亮度调整的步骤,包括:
    若判断出所述第二类通道的统计数据大于第一预设阈值,则控制所述补光单元降低发射所述近红外光的强度;
    若判断出所述第二类通道的统计数据小于第二预设阈值,则控制所述补光单元提高发射所述近红外光的强度,所述第一预设阈值大于所述第二预设阈值。
  28. 根据权利要求22所述的方法,其特征在于,所述各类通道包括响应可见光波段范围内的光分量的第一类通道、响应近红外光波段范围内的光分量的第二类通道;
    所述基于该曝光参数,控制对该类通道的图像数据进行亮度调整的步骤,包括:
    获取所述第一类通道的第一曝光时间、所述第一类通道对应的第一目标数据、所述第二类通道的第二曝光时间及所述第二类通道对应的第二目标数据;
    根据所述第一类通道的统计数据及所述第一目标数据,计算所述第一类通道的第一数据偏移量,若所述第一数据偏移量不在第一预设范围内,则根据所述第一类通道的统计数据及所述第一目标数据,计算第一曝光增益;
    根据所述第二类通道的统计数据及所述第二目标数据,计算所述第二类通道的第二数据偏移量,若所述第二数据偏移量不在第二预设范围内,则根据所述第二类通道的统计数据及所述第二目标数据,计算第二曝光增益;
    若所述第一曝光时间与所述第二曝光时间相等,则在所述第二曝光增益小于第一预设增益阈值的情况下,控制所述补光单元降低发射所述近红外光的强度,在所述第二曝光增益大于第二预设增益阈值的情况下,控制所述补光单元提高发射所述近红外光的强度,所述第一预设增益阈值小于所述第二预设增益阈值;
    若所述第一曝光时间与所述第二曝光时间不相等,则在所述第二曝光增益小于所述第一预设增益阈值的情况下,减小所述第二曝光时间,在所述第二曝光增益大于所述第二预设增益阈值的情况下,增大所述第二曝光时间。
  29. 根据权利要求22所述的方法,其特征在于,所述方法还包括:
    获取所述图像传感器输出的图像信号、所述各类通道对应的当前曝光参数以及所述各类通道之间的关联信息;
    根据所述各类通道对应的当前曝光参数及所述各类通道之间的关联信息,确定每两类通道之间的相关性,根据所述每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量。
  30. 根据权利要求29所述的方法,其特征在于,所述各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
    所述根据所述各类通道对应的当前曝光参数及所述各类通道之间的关联信息,确定每两类通道之间的相关性,根据所述每两类通道之间的相关性,去除包含在一类通道中的另 一类通道的光分量的步骤,包括:
    根据所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;根据所述第一类通道的权重和所述第二类通道的权重、所述第一类通道的图像数据及所述第二类通道的图像数据,去除包含在所述第一类通道中的所述第二类通道的光分量。
  31. 根据权利要求30所述的方法,其特征在于,所述根据所述第一类通道和所述第二类通道对应的当前曝光参数,以及所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;根据所述第一类通道的权重和所述第二类通道的权重、所述第一类通道的图像数据及所述第二类通道的图像数据,去除包含在所述第一类通道中的所述第二类通道的光分量的步骤,包括:
    根据所述第一类通道和所述第二类通道对应的当前曝光参数,将所述图像信号中所述第一类通道的图像数据和所述第二类通道的图像数据归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一类通道的图像数据和所述第二类通道的图像数据,去除包含在所述第一类通道中的所述第二类通道的光分量。
  32. 根据权利要求29所述的方法,其特征在于,所述各类通道包括响应可见光波段范围内的光分量的第一类通道,以及响应近红外光波段范围内的光分量的第二类通道;
    在所述获取所述图像传感器输出的图像信号、所述各类通道对应的当前曝光参数以及所述各类通道之间的关联信息的步骤之后,所述方法还包括:
    对所述图像信号的可见光信号和近红外光信号进行分解,得到分解后的第一分解图像信号和第二分解图像信号,所述第一分解图像信号为可见光图像信号,所述第二分解图像为近红外光图像信号;
    所述根据所述各类通道对应的当前曝光参数及所述各类通道之间的关联信息,确定每两类通道之间的相关性,根据所述每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤,包括:
    根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;
    根据所述相关性,确定第一输出图像信号和/或第二输出图像信号,所述第一输出图像信号为去除近红外光分量的所述第一分解图像信号。
  33. 根据权利要求32所述的方法,其特征在于,所述对所述图像信号的可见光信号和近红外光信号进行分解,得到分解后的第一分解图像信号和第二分解图像信号的步骤,包括:
    获取图像信号;分别对所述图像信号中可见光信号的各色彩分量和近红外光信号进行 上采样,得到所述各色彩分量的图像信号以及近红外光的图像信号;将所述各色彩分量的图像信号进行组合,得到第一分解图像信号,并将所述近红外光的图像信号作为第二分解图像信号;
    或者,
    获取图像信号、所述第一类通道对应的第一当前曝光增益以及所述第二类通道对应的第二当前曝光增益;若所述第二当前曝光增益小于所述第一当前曝光增益,则根据所述图像信号中所述第二类通道的图像数据,对所述第一类通道的图像数据进行边缘判决插值,若所述第二当前曝光增益大于所述第一当前曝光增益,则根据所述图像信号中所述第一类通道的图像数据,对所述第二类通道的图像数据进行边缘判决插值;获得插值后的可见光信号的各色彩分量的图像信号以及近红外光的图像信号;将所述各色彩分量的图像信号进行组合,得到第一分解图像信号,并将所述近红外光的图像信号作为第二分解图像信号。
  34. 根据权利要求32所述的方法,其特征在于,在所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性的步骤之前,所述方法还包括:
    对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;根据所述相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,包括:
    根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号;
    对所述第一恢复图像信号进行处理,得到彩色的第一输出图像信号。
  35. 根据权利要求32所述的方法,其特征在于,在所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性的步骤之前,所述方法还包括:
    对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;根据所述 相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,包括:
    根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    将归一化后的所述第二子处理图像信号作为恢复黑白色彩的第二恢复图像信号,或者,根据所述第一类通道的权重和所述第二类通道的权重,对归一化后的所述第一子处理图像信号和所述第二子处理图像信号进行加权,得到恢复黑白色彩的第二恢复图像信号;
    对所述第二恢复图像信号进行处理,得到黑白的第二输出图像信号。
  36. 根据权利要求32所述的方法,其特征在于,在所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性的步骤之前,所述方法还包括:
    对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;根据所述相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,包括:
    根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;
    对所述第一恢复图像信号进行处理,得到第三子处理图像信号;
    对所述第二恢复图像信号进行处理,得到第四子处理图像信号;
    对所述第三子处理图像信号和所述第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号。
  37. 根据权利要求32所述的方法,其特征在于,在所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性的步骤之前,所述方法还包括:
    对所述第一分解图像信号进行预处理,得到第一子处理图像信号;
    对所述第二分解图像信号进行预处理,得到第二子处理图像信号,所述预处理包括坏点校正、黑电平校正、数字增益、降噪中的至少一种处理方式;
    所述根据所述第一类通道对应的第一当前曝光参数、所述第二类通道对应第二当前曝 光参数及所述关联信息,确定所述第一类通道和所述第二类通道之间的相关性;根据所述相关性,确定第一输出图像信号和/或第二输出图像信号的步骤,包括:
    根据所述第一当前曝光参数及所述第二当前曝光参数,将所述第一子处理图像信号和所述第二子处理图像信号归一化至同一曝光参数下;
    基于所述关联信息,确定所述第一类通道的权重和所述第二类通道的权重;
    根据所述第一类通道的权重和所述第二类通道的权重,以及归一化后的所述第一子处理图像信号和所述第二子处理图像信号,得到恢复彩色色彩的第一恢复图像信号和恢复黑白色彩的第二恢复图像信号;
    对所述第一恢复图像信号进行处理,得到第三子处理图像信号;
    对所述第二恢复图像信号进行处理,得到第四子处理图像信号;
    对所述第三子处理图像信号和所述第四子处理图像信号至少进行降噪、融合、增强处理,得到彩色的第一输出图像信号,对所述第四子处理图像信号至少进行降噪、增强处理,得到黑白的第二输出图像信号。
  38. 根据权利要求32所述的方法,其特征在于,在所述对所述图像信号的可见光信号和近红外光信号进行分解,输出分解后的第一分解图像信号和第二分解图像信号的步骤之前,所述方法还包括:
    获取所述图像传感器输出的图像信号,对所述图像信号进行预处理。
  39. 根据权利要求29所述的方法,其特征在于,在所述根据所述每两类通道之间的相关性,去除包含在一类通道中的另一类通道的光分量的步骤之后,所述方法还包括:
    将已去除其他类通道的光分量的各类通道的图像数据进行融合,得到融合后的图像信号。
PCT/CN2021/072427 2020-01-22 2021-01-18 一种成像系统及图像处理方法 WO2021147804A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010073691.3 2020-01-22
CN202010073691.3A CN113163124B (zh) 2020-01-22 2020-01-22 一种成像系统及图像处理方法

Publications (1)

Publication Number Publication Date
WO2021147804A1 (zh)

Family

ID=76881954

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/072427 WO2021147804A1 (zh) 2020-01-22 2021-01-18 一种成像系统及图像处理方法

Country Status (2)

Country Link
CN (2) CN115297268B (zh)
WO (1) WO2021147804A1 (zh)

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100568926C (zh) * 2006-04-30 2009-12-09 华为技术有限公司 自动曝光控制参数的获得方法及控制方法和成像装置
US8436914B2 (en) * 2008-11-07 2013-05-07 Cisco Technology, Inc. Method for automatic exposure control within a video capture device
CN104113743B (zh) * 2013-04-18 2017-09-15 深圳中兴力维技术有限公司 低照度下彩色摄像机自动白平衡处理方法及装置
JP6183238B2 (ja) * 2014-02-06 2017-08-23 株式会社Jvcケンウッド 撮像装置及び撮像装置の制御方法
CN104980628B (zh) * 2014-04-13 2018-09-11 比亚迪股份有限公司 图像传感器和监控系统
CN106375645B (zh) * 2015-07-21 2019-08-30 杭州海康威视数字技术股份有限公司 一种基于红外摄像装置的自适应控制系统
CN205666883U (zh) * 2016-03-23 2016-10-26 徐鹤菲 支持近红外光与可见光成像的复合成像系统和移动终端
CN108024106B (zh) * 2016-11-04 2019-08-23 上海富瀚微电子股份有限公司 支持rgbir和rgbw格式的颜色校正装置及方法
CN108419062B (zh) * 2017-02-10 2020-10-02 杭州海康威视数字技术股份有限公司 图像融合设备和图像融合方法
CN107798652A (zh) * 2017-10-31 2018-03-13 广东欧珀移动通信有限公司 图像处理方法、装置、可读存储介质和电子设备
CN107948521B (zh) * 2017-12-01 2019-12-27 深圳市同为数码科技股份有限公司 一种基于ae和awb统计信息的摄像机日夜模式切换系统
EP3514600A1 (en) * 2018-01-19 2019-07-24 Leica Instruments (Singapore) Pte. Ltd. Method for fluorescence intensity normalization
CN108600725B (zh) * 2018-05-10 2024-03-19 浙江芯劢微电子股份有限公司 一种基于rgb-ir图像数据的白平衡校正装置及方法
CN110493495B (zh) * 2019-05-31 2022-03-08 杭州海康威视数字技术股份有限公司 图像采集装置和图像采集的方法
CN110493533B (zh) * 2019-05-31 2021-09-07 杭州海康威视数字技术股份有限公司 图像采集装置及图像采集方法
CN110519489B (zh) * 2019-06-20 2021-04-06 杭州海康威视数字技术股份有限公司 图像采集方法及装置
CN110602420B (zh) * 2019-09-30 2022-02-15 杭州海康威视数字技术股份有限公司 相机、黑电平调整方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110193967A1 (en) * 2010-02-10 2011-08-11 Sony Corporation Imaging device, imaging device control method and program
CN108353134A (zh) * 2015-10-30 2018-07-31 三星电子株式会社 使用多重曝光传感器的拍摄装置及其拍摄方法
CN107438170A (zh) * 2016-05-25 2017-12-05 杭州海康威视数字技术股份有限公司 一种图像透雾方法及实现图像透雾的图像采集设备
CN110493506A (zh) * 2018-12-12 2019-11-22 杭州海康威视数字技术股份有限公司 一种图像处理方法和系统
CN110493531A (zh) * 2018-12-12 2019-11-22 杭州海康威视数字技术股份有限公司 一种图像处理方法和系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071026A (zh) * 2022-01-18 2022-02-18 睿视(天津)科技有限公司 基于红分量特征检测的自动曝光控制方法及装置
CN114071026B (zh) * 2022-01-18 2022-04-05 睿视(天津)科技有限公司 基于红分量特征检测的自动曝光控制方法及装置
CN115361494A (zh) * 2022-10-09 2022-11-18 浙江双元科技股份有限公司 一种多线扫描图像传感器的高速频闪图像处理系统及方法

Also Published As

Publication number Publication date
CN113163124B (zh) 2022-06-03
CN115297268A (zh) 2022-11-04
CN113163124A (zh) 2021-07-23
CN115297268B (zh) 2024-01-05

Similar Documents

Publication Publication Date Title
US11252345B2 (en) Dual-spectrum camera system based on a single sensor and image processing method
US10257484B2 (en) Imaging processing device and imaging processing method
CN108419061B (zh) 基于多光谱的图像融合设备、方法及图像传感器
JP4346634B2 (ja) 目標物検出装置
CN107451969B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
CN108600725B (zh) 一种基于rgb-ir图像数据的白平衡校正装置及方法
US8666153B2 (en) Image input apparatus
US8767103B2 (en) Color filter, image processing apparatus, image processing method, image-capture apparatus, image-capture method, program and recording medium
US7796814B2 (en) Imaging device
US20140078247A1 (en) Image adjuster and image adjusting method and program
WO2021147804A1 (zh) 一种成像系统及图像处理方法
EP2775719A1 (en) Image processing device, image pickup apparatus, and storage medium storing image processing program
US20120287286A1 (en) Image processing device, image processing method, and program
US10110825B2 (en) Imaging apparatus, imaging method, and program
WO2020119504A1 (zh) 一种图像处理方法和系统
CN110493532B (zh) 一种图像处理方法和系统
US9392242B2 (en) Imaging apparatus, method for controlling imaging apparatus, and storage medium, for underwater flash photography
CN107835351B (zh) 一种双摄像头模组以及终端
CN110493531B (zh) 一种图像处理方法和系统
US20100231740A1 (en) Image processing apparatus, image processing method, and computer program
US20130266220A1 (en) Color signal processing circuit, color signal processing method, color reproduction evaluating method, imaging apparatus, electronic apparatus and testing device
US11544862B2 (en) Image sensing device and operating method thereof
CN112422940A (zh) 一种自适应颜色校正方法
Misaka et al. FPGA implementation of an algorithm that enables color constancy
JP2010245851A (ja) 固体撮像装置及びホワイトバランス処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21743864

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21743864

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.02.2023)
