WO2023005870A1 - Image processing method and related device - Google Patents


Info

Publication number
WO2023005870A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
processed
processing
spectral information
Prior art date
Application number
PCT/CN2022/107602
Other languages
English (en)
French (fr)
Inventor
曾毅华
翟其彦
万磊
钟顺才
李自亮
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023005870A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/73Colour balance circuits, e.g. white balance circuits or colour temperature control

Definitions

  • the embodiments of the present application relate to the field of image processing, and in particular, to an image processing method and related equipment.
  • the most commonly used color processing method in the industry is to calibrate multiple light sources in an offline scene to obtain correction parameters under different light sources, and then adjust the original image collected by the color camera according to the correction parameters to obtain the target image shown to the user.
  • the above-mentioned method of calibrating multiple light sources in an offline scene can be understood as estimating the light source, and the correction parameters obtained in this way are not accurate enough, which affects subsequent color processing.
  • Embodiments of the present application provide an image processing method and related equipment.
  • by introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed, the image to be processed can be adjusted in real time; moreover, compared with the prior-art approach of estimating the light source, using the collected environmental spectral information improves the adjustment quality of the target image.
  • the first aspect of the embodiments of the present application provides an image processing method, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the method can be applied to an image processing device.
  • the image processing device includes a color camera and a multi-spectral sensor.
  • the method includes: acquiring a first image to be processed through the color camera; acquiring first environmental spectral information through the multi-spectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed; obtaining a white balance gain based on the first image to be processed and the first environmental spectral information; and performing first processing on the first image to be processed to obtain a first target image, where the first processing includes white balance processing based on the white balance gain.
  • in this embodiment, introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed allows the image to be adjusted in real time and, compared with estimating the light source as in the prior art, improves the adjustment quality of the target image.
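As a minimal sketch of what white balance processing with a per-channel gain amounts to (the function name, the toy patch and the gain values below are illustrative assumptions, not taken from the application):

```python
import numpy as np

def apply_white_balance(raw_rgb: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Scale each color channel by its gain and clip to the valid range.

    raw_rgb: H x W x 3 linear image with values in [0, 1].
    gains:   per-channel (R, G, B) white balance gains.
    """
    return np.clip(raw_rgb * gains.reshape(1, 1, 3), 0.0, 1.0)

# A gray patch captured under a warm light source looks reddish; gains
# normalized so the green channel is unchanged neutralize it.
patch = np.full((2, 2, 3), [0.6, 0.5, 0.35])    # reddish "gray" patch
gains = np.array([0.5 / 0.6, 1.0, 0.5 / 0.35])  # pull R and B to G's level
balanced = apply_white_balance(patch, gains)     # neutral gray, R = G = B
```

After the gains are applied, every pixel of the patch has equal R, G and B, which is the defining property of a correctly white-balanced neutral surface.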
  • the above-mentioned first image to be processed and the first target image are solid color images or color images with a large area of solid color.
  • for such images, white balancing the solid-color image using the first environmental spectral information collected by the multispectral sensor can improve the adjustment quality of the target image.
  • white balancing the image using the first environmental spectral information collected by the multispectral sensor together with a white balance gain obtained from a neural network can likewise improve the adjustment quality of the target image.
  • the above steps further include: acquiring multiple spectral response functions of the color camera; and acquiring multiple compensation values based on the first environmental spectral information and the multiple spectral response functions; the first processing further includes color-uniformity (color shading) processing based on the multiple compensation values.
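A minimal sketch of the idea behind these steps — estimating per-channel responses from the spectral response functions and the ambient spectrum, then deriving compensation gains. All function names and the toy spectra are assumptions for illustration; the application does not specify this computation:

```python
import numpy as np

def shading_compensation(env_spectrum, resp_center, resp_edge):
    """Per-channel compensation gains for color shading.

    env_spectrum: ambient spectral power over N wavelength bins
                  (as collected by the multispectral sensor).
    resp_center:  3 x N spectral response functions at the image center.
    resp_edge:    3 x N spectral response functions at an off-center
                  position, where the response typically differs.
    Returns gains that bring the estimated edge response to the center's.
    """
    est_center = resp_center @ env_spectrum  # estimated R, G, B at center
    est_edge = resp_edge @ env_spectrum      # estimated R, G, B at edge
    return est_center / est_edge

# Toy example: the edge attenuates R to 50% and B to 80% of the center.
env = np.ones(4)
center = np.ones((3, 4))
edge = np.array([[0.5] * 4, [1.0] * 4, [0.8] * 4])
gains = shading_compensation(env, center, edge)
```

Because the estimates are driven by the live ambient spectrum, the gains track the actual illuminant instead of relying on an offline-calibrated table.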
  • the above steps further include: obtaining the tristimulus value curve and the reflectance of the color card; and acquiring a color correction matrix based on the first environmental spectral information, the multiple spectral response functions, the reflectance and the tristimulus value curve; the first processing further includes color space conversion processing based on the color correction matrix.
  • the above step of acquiring the color correction matrix based on the first environmental spectral information, the multiple spectral response functions, the reflectance and the tristimulus value curve includes: converting the first environmental spectral information into a light source curve; acquiring a first response value of the color card to the color camera based on the multiple spectral response functions, the light source curve and the reflectance; acquiring a second response value of the color card in a first human eye color space based on the tristimulus value curve, the light source curve and the reflectance, where the first human eye color space is the response space corresponding to the human eye matching functions; and obtaining the color correction matrix based on the first response value and the second response value, where the color correction matrix represents the conversion relationship between the first response value and the second response value.
  • in this way, the color space conversion matrix is obtained from the two response values; generating the conversion matrix from the first environmental spectral information collected by the multi-spectral sensor, rather than from offline calibration as in the prior art, can improve the quality of color reproduction.
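The two response values described above can be related by a least-squares fit; the sketch below (variable names and the synthetic data are assumptions) recovers a known 3x3 matrix from simulated color-chart patch responses:

```python
import numpy as np

def color_correction_matrix(cam_resp, xyz_resp):
    """Solve for the 3x3 matrix M minimizing ||cam_resp @ M.T - xyz_resp||.

    cam_resp: K x 3 first response values (color-chart patches as seen by
              the color camera, simulated from the spectral response
              functions, the light source curve and patch reflectances).
    xyz_resp: K x 3 second response values of the same patches in the
              human-eye color space (CIE 1931 matching functions).
    """
    m_t, *_ = np.linalg.lstsq(cam_resp, xyz_resp, rcond=None)
    return m_t.T

# Synthetic check: build XYZ responses from a known matrix and recover it.
rng = np.random.default_rng(0)
cam = rng.random((24, 3))                     # 24-patch chart, RGB responses
m_true = np.array([[ 1.8, -0.5, -0.3],
                   [-0.4,  1.6, -0.2],
                   [ 0.0, -0.6,  1.6]])
xyz = cam @ m_true.T
ccm = color_correction_matrix(cam, xyz)
```

With real measurements the system is overdetermined and noisy, so the least-squares solution is a best fit rather than an exact recovery; the synthetic data here is only a consistency check.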
  • the foregoing first processing further includes: performing post-processing on the white balance processed image to obtain the first target image.
  • this method can be understood as adjusting the image based on the transformation relationship between the human eye color response space (the response space formed by the CIE 1931 color matching functions) and another human eye color response space (such as the response space used by CAT02 in the color appearance model CIECAM02), which is beneficial to the subsequent white balance processing.
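The CAT02 transform mentioned here maps XYZ into a sharpened cone space where a von-Kries-style scaling performs the chromatic adaptation. A minimal sketch follows; the `M_CAT02` matrix is the published CIECAM02 one, while the function name and white points are illustrative:

```python
import numpy as np

# CAT02 matrix from CIECAM02: maps XYZ into a "sharpened" cone space.
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, white_src, white_dst):
    """Von-Kries-style chromatic adaptation in the CAT02 cone space:
    scale each cone response by the ratio of the destination and source
    white points, then return to XYZ."""
    scale = (M_CAT02 @ white_dst) / (M_CAT02 @ white_src)
    return np.linalg.inv(M_CAT02) @ (scale * (M_CAT02 @ xyz))

# Illustrative white points (CIE illuminant A -> D65, Y normalized to 1).
white_a = np.array([1.09850, 1.00000, 0.35585])
white_d65 = np.array([0.95047, 1.00000, 1.08883])
adapted = cat02_adapt(white_a, white_a, white_d65)  # adapt the source white
```

By construction, adapting the source white point itself yields the destination white point exactly, which is a quick sanity check on any chromatic adaptation transform.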
  • the above steps further include: displaying the first target image to the user.
  • the first image to be processed is adjusted through the first environmental spectral information collected by the multi-spectral sensor, and the adjusted image is displayed to the user to improve the color processing effect of the image and improve user experience.
  • the above steps further include: acquiring a second image to be processed through the color camera; acquiring second environmental spectral information through the multispectral sensor, where the second environmental spectral information corresponds to the same shooting scene as the second image to be processed; determining filter parameters based on the similarity between the first environmental spectral information and the second environmental spectral information; filtering the first target image and the second image to be processed based on the filter parameters to obtain correction parameters; and adjusting the second image to be processed based on the correction parameters to obtain a second target image.
  • determining the correction parameters of the second image to be processed from the similarity improves the temporal stability of the color processing while preserving sensitivity: it avoids flickering of the color effect in the time domain while still responding to environmental changes with timely parameter adjustments.
  • the above step of determining the filter parameters based on the similarity between the first environmental spectral information and the second environmental spectral information includes: generating a filter strength function based on the similarity, and determining the filter parameters based on the filter strength function.
  • the similarity and the filter strength are positively correlated: the greater the similarity, the stronger the filtering.
  • when the first and second environmental spectral information are similar, the historical correction parameters (those obtained from the first environmental spectral information and the color channels of the first image to be processed) can be reused, or given a larger weight, while the new correction parameters (those obtained from the second environmental spectral information and the second image to be processed) are given a smaller weight, yielding the correction parameters of the second image to be processed.
  • when the difference between the first and second environmental spectral information is large (for example, moving between indoor and outdoor environments), the new correction parameters can be used directly, or given a larger weight while the historical correction parameters are given a smaller weight.
  • the filter strength function is generated from the similarity: the higher the similarity, the greater the weight of the correction parameters of the historical frame. This improves the temporal stability of the color processing while preserving sensitivity, avoiding color flicker in the time domain while still responding to environmental changes with timely parameter adjustments.
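The similarity-driven filtering described above can be sketched as follows. The cosine similarity measure, the power-law filter strength function and its exponent are all assumptions; the application does not specify their forms:

```python
import numpy as np

def spectral_similarity(s1, s2):
    """Cosine similarity of two ambient spectra (1.0 = identical shape)."""
    return float(np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2)))

def filtered_correction(prev_params, new_params, similarity, strength=8.0):
    """Blend historical and new correction parameters: the weight of the
    historical parameters grows with the spectral similarity, so stable
    scenes are smoothed (no temporal flicker) while an abrupt environment
    change lets the new parameters dominate."""
    w_hist = similarity ** strength  # assumed filter-strength function
    return (w_hist * np.asarray(prev_params)
            + (1.0 - w_hist) * np.asarray(new_params))

prev = np.array([2.0, 1.0, 1.5])            # params of the historical frame
new = np.array([1.0, 1.0, 1.0])             # params of the current frame
s1 = np.array([1.0, 2.0, 3.0, 2.0])         # first ambient spectrum
s_same = 0.5 * s1                           # same illuminant, dimmer scene
s_changed = np.array([3.0, 1.0, 0.5, 0.2])  # very different illuminant

out_same = filtered_correction(prev, new, spectral_similarity(s1, s_same))
out_changed = filtered_correction(prev, new, spectral_similarity(s1, s_changed))
```

With an unchanged illuminant the output sticks to the historical parameters; with a very different spectrum the output is dominated by the new parameters, matching the stability-versus-sensitivity trade-off described above.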
  • the second aspect of the embodiments of the present application provides an image processing method, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the method can be applied to an image processing device.
  • the image processing device includes a color camera and a multi-spectral sensor.
  • the method includes: acquiring a first image to be processed through the color camera; acquiring first environmental spectral information through the multi-spectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed; obtaining multiple spectral response functions of the color camera; obtaining multiple compensation values based on the first environmental spectral information and the multiple spectral response functions; and performing first processing on the first image to be processed to obtain a first target image, where the first processing includes color-uniformity (color shading) processing based on the multiple compensation values.
  • whereas the prior art requires offline calibration to make the color of the image uniform, this method enables real-time calculation and avoids the problems caused by possible errors in offline table selection.
  • generating the color-uniformity compensation values from the first environmental spectral information collected by the multi-spectral sensor, rather than from offline calibration as in the prior art, can improve the quality of the color uniformity.
  • the third aspect of the embodiments of the present application provides an image processing method, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the method can be applied to an image processing device.
  • the image processing device includes a color camera and a multi-spectral sensor.
  • the method includes: acquiring a first image to be processed through the color camera; acquiring first environmental spectral information through the multi-spectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed; obtaining multiple spectral response functions of the color camera; obtaining the tristimulus value curve and the reflectance of the color card; acquiring a color correction matrix based on the first environmental spectral information, the multiple spectral response functions, the reflectance and the tristimulus value curve; and performing first processing on the first image to be processed to obtain a first target image, where the first processing includes color space conversion processing based on the color correction matrix.
  • the above step of obtaining the color correction matrix based on the first environmental spectral information, the multiple spectral response functions, the reflectance and the tristimulus value curve includes: converting the first environmental spectral information into a light source curve; acquiring a first response value of the color card to the color camera based on the multiple spectral response functions, the light source curve and the reflectance; acquiring a second response value of the color card in a first human eye color space based on the tristimulus value curve, the light source curve and the reflectance, where the first human eye color space is the response space corresponding to the human eye matching functions; and obtaining the color correction matrix based on the first response value and the second response value, where the color correction matrix represents the conversion relationship between the first response value and the second response value.
  • in this way, the color space conversion matrix is obtained from the two response values; generating the conversion matrix from the first environmental spectral information collected by the multi-spectral sensor, rather than from offline calibration as in the prior art, can improve the quality of color reproduction.
  • the above steps further include: adjusting the image after color space conversion processing based on the conversion relationship between the first human eye color space and a second human eye color space, where the second human eye color space is the corresponding response space when the color appearance model performs chromatic adaptation.
  • this method can be understood as adjusting the image based on the transformation relationship between the human eye color response space (the response space formed by the CIE 1931 color matching functions) and another human eye color response space (such as the response space used by CAT02 in the color appearance model CIECAM02), which is beneficial to the subsequent white balance processing.
  • the fourth aspect of the embodiments of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing equipment includes:
  • the first acquiring unit is used to acquire the first image to be processed through the color camera
  • the second acquisition unit is configured to acquire first environmental spectral information through a multispectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed;
  • the third acquisition unit is configured to obtain a white balance gain based on the first image to be processed and the first environmental spectral information;
  • the processing unit is configured to perform first processing on the first image to be processed to obtain a first target image, where the first processing includes white balance processing based on the white balance gain.
  • the above-mentioned first image to be processed and the first target image are solid color images or color images with a large area of solid color.
  • the above-mentioned third acquisition unit is specifically configured to input the first environmental spectrum information and the first image to be processed into a trained neural network to obtain a white balance gain;
  • the trained neural network is obtained by training a neural network with the training data as its input until the value of a loss function is less than a threshold.
  • the training data includes training original images and training spectral information, and the output of the neural network includes the white balance gain.
  • the loss function indicates the difference between the white balance gain output by the neural network and the actual white balance gain.
  • the actual white balance gain is derived from the response value of a gray card placed in the shooting scene.
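A hedged sketch of how a ground-truth gain can be derived from a gray card's raw response, and a simple loss comparing it against a network's prediction. The mean-squared-error form and both function names are assumptions; the application does not specify the loss:

```python
import numpy as np

def gains_from_gray_card(gray_rgb):
    """Ground-truth white balance gains from a gray card's raw response:
    a neutral patch should satisfy R = G = B after the gains are applied,
    so scale R and B to match G."""
    r, g, b = gray_rgb
    return np.array([g / r, 1.0, g / b])

def wb_loss(predicted_gain, actual_gain):
    """Mean squared difference between the network's predicted gains and
    the gray-card-derived gains (stand-in for the unspecified loss)."""
    p = np.asarray(predicted_gain)
    a = np.asarray(actual_gain)
    return float(np.mean((p - a) ** 2))

# A gray card that reads reddish under the scene illuminant:
actual = gains_from_gray_card([0.6, 0.5, 0.4])  # -> roughly [0.833, 1.0, 1.25]
perfect = wb_loss(actual, actual)               # 0.0 when prediction matches
```

Training then drives the network's predicted gains toward the gray-card gains until the loss falls below the threshold mentioned above.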
  • the above-mentioned device further includes:
  • the fourth acquisition unit is used to acquire multiple spectral response functions of the color camera
  • the fourth acquisition unit is further configured to acquire multiple estimated values based on the first environmental spectral information and multiple spectral response functions;
  • the fourth acquisition unit is further configured to calculate a plurality of compensation values based on a plurality of estimated values
  • the processing unit is also used for color uniform color shading processing based on multiple compensation values.
  • the above-mentioned device further includes:
  • the fifth acquisition unit is used to acquire the tristimulus value curve and the reflectance of the color card
  • the fifth acquisition unit is further configured to acquire a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves;
  • the processing unit is also used for color space conversion processing based on the color correction matrix.
  • the above-mentioned fifth acquisition unit is specifically configured to convert the first environmental spectrum information into a light source curve
  • the fifth acquisition unit is specifically used to acquire the first response value of the color card to the color camera based on multiple spectral response functions, light source curves and reflectivity;
  • the fifth acquisition unit is specifically configured to acquire the second response value of the color card to the first human eye color space based on the tristimulus value curve, the light source curve and the reflectivity, and the first human eye color space is the response space corresponding to the human eye matching function;
  • the fifth acquisition unit is specifically configured to obtain a color correction matrix based on the first response value and the second response value, where the color correction matrix is used to represent the conversion relationship between the first response value and the second response value.
  • the above processing unit is further configured to post-process the white balance processed image to obtain the first target image.
  • the above-mentioned device further includes:
  • the display unit is used for displaying the first target image to the user.
  • the above-mentioned first acquiring unit is further configured to acquire the second image to be processed through a color camera;
  • the second acquiring unit is further configured to acquire second environmental spectral information through a multispectral sensor, where the second environmental spectral information corresponds to the same shooting scene as the second image to be processed;
  • the above-mentioned device further includes:
  • a determination unit configured to determine filter parameters based on the similarity between the first environmental spectral information and the second environmental spectral information
  • a filtering unit configured to filter the first target image and the second image to be processed based on filtering parameters to obtain correction parameters
  • the processing unit is further configured to adjust the second image to be processed based on the correction parameter to obtain a second target image.
  • the above-mentioned determining unit is specifically configured to generate a filter strength function based on the similarity
  • the determining unit is specifically configured to determine the filtering parameters based on the filtering intensity function.
  • the fifth aspect of the embodiment of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing equipment includes:
  • the first acquiring unit is used to acquire the first image to be processed through the color camera
  • the second acquisition unit is configured to acquire first environmental spectral information through a multispectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed;
  • the third acquisition unit is used to acquire multiple spectral response functions of the color camera
  • the third acquisition unit is further configured to acquire multiple compensation values based on the first environmental spectral information and multiple spectral response functions;
  • the processing unit is configured to perform first processing on the first image to be processed to obtain a first target image, where the first processing includes color-uniformity (color shading) processing based on the multiple compensation values.
  • the sixth aspect of the embodiments of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing equipment includes:
  • the first acquiring unit is used to acquire the first image to be processed through the color camera
  • the second acquisition unit is configured to acquire first environmental spectral information through a multispectral sensor, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed;
  • the third acquisition unit is used to acquire multiple spectral response functions of the color camera
  • the third acquisition unit is also used to acquire the tristimulus value curve and the reflectance of the color card;
  • the third acquisition unit is further configured to acquire a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves;
  • the processing unit is configured to perform first processing on the first image to be processed to obtain a first target image, and the first processing includes color space conversion processing based on a color correction matrix.
  • the above-mentioned third acquisition unit is specifically configured to convert the first environmental spectrum information into a light source curve
  • the third acquisition unit is specifically used to acquire the first response value of the color card to the color camera based on multiple spectral response functions, light source curves, and reflectance;
  • the third acquisition unit is specifically used to acquire the second response value of the color card to the first human eye color space based on the tristimulus value curve, the light source curve, and the reflectance, and the first human eye color space is a response space corresponding to the human eye matching function;
  • the third obtaining unit is specifically configured to obtain a color correction matrix based on the first response value and the second response value, and the color correction matrix is used to represent a conversion relationship between the first response value and the second response value.
  • the above-mentioned processing unit is further configured to adjust the image after color space conversion processing based on the conversion relationship between the first human eye color space and a second human eye color space, where the second human eye color space is the corresponding response space when the color appearance model performs chromatic adaptation.
  • the seventh aspect of the embodiments of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing device includes a color camera, a multispectral sensor and an image processor;
  • a color camera for obtaining the first image to be processed
  • a multispectral sensor configured to acquire first environmental spectral information, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed;
  • an image processor, configured to obtain a white balance gain based on the first image to be processed and the first environmental spectral information, and to perform first processing on the first image to be processed to obtain a first target image, where the first processing includes white balance processing based on the white balance gain.
  • the eighth aspect of the embodiments of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing device includes a color camera, a multispectral sensor and an image processor;
  • a color camera for obtaining the first image to be processed
  • a multispectral sensor configured to acquire first environmental spectral information, where the first environmental spectral information corresponds to the same shooting scene as the first image to be processed;
  • An image processor for obtaining multiple spectral response functions of the color camera
  • the image processor is also used to obtain multiple compensation values based on the first environmental spectral information and multiple spectral response functions;
  • the image processor is further configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes color uniform color shading processing based on a plurality of compensation values.
  • the ninth aspect of the embodiments of the present application provides an image processing device, which can be applied to image color processing scenarios such as white balance, color restoration, and color uniformity.
  • the image processing device includes a color camera, a multispectral sensor and an image processor;
  • a color camera for obtaining the first image to be processed
  • a multi-spectral sensor configured to acquire first environmental spectral information corresponding to the first image to be processed
  • An image processor for obtaining multiple spectral response functions of the color camera
  • the image processor is also used to obtain the tristimulus value curve and the reflectance of the color card;
  • the image processor is also used to obtain a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves;
  • the image processor is further configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes color space conversion processing based on a color correction matrix.
  • the tenth aspect of the present application provides an image processing device that executes the method in the aforementioned first aspect or any possible implementation of the first aspect, the method in the aforementioned second aspect or any possible implementation of the second aspect, or the method in the aforementioned third aspect or any possible implementation of the third aspect.
  • the eleventh aspect of the present application provides an image processing device, including a processor coupled with a memory, where the memory is used to store programs or instructions. When the programs or instructions are executed by the processor, the image processing device implements the method in the above first aspect or any possible implementation of the first aspect, the method in the above second aspect or any possible implementation of the second aspect, or the method in the above third aspect or any possible implementation of the third aspect.
  • the twelfth aspect of the present application provides a computer-readable medium on which computer programs or instructions are stored; when the computer programs or instructions are run on a computer, the computer performs the method in the aforementioned first aspect or any possible implementation of the first aspect.
  • the thirteenth aspect of the present application provides a computer program product; when the computer program product is executed on a computer, the computer executes the method in the aforementioned first aspect or any possible implementation of the first aspect, the second aspect or any possible implementation of the second aspect, or the third aspect or any possible implementation of the third aspect.
  • for the technical effects brought by the fourth, seventh, tenth, eleventh, twelfth and thirteenth aspects or any of their possible implementations, refer to the technical effects brought by the first aspect or its different possible implementations; they are not repeated here.
  • for the technical effects brought by the fifth, eighth, tenth, eleventh, twelfth and thirteenth aspects or any of their possible implementations, refer to the technical effects brought by the second aspect or its different possible implementations; they are not repeated here.
  • for the technical effects brought by the sixth, ninth, tenth, eleventh, twelfth, or thirteenth aspects or any of their possible implementations, refer to the technical effects brought by the third aspect or its different possible implementations; they will not be repeated here.
  • the embodiments of the present application have the following advantages: by introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed, the image to be processed can be adjusted in real time; moreover, compared with the prior-art approach of estimating the light source, the adjustment quality of the target image can be improved.
  • FIG. 1 is a schematic flow chart of an image processing method provided by an embodiment of the present invention
  • FIG. 2 and FIG. 3 are two example diagrams of the first environmental spectral information provided by the embodiment of the present application.
  • Fig. 4 is an example diagram of the first image to be processed provided by the embodiment of the present application.
  • FIG. 5 is an example diagram of an image after white balance processing provided by the embodiment of the present application.
  • FIG. 6 is an example diagram of an image before color uniformity processing and an image after color uniformity processing provided by the embodiment of the present application
  • FIG. 7 is another example diagram of an image before color space conversion processing provided by the embodiment of the present application.
  • FIG. 8 is an example diagram of an image after color space conversion processing provided by the embodiment of the present application.
  • FIG. 9 is another schematic flowchart of an image processing method provided by an embodiment of the present invention.
  • FIG. 10 is another schematic flowchart of an image processing method provided by an embodiment of the present invention.
  • FIG. 11 to FIG. 14 are diagrams showing several structural examples of the image processing device in the embodiment of the present application.
  • Embodiments of the present application provide an image processing method and related equipment.
  • by introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed, the image to be processed can be adjusted in real time.
  • moreover, because the environmental spectral information corresponding to the image to be processed is collected directly, the adjustment quality of the target image can be improved compared with the prior-art approach of estimating the light source.
  • white balance: restoring white objects to white under any light source. The color cast that occurs when shooting under a specific light source is compensated by strengthening the corresponding complementary color; if white objects are restored to white, images of other scenes will also be close to the color perception habits of the human eye.
  • the "balance" in white balance can be understood as correcting the color difference caused by different color temperatures, so that white objects appear truly white.
  • color shading refers to uneven color across the same plane in space. For example, when using a mobile phone camera to take pictures, the center of the photo often appears reddish and the corners appear dark. The core reason is that the limited space inside a mobile phone forces trade-offs in the design of the optical system. Parallel light focuses at a certain distance after passing through a convex lens; cameras generally have loose space constraints and can use a long focal length, while mobile phones must shorten the focal length as much as possible so that the light focuses at a very short distance behind the lens. Although both focusing approaches achieve imaging on the photosensitive element, their effects differ greatly.
  • because the lens has different refractive indices for light of different wavelengths, light rays travel in slightly different directions after passing through the lens.
  • when the focal length is very short, the scattered light at the periphery cannot overlap completely due to premature focusing, so the center receives more light and the periphery receives less. This is the reddish center of the mobile phone photo mentioned above, that is, the root cause of the color shading phenomenon.
  • at present, the most commonly used color processing method in the industry is to calibrate multiple light sources in an offline scene to obtain correction parameters under different light sources, and to adjust the original image collected by the color camera according to these correction parameters to obtain the target image shown to the user.
  • however, the above method of calibrating multiple light sources in an offline scene amounts to estimating the light source; the correction parameters obtained in this way are not accurate enough, which affects subsequent color processing.
  • therefore, an embodiment of the present application provides an image processing method, which adjusts the image to be processed in real time by introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed.
  • moreover, because the environmental spectral information is collected rather than estimated, the adjustment quality of the target image can be improved.
  • the image processing method provided in the embodiment of the present application can be applied to color processing scenarios such as white balance, color uniformity, and color restoration.
  • an embodiment of an image processing method provided by an embodiment of the present application, the method may be applied to an image processing device, and the image processing device includes a color camera and a multi-spectral sensor.
  • This embodiment includes step 101 to step 104 .
  • the embodiment shown in FIG. 1 can be understood as performing white balance processing on the image to be processed.
  • Step 101 acquire a first image to be processed through a color camera.
  • the color camera in the embodiment of the present application can be understood as an RGB sensor, which can capture the color of the scene and take color photos.
  • the color camera can be a monocular camera or a binocular camera, which is arranged at the front position (ie, the front camera) or the rear position (ie, the rear camera) on the housing of the main body of the image processing device.
  • the color camera may be an ultra-wide-angle color camera, a wide-angle color camera, or a telephoto color camera, etc., which are not specifically limited here.
  • the first image to be processed is acquired through a color camera, and the first image to be processed may be an original RAW image collected by the color camera.
  • the color camera in the embodiment of the present application is used to collect color images or pure color images, and the specific structure of the color camera is not limited here.
  • the first image to be processed is a solid-color image (or called a monochrome image) or a large-area single-color image.
  • the first image to be processed may be an original RAW domain image (also referred to as a RAW image). A RAW image is the raw digital data obtained when a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor converts the light signal captured by the camera into a digital signal, before any processing by the image signal processor (ISP).
  • the RAW image may be a bayer image in a Bayer (bayer) format.
  • Step 102 acquiring first environmental spectral information through a multi-spectral sensor.
  • the multi-spectral sensor in the embodiment of the present application is used to collect the spectrum.
  • the spectrum (or optical spectrum) can be understood as a pattern in which monochromatic light is dispersed and arranged in order of wavelength or frequency.
  • the multispectral sensor can collect spectra in the 350–1000 nanometer band, and its field of view (FOV) is plus or minus 35 degrees.
  • the multispectral sensor may include 8 visible light bands and multiple special bands (for example, a full-band channel, a flicker-frequency detection channel, and/or an infrared channel), or 10 visible light bands and multiple special bands. Understandably, these numbers of visible light bands are only examples; in practical applications there may be fewer or more visible light bands. This document uses 8 visible light bands as an example for description.
  • the first environmental spectral information is acquired through the multi-spectral sensor, and the first environmental spectral information corresponds to the same shooting scene as the first image to be processed.
  • the same shooting scene may mean that the distance between the position of the color camera when collecting the first image to be processed and the position of the multispectral sensor when collecting the first environmental spectral information is less than a certain threshold (for example, if that distance is 1 meter and the threshold is 2 meters, the distance is less than the threshold, so it can be determined that the first image to be processed and the first environmental spectral information belong to the same shooting scene).
  • the position above may be a relative position or a geographical position. If it is a relative position, it can be determined by establishing a scene model or the like; if it is a geographical position, the position of the first device and the position of the second device can be determined based on the global positioning system (GPS) or the Beidou navigation system, and the distance between the two positions is then obtained.
  • the same shooting scene can also be judged according to the light conditions, for example, based on whether the weather type when collecting the first image to be processed is similar to the weather type when collecting the first environmental spectral information. For example, if it is sunny when the first image to be processed is collected and sunny when the first environmental spectral information is collected, it can be determined that they belong to the same shooting scene; if it is sunny when the first image to be processed is collected but rainy when the first environmental spectral information is collected, it can be determined that they do not belong to the same shooting scene.
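  • as a minimal sketch of the distance-threshold check above (the function name, relative positions in meters, and the 2-meter threshold are illustrative assumptions):

```python
import math

# Hypothetical helper: decide whether two captures belong to the same
# shooting scene by comparing capture positions against a distance
# threshold, as described above.
def same_shooting_scene(pos_camera, pos_sensor, threshold_m=2.0):
    """pos_* are (x, y) relative positions in meters."""
    dx = pos_camera[0] - pos_sensor[0]
    dy = pos_camera[1] - pos_sensor[1]
    return math.hypot(dx, dy) < threshold_m

# 1 m apart with a 2 m threshold: same shooting scene.
print(same_shooting_scene((0.0, 0.0), (1.0, 0.0)))
```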
  • the first environmental spectral information may be a light source spectrum or a reflection spectrum, where the light source spectrum is the spectrum corresponding to the light source illuminating the scene of the first image to be processed, and the reflection spectrum is the spectrum corresponding to the light reflected by objects in the first image to be processed.
  • the first environmental spectral information may be any information that can characterize the environmental spectrum, such as the sampling points of the multispectral sensor or an environmental spectrum map, where the number of sampling points (or the number of channels of the multispectral sensor) is related to the design of the multispectral sensor (such as the number of visible light bands and special bands); it may be 8 or 10, or a smaller or larger number of sampling points.
  • the embodiment of the present application only takes the multispectral sensor collecting 8 sampling points as an example for description.
  • Fig. 2 and Fig. 3 are two examples of the first environmental spectral information. It can be understood that the first environmental spectral information may be 8 two-dimensional arrays, for example: (color temperature, light intensity).
  • Step 103 obtaining a white balance gain based on the first image to be processed and the first environmental spectral information.
  • based on the first image to be processed and the first environmental spectral information, the white point of the light source may be obtained, or the white balance gain may be obtained directly.
  • the white point of the light source can be understood as 1/white balance gain.
  • the white balance gain can be understood as red gain (Rgain) and blue gain (Bgain).
  • the Rgain and Bgain or the white point of the light source can be obtained through the gray world algorithm, the total reflection algorithm, or the input neural network.
  • compared with the gray world algorithm, this avoids white balance failure when the gray world assumption (that is, for an image with many color variations, the averages of the three RGB components tend to the same gray value) does not hold, for example for a pure color image.
  • the neural network is used as an example for description: the first image to be processed has a size of 16*16.
  • the values of the 8 sampling points may be up-sampled to a size of 8*8, and the first image to be processed is down-sampled to a size of 8*8.
  • the above-mentioned up-sampling or down-sampling steps can also be performed after being put into the neural network, that is, the first environmental spectral information and the first image to be processed are used as the input of the neural network.
  • the neural network may be a deep neural network, a convolutional neural network, etc., which are not limited here.
  • the trained neural network is obtained by training the neural network with the training data as input until the value of the loss function is less than a threshold.
  • the training data includes training original images and training spectral information, where the training original images and the training spectral information correspond to the same shooting scene.
  • the output of the neural network includes the white balance gain.
  • the loss function is used to indicate the difference between the white balance gain output by the neural network and the actual white balance gain.
  • the actual white balance gain is obtained by processing the response value of a gray card in the shooting scene. Since the RGB channel values of the gray card are equal or approximately equal, the response value of the gray card in the shooting scene helps determine the white balance gain.
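  • the input assembly described above (up-sampling the 8 sampling values to 8×8 and down-sampling the 16×16 first image to be processed to 8×8) can be sketched as follows; the row-wise repetition, nearest-neighbour resampling, and all names are illustrative assumptions, not the claimed method:

```python
# Sketch: build same-size planes from the spectral samples and the image
# so they can be stacked as neural-network input.
def upsample_samples(samples, size=8):
    """Spread the 8 samples over an 8x8 plane (row r repeats samples[r])."""
    return [[samples[r * len(samples) // size]] * size for r in range(size)]

def downsample_image(image, size=8):
    """Nearest-neighbour downsample of a single-channel H x W image."""
    h, w = len(image), len(image[0])
    return [[image[r * h // size][c * w // size]
             for c in range(size)] for r in range(size)]

samples = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # 8 sampling values
plane = upsample_samples(samples)                    # 8x8 spectral plane
img16 = [[float(r + c) for c in range(16)] for r in range(16)]  # toy 16x16 image
small = downsample_image(img16)                      # 8x8 image plane
print(len(plane), len(small))
```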
  • Step 104 performing first processing on the first image to be processed to obtain a first target image.
  • first processing is performed on the first image to be processed to obtain the first target image.
  • the first processing includes white balance processing based on white balance gain.
  • Rgain is multiplied by the value of the red channel in the first image to be processed, and Bgain is multiplied by the value of the blue channel in the first image to be processed, to obtain the adjusted channel values, thereby realizing white balance processing of the first image to be processed.
  • the adjustment method may be to directly multiply Rgain and Bgain by pixel values in the first image to be processed respectively.
  • alternatively, the gain values may be expanded according to the RGGB pattern and then multiplied by the pixels of the first image to be processed in the Bayer domain, which is not specifically limited here.
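  • the white balance application described above can be sketched as follows, assuming an H×W image stored as nested (R, G, B) tuples; the layout and gain values are illustrative assumptions:

```python
# Sketch: multiply the red channel by Rgain and the blue channel by
# Bgain, leaving the green channel unchanged.
def apply_white_balance(image, r_gain, b_gain):
    out = []
    for row in image:
        out_row = []
        for r, g, b in row:
            out_row.append((r * r_gain, g, b * b_gain))
        out.append(out_row)
    return out

img = [[(100, 120, 140), (80, 90, 100)]]  # toy 1x2 RGB image
print(apply_white_balance(img, 1.5, 1.2))
```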
  • FIG. 4 is an example diagram of the first image to be processed
  • FIG. 5 is an example diagram of the first target image.
  • in addition, the first processing in this embodiment may also include, but is not limited to, one or more of the following post-processing algorithms: automatic exposure control (AEC), automatic gain control (AGC), color correction, lens correction, noise removal/noise reduction, dead pixel removal, linearity correction, color interpolation, image downsampling, level compensation, and so on.
  • some image enhancement algorithms can also be included, such as gamma (Gamma) correction, contrast enhancement and sharpening, color noise removal and edge enhancement in YUV color space, color enhancement, color space conversion (for example, RGB is converted to YUV) and so on.
  • the first target image is, for example, an image in YUV or RGB format.
  • the first target image may be displayed to the user.
  • the image processing device further includes an image processor, and the image processor is used to execute step 103 and step 104 .
  • in this embodiment, the image to be processed can be adjusted in real time by introducing a multispectral sensor to collect the environmental spectral information corresponding to the image to be processed.
  • moreover, because the environmental spectral information corresponding to the first image to be processed is collected directly, the adjustment quality of the first target image can be improved compared with the prior-art approach of estimating the light source.
  • in particular, performing white balance on a pure color image based on the environmental spectral information collected by the multispectral sensor can improve the adjustment quality of the target image.
  • the first processing further includes any combination (or understood as at least one) of various color processing such as color restoration and color uniformity, which will be described separately below.
  • the first one is color shading processing.
  • the color uniformity processing may be before or after the white balance processing. If the color uniformity processing is before the white balance processing, the processing object of the color uniformity processing is the first image to be processed. If the color uniformity processing is after the white balance processing, the processing object of the color uniformity processing is the first image to be processed after the white balance processing.
  • the following description only takes the color uniform processing of the first image to be processed as an example. Of course, the color uniform processing may also be performed on the image after white balance processing or other color processing, which is not limited here.
  • the steps of the embodiment shown in FIG. 1 may further include: acquiring multiple spectral response functions of the color camera; acquiring multiple estimated values based on the first environmental spectral information and the multiple spectral response functions; and calculating multiple compensation values based on the multiple estimated values. The first processing then further includes color shading (color uniformity) processing based on the multiple compensation values.
  • the number of compensation values can be one-to-one corresponding to the number of pixels in the first image to be processed, or the number of compensation values is smaller than the number of pixels in the first image to be processed (it can also be understood as a region corresponding to a compensation value, a The area includes multiple pixels of the first image to be processed).
  • in the above step, the multiple spectral response functions of the color camera may be obtained by measuring the spectral response at each pixel position of the color camera with a monochromator, or the response functions of the color camera may be determined offline by adjusting the light source to different light intensities.
  • in the above step, the multiple estimated values may be obtained by up-sampling the values of the 8 sampling points and integrating the up-sampled values with the multiple spectral response functions.
  • the step of acquiring multiple compensation values may be: taking the central pixel of the first image to be processed as a reference, obtain the compensation values of pixels other than the central pixel; then up-sample the compensation values to the spatial size of the camera image; and use the up-sampled compensation values for color shading processing.
  • the color shading processing may directly multiply the multiple compensation values by the pixels of the first image to be processed, or the multiple compensation values may be arranged according to the RGGB pattern and then multiplied by the pixels of the first image to be processed in the Bayer domain to complete the color shading processing, which is not specifically limited here.
  • 8 multispectral sampling values are up-sampled to obtain 256 multispectral sampling values.
  • the multiple spectral response functions have dimensions 256 (vertical size of the image) × 256 (horizontal size of the image) × 3 (pixel channels) × 256 (number of multispectral channels). The up-sampled sampling values are integrated with the multiple spectral response functions by Formula 1 to obtain the multiple estimated values: V(x, y, c) = Σ_λ S(λ)·F(x, y, c, λ) (Formula 1).
  • compensation values of pixels other than the central pixel in the first image to be processed are obtained by Formula 2, taking the central pixel of the first image to be processed as a reference: K_F-size(x, y, c) = V(x_center, y_center, c) / V(x, y, c) (Formula 2).
  • the size of the compensation value is up-sampled to the spatial size of the camera image by Equation 3.
  • the upsampled compensation value is used for color shading processing through Formula 4.
  • V(x, y, c) is a plurality of estimated values, and the plurality of estimated values may be 256 (vertical size of the image)*256 (horizontal size of the image)*3 (pixel channel).
  • S(λ) is the 256 sampling values obtained after up-sampling the 8 multispectral sampling values.
  • F(x, y, c, λ) is the spectral response function at each spatial position (x, y).
  • x and y are the spatial dimensions of the spectral response of the color camera, for example, 256*256.
  • c is the pixel channel of the color camera.
  • only 3 pixel channels are used as an example for description.
  • λ is the response wavelength of the color camera.
  • the value range of the response wavelength is 380 nanometers (nm) to 780 nanometers (nm), which is the wavelength range of visible light to the human eye.
  • K_I-size(x', y', c) = interpolation_xy(K_F-size(x, y, c)) (Formula 3).
  • I'(x', y', c) = K_I-size(x', y', c) · I(x', y', c) (Formula 4).
  • K_F-size(x, y, c) is the multiple compensation values over the spatial positions (x, y).
  • V(x_center, y_center, c) is the estimated value at the central pixel of the first image to be processed.
  • interpolation_xy denotes up-sampling over the spatial dimensions (x, y).
  • K_I-size(x', y', c) is the result of up-sampling over (x, y), giving compensation values at the camera image size.
  • x', y' are the spatial dimensions of the camera image, for example 3000*4000.
  • I'(x', y', c) is the image after color uniformity processing, indexed by vertical position, horizontal position, and pixel channel.
  • I(x', y', c) is the first image to be processed, indexed in the same way.
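  • Formulas 1, 2, and 4 above can be sketched on toy sizes as follows (a 2×2 spatial grid, one pixel channel, and 4 wavelength bins; the spatial up-sampling of Formula 3 is omitted). All values are illustrative assumptions:

```python
def estimate_V(S, F):
    # Formula 1: V(x, y) = sum over wavelengths of S(l) * F(x, y, l)
    return [[sum(s * f for s, f in zip(S, F[x][y]))
             for y in range(len(F[0]))] for x in range(len(F))]

def compensation(V, xc, yc):
    # Formula 2: K(x, y) = V(x_center, y_center) / V(x, y)
    center = V[xc][yc]
    return [[center / v for v in row] for row in V]

def apply_compensation(K, I):
    # Formula 4: I'(x, y) = K(x, y) * I(x, y)
    return [[k * i for k, i in zip(krow, irow)] for krow, irow in zip(K, I)]

S = [1.0, 0.8, 0.6, 0.4]                             # up-sampled spectral samples
F = [[[0.2, 0.2, 0.2, 0.2], [0.1, 0.1, 0.1, 0.1]],   # F[x][y][wavelength]
     [[0.2, 0.2, 0.2, 0.2], [0.2, 0.2, 0.2, 0.2]]]
V = estimate_V(S, F)
K = compensation(V, 0, 0)            # reference pixel at (0, 0) for this sketch
I = [[10.0, 10.0], [10.0, 10.0]]     # uniform scene: compensation evens it out
print(apply_compensation(K, I))
```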
  • FIG. 6 is an example diagram of the first image to be processed (ie, the corresponding image before color shading removal) and the image after color uniform processing (ie, the corresponding image after color shading removal).
  • the second type is color restoration processing (also called color space conversion processing).
  • the color restoration processing has no time sequence relationship with the aforementioned white balance processing and color uniform processing.
  • the first processing includes color restoration and white balance processing as an example for description. If the color restoration processing is before the white balance processing, the processing object of the color restoration processing is the first image to be processed. If the color restoration processing is after the white balance processing, the processing object of the color restoration processing is the first image to be processed after the white balance processing.
  • the following description only takes color restoration processing of the first image to be processed as an example; of course, color restoration may also be performed on the image after white balance or other color processing, which is not limited here.
  • the steps of the embodiment shown in FIG. 1 may further include: obtaining the tristimulus value curve and the reflectance of the color card.
  • a color correction matrix is obtained based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves.
  • the first processing also includes: color space conversion processing based on the color correction matrix.
  • the above step: obtaining the color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves may specifically include: converting the first environmental spectral information into a light source curve.
  • Obtain the first response value of the color card to the color camera based on multiple spectral response functions, light source curves and reflectance (it can also be understood as the imaging of the color card in a space composed of three axes of RGB).
  • the second response value of the color card in the first human-eye color space is obtained based on the tristimulus value curve, the light source curve, and the reflectance.
  • a color correction matrix is obtained based on the first response value and the second response value.
  • the above-mentioned first human eye color space may be a response space corresponding to the human eye matching function.
  • the color correction matrix is used to represent the correlation between the first response value and the second response value.
  • the color correction matrix can also be understood as a transformation matrix of two color spaces.
  • the conversion matrix of the color space is a 3 ⁇ 3 matrix. It is equivalent to using one color space as the target and another color space as the source, and using the least squares method to obtain the transformation matrix.
  • the above-mentioned reflectance of the color card can be the reflectance of the standard 24 color cards, and can also be replaced by a regular rectangular wave, a custom curve, etc., which is not limited here.
  • the human eye matching function may be a human eye matching function under Commission Internationale de l'Eclairage (CIE) 1931 or other standards.
  • the tristimulus value curve may be a tristimulus value curve under CIE1931 or other specifications. Specifically, there is no limitation here.
  • the first response value of the color card to the color camera can be obtained through Formula 5 from the multiple spectral response functions, the light source curve, and the reflectance: first response value = Σ_λ css(λ)·I(λ)·R(λ) (Formula 5).
  • the second response value of the color card in the first human-eye color space is obtained through Formula 6 from the tristimulus value curve, the light source curve, and the reflectance: second response value = Σ_λ xyz(λ)·I(λ)·R(λ) (Formula 6).
  • css(λ) is the response curve corresponding to the multiple spectral response functions.
  • I(λ) is the light source curve.
  • R(λ) is the reflectance of the color card.
  • xyz(λ) is the tristimulus value curve.
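  • Formulas 5 and 6 above can be sketched as follows for one color-card patch; the 4-bin curves are illustrative assumptions. A 3×3 color correction matrix would then be fitted by least squares between many such response pairs, as described above:

```python
# Sketch: per-channel response of a patch with reflectance R(l) under
# light source I(l), as seen through a set of sensitivity curves.
def response(curves, light, reflect):
    """Sum over wavelengths of curve(l) * I(l) * R(l), per channel."""
    return [sum(c * i * r for c, i, r in zip(channel, light, reflect))
            for channel in curves]

light = [1.0, 0.9, 0.8, 0.7]        # light source curve I(l)
reflect = [0.5, 0.4, 0.6, 0.3]      # color-card reflectance R(l)
css = [[0.9, 0.1, 0.0, 0.0],        # camera spectral responses (R, G, B)
       [0.0, 0.8, 0.2, 0.0],
       [0.0, 0.0, 0.3, 0.7]]
xyz = [[0.8, 0.2, 0.1, 0.0],        # tristimulus value curves (x, y, z)
       [0.1, 0.9, 0.3, 0.0],
       [0.0, 0.1, 0.4, 0.9]]
first = response(css, light, reflect)    # Formula 5: camera response
second = response(xyz, light, reflect)   # Formula 6: human-eye response
print(first, second)
```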
  • FIG. 7 is an example diagram of an image before color space conversion processing.
  • FIG. 8 is an example diagram of an image processed by color space conversion.
  • the image after color space conversion processing may also be adjusted according to the conversion relationship between the first human-eye color space and the second human-eye color space.
  • the second human eye color space is the corresponding response space when the color appearance model performs color adaptation, so that the quality of white balance can be improved subsequently.
  • this method can be understood as adjusting the image after color space conversion according to the conversion relationship between the human-eye color response space (the response space formed by the CIE1931 human-eye matching functions) and other human-eye color response spaces (such as the response space used for the color adaptation CAT02 calculation in the color appearance model CIECAM02) to obtain the third target image.
  • the third type is time-domain stability processing.
  • the steps of the embodiment shown in FIG. 1 may further include: acquiring a second image to be processed by a color camera.
  • the second environmental spectrum information corresponding to the second image to be processed is acquired through the multispectral sensor.
  • Filtering parameters are determined based on the similarity between the first environmental spectral information and the second environmental spectral information.
  • the first target image and the second image to be processed are filtered based on the filtering parameters to obtain correction parameters.
  • the second image to be processed is adjusted based on the correction parameter to obtain a second target image.
  • determining the filter parameters based on the similarity between the first environmental spectral information and the second environmental spectral information may specifically include: generating a filter intensity function based on the similarity. Filter parameters are determined based on the filter strength function.
  • the time interval between collecting the first image to be processed and the second image to be processed by the color camera is less than a preset time period.
  • the first image to be processed and the second image to be processed are two frames of images collected by the color camera at adjacent moments.
  • the similarity between the first environmental spectral information and the second environmental spectral information may be obtained as follows: determine a first spectral curve from the multiple sampling values corresponding to the first environmental spectral information, and determine a second spectral curve from the multiple sampling values corresponding to the second environmental spectral information.
  • the similarity between the first spectral curve and the second spectral curve is calculated by using a curve similarity algorithm (eg, cosine similarity, etc.).
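  • the cosine-similarity computation mentioned above can be sketched as follows; the 8 sampling values are illustrative assumptions:

```python
import math

# Sketch: cosine similarity between two spectral curves sampled at the
# multispectral sensor's 8 channels.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

curve1 = [0.2, 0.4, 0.6, 0.8, 0.7, 0.5, 0.3, 0.1]
curve2 = [0.2, 0.4, 0.6, 0.8, 0.7, 0.5, 0.3, 0.1]
# identical curves give a similarity of (approximately) 1
print(cosine_similarity(curve1, curve2))
```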
  • the filter strength function is generated based on the similarity, and the similarity and the filter strength are positively correlated, that is, the greater the similarity, the stronger the filter strength.
  • the description is made by taking the filter strength function as an example of a three-stage function. It can be understood that the filter strength function can be set as a first-order function or a higher-order function, which is not specifically limited here.
  • An example of a three-segment filter strength function is as follows:
•   when the similarity is greater than or equal to the first threshold, the filtering weight is 1, that is, the correction parameters used in the first processing are reused (for example: the above-mentioned Rgain and Bgain, white point, estimated value, or color correction matrix);
•   when the similarity is between the second threshold and the first threshold, the filtering weight is greater than 0 and less than 1, varying within the range 0-1 according to the difference between the similarity and the first threshold: the closer the similarity is to the first threshold, the closer the filtering weight is to 1; the farther the similarity is from the first threshold, the closer the filtering weight is to 0;
•   when the similarity is less than or equal to the second threshold, the filtering weight is 0, that is, the correction parameters of the second image to be processed are recalculated (the solution method, called the second processing here, is similar to the first processing; the only difference is that the first environmental spectral information in the first processing is replaced by the second environmental spectral information, and the first image to be processed is replaced by the second image to be processed).
  • the first threshold and the second threshold mentioned above can be set according to actual needs, and are not specifically limited here.
•   optionally, the first threshold is 90% and the second threshold is 10%.
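The three-segment filter strength function described above can be sketched as follows (a minimal illustration; the linear middle segment and the 90%/10% defaults are the example values from the text, not a prescribed form):

```python
def filter_weight(similarity, first_threshold=0.9, second_threshold=0.1):
    """Three-segment filter strength function.

    Returns the filtering weight for a given spectral-curve similarity.
    """
    if similarity >= first_threshold:
        # Segment 1: reuse the correction parameters of the first processing.
        return 1.0
    if similarity <= second_threshold:
        # Segment 3: recompute the correction parameters (second processing).
        return 0.0
    # Segment 2: linear ramp between the two thresholds. The closer the
    # similarity is to the first threshold, the closer the weight is to 1.
    return (similarity - second_threshold) / (first_threshold - second_threshold)
```

With the example thresholds, a similarity of 70% falls in the middle segment and yields a weight of 0.75.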
•   the correction parameters obtained by filtering the first target image and the second image to be processed based on the filtering weight may be the Rgain and Bgain used in white balance, the color correction matrix in color restoration, or the estimated value in color uniformity, which is not limited here.
•   after the correction parameters are obtained, they are used to adjust the second image to be processed to obtain the second target image (the adjustment method is similar to the above, and will not be repeated here).
•   for example, if the first threshold is 90%, the second threshold is 10%, and the similarity is 70%, the second segment of the filter strength function is used to determine the correction parameter.
•   the determination of the correction parameter of the second image to be processed (here, the estimated value of color uniformity) is described by taking the correction parameter as the estimated value in color uniformity and formula 7 as an example, with the filtering weight being, for example, 0.5.
•   K_{I-size}(x', y', c) is the estimated value related to the first image to be processed in the above color restoration (or, equivalently, the estimated value of the first target image).
•   K_{I-size}(x'', y'', c) is the estimated value related to the second image to be processed (the solution method is similar to the aforementioned color restoration, except that the first environmental spectral information is replaced by the second environmental spectral information, and the first image to be processed is replaced by the second image to be processed).
•   when the difference between the first environmental spectral information and the second environmental spectral information is small, the historical correction parameters (that is, the correction parameters obtained through the first environmental spectral information and the first image to be processed, also called the correction parameters used in the first processing) can be used, or given a larger weight while the new correction parameters (that is, the correction parameters obtained through the second environmental spectral information and the second image to be processed, also called the correction parameters used in the second processing) are given a smaller weight.
•   when the difference between the first environmental spectral information and the second environmental spectral information is large (for example: the difference between indoor and outdoor environments), the new correction parameters can be used, or the new correction parameters are given a larger weight and the historical correction parameters a smaller weight, so as to obtain the correction parameters of the second image to be processed.
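The weighting of historical versus new correction parameters described above can be sketched as a simple linear blend (an illustration only; the function name is an assumption, and `historical` / `new` stand for any scalar or array-valued correction parameter such as Rgain, Bgain, or an entry of the color correction matrix):

```python
def blend_correction(historical, new, weight):
    """Blend the historical and new correction parameters.

    `weight` is the filtering weight: 1.0 keeps the correction parameters
    of the first processing, 0.0 keeps the newly computed ones, and
    values in between mix the two.
    """
    return weight * historical + (1.0 - weight) * new
```

For example, with a filtering weight of 0.5 the two estimates contribute equally.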
•   in this way, the stability of color processing in the time domain is improved while sensitivity is taken into account; that is, flickering of the color effect in the time domain is avoided, while parameter adjustments still respond to environmental changes in a timely manner.
  • FIG. 9 shows another embodiment of the image processing method provided by the embodiment of the present application.
  • the method can be applied to an image processing device, and the image processing device includes a color camera and a multi-spectral sensor.
•   This embodiment includes step 901 to step 905.
  • the embodiment shown in FIG. 9 can be understood as performing color uniform processing on the image to be processed.
  • Step 901 acquire a first image to be processed by a color camera.
  • Step 902 acquire first environmental spectral information through a multi-spectral sensor.
  • Step 901 and step 902 in this implementation are similar to steps 101 and 102 in the foregoing embodiment shown in FIG. 1 , and will not be repeated here.
  • Step 903 acquiring multiple spectral response functions of the color camera.
•   a specific manner of obtaining the multiple spectral response functions of the color camera may be: measuring the spectral responses at the pixel positions of the color camera with a monochromator to obtain the multiple spectral response functions; or, after the color camera is determined, measuring its photosensitive properties, for example, calibrating offline the response functions of the color camera under different light sources and different light intensities.
  • Step 904 Obtain multiple compensation values based on the first environmental spectral information and multiple spectral response functions.
•   the number of compensation values may correspond one-to-one with the number of pixels in the first image to be processed, or may be smaller than the number of pixels in the first image to be processed (which can be understood as one compensation value corresponding to a region, where a region includes multiple pixels of the first image to be processed).
  • Step 905 performing first processing on the first image to be processed to obtain a first target image.
  • the first processing is performed on the first image to be processed to obtain the first target image.
•   the first processing includes color shading (color uniformity) processing based on the multiple compensation values.
•   the first processing in this embodiment may also include, but is not limited to, one or more of the following post-processing algorithms: automatic exposure control (AEC), automatic gain control (AGC), color correction, lens correction, noise removal/noise reduction, dead pixel removal, linear correction, color interpolation, image downsampling, level compensation, etc.
  • some image enhancement algorithms can also be included, such as gamma (Gamma) correction, contrast enhancement and sharpening, color noise removal and edge enhancement in YUV color space, white balance, color space conversion (for example, RGB is converted to YUV) and so on.
  • the first target image is, for example, an image in YUV or RGB format.
  • the first target image may be displayed to the user.
  • the image processing device further includes an image processor, and the image processor is used to execute step 903 and step 904.
•   the compensation values may be up-sampled to the spatial size of the camera image, and the up-sampled compensation values are used to adjust the first image to be processed.
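The up-sampling of per-region compensation values to the image size can be sketched as follows (an illustrative assumption: nearest-neighbour up-sampling and multiplicative application are used here for simplicity; the embodiment does not prescribe a particular interpolation method):

```python
import numpy as np

def apply_shading_compensation(image, compensation):
    """Up-sample a coarse per-region compensation grid to the image size
    and apply it multiplicatively to correct color shading.

    `image` is an H x W array (one channel); `compensation` is a coarse
    gh x gw grid of per-region compensation values.
    """
    h, w = image.shape[:2]
    gh, gw = compensation.shape[:2]
    # Map each pixel coordinate to its region in the coarse grid
    # (nearest-neighbour; a real pipeline might interpolate bilinearly).
    ys = np.arange(h) * gh // h
    xs = np.arange(w) * gw // w
    up = compensation[np.ix_(ys, xs)]  # H x W up-sampled compensation
    return image * up
```

Each image pixel is thus scaled by the compensation value of the region it falls into.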
  • color processing such as white balance and color restoration can also be performed on the first target image.
  • processing method reference can be made to the description in the foregoing embodiments, and details will not be repeated here.
  • the image to be processed can be adjusted in real time by introducing a multispectral sensor to collect environmental spectral information corresponding to the image to be processed.
•   because the environmental spectral information corresponding to the first image to be processed is taken into account, compared with the way of estimating the light source in the prior art, the adjustment quality of the first target image can be improved.
  • real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   because the compensation values for color uniformity are generated from the first environmental spectral information collected by the multi-spectral sensor, compared with offline calibration in the prior art, the quality of color uniformity can be improved.
  • FIG. 10 shows another embodiment of the image processing method provided by the embodiment of the present application.
  • the method can be applied to an image processing device, and the image processing device includes a color camera and a multi-spectral sensor.
  • This embodiment includes step 1001 to step 1006.
  • the embodiment shown in FIG. 10 can be understood as performing color restoration processing on the image to be processed.
  • Step 1001 acquire a first image to be processed through a color camera.
  • Step 1002 acquire first environmental spectral information through a multi-spectral sensor.
  • Step 1001 and step 1002 in this implementation are similar to step 101 and step 102 in the aforementioned embodiment shown in FIG. 1 , and will not be repeated here.
  • Step 1003 acquiring multiple spectral response functions of the color camera.
  • a specific manner of obtaining the multiple spectral response functions of the color camera may be: measuring the spectral responses of the pixel positions of the color camera with a monochromator to obtain multiple spectral response functions.
•   alternatively, the response functions of the color camera may be calibrated offline under different light sources and different light intensities.
  • Step 1004 acquiring the tristimulus value curve and the reflectance of the color card.
  • the reflectance of the color card can be the reflectance of a standard 24 color card, and can also be replaced by a regular rectangular wave, a custom curve, etc., which is not limited here.
  • the tristimulus value curve may be a tristimulus value curve under CIE1931 or other specifications. Specifically, there is no limitation here.
  • Step 1005 Obtain a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves.
  • the first environmental spectrum information is converted into a light source curve.
•   the second response value of the color card in the first human-eye color space is obtained based on the tristimulus value curve, the light source curve and the reflectance.
  • a color correction matrix is obtained based on the first response value and the second response value.
  • the above-mentioned first human eye color space may be a response space corresponding to the human eye matching function.
  • the color correction matrix is used to represent the correlation between the first response value and the second response value.
  • the color correction matrix can also be understood as a transformation matrix of two color spaces.
•   the conversion matrix of the color space is a 3×3 matrix. It is equivalent to using one color space as the target and the other color space as the source, and obtaining the transformation matrix by the least squares method.
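The least-squares solution for the 3×3 transformation matrix can be sketched as follows (an illustration only; the function name and the N×3 patch-response layout are assumptions, not part of the embodiment):

```python
import numpy as np

def color_correction_matrix(camera_rgb, eye_xyz):
    """Solve for the 3x3 color correction matrix M by least squares so
    that camera_rgb @ M.T approximates eye_xyz.

    `camera_rgb`: N x 3 first response values of the color-card patches
    to the color camera (the source color space).
    `eye_xyz`:    N x 3 second response values of the same patches in the
    human-eye color space (the target color space).
    """
    # lstsq solves min ||camera_rgb @ X - eye_xyz||; X is M transposed.
    M_T, _, _, _ = np.linalg.lstsq(camera_rgb, eye_xyz, rcond=None)
    return M_T.T  # 3x3 transformation matrix
```

With more than three independent patches, the system is over-determined and the least-squares fit gives the best 3×3 mapping between the two spaces.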
  • the human eye matching function may be a human eye matching function under Commission Internationale de l'Eclairage (CIE) 1931 or other standards.
•   the first response value of the color card to the color camera can be obtained through the aforementioned formula five, using the multiple spectral response functions, the light source curve, and the reflectance.
•   the second response value of the color card in the first human-eye color space can be obtained through the aforementioned formula six, using the tristimulus value curve, the light source curve, and the reflectance.
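Although formulas five and six themselves are not reproduced in this chunk, the spectral integration they describe can be sketched in discrete form (an illustration under the assumption that all curves are sampled on a common wavelength grid; for formula five the per-channel curves are the camera spectral response functions, for formula six they are the tristimulus value curves):

```python
import numpy as np

def patch_response(response_fns, light, reflectance, dl=1.0):
    """Discrete spectral integration: for each channel response S_c(lambda),
    response_c = sum over lambda of S_c(lambda) * L(lambda) * R(lambda) * dl.
    """
    S = np.asarray(response_fns, float)  # C x N per-channel response curves
    L = np.asarray(light, float)         # N-sample light source curve
    R = np.asarray(reflectance, float)   # N-sample patch reflectance
    return (S * L * R).sum(axis=1) * dl  # length-C response vector
```

Running this once per color-card patch with the camera response functions gives the first response values, and with the tristimulus curves gives the second response values.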
  • Step 1006 performing first processing on the first image to be processed to obtain a first target image.
  • the first processing is performed on the first image to be processed to obtain the first target image.
  • the first processing includes color space conversion processing based on a color correction matrix.
•   the first processing in this embodiment may also include, but is not limited to, one or more of the following post-processing algorithms: automatic exposure control (AEC), automatic gain control (AGC), color correction, lens correction, noise removal/noise reduction, dead pixel removal, linearity correction, color interpolation, image downsampling, level compensation, etc.
  • image enhancement algorithms may also be included, such as gamma correction, contrast enhancement and sharpening, color noise removal and edge enhancement in YUV color space, white balance, color restoration, etc.
  • the first target image is, for example, an image in YUV or RGB format.
  • the first target image may be displayed to the user.
  • the image processing device further includes an image processor, and the image processor is used to execute steps 1003 to 1006.
•   the color correction matrix is a 3×3 matrix.
  • the image after color space conversion processing may also be adjusted according to the conversion relationship between the first human-eye color space and the second human-eye color space.
•   the second human-eye color space is the response space used when the color appearance model performs color adaptation, so that the quality of white balance can subsequently be improved.
•   this can be understood as adjusting the first target image according to the conversion relationship between the human-eye color response space (the response space formed by the CIE1931 human-eye matching functions) and other human-eye color response spaces (such as the response space for the color adaptation CAT02 calculation in the color appearance model CIECAM02).
  • color processing such as white balance and color uniformity can also be performed on the image after the color restoration processing.
  • the image to be processed can be adjusted in real time by introducing a multispectral sensor to collect environmental spectral information corresponding to the image to be processed.
•   because the environmental spectral information corresponding to the first image to be processed is taken into account, compared with the way of estimating the light source in the prior art, the adjustment quality of the first target image can be improved.
  • real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   because the conversion matrix of the color space is generated from the first environmental spectral information collected by the multi-spectral sensor, the quality of color restoration can be improved compared with offline calibration in the prior art.
  • An embodiment of the image processing device in the embodiment of the present application includes:
  • the first acquiring unit 1101 is configured to acquire a first image to be processed through a color camera
  • the second acquiring unit 1102 is configured to acquire first environmental spectral information through a multispectral sensor
  • a third acquiring unit 1103, configured to acquire a white balance gain based on the first image to be processed and the first environmental spectral information
  • the processing unit 1104 is configured to perform first processing on the first image to be processed to obtain a first target image, where the first processing includes white balance processing based on white balance gain.
  • the image processing device may also include the following units:
  • a fourth acquisition unit 1105 configured to acquire multiple spectral response functions of the color camera
  • the fifth acquisition unit 1106 is used to acquire the tristimulus value curve and the reflectance of the color card
  • the display unit 1107 is configured to display the first target image to the user.
  • a determining unit 1108, configured to determine filter parameters based on the similarity between the first environmental spectral information and the second environmental spectral information;
  • the filtering unit 1109 is configured to filter the first target image and the second image to be processed based on the filtering parameters to obtain correction parameters.
•   the operations performed by each unit in the image processing device are similar to those described in the foregoing embodiment shown in FIG. 1 , and will not be repeated here.
•   for the scene of a pure color image (that is, an image scene that is not rich in color or in which a large-area monochrome object appears), compared with the gray-world algorithm used for white balancing in the prior art, the manner in which the processing unit 1104 performs white balance on the pure color image using the first environmental spectral information collected by the multi-spectral sensor can improve the adjustment quality of the target image.
  • another embodiment of the image processing device in the embodiment of the present application includes: a first acquisition unit 1201 , a second acquisition unit 1202 , a third acquisition unit 1203 and a processing unit 1204 .
  • each unit is specifically configured to perform the following functions:
  • a first acquiring unit 1201, configured to acquire a first image to be processed through a color camera
  • the second acquiring unit 1202 is configured to acquire first environmental spectral information through a multispectral sensor
  • the third acquiring unit 1203 is configured to acquire multiple spectral response functions of the color camera
  • the third acquiring unit 1203 is further configured to acquire multiple compensation values based on the first environmental spectral information and multiple spectral response functions;
•   the processing unit 1204 is configured to perform first processing on the first image to be processed to obtain a first target image; wherein the first processing includes color shading (color uniformity) processing based on the plurality of compensation values.
•   the operations performed by each unit in the image processing device are similar to those described in the foregoing embodiment shown in FIG. 9 , and will not be repeated here.
•   because the environmental spectral information corresponding to the first image to be processed, collected by the second acquisition unit 1202, is taken into account, compared with the method of estimating the light source in the prior art, the adjustment quality of the first target image can be improved.
•   compared with the prior art, which restores image color through offline calibration, real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   because the compensation values for color uniformity are generated by the third acquisition unit 1203 from the first environmental spectral information collected by the multi-spectral sensor, the quality of color uniformity can be improved compared with offline calibration in the prior art.
  • each unit is specifically configured to perform the following functions:
  • a first acquiring unit 1201, configured to acquire a first image to be processed through a color camera
  • the second acquiring unit 1202 is configured to acquire first environmental spectral information through a multispectral sensor
  • the third acquiring unit 1203 is configured to acquire multiple spectral response functions of the color camera
  • the third acquisition unit 1203 is also used to acquire the tristimulus value curve and the reflectance of the color card;
  • the third acquiring unit 1203 is further configured to acquire a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves;
  • the processing unit 1204 is configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes color space conversion processing based on a color correction matrix.
•   the operations performed by each unit in the image processing device are similar to those described in the foregoing embodiment shown in FIG. 10 , and will not be repeated here.
•   because the environmental spectral information corresponding to the first image to be processed, collected by the second acquisition unit 1202, is taken into account, compared with the method of estimating light sources in the prior art, the adjustment quality of the first target image can be improved.
•   compared with the prior art, which restores image color through offline calibration, real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   since the third acquisition unit 1203 generates the conversion matrix of the color space from the first environmental spectral information collected by the multi-spectral sensor, compared with offline calibration in the prior art, the quality of color reproduction can be improved.
  • another embodiment of the image processing device in the embodiment of the present application includes: a color camera 1301 , a multispectral sensor 1302 and an image processor 1303 .
  • each unit is specifically configured to perform the following functions:
  • a color camera 1301, configured to acquire the first image to be processed
  • the image processor 1303 is configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes white balance processing based on white balance gain.
•   for the scene of a pure color image (that is, an image scene that is not rich in color or in which a large-area monochrome object appears), compared with the gray-world algorithm used for white balancing in the prior art, the manner in which the image processor 1303 performs white balance on the pure color image using the first environmental spectral information collected by the multi-spectral sensor 1302 can improve the adjustment quality of the target image.
  • each unit is specifically configured to perform the following functions:
  • a color camera 1301, configured to acquire the first image to be processed
  • the image processor 1303 is further configured to acquire multiple compensation values based on the first environmental spectral information and multiple spectral response functions;
  • the image processor 1303 is further configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes color uniform color shading processing based on multiple compensation values.
  • the image to be processed can be adjusted in real time by introducing a multispectral sensor to collect environmental spectral information corresponding to the image to be processed.
•   because the environmental spectral information corresponding to the first image to be processed is taken into account, compared with the way of estimating the light source in the prior art, the adjustment quality of the first target image can be improved.
  • real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   since the image processor 1303 generates the compensation values for color uniformity from the first environmental spectral information collected by the multi-spectral sensor 1302, compared with offline calibration in the prior art, the quality of color uniformity can be improved.
  • each unit is specifically configured to perform the following functions:
  • a color camera 1301, configured to acquire the first image to be processed
  • the image processor 1303 is also used to obtain the tristimulus value curve and the reflectance of the color card;
  • the image processor 1303 is further configured to obtain a color correction matrix based on the first environmental spectral information, multiple spectral response functions, reflectance and tristimulus value curves;
  • the image processor 1303 is further configured to perform first processing on the first image to be processed to obtain a first target image; wherein, the first processing includes color space conversion processing based on a color correction matrix.
  • the image to be processed can be adjusted in real time by introducing the multispectral sensor 1302 to collect environmental spectral information corresponding to the image to be processed.
•   because the environmental spectral information corresponding to the first image to be processed is taken into account, compared with the way of estimating the light source in the prior art, the adjustment quality of the first target image can be improved.
  • real-time calculation can be realized, and problems caused by possible errors in offline table selection can be avoided.
•   since the image processor 1303 generates the conversion matrix of the color space from the first environmental spectral information collected by the multi-spectral sensor 1302, compared with offline calibration in the prior art, the quality of color reproduction can be improved.
  • the embodiment of the present application provides another image processing device.
•   the image processing device can be any image processing device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, etc. The following takes a mobile phone as an example:
  • FIG. 14 is a block diagram showing a partial structure of a mobile phone related to the image processing device provided by the embodiment of the present application.
  • the mobile phone includes: a radio frequency (radio frequency, RF) circuit 1410, a memory 1420, an input unit 1430, a display unit 1440, a color camera 1451, a multispectral sensor 1452, an audio circuit 1460, and a wireless fidelity (WiFi) module 1470, processor 1480, power supply 1490 and other components.
•   the RF circuit 1410 can be used for sending and receiving information, or for receiving and sending signals during a call. In particular, downlink information received from a base station is delivered to the processor 1480 for processing, and designed uplink data is sent to the base station.
  • the RF circuit 1410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (low noise amplifier, LNA), a duplexer, and the like.
  • RF circuitry 1410 may also communicate with networks and other devices via wireless communications.
•   the above wireless communication can use any communication standard or protocol, including but not limited to global system for mobile communication (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), long term evolution (LTE), e-mail, short messaging service (SMS), etc.
  • the memory 1420 can be used to store software programs and modules, and the processor 1480 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1420 .
•   Memory 1420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by a function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created by the use of the mobile phone (such as audio data, a phonebook, etc.).
•   the memory 1420 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the input unit 1430 can be used to receive input numbers or character information, and generate key signal input related to user settings and function control of the mobile phone.
  • the input unit 1430 may include a touch panel 1431 and other input devices 1432 .
•   the touch panel 1431, also referred to as a touch screen, can collect touch operations of the user on or near it (for example, operations performed by the user on or near the touch panel 1431 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 1431 may include two parts, a touch detection device and a touch controller.
•   the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1480, and can receive and execute commands sent by the processor 1480.
  • the touch panel 1431 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 1430 may also include other input devices 1432 .
  • other input devices 1432 may include but not limited to one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), trackball, mouse, joystick, and the like.
  • the display unit 1440 may be used to display information input by or provided to the user and various menus of the mobile phone.
  • the display unit 1440 may include a display panel 1441.
  • the display panel 1441 may be configured in the form of a liquid crystal display (liquid crystal display, LCD) or an organic light-emitting diode (OLED).
•   the touch panel 1431 can cover the display panel 1441; when the touch panel 1431 detects a touch operation on or near it, it passes the operation to the processor 1480 to determine the type of the touch event, and the processor 1480 then provides a corresponding visual output on the display panel 1441 according to the type of the touch event.
•   although in FIG. 14 the touch panel 1431 and the display panel 1441 are shown as two independent components to realize the input and output functions of the mobile phone, in some embodiments the touch panel 1431 and the display panel 1441 may be integrated to realize the input and output functions of the mobile phone.
  • the mobile phone may also include a color camera 1451 and a multi-spectral sensor 1452.
  • the color camera 1451 is specifically used to collect color images or pure color images (or called monochrome images).
  • the multi-spectral sensor 1452 is used to acquire environmental spectral information corresponding to the image.
•   the mobile phone may also include other types of sensors, such as a proximity sensor and a motion sensor. Specifically, the proximity sensor can turn off the display panel 1441 and/or the backlight when the mobile phone is moved to the ear.
•   as a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (generally along three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping); other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the mobile phone, and details are not repeated here.
  • the audio circuit 1460, the speaker 1461, and the microphone 1462 can provide an audio interface between the user and the mobile phone.
  • the audio circuit 1460 can transmit the electrical signal converted from the received audio data to the speaker 1461, which converts it into a sound signal for output; conversely, the microphone 1462 converts a collected sound signal into an electrical signal, which the audio circuit 1460 receives and converts into audio data. After being processed by the processor 1480, the audio data is sent to another mobile phone through the RF circuit 1410, or output to the memory 1420 for further processing.
  • WiFi is a short-distance wireless transmission technology.
  • through the WiFi module 1470, the mobile phone can help users send and receive e-mails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • although FIG. 14 shows a WiFi module 1470, it can be understood that the module is not an essential component of the mobile phone.
  • the processor 1480 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's functions and processes data by running or executing software programs and/or modules stored in the memory 1420 and calling data stored in the memory 1420, thereby monitoring the mobile phone as a whole.
  • the processor 1480 may include one or more processing units; preferably, the processor 1480 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1480.
  • the mobile phone also includes a power supply 1490 (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the processor 1480 through the power management system, so as to realize functions such as managing charging, discharging, and power consumption management through the power management system.
  • the mobile phone may also include a camera, a Bluetooth module, etc., which will not be repeated here.
  • the processor 1480 included in the image processing device may execute the functions in the foregoing embodiments shown in FIG. 1 to FIG. 10 , which will not be repeated here.
  • the disclosed system, device and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division into units is only a division by logical function; in actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • a unit described as a separate component may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be fully or partially realized by software, hardware, firmware or any combination thereof.
  • when the integrated units are implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present invention will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, a hard disk, or a magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (solid state disk, SSD)), etc.

Abstract

Embodiments of this application disclose an image processing method that can be applied to color-processing scenarios such as white balance and color restoration. The method includes: acquiring a first to-be-processed image through a color camera (101); acquiring first ambient spectral information through a multispectral sensor (102), where the first ambient spectral information and the first to-be-processed image correspond to the same shooting scene; obtaining a white-balance gain based on the first to-be-processed image and the first ambient spectral information (103); and performing first processing on the first to-be-processed image to obtain a first target image (104), where the first processing includes white-balance processing based on the white-balance gain. By introducing a multispectral sensor to collect the ambient spectral information corresponding to the to-be-processed image, the to-be-processed image can be adjusted in real time; moreover, compared with the prior-art approach of estimating the light source, the adjustment quality of the target image can be improved.

Description

An image processing method and related device
This application claims priority to Chinese Patent Application No. 202110867203.0, entitled "An image processing method and related device", filed with the Chinese Patent Office on July 29, 2021, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of image processing, and in particular to an image processing method and related device.
Background
As users' photography needs grow, the quality requirements for images captured by electronic devices are also increasing. When a user captures an image with an electronic device, differences in the shooting environment cause the device's imaging to deviate from the real object. For camera imaging, color processing is an important part of the final effect, and color shading, white balance, and color restoration are the key factors affecting color quality.
At present, the most common color-processing method in the industry is to calibrate multiple light sources in an offline setting to obtain correction parameters for different light sources, and then adjust the raw image captured by the color camera according to the correction parameters to obtain the target image shown to the user.
However, the above offline calibration of multiple light sources is essentially light-source estimation; the correction parameters obtained in this way are not accurate enough, which affects subsequent color processing.
Summary
Embodiments of this application provide an image processing method and related device. By introducing a multispectral sensor to collect ambient spectral information corresponding to the to-be-processed image, the to-be-processed image can be adjusted in real time. In addition, because the collected spectral information corresponds to the to-be-processed image, the adjustment quality of the target image can be improved compared with the prior-art approach of estimating the light source.
A first aspect of the embodiments of this application provides an image processing method, which can be applied to color-processing scenarios of images such as white balance, color restoration, and color shading correction. The method can be applied to an image processing device that includes a color camera and a multispectral sensor, and includes: acquiring a first to-be-processed image through the color camera; acquiring first ambient spectral information through the multispectral sensor, where the first ambient spectral information and the first to-be-processed image correspond to the same shooting scene; obtaining a white-balance gain based on the first to-be-processed image and the first ambient spectral information; and performing first processing on the first to-be-processed image to obtain a first target image, where the first processing includes white-balance processing based on the white-balance gain.
In the embodiments of this application, by introducing a multispectral sensor to collect ambient spectral information corresponding to the to-be-processed image, the to-be-processed image can be adjusted in real time. In addition, because the collected spectral information corresponds to the to-be-processed image, the adjustment quality of the target image can be improved compared with the prior-art approach of estimating the light source.
可选地,在第一方面的一种可能的实现方式中,上述的第一待处理图像与第一目标图像为纯色图像或大面积是纯色的彩色图像。
该种可能的实现方式中,相较于现有技术中用灰度世界算法的方式进行白平衡,通过多光谱传感器采集的第一环境光谱信息对纯色图像进行白平衡的方式可以提升目标图像的调整质量。
可选地,在第一方面的一种可能的实现方式中,上述步骤:基于第一待处理图像与第一环境光谱信息获取白平衡增益,包括:将第一环境光谱信息与第一待处理图像输入训练好的神经网络得到白平衡增益;训练好的神经网络是通过以训练数据作为神经网络的输入,以损失函数的值小于阈值为目标对神经网络进行训练获取,训练数据包括训练原始图像与训练光谱信息,训练原始图像与训练光谱信息对应同一个拍摄场景,神经网络的输出包括白平衡增益,损失函数用于指示神经网络输出的白平衡增益与实际白平衡增益之间的差异,实际白平衡增益由灰卡在拍摄场景下的响应值处理得到。
该种可能的实现方式中,相较于现有技术中用灰度世界算法的方式进行白平衡,使用通过多光谱传感器采集的第一环境光谱信息以及神经网络获取的白平衡增益对图像进行白平衡的方式可以提升目标图像的调整质量。
可选地,在第一方面的一种可能的实现方式中,上述步骤还包括:获取彩色摄像头的多个光谱响应函数;基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;第一处理还包括:基于多个补偿值的色彩均匀color shading处理。
该种可能的实现方式中,相对于现有技术中需要通过离线标定的方式对图像进行色彩均匀,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
可选地,在第一方面的一种可能的实现方式中,上述步骤还包括:获取三刺激值曲线与色卡的反射率;基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;第一处理还包括:基于颜色校正矩阵的颜色空间转换处理。
该种可能的实现方式中,相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
可选地,在第一方面的一种可能的实现方式中,上述步骤基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵,包括:将第一环境光谱信息转化为光源曲线;基于多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值;基于三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼色彩空间的第二响应值,第一人眼色彩空间为人眼匹配函数对应的响应空间;基于第一响应值与第二响应值获取颜色校正矩阵,颜色校正矩阵用于表示第一响应值与第二响应值之间的关联关系。
该种可能的实现方式中,通过获取两个响应值的方式获取颜色空间转换矩阵,且由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
可选地,在第一方面的一种可能的实现方式中,上述的第一处理还包括:对经过白平衡处理后的图像进行后处理,得到第一目标图像。
该种可能的实现方式中,该种方式可以理解为是根据人眼色彩响应空间(CIE1931人眼匹配函数构成的响应空间)与其他人眼色彩响应空间(例如色貌模型CIECAM02中进行色适应CAT02计算的响应空间)转换关系调整图像,有利于后续对白平衡进行处理。
可选地,在第一方面的一种可能的实现方式中,上述步骤还包括:向用户显示第一目标图像。
该种可能的实现方式中,通过多光谱传感器采集的第一环境光谱信息对第一待处理图像进行调整,并向用户展示调整后的图像,提升图像的色彩处理效果,提升用户体验。
可选地,在第一方面的一种可能的实现方式中,上述步骤还包括:通过彩色摄像头获取第二待处理图像;通过多光谱传感器获取第二环境光谱信息,第二环境光谱信息与第二待处理图像对应同一个拍摄场景;基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数;基于滤波参数对第一目标图像与第二待处理图像进行滤波,得到校正参数;基于校正参数调整第二待处理图像得到第二目标图像。
该种可能的实现方式中,通过相似度确定第二待处理图像的校正参数,实现了提升颜色处理时域稳定性的同时,兼顾灵敏性,即避免颜色效果在时域上的闪烁,同时又能及时响应环境的变化导致参数调整。
可选地,在第一方面的一种可能的实现方式中,上述步骤基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数,包括:基于相似度生成滤波强度函数;基于滤波强度函数确定滤波参数。相似度与滤波强度为正相关的关系,即相似度越大,滤波强度越强。换句话说,如果第一环境光谱信息与第二环境光谱信息的差异较小,则可以使用历史校正参数,或者历史校正参数(即通过第一环境光谱信息与第一待处理图像中的颜色通道得到的校正参数)的权重大一些,新的校正参数(即通过第二环境光谱信息与第二待处理图像得到的校正参数)的权重小一些,从而得到第二待处理图像的校正参数。如果第一环境光谱信息与第二环境光谱信息的差异较大(例如:室内与室外环境的差别),则可以使用新的校正参数,或者新的校正参数的权重大一些,历史校正参数的权重小一些,从而得到第二待处理图像的校正参数。
该种可能的实现方式中,通过相似度生成滤波强度函数,相似度越高,历史帧的校正参数的权重越大,实现了提升颜色处理时域稳定性的同时,兼顾灵敏性,即避免颜色效果在时域上的闪烁,同时又能及时响应环境的变化导致参数调整。
本申请实施例第二方面提供了一种图像处理方法,该方法可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该方法可以应用于图像处理设备,图像处理设备包括彩色摄像头与多光谱传感器,方法包括:通过彩色摄像头获取第一待处理图像;通过多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;获取彩色摄像头的多个光谱响应函数;基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于多个补偿值的色彩均匀(color shading)处理。
本实施例中,相对于现有技术中需要通过离线标定的方式对图像进行色彩均匀,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
本申请实施例第三方面提供了一种图像处理方法,该方法可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该方法可以应用于图像处理设备,图像处理设备包括彩色 摄像头与多光谱传感器,方法包括:基于彩色摄像头获取第一待处理图像;基于多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;获取彩色摄像头的多个光谱响应函数;获取三刺激值曲线与色卡的反射率;基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
本申请实施例中,相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
可选地,在第三方面的一种可能的实现方式中,上述步骤:基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵,包括:将第一环境光谱信息转化为光源曲线;基于多个光谱响应函数、光源曲线、反射率获取色卡对彩色摄像头的第一响应值;基于三刺激值曲线、光源曲线、反射率获取色卡对第一人眼色彩空间的第二响应值,第一人眼色彩空间为人眼匹配函数对应的响应空间;基于第一响应值与第二响应值获取颜色校正矩阵,颜色校正矩阵用于表示第一响应值与第二响应值之间的转换关系。
该种可能的实现方式中,通过获取两个响应值的方式获取颜色空间转换矩阵,且由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
可选地,在第三方面的一种可能的实现方式中,上述步骤还包括:基于第一人眼色彩空间与第二人眼色彩空间的转换关系对颜色空间转换处理后的图像进行调整,第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间。
该种可能的实现方式中,该种方式可以理解为是根据人眼色彩响应空间(CIE1931人眼匹配函数构成的响应空间)与其他人眼色彩响应空间(例如色貌模型CIECAM02中进行色适应CAT02计算的响应空间)转换关系调整图像,有利于后续对白平衡进行处理。
本申请实施例第四方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括:
第一获取单元,用于通过彩色摄像头获取第一待处理图像;
第二获取单元,用于通过多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;
处理单元,用于对第一待处理图像进行第一处理,得到第一目标图像,第一处理包括基于白平衡增益的白平衡处理。
可选地,在第四方面的一种可能的实现方式中,上述的第一待处理图像与第一目标图像为纯色图像或者大面积为纯色的彩色图像。
可选地,在第四方面的一种可能的实现方式中,上述的第三获取单元,具体用于将第一环境光谱信息与第一待处理图像输入训练好的神经网络得到白平衡增益;训练好的神经网络是通过以训练数据作为神经网络的输入,以损失函数的值小于阈值为目标对神经网络进行训练获取,训练数据包括训练原始图像与训练光谱信息,训练原始图像与训练光谱信息对应同 一个拍摄场景,神经网络的输出包括白平衡增益,损失函数用于指示神经网络输出的白平衡增益与实际白平衡增益之间的差异,实际白平衡增益由灰卡在拍摄场景下的响应值处理得到。
可选地,在第四方面的一种可能的实现方式中,上述的设备还包括:
第四获取单元,用于获取彩色摄像头的多个光谱响应函数;
第四获取单元,还用于基于第一环境光谱信息与多个光谱响应函数获取多个估计值;
第四获取单元,还用于基于多个估计值计算多个补偿值;
处理单元,还用于基于多个补偿值的色彩均匀color shading处理。
可选地,在第四方面的一种可能的实现方式中,上述的设备还包括:
第五获取单元,用于获取三刺激值曲线与色卡的反射率;
第五获取单元,还用于基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;
处理单元,还用于基于颜色校正矩阵的颜色空间转换处理。
可选地,在第四方面的一种可能的实现方式中,上述的第五获取单元,具体用于将第一环境光谱信息转化为光源曲线;
第五获取单元,具体用于基于多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值;
第五获取单元,具体用于基于三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼色彩空间的第二响应值,第一人眼色彩空间为人眼匹配函数对应的响应空间;
第五获取单元,具体用于基于第一响应值与第二响应值获取颜色校正矩阵,颜色校正矩阵用于表示第一响应值与第二响应值之间的关联关系。
可选地,在第四方面的一种可能的实现方式中,上述的处理单元,还用于对经过白平衡处理后的图像进行后处理,得到第一目标图像。
可选地,在第四方面的一种可能的实现方式中,上述的设备还包括:
显示单元,用于向用户显示第一目标图像。
可选地,在第四方面的一种可能的实现方式中,上述的第一获取单元,还用于通过彩色摄像头获取第二待处理图像;
第二获取单元,还用于通过多光谱传感器获取第二环境光谱信息,第二环境光谱信息与第二待处理图像对应同一个拍摄场景;
设备还包括:
确定单元,用于基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数;
滤波单元,用于基于滤波参数对第一目标图像与第二待处理图像进行滤波,得到校正参数;
处理单元,还用于基于校正参数调整第二待处理图像得到第二目标图像。
可选地,在第四方面的一种可能的实现方式中,上述的确定单元,具体用于基于相似度生成滤波强度函数;
确定单元,具体用于基于滤波强度函数确定滤波参数。
本申请实施例第五方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括:
第一获取单元,用于通过彩色摄像头获取第一待处理图像;
第二获取单元,用于通过多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;
第三获取单元,用于获取彩色摄像头的多个光谱响应函数;
第三获取单元,还用于基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;
处理单元,用于对第一待处理图像进行第一处理,得到第一目标图像,第一处理包括基于多个补偿值的色彩均匀(color shading)处理。
本申请实施例第六方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括:
第一获取单元,用于通过彩色摄像头获取第一待处理图像;
第二获取单元,用于通过多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;
第三获取单元,用于获取彩色摄像头的多个光谱响应函数;
第三获取单元,还用于获取三刺激值曲线与色卡的反射率;
第三获取单元,还用于基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;
处理单元,用于对第一待处理图像进行第一处理,得到第一目标图像,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
可选地,在第六方面的一种可能的实现方式中,上述的第三获取单元,具体用于将第一环境光谱信息转化为光源曲线;
第三获取单元,具体用于基于多个光谱响应函数、光源曲线、反射率获取色卡对彩色摄像头的第一响应值;
第三获取单元,具体用于基于三刺激值曲线、光源曲线、反射率获取色卡对第一人眼色彩空间的第二响应值,第一人眼色彩空间为人眼匹配函数对应的响应空间;
第三获取单元,具体用于基于第一响应值与第二响应值获取颜色校正矩阵,颜色校正矩阵用于表示第一响应值与第二响应值之间的转换关系。
可选地,在第六方面的一种可能的实现方式中,上述的处理单元,还用于基于第一人眼色彩空间与第二人眼色彩空间的转换关系对颜色空间转换处理后的图像进行调整,第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间。
本申请实施例第七方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
彩色摄像头,用于获取第一待处理图像;
多光谱传感器,用于获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;
图像处理器,用于基于第一待处理图像与第一环境光谱信息获取白平衡增益;并对第一待处理图像进行第一处理,得到第一目标图像,第一处理包括基于白平衡增益的白平衡处理。
本申请实施例第八方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、 色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
彩色摄像头,用于获取第一待处理图像;
多光谱传感器,用于获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景;
图像处理器,用于获取彩色摄像头的多个光谱响应函数;
图像处理器,还用于基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;
图像处理器,还用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于多个补偿值的色彩均匀color shading处理。
本申请实施例第九方面提供了一种图像处理设备,该图像处理设备可以应用于白平衡、色彩还原、色彩均匀等图像的颜色处理场景。该图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
彩色摄像头,用于获取第一待处理图像;
多光谱传感器,用于获取与第一待处理图像对应的第一环境光谱信息;
图像处理器,用于获取彩色摄像头的多个光谱响应函数;
图像处理器,还用于获取三刺激值曲线与色卡的反射率;
图像处理器,还用于基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;
图像处理器,还用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
本申请第十方面提供了一种图像处理设备,该图像处理设备执行前述第一方面或第一方面的任意可能的实现方式中的方法,或执行前述第二方面或第二方面的任意可能的实现方式中的方法,或执行前述第三方面或第三方面的任意可能的实现方式中的方法。
本申请第十一方面提供了一种图像处理设备,包括:处理器,处理器与存储器耦合,存储器用于存储程序或指令,当程序或指令被处理器执行时,使得该图像处理设备实现上述第一方面或第一方面的任意可能的实现方式中的方法,或者使得该图像处理设备实现上述第二方面或第二方面的任意可能的实现方式中的方法,或者使得该图像处理设备实现上述第三方面或第三方面的任意可能的实现方式中的方法。
本申请第十二方面提供了一种计算机可读介质,其上存储有计算机程序或指令,当计算机程序或指令在计算机上运行时,使得计算机执行前述第一方面或第一方面的任意可能的实现方式中的方法,或者使得计算机执行前述第二方面或第二方面的任意可能的实现方式中的方法,或者使得计算机执行前述第三方面或第三方面的任意可能的实现方式中的方法。
本申请第十三方面提供了一种计算机程序产品,该计算机程序产品在计算机上执行时,使得计算机执行前述第一方面或第一方面的任意可能的实现方式、第二方面或第二方面的任意可能的实现方式、第三方面或第三方面的任意可能的实现方式中的方法。
其中,第四、第七、第十、第十一、第十二、第十三方面或者其中任一种可能实现方式所带来的技术效果可参见第一方面或第一方面不同可能实现方式所带来的技术效果,此处不再赘述。
其中,第五、第八、第十、第十一、第十二、第十三方面或者其中任一种可能实现方式所带来的技术效果可参见第二方面或第二方面不同可能实现方式所带来的技术效果,此处不再赘述。
其中,第六、第九、第十、第十一、第十二、第十三方面或者其中任一种可能实现方式所带来的技术效果可参见第二方面或第二方面不同可能实现方式所带来的技术效果,此处不再赘述。
从以上技术方案可以看出,本申请实施例具有以下优点:通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,不仅可以实时对待处理图像进行调整。另外相较于现有技术中估计光源的方式,还可以提升目标图像的调整质量。
附图说明
图1为本发明实施例提供的图像处理方法的一个流程示意图;
图2与图3为本申请实施例提供的第一环境光谱信息的两种示例图;
图4为本申请实施例提供的第一待处理图像的一个示例图;
图5为本申请实施例提供的经过白平衡处理后图像的一个示例图;
图6为本申请实施例提供的色彩均匀处理之前的图像与色彩均匀处理之后的图像的一个示例图;
图7为本申请实施例提供的颜色空间转化处理之前的图像的另一个示例图;
图8为本申请实施例提供的颜色空间转化处理之后的图像的一种示例图;
图9为本发明实施例提供的图像处理方法的另一个流程示意图;
图10为本发明实施例提供的图像处理方法的另一个流程示意图;
图11-图14为本申请实施例中图像处理设备的几个结构示例图。
具体实施方式
本申请实施例提供了一种图像处理方法及相关设备。通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另外由于是采集的待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升目标图像的调整质量。
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获取的所有其他实施例,都属于本发明保护的范围。
为了便于理解,下面先对本申请实施例主要涉及的相关术语和概念进行介绍。
1、白平衡
白平衡(white balance)通俗的理解就是“不管在任何光源下,都能将白色物体还原为白色”,对在特定光源下拍摄时出现的偏色现象,通过加强对应的补色来进行补偿。如果将白色物体还原为白色,那其他景物的影像就会接近人眼的色彩视觉习惯。白平衡中的“平衡”可以理解为是要对不同色温所引起的色差进行校正,从而使白色的物体呈现真正的白色。
2、色彩不均匀性
色彩不均匀性(color shading)是指同一平面下可能出现颜色在空间上不均匀的问题。例如:在使用手机相机拍照时常表现为手机照片中心发红,照片有黑角,核心原因是手机空间的局限性导致光学系统设计做了一些取舍。平行光在穿过凸透镜后会在一段距离后聚焦。一般相机因为空间宽松,因此可以把焦距拉得很长,而手机则只能尽量缩短焦距,让光在镜片后非常近的距离聚焦。虽然两种聚焦方法都可以达到在感光元件上成像的目的,但两者的效果却相去甚远。又因为相机对不同波长的光存在不同的折射率,在经过透镜后的前进方向会有一定的差别。在焦距很短的情况下,四周的散射光由于聚焦过早而无法完全重合,这就造成了中心光量更大而周边光量较少的问题,这便是前面提到的手机照片中心发红,也就是color shading现象的根本原因。
目前,业界最常用的颜色处理方法是通过在离线的场景下对多个光源进行标定,从而得到不同光源下的校正参数,再根据校正参数对彩色摄像头采集的原始图像进行调整,得到用于向用户展示的目标图像。
然而,上述在离线场景下的多个光源标定方式可以理解为是估计光源,该种方式下获取的校正参数不够准确,影响后续的色彩处理。
为了解决上述问题,本申请实施例提供了一种图像处理方法,通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另外由于是采集的待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升目标图像的调整质量。
下面结合附图对本申请实施例提供的图像处理方法进行详细的介绍。
本申请实施例提供的图像处理方法可以应用于白平衡、色彩均匀、色彩还原等颜色处理的场景。
请参阅图1,本申请实施例提供的图像处理方法一个实施例,该方法可以应用于图像处理设备,该图像处理设备包括彩色摄像头与多光谱传感器。该实施例包括步骤101至步骤104。图1所示的实施例可以理解为是对待处理图像进行白平衡处理。
步骤101,通过彩色摄像头获取第一待处理图像。
本申请实施例中的彩色摄像头可以理解为是一种RGB传感器,可以摄取场景的色彩,拍摄彩色照片。彩色摄像头在具体中可以是单目摄像头或双目摄像头,设置于图像处理设备机身主体的壳体上面的前方位置(即前置摄像头)或后方位置(即后置摄像头)。另外,该彩色摄像头可以是超广角彩色摄像头、广角彩色摄像头或长焦彩色摄像头等,具体此处不做限定。
通过彩色摄像头获取第一待处理图像,该第一待处理图像可以是彩色摄像头采集的原始RAW图像。
本申请实施例中的彩色摄像头用于采集彩色图像或纯色图像,对于彩色摄像头的具体结构此处不做限定。
可选地,第一待处理图像为纯色图像(或称为单色图像)或大面积单一颜色的图像。
可选地,第一待处理图像可以是原始RAW域图像(也可以称为RAW图像),RAW图像可以是金属氧化物半导体元件(complementary metal-oxide semiconductor,CMOS)或电荷耦合元件(charge coupled device,CCD)图像传感器将摄像头捕捉到的光源信号转化为数字信号的原始数据,该原始数据尚未经过图像信号处理器(image signal processor,ISP)处理。 该RAW图像具体可以是采用拜耳(bayer)格式的bayer图像。
步骤102,通过多光谱传感器获取第一环境光谱信息。
本申请实施例中的多光谱传感器用于采集光谱,光谱(或称为光学频谱)可以理解为是复色光经过色散系统(例如棱镜、光栅)分光后,被色散开的单色光按波长或频率大小而依次排列的图案。
示例性的,多光谱传感器可以采集350-1000纳米波段的光谱,视场角(field of view,FOV)大小为正负35度。多光谱传感器还可以包括8个可见光波段与多个特殊波段(例如:全光段通道、闪烁频率检测通道和/或红外通道等),或者包括10个可见光波段与多个特殊波段,可以理解的是,上述可见光波段的数量只是举例,在实际应用中,还可以是更少或更多数量的可见光波段,本文仅以8个可见光波段为例进行示例性描述。
通过多光谱传感器获取第一环境光谱信息,第一环境光谱信息与第一待处理图像对应同一个拍摄场景。
本申请实施例中的同一拍摄场景可以理解为以下属性中的至少一项被满足:
1、同一拍摄场景可以是指采集第一待处理图像时彩色摄像头的位置与采集第一环境光谱信息时多光谱传感器的位置之间的距离小于某一阈值(例如:彩色摄像头采集第一待处理图像时的位置与多光谱传感器采集第一环境光谱信息时的位置之间距离为1米,阈值为2米,即距离小于阈值,则可以确定第一待处理图像与第一环境光谱信息为同一拍摄场景)。
上述中的位置可以是相对位置或地理位置等,如果位置是相对位置,可以通过建立场景模型等方式确定相对位置;如果位置是地理位置,可以是基于全球定位系统(global positioning system,GPS)或北斗导航系统等确定的第一设备位置与第二设备的位置,进而得到两个位置之间的距离。
2、同一拍摄场景还可以是根据光照强度来评判,例如:基于采集第一待处理图像时的天气类型与采集第一环境光谱信息时的天气类型是否相近来判断第一环境光谱信息与第一待处理图像是否为同一拍摄场景,例如:若采集第一待处理图像时为晴天,采集第一环境光谱信息时为晴天,则可以确定第一环境光谱信息与第一待处理图像为同一拍摄场景。若采集第一待处理图像时为晴天,采集第一环境光谱信息时为雨天,则可以确定第一环境光谱信息与第一待处理图像不属于同一拍摄场景。
可以理解的是,上述确定第一待处理图像与第一环境光谱信息为同一拍摄场景的方式只是举例,实际应用中,还可以有其他方式,具体此处不做限定。
其中,第一环境光谱信息可以是光源光谱,也可以是反射光谱。相应地,光源光谱为照射第一待处理图像的光源对应的光谱,反射光谱为第一待处理图像中拍摄对象反射的光对应的光谱。
另外,第一环境光谱信息可以是多光谱传感器的采样点或者环境光谱图等可以表征环境光谱的信息,其中,采样点的数量(或称为多光谱传感器的通道数量)与多光谱传感器的设计(例如可见光波段、特殊波段等波段的数量)有关,可以是8个,也可以是10个,还可以是更少或更多数量的采样点,延续上述举例,本申请实施例仅以多光谱传感器采集的是8个采样点为例进行描述。
示例性的,图2与图3为第一环境光谱信息的两种示例,可以理解的是,第一环境光谱信息可以是8个二维数组,例如:(色温,光强)。
步骤103,基于第一待处理图像与第一环境光谱信息获取白平衡增益。
本申请实施例中基于第一待处理图像与第一环境光谱信息获取白平衡增益的方式有多种,可以是基于第一环境光谱信息获取光源白点,也可以是基于第一环境光谱信息获取白平衡增益。其中,光源白点可以理解为是1/白平衡增益,白平衡增益可以理解为是红色增益(Rgain)与蓝色增益(Bgain)。
可以理解的是,可以通过灰度世界算法、全反射算法或输入神经网络等方式得到Rgain与Bgain或光源白点。当然,若是在纯色场景下,可以不采用灰度世界算法,避免由于灰度世界的假设(即对于一幅大量色彩变化的图像,三个分量RGB的平均值趋于同一灰度值)导致的白平衡失效。
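作为对照,上文提到的灰度世界算法可以用下面的Python/NumPy草图示意(仅为说明性示例,并非本申请的方法;数据与通道顺序为假设):它同时展示了为什么在纯色场景下灰度世界假设会失效。

```python
import numpy as np

def gray_world_gains(img):
    """Gray-world white balance: assume the scene averages to gray and
    scale R and B so their channel means match the green mean."""
    r_mean, g_mean, b_mean = img.reshape(-1, 3).mean(axis=0)
    return g_mean / r_mean, g_mean / b_mean  # (Rgain, Bgain)

# A colour-rich scene whose channels average to the same value: the
# estimated gains are neutral, as the gray-world assumption expects.
scene = np.array([[[0.8, 0.5, 0.25], [0.2, 0.5, 0.75]]])
r_gain, b_gain = gray_world_gains(scene)

# A pure green image violates the assumption: the algorithm returns large
# R/B gains that would badly distort the picture.
pure_green = np.array([[[0.1, 0.9, 0.1]]])
bad_r_gain, bad_b_gain = gray_world_gains(pure_green)
```

这正是说明书所指的情形:对颜色不丰富或大面积单色的图像,灰度世界假设不成立,因而需要借助多光谱传感器的环境光谱信息。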
可选地,以通过神经网络为例进行描述:第一待处理图像为16*16大小。可以将8个采样点的数值上采样至8*8大小,并将第一待处理图像下采样至8*8大小。并将上采样后的8个采样点与下采样后的第一待处理图像输入神经网络得到Rgain与Bgain。当然,上述上采样或下采样的步骤也可以放入神经网络后进行,即将第一环境光谱信息与第一待处理图像作为神经网络的输入。其中,神经网络可以是深度神经网络、卷积神经网络等,具体此处不做限定。训练好的神经网络是通过以训练数据作为神经网络的输入,以损失函数的值小于阈值为目标对神经网络进行训练获取,训练数据包括训练原始图像与训练光谱信息,训练原始图像与训练光谱信息对应同一个拍摄场景,神经网络的输出包括白平衡增益,损失函数用于指示神经网络输出的白平衡增益与实际白平衡增益之间的差异,实际白平衡增益由灰卡在拍摄场景下的响应值处理得到。其中,由于灰卡的RGB通道数值是相等或近似的,根据灰卡在拍摄场景下的响应值有利于判断白平衡增益。
步骤104,对第一待处理图像进行第一处理,得到第一目标图像。
获取白平衡增益之后,对第一待处理图像进行第一处理。得到第一目标图像。其中,第一处理包括基于白平衡增益的白平衡处理。
可选地,将Rgain乘以第一待处理图像中红色通道的数值,并将Bgain乘以第一待处理图像中蓝色通道的数值,得到调整后各个通道的数值,进而实现对第一待处理图像的白平衡处理。
另外,调整的方式可以是直接将Rgain与Bgain分别乘以第一待处理图像中的像素点值。也可以是根据RGGB调整多个补偿值,再乘以拜耳(bayer)域的第一待处理图像中的像素点,具体此处不做限定。
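上述"将Rgain乘以红色通道、Bgain乘以蓝色通道"的白平衡应用步骤可以用如下草图示意(仅为说明性示例;H×W×3的RGB数组布局与通道顺序为假设,实际实现可在bayer域进行):

```python
import numpy as np

def apply_white_balance(raw_rgb, r_gain, b_gain):
    """Apply white-balance gains to the R and B channels of an RGB image.

    raw_rgb: float array of shape (H, W, 3) in channel order R, G, B.
    The green channel is the reference and is left unchanged.
    """
    out = raw_rgb.astype(np.float64).copy()
    out[..., 0] *= r_gain   # scale the red channel by Rgain
    out[..., 2] *= b_gain   # scale the blue channel by Bgain
    return out

# A neutral gray patch imaged under a warm light source: red reads too
# strong and blue too weak; gains of G/R and G/B equalise the channels.
patch = np.array([[[0.8, 0.5, 0.25]]])
balanced = apply_white_balance(patch, r_gain=0.5 / 0.8, b_gain=0.5 / 0.25)
```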
示例性的,图4为第一待处理图像的示例图,图5为第一目标图像的示例图。
本实施例中的第一处理除了包含基于白平衡增益的白平衡处理,此外,还可包括但不限于以下的一种或多种后处理算法:自动曝光控制(automatic exposure control,AEC)、自动增益控制(automatic gain control,AGC)、色彩校正、镜头矫正、噪声去除/降噪、坏点去除、线性纠正、颜色插值、图像下采样、电平补偿,等等。此外在一些实例,还可包括一些图像增强算法,例如伽马(Gamma)矫正、对比度增强和锐化、在YUV色彩空间上彩噪去除与边缘加强、色彩加强、色彩空间转换(例如RGB转换为YUV)等等。第一目标图像例如是YUV或者RGB格式的图像。
可选地,获取第一目标图像之后,可以向用户显示该第一目标图像。
可选地,图像处理设备还包括图像处理器,且该图像处理器用于执行步骤103与步骤104。
本实施例中,一方面,通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另一方面,由于含有第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。另一方面,针对纯色图像的场景(即图像场景颜色不丰富或者出现大面积单色物体),相较于现有技术中用灰度世界算法的方式进行白平衡,通过多光谱传感器采集的第一环境光谱信息对纯色图像进行白平衡的方式可以提升目标图像的调整质量。
在一种可能实现的方式中,第一处理还包括色彩还原、色彩均匀等多种色彩处理的任意组合(或者理解为至少一种),下面分别描述。
第一种,色彩均匀(color shading)处理。
本实施例中,色彩均匀处理可以在白平衡处理之前或之后,若色彩均匀处理在白平衡处理之前,则色彩均匀处理的处理对象是第一待处理图像。若色彩均匀处理在白平衡处理之后,则色彩均匀处理的处理对象是经过白平衡处理后的第一待处理图像。下面仅以对第一待处理图像进行色彩均匀处理为例进行描述,当然也可以是对白平衡处理或其他方式的颜色处理后的图像进行色彩均匀处理,具体此处不做限定。
该种情况下,图1所示实施例的步骤还可以包括:获取彩色摄像头的多个光谱响应函数。基于第一环境光谱信息与多个光谱响应函数获取多个估计值,基于多个估计值计算多个补偿值,第一处理还包括:基于多个补偿值的色彩均匀color shading处理。补偿值的数量可以与第一待处理图像中像素点的数量一一对应,或者补偿值的数量小于第一待处理图像中像素点的数量(也可以理解为是一个区域对应一个补偿值,一个区域包括第一待处理图像的多个像素点)。其中,上述步骤:获取彩色摄像头的多个光谱响应函数的具体方式可以是:通过单色仪测量彩色摄像头的像素位置的光谱响应得到多个光谱响应函数。或者是通过离线的方式调整光源的不同光强确定彩色摄像头的响应函数。上述步骤:基于第一环境光谱信息与多个光谱响应函数获取多个估计值的具体方式可以是:对8个采样点的数值进行上采样,并对上采样后的值与多个光谱响应函数进行积分得到多个估计值。上述获取多个补偿值的步骤可以是:以第一待处理图像的中心像素点为基准得到第一待处理图像中除了中心像素点以外的像素点的补偿值。再将补偿值的尺寸上采样至相机图像的空间尺寸。并使用上采样后的补偿值进行色彩均匀color shading处理。
另外,色彩均匀color shading处理的方式可以是直接将多个补偿值分别乘以第一待处理图像中的像素点。也可以是根据RGGB调整多个补偿值,再乘以拜耳(bayer)域的第一待处理图像中的像素点进而完成color shading处理,具体此处不做限定。
示例性的,延续之前的举例,8个多光谱采样值经过上采样后得到256个多光谱采样值。多个光谱响应函数为256(图像的纵向尺寸)*256(图像的横向尺寸)*3(像素通道)*256(多光谱通道数),通过下述公式一对上采样后的多个采样值与多个光谱响应函数进行积分得到多个估计值。通过下述公式二以第一待处理图像的中心像素点为基准得到第一待处理图像中除了中心像素点以外的像素点的补偿值。通过公式三将补偿值的尺寸上采样至相机图像的 空间尺寸。通过公式四使用上采样后的补偿值进行色彩均匀(color shading)处理。
公式一:
$$V(x,y,c)=\int S(\lambda)\,F(x,y,c,\lambda)\,\mathrm{d}\lambda$$
其中,V(x,y,c)是多个估计值,该多个估计值可以是256(图像的纵向尺寸)*256(图像的横向尺寸)*3(像素通道)。S(λ)是多光谱的8个采样值经过上采样后得到256个采样值。F(x,y,c,λ)是多个xy对应的光谱响应函数。x与y是彩色摄像头光谱响应的空间尺寸,例如是256*256。c是彩色摄像头的像素通道,这里仅以3个像素通道为例进行描述,可以理解的是,在实际应用中,彩色摄像头的像素通道数量可以更多,具体此处不做限定。λ是彩色摄像头的响应波长,一般情况下响应波长的取值范围是380纳米(nm)至780纳米(nm),即人眼可见光的波长范围。
公式二:
$$K_{F\text{-}size}(x,y,c)=\frac{V(x_{\mathrm{center}},y_{\mathrm{center}},c)}{V(x,y,c)}$$
公式三:
$$K_{I\text{-}size}(x',y',c)=\mathrm{interpolation}_{xy}\big(K_{F\text{-}size}(x,y,c)\big)$$
公式四:
$$I'(x',y',c)=K_{I\text{-}size}(x',y',c)\,I(x',y',c)$$
其中,K F-size(x,y,c)是xy对应的多个补偿值。V(x center,y center,c)是第一待处理图像的中心像素点。interplation_xy用于表示对xy进行上采样。K I-size(x',y',c)是对xy进行上采样得到与相机图像尺寸对应的多个补偿值。x',y'是相机图像的空间尺寸,例如3000*4000。I'(x',y',c)是经过色彩均匀处理后图像的纵向尺寸、横向尺寸以及像素通道。I(x',y',c)是第一待处理图像的纵向尺寸、横向尺寸以及像素通道。
可以理解的是,上述公式一、公式二、公式三以及公式四只是一种举例,在实际应用中,还可以有其他形式的公式一、公式二、公式三以及公式四,具体此处不做限定。
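公式一与公式二描述的"以环境光谱对逐像素光谱响应积分、再以中心像素归一化"的补偿值计算,可以用如下草图示意(仅为说明性示例:波长积分离散为采样求和,玩具尺寸与数据为假设,并非标定值):

```python
import numpy as np

def shading_gain_map(spectrum, response):
    """Per-pixel, per-channel colour-shading gains (formulas 1 and 2).

    spectrum: (L,) upsampled ambient spectrum S(lambda).
    response: (H, W, C, L) spatial spectral response F(x, y, c, lambda).
    Returns gains K of shape (H, W, C), equal to 1 at the centre pixel.
    """
    # Formula 1: V(x, y, c) = sum over lambda of S(lambda) * F(x, y, c, lambda)
    V = np.tensordot(response, spectrum, axes=([3], [0]))
    H, W, _ = V.shape
    center = V[H // 2, W // 2, :]          # reference: centre pixel
    # Formula 2: K = V(center) / V(x, y) -- brighter gain where response fell off
    return center / V

# Toy data: one corner receives half the response, so its gain doubles.
spec = np.ones(4)
resp = np.ones((3, 3, 3, 4))
resp[0, 0] *= 0.5                          # darker corner
K = shading_gain_map(spec, resp)
```

按公式三、公式四,再将K上采样到相机图像尺寸并逐像素相乘即可完成color shading校正。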
示例性的,图6为第一待处理图像(即去color shading前对应的图像)与经过色彩均匀处理后的图像(即去color shading后对应的图像)的示例图。
该种方式下,相对于现有技术中需要通过离线标定的方式对图像进行色彩均匀,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
第二种,色彩还原处理(也可以称为颜色空间转换处理)。
本实施例中,色彩还原处理与前述的白平衡处理、色彩均匀处理并没有时序关系。以第一处理包括色彩还原与白平衡处理为例进行说明。若色彩还原处理在白平衡处理之前,则色彩还原处理的处理对象是第一待处理图像。若色彩还原处理在白平衡处理之后,则色彩还原处理的处理对象是经过白平衡处理后的第一待处理图像。下面仅以对第一待处理图像进行色彩均匀处理为例进行描述,当然也可以是对白平衡处理或其他方式的颜色处理后的图像进行色彩均匀处理,具体此处不做限定。
该种情况下,图1所示实施例的步骤还可以包括:获取三刺激值曲线与色卡的反射率。基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵。第一处理还包括:基于颜色校正矩阵的颜色空间转换处理。
其中,上述步骤:基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵具体可以包括:将第一环境光谱信息转化为光源曲线。基于多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值(也可以理解为是色卡在以RGB为三个轴构成的空间下的成像)。基于三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼颜色彩空间的第二响应值。基于第一响应值与第二响应值获取颜色校正矩阵。上述的第一人眼颜色空间可以是人眼匹配函数对应的响应空间。颜色校正矩阵用于表示第一响应值与第二响应值之间的关联关系。
该颜色校正矩阵也可以理解为是两个颜色空间的转换矩阵。可选地,该颜色空间的转换矩阵为3X3的矩阵。相当于将一个颜色空间作为目标,将另一个颜色空间作为源,使用最小二乘法得到转换矩阵。
可以理解的是,上述的色卡的反射率可以是标准24色卡的反射率,还可以被替换为符合规律的矩形波、自定义的曲线等,具体此处不做限定。
可选地,人眼匹配函数可以是国际照明委员会(CIE)1931或者其他规范下的人眼匹配函数。三刺激值曲线可以是CIE1931或其他规范下的三刺激值曲线。具体此处不做限定。
示例性的,可以通过公式五、多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值。通过公式六、三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼颜色彩空间的第二响应值。
公式五:
第一响应值=∫css(λ)*I(λ)*R(λ)。
公式六:
第二响应值=∫xyz(λ)*I(λ)*R(λ)。
其中,css(λ)是多个光谱响应函数对应的响应曲线。I(λ)是光源曲线。R(λ)是色卡的反 射率。xyz(λ)是三刺激值曲线。
可以理解的是,上述公式五与公式六只是一种举例,在实际应用中,还可以有其他形式的公式五与公式六,具体此处不做限定。
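公式五、公式六连同前文的最小二乘步骤可以用如下草图示意(仅为说明性示例:积分离散为采样求和,色卡数据为玩具数据;当相机响应曲线恰好等于三刺激值曲线时,求得的转换矩阵应接近单位阵,可作为自检):

```python
import numpy as np

def color_correction_matrix(css, xyz, illum, reflectances):
    """3x3 CCM mapping camera RGB to XYZ (formulas 5 and 6 + least squares).

    css: (3, L) camera spectral response curves css(lambda).
    xyz: (3, L) tristimulus curves xyz(lambda).
    illum: (L,) light-source curve I(lambda) from the multispectral sensor.
    reflectances: (N, L) reflectance R(lambda) of the N colour-chart patches.
    Returns M such that xyz_response ~= M @ rgb_response.
    """
    lit = reflectances * illum          # I(lambda) * R(lambda), per patch
    cam = lit @ css.T                   # formula 5: first response values (N, 3)
    eye = lit @ xyz.T                   # formula 6: second response values (N, 3)
    M, *_ = np.linalg.lstsq(cam, eye, rcond=None)  # solve cam @ M ~= eye
    return M.T

# Sanity check: camera curves equal to the tristimulus curves -> identity CCM.
css = np.array([[1., 0., 0., 1., 0.],
                [0., 1., 0., 0., 1.],
                [0., 0., 1., 1., 1.]])
illum = np.ones(5)
patches = np.eye(5)[:4]                 # four toy "colour chart" patches
ccm = color_correction_matrix(css, css, illum, patches)
```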
示例性的,图7为颜色空间转化处理之前的图像的示例图。图8为经过颜色空间转化处理的图像的示例图。
可选地,获取第一人眼色彩空间之后,还可以根据第一人眼色彩空间与第二人眼色彩空间的转换关系对颜色空间转换处理后的图像进行调整。该第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间,以便于后续可以提升白平衡的质量。
该种方式可以理解为是根据人眼色彩响应空间(CIE1931人眼匹配函数构成的响应空间)与其他人眼色彩响应空间(例如色貌模型CIECAM02中进行色适应CAT02计算的响应空间)转换关系调整第三目标图像。
该种方式下,相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
第三种,时域稳定性处理。
该种情况下,图1所示实施例的步骤还可以包括:通过彩色摄像头获取第二待处理图像。通过多光谱传感器获取与第二待处理图像对应的第二环境光谱信息。基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数。基于滤波参数对第一目标图像与第二待处理图像进行滤波,得到校正参数。基于校正参数调整第二待处理图像得到第二目标图像。
其中,上述步骤:基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数具体可以包括:基于相似度生成滤波强度函数。基于滤波强度函数确定滤波参数。
可选地,彩色摄像头采集第一待处理图像与第二待处理图像的时间间隔小于预设时间段。进一步的,第一待处理图像与第二待处理图像是彩色摄像头采集的相邻时刻的两帧图像。
上述第一环境光谱信息与第二环境光谱信息的相似度的具体获取方式可以是:通过第一环境光谱信息对应的多个采样值确定第一光谱曲线,通过第二环境光谱信息对应的多个采样值确定第二光谱曲线。利用曲线相似度算法(例如:余弦相似度等)计算第一光谱曲线与第二光谱曲线的相似度。基于相似度生成滤波强度函数,相似度与滤波强度为正相关的关系,即相似度越大,滤波强度越强。
示例性的,以该滤波强度函数是三段式函数为例进行示例性描述,可以理解的是,该滤波强度函数可以设置为一阶函数或者高阶函数,具体此处不做限定。一种三段式滤波强度函数的示例具体如下:
第一段:若相似度大于第一阈值,则滤波权重为1,即采用上述的校正参数,换句话说第一处理中所用的校正参数(例如:上述的Rgain与Bgain、白点、估计值、颜色校正矩阵);
第二段:若相似度小于或等于第一阈值,且大于或等于第二阈值,则滤波权重小于1且大于0,根据相似度与第一阈值的差值在滤波权重为0-1的范围内进行线性插值;相似度与第一阈值越近,滤波权重越趋近于1,相似度与第一阈值越远,滤波权重越趋近于0。
第三段:若相似度小于第二阈值,则滤波权重为0,即重新计算第二待处理图像的校正 参数(求解方式与第一处理类似,这里称为第二处理,第一处理与第二处理不同的只是将前述第一处理的第一环境光谱信息替换为第二处理中的第二环境光谱信息,将第一待处理图像替换为第二待处理图像)。
可以理解的是,上述中的第一阈值、第二阈值可以根据实际需要设置,具体此处不做限定。例如:第一阈值为90%,第二阈值为10%。
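上述相似度计算与三段式滤波强度函数可以用如下草图示意(仅为说明性示例:阈值0.9与0.1沿用文中举例,第二段的线性插值是对文中描述的一种合理实现):

```python
import numpy as np

def spectral_similarity(s1, s2):
    """Cosine similarity between two sampled spectral curves."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return float(s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2)))

def filter_weight(similarity, hi=0.9, lo=0.1):
    """Three-segment filter-strength function: weight of the historical
    correction parameters.  1 above `hi`, 0 below `lo`, linear between."""
    if similarity > hi:
        return 1.0
    if similarity < lo:
        return 0.0
    return (similarity - lo) / (hi - lo)   # linear interpolation

def blend_params(old, new, w):
    """Temporal filtering: mix historical and new correction parameters."""
    return w * old + (1.0 - w) * new

# Identical spectra -> full weight on the historical parameters.
w_same = filter_weight(spectral_similarity([1, 2, 3], [2, 4, 6]))
# A similarity of 0.7 falls in the middle segment.
w_mid = filter_weight(0.7)
```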
可选地,上述基于滤波权重对第一目标图像与第二待处理图像进行滤波得到的校正参数,可以是白平衡中所用的Rgain与Bgain,也可以是色彩还原中的颜色校正矩阵,还可以是色彩均匀中的估计值,具体此处不做限定。获取校正参数后,使用校正参数调整第二待处理图像得到第二目标图像(调整方式与前述类似,此处不再赘述)。
示例性的,第一阈值为90%,第二阈值为10%,相似度为70%,即采用滤波函数的第二段确定校正参数。以校正参数是色彩均匀中的估计值以及采用公式七为例对确定第二待处理图像的校正参数(这里是色彩均匀的估计值)进行描述。
公式七:
$$K(x'',y'',c)=w\,K_{I\text{-}size}(x',y',c)+(1-w)\,K_{I\text{-}size}(x'',y'',c)$$
其中,K I-size(x',y',c)是上述色彩还原中与第一待处理图像相关的估计值(或者理解为是第一目标图像的估计值),例如滤波权重为0.5。K I-size(x”,y”,c)是上述色彩还原中与第二待处理图像相关的估计值(求解方式与前述色彩还原的类似,只是将前述的第一环境光谱信息替换为第二环境光谱信息,将第一待处理图像替换为第二待处理图像)。
换句话说,如果第一环境光谱信息与第二环境光谱信息的差异较小,则可以使用历史校正参数,或者历史校正参数(即通过第一环境光谱信息与第一待处理图像中的颜色通道得到的校正参数,或称为第一处理中用的校正参数)的权重大一些,新的校正参数(即通过第二环境光谱信息与第二待处理图像得到的校正参数,或称为第二处理中的用的校正参数)的权重小一些,从而得到第二待处理图像的校正参数。如果第一环境光谱信息与第二环境光谱信息的差异较大(例如:室内与室外环境的差别),则可以使用新的校正参数,或者新的校正参数的权重大一些,历史校正参数的权重小一些,从而得到第二待处理图像的校正参数。
该种方式下,实现了提升颜色处理时域稳定性的同时,兼顾灵敏性,即避免颜色效果在时域上的闪烁,同时又能及时响应环境的变化导致参数调整。
可以理解的是,上述三种颜色处理的方式是随意组合,具体此处不做限定。
请参阅图9,本申请实施例提供的图像处理方法的另一个实施例,该方法可以应用于图像处理设备,该图像处理设备包括彩色摄像头与多光谱传感器。该实施例包括步骤901至步骤905。图9所示的实施例可以理解为是对待处理图像进行色彩均匀处理。
步骤901,通过彩色摄像头获取第一待处理图像。
步骤902,通过多光谱传感器获取第一环境光谱信息。
本实施中的步骤901与步骤902与前述图1所示实施例中的步骤101与步骤102类似,此处不再赘述。
步骤903,获取彩色摄像头的多个光谱响应函数。
获取彩色摄像头的多个光谱响应函数的具体方式可以是:通过单色仪测量彩色摄像头的像素位置的光谱响应得到多个光谱响应函数。或者是确定彩色摄像头后,测量感光属性。例如通过离线的方式调整不同光源的不同光强下彩色摄像头的响应函数。
步骤904,基于第一环境光谱信息与多个光谱响应函数获取多个补偿值。
以第一待处理图像的中心像素点为基准得到第一待处理图像中除了中心像素点以外的像素点的补偿值。补偿值的数量可以与第一待处理图像中像素点的数量一一对应,或者补偿值的数量小于第一待处理图像中像素点的数量(也可以理解为是一个区域对应一个补偿值,一个区域包括第一待处理图像的多个像素点)。
步骤905,对第一待处理图像进行第一处理,得到第一目标图像。
获取补偿值之后,对第一待处理图像进行第一处理,得到第一目标图像。其中,第一处理包括基于多个补偿值的色彩均匀color shading处理。
本实施例中的第一处理除了包含基于多个补偿值的色彩均匀color shading处理,此外,还可包括但不限于以下的一种或多种后处理算法:自动曝光控制(automatic exposure control,AEC)、自动增益控制(automatic gain control,AGC)、色彩校正、镜头矫正、噪声去除/降噪、坏点去除、线性纠正、颜色插值、图像下采样、电平补偿,等等。此外在一些实例,还可包括一些图像增强算法,例如伽马(Gamma)矫正、对比度增强和锐化、在YUV色彩空间上彩噪去除与边缘加强、白平衡、色彩空间转换(例如RGB转换为YUV)等等。第一目标图像例如是YUV或者RGB格式的图像。
可选地,获取第一目标图像之后,可以向用户显示该第一目标图像。
可选地,图像处理设备还包括图像处理器,且该图像处理器用于执行步骤903与步骤904。
可以将补偿值的尺寸上采样至相机图像的空间尺寸。并使用上采样后的补偿值调整第一待处理图像。
具体描述可以参考前述实施例中第一种色彩均匀的相关描述,此处不再赘述。
可以理解的是,还可以在本实施例的基础上对第一目标图像进行白平衡、色彩还原等颜色处理,处理方式可参考前述实施例中的描述,具体此处不再赘述。
本实施例中,一方面,通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另一方面,由于含有第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
请参阅图10,本申请实施例提供的图像处理方法的另一个实施例,该方法可以应用于图像处理设备,该图像处理设备包括彩色摄像头与多光谱传感器。该实施例包括步骤1001至步骤1006。图10所示的实施例可以理解为是对待处理图像进行色彩还原处理。
步骤1001,通过彩色摄像头获取第一待处理图像。
步骤1002,通过多光谱传感器获取第一环境光谱信息。
本实施中的步骤1001与步骤1002与前述图1所示实施例中的步骤101与步骤102类似,此处不再赘述。
步骤1003,获取彩色摄像头的多个光谱响应函数。
获取彩色摄像头的多个光谱响应函数的具体方式可以是:通过单色仪测量彩色摄像头的像素位置的光谱响应得到多个光谱响应函数。或者是通过离线的方式调整光源的不同光强确定彩色摄像头的响应函数。
步骤1004,获取三刺激值曲线与色卡的反射率。
可选地,色卡的反射率可以是标准24色卡的反射率,还可以被替换为符合规律的矩形波、自定义的曲线等,具体此处不做限定。
可选地,三刺激值曲线可以是CIE1931或其他规范下的三刺激值曲线。具体此处不做限定。
步骤1005,基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵。
将第一环境光谱信息转化为光源曲线。基于多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值(也可以理解为是色卡在以RGB为三个轴构成的空间下的成像)。基于三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼颜色彩空间的第二响应值。基于第一响应值与第二响应值获取颜色校正矩阵。上述的第一人眼颜色空间可以是人眼匹配函数对应的响应空间。颜色校正矩阵用于表示第一响应值与第二响应值之间的关联关系。
该颜色校正矩阵也可以理解为是两个颜色空间的转换矩阵。可选地,该颜色空间的转换矩阵为3X3的矩阵。相当于将一个颜色空间作为目标,将另一个颜色空间作为源,使用最小二乘法得到转换矩阵。
可选地,人眼匹配函数可以是国际照明委员会(CIE)1931或者其他规范下的人眼匹配函数。三刺激值曲线可以是CIE1931或其他规范下的三刺激值曲线。具体此处不做限定。
示例性的,可以通过前述的公式五、多个光谱响应函数、光源曲线以及反射率获取色卡对彩色摄像头的第一响应值。通过前述的公式六、三刺激值曲线、光源曲线以及反射率获取色卡对第一人眼颜色彩空间的第二响应值。
步骤1006,对第一待处理图像进行第一处理,得到第一目标图像。
获取颜色校正矩阵后,对第一待处理图像进行第一处理,得到第一目标图像。其中,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
本实施例中的第一处理除了包含基于颜色校正矩阵的颜色空间转换处理,此外,还可包括但不限于以下的一种或多种后处理算法:自动曝光控制(automatic exposure control,AEC)、自动增益控制(automatic gain control,AGC)、色彩校正、镜头矫正、噪声去除/降噪、坏点去除、线性纠正、颜色插值、图像下采样、电平补偿,等等。此外在一些实例,还可包括一些图像增强算法,例如伽马(Gamma)矫正、对比度增强和锐化、在YUV色彩空间上彩噪去除与边缘加强、白平衡、色彩还原等等。第一目标图像例如是YUV或者RGB格式 的图像。
可选地,获取第一目标图像之后,可以向用户显示该第一目标图像。
可选地,图像处理设备还包括图像处理器,且该图像处理器用于执行步骤1003至步骤1006。
示例性的,颜色校正矩阵为3X3的矩阵。
可选地,获取第一人眼色彩空间之后,还可以根据第一人眼色彩空间与第二人眼色彩空间的转换关系对颜色空间转换处理后的图像进行调整。该第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间,以便于后续可以提升白平衡的质量。
该种方式可以理解为是根据人眼色彩响应空间(CIE1931人眼匹配函数构成的响应空间)与其他人眼色彩响应空间(例如色貌模型CIECAM02中进行色适应CAT02计算的响应空间)转换关系调整第一目标图像。
具体描述可以参考前述实施例中第一种色彩还原的相关描述,此处不再赘述。
可以理解的是,还可以在本实施例的基础上对经过色彩还原处理后的图像进行白平衡、色彩均匀等颜色处理,处理方式可参考前述实施例中的描述,具体此处不再赘述。
本实施例中,一方面,通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另一方面,由于含有第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
上面对本申请实施例中的图像处理方法进行了描述,下面对本申请实施例中的图像处理设备进行描述,请参阅图11,本申请实施例中图像处理设备的一个实施例包括:
第一获取单元1101,用于通过彩色摄像头获取第一待处理图像;
第二获取单元1102,用于通过多光谱传感器获取第一环境光谱信息;
第三获取单元1103,用于基于第一待处理图像与第一环境光谱信息获取白平衡增益;
处理单元1104,用于对第一待处理图像进行第一处理,得到第一目标图像,第一处理包括基于白平衡增益的白平衡处理。可选地,图像处理设备还可以包括下述单元:
第四获取单元1105,用于获取彩色摄像头的多个光谱响应函数;
第五获取单元1106,用于获取三刺激值曲线与色卡的反射率;
显示单元1107,用于向用户显示第一目标图像。
确定单元1108,用于基于第一环境光谱信息与第二环境光谱信息的相似度确定滤波参数;
滤波单元1109,用于基于滤波参数对第一目标图像与第二待处理图像进行滤波,得到校正参数。
本实施例中,图像处理设备中各单元所执行的操作与前述图1所示实施例中描述的类似,此处不再赘述。
本实施例中,由于含有第二获取单元1102采集的第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。另一方面,针对纯 色图像的场景(即图像场景颜色不丰富或者出现大面积单色物体),相较于现有技术中用灰度世界算法的方式进行白平衡,处理单元1104通过多光谱传感器采集的第一环境光谱信息对纯色图像进行白平衡的方式可以提升目标图像的调整质量。
请参阅图12,本申请实施例中图像处理设备的另一个实施例包括:第一获取单元1201、第二获取单元1202、第三获取单元1203以及处理单元1204。
在一种可能实现的方式中,各单元具体用于执行下述功能:
第一获取单元1201,用于通过彩色摄像头获取第一待处理图像;
第二获取单元1202,用于通过多光谱传感器获取第一环境光谱信息;
第三获取单元1203,用于获取彩色摄像头的多个光谱响应函数;
第三获取单元1203,还用于基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;
处理单元1204,用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于多个补偿值的色彩均匀(color shading)处理。
该种可能实现的方式中,图像处理设备中各单元所执行的操作与前述图9所示实施例中描述的类似,此处不再赘述。
该种可能实现的方式中,由于含有第二获取单元1202是采集的第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于第三获取单元1203通过多光谱传感器采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
在另一种可能实现的方式中,各单元具体用于执行下述功能:
第一获取单元1201,用于通过彩色摄像头获取第一待处理图像;
第二获取单元1202,用于通过多光谱传感器获取第一环境光谱信息;
第三获取单元1203,用于获取彩色摄像头的多个光谱响应函数;
第三获取单元1203,还用于获取三刺激值曲线与色卡的反射率;
第三获取单元1203,还用于基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;
处理单元1204,用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
该种可能实现的方式中,图像处理设备中各单元所执行的操作与前述图10所示实施例中描述的类似,此处不再赘述。
该种可能实现的方式中,由于含有第二获取单元1202采集的第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于第三获取单元1203通过多光谱传感器采集的第一环境光谱信息生成颜色空间的转化矩阵,相对于现有技术中离线标定,可以提升色彩还原的质量。
请参阅图13,本申请实施例中图像处理设备的另一个实施例包括:彩色摄像头1301、多光谱传感器1302以及图像处理器1303。
在一种可能实现的方式中,各单元具体用于执行下述功能:
彩色摄像头1301,用于获取第一待处理图像;
多光谱传感器1302,用于获取第一环境光谱信息;
图像处理器1303,用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于白平衡增益的白平衡处理。
该种可能实现的方式中,针对纯色图像的场景(即图像场景颜色不丰富或者出现大面积单色物体),相较于现有技术中用灰度世界算法的方式进行白平衡,图像处理器1303通过多光谱传感器1302采集的第一环境光谱信息对纯色图像进行白平衡的方式可以提升目标图像的调整质量。
在另一种可能实现的方式中,各单元具体用于执行下述功能:
彩色摄像头1301,用于获取第一待处理图像;
多光谱传感器1302,用于获取第一环境光谱信息;
图像处理器1303,用于获取彩色摄像头的多个光谱响应函数;
图像处理器1303,还用于基于第一环境光谱信息与多个光谱响应函数获取多个补偿值;
图像处理器1303,还用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于多个补偿值的色彩均匀color shading处理。
该种可能实现的方式中,一方面,通过引入多光谱传感器采集与待处理图像对应的环境光谱信息,可以实时对待处理图像进行调整。另一方面,由于含有第一待处理图像对应的环境光谱信息,相较于现有技术中估计光源的方式,可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原,可以实现实时计算,并避免离线选表可能出错导致的问题。另外,由于图像处理器1303通过多光谱传感器1302采集的第一环境光谱信息生成的色彩均匀的补偿值,相对于现有技术中离线标定,可以提升色彩均匀的质量。
在另一种可能实现的方式中,各单元具体用于执行下述功能:
彩色摄像头1301,用于获取第一待处理图像;
多光谱传感器1302,用于获取第一环境光谱信息;
图像处理器1303,用于获取彩色摄像头的多个光谱响应函数;
图像处理器1303,还用于获取三刺激值曲线与色卡的反射率;
图像处理器1303,还用于基于第一环境光谱信息、多个光谱响应函数、反射率以及三刺激值曲线获取颜色校正矩阵;
图像处理器1303,还用于对第一待处理图像进行第一处理,得到第一目标图像;其中,第一处理包括基于颜色校正矩阵的颜色空间转换处理。
该种可能实现的方式中，一方面，通过引入多光谱传感器1302采集与待处理图像对应的环境光谱信息，可以实时对待处理图像进行调整。另一方面，由于含有与第一待处理图像对应的环境光谱信息，相较于现有技术中估计光源的方式，可以提升第一目标图像的调整质量。相对于现有技术中需要通过离线标定的方式对图像进行色彩还原，可以实现实时计算，并避免离线选表可能出错导致的问题。另外，由于图像处理器1303通过多光谱传感器1302采集的第一环境光谱信息生成颜色空间的转化矩阵，相对于现有技术中离线标定，可以提升色彩还原的质量。
请参阅图14,本申请实施例提供了另一种图像处理设备,为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。该图像处理设备可以为包括手机、平板电脑、个人数字助理(personal digital assistant,PDA)、销售终端设备(point of sales,POS)、车载电脑等任意图像处理设备,以图像处理设备为手机为例:
图14示出的是与本申请实施例提供的图像处理设备相关的手机的部分结构的框图。参考图14,手机包括:射频(radio frequency,RF)电路1410、存储器1420、输入单元1430、显示单元1440、彩色摄像头1451、多光谱传感器1452、音频电路1460、无线保真(wireless fidelity,WiFi)模块1470、处理器1480、以及电源1490等部件。本领域技术人员可以理解,图14中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图14对手机的各个构成部件进行具体的介绍:
RF电路1410可用于收发信息或通话过程中信号的接收和发送，特别地，将基站的下行信息接收后，交给处理器1480处理；另外，将涉及上行的数据发送给基站。通常，RF电路1410包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(low noise amplifier,LNA)、双工器等。此外，RF电路1410还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议，包括但不限于全球移动通讯系统(global system of mobile communication,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA)、长期演进(long term evolution,LTE)、电子邮件、短消息服务(short messaging service,SMS)等。
存储器1420可用于存储软件程序以及模块，处理器1480通过运行存储在存储器1420的软件程序以及模块，从而执行手机的各种功能应用以及数据处理。存储器1420可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等；存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外，存储器1420可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
输入单元1430可用于接收输入的数字或字符信息，以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地，输入单元1430可包括触控面板1431以及其他输入设备1432。触控面板1431，也称为触摸屏，可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1431上或在触控面板1431附近的操作)，并根据预先设定的程式驱动相应的连接装置。可选的，触控面板1431可包括触摸检测装置和触摸控制器两个部分。其中，触摸检测装置检测用户的触摸方位，并检测触摸操作带来的信号，将信号传送给触摸控制器；触摸控制器从触摸检测装置上接收触摸信息，并将它转换成触点坐标，再送给处理器1480，并能接收处理器1480发来的命令并加以执行。此外，可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1431。除了触控面板1431，输入单元1430还可以包括其他输入设备1432。具体地，其他输入设备1432可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元1440可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元1440可包括显示面板1441，可选的，可以采用液晶显示器(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)等形式来配置显示面板1441。进一步的，触控面板1431可覆盖显示面板1441，当触控面板1431检测到在其上或附近的触摸操作后，传送给处理器1480以确定触摸事件的类型，随后处理器1480根据触摸事件的类型在显示面板1441上提供相应的视觉输出。虽然在图14中，触控面板1431与显示面板1441是作为两个独立的部件来实现手机的输入和输出功能，但是在某些实施例中，可以将触控面板1431与显示面板1441集成而实现手机的输入和输出功能。
手机还可包括彩色摄像头1451与多光谱传感器1452,彩色摄像头1451具体用于采集彩色图像或纯色图像(或称为单色图像)。多光谱传感器1452用于获取与图像对应的环境光谱信息。当然,手机还可以包括其他类型的传感器,比如接近传感器、运动传感器等其他传感器。具体地,接近传感器可在手机移动到耳边时,关闭显示面板1441和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路1460、扬声器1461、传声器1462可提供用户与手机之间的音频接口。音频电路1460可将接收到的音频数据转换成电信号后，传输到扬声器1461，由扬声器1461转换为声音信号输出；另一方面，传声器1462将收集的声音信号转换为电信号，由音频电路1460接收后转换为音频数据，再将音频数据输出至处理器1480处理后，经RF电路1410发送给比如另一手机，或者将音频数据输出至存储器1420以便进一步处理。
WiFi属于短距离无线传输技术,手机通过WiFi模块1470可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图14示出了WiFi模块1470,但是可以理解的是,其并不属于手机的必须构成。
处理器1480是手机的控制中心，利用各种接口和线路连接整个手机的各个部分，通过运行或执行存储在存储器1420内的软件程序和/或模块，以及调用存储在存储器1420内的数据，执行手机的各种功能和处理数据，从而对手机进行整体监控。可选的，处理器1480可包括一个或多个处理单元；优选的，处理器1480可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器1480中。
手机还包括给各个部件供电的电源1490(比如电池),优选的,电源可以通过电源管理系统与处理器1480逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出，手机还可以包括蓝牙模块等，在此不再赘述。
在本申请实施例中,该图像处理设备所包括的处理器1480可以执行前述图1至图10所示实施例中的功能,此处不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。
当使用软件实现所述集成的单元时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如，软盘、硬盘、磁带)、光介质(例如，DVD)或者半导体介质(例如固态硬盘(solid state disk,SSD))等。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。

Claims (34)

  1. 一种图像处理方法,其特征在于,所述方法应用于图像处理设备,所述图像处理设备包括彩色摄像头与多光谱传感器,所述方法包括:
    通过所述彩色摄像头获取第一待处理图像;
    通过所述多光谱传感器获取第一环境光谱信息,所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景;
    基于所述第一待处理图像与所述第一环境光谱信息获取白平衡增益;
    对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中,所述第一处理包括基于所述白平衡增益的白平衡处理。
  2. 根据权利要求1所述的方法,其特征在于,所述第一待处理图像与所述第一目标图像为纯色图像。
  3. 根据权利要求1或2所述的方法,其特征在于,所述基于所述第一待处理图像与所述第一环境光谱信息获取白平衡增益,包括:
    将所述第一环境光谱信息与所述第一待处理图像输入训练好的神经网络得到所述白平衡增益;
    所述训练好的神经网络是通过以训练数据作为神经网络的输入，以损失函数的值小于阈值为目标对神经网络进行训练得到的，所述训练数据包括训练原始图像与训练光谱信息，所述训练原始图像与训练光谱信息对应同一个拍摄场景，神经网络的输出包括白平衡增益，所述损失函数用于指示所述神经网络输出的白平衡增益与实际白平衡增益之间的差异，所述实际白平衡增益由灰卡在所述拍摄场景下的响应值处理得到。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述方法还包括:
    获取所述彩色摄像头的多个光谱响应函数；
    基于所述第一环境光谱信息与所述多个光谱响应函数获取多个补偿值;
    所述第一处理还包括:基于所述多个补偿值的色彩均匀color shading处理。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    获取三刺激值曲线与色卡的反射率；
    基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵;
    所述第一处理还包括:基于所述颜色校正矩阵的颜色空间转换处理。
  6. 根据权利要求5所述的方法,其特征在于,所述基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵,包括:
    将所述第一环境光谱信息转化为光源曲线;
    基于所述多个光谱响应函数、所述光源曲线以及所述反射率获取所述色卡对所述彩色摄像头的第一响应值;
    基于所述三刺激值曲线、所述光源曲线以及所述反射率获取所述色卡对第一人眼色彩空间的第二响应值,所述第一人眼色彩空间为人眼匹配函数对应的响应空间;
    基于所述第一响应值与所述第二响应值获取所述颜色校正矩阵,所述颜色校正矩阵用于表示所述第一响应值与所述第二响应值之间的关联关系。
  7. 根据权利要求1至6中任一项所述的方法,其特征在于,所述第一处理还包括:
    对经过所述白平衡处理后的图像进行后处理,得到所述第一目标图像。
  8. 根据权利要求1至7中任一项所述的方法,其特征在于,所述方法还包括:
    向用户显示所述第一目标图像。
  9. 根据权利要求1至8中任一项所述的方法,其特征在于,所述方法还包括:
    通过所述彩色摄像头获取第二待处理图像;
    通过所述多光谱传感器获取第二环境光谱信息,所述第二环境光谱信息与所述第二待处理图像对应同一个拍摄场景;
    基于所述第一环境光谱信息与所述第二环境光谱信息的相似度确定滤波参数;
    基于所述滤波参数对所述第一目标图像与所述第二待处理图像进行滤波,得到校正参数;
    基于所述校正参数调整所述第二待处理图像得到第二目标图像。
  10. 根据权利要求9所述的方法,其特征在于,所述基于所述第一环境光谱信息与所述第二环境光谱信息的相似度确定滤波参数,包括:
    基于所述相似度生成滤波强度函数;
    基于所述滤波强度函数确定所述滤波参数。
  11. 一种图像处理方法,其特征在于,所述方法应用于图像处理设备,所述图像处理设备包括彩色摄像头与多光谱传感器,所述方法包括:
    通过所述彩色摄像头获取第一待处理图像;
    通过所述多光谱传感器获取第一环境光谱信息,所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景;
    获取所述彩色摄像头的多个光谱响应函数;
    基于所述第一环境光谱信息与所述多个光谱响应函数获取多个补偿值；
    对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中,所述第一处理包括基于所述多个补偿值的色彩均匀color shading处理。
  12. 一种图像处理方法,其特征在于,所述方法应用于图像处理设备,所述图像处理设备包括彩色摄像头与多光谱传感器,所述方法包括:
    基于所述彩色摄像头获取第一待处理图像;
    基于所述多光谱传感器获取第一环境光谱信息,所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景;
    获取所述彩色摄像头的多个光谱响应函数;
    获取三刺激值曲线与色卡的反射率;
    基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵;
    对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中，所述第一处理包括基于所述颜色校正矩阵的颜色空间转换处理。
  13. 根据权利要求12所述的方法,其特征在于,所述基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵,包括:
    将所述第一环境光谱信息转化为光源曲线;
    基于所述多个光谱响应函数、所述光源曲线、所述反射率获取所述色卡对所述彩色摄像头的第一响应值;
    基于所述三刺激值曲线、所述光源曲线、所述反射率获取所述色卡对第一人眼色彩空间的第二响应值,所述第一人眼色彩空间为人眼匹配函数对应的响应空间;
    基于所述第一响应值与所述第二响应值获取所述颜色校正矩阵,所述颜色校正矩阵用于表示所述第一响应值与所述第二响应值之间的转换关系。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    基于所述第一人眼色彩空间与第二人眼色彩空间的转换关系对所述颜色空间转换处理后的图像进行调整,所述第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间。
  15. 一种图像处理设备,其特征在于,所述图像处理设备包括:
    第一获取单元，用于通过彩色摄像头获取第一待处理图像；
    第二获取单元，用于通过多光谱传感器获取第一环境光谱信息，所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景；
    第三获取单元,用于基于所述第一待处理图像与所述第一环境光谱信息获取白平衡增益;
    处理单元,用于对所述第一待处理图像进行第一处理,得到第一目标图像,所述第一处理包括基于所述白平衡增益的白平衡处理。
  16. 根据权利要求15所述的设备,其特征在于,所述第一待处理图像与所述第一目标图像为纯色图像。
  17. 根据权利要求15或16所述的设备,其特征在于,所述第三获取单元,具体用于将所述第一环境光谱信息与所述第一待处理图像输入训练好的神经网络得到所述白平衡增益;
    所述训练好的神经网络是通过以训练数据作为神经网络的输入，以损失函数的值小于阈值为目标对神经网络进行训练得到的，所述训练数据包括训练原始图像与训练光谱信息，所述训练原始图像与训练光谱信息对应同一个拍摄场景，神经网络的输出包括白平衡增益，所述损失函数用于指示所述神经网络输出的白平衡增益与实际白平衡增益之间的差异，所述实际白平衡增益由灰卡在所述拍摄场景下的响应值处理得到。
  18. 根据权利要求15至17中任一项所述的设备,其特征在于,所述设备还包括:
    第四获取单元,用于获取所述彩色摄像头的多个光谱响应函数;
    所述第四获取单元,还用于基于所述第一环境光谱信息与所述多个光谱响应函数获取多个补偿值;
    所述处理单元，还用于进行基于所述多个补偿值的色彩均匀color shading处理。
  19. 根据权利要求18所述的设备,其特征在于,所述设备还包括:
    第五获取单元,用于获取三刺激值曲线与色卡的反射率;
    所述第五获取单元,还用于基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵;
    所述处理单元，还用于进行基于所述颜色校正矩阵的颜色空间转换处理。
  20. 根据权利要求19所述的设备,其特征在于,所述第五获取单元,具体用于将所述第一环境光谱信息转化为光源曲线;
    所述第五获取单元,具体用于基于所述多个光谱响应函数、所述光源曲线以及所述反射 率获取所述色卡对所述彩色摄像头的第一响应值;
    所述第五获取单元,具体用于基于所述三刺激值曲线、所述光源曲线以及所述反射率获取所述色卡对第一人眼色彩空间的第二响应值,所述第一人眼色彩空间为人眼匹配函数对应的响应空间;
    所述第五获取单元,具体用于基于所述第一响应值与所述第二响应值获取所述颜色校正矩阵,所述颜色校正矩阵用于表示所述第一响应值与所述第二响应值之间的关联关系。
  21. 根据权利要求15至20中任一项所述的设备,其特征在于,所述处理单元,还用于对经过所述白平衡处理后的图像进行后处理,得到所述第一目标图像。
  22. 根据权利要求15至21中任一项所述的设备,其特征在于,所述设备还包括:
    显示单元,用于向用户显示所述第一目标图像。
  23. 根据权利要求15至22中任一项所述的设备,其特征在于,所述第一获取单元,还用于通过所述彩色摄像头获取第二待处理图像;
    所述第二获取单元,还用于通过所述多光谱传感器获取第二环境光谱信息,所述第二环境光谱信息与所述第二待处理图像对应同一个拍摄场景;
    所述设备还包括:
    确定单元,用于基于所述第一环境光谱信息与所述第二环境光谱信息的相似度确定滤波参数;
    滤波单元,用于基于所述滤波参数对所述第一目标图像与所述第二待处理图像进行滤波,得到校正参数;
    所述处理单元,还用于基于所述校正参数调整所述第二待处理图像得到第二目标图像。
  24. 根据权利要求23所述的设备,其特征在于,所述确定单元,具体用于基于所述相似度生成滤波强度函数;
    所述确定单元,具体用于基于所述滤波强度函数确定所述滤波参数。
  25. 一种图像处理设备,其特征在于,所述图像处理设备包括:
    第一获取单元，用于通过彩色摄像头获取第一待处理图像；
    第二获取单元，用于通过多光谱传感器获取第一环境光谱信息，所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景；
    第三获取单元,用于获取所述彩色摄像头的多个光谱响应函数;
    所述第三获取单元,还用于基于所述第一环境光谱信息与所述多个光谱响应函数获取多个补偿值;
    处理单元,用于对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中，所述第一处理包括基于所述多个补偿值的色彩均匀color shading处理。
  26. 一种图像处理设备,其特征在于,所述图像处理设备包括:
    第一获取单元，用于通过彩色摄像头获取第一待处理图像；
    第二获取单元，用于通过多光谱传感器获取第一环境光谱信息，所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景；
    第三获取单元,用于获取所述彩色摄像头的多个光谱响应函数;
    所述第三获取单元,还用于获取三刺激值曲线与色卡的反射率;
    所述第三获取单元,还用于基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵;
    处理单元,用于对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中，所述第一处理包括基于所述颜色校正矩阵的颜色空间转换处理。
  27. 根据权利要求26所述的设备,其特征在于,所述第三获取单元,具体用于将所述第一环境光谱信息转化为光源曲线;
    所述第三获取单元,具体用于基于所述多个光谱响应函数、所述光源曲线、所述反射率获取所述色卡对所述彩色摄像头的第一响应值;
    所述第三获取单元,具体用于基于所述三刺激值曲线、所述光源曲线、所述反射率获取所述色卡对第一人眼色彩空间的第二响应值,所述第一人眼色彩空间为人眼匹配函数对应的响应空间;
    所述第三获取单元,具体用于基于所述第一响应值与所述第二响应值获取所述颜色校正矩阵,所述颜色校正矩阵用于表示所述第一响应值与所述第二响应值之间的转换关系。
  28. 根据权利要求27所述的设备,其特征在于,所述处理单元,还用于基于所述第一人眼色彩空间与第二人眼色彩空间的转换关系对所述颜色空间转换处理后的图像进行调整,所述第二人眼色彩空间为色貌模型进行色适应时所对应的响应空间。
  29. 一种图像处理设备,其特征在于,所述图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
    所述彩色摄像头,用于获取第一待处理图像;
    所述多光谱传感器,用于获取第一环境光谱信息,所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景;
    所述图像处理器,用于对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中，所述第一处理包括基于白平衡增益的白平衡处理，所述白平衡增益基于所述第一待处理图像与所述第一环境光谱信息获取。
  30. 一种图像处理设备,其特征在于,所述图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
    所述彩色摄像头,用于获取第一待处理图像;
    所述多光谱传感器,用于获取第一环境光谱信息,所述第一环境光谱信息与所述第一待处理图像对应同一个拍摄场景;
    所述图像处理器,用于获取所述彩色摄像头的多个光谱响应函数;
    所述图像处理器,还用于基于所述第一环境光谱信息与所述多个光谱响应函数获取多个补偿值;
    所述图像处理器,还用于对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中,所述第一处理包括基于所述多个补偿值的色彩均匀color shading处理。
  31. 一种图像处理设备,其特征在于,所述图像处理设备包括彩色摄像头、多光谱传感器与图像处理器;
    所述彩色摄像头,用于获取第一待处理图像;
    所述多光谱传感器,用于获取与所述第一待处理图像对应的第一环境光谱信息;
    所述图像处理器,用于获取所述彩色摄像头的多个光谱响应函数;
    所述图像处理器,还用于获取三刺激值曲线与色卡的反射率;
    所述图像处理器,还用于基于所述第一环境光谱信息、所述多个光谱响应函数、所述反射率以及所述三刺激值曲线获取颜色校正矩阵;
    所述图像处理器,还用于对所述第一待处理图像进行第一处理,得到第一目标图像;
    其中，所述第一处理包括基于所述颜色校正矩阵的颜色空间转换处理。
  32. 一种图像处理设备,其特征在于,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得所述图像处理设备执行如权利要求1-14所述的方法。
  33. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有指令,所述指令在计算机上执行时,使得所述计算机执行如权利要求1至14中任一项所述的方法。
  34. 一种计算机程序产品,其特征在于,所述计算机程序产品在计算机上执行时,使得所述计算机执行如权利要求1至14中任一项所述的方法。
PCT/CN2022/107602 2021-07-29 2022-07-25 一种图像处理方法及相关设备 WO2023005870A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110867203.0A CN115701128A (zh) 2021-07-29 2021-07-29 一种图像处理方法及相关设备
CN202110867203.0 2021-07-29

Publications (1)

Publication Number Publication Date
WO2023005870A1 true WO2023005870A1 (zh) 2023-02-02

Family

ID=85086275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107602 WO2023005870A1 (zh) 2021-07-29 2022-07-25 一种图像处理方法及相关设备

Country Status (2)

Country Link
CN (1) CN115701128A (zh)
WO (1) WO2023005870A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117270024B (zh) * 2023-11-20 2024-02-20 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 能谱响应函数的校正方法、装置、计算机设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05191825A (ja) * 1992-01-16 1993-07-30 Sanyo Electric Co Ltd ホワイトバランス補正装置
US20020122044A1 (en) * 2000-10-23 2002-09-05 Sun Microsystems, Inc. Multi-spectral color correction
US20120033099A1 (en) * 2009-03-30 2012-02-09 Politecnico Di Milano Photo-detector and method for detecting an optical radiation
WO2020156653A1 (en) * 2019-01-30 2020-08-06 Huawei Technologies Co., Ltd. Method for generating image data for machine learning based imaging algorithms
CN111586300A (zh) * 2020-05-09 2020-08-25 展讯通信(上海)有限公司 颜色校正方法、装置及可读存储介质
WO2021037934A1 (en) * 2019-08-28 2021-03-04 ams Sensors Germany GmbH Systems for characterizing ambient illumination
US11006088B1 (en) * 2020-11-03 2021-05-11 Grundium Oy Colour calibration of an imaging device
WO2021105398A1 (en) * 2019-11-27 2021-06-03 ams Sensors Germany GmbH Ambient light source classification


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116761082A (zh) * 2023-08-22 2023-09-15 荣耀终端有限公司 图像处理方法及装置
CN116761082B (zh) * 2023-08-22 2023-11-14 荣耀终端有限公司 图像处理方法及装置

Also Published As

Publication number Publication date
CN115701128A (zh) 2023-02-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22848475

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE