CN115471571A - Calibration method, image processing method, device, storage medium and electronic equipment - Google Patents

Calibration method, image processing method, device, storage medium and electronic equipment

Info

Publication number
CN115471571A
Authority
CN
China
Prior art keywords
image
white balance
plane
spectral
light source
Prior art date
Legal status
Pending
Application number
CN202211069561.8A
Other languages
Chinese (zh)
Inventor
孙凯
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211069561.8A
Publication of CN115471571A
Status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The disclosure provides a calibration method, an image processing method and apparatus, a storage medium, and an electronic device, and relates to the technical field of computer vision. The calibration method comprises the following steps: acquiring a first image acquired by an image sensor for a scene containing K light sources, and first spectral data of Z detection areas in the scene acquired by a spectral sensor array; determining projection positions of the K light sources in a first plane according to the first image to obtain K first positions; determining illumination quantification values of Z sensing areas in a second plane based on the first spectral data of the Z detection areas, and determining projection positions of the K light sources in the second plane according to the distribution of the illumination quantification values of the Z sensing areas to obtain K second positions; and determining calibration parameters between the first plane and the second plane by using the mapping relation between the K first positions and the K second positions. The present disclosure enables accurate calibration between an image sensor and a spectral sensor array.

Description

Calibration method, image processing method, device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a calibration method, an image processing method, a calibration apparatus, an image processing apparatus, a computer-readable storage medium, and an electronic device.
Background
With the development of sensors, some shooting scenarios simultaneously use an image sensor for capturing images and a spectral sensor for detecting spectral data. By combining the image data with the spectral data, richer information can be obtained.
However, there is generally parallax between the image sensor and the spectral sensor, which makes it difficult to spatially correspond the image data and the spectral data, and thus difficult to combine the two kinds of data.
Disclosure of Invention
The present disclosure provides a calibration method, an image processing method, a calibration apparatus, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby solving, at least to some extent, a problem that image data and spectrum data cannot be corresponded.
According to a first aspect of the present disclosure, there is provided a calibration method, including: acquiring a first image acquired by an image sensor for a scene containing K light sources and first spectral data acquired by a spectral sensor array for Z detection regions in the scene; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2; determining projection positions of the K light sources in a first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor; determining illumination quantification values of Z sensing areas in a second plane based on the first spectrum data of the Z detection areas, and determining projection positions of the K light sources in the second plane according to distribution of the illumination quantification values of the Z sensing areas to obtain K second positions; the second plane is a plane of the spectral sensor array; the sensing area is a projection of the detection area in the second plane; and determining calibration parameters between the first plane and the second plane by using the mapping relation between the K first positions and the K second positions.
According to a second aspect of the present disclosure, there is provided an image processing method comprising: acquiring a second image acquired by the image sensor and Z groups of second spectral data acquired by the spectral sensor array; z is a positive integer not less than 2; the spectral sensor array comprises Z spectral sensors; obtaining calibration parameters predetermined according to the calibration method of the first aspect of the present disclosure, determining a correspondence between the Z groups of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determining white balance parameters of the sub-regions corresponding to the second spectral data according to each group of second spectral data, respectively, to obtain white balance parameters of each sub-region; and carrying out white balance processing on the second image by adopting the white balance parameters of each subarea to obtain a target image.
According to a third aspect of the present disclosure, there is provided a calibration apparatus, comprising: a data acquisition module configured to acquire a first image acquired by an image sensor of a scene containing K light sources and first spectral data acquired by a spectral sensor array of Z detection regions in the scene; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2; a first position determining module configured to determine projection positions of the K light sources in a first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor; a second position determining module configured to determine illumination quantization values of Z sensing areas in a second plane based on the first spectral data of the Z sensing areas, and determine projection positions of the K light sources in the second plane according to a distribution of the illumination quantization values of the Z sensing areas to obtain K second positions; the second plane is a plane of the spectral sensor array; the sensing region is a projection of the detection region in the second plane; a calibration parameter determination module configured to determine calibration parameters between the first plane and the second plane by using mapping relationships between the K first positions and the K second positions.
According to a fourth aspect of the present disclosure, there is provided an image processing apparatus comprising: a data acquisition module configured to acquire a second image acquired by the image sensor and Z sets of second spectral data acquired by the spectral sensor array; the spectral sensor array comprises Z spectral sensors; z is a positive integer not less than 2; a white balance parameter determination module, configured to acquire calibration parameters predetermined according to the calibration method of the first aspect of the present disclosure, determine a correspondence between the Z sets of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determine white balance parameters of the sub-regions corresponding to the second spectral data according to each set of second spectral data, respectively, to obtain white balance parameters of each sub-region; and the white balance processing module is configured to perform white balance processing on the second image by adopting the white balance parameter of each sub-area to obtain a target image.
According to a fifth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the calibration method of the first aspect and possible implementations thereof, or implements the image processing method of the second aspect and possible implementations thereof.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing executable instructions of the processor. Wherein the processor is configured to execute the calibration method of the first aspect and possible implementations thereof, or to implement the image processing method of the second aspect and possible implementations thereof, via execution of the executable instructions.
The technical scheme of the disclosure has the following beneficial effects:
a scheme for calibrating an image sensor and a spectral sensor array is provided. The method comprises the steps of setting at least 4 light sources in a scene, determining the projection position of each light source on a first plane corresponding to an image sensor according to a first image acquired by the image sensor for the scene, determining the projection position of each light source on a second plane corresponding to a spectrum sensor array according to first spectrum data acquired by the spectrum sensor array for the scene, and obtaining accurate calibration parameters based on the mapping relation between the projection positions on the two planes. Furthermore, point-to-point accurate mapping between the image and the spectrum data can be realized by utilizing the calibration parameters, so that the image data and the spectrum data can be accurately combined.
Drawings
Fig. 1 shows a schematic diagram of a mobile terminal in the present exemplary embodiment;
FIG. 2 shows a schematic diagram of one spectral splitter and a corresponding one of the spectral sensors in the present exemplary embodiment;
FIG. 3 shows a schematic diagram of a spectrum sensor array in the present exemplary embodiment;
FIG. 4 illustrates a flow chart of a calibration method in the exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a calibration environment in the exemplary embodiment;
fig. 6 shows a flowchart for determining the second position in the present exemplary embodiment;
fig. 7 shows a schematic diagram of calculating the amount of positional deviation in the present exemplary embodiment;
FIG. 8 shows a schematic diagram of four pairs of matching points in this exemplary embodiment;
fig. 9 shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 10 is a diagram illustrating determination of white balance parameters of pixel points in the present exemplary embodiment;
fig. 11 is a graph showing a comparison of the effects of the white balance processing;
FIG. 12 shows a schematic view of a calibration arrangement in the present exemplary embodiment;
fig. 13 shows a schematic diagram of an image processing apparatus in the present exemplary embodiment;
fig. 14 is a schematic view showing an electronic apparatus in the present exemplary embodiment;
Detailed Description
Exemplary embodiments of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings.
The drawings are schematic illustrations of the present disclosure and are not necessarily drawn to scale. Some of the block diagrams shown in the figures may be functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in hardware modules or integrated circuits, or in networks, processors or microcontrollers. Embodiments may be embodied in many different forms and should not be construed as limited to the examples set forth herein. The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough explanation of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that one or more of the specific details can be omitted, or one or more of the specific details can be replaced with other methods, components, devices, steps, etc., in implementing the aspects of the disclosure.
An image sensor paired with a Bayer filter can respond to the red, green, and blue bands of the visible range, collect data of the R, G, and B channels, and generate an RGB image through algorithm processing. Because the information provided by an RGB image is limited, a spectral sensor is also used to collect spectral data in some scenarios. For example, in a surveying and mapping scenario, a visible light camera (whose main component is an image sensor) and a spectral camera (whose main component is a spectral sensor) may be installed in the shooting device at the same time; in addition to capturing a two-dimensional image, the spectral data collected by the spectral camera can be used to analyze terrain, landforms, and the like in order to construct a complex map model.
Fig. 1 shows a schematic diagram of a mobile terminal 100. The mobile terminal 100 may include a body 110, a visible light camera 120, and a spectral camera 130. The visible light camera 120 is internally provided with an image sensor for capturing RGB images; the spectral camera 130 is internally provided with a spectral sensor for collecting spectral data, and an image may be generated based on the spectral data. The visible light camera 120 and the spectral camera 130 are located at different positions, so parallax exists when the same scene is photographed and sampled, and the image data and the spectral data are difficult to correspond spatially. For example, for different regions in an image, the corresponding spectral data cannot be determined, so information obtained by analyzing the spectral data cannot be combined with the image.
In view of the above problems, exemplary embodiments of the present disclosure provide a calibration method, which can achieve precise calibration of an image sensor and a spectrum sensor array.
The following describes an image sensor and a spectrum sensor array.
An image sensor is a sensor that converts an optical signal into an electrical signal and performs imaging by quantitatively characterizing the optical signal. The present disclosure does not limit the specific type of the image sensor, which may be, for example, a CMOS (Complementary Metal Oxide Semiconductor) sensor or a CCD (Charge Coupled Device) sensor.
The image sensor is usually used in conjunction with an image filter, which may also be considered a component of the image sensor. The image filter is formed by arranging monochromatic filters in an array and may be positioned on the incident light path of the image sensor, so that the image sensor receives the monochromatic light passing through the image filter. For example, the image filter may be composed of RGB monochromatic filters, such as a Bayer filter, for filtering out light in the three different spectral ranges of R, G, and B (i.e., red, green, and blue monochromatic light). The image sensor generates an image by sensing the optical signal passing through the image filter; the image may be an original image such as a RAW image, or an RGB image or YUV image processed by an ISP (Image Signal Processor).
Generally, an image sensor is formed by arranging a certain number of photosensitive elements in an array, where each photosensitive element corresponds to a pixel of a first image. The number of photosensitive elements may represent the resolution of the image sensor, for example, the photosensitive elements are arranged in an H × W array, H representing the number of rows and W representing the number of columns, the resolution of the image sensor may be H × W, and the size of the generated first image is also H × W, where H represents the image height and W represents the image width. Illustratively, H is 3000 and W is 4000.
The spectral sensor is also a sensor that converts an optical signal into an electrical signal; it may be, for example, a CMOS or CCD sensor, of the same type as the image sensor or of a different type. The spectral sensor is usually used in combination with a spectral splitter, which may also be considered a component of the spectral sensor. The spectral splitter separates light of specific wavebands from the incident light and may be positioned on the incident light path of the spectral sensor, so that the spectral sensor receives the light of specific wavebands after it passes through the spectral splitter and can thereby sense spectral data. In contrast, an image sensor typically senses only red, green, and blue monochromatic light in the visible range, while a spectral sensor senses a greater variety of wavelength bands over a larger spectral range (e.g., 350-1000 nm, covering the ultraviolet to infrared bands). The number of bands into which the spectral splitter separates light may be referred to as the number of channels of the spectral splitter or of the spectral sensor.
The spectral splitter may be an optical device such as a filter or a prism. Taking a filter as an example, in an embodiment each spectral splitter may include L optical filters with different peak wavelengths (or center wavelengths), so that the incident light is separated into L different wavelength bands after passing through the spectral splitter; correspondingly, each spectral sensor may include L photosensitive elements, which respectively sense the optical signals filtered by the corresponding L optical filters in the spectral splitter and obtain response data of L channels, that is, spectral data of L wavelength bands. If L is 1, i.e., the number of channels of the spectral sensor is 1, the spectral sensor is a single-spectrum sensor; if L is a positive integer not less than 2, i.e., the number of channels of the spectral sensor is greater than or equal to 2, the spectral sensor is a multispectral sensor.
Fig. 2 shows a schematic diagram of a spectral splitter and a corresponding spectral sensor. The number of channels of the spectral splitter is 3 × 4: it includes 12 filters arranged in a 3 × 4 array, denoted C1 to C12, which filter light of channels C1 to C12. The peak wavelength and full width at half maximum (FWHM) of the light of each channel are listed in Table 1, covering 12 important bands in the range of 350 to 1000 nm. The spectral sensor comprises 12 photosensitive elements arranged in a 3 × 4 array, denoted Z1 to Z12, in one-to-one correspondence with the optical filters C1 to C12 of the spectral splitter. Incident light passes through the spectral splitter to form the 12 optical signals of different wavebands shown in Table 1, and Z1 to Z12 respectively receive these 12 optical signals to obtain the response data of 12 channels, namely the spectral data.
TABLE 1
Channel | Peak wavelength (nm) | Full width at half maximum (nm)
C1 | 395 | 20
C2 | 405 | 30
C3 | 425 | 22
C4 | 440 | 36
C5 | 475 | 42
C6 | 515 | 40
C7 | 550 | 35
C8 | 596 | 46
C9 | 640 | 50
C10 | 690 | 55
C11 | 745 | 60
C12 | 855 | 54
The spectral sensor array is a device formed by arranging Z spectral sensors in an array. Z is a positive integer not less than 2, indicating that the spectral sensor array comprises at least two spectral sensors; Z may be regarded as the resolution of the spectral sensor array. Each spectral sensor outputs the spectral data of one detection area in the shooting scene. A detection area is a local area of the scene: during shooting or spectral data collection, the light generated by each detection area (including light directly emitted by a light source in the detection area, light reflected or refracted by object surfaces in the detection area, and the like) is incident on one spectral sensor in the spectral sensor array, so that this spectral sensor senses the split optical signal generated by the detection area and obtains the spectral data of that detection area. One detection area may be regarded as one pixel of spectral data, and the Z spectral sensors output spectral data of Z pixels. The spectral sensor array therefore enables refined, region-by-region detection of spectral data, and the larger the value of Z, the finer the spectral data.
For example, Z = m × n indicates that the Z spectral sensors in the spectral sensor array are arranged in an m × n array, where m is the number of rows and n the number of columns. Fig. 3 shows a schematic layout of a spectral sensor array with m = 6 and n = 8, i.e., the array comprises 48 spectral sensors. Each spectral sensor may in turn comprise 3 × 4 photosensitive elements to output response data of 12 channels.
Typically, Z < H × W, i.e., the resolution of the spectral sensor array is lower than the resolution of the image sensor. The image sensor is used for imaging, and a high-resolution image sensor can generate a high-definition target image; the spectral sensor array is used for detecting spectral data, which does not need to be refined to the level of individual image pixels. Under a conventional calibration method, the pixels of the image sensor would have to be matched with the pixels of the spectral sensor array; however, each pixel of the spectral sensor array actually represents an area, so a pixel of the image sensor can only be matched to a sub-pixel position within a pixel of the spectral sensor array, and the accuracy of the calibration result is low. For example, if the resolution of the image sensor is 3000 × 4000 and the resolution of the spectral sensor array is 6 × 8, matching 3000 × 4000 pixels with 6 × 8 sub-pixels obviously makes it difficult to obtain an accurate matching result, and hence accurate calibration is difficult. The calibration method in the present exemplary embodiment can achieve accurate calibration of an image sensor and a spectral sensor array with a very large resolution difference.
The calibration method is described below with reference to fig. 4. Referring to fig. 4, the flow of the calibration method may include the following steps S410 to S440:
step S410, acquiring a first image acquired by an image sensor for a scene containing K light sources and first spectral data of Z detection areas in the scene acquired by a spectral sensor array; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2;
step S420, determining projection positions of the K light sources in the first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor;
step S430, determining illumination quantification values of Z sensing areas in a second plane based on the first spectrum data of the Z detection areas, and determining projection positions of K light sources in the second plane according to distribution of the illumination quantification values of the Z sensing areas to obtain K second positions; the second plane is the plane of the spectral sensor array; the sensing area is a projection of the detection area in a second plane;
step S440, determining calibration parameters between the first plane and the second plane by using the mapping relation between the K first positions and the K second positions.
Based on the method, a scheme for calibrating the image sensor and the spectrum sensor array is provided. The method comprises the steps of setting at least 4 light sources in a scene, determining the projection position of each light source on a first plane corresponding to an image sensor according to a first image acquired by the image sensor on the scene, determining the projection position of each light source on a second plane corresponding to a spectrum sensor array according to first spectrum data acquired by the spectrum sensor array on the scene, and obtaining accurate calibration parameters based on the mapping relation between the projection positions on the two planes. Furthermore, point-to-point accurate mapping between the image and the spectrum data can be realized by utilizing the calibration parameters, so that the image data and the spectrum data can be accurately combined.
Each step in fig. 4 is specifically explained as follows.
Referring to fig. 4, in step S410, a first image acquired by an image sensor for a scene containing K light sources and first spectral data of Z detection areas in the scene acquired by a spectral sensor array are acquired; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2.
In this exemplary embodiment, K light sources may be arranged in the scene to form a shooting environment dedicated to calibrating the image sensor and the spectral sensor array, which may be referred to as a calibration environment. In an embodiment, the scene may be a pure color scene or a scene close to a pure color scene, such as a pure color curtain, a wall, and the like, so that an object with a complex color in the scene may be prevented from interfering with the calibration result. The larger the contrast between the color of the scene and the color of the light source, the more favorable it is to achieve accurate calibration, for example, a scene with darker color can be used.
Generally, at least 4 matching point pairs are needed for image calibration, and if one matching point pair is provided for each light source, at least 4 light sources need to be set, so that K is a positive integer not less than 4. In one embodiment, there may be differences in color, color temperature, brightness, etc. between the K light sources, which facilitates distinguishing different light sources in the first image and the first spectral data, and is beneficial to improving the accuracy of the calibration result.
Under the condition that the calibration environment is built, an image sensor can be adopted to acquire an image of a scene to obtain a first image. The first image is an image acquired during calibration, which is distinguished from the second image below. The first image may be an original image, such as a RAW image, or may be an RGB image or a YUV image processed by an ISP. Meanwhile, the spectrum sensor array is used for collecting the spectrum data of the scene, specifically, light rays emitted by a light source in the scene are received by the spectrum sensor array, the light rays emitted by the light source irradiate the surface of an object in the scene to form reflected and refracted light rays, and the reflected and refracted light rays are also received by the spectrum sensor array to form first spectrum data. The first spectral data is spectral data collected during calibration, as distinguished from the second spectral data below.
FIG. 5 shows a schematic diagram of a calibration environment. Referring to fig. 5, 4 light sources A0, B0, C0, D0 may be disposed on a wall, and the mobile terminal 100 shown in fig. 1 may be placed at a fixed distance from the wall, which may be the focal distance of the visible light camera 120 or another distance, such that the field of view of the visible light camera 120 covers the 4 light sources at the same time. The first image is then captured with the visible light camera 120 and the first spectral data are collected with the spectral camera 130. In an embodiment, the positions of the 4 light sources, the focal length of the visible light camera 120, or the distance between the mobile terminal 100 and the wall may be adjusted so that the 4 light sources fall in the upper-left, upper-right, lower-left, and lower-right areas of the first image; this avoids mutual interference caused by the 4 light sources being too close together and helps improve the accuracy of the calibration result.
In one embodiment, the above-mentioned acquiring the first image acquired by the image sensor for the scene containing the K light sources and the first spectral data of the Z detection areas in the scene acquired by the spectral sensor array may include the following two ways.
The first way: with the K light sources turned on simultaneously, a first image is acquired of the scene by the image sensor, and one group of first spectral data is acquired of the scene by the spectral sensor array, the group comprising the first spectral data of the Z detection areas in the scene.
In this way, the information of the K light sources can be obtained in a single acquisition rather than in multiple acquisitions. This way is suitable for cases where the light sources have little influence on one another, for example when the light sources are far apart, their color temperatures or brightnesses differ greatly, or their light intensities are low.
The second way: with each light source turned on alone in turn, the first image is acquired of the scene K times by the image sensor, and the first spectral data are acquired of the scene K times by the spectral sensor array, each acquisition comprising the first spectral data of the Z detection areas in the scene.
First, the first light source is turned on alone with the other light sources turned off, and the first image and the first spectral data are acquired once under this condition; then the second light source is turned on alone with the other light sources turned off, and the first image and the first spectral data are acquired once again; and so on, until the first image and the first spectral data have been acquired K times. In the second way, the first image and the first spectral data acquired each time represent the information of only one light source, and the information of the K light sources is obtained through K acquisitions. This way is suitable for cases where the light sources influence one another to a greater degree, for example when the light sources are close together, their color temperatures or brightnesses are similar, or their light intensities are high; it avoids mutual interference between different light sources.
It should be understood that the first or the second way can be selected according to the specific situation of the light sources and the scene. The present disclosure does not limit the number of first images and groups of first spectral data.
With continuing reference to fig. 4, in step S420, projection positions of the K light sources in the first plane are determined according to the first image to obtain K first positions; the first plane is the plane of the image sensor.
The first plane may be a cross section of the image sensor in the optical path direction, i.e., an imaging plane of the image sensor. The first image is a projection of a scene containing K light sources in a first plane, the first image also including the projections of the K light sources in the first plane. The positions of the K light sources in the first image, i.e. the projection positions in the first plane.
In the first image, the brightness at a light source position is generally high. In one embodiment, the positions of the K light sources may therefore be determined from the brightness distribution in the first image. For example, the first image may be converted into a grayscale image, in which the gray value of each pixel represents the brightness of the corresponding pixel in the first image; gray peak points, i.e., pixels whose gray value is higher than that of their neighboring pixels, are detected in the grayscale image, and the positions of the gray peak points are taken as the projection positions of the light sources in the first plane.
In an embodiment, the determining the projection positions of the K light sources in the first plane according to the first image to obtain K first positions may include:
converting the first image into a gray level image, and performing binarization processing on the gray level image to obtain a binarized image;
k connected regions are extracted from the binary image, and the central position of each connected region is determined to obtain K first positions.
The first image may be an image in RGB format, and the first image is converted into a grayscale image, so that binarization processing is facilitated, and the amount of image data is reduced. The present disclosure does not limit the specific manner of the binarization processing. For example, a fixed threshold may be used, and since the color contrast between the light source and the background in the scene is large, a fixed threshold (e.g. 128) close to the gray level middle value of the light source and the background may be set, and the binarization processing may be implemented by using the threshold. An adaptive threshold may also be used, such as an adaptive threshold determined from the distribution of gray values in a gray image, by which binarization processing is implemented.
In one embodiment, the image may be denoised and contrast enhanced before the binarization process is performed. The image denoising can remove abnormal pixel values in the gray level image, reduce the interference of the abnormal pixel values to the binarization processing, and can perform image denoising by adopting modes such as median filtering, NLM (Non-Local Mean) filtering and the like. The Contrast enhancement can make the edge features in the gray scale image more prominent, which is convenient for high-quality binarization processing, for example, a CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm can be used for Contrast enhancement.
After obtaining the binary image, K connected regions are extracted from the binary image. The connected regions can be continuous regions with gray values of 1, one connected region can be regarded as an illumination region of one light source, and the K connected regions correspond to the K light sources. The center position (e.g., centroid, geometric center) of each connected region is then determined, and each center position is the projection position of each light source. Thereby obtaining K first positions.
By converting the first image into a binarized image and extracting the connected regions, relatively accurate light source illumination regions can be obtained; taking the center position of each connected region as the projection position of the corresponding light source then ensures that the projection position accurately reflects the center of the illumination region.
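As a concrete illustration of the above steps, the following sketch locates the K first positions with OpenCV; the function name, the fixed threshold, and the choice of median filtering and CLAHE are assumptions made for illustration rather than requirements of the method.

```python
import cv2
import numpy as np

def find_first_positions(first_image_bgr, k=4, threshold=128):
    """Return the center positions (x, y) of the K connected regions assumed to be light sources."""
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                                          # optional denoising
    gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)  # optional contrast enhancement
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)      # fixed-threshold binarization
    # Connected-region extraction; keep the K largest foreground regions as the light source regions.
    _, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1] + 1               # skip background label 0
    return [tuple(centroids[i]) for i in order[:k]]
```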
With continuing reference to fig. 4, in step S430, determining illumination quantized values of Z sensing areas in the second plane based on the first spectral data of the Z detection areas, and determining projection positions of the K light sources in the second plane according to a distribution of the illumination quantized values of the Z sensing areas to obtain K second positions; the second plane is the plane of the spectral sensor array; the sensing area is a projection of the detection area in a second plane.
The second plane may be a cross section of the spectral sensor array along the optical path direction, which may be regarded as an imaging plane of the spectral sensor array. Z detection zones in the scene are projected onto the second plane to form Z sensing zones. Each sensing region is the sensing region of each spectral sensor in the array of spectral sensors.
The first spectral data are capable of characterizing illumination intensities of light rays in the scene at different wavelength bands, and therefore an illumination quantification value may be determined based on the first spectral data. The illumination quantized value can be a quantized value of the illumination intensity of a certain or a plurality of wave bands, and can also be a comprehensive quantized value of the illumination intensity of the whole wave band. Each spectral sensor may detect first spectral data of a detection region in a scene, which may be converted into an illumination quantification value of a corresponding sensing region.
In one embodiment, the first spectral data of the Z detection areas may each be converted into a gray value, and these gray values are used as the illumination quantification values of the Z sensing areas. For example, an average or weighted average of the first spectral data over the different wavelength bands of each detection area may be calculated to obtain a gray value. Alternatively, the first spectral data of each detection area may be mapped to the CIELab color space, the data of the different wavelength bands combined to obtain RGB data, and the RGB data converted into a gray value.
In one embodiment, if the light sources in the scene are monochromatic, the data at the light source wavelengths can be extracted from the first spectral data to serve as the illumination quantification values. Illustratively, if the wavelengths of the K monochromatic light sources are λ1, λ2, …, λK, the spectral data at these K wavelengths are extracted from the first spectral data of each detection area to serve as the illumination quantification value of the corresponding sensing area.
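A minimal sketch of turning each spectral sensor's response into an illumination quantification value might look as follows; the channel wavelengths follow Table 1, while the function names and the simple averaging scheme are illustrative assumptions.

```python
import numpy as np

PEAK_WAVELENGTHS_NM = [395, 405, 425, 440, 475, 515, 550, 596, 640, 690, 745, 855]  # Table 1

def illumination_value(spectral_data, weights=None):
    """Average (or weighted-average) the 12-channel response of one detection area into a gray value."""
    spectral_data = np.asarray(spectral_data, dtype=float)
    return float(np.average(spectral_data, weights=weights))

def monochromatic_value(spectral_data, source_wavelength_nm):
    """For a monochromatic light source, take the response of the channel closest to its wavelength."""
    idx = int(np.argmin([abs(w - source_wavelength_nm) for w in PEAK_WAVELENGTHS_NM]))
    return float(spectral_data[idx])
```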
Generally, in the second plane, the higher the illumination quantization value is, the closer the projection position of the light source is. The projection positions of the K light sources in the second plane can be determined from the distribution of the illumination quantization values of the Z sensing areas.
In one embodiment, referring to fig. 6, the determining the projection positions of the K light sources in the second plane according to the distribution of the illumination quantization values of the Z sensing areas to obtain K second positions may include the following steps S610 and S620:
step S610, determining K light source projection areas in the Z sensing areas according to the distribution of the illumination quantization values of the Z sensing areas.
The light source projection area refers to an area including a light source projection position. If the projection position of a certain light source in the second plane is within a certain sensing area, the illumination quantification value of the sensing area is usually higher. Illustratively, K sensing areas may be selected from the Z sensing areas in order of the illumination quantization value from high to low, and used as K light source projection areas.
In an embodiment, the determining K light source projection areas in the Z sensing areas according to the distribution of the illumination quantization values of the Z sensing areas may include:
and determining K sensing areas with the illumination quantization values at the peak values in the Z sensing areas to obtain K light source projection areas.
The sensing area with the illumination quantization value at the peak value refers to the sensing area with the illumination quantization value at the peak value in the neighborhood range. The neighborhood range here may be set to any size, such as 3 × 3, 4 × 4, etc. Taking 3 × 3 as an example, if the illumination quantization value of a certain sensing area is at the peak value in the neighborhood range formed by 3 × 3 sensing areas, the sensing area can be used as a light source projection area.
In one embodiment, the range of the Z sensing regions may be divided into K local ranges according to the distribution of the K light sources, and a sensing region with an illumination quantization value at a peak value is determined in each local range, so as to obtain K light source projection regions.
Based on the above manner of determining the light source projection area by the sensing area in which the illumination quantization value is at the peak value, it is possible to reduce the situation of determining a plurality of light source projection areas for one light source, and more accurately correspond the K light source projection areas to the K light sources.
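The peak-value selection of step S610 could be sketched as below; the use of scipy.ndimage.maximum_filter and the tie-breaking by peak strength are implementation assumptions, not part of the patent text.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def light_source_projection_areas(illum, k=4):
    """illum: (m, n) array of illumination quantification values; return up to K (row, col) peak areas."""
    is_peak = illum == maximum_filter(illum, size=3)   # value is at a peak within its 3x3 neighborhood
    rows, cols = np.nonzero(is_peak)
    order = np.argsort(illum[rows, cols])[::-1][:k]    # keep the K strongest peaks
    return list(zip(rows[order].tolist(), cols[order].tolist()))
```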
By determining K light source projection areas, the light source projection positions are actually roughly determined, and the precise projection positions of the light sources can be further determined in each light source projection area.
Step S620, determining the light source accurate projection position in each light source projection area according to the distribution of the illumination quantization values of each light source projection area and the plurality of sensing areas adjacent to the light source projection area, so as to obtain K second positions.
After the light source projection area is obtained, it cannot be determined at which precise position within the light source projection area the light source projection position is located. Generally, if the light source projection position is located at the center of the light source projection area, the illumination quantization values in the sensing areas adjacent to the light source projection area may be uniformly distributed, for example, the illumination quantization value of the sensing area adjacent to the left side of the light source projection area should be equal to the illumination quantization value of the sensing area adjacent to the right side thereof. If the illumination quantitative value of the sensing area adjacent to the left side of the light source projection area is larger than that of the sensing area adjacent to the right side of the light source projection area, it is indicated that the light source projection position is in a position which is closer to the left than the center in the light source projection area. Therefore, the accurate projection position of the light source in each light source projection area can be determined according to the distribution of the illumination quantization values of each light source projection area and a plurality of sensing areas adjacent to the light source projection area.
In one embodiment, the determining the precise projection position of the light source within each light source projection area according to the distribution of the illumination quantization values of each light source projection area and the plurality of sensing areas adjacent to the light source projection area may include the following steps:
for any light source projection area i, forming a 2 × 2 calculation region from the light source projection area i and three adjacent sensing areas;
calculating the position offset according to the distribution proportion of the illumination quantized values along the first axis of the second plane and the distribution proportion of the illumination quantized values along the second axis of the second plane in the calculation region;
and determining the accurate projection position of the light source in the light source projection area i based on the basic position and the position offset of the light source projection area i.
Referring to fig. 7, the second plane may include 6 × 8 sensing regions, and the sensing region located in the xth row and yth column is referred to as sensing region xy. The first axis of the second plane is the X axis, and the second axis is the Y axis. Assume that the scene contains 4 light sources A0, B0, C0, D0, whose projections on the second plane are A2, B2, C2, D2, respectively. Before the precise projection positions are determined, 4 light source projection areas are determined according to step S610; for example, one of them is sensing area 22. Sensing area 22 together with its adjacent sensing areas 23, 32, and 33 forms a 2 × 2 calculation region of 4 sensing areas. The base positions of the 4 sensing areas are (XL, YT), (XL+1, YT), (XL, YT+1), (XL+1, YT+1), which may be the coordinates of the center points of the sensing areas. The illumination quantification values of the 4 sensing areas are gray22, gray23, gray32, and gray33, respectively. The distribution ratio of the illumination quantification values along the X axis, which may be, for example, (gray22 + gray32)/(gray22 + gray23 + gray32 + gray33) or (gray23 + gray33)/(gray22 + gray23 + gray32 + gray33), can be calculated, and the position offset along the X axis is determined from this ratio; similarly, the distribution ratio of the illumination quantification values along the Y axis, which may be, for example, (gray22 + gray23)/(gray22 + gray23 + gray32 + gray33) or (gray32 + gray33)/(gray22 + gray23 + gray32 + gray33), can be calculated, and the position offset along the Y axis is determined from it. The position offsets are then added to the coordinates of the base position of sensing area 22 to obtain the precise projection position of the light source within sensing area 22.
Illustratively, the precise projection position A2 of the light source can be calculated with the formulas given as images in the original publication, in which x_A2 and y_A2 denote the X-axis coordinate and the Y-axis coordinate of the precise projection position A2.
In one embodiment, suppose the light source projection area i is sensing area jk, i.e., the sensing area located in the jth row and kth column of the second plane. Its three adjacent sensing areas to the right, below, and to the lower right, namely sensing area j(k+1), sensing area (j+1)k, and sensing area (j+1)(k+1), are taken, and the 4 sensing areas form a 2 × 2 calculation region. The illumination quantification values of the 4 sensing areas are gray_jk, gray_j(k+1), gray_(j+1)k, and gray_(j+1)(k+1), and their base positions are (k, j), (k+1, j), (k, j+1), and (k+1, j+1), respectively, where the base position of each sensing area is taken to be the coordinate of its lower-right corner point. The precise projection position of the light source within sensing area jk can then be calculated by formulas (2) and (3) (given as images in the original publication), subject to
Δx, Δy ∈ [−1, 1]   (4)
where Δx and Δy are the position offsets along the X axis and the Y axis, respectively. The position offsets are calculated by formulas (3) and (4), and the precise projection position (x_jk, y_jk) within sensing area jk is then calculated by formula (2). Formula (3) takes into account that the attenuation of light source illumination is proportional to the square of the distance, so the calculated position offset is accurate.
With continued reference to fig. 4, in step S440, calibration parameters between the first plane and the second plane are determined by using mapping relationships between the K first positions and the K second positions.
And the calibration parameters between the first plane and the second plane are the calibration parameters between the image sensor and the spectrum sensor array. The calibration parameters may be used to register the first plane and the second plane, thereby enabling point-to-point mapping between the first image and the first spectral data.
Assuming that a point in three-dimensional space has a projection point (x1, y1) on the first plane and a projection point (x2, y2) on the second plane, (x1, y1) and (x2, y2) form a matching point pair and satisfy the following relationship:

$$s\begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = H\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \tag{5}$$

where s is a scale factor and H denotes the calibration parameters between the first plane and the second plane. H may be a 3 × 3 matrix, also called a homography matrix. Further, there is the following relationship:

$$s\begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \tag{6}$$

According to the correspondence between plane coordinates and homogeneous coordinates, the following formula (7) can further be obtained:

$$x_2 = \frac{h_{11}x_1 + h_{12}y_1 + h_{13}}{h_{31}x_1 + h_{32}y_1 + h_{33}}, \qquad y_2 = \frac{h_{21}x_1 + h_{22}y_1 + h_{23}}{h_{31}x_1 + h_{32}y_1 + h_{33}} \tag{7}$$

which can be further transformed into:

$$\begin{aligned} h_{11}x_1 + h_{12}y_1 + h_{13} - h_{31}x_1x_2 - h_{32}y_1x_2 - h_{33}x_2 &= 0 \\ h_{21}x_1 + h_{22}y_1 + h_{23} - h_{31}x_1y_2 - h_{32}y_1y_2 - h_{33}y_2 &= 0 \end{aligned} \tag{8, 9}$$

When solving the homography matrix H, a constraint such as h33 = 1 (or ||H|| = 1) is typically added, so H has 8 degrees of freedom. As can be seen from equations (8) and (9), one matching point pair (x1, y1) and (x2, y2) provides two equations; therefore, 4 non-collinear matching point pairs (meaning that no three of the projection points on a plane lie on the same straight line) are needed to provide 8 equations, from which H can be solved to obtain the calibration parameters.
Referring to fig. 8, the scene includes 4 light sources A0, B0, C0, D0, whose projections on the first plane are A1, B1, C1, D1, respectively, and the projections on the second plane are A2, B2, C2, D2, respectively, to form four sets of matching point pairs (A1, A2), (B1, B2), (C1, C2), (D1, D2), so that the calibration parameters between the first plane and the second plane can be calculated, thereby achieving the calibration between the image sensor and the spectrum sensor array.
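A minimal sketch of step S440, assuming OpenCV is available: solving the homography H from the four matching point pairs (A1, A2), (B1, B2), (C1, C2), (D1, D2). The function name and the coordinate values in the usage example are made up for illustration.

```python
import cv2
import numpy as np

def solve_calibration(first_positions, second_positions):
    """first_positions, second_positions: (4, 2) arrays of (x, y); returns the 3x3 homography H."""
    src = np.asarray(first_positions, dtype=np.float32)
    dst = np.asarray(second_positions, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)   # least-squares solve of the 8 equations; h33 normalized to 1
    return H

# Illustrative positions only: A1..D1 on the image plane, A2..D2 on the spectral-array plane.
H = solve_calibration([[1500, 800], [2600, 850], [1450, 2200], [2550, 2250]],
                      [[2.3, 1.7], [5.8, 1.8], [2.2, 4.4], [5.7, 4.5]])
print(H)
```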
Exemplary embodiments of the present disclosure also provide an image processing method for performing White Balance processing on an image, such as an AWB (Auto White Balance) function, to improve the visual effect of the image.
Fig. 9 shows a flow of an image processing method, which may include the following steps S910 to S930:
step S910, acquiring a second image acquired by the image sensor and Z groups of second spectral data acquired by the spectral sensor array; the spectral sensor array comprises Z spectral sensors; z is a positive integer not less than 2.
The second image is an image acquired of a certain shooting object during image processing and may be regarded as the image to be processed. The second spectral data are the spectral data acquired for the same photographic subject during image processing; they comprise Z groups of data, each group being the local spectral data of one detection area of the photographic subject.
Step S920, obtaining calibration parameters predetermined according to the calibration method in the exemplary embodiment, determining a corresponding relationship between Z groups of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determining white balance parameters of the sub-regions corresponding to the second spectral data according to each group of second spectral data, so as to obtain the white balance parameters of each sub-region.
According to the calibration parameters between the image sensor and the spectral sensor array, the position of each pixel of the second image in the second plane can be determined, and hence which of the Z groups of second spectral data corresponds to that pixel. The pixels corresponding to the same group of second spectral data form one sub-region of the second image, so the second image can be divided into Z sub-regions, and the Z groups of second spectral data correspond one-to-one with the Z sub-regions.
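Assuming the calibration matrix H maps first-plane coordinates to second-plane coordinates as above, the pixel-to-sub-region assignment could be sketched as follows; the function name and the flooring convention for turning a plane coordinate into a grid index are assumptions.

```python
import numpy as np

def pixel_to_sensing_area(H, x1, y1):
    """Project image pixel (x1, y1) onto the spectral-array plane via H and return the (row, col)
    index of the sensing area (and hence the group of second spectral data) it falls in."""
    p = H @ np.array([x1, y1, 1.0])
    x2, y2 = p[0] / p[2], p[1] / p[2]
    return int(np.floor(y2)), int(np.floor(x2))
```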
Further, a white balance parameter of a corresponding one of the sub-regions may be determined according to each set of the second spectral data. The second spectrum data can reflect the illumination condition of the sub-area, and a proper white balance parameter can be adopted according to the illumination condition. The white balance parameters may include Rgain parameters, bgain parameters, and the like.
In an embodiment, the determining the white balance parameter of the sub-region corresponding to the second spectrum data according to each group of the second spectrum data may include:
respectively determining the color temperature of the sub-region corresponding to the second spectrum data according to each group of second spectrum data, and taking the white balance parameter corresponding to the color temperature as the white balance parameter of the sub-region; or processing each group of second spectrum data by utilizing a pre-trained white balance parameter model to obtain white balance parameters of a sub-region corresponding to the second spectrum data.
If the white balance parameters are determined according to the color temperatures, the white balance parameters corresponding to different standard color temperatures can be configured in advance. For example, the white balance parameters corresponding to a plurality of standard light sources may be marked, and the corresponding relationship between the color temperature of the standard light source and the white balance parameters may be recorded to form a white balance parameter sequence. Under the condition of determining the color temperature of the subregion, if the color temperature of the subregion is the standard color temperature, the white balance parameter corresponding to the standard color temperature can be directly used as the white balance parameter of the subregion, and if the color temperature of the subregion is not the standard color temperature, the white balance parameter corresponding to the adjacent standard color temperature can be interpolated in the white balance parameter sequence to obtain the white balance parameter of the subregion.
The white balance parameter model is used for processing the input spectral data and outputting corresponding white balance parameters. The spectrum data and the white balance parameters serving as samples can be acquired in a manual labeling mode, and the samples are used for performing supervised learning on the white balance parameter model to obtain the trained white balance parameter model. Under the condition of obtaining the second spectrum data of each sub-region, the white balance parameter models can be respectively input to obtain the white balance parameters of the corresponding sub-regions.
Based on either the white balance parameters associated with color temperatures or the white balance parameter model, a white balance parameter can be determined for each sub-region. The parameter is adapted to the local illumination of the sub-region, so its accuracy is high.
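For the color-temperature route, the per-sub-region gains could be obtained by interpolating a pre-calibrated table of standard illuminants, as sketched below; the table values and names are placeholders, not calibration data from the patent.

```python
import numpy as np

# (color temperature in K, Rgain, Bgain) for several standard light sources -- placeholder values.
WB_TABLE = [(2800, 1.20, 2.10), (4000, 1.45, 1.75), (5000, 1.60, 1.55), (6500, 1.85, 1.30)]

def white_balance_from_cct(cct):
    temps  = np.array([t for t, _, _ in WB_TABLE], dtype=float)
    rgains = np.array([r for _, r, _ in WB_TABLE], dtype=float)
    bgains = np.array([b for _, _, b in WB_TABLE], dtype=float)
    # np.interp interpolates between neighbouring standard color temperatures and
    # clamps to the endpoints outside the calibrated range.
    return float(np.interp(cct, temps, rgains)), float(np.interp(cct, temps, bgains))
```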
Step S930, performing white balance processing on the second image by using the white balance parameter of each sub-region to obtain a target image.
The target image is an image of the second image subjected to white balance processing, and the visual effect of the target image is better than that of the second image. The target image may be output as a final image, or other image processing, such as image color correction, image stylization processing, etc., may be performed on the basis of the target image to obtain the final image, which is not limited in this disclosure.
In one embodiment, the white balance parameter of each sub-region may be used to perform white balance processing on each sub-region, so as to obtain the target image.
In one embodiment, before performing the white balance processing on the second image by using the white balance parameter of each sub-region, the image processing method may further include:
acquiring a global white balance parameter of a second image;
determining a white balance numerical range according to the global white balance parameter and the floating coefficient;
and if the white balance parameter of any sub-area exceeds the white balance numerical range, correcting the white balance parameter of the sub-area to be the boundary value of the white balance numerical range.
The global white balance parameter is a white balance parameter suitable for the second image as a whole. In one embodiment, the second spectral data of the Z sub-regions may be aggregated, for example by computing an average or a weighted average, to obtain global spectral data of the second image, and the global white balance parameter is then determined from the global spectral data. In one embodiment, the global color temperature of the second image can be determined from the global spectral data, and the white balance parameter corresponding to the global color temperature is used as the global white balance parameter; alternatively, the global spectral data of the second image can be processed with a pre-trained white balance parameter model to obtain the global white balance parameter.
The difference between the white balance parameter of each sub-region and the global white balance parameter of the second image should not be too large; based on this, the white balance numerical range can be determined from the global white balance parameter and the floating coefficient. The floating coefficient represents the allowable floating range above and below the global white balance parameter. For example, if the floating coefficient is 0.2, the white balance numerical range may be [0.8GA, 1.2GA], where GA denotes the global white balance parameter.
If the white balance parameter of a sub-region exceeds the white balance numerical range, it is corrected to a boundary value of the range: a parameter above the upper boundary value is corrected to the upper boundary value, and a parameter below the lower boundary value is corrected to the lower boundary value, as in the sketch below.
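A minimal sketch of the range derivation and correction, assuming the white balance parameter is handled as a scalar gain per sub-region and the floating coefficient acts symmetrically; the function and variable names are illustrative.

```python
import numpy as np

def clamp_wb_to_range(wb_per_subregion, global_wb, floating_coeff=0.2):
    """Correct sub-region white balance parameters to [(1-c)*GA, (1+c)*GA].

    A parameter above the upper boundary is corrected to the upper boundary
    value, and a parameter below the lower boundary is corrected to the lower
    boundary value.
    """
    lower = (1.0 - floating_coeff) * global_wb
    upper = (1.0 + floating_coeff) * global_wb
    return np.clip(wb_per_subregion, lower, upper)

# Example: GA = 1.5 and a floating coefficient of 0.2 give the range [1.2, 1.8];
# 2.1 is corrected to 1.8 and 1.0 is corrected to 1.2.
print(clamp_wb_to_range(np.array([1.4, 2.1, 1.0]), global_wb=1.5))
```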
By correcting white balance parameters that exceed the white balance numerical range, excessive differences between the white balance parameters of the sub-regions are avoided, abrupt changes of the white balance effect between different sub-regions are reduced, and the target image after white balance processing looks more real and natural.
In one embodiment, before the white balance processing is performed, the white balance parameters of the sub-regions may be smoothed, which likewise reduces abrupt changes of the white balance effect between different sub-regions and makes the target image after white balance processing look more real and natural. The smoothing method includes, but is not limited to, mean filtering, Gaussian filtering, median filtering, or a combination of several filtering methods. For example, a 3 × 3 (or other size) sliding window may be set to cover 3 × 3 sub-regions in the second image; the white balance parameters of the sub-regions in the window are median filtered and then mean filtered, and the window is slid step by step until the entire second image is traversed, thereby smoothing the white balance parameters of all sub-regions, as sketched below.
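The sketch below smooths an m × n grid of per-sub-region gains with a median filter followed by a mean filter, mirroring the combination mentioned above; the 3 × 3 window size and the use of SciPy are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def smooth_wb_grid(wb_grid, window=3):
    """Smooth an (m, n) grid of per-sub-region white balance parameters.

    Median filtering first suppresses outlier sub-regions, then mean filtering
    softens the remaining transitions so that neighboring sub-regions do not
    change abruptly after white balance processing.
    """
    smoothed = median_filter(wb_grid, size=window, mode="nearest")
    return uniform_filter(smoothed, size=window, mode="nearest")

# Example: a 3 x 4 grid of Rgain values with one outlier sub-region.
grid = np.full((3, 4), 1.5)
grid[1, 2] = 2.4
print(smooth_wb_grid(grid))
```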
In an embodiment, the white balance processing on the second image by using the white balance parameter of each sub-region to obtain the target image may include the following steps:
taking the white balance parameter of each sub-area as the white balance parameter of a reference point in each sub-area;
the white balance parameter of each pixel point in the second image is obtained by interpolating the white balance parameters of the reference points in the adjacent sub-regions;
and carrying out white balance processing on each pixel point in the second image by adopting the white balance parameter of each pixel point to obtain the target image.
The reference point may be any point within the sub-region, such as its center point. Once the white balance parameter of each reference point is determined, the white balance parameter of each pixel point in the second image can be calculated by interpolation. Specifically, the white balance parameters of several nearby reference points may be interpolated according to the distance between the pixel point and those reference points. Referring to fig. 10, the four sub-regions at the upper left corner of the second image are sub-region 11, sub-region 12, sub-region 21, and sub-region 22, and the reference points in these four sub-regions are R11, R12, R21, and R22, respectively. The second image may be divided into three parts: the corner part, the edge part, and the middle part. For a pixel point in the corner part, there is usually only one nearby reference point; for example, the only reference point near F1 is R11, so the white balance parameter of R11 can be used as the white balance parameter of F1. For a pixel point in the edge part, there are usually two nearby reference points; for example, the reference points near F2 are R11 and R12, and their white balance parameters can be linearly interpolated according to the distances between F2 and R11 and R12 along the width direction of the second image to obtain the white balance parameter of F2. For a pixel point in the middle part, there are usually four nearby reference points; for example, the reference points near F3 are R11, R12, R21, and R22, and their white balance parameters can be bilinearly interpolated according to the distances between F3 and these reference points along the width and height directions of the second image to obtain the white balance parameter of F3. In this way, the white balance parameter of every pixel point in the second image can be determined.
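The sketch below realizes this interpolation, taking the center of each sub-region as its reference point: pixels between reference points are bilinearly interpolated, and pixels in the corner and edge parts (beyond the outermost reference points) reuse the nearest reference values. It assumes m, n ≥ 2 and a scalar parameter per sub-region; it is an approximation of the scheme of fig. 10, not a definitive implementation.

```python
import numpy as np

def per_pixel_wb(wb_grid, height, width):
    """Expand an (m, n) grid of sub-region white balance parameters to (H, W).

    Reference points sit at sub-region centers; each pixel is bilinearly
    interpolated between the bracketing reference points, with replication
    beyond the outermost reference points (corner/edge handling).
    Assumes m >= 2 and n >= 2.
    """
    m, n = wb_grid.shape
    sub_h, sub_w = height / m, width / n
    ref_y = (np.arange(m) + 0.5) * sub_h          # reference point rows (pixel units)
    ref_x = (np.arange(n) + 0.5) * sub_w          # reference point columns
    ys = np.clip(np.arange(height) + 0.5, ref_y[0], ref_y[-1])
    xs = np.clip(np.arange(width) + 0.5, ref_x[0], ref_x[-1])
    iy = np.clip(np.searchsorted(ref_y, ys) - 1, 0, m - 2)
    ix = np.clip(np.searchsorted(ref_x, xs) - 1, 0, n - 2)
    wy = (ys - ref_y[iy]) / sub_h                 # vertical interpolation weights
    wx = (xs - ref_x[ix]) / sub_w                 # horizontal interpolation weights
    top = wb_grid[np.ix_(iy, ix)] * (1 - wx) + wb_grid[np.ix_(iy, ix + 1)] * wx
    bot = wb_grid[np.ix_(iy + 1, ix)] * (1 - wx) + wb_grid[np.ix_(iy + 1, ix + 1)] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]

# Example: expand a 3 x 4 Rgain grid to a 600 x 800 per-pixel gain map.
gains = per_pixel_wb(np.linspace(1.2, 1.8, 12).reshape(3, 4), 600, 800)
print(gains.shape)
```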
Once the white balance parameter of each pixel point is obtained, each pixel point can be white-balanced with its own parameter to obtain the target image. This realizes white balance processing at pixel-level precision and improves the quality of the target image.
Based on the method of fig. 9, when the second spectral data are accurately mapped to the second image through the calibration parameters, the white balance parameters of the corresponding sub-regions are determined from each group of second spectral data, so that the white balance parameters adapt to the illumination conditions of different parts of the shooting scene. White balance processing is then performed on the second image with the white balance parameters of the individual sub-regions to obtain the target image, which improves the white balance effect and the quality of the target image when the illumination conditions differ greatly between different parts of the second image.
Fig. 11 compares the effect of global white balance processing with that of local white balance processing. The left image in fig. 11 shows the effect of white balancing the whole image with the global white balance parameter: because two light sources with very different color temperatures are present in the picture, abrupt color changes appear between the left and right parts of the picture, which harms the visual perception. Such processing does not sufficiently account for the difference between the left and right sides of the image and is therefore not targeted. The right image shows the effect of white balancing with the white balance parameter of each sub-region as in the present exemplary embodiment: different regions of the picture are processed with different parameters, adapting to the differences in light sources, colors, and the like between regions. The processing is thus highly targeted, the transitions between regions are smooth, no abrupt color changes occur, and the image quality is improved.
The calibration method and the image processing method have been described above. In the present exemplary embodiment, the resolution of the spectral sensor array may be m × n (Z = m × n), meaning that the array is composed of m × n spectral sensors arranged in an array. This resolution determines how finely the spectral sensor array detects the illumination condition of the photographic subject. How to arrange the spectral sensor array is explained below from several aspects.
(1) In one embodiment, considering that the illumination intensity of a typical light source attenuates gradually in a circular diffusion pattern, the sensing area of each spectral sensor may be set to be square, which matches the circular diffusion of illumination intensity and facilitates more accurate detection of the spectral data of each detection area in the photographic subject.
(2) In one embodiment, m/n = H/W, i.e., the aspect ratio of the spectral sensor array is made the same as that of the image sensor. For example, if the resolution of the image sensor is 3000 × 4000 and the resolution of the spectral sensor array is 6 × 8, their aspect ratios are the same, which allows a more accurate mapping between the image sensor and the spectral sensor array. In this example, each spectral sensor in the array corresponds to a sensing region in the second plane, each sensing region corresponds to a 500 × 500 sub-region in the first plane, and the 6 × 8 sensing regions correspond one-to-one to 6 × 8 sub-regions in the first plane, as illustrated below.
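As a small, self-contained illustration of this correspondence, the sketch below partitions a 3000 × 4000 image into 6 × 8 sub-regions of 500 × 500 pixels, one per sensing region; the sizes follow the example above and are not a requirement of the method.

```python
def subregion_bounds(height, width, m, n):
    """Yield (row0, row1, col0, col1) for each of the m x n sub-regions.

    With height=3000, width=4000, m=6, n=8, every sub-region is 500 x 500
    and the (i, j)-th sub-region corresponds to the (i, j)-th sensing region.
    """
    sub_h, sub_w = height // m, width // n
    for i in range(m):
        for j in range(n):
            yield i * sub_h, (i + 1) * sub_h, j * sub_w, (j + 1) * sub_w

bounds = list(subregion_bounds(3000, 4000, 6, 8))
print(len(bounds), bounds[0])   # 48 sub-regions; the first is (0, 500, 0, 500)
```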
(3) In general, a photographic subject rarely contains more than 5 light sources. In the most dispersed case, the 5 light sources are distributed at the four corners and the center of the picture, so that the areas with different illumination conditions in the picture can be fully separated by 3 × 3 = 9 sub-regions. In other cases, two or more of the 5 light sources are distributed more densely, and fewer than 9 windows suffice to separate the areas with different illumination conditions. Therefore, to satisfy the requirement of illumination detection when arranging the spectral sensor array, both m and n can be set to positive integers not less than 3, so that the image is divided into at least 9 sub-regions; the spectral data of each sub-region can then be detected, the illumination condition of each sub-region determined, and the locally different illumination conditions in the photographic subject fully captured, which meets practical needs.
In one embodiment, considering that the total number of light sources in the field of view of a shooting scene is generally no more than 10, the resolution of the spectral sensor array may be set to more than 10 to avoid two or more light sources falling into the same sub-region; m × n may therefore be 3 × 4, 6 × 8, 9 × 12, and so on.
(4) The higher the resolution of the spectral sensor array, i.e., the larger m and n, the finer the detection of the local spectral data, which is more beneficial to improving the fineness of the image processing. However, when processing an image, each sub-region in the image needs to calculate white balance parameters, color correction matrices, etc., i.e., the larger m and n are, the larger the amount of computation required for image processing is. Therefore, it is necessary to balance the fineness of image processing and the amount of computation to determine the appropriate resolution.
(5) If the resolution of the spectral sensor array is too high, the sub-regions in the image may become too small and pure-color sub-regions are likely to appear, causing problems such as color non-uniformity and color distortion. For example, in an indoor shooting scene there may be a large pure-color area on a wall or ceiling. The image is divided into m × n sub-regions according to the resolution of the spectral sensor array; if m × n is large, each sub-region is small and its FOV (Field Of View) is small, so a sub-region easily falls entirely within the large pure-color area, which adversely affects illumination detection and the analysis of the spectral data and ultimately leads to problems such as an uneven processing effect over the whole image and color distortion. From this perspective, the resolution of the spectral sensor array should not be too high.
In one embodiment, balancing the above factors (1)-(5), the resolution of the spectral sensor array can be set to 3 × 4 or 6 × 8.
Exemplary embodiments of the present disclosure also provide a calibration apparatus. Referring to fig. 12, the calibration apparatus 1200 may include:
a data acquisition module 1210 configured to acquire a first image acquired by an image sensor of a scene containing K light sources and first spectral data of Z detection regions in the scene acquired by a spectral sensor array; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2;
a first position determining module 1220 configured to determine projection positions of the K light sources in the first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor;
a second position determining module 1230, configured to determine quantized illumination values of Z sensing areas in the second plane based on the first spectral data of the Z detection areas, and determine projection positions of the K light sources in the second plane according to a distribution of the quantized illumination values of the Z sensing areas to obtain K second positions; the second plane is the plane of the spectral sensor array; the sensing area is a projection of the detection area in a second plane;
a calibration parameter determining module 1240, configured to determine the calibration parameters between the first plane and the second plane by using the mapping relationship between the K first positions and the K second positions; one possible form of the calibration parameters is sketched below.
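For concreteness, the sketch below estimates the calibration parameters as a 3 × 3 plane-to-plane homography from the K position correspondences (K ≥ 4 is exactly what such a fit needs). Modeling the parameters as a homography and the example coordinates are assumptions made for illustration; the embodiment does not restrict the form of the calibration parameters.

```python
import cv2
import numpy as np

# Hypothetical correspondences for K = 4 light sources: first-plane positions in
# image-sensor pixels, second-plane positions in spectral-sensor-array units.
first_positions = np.array([[480, 360], [2520, 400], [2480, 2650], [520, 2600]],
                           dtype=np.float32)
second_positions = np.array([[0.9, 0.7], [5.1, 0.8], [5.0, 5.3], [1.1, 5.2]],
                            dtype=np.float32)

# Least-squares homography mapping the first plane onto the second plane.
H, _ = cv2.findHomography(first_positions, second_positions, method=0)

def first_to_second(point, H):
    """Project a first-plane position into the second plane using the calibration."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return x / w, y / w

print(first_to_second((480, 360), H))   # approximately (0.9, 0.7)
```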
In an embodiment, the determining the projection positions of the K light sources in the first plane according to the first image to obtain K first positions includes:
converting the first image into a gray level image, and performing binarization processing on the gray level image to obtain a binarized image;
K connected regions are extracted from the binarized image, and the central position of each connected region is determined to obtain the K first positions; a sketch of this step follows.
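A sketch of this embodiment using OpenCV is given below; the binarization threshold and the assumption that the K largest bright connected regions correspond to the K light sources are illustrative choices rather than requirements of the method.

```python
import cv2
import numpy as np

def light_source_first_positions(first_image_bgr, k):
    """Return the center positions of K connected regions in the first image.

    The first image is converted to grayscale and binarized so that bright
    light-source regions become foreground components; the centroid of each
    component is taken as one first position in the first plane.
    """
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)   # assumed threshold
    num, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background) and keep the K largest components.
    order = np.argsort(stats[1:, cv2.CC_STAT_AREA])[::-1][:k] + 1
    return [tuple(centroids[i]) for i in order]
```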
In an embodiment, the determining the projection positions of the K light sources in the second plane according to the distribution of the illumination quantization values of the Z sensing areas to obtain K second positions includes:
determining K light source projection areas in the Z sensing areas according to the distribution of the illumination quantization values of the Z sensing areas;
and determining the accurate projection position of the light source in each light source projection area according to the distribution of the illumination quantized values of each light source projection area and the plurality of sensing areas adjacent to the light source projection area so as to obtain K second positions.
In an embodiment, the determining K light source projection areas in Z sensing areas according to the distribution of the illumination quantization values of the Z sensing areas includes:
and determining the sensing areas with the K illumination quantization values at the peak value in the Z sensing areas to obtain K light source projection areas.
In an embodiment, the determining the precise projection position of the light source in each light source projection area according to the distribution of the illumination quantization values of each light source projection area and a plurality of sensing areas adjacent to the light source projection area includes:
for any light source projection area i, forming a 2 × 2 calculation area by the light source projection area i and three adjacent sensing areas;
calculating the position offset according to the distribution proportion of the illumination quantized values along the first axis of the second plane and the distribution proportion of the illumination quantized values along the second axis of the second plane in the calculation region;
and determining the accurate projection position of the light source in the light source projection area i based on the basic position of the light source projection area i and the position offset; one way to compute this is sketched below.
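The following sketch shows one way the 2 × 2 calculation area could yield a sub-sensing-area position offset: the illumination quantization values are summed along each axis of the second plane and their distribution proportion shifts the basic position of the light source projection area. The specific weighting is an assumption made for illustration.

```python
import numpy as np

def refined_second_position(base_row, base_col, calc_region):
    """Refine a light source projection position inside a 2 x 2 calculation area.

    calc_region[0, 0] holds the illumination quantization value of the light
    source projection area i; the other entries are its right, lower and
    lower-right neighboring sensing areas. The distribution proportion along
    each axis gives an offset in units of one sensing area.
    """
    calc_region = np.asarray(calc_region, dtype=float)
    total = calc_region.sum()
    offset_col = calc_region[:, 1].sum() / total   # proportion in the second column
    offset_row = calc_region[1, :].sum() / total   # proportion in the second row
    return base_row + offset_row, base_col + offset_col

# Example: the projection area dominates and its right neighbor is moderately lit,
# so the refined position shifts noticeably right and only slightly down.
print(refined_second_position(2, 3, [[0.8, 0.3], [0.1, 0.05]]))
```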
In one embodiment, the acquiring first image acquired by the image sensor for a scene containing K light sources and first spectral data of Z detection areas in the scene acquired by the spectral sensor array includes:
under the condition that K light sources are simultaneously started, acquiring a first image for a scene through an image sensor, and acquiring a group of first spectrum data for the scene through a spectrum sensor array, wherein the group of first spectrum data comprises the first spectrum data of Z detection areas in the scene; or alternatively
Under the condition that each light source is turned on individually in sequence, the first image is acquired K times for the scene through the image sensor, and the first spectral data are acquired K times for the scene through the spectral sensor array, where the first spectral data acquired each time include the first spectral data of the Z detection areas in the scene.
Exemplary embodiments of the present disclosure also provide an image processing apparatus. Referring to fig. 13, the image processing apparatus 1300 may include:
a data acquisition module 1310 configured to acquire a second image acquired by the image sensor and Z sets of second spectral data acquired by the spectral sensor array; the spectral sensor array comprises Z spectral sensors; z is a positive integer not less than 2.
A white balance parameter determining module 1320, configured to acquire calibration parameters predetermined according to the calibration method in the present exemplary embodiment, determine a corresponding relationship between Z groups of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determine white balance parameters of the sub-regions corresponding to the second spectral data according to each group of second spectral data, respectively, to obtain white balance parameters of each sub-region;
a white balance processing module 1330 configured to perform white balance processing on the second image by using the white balance parameter of each sub-region to obtain a target image.
In an embodiment, the determining the white balance parameter of the sub-region corresponding to the second spectrum data according to each group of the second spectrum data includes:
respectively determining the color temperature of the subarea corresponding to the second spectrum data according to each group of second spectrum data, and taking the white balance parameter corresponding to the color temperature as the white balance parameter of the subarea; or
And respectively processing each group of second spectrum data by using a pre-trained white balance parameter model to obtain white balance parameters of the sub-region corresponding to the second spectrum data.
In one embodiment, the white balance processing module 1330 is further configured to:
acquiring a global white balance parameter of the second image before performing white balance processing on the second image by adopting the white balance parameter of each sub-area; determining a white balance numerical range according to the global white balance parameter and the floating coefficient; and if the white balance parameter of any sub-area exceeds the white balance numerical range, correcting the white balance parameter of the sub-area to be the boundary value of the white balance numerical range.
In an embodiment, the performing white balance processing on the second image by using the white balance parameter of each sub-region to obtain the target image includes:
taking the white balance parameter of each sub-area as the white balance parameter of a reference point in each sub-area;
the white balance parameter of each pixel point in the second image is obtained by interpolating the white balance parameters of the reference points in the adjacent sub-regions;
and carrying out white balance processing on each pixel point in the second image by adopting the white balance parameter of each pixel point to obtain the target image.
The specific details of each part of the above apparatuses have been described in detail in the method embodiments and are therefore not repeated here.
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium, which may be implemented in the form of a program product, including program code for causing an electronic device to perform the steps according to various exemplary embodiments of the present disclosure described in the above-mentioned "exemplary method" section of this specification, when the program product is run on the electronic device. In an alternative embodiment, the program product may be embodied as a portable compact disc read only memory (CD-ROM) and include program code, and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device, such as the terminal 110 described above. The electronic device may include a processor and a memory. The memory stores executable instructions of the processor, which may be, for example, program code. The processor executes the executable instructions to perform the calibration method or the image processing method of the exemplary embodiments.
The configuration of the electronic device is exemplarily described below, taking the mobile terminal 1400 in fig. 14 as an example. It will be appreciated by those skilled in the art that, apart from components specifically intended for mobile purposes, the configuration of fig. 14 can also be applied to devices of a fixed type.
As shown in fig. 14, the mobile terminal 1400 may specifically include: processor 1401, memory 1402, bus 1403, mobile communication module 1404, antenna 1, wireless communication module 1405, antenna 2, display 1406, camera module 1407, audio module 1408, power module 1409, and sensor module 1410.
Processor 1401 may include one or more processing units, such as: the Processor 1401 may include an AP (Application Processor), a modem Processor, a GPU (Graphics Processing Unit), an ISP (Image Signal Processor), a controller, an encoder, a decoder, a DSP (Digital Signal Processor), a baseband Processor, and/or an NPU (Neural-Network Processing Unit), etc.
The processor 1401 may be connected to the memory 1402 or other components by a bus 1403.
Memory 1402 may be used to store computer-executable program code, which may include instructions. The processor 1401 executes various functional applications of the mobile terminal 1400 and data processing by executing instructions stored in the memory 1402. The memory 1402 may also store application data, such as files for storing images, videos, and the like.
The communication function of the mobile terminal 1400 may be implemented by the mobile communication module 1404, the antenna 1, the wireless communication module 1405, the antenna 2, a modem processor, a baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. The mobile communication module 1404 may provide mobile communication solutions of 3G, 4G, 5G, etc. applied to the mobile terminal 1400. The wireless communication module 1405 may provide wireless communication solutions such as wireless local area network, bluetooth, near field communication, etc. applied to the mobile terminal 1400.
The display screen 1406 is used to implement display functions, such as displaying user interfaces, images, videos, and the like. The camera module 1407 is used to implement a shooting function, such as shooting the first image, the second image, and the like. The audio module 1408 is used for performing audio functions, such as playing audio, capturing voice, and the like. The power module 1409 is configured to implement power management functions, such as charging a battery, powering a device, monitoring a battery status, and so on.
The sensor module 1410 may include an image sensor and a spectrum sensor array, wherein the image sensor may be used to collect the first image, the second image, and the like, and the spectrum sensor array may be used to collect the first spectrum data, the second spectrum data, and the like. In one embodiment, the image sensor and the spectrum sensor array may also be disposed in the camera module 1407. In addition, the sensor module 1410 may also include other types of sensors for implementing corresponding sensing functions.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the following claims.

Claims (14)

1. A calibration method, comprising:
acquiring a first image acquired by an image sensor for a scene containing K light sources and first spectral data acquired by a spectral sensor array for Z detection regions in the scene; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2;
determining projection positions of the K light sources in a first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor;
determining illumination quantization values of Z sensing areas in a second plane based on the first spectrum data of the Z detection areas, and determining projection positions of the K light sources in the second plane according to the distribution of the illumination quantization values of the Z sensing areas to obtain K second positions; the second plane is a plane of the spectral sensor array; the sensing area is a projection of the detection area in the second plane;
and determining calibration parameters between the first plane and the second plane by using the mapping relation between the K first positions and the K second positions.
2. The method of claim 1, wherein determining the projection positions of the K light sources in a first plane from the first image to obtain K first positions comprises:
converting the first image into a gray image, and carrying out binarization processing on the gray image to obtain a binarized image;
and extracting K connected regions from the binarized image, and determining the central position of each connected region to obtain K first positions.
3. The method according to claim 1, wherein said determining the projection positions of the K light sources in the second plane according to the distribution of the illumination quantization values of the Z sensing regions to obtain K second positions comprises:
determining K light source projection areas in the Z sensing areas according to the distribution of the illumination quantization values of the Z sensing areas;
and determining the accurate projection position of the light source in each light source projection area according to the distribution of the illumination quantization values of each light source projection area and the plurality of sensing areas adjacent to the light source projection area so as to obtain the K second positions.
4. The method of claim 3, wherein said determining K light source projection areas among said Z sensing areas from the distribution of the illumination quantization values of said Z sensing areas comprises:
and determining, among the Z sensing areas, the K sensing areas whose illumination quantization values are at peak values, to obtain the K light source projection areas.
5. The method of claim 3, wherein determining the precise projection position of the light source in each light source projection area according to the distribution of the illumination quantization values of each light source projection area and a plurality of sensing areas adjacent to the light source projection area comprises:
for any light source projection area, forming a 2 × 2 calculation area by the any light source projection area and three adjacent sensing areas;
calculating a position offset according to the distribution proportion of the illumination quantized values along the first axis of the second plane and the distribution proportion of the illumination quantized values along the second axis of the second plane in the calculation area;
and determining the accurate projection position of the light source in the projection area of any light source based on the basic position of the projection area of any light source and the position offset.
6. The method of claim 1, wherein acquiring first images acquired by an image sensor of a scene containing K light sources and first spectral data acquired by a spectral sensor array of Z detection regions in the scene comprises:
under the condition that the K light sources are simultaneously turned on, acquiring a first image for the scene through the image sensor, and acquiring a group of first spectrum data for the scene through the spectrum sensor array, wherein the group of first spectrum data comprises first spectrum data of Z detection areas in the scene; or
Under the condition that each light source is sequentially and independently started, K times of first images are collected on the scene through the image sensor, K times of first spectrum data are collected on the scene through the spectrum sensor array, and the first spectrum data collected at each time comprise the first spectrum data of Z detection areas in the scene.
7. An image processing method, characterized by comprising:
acquiring a second image acquired by the image sensor and Z groups of second spectral data acquired by the spectral sensor array; the spectral sensor array comprises Z spectral sensors; z is a positive integer not less than 2;
acquiring calibration parameters predetermined according to the method of any one of claims 1 to 6, determining correspondence between the Z sets of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determining white balance parameters of the sub-regions corresponding to the second spectral data according to each set of second spectral data, respectively, to obtain white balance parameters of each sub-region;
and carrying out white balance processing on the second image by adopting the white balance parameters of each sub-area to obtain a target image.
8. The method of claim 7, wherein determining the white balance parameter of the sub-region corresponding to the second spectral data from each set of second spectral data, respectively, comprises:
respectively determining the color temperature of the subarea corresponding to the second spectrum data according to each group of second spectrum data, and taking the white balance parameter corresponding to the color temperature as the white balance parameter of the subarea; or
And respectively processing each group of second spectrum data by utilizing a pre-trained white balance parameter model to obtain white balance parameters of the sub-region corresponding to the second spectrum data.
9. The method of claim 7, wherein before the white balance processing of the second image using the white balance parameters of each sub-region, the method further comprises:
acquiring a global white balance parameter of the second image;
determining a white balance numerical range according to the global white balance parameter and the floating coefficient;
and if the white balance parameter of any subregion exceeds the white balance numerical range, correcting the white balance parameter of any subregion to be a boundary value of the white balance numerical range.
10. The method according to any one of claims 7 to 9, wherein performing white balance processing on the second image by using the white balance parameter of each sub-region to obtain a target image comprises:
taking the white balance parameter of each sub-area as the white balance parameter of the reference point in each sub-area;
obtaining a white balance parameter of each pixel point in the second image by interpolating the white balance parameters of the reference points in the adjacent sub-regions;
and carrying out white balance processing on each pixel point in the second image by adopting the white balance parameter of each pixel point to obtain the target image.
11. A calibration device, comprising:
a data acquisition module configured to acquire a first image acquired by an image sensor of a scene containing K light sources and first spectral data acquired by a spectral sensor array of Z detection regions in the scene; the spectral sensor array comprises Z spectral sensors; k is a positive integer not less than 4; z is a positive integer not less than 2;
a first position determining module configured to determine projection positions of the K light sources in a first plane according to the first image to obtain K first positions; the first plane is a plane of the image sensor;
a second position determining module configured to determine illumination quantization values of Z sensing areas in a second plane based on the first spectral data of the Z detection areas, and determine projection positions of the K light sources in the second plane according to a distribution of the illumination quantization values of the Z sensing areas to obtain K second positions; the second plane is a plane of the spectral sensor array; the sensing region is a projection of the detection region in the second plane;
a calibration parameter determination module configured to determine calibration parameters between the first plane and the second plane by using mapping relationships between the K first positions and the K second positions.
12. An image processing apparatus characterized by comprising:
a data acquisition module configured to acquire a second image acquired by the image sensor and Z sets of second spectral data acquired by the spectral sensor array; the spectral sensor array comprises Z spectral sensors; z is a positive integer not less than 2;
a white balance parameter determination module configured to obtain calibration parameters predetermined according to the method of any one of claims 1 to 6, determine a correspondence between the Z sets of second spectral data and Z sub-regions in the second image based on the calibration parameters, and determine white balance parameters of the sub-regions corresponding to the second spectral data according to each set of second spectral data, respectively, to obtain white balance parameters of each sub-region;
and the white balance processing module is configured to perform white balance processing on the second image by adopting the white balance parameter of each subarea to obtain a target image.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 10.
14. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 10 via execution of the executable instructions.
CN202211069561.8A 2022-08-31 2022-08-31 Calibration method, image processing method, device, storage medium and electronic equipment Pending CN115471571A (en)

Publications (1)

Publication Number Publication Date
CN115471571A true CN115471571A (en) 2022-12-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination