CN112991244A - Image fusion method, device, camera, storage medium and program product - Google Patents

Image fusion method, device, camera, storage medium and program product

Info

Publication number
CN112991244A
Authority
CN
China
Prior art keywords
images
camera
image
lens
sensors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010268724.XA
Other languages
Chinese (zh)
Inventor
杨昆
方光祥
刘军
汪鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN112991244A publication Critical patent/CN112991244A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/90 - Determination of colour characteristics
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/45 - Generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 23/50 - Constructional details
    • H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image fusion method, device, camera, storage medium and program product. Calibration values for low-illumination scenes are obtained as follows: while the camera shoots in a high-illumination scene, calibration values between the images it captures are obtained at different lens temperatures, yielding a relationship between lens temperature and calibration value, for example a linear formula. When shooting in a low-illumination scene, the calibration value is difficult to obtain directly, so the calibration value between the images is calculated from the previously obtained linear formula and the lens temperature at the time of shooting; the images captured in the low-illumination scene are then aligned with the calculated calibration value, which improves the quality of the fused image.

Description

Image fusion method, device, camera, storage medium and program product
Technical Field
The invention relates to the field of imaging, and in particular to image fusion techniques.
Background
The camera is the core device in the surveillance field. However, under low-illumination conditions at night, the images obtained by the camera are not clear enough, and the accuracy of image recognition drops sharply.
One solution is for the camera to capture a visible-light image and an infrared image and then fuse the two into a clear color image. For example, a beam-splitting prism is installed in the camera; the visible light and infrared light entering the camera are imaged on two sensors respectively through the beam-splitting property of the prism, generating a color image and a grayscale image; pixel-level calibration and fusion are then performed on the color image and the grayscale image, and finally the fused color image is generated.
Since the color image and the grayscale image represent the same subject, there is a correspondence between the pixels of the two images. The goal is to find the corresponding pixels in both images, align each pair of mutually corresponding pixels as closely as possible during fusion, and then fuse the aligned pixels together to mitigate "ghosting". Referring to Fig. 3, when the distance between corresponding pixels in the two calibrated images is about 10 pixels, the fused image shows an obvious ghosting phenomenon; referring to Fig. 4, when the distance between corresponding pixels is about 2 pixels after calibration, the fused image has clear edges and no obvious ghosting.
It can therefore be seen that aligning the images as accurately as possible is a key problem in image fusion. During the daytime, the grayscale image and the color image are both clear enough to locate corresponding pixels, so the calibration value required for alignment can be obtained accurately with existing calibration algorithms. At night, however, the color image is blurred, corresponding pixels in the color image and the grayscale image are hard to locate, and the calibration value is therefore often difficult to measure accurately. In addition, because of the obvious temperature difference between day and night, thermal expansion and contraction cause the lens to physically expand or shrink, so applying the daytime calibration value to the calibration of night images leads to a large deviation and a poor fusion result.
Therefore, finding a calibration-value acquisition scheme suitable for low-illumination scenes is a problem to be solved in the field of image fusion.
Disclosure of Invention
In a first aspect, the present invention provides an embodiment of a method for image fusion, including: acquiring a plurality of images acquired by a camera;
acquiring a target temperature of a lens of the camera when the plurality of images are acquired; aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
In this scheme, the calibration value is obtained indirectly through the temperature rather than measured directly, so a calibration value can be obtained even in scenes (such as low-illumination scenes) where it is difficult to measure directly, and the quality of the fused image is improved.
In a first possible implementation manner of the first aspect, the camera includes a plurality of sensors, the plurality of images are generated by the plurality of sensors at the same time, and different sensors generate different images.
This solution introduces that the fused images are generated simultaneously by different sensors of the same camera, each sensor generating a different one of said images.
In a second possible implementation manner of the first aspect, the camera includes a single lens and a beam splitter, and the method further includes: the single lens receives light and transmits the received light to the beam splitter; the beam splitter splits the light from the single lens into a plurality of sub-rays and transmits the plurality of sub-rays to the plurality of sensors, where different sub-rays correspond to different sensors. Alternatively, the camera includes a plurality of lenses, and the method further includes: each lens of the plurality of lenses receives one sub-ray, and the plurality of lenses transmit the received sub-rays to the plurality of sensors, where different sub-rays correspond to different sensors.
This describes the light transmission path within the camera when the camera is a single-lens multi-sensor camera, and the light transmission path when the camera is a multi-lens multi-sensor camera.
In a third possible implementation manner of the first aspect, the camera includes a color sensor and a black and white sensor, and the plurality of images include: a color image generated by the color sensor and a grayscale image generated by the black and white sensor.
The scheme introduces a specific type of sensor.
Furthermore, the camera may comprise only a plurality of color sensors, or only a plurality of black-and-white sensors. The total number of sensors in the camera may be 2, 3, 4 or more.
In a fourth possible implementation manner of the first aspect, when the camera is located in a high-illuminance scene, calibration values among images acquired by different sensors of the camera are obtained with the lens at different temperatures, so as to obtain a correspondence between the lens temperature and the calibration value;
the calibration value corresponding to the target temperature is then obtained from the target temperature and the correspondence.
This describes a specific way of obtaining the calibration value. In practical applications, the correspondence may take the form of a calibration-value formula or a calibration-value table.
In a fifth possible implementation manner of the first aspect, the number of the plurality of images is 2, wherein: the calibration values comprise a transverse calibration value x and a longitudinal calibration value y, wherein x is linearly related to the temperature of the lens, and y is linearly related to the temperature of the lens.
In a sixth possible implementation manner of the first aspect, the formats of the plurality of images are the same, and the format of the plurality of images is a RAW format; or the formats of the plurality of images are YUV formats, or the formats of the plurality of images are RGB formats.
In a second aspect, the present invention provides an embodiment of an image fusion apparatus, including: an image acquisition module for acquiring a plurality of images acquired by the camera; a temperature acquisition module for acquiring the target temperature of the lens of the camera when the plurality of images are acquired; a calibration module for aligning the plurality of images according to a calibration value, wherein the calibration value is related to the target temperature; and a fusion module for fusing the aligned images.
In a third aspect, the present invention provides an image fusion apparatus comprising: the interface is used for acquiring a plurality of images acquired by the camera;
a processor for obtaining a target temperature of a lens of the camera when acquiring the plurality of images; aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
In a fourth aspect, the present invention provides a camera comprising: an optical structure for transmitting a plurality of sub-rays to a plurality of sensors, each of the plurality of sub-rays corresponding to one of the plurality of sensors; each sensor of the plurality of sensors is used for performing photoelectric conversion on the received sub-rays, so as to generate a plurality of images; and the processor is used for acquiring the target temperature of the lens of the camera when the plurality of images are acquired, aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
The second, third and fourth aspects have similar technical effects as the first aspect, and have similar various possible implementations.
Drawings
Fig. 1 is a schematic structural diagram of an embodiment of a camera provided by the present invention.
Fig. 2 is a schematic structural diagram of an embodiment of a camera provided by the present invention.
Fig. 3 is a fused image effect diagram after fusion of 2 images with poor alignment.
Fig. 4 is a fused image effect diagram after fusion of 2 images with better alignment.
FIG. 5 is a graph of deviation values versus temperature for laboratory measurements.
FIG. 6 is a graph showing the relationship between the deviation values and the temperature generated by fitting the experimental results of FIG. 5.
FIG. 7 is a flow chart of an embodiment of obtaining a calibration equation.
FIG. 8 is a flow diagram of an embodiment of image fusion.
FIG. 9 is a topology diagram of an embodiment of an image fusion apparatus.
Detailed Description
A camera can take pictures and record video. Its components include a lens, an image sensor (also referred to simply as a sensor), and a processor. The lens passes light; the sensor converts the optical signal from the lens into an electrical signal and records it in the form of an original image; the processor may encode the original image, for example into a JPEG image, HEIF image, H.264 video, or H.265 video that is easily recognized by the human eye.
The lens presents an optical image of the observed object on the sensor of the camera. It combines optical parts of different shapes (mirrors, transmissive lenses, prisms) and different media (plastic, glass or crystal) in a certain way, so that after the light is transmitted or reflected by the optical parts its direction is changed as required and the light is received by the receiving device, completing the optical imaging of the object. A lens generally consists of several groups of lens elements with different curvatures and spacings. The choice of spacing, curvature, light-transmission coefficient and other parameters determines the focal length of the lens. The main parameters of a lens include effective focal length, aperture, maximum image plane, field of view, distortion and relative illumination; together, these values determine the overall performance of the lens.
A sensor is a device that converts an optical image into an electronic signal and is widely used in video cameras and other electro-optical devices. Common sensors include the charge-coupled device (CCD) and the complementary metal oxide semiconductor (CMOS) sensor. Both CCDs and CMOS sensors carry a large number (e.g., tens of millions) of photodiodes, each referred to as a photosite, with each photosite corresponding to one pixel. During exposure, the photodiode receives light and converts the light signal into an electrical signal containing brightness (or brightness and color), from which the image is reconstructed. The Bayer array is a common image sensor technology: Bayer color filters make different pixels sensitive to one of the red, green and blue primary colors, the pixels are interleaved, and the original image is then obtained by a demosaicing interpolation algorithm. The Bayer array may be applied to a CCD or a CMOS sensor, and a sensor using it is also called a Bayer sensor. Besides Bayer sensors, there are technologies such as X3 (developed by Foveon), which uses three layers of photosensitive elements, each layer recording one of the RGB color channels, so that all colors can be captured at a single pixel location.
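As a rough illustration of the demosaicing interpolation mentioned above, the following is a minimal sketch of an approximate bilinear demosaic for an RGGB Bayer layout; the RGGB assumption, the function name and the 3x3 averaging are illustrative choices and not part of this application.

```python
import numpy as np

def demosaic_bilinear(raw: np.ndarray) -> np.ndarray:
    """Approximate bilinear demosaic of an RGGB Bayer frame (H x W) into an
    H x W x 3 RGB image. The RGGB layout is an assumption for illustration."""
    h, w = raw.shape
    raw = raw.astype(np.float64)
    # Sample masks for the assumed RGGB pattern.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        # Average the available samples of one colour inside each 3x3 neighbourhood.
        vals, cnts = np.where(mask, raw, 0.0), mask.astype(np.float64)
        pv, pc = np.pad(vals, 1), np.pad(cnts, 1)
        box = lambda a: sum(a[i:i + h, j:j + w] for i in range(3) for j in range(3))
        return box(pv) / np.maximum(box(pc), 1e-12)

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])
```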
The processor (also called an image processor) is used for image fusion and encoding; physically it is one chip or a combination of several chips. Image fusion combines image data about the same shooting target collected by multiple sensors (or by a single sensor at different times), extracts the useful information in each image channel, and synthesizes a high-quality image. Image fusion can improve the utilization of image information, improve the interpretation accuracy and reliability of a computer, and improve the spatial and spectral resolution of the original images, which facilitates monitoring. It should be noted that the images acquired by the sensors include both the images directly generated by the sensors through photoelectric conversion and the images obtained after those directly generated images are processed by an ISP. Unless otherwise stated, images that have not yet been processed by the processor (e.g., by fusion or encoding) are all regarded as images acquired by the sensors.
In some cases, the same camera may be provided with multiple sensors, such as a dual-lens dual-sensor camera, a single-lens dual-sensor camera, or a single-lens three-sensor camera, all of which image the same subject. When there are fewer lenses than sensors, a beam splitter can be arranged between the lenses and the sensors to split the light entering through one lens onto a plurality of sensors, ensuring that each sensor receives light. In these cameras there is a single processor. For convenience of description, when the camera has no beam splitter, the set of all lenses of the camera is referred to as the optical structure, and each lens receives one sub-ray and transmits it to the corresponding sensor. When the camera has a beam splitter, the set of the beam splitter and all lenses is referred to as the optical structure; in this case one lens may correspond to several sensors, and the beam splitter splits the light received by that lens into several sub-rays that correspond one-to-one to the sensors. For example, in a single-lens dual-sensor camera, the beam splitter receives the light from the single lens, splits it into 2 sub-rays, and passes the 2 sub-rays to different sensors.
When a dual-lens dual-sensor camera or a single-lens dual-sensor camera captures images (or records video), one sensor is a color sensor that senses visible light to generate a color image; the other is a black-and-white sensor (monochrome sensor) that senses infrared light, or light including infrared light (e.g., infrared plus visible light), to generate a grayscale image. The processor aligns the two images directly generated by the sensors using a calibration value and fuses the aligned images to generate a color fused image (fusion image), whose quality is better than that of the color image before fusion. Alternatively, the two images directly generated by the sensors are first processed by their corresponding ISPs and then fused together to generate the color fused image.
Optionally, the camera may further include a filter for filtering the light entering a lens. For example, in a dual-lens dual-sensor camera, a visible-light cut filter is provided for one of the lenses, preventing visible light from reaching the corresponding black-and-white sensor; alternatively, an infrared cut filter is provided for the other lens, preventing infrared light from reaching the corresponding color sensor.
Optionally, the camera further includes an Image Signal Processor (ISP). The ISP is located between the sensors and the processor, each sensor corresponds to one ISP, and the ISP can perform color correction, Automatic White Balance (AWB) and other processing on the image generated by the sensor and send the processed image to the processor. When the ISP is not present in the camera, the processor communicates with the sensor through the interface, and the images fused by the processor are the plurality of images generated directly by the sensor. When the camera is provided with the ISP, the processor is communicated with the ISP through an interface, and the images fused by the processor can be a plurality of images directly generated by the sensor or a plurality of images processed by the ISP.
It should be noted that, besides the combination of a color sensor and a black-and-white sensor, the present invention is also applicable to embodiments where both sensors are color sensors, in which case the two fused images are both color images, for example in a wide dynamic range (WDR) scene. The invention is likewise applicable to embodiments where both sensors are black-and-white sensors, in which case the two fused images are both grayscale images. Furthermore, the embodiments of the present invention can also be applied to three or more sensors, in which case the number of images to be fused is also three or more.
Fig. 1 shows a camera 11 with two lenses and two sensors; the lens 111 and the lens 121 acquire light from outside the camera. The lens 111 transmits the sub-light 112 to the color sensor 113, the color sensor 113 converts the sub-light 112 into a color image and sends it to the image signal processor 114, and the image signal processor 114 performs image signal processing on the received image and sends the processed image to the processor 116 through the interface 115. The lens 121 transmits the sub-light 122 to the black-and-white sensor 123, the black-and-white sensor 123 converts the sub-light 122 into a grayscale image and sends it to the image signal processor 124, and the image signal processor 124 performs image signal processing on the received image and sends the processed image to the processor 116 through the interface 115. The processor aligns the received grayscale image and color image and fuses the aligned images to generate a fused image. The combination of the lens 111 and the lens 121 is referred to as the optical structure, which may further include a filter (not shown). The processor 116 is, for example, a system on a chip (SoC). The interface 115 and the processor 116 may be integrated in the camera 11, or may form an image fusion device separate from the camera 11.
Referring to the dual-lens three-sensor camera 14 of Fig. 2, the lens 141 and the lens 151 capture light from outside the camera. The lens 141 transmits the sub-light 142 to the sensor 143, the sensor 143 converts the sub-light 142 into a color image and sends it to the image signal processor 144, and the image signal processor 144 performs image signal processing on the received image and sends the processed image to the processor 146 through the interface 145. In the camera 14, the lens 151 transmits the light ray 152 to the beam splitter prism 153, and the prism 153 splits the light ray 152 into a sub-ray 154 and a sub-ray 155. The sub-ray 154 is transmitted to the sensor 156, which converts it into an image and sends it to the image signal processor 157; the image signal processor 157 processes the received image and sends it to the processor 146 through the interface 145. The sub-ray 155 is transmitted to the sensor 158, which converts it into an image and sends it to the image signal processor 159; the image signal processor 159 processes the received image and sends it to the processor 146 through the interface 145. The processor aligns the received 3 images and fuses the aligned 3 images to produce a fused image. In the camera 14, the combination of the lens 141, the lens 151 and the beam splitter prism 153 is referred to as the optical structure, which may further include an optical filter (not shown). The interface 145 and the processor 146 may be integrated in the camera 14, or may form an image fusion device separate from the camera 14.
For convenience of description, the following embodiments, without specific explanation, take the example that the number of sensors is 2, one of the sensors is a black-and-white sensor for generating a gray-scale image, and the other sensor is a color sensor for generating a color image. Since the color image and the grayscale image represent the same subject, the pixels of the two fused images have a correspondence relationship. During fusion, if there is a deviation between two images, which results in that corresponding pixels are not fused together, the fusion effect is affected.
Embodiments of the present application have the following concepts:
Deviation value: describes the actual offset between corresponding pixels in the two (or more) images to be fused, expressed either as a number of pixels or as a distance. In general, the smaller the deviation value between the images being fused, the better the effect of the fused image obtained after fusion.
Calibration value: a parameter value used to adjust for the offset between the two (or more) images to be fused; the closer the calibration value is to the deviation value, the less obvious the ghosting in the fused image. The calibration value can likewise be expressed as a distance or a number of pixels.
The purpose of calibration is to estimate, as accurately as possible, the deviation between corresponding pixels in the two captured images and to adjust the images according to the calibration value, so that the deviation between the images is reduced and mutually corresponding pixels can be fused together as far as possible. The closer the obtained calibration value is to the deviation value, the better the fused image. In practice the calibration value is obtained by measurement or calculation and is rarely exactly equal to the deviation value; in general, when the difference between the two does not exceed a distance of 3 pixels, the naked eye can hardly observe an obvious ghost. Referring to Fig. 3, a fused image generated from 2 images whose deviation value is greater than 10 pixels shows a significant ghost; Fig. 4 is a fused image generated from two images whose deviation value is less than 3 pixels, and ghosting is not obvious. The image calibration process is the process of aligning one of the 2 images with the other (image alignment).
Taking the color and grayscale images collected by two 4-megapixel 1/1.8-inch sensors as an example, the physical distance corresponding to 3 pixels is 0.009 mm. Therefore, after calibration, when the deviation between mutually corresponding pixels in the two images does not exceed 0.009 mm, the fused image has a good effect. In a high-illumination scene (e.g., daytime), a calibration value close to the deviation value can be obtained directly by measurement. In a low-illumination scene (low light scene), for example at night, it is difficult to obtain a calibration value close to the deviation value by measurement because the illumination is insufficient. Moreover, because of thermal expansion and contraction, directly applying the daytime calibration value to the calibration of night images yields a poor fusion result.
Specifically, the lens is made of materials such as glass, plastic and adhesive, and may also include a PCB (printed circuit board) and sheet-metal parts. The temperature difference between day and night causes the lens to physically expand (or contract). The amount of expansion (or contraction) is not obvious to the naked eye, but relative to 0.009 mm it is large enough to noticeably affect the deviation value.
Even at night there can be a significant temperature difference between different times (e.g., between the first half and the second half of the night), which causes a non-negligible difference in the deviation value. Fig. 5 shows laboratory measurements that can be used to establish the relationship between temperature and the longitudinal calibration value.
The data of Fig. 5 come from actual measurements on images captured at the same time by the different sensors of the camera; the abscissa is the lens temperature (unit: °C) and the ordinate is the longitudinal calibration value (unit: pixels) between the color image and the grayscale image. The figure shows the results of 4 tests, all performed in a non-low-illumination scene. It can be seen that: at -20 °C there are two measurements, with longitudinal calibration values of 6 pixels (abbreviated as 6, and likewise below) and 8; at 0 °C there are 4 measurements (with overlap): 4, 4, 4, 5; at 25 °C there are 4 measurements (with overlap in the figure): 0, 1, 2; at 70 °C there are 2 measurements, with longitudinal calibration values of -6 and -8. From these data it can be roughly seen that temperature and the longitudinal calibration value have an approximately linear relationship, as shown in Fig. 6. Fig. 5 is only an example with 4 test records; with more tests the obtained curve is more accurate, but it still approximately follows a linear relationship. A planar image can be described by two-dimensional coordinates, so the calibration value includes a transverse calibration value and a longitudinal calibration value; Fig. 6 only illustrates the longitudinal calibration value, and tests show that the transverse calibration value follows the same rule.
In the embodiment of the invention, the lens temperature and the calibration values between the original images directly generated by different sensors of the camera (or the original images obtained after ISP processing) are obtained in advance under non-low illumination (e.g., in the daytime or under an artificial light source), so as to obtain the relationship between lens temperature and calibration value (e.g., a formula describing how the calibration value changes with temperature). When the calibration value is difficult to measure directly (e.g., under low illumination), the lens temperature is obtained and substituted into the calibration formula, and the corresponding calibration value is obtained.
Referring to Fig. 7 and the following steps, the experimental procedure for obtaining the calibration formula is described by way of a specific example.
Step 21: the camera used for the experiment is placed in a temperature-controlled chamber (incubator); after the illuminating lamp in the incubator is turned on, the camera takes pictures, and the camera processor measures the pixel deviation of the color image relative to the grayscale image at different temperatures. The color image and the grayscale image are either both generated directly by the sensors, or both obtained after the images directly generated by the sensors are processed by the ISP.
The incubator temperature is set to t1 = 25 °C, and the measured pixel calibration value is x1 = -67.5991, y1 = 400.3787, which means that if the grayscale image is translated horizontally to the right by 67.5991 pixels and vertically upwards by 400.3787 pixels, the grayscale image and the color image would theoretically coincide. After the incubator temperature is raised to t2 = 50 °C, the measured calibration value is x2 = -72.174, y2 = 404.1329, which means that if the grayscale image is translated horizontally to the right by 72.174 pixels and vertically upwards by 404.1329 pixels, the grayscale image and the color image would theoretically coincide. Measurements are repeated at other temperatures to obtain more data points.
Step 22: from the multiple measurement results, the camera processor obtains, by linear fitting, a calibration formula describing how the pixel calibration value changes with the lens temperature:
x = -0.183t - 63.0242; y = -0.150t - 396.6245; where x is the horizontal pixel calibration value (unit: pixels) of the grayscale image relative to the color image, y is the vertical pixel calibration value (unit: pixels) of the grayscale image relative to the color image, and t is the temperature (unit: °C); t may also be a temperature difference relative to a set temperature (e.g., 20 °C).
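As an illustration of the linear fitting in step 22, the sketch below performs a least-squares fit of the calibration formula to a set of (temperature, calibration value) measurements; the measurement arrays are illustrative values in the spirit of the example above, not data from this application.

```python
import numpy as np

# Hypothetical measurements: lens temperature (unit: °C) and the measured
# horizontal / vertical calibration values (unit: pixels) at each temperature.
temps = np.array([-20.0, 0.0, 25.0, 50.0])
xs = np.array([-59.2, -63.1, -67.5991, -72.174])        # illustrative values
ys = np.array([-393.5, -396.5, -400.3787, -404.1329])   # illustrative values

# Least-squares linear fit of the calibration formula x = a*t + b, y = c*t + d.
a, b = np.polyfit(temps, xs, deg=1)
c, d = np.polyfit(temps, ys, deg=1)

def calibration(lens_temp: float) -> tuple:
    """Calibration value (x, y), in pixels, predicted for a given lens temperature."""
    return a * lens_temp + b, c * lens_temp + d

print(calibration(-20.0))  # calibration value predicted for a cold, low-light scene
```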
Step 23: in a low-illumination scene where the color image and the grayscale image cannot be calibrated directly, the camera processor reads the incubator temperature and calculates a calibration value using the formula, so as to verify the correctness of the calibration formula.
In the test, the ambient lamp in the incubator is turned off so that the lens is in a low-illumination environment, the incubator temperature is lowered to -20 °C, and a calibration value is obtained from the calibration formula:
x=-0.183t-63.0242=-0.183*(-20)-63.0242=-59.3642
y=-0.150t-396.6245=-0.150*(-20)-396.6245=-393.6245
That is, the calculated calibration value is (-59.3642, -393.6245), in pixels.
To verify the accuracy of this result, the experimental temperature was kept at -20 °C, the ambient illumination was increased by turning on the illuminating lamp in the incubator, and the measured calibration value was (-59.9689, -391.901). The calibration value calculated with the formula, (-59.3642, -393.6245), is very close to the calibration value actually measured under high illumination, (-59.9689, -391.901). The calibration values calculated by the formula are thus close to the deviation values and are suitable for image fusion.
From the above example, the formula for calculating the calibration value (x, y) of images taken by any particular camera can be generalized as follows:
x = a*t + b, where a is a constant, b is a constant, and t is the lens temperature;
y = c*t + d, where c is a constant and d is a constant.
The calibration value comprises a transverse calibration value x and a longitudinal calibration value y; both x and y are linearly related to the lens temperature.
The above describes the process of obtaining the calibration formula. It should be noted that this expression is a simple formula that is easy to compute. When higher precision is required, or a lens made of a specific material is used, or the camera is located in a specific temperature range, the calibration formula may be modified, for example to x = a*(t - 0.1) + b and y = c*(t + 0.5) + d. In any case, the calibration formula is related to the temperature t; specifically, the absolute value of the calibration value is positively related to the temperature t.
It should be noted that the incubator is only an experimental environment used to demonstrate the feasibility of the solution of the present invention. The incubator is not necessary; the calibration formula can also be obtained with the scheme of the invention in the actual use environment of the camera, such as a home, shopping mall, intersection or airport.
The measurement of the calibration value is described below. In a high-illumination scene, a color image and a grayscale image are obtained, the positions of the same feature point in the color image and in the grayscale image are found, and the relative offset between the two positions in the same coordinate system is measured, giving a measured calibration value. Errors are inevitable when finding and measuring feature points; taking the arithmetic mean of the measured calibration values of multiple feature points yields a more accurate measured calibration value and reduces the error. In a low-illumination scene it is difficult to find enough feature points, so it is difficult to obtain an accurate calibration value by measurement.
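As a sketch of how a calibration value could be measured in a high-illumination scene by finding feature points and averaging their offsets (assuming OpenCV is available; the ORB detector, the parameters and the function name are illustrative choices, not something this application prescribes):

```python
import cv2
import numpy as np

def measure_calibration(color_bgr: np.ndarray, gray: np.ndarray) -> tuple:
    """Estimate a measured calibration value (x, y), in pixels, as the mean offset
    of matched feature points between the color image and the grayscale image."""
    color_gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(color_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    # Offset of each matched pair; the arithmetic mean reduces per-point error.
    offsets = np.array([np.subtract(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
                        for m in matches])
    return tuple(offsets.mean(axis=0))
```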
In the above steps, the camera is located in the incubator, so the reading of the incubator's temperature sensor is used as an approximation of the lens temperature. In practical applications, when the lens temperature is close to the camera body temperature (or the air temperature), the lens temperature can also be obtained indirectly by measuring the body temperature (or the air temperature).
Under different circumstances, or for different cameras, the measured calibration formula differs slightly. To make the calculated calibration value more accurate, the calibration formula can be acquired separately for each camera. For the same camera, the calibration formula may also be updated periodically; for example, the formula is re-acquired every day and used for calculating calibration values at night.
The calibration formula is obtained through steps 21-23 above. For different cameras, and for different scenes and time periods, the obtained calibration formulas differ; specifically, the constants a, b, c and d may differ. In this embodiment, the following calibration formula was obtained for the experimental camera:
x = -0.183t - 63.0242
y = -0.150t - 396.6245
Under low illumination, the calibration value for images taken by the experimental camera at a given temperature can be obtained by substituting into this formula the lens temperature measured when the sensors generate the original images. Fusing the color image and the grayscale image subsequently captured by the camera in the low-illumination scene with this calibration value yields a clearer fused image. How the calibration formula is applied to image fusion is described in detail below with reference to Fig. 8.
Step 31: the camera lens receives light and transmits the received light to the 2 sensors. The light arrives at both sensors at the same time.
The details differ slightly between a monocular camera (single-lens camera) and a binocular camera (dual-lens camera).
For a monocular camera, a beam splitter prism (other types of beam splitters are also possible) is arranged between the lens and the sensors; it splits the light from the lens into 2 sub-rays: a first sub-ray and a second sub-ray. The first sub-ray includes visible light (and may exclude infrared light), and the second sub-ray includes infrared light (or visible light plus infrared light). The first sub-ray enters the first sensor and the second sub-ray enters the second sensor.
For a binocular camera, the 2 lenses are coupled to the 2 sensors respectively, and light passes through each lens into the sensor corresponding to that lens. The first lens transmits a first sub-ray (containing visible light) to the first sensor, and the second lens transmits a second sub-ray (containing infrared light and, optionally, visible light) to the second sensor. Optionally, a filter may be arranged between a lens and its sensor, for example an infrared cut filter between the first lens and the first sensor, or a visible-light cut filter between the second lens and the second sensor.
Step 32: the 2 sensors receive their corresponding sub-rays and perform photoelectric conversion, and the images produced by the photoelectric conversion are processed by an ISP (image signal processor) to obtain a first image and a second image; or, without ISP processing, the images produced by the sensors' photoelectric conversion are used directly as the first image and the second image. Both the first image and the second image are original images. In this embodiment, the first sensor is a color sensor and, correspondingly, the first image is a color image (containing color information and brightness information); the second sensor is a black-and-white sensor and, correspondingly, the second image is a grayscale image (containing brightness information and no color information). Since the first sub-ray arrives at the first sensor at the same time as the second sub-ray arrives at the second sensor, the two sensors generate their images at the same time (or with only a negligible difference in time).
The first image and the second image are not limited to original images directly generated by the sensors; they may also be original images obtained after ISP processing. In other words, in this embodiment and the following embodiments, both the image output by a sensor and the image just before fusion are referred to as original images. The first image and the second image may be RAW images, YUV images or RGB images. In practical applications, the image output by an image sensor may go through various processing flows, and these flows are not limited by the embodiments of the present application. In other words, if the RAW images output by the image sensors are fused directly, the first (second) image is in RAW format; if the RGB images output by the ISPs are fused, the first (second) image is in RGB format; if the YUV images output by the ISPs are fused, the first (second) image is in YUV format. In each case, everything from the image output by an image sensor up to the image just before fusion is referred to as the first (second) image. The first image and the second image are not compression-encoded; the fused image can be encoded into an image format that is easily recognized by the human eye and occupies less storage space, such as jpeg, bmp, tga, png or gif.
The ISP is used to optimize the image generated by the sensor, thereby improving the optical quality of the imagery. The optimization may include one or more of the following: removing dark-current noise, linearizing the data to address data non-linearity, removing dead pixels, removing noise, adjusting white balance, focusing and exposure, adjusting image sharpness, color space conversion (converting to a different color space for processing), color enhancement, and portrait skin-tone optimization. The visible-light image provides more detailed information about the target or scene and is better suited to observation by the human eye.
Step 33: the processor acquires, from a temperature sensor arranged on the lens, the temperature t of the lens at the time the camera generates the first image and the second image. The lens temperature can be obtained by reading a temperature probe mounted on the lens.
As described above, besides directly measuring the lens, in practical applications the lens temperature may also be obtained indirectly, for example by measuring a component near the lens (e.g., the camera body), a fixture near the lens (e.g., the bracket to which the camera is fixed), or the air temperature.
Since the temperature usually changes slowly, the time at which the lens temperature is acquired may be slightly earlier or slightly later than the time at which the first image and the second image are generated; it is sufficient that the measured temperature does not differ much from the temperature at the moment the images are generated. For example, at night, the lens temperature acquired within 5 minutes before or after the capture time may be regarded as the lens temperature at the capture time.
Step 34: the processor substitutes the acquired lens temperature into the calibration formula to calculate the calibration value (x, y).
As described above, the formula for calculating the calibration value (x, y) is:
x = a*t + b, where a is a constant, b is a constant, and t is the lens temperature;
y = c*t + d, where c is a constant and d is a constant.
Step 35: the processor acquires the first image and the second image from the sensors (or the ISPs) and aligns them using the calculated calibration value, for example aligning the color image to the grayscale image, thereby achieving calibration between the two images. Since the first image and the second image are images of the same subject, there is a correspondence between their pixels. When the images are fused, finding this correspondence so that corresponding pixels are fused together gives a better imaging result. The deviation value describes the offset between corresponding pixels in the first image and the second image. Adjusting for this deviation using the calibration value (x, y) reduces the offset and therefore reduces ghosting in the fused image.
The principle of image alignment is illustrated as follows: the A pixel in the first image corresponds to the A' pixel in the second image; in other words, A and A' are image recordings of the same part of the subject.
The first image and the second image are placed in the same plane coordinate system; the coordinates of the A pixel in the first image are (52, 80) and the coordinates of the A' pixel in the second image are (55, 60), so the deviation value of the first image relative to the second image is (-3, 20). Assuming the calibration value calculated in step 34 from the calibration formula is (-1, 18), after the first image is adjusted using the calibration value the A pixel coordinates become (53, 62), and the deviation value of the first image relative to the second image becomes (-2, 2). It can be seen that after adjusting the images with the calibration value, the deviation between the first image and the second image becomes small: the distance between the corresponding A and A' pixels is 2 pixels in both the abscissa and the ordinate, so they are closer to each other in the coordinate system than before alignment.
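The alignment in step 35 amounts to translating one image by the calibration value before fusion. A minimal sketch follows, assuming the calibration value is given in pixels and that a whole-pixel translation is sufficient; sub-pixel alignment (e.g., with an affine warp) would follow the same idea.

```python
import numpy as np

def align(first: np.ndarray, calibration: tuple) -> np.ndarray:
    """Translate the first image by the negative of the calibration value (x, y),
    so that its pixels line up with the corresponding pixels of the second image."""
    x, y = calibration                       # e.g. (-1, 18) in the example above
    dx, dy = -int(round(x)), -int(round(y))  # shift opposite to the measured offset
    h, w = first.shape[:2]
    aligned = np.zeros_like(first)
    src_y, dst_y = slice(max(0, -dy), min(h, h - dy)), slice(max(0, dy), min(h, h + dy))
    src_x, dst_x = slice(max(0, -dx), min(w, w - dx)), slice(max(0, dx), min(w, w + dx))
    aligned[dst_y, dst_x] = first[src_y, src_x]
    return aligned
```

With the numbers of the example above, such a translation moves the A pixel from (52, 80) to (53, 62), leaving a residual deviation of (-2, 2).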
Step 36: the processor fuses the aligned first image and second image to generate a fused image. This step may be performed by a processor such as a system on a chip (SoC).
The first image is a color image and thus can provide color information and luminance information. The second image is a grayscale image and therefore can provide luminance information.
There are many ways to fuse images. The steps of one fusion method are described below, purely as an example: (1) decompose the luminance component of the A pixel into a high-frequency component and a low-frequency component, and decompose the color component of the A pixel into a chrominance component and a saturation component; (2) decompose the luminance component of the A' pixel into a high-frequency component and a low-frequency component; (3) fuse the high-frequency component of the A pixel with the high-frequency component of the A' pixel to obtain a fused high-frequency component, and fuse the low-frequency component of the A pixel with the low-frequency component of the A' pixel to obtain a fused low-frequency component; (4) obtain a fused luminance component from the fused high-frequency component and the fused low-frequency component; (5) obtain the fused pixel from the fused luminance component and the chrominance and saturation components. This fusion process may be pixel-based, i.e., steps (1) to (5) are performed for each group of corresponding pixels until the 2 images are fully fused. A group of pixels may also be used as the granularity, e.g., a 2x2 block of pixels as the fusion granularity.
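The following is a minimal sketch of the fusion steps described above, using a Gaussian blur to split each luminance plane into low- and high-frequency components; the blur size and the averaging/maximum fusion rules are illustrative assumptions, since the application does not prescribe a particular decomposition or weighting.

```python
import cv2
import numpy as np

def fuse(color_bgr: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """Fuse an aligned color image and grayscale image into one color image."""
    # Work in YCrCb: Y carries luminance, Cr/Cb carry chrominance and saturation.
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y_color, y_gray = ycrcb[:, :, 0], gray.astype(np.float32)

    # (1)-(2) Split each luminance plane into low- and high-frequency components.
    low_c = cv2.GaussianBlur(y_color, (11, 11), 0)
    low_g = cv2.GaussianBlur(y_gray, (11, 11), 0)
    high_c, high_g = y_color - low_c, y_gray - low_g

    # (3) Fuse: average the low frequencies, keep the stronger high-frequency detail.
    fused_low = 0.5 * (low_c + low_g)
    fused_high = np.where(np.abs(high_g) > np.abs(high_c), high_g, high_c)

    # (4)-(5) Rebuild the luminance and recombine with chrominance and saturation.
    ycrcb[:, :, 0] = np.clip(fused_low + fused_high, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```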
Following the example in step 35, after alignment the A pixel in the first image is at coordinates (53, 62) and is therefore fused with the pixel at coordinates (53, 62) in the second image (here named the B' pixel). Although the B' pixel and the A pixel do not strictly correspond (the A' pixel is the pixel corresponding to the A pixel), the B' pixel and the A' pixel are close to each other in the second image, so the values of the components (for example, the luminance component) of the B' pixel are also close to those of the A' pixel. Fusing A with B' therefore gives a better fusion result than fusing the first and second images without alignment.
Besides fusing 2 images, the method described in this embodiment can also fuse three or more images. For example, in a trinocular (three-lens) camera, different lenses correspond to different sensors; 3 images, e.g., 2 color images and 1 grayscale image, are taken at the same time, the 2 color images are aligned to the grayscale image, and the 3 aligned images are fused. As another example, the light entering the lens of a monocular camera is split by a beam splitter into 4 beams: red, green, blue and white light (the white light containing the red, green and blue light); the 4 beams are sent to 4 different sensors to generate 4 images; the 4 images are aligned using the method of the embodiment of the present invention and then fused.
It should be noted that steps 33 to 36 may be executed by a processor in the camera, or by a device separate from the camera, for example an image fusion apparatus communicating with the camera. The image fusion apparatus acquires 2 or more images from the camera, aligns them and fuses them. The above embodiments use the calibration formula as an example of how to obtain the calibration value; the formula is only one possible expression of the calibration algorithm. Besides a formula, the calibration algorithm may take other forms, such as a set of correspondences between the temperature t (or ranges of the temperature t) and calibration values (x, y).
To aid understanding, an example is given here. In a high-illuminance scene, for example during the daytime, calibration values are measured, and the correspondence between ranges of the temperature t and the calibration value (x, y) is obtained as: [(-25 °C to -20 °C): (-60, -391)], [(-20 °C to -15 °C): (-62, -395)], [(-15 °C to -5 °C): (-63, -397)], and so on. This correspondence table may be stored in the camera in advance. When, in a low-illuminance scene (a high-illuminance scene works equally well), for example at night, the measured lens temperature is -18 °C, which falls in the range (-20 °C to -15 °C), the table can be queried directly to obtain the calibration value (-62, -395) for aligning the original images, without using the calibration formula.
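A correspondence table like the one above can be stored in the camera and queried directly; the sketch below shows one possible representation, using the illustrative ranges and values from the example (the helper name is an assumption).

```python
# Each entry: (lower bound °C, upper bound °C, (x, y) calibration value in pixels).
CALIBRATION_TABLE = [
    (-25.0, -20.0, (-60, -391)),
    (-20.0, -15.0, (-62, -395)),
    (-15.0,  -5.0, (-63, -397)),
    # ... further ranges measured in the high-illuminance scene
]

def lookup_calibration(lens_temp: float):
    """Return the calibration value for the range containing lens_temp, else None."""
    for low, high, cal in CALIBRATION_TABLE:
        if low <= lens_temp < high:
            return cal
    return None

lookup_calibration(-18.0)  # -> (-62, -395), no calibration formula needed
```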
If this variant is applied to the embodiment described in steps 21 to 23 and steps 31 to 34, steps 23 and 34 change slightly. Specifically, in step 23 a correspondence table between the temperature t (or ranges of the temperature t) and the calibration value (x, y) is obtained; in step 34, the calibration value corresponding to the temperature range containing the lens temperature at the time the camera generates the first image and the second image is looked up in the correspondence table.
Fig. 9 shows an embodiment of the image fusion apparatus according to the present invention, wherein the image fusion apparatus 4 can perform the steps 33 to 36. Optionally, the image fusion apparatus 4 may also perform the method performed by the camera processor in steps 21-23. The image fusion device may be a camera or a program module running in a camera.
The image acquiring module 41 is configured to acquire a plurality of images acquired by the camera, where the plurality of images may be generated directly by a sensor of the camera, or generated after an image directly generated by the sensor of the camera is processed by an ISP. The formats of the multiple images are the same, and specifically, the multiple images may be in RAW format, YUV format or RGB format.
The number of the plurality of images is, for example, 2 (gray image + color image, or color image + color image, or gray image + gray image), and may be 3 or more. The multiple images are generated by multiple sensors of the camera at the same time (or the multiple images are generated by the multiple sensors at the same time and then processed by an ISP), each sensor generates one image, and the images generated by different sensors are different.
A temperature obtaining module 42, configured to obtain a target temperature of a lens of the camera when the plurality of images are acquired. As described in the method embodiment, the time when the temperature is actually obtained may be before or after the plurality of images are acquired; the position where the temperature is actually acquired may be a lens, or may be a module or a device near the lens.
A calibration module 43, configured to obtain a calibration value according to the target temperature obtained by the temperature acquisition module 42, and to align the plurality of images according to the calibration value, wherein the calibration value is related to the target temperature.
Specifically, the image acquisition module 31 is configured to acquire calibration values between different images generated by different sensors of the camera in advance when the camera is located in a high-illuminance scene. And obtaining a calibration value calculation formula (x, y) according to the relation between the obtained calibration value and the lens temperature. When the camera is positioned in a low-illumination scene, the temperature of the lens when the sensor generates the image is substituted into a calibration value calculation formula (x, y), and then the calibration value between the images generated by the sensor in the low-illumination scene can be obtained.
The specific relationship between the calibration value and the lens temperature may be: (1) a formula for the calibration value (x, y), namely x = a·t + b and y = c·t + d, where a, b, c, and d are constants and t is the lens temperature; or (2) a correspondence table between a plurality of temperature ranges of the lens and a plurality of calibration values.
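A minimal sketch of option (1), assuming hypothetical calibration measurements taken in a high-illuminance scene at several lens temperatures (the data values and function names are illustrative only, not measurements from the patent):

```python
import numpy as np

# Hypothetical high-illuminance measurements: lens temperature (°C) and measured (x, y) offsets.
temperatures = np.array([-25.0, -15.0, 0.0, 20.0, 40.0])
x_offsets    = np.array([-60.0, -62.0, -64.0, -67.0, -70.0])
y_offsets    = np.array([-391.0, -395.0, -399.0, -404.0, -409.0])

# Least-squares fit of x = a*t + b and y = c*t + d.
a, b = np.polyfit(temperatures, x_offsets, 1)
c, d = np.polyfit(temperatures, y_offsets, 1)

def calibration_value(lens_temperature_c):
    """Evaluate the fitted formulas at a lens temperature measured, e.g., in a low-illuminance scene."""
    return a * lens_temperature_c + b, c * lens_temperature_c + d

print(calibration_value(-18.0))
```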
The fusion module 44 is configured to fuse the plurality of images to generate a fused image. The format of the fused image is the same as that of the images before fusion. The fusion module may also encode the fused image to generate a picture in JPEG, BMP, or another format.
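Putting the modules together, the overall flow could be sketched as below. The integer-pixel shift for alignment and the pixel-wise average for fusion are deliberate simplifications standing in for whatever alignment and fusion algorithms a real implementation would use, and the inputs are hypothetical:

```python
import numpy as np

def align(image, calibration):
    """Shift an image by the (x, y) calibration value using a simple integer-pixel translation."""
    x, y = int(round(calibration[0])), int(round(calibration[1]))
    return np.roll(np.roll(image, y, axis=0), x, axis=1)

def fuse(images):
    """Fuse aligned images; a pixel-wise average stands in for the real fusion algorithm."""
    return np.mean(np.stack(images, axis=0), axis=0).astype(images[0].dtype)

# Hypothetical inputs: two same-sized single-channel images captured at the same time.
gray_image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
color_luma = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Calibration value obtained from the lens temperature via the formula or table described above
# (a small illustrative offset is used here).
calibration = (3, -2)

fused = fuse([gray_image, align(color_luma, calibration)])
```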
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state drive (SSD)), among others.
One or more of the above modules or units may be implemented in software, hardware, or a combination of both. When any of the above modules or units is implemented in software, the software exists as computer program instructions stored in a memory, and a processor may be used to execute the program instructions to implement the above method flows. The processor may include, but is not limited to, at least one of the following computing devices that run software: a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a microcontroller (MCU), or an artificial intelligence processor, each of which may include one or more cores for executing software instructions to perform operations or processing. The processor may be built into an SoC (system on chip) or an application-specific integrated circuit (ASIC), or may be a separate semiconductor chip. In addition to the core for executing software instructions, the processor may further include necessary hardware accelerators, such as a field-programmable gate array (FPGA), a PLD (programmable logic device), or a logic circuit implementing dedicated logic operations.
When the above modules or units are implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a non-integrated discrete device, which may run the necessary software or perform the above method flows without depending on software.

Claims (24)

1. An image fusion method, comprising:
acquiring a plurality of images acquired by a camera;
acquiring a target temperature of a lens of the camera when the plurality of images are acquired;
aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
2. The image fusion method according to claim 1, characterized in that:
the camera includes a plurality of sensors, the plurality of images being generated by the plurality of sensors, respectively, at a same time, different ones of the sensors generating different ones of the images.
3. The image fusion method according to claim 2, characterized in that:
the camera includes a single lens and a beam splitter, the method further comprising: the single lens receives light, the single lens transmits the received light to the optical splitter, the optical splitter splits the light from the single lens into a plurality of sub-light, the optical splitter transmits the plurality of sub-light to the plurality of sensors, and different sub-light corresponds to different sensors; or
The camera includes: a plurality of lenses, the method further comprising: each lens of the plurality of lenses receives a sub-ray, and the plurality of lenses transmit the received plurality of sub-rays to the plurality of sensors, wherein different sub-rays correspond to different sensors.
4. The image fusion method according to claim 1, wherein:
the camera includes a color sensor and a black and white sensor, the plurality of images including: a color image generated by the color sensor and a grayscale image generated by the black and white sensor.
5. The image fusion method according to any one of claims 1-4, the method further comprising:
when the camera is located in a high-illuminance scene, respectively acquiring, when the lens is at different temperatures, calibration values between images acquired by different sensors of the camera, so as to obtain a correspondence between the lens temperature and the calibration values;
and obtaining the calibration value corresponding to the target temperature according to the target temperature and the correspondence.
6. The image fusion method according to any one of claims 1 to 4, wherein the number of the plurality of images is 2, wherein:
the calibration values comprise a transverse calibration value x and a longitudinal calibration value y, wherein x is linearly related to the temperature of the lens, and y is linearly related to the temperature of the lens.
7. The image fusion method according to any one of claims 1-5, wherein the plurality of images are in the same format and are in one of the following formats:
RAW format, YUV format, and RGB format.
8. An image fusion apparatus, comprising:
the image acquisition module is used for acquiring a plurality of images acquired by the camera;
the temperature acquisition module is used for acquiring the target temperature of the lens of the camera when the plurality of images are acquired;
a calibration module, configured to align the plurality of images according to a calibration value, wherein the calibration value is related to the target temperature;
and the fusion module is used for fusing the aligned images.
9. The image fusion apparatus according to claim 8, wherein:
the plurality of images are generated by a plurality of sensors of the camera at the same time, respectively, different ones of the sensors being used to generate different ones of the images.
10. The image fusion apparatus according to claim 9, wherein:
the camera includes a color sensor and a black and white sensor, the plurality of images including: a color image generated by the color sensor and a grayscale image generated by the black and white sensor.
11. The image fusion apparatus according to claim 8, wherein the calibration module is further configured to:
when the camera is located in a high-illuminance scene, respectively acquiring calibration values between images acquired by different sensors of the camera, so as to obtain a correspondence between the lens temperature and the calibration values;
and obtaining the calibration value corresponding to the target temperature according to the target temperature and the correspondence.
12. The image fusion apparatus according to any one of claims 8 to 11, wherein the number of the plurality of images is 2, wherein:
the calibration values comprise a transverse calibration value x and a longitudinal calibration value y, wherein x is linearly related to the temperature of the lens, and y is linearly related to the temperature of the lens.
13. The image fusion apparatus according to any one of claims 8-12, wherein the plurality of images are in the same format and are in one of the following formats:
RAW format, YUV format, and RGB format.
14. An image fusion apparatus comprising:
the interface is used for acquiring a plurality of images acquired by the camera;
a processor for obtaining a target temperature of a lens of the camera when acquiring the plurality of images; aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
15. A camera, comprising:
an optical structure for transmitting a plurality of sub-rays to a plurality of sensors, each of the plurality of sub-rays corresponding to one of the plurality of sensors;
each sensor of the plurality of sensors is used for performing photoelectric conversion on the received sub-rays, so as to generate a plurality of images;
and the processor is used for acquiring the target temperature of the lens of the camera when the plurality of images are acquired, aligning the plurality of images according to a calibration value, and fusing the aligned plurality of images, wherein the calibration value is related to the target temperature.
16. The camera of claim 15, wherein the processor is to:
when the camera is located in a high-illuminance scene, respectively acquiring, when the lens is at different temperatures, calibration values between images acquired by different sensors of the camera, so as to obtain a correspondence between the lens temperature and the calibration values;
and obtaining the calibration value corresponding to the target temperature according to the target temperature and the correspondence.
17. The camera of claim 15, wherein: the optical structure comprises lenses equal in number to the plurality of sensors, and each lens corresponds to one sub-ray.
18. The camera of claim 15, wherein:
the optical structure includes a beam splitter and fewer lenses than the number of sensors.
19. The camera of claim 15, wherein:
the plurality of sensors includes a black and white sensor and a color sensor;
the plurality of images includes: a color image generated by the color sensor and a grayscale image generated by the black and white sensor.
20. The camera of claim 15, wherein:
the plurality of images are generated at the same time, and different ones of the sensors are used to generate different ones of the images.
21. The camera of claim 15, wherein the plurality of images are in the same format, the format being one of:
RAW format, YUV format, and RGB format.
22. The camera of claim 15, wherein:
the camera includes an image signal processor (ISP) located between the plurality of sensors and the processor, and the ISP is configured to perform image signal processing on the plurality of images before the plurality of images are sent to the processor.
23. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1-7.
24. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-7.
CN202010268724.XA 2019-12-17 2020-04-08 Image fusion method, device, camera, storage medium and program product Pending CN112991244A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2019113019916 2019-12-17
CN201911301991 2019-12-17

Publications (1)

Publication Number Publication Date
CN112991244A true CN112991244A (en) 2021-06-18

Family

ID=76344195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010268724.XA Pending CN112991244A (en) 2019-12-17 2020-04-08 Image fusion method, device, camera, storage medium and program product

Country Status (1)

Country Link
CN (1) CN112991244A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106454149A (en) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 Image photographing method and device and terminal device
CN107563971A (en) * 2017-08-12 2018-01-09 四川精视科技有限公司 A kind of very color high-definition night-viewing imaging method
CN107977924A (en) * 2016-10-21 2018-05-01 杭州海康威视数字技术股份有限公司 A kind of image processing method based on dual sensor imaging, system
CN109506782A (en) * 2018-12-03 2019-03-22 南京理工大学 Transient state temperature field test method and its test macro based on high-speed imaging technology
KR20190076188A (en) * 2017-12-22 2019-07-02 에이스웨이브텍(주) Fusion dual IR camera and image fusion algorithm using LWIR and SWIR
CN110012197A (en) * 2019-03-19 2019-07-12 昆明物理研究所 A kind of spatial domain picture registration fusion method based on focusing position compensation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination