CN110620885B - Infrared low-light-level image fusion system and method and electronic equipment - Google Patents


Info

Publication number
CN110620885B
CN110620885B (application CN201910993282.2A)
Authority
CN
China
Prior art keywords
image
light
infrared
low
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910993282.2A
Other languages
Chinese (zh)
Other versions
CN110620885A (en)
Inventor
赵国如
李慧奇
黄连鹤
蔡凌峰
宁运琨
郭贵昌
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201910993282.2A
Publication of CN110620885A
Application granted
Publication of CN110620885B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an infrared low-light-level image fusion system and method and electronic equipment. The system comprises: an infrared image acquisition module, used for acquiring an infrared image; a low-light-level image acquisition module, used for acquiring a low-light-level image; a video data conversion module, in which a multi-channel video decoding chip decodes the infrared image and the low-light-level image from analog signals into digital signals and outputs them to an FPGA processing module; the FPGA processing module, in which an FPGA chip performs preprocessing, image registration and image fusion on the infrared image and the low-light-level image and outputs a fused color image; and a D/A conversion module, in which a D/A conversion chip converts the color image into a VGA analog signal that is displayed by a display module. By building the acquisition and fusion system from an infrared/low-light-level image acquisition device and an FPGA data processing and control device, the application greatly improves the fusion speed of the images; the fused image also has a stronger sense of depth, which aids observation and judgment of the scene.

Description

Infrared low-light-level image fusion system and method and electronic equipment
Technical Field
The application belongs to the technical field of digital image processing, and particularly relates to an infrared low-light-level image fusion system and method and electronic equipment.
Background
An infrared image is formed from an object's own thermal radiation, so it can actively capture target information in a scene, displays hidden hot targets well, and is little affected by illumination conditions or severe weather; owing to the limitations of its imaging principle, however, an infrared image has low contrast, strong spatial correlation, poor rendition of target detail, and an appearance that does not match the habits of human vision. An image captured by a visible-light sensor is a reflection image of the object, so it contains rich detail information and its appearance matches the observation habits of the human eye. The low-light-level image belongs to the visible-light category: especially under low illumination, its content is richer than that of an infrared image and it describes environmental detail better, but it is limited by environment and distance, its imaging noise is large in bad weather, and the target is easily lost when the chromatic difference between target and background is small.
Fusing the infrared image with the low-light-level image greatly eases a detector's acquisition of the information in both images while retaining their respective advantages. Fusion can effectively exploit the characteristic information of each modality, enhance scene understanding, highlight targets, help detect hidden and camouflaged targets, and improve night surveillance and night-combat capability. However, because the imaging mechanisms of the two modalities differ and the images are acquired in different environments and at different times, the images differ in space and time, which makes their fusion difficult.
Application No. 201710300515.7 discloses an infrared and night-vision optical image fusion system and method. The system comprises a night-vision device; a beam splitter with opposed first and second side surfaces is arranged obliquely upstream of the night-vision device's optical path. Long-wave infrared light emitted by the target is reflected by the first side surface, and the reflected light passes through a first light-converging device and is imaged on a photodetector; near-infrared light from the target passes through the first and second side surfaces in turn and enters the night-vision device. The electrical signal output by the photodetector is sent to a processing unit, which converts it into an image signal; a display shows a visible-light image corresponding to the thermal image of the target, and that image is converged by a second light-converging device, reflected by the second side surface, and enters the night-vision device. This is a purely optical system that performs no image fusion processing, and the multiple reflections of the incident light cause large losses and strongly interfere with image acquisition.
Application No. 201710423679.9 discloses a real-time infrared/low-light-level image fusion system comprising a low-light-level optical lens group, an infrared optical lens group, a synchronous focusing device, a low-light-level image sensor and an infrared image sensor with their driving modules, an image-sensor analog front end, a low-light-level image processor, an image synchronous acquisition module, a digital image processing module, an image display control module, a micro-display, a display magnifying lens group, a power module, processor peripheral circuits, and a display switching button. The system uses synchronous focusing and performs image registration and fusion with existing software algorithms; its registration and fusion efficiency is low, the real-time performance of the fused image is poor, and the output delay is high.
Disclosure of Invention
The application provides an infrared low-light-level image fusion system and method and electronic equipment, aiming to solve, at least to some extent, one of the above technical problems in the prior art.
In order to solve the above problems, the present application provides the following technical solutions:
an infrared low-light image fusion system, comprising:
An infrared image acquisition module: used for acquiring an infrared image;
A low-light-level image acquisition module: used for acquiring a low-light-level image;
A video data conversion module: a multi-channel video decoding chip decodes the infrared image and the low-light-level image from analog signals into digital signals and, after frame synchronization and frame buffering, outputs the two digital streams to the FPGA processing module;
An FPGA processing module: an FPGA chip performs preprocessing, image registration and image fusion on the infrared image and the low-light-level image and then outputs a fused color image;
A D/A conversion module: a D/A conversion chip converts the color image into a VGA analog signal, which is displayed by the display module.
The technical scheme adopted by the embodiments of the application further comprises: the FPGA processing module specifically includes:
An image preprocessing unit: used for filtering the infrared image and the low-light-level image;
An image registration unit: used for performing optical registration, image cropping and image scaling on the filtered infrared and low-light-level images;
An image fusion unit: used for fusing the registered infrared and low-light-level images with the MIT pseudo-color image fusion algorithm and outputting a fused color image.
The technical scheme adopted by the embodiments of the application further comprises: the image registration unit performs optical registration, image cropping and image scaling on the infrared and low-light-level images, specifically as follows:
Image registration uses a dual-channel parallel-optical-axis system. Assuming that the infrared lens and the low-light-level lens receive light at the same elevation angles in the horizontal and vertical directions, the size and proportion of the overlapping part of the acquired infrared and low-light-level images are calculated from those angles:
overlap ratio = (2u·tan(θ/2) − b) / (2u·tan(θ/2)) = 1 − b / (2u·tan(θ/2))
In the above formula, u is the distance between the target and the objective lens, b is the distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens.
The non-shared parts of the infrared and low-light-level images are then cropped. Data are first extracted and the input samples counted, including row and column numbers, to determine the cropping range of the input image; each input sample is checked against the cropping range, written into the frame buffer if it lies inside it, and skipped in favor of the next sample if it lies outside.
Once only the common part of the two images remains, both are scaled. Image scaling consists of a data cache unit, a bilinear interpolation unit, and a coefficient-generation and logic-control unit. The data cache unit uses two FIFOs for line buffering and writes the lines in ping-pong fashion. During interpolation, the logic control unit controls reading and writing of the data cache unit and generates the interpolation parameters fed to the bilinear interpolation unit; that unit first interpolates the two buffered lines in the Y direction, obtaining two Y-direction results after two interpolations, and then performs one X-direction interpolation on those results to obtain the output sample.
The technical scheme adopted by the embodiment of the application further comprises the following steps: the image fusion unit adopts an MIT pseudo-color image fusion algorithm to perform fusion processing on the two registered images, and specifically comprises the following steps: the MIT fusion algorithm is realized through 6 convolvers, 4 dividers, 4 normalization processes and a delay output unit; extracting images from an infrared image frame cache and a low-light-level image frame cache respectively in a synchronous extraction mode, enabling the extracted images to enter corresponding convolvers respectively for convolution, inputting convolution results into a divider, and finally performing normalization processing by a normalization process to enable the interval of the results to be between 0 and 256, so that the images are enhanced; and then inputting the infrared image and the low-light-level image for image fusion, and finally mapping the fusion result to the RGB three channels respectively to obtain a fused color image.
The technical scheme adopted by the embodiments of the application further comprises:
An external storage module: comprises an SDRAM memory chip and a FLASH memory chip, the SDRAM being used to buffer real-time images and to exchange data with the FPGA chip in real time during image processing;
An external control module: used to adjust the internal mode of the FPGA chip through buttons and to control the output of different mode signals.
Another technical scheme adopted by the embodiment of the application is as follows: an infrared low-light image fusion method comprises the following steps:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: converting the color image into a VGA analog signal with a D/A conversion chip and displaying it on the display module.
The technical scheme adopted by the embodiments of the application further comprises: in step c, the preprocessing, image registration and image fusion performed on the infrared and low-light-level images by the FPGA chip specifically include:
step c1: filtering the infrared image and the low-light-level image;
step c2: performing optical registration, image cropping and image scaling on the filtered infrared and low-light-level images;
step c3: fusing the registered infrared and low-light-level images with the MIT pseudo-color image fusion algorithm and outputting a fused color image.
The technical scheme adopted by the embodiments of the application further comprises: in step c2, the optical registration, image cropping and image scaling of the infrared and low-light-level images specifically include: image registration uses a dual-channel parallel-optical-axis system; assuming that the infrared lens and the low-light-level lens receive light at the same elevation angles in the horizontal and vertical directions, the size and proportion of the overlapping part of the acquired infrared and low-light-level images are calculated from those angles:
overlap ratio = (2u·tan(θ/2) − b) / (2u·tan(θ/2)) = 1 − b / (2u·tan(θ/2))
In the above formula, u is the distance between the target and the objective lens, b is the distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens.
The non-shared parts of the infrared and low-light-level images are then cropped. Data are first extracted and the input samples counted, including row and column numbers, to determine the cropping range of the input image; each input sample is checked against the cropping range, written into the frame buffer if it lies inside it, and skipped in favor of the next sample if it lies outside.
Once only the common part of the two images remains, both are scaled. Image scaling consists of a data cache unit, a bilinear interpolation unit, and a coefficient-generation and logic-control unit. The data cache unit uses two FIFOs for line buffering and writes the lines in ping-pong fashion. During interpolation, the logic control unit controls reading and writing of the data cache unit and generates the interpolation parameters fed to the bilinear interpolation unit; that unit first interpolates the two buffered lines in the Y direction, obtaining two Y-direction results after two interpolations, and then performs one X-direction interpolation on those results to obtain the output sample.
The technical scheme adopted by the embodiments of the application further comprises: in step c3, fusing the two registered images with the MIT pseudo-color image fusion algorithm specifically includes: the MIT fusion algorithm is implemented with 6 convolvers, 4 dividers, 4 normalization stages and a delay output unit. Frames are extracted synchronously from the infrared frame buffer and the low-light-level frame buffer, each enters its corresponding convolver, the convolution results are fed to the dividers, and normalization finally maps the results into the 8-bit range 0 to 255, enhancing the images. The enhanced infrared and low-light-level images are then fused, and the fusion result is mapped onto the three RGB channels to obtain the fused color image.
The embodiment of the application adopts another technical scheme that: an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the following operations of the above infrared low-light-level image fusion method:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: converting the color image into a VGA analog signal with a D/A conversion chip and displaying it on the display module.
Compared with the prior art, the infrared low-light-level image fusion system and method and the electronic equipment of the embodiments of the application, which build an acquisition and fusion system for infrared and low-light-level images from an infrared/low-light-level image acquisition device and an FPGA data processing and control device, have at least the following advantages:
1. The image registration part uses parallel-optical-axis registration as the primary mechanism with digital image registration as a supplement, so the digital stage only needs to crop and scale the images; this greatly increases the image processing speed and improves the real-time performance of image processing;
2. The image fusion part implements the MIT pseudo-color image fusion algorithm on the FPGA. Thanks to the FPGA's computational parallelism, the fusion speed is greatly increased, the output delay is very low and the real-time performance is strong; moreover, the MIT fusion result is a color image with a stronger sense of depth, which aids observation and judgment of the image;
3. The whole image fusion system uses the FPGA as the core chip for signal control and computation; the circuitry is simple to build and easy to realize without assistance from other processors.
Drawings
FIG. 1 is a hardware structure diagram of an infrared low-light image fusion system according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an infrared low-light image fusion system according to an embodiment of the present application;
FIG. 3 is an optical diagram of a parallel optical axis system;
FIG. 4 is an image cropping flow diagram;
FIG. 5 is a schematic diagram of an image scaling process;
FIG. 6 is a schematic diagram of an image fusion algorithm framework;
FIG. 7 is a flowchart of an infrared low-light image fusion method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of hardware equipment of an infrared low-light image fusion method provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Please refer to fig. 1, which is a hardware structure diagram of the infrared low-light-level image fusion system according to an embodiment of the present application. The system uses a Xilinx SPARTAN-6 XC6SLX45-2FG484C chip, and the development platform is ISE Design Suite 14.7. The whole hardware system comprises a power supply circuit, a frame buffer circuit, a video data format conversion circuit, the FPGA main processor circuit and a D/A digital-to-analog conversion circuit, together with an SDRAM module, an SPI FLASH module, a JTAG module and a crystal oscillator module. With the FPGA as the core processor, driven by the power module, the system receives input video data, processes it in the FPGA, and converts the result into analog signals for output.
The image processing core is an FPGA + DDR3 architecture. The FPGA, a high-speed XILINX SPARTAN-6 XC6SLX45-2FG484 chip, carries the core video image processing algorithms and fully exploits the FPGA's parallel processing capability; with high-speed reads and writes between the FPGA and the DDR3, the system bandwidth reaches about 10 Gb/s (666 MT/s × 16 bit) and the DDR3 capacity is 2 Gbit, meeting the large buffering requirement of the video processing flow. The hard-core DDR controller integrated in the FPGA communicates with the DDR3 at a clock frequency of 333 MHz (666 MHz internally to the DDR3). Two-way video capture of the infrared and low-light-level images uses a Techwell TW2867, which accepts up to four composite video inputs with automatic PAL/NTSC/SECAM identification and outputs BT.656 on a multiplexable bus that is demultiplexed in the FPGA to save I/O. Video output uses an ADI ADV7123 three-channel 10-bit DAC, which supports RGB digital input and output over a VGA interface, conversion rates up to 240 MSPS, and video output up to 1080p at 60 Hz.
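The quoted bandwidth follows directly from the DDR3 figures given above (666 MT/s effective rate on a 16-bit bus):

```python
# DDR3 interface figures as stated in the text: 666 MT/s on a 16-bit bus.
transfers_per_s = 666e6
bus_width_bits = 16
bandwidth_gbps = transfers_per_s * bus_width_bits / 1e9
print(round(bandwidth_gbps, 2))  # prints 10.66, matching the ~10 Gb/s claim
```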
Please refer to fig. 2, which is a schematic structural diagram of the infrared low-light-level image fusion system according to an embodiment of the present application. The system comprises an infrared image acquisition module, a low-light-level image acquisition module, a video data conversion module, an FPGA processing module, a D/A conversion module, a display module, an external storage module and an external control module.
The infrared image acquisition module is used for acquiring an infrared image. It adopts a UWA384CX-H42 infrared sensor core fitted with an infrared lens whose surface carries a filter film; the film reflects visible light but passes infrared wavelengths, reducing visible-light leakage and increasing infrared transmittance. The lens also condenses light: externally radiated or reflected infrared light is converged by the lens and projected onto the IRFPA (infrared focal plane array) behind it, which senses the infrared light and forms the image.
The low-light-level image acquisition module is used for acquiring a low-light-level image. It adopts a 1XC18/18WHS-CL image intensifier core with an adjustable-iris optical lens, and its output is a PAL video signal. External light is converged by the optical lens onto the photosensitive array, and the image intensifier enhances the image converged on the focal plane, increasing the light sensitivity of the system so that low-light imaging is enhanced.
The video data conversion module: a multi-channel video decoding chip decodes the captured infrared and low-light-level images from analog signals into digital signals and, after frame synchronization and frame buffering, outputs the two digital streams to the FPGA processing module. In the embodiment of the application, the video signals output by the image acquisition devices such as the infrared optical lens, the visible-light optical lens, the infrared image detector and the low-light-level image detector are all in PAL (Phase Alternating Line) format, which the subsequent FPGA chip cannot process directly; the video decoding chip therefore converts the infrared and low-light-level images from analog video signals into digital signals before passing them to the FPGA.
The FPGA processing module performs preprocessing, image registration and image fusion on the infrared and low-light-level images to obtain a fused color image. Specifically, the FPGA processing module includes:
An image preprocessing unit, which filters the infrared image and the low-light-level image.
An image registration unit, which performs optical registration, image cropping and image scaling on the filtered infrared and low-light-level images. The front-end optical system is registered with a dual-channel parallel-optical-axis arrangement, which avoids mutual interference between the two channels and facilitates the subsequent fusion of the images. Table 1 below lists the parameters of the infrared and low-light-level lenses used in the application:
TABLE 1 Infrared and low-light-level lens parameters (reproduced as an image in the original; values not recoverable here)
Fig. 3 shows the optical path of the parallel-optical-axis system. As Table 1 indicates, the horizontal and vertical field angles of the infrared and low-light-level lenses differ only slightly, so the elevation angles of the light the two lenses can receive in the horizontal and vertical directions may be taken as approximately equal; on that premise the size and proportion of the overlapping part of the acquired infrared and low-light-level images can be calculated. Both channels image onto a 2/3-inch target surface of 640 × 480 pixels.
The proportion of the whole image occupied by the overlapping part of the infrared and low-light-level images is obtained from the following formula:
overlap ratio = (2u·tan(θ/2) − b) / (2u·tan(θ/2)) = 1 − b / (2u·tan(θ/2))    (1)
In formula (1), u is the distance between the object and the objective lens, b is the distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens.
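Formula (1), as stated in terms of u, b and θ, can be checked numerically; the values used below (10 m target distance, 60 mm baseline, 32° horizontal field angle) are illustrative assumptions, not the figures from Table 1:

```python
import math

def overlap_ratio(u, b, theta_deg):
    """Fraction of the horizontal field shared by two parallel-axis lenses.

    u: target distance, b: baseline between the two objectives (same units
    as u), theta_deg: horizontal field angle of the lens in degrees.
    """
    half_width = u * math.tan(math.radians(theta_deg) / 2)  # half FOV width at distance u
    return max(0.0, 1.0 - b / (2 * half_width))             # clamp: no overlap if b too large

# Illustrative example: 10 m target, 60 mm baseline, 32 degree field angle
print(round(overlap_ratio(10.0, 0.06, 32.0), 4))  # prints 0.9895
```

As expected, the shared region approaches the full frame as the target distance grows relative to the baseline, which is why only modest cropping is needed afterwards.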
Because parallel optical axes are used, the image sources differ in spatial position and the infrared and low-light-level images contain non-shared parts; these must be cropped so that the pixel width and height of the cropped infrared and low-light-level images are essentially identical. As shown in fig. 4, the image cropping flow first extracts data and counts the input samples, including row and column numbers, to determine the cropping range of the input image; each input sample is then checked against that range, written into the frame buffer if it lies inside, and otherwise left unprocessed before the next sample is examined.
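The counting-and-range-check cropping flow of fig. 4 can be sketched in software as follows (a minimal model of the FPGA dataflow; the function name and window bounds are illustrative):

```python
def crop_stream(pixels, width, x0, x1, y0, y1):
    """Row/column-counting crop of a raster-ordered pixel stream.

    Mirrors the described flow: count rows and columns of the input,
    keep only samples inside the crop window [x0, x1) x [y0, y1),
    and pass them on to the frame buffer.
    """
    kept = []
    for i, p in enumerate(pixels):
        row, col = divmod(i, width)              # column/row counters
        if y0 <= row < y1 and x0 <= col < x1:    # inside the cropping range?
            kept.append(p)                       # -> frame buffer
        # outside the range: discard and judge the next sample
    return kept

# 4x4 image, keep the central 2x2 window
img = list(range(16))
print(crop_stream(img, 4, 1, 3, 1, 3))  # prints [5, 6, 9, 10]
```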
Once only the shared portions of the two images remain, both images are scaled so that their aspect ratios match and their pixel coordinates correspond as closely as possible. As shown in fig. 5, the image scaling logic consists of a data buffer unit, a bilinear interpolation unit, and a coefficient-generation and logic-control unit. The data buffer unit uses two FIFOs for line buffering and writes the buffered lines in ping-pong fashion. During interpolation, the logic control unit manages the reads and writes of the data buffer unit and simultaneously generates the interpolation parameters fed to the bilinear interpolation unit. The bilinear interpolation unit first interpolates the two buffered data lines in the Y direction; two Y-direction interpolations yield two intermediate results, after which a single X-direction interpolation produces the output value. When an image is enlarged, the coordinates (X0, Y0) in the original image corresponding to a new pixel at (Xnew, Ynew) are generally fractional, depending on the magnification; the four original data values nearest (X0, Y0) are therefore used as the interpolation inputs, and the new value F is inferred from their distances to (X0, Y0).
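The interpolation order described above (two Y-direction interpolations on the buffered lines, then one X-direction interpolation) can be sketched in software as follows; the function names are illustrative, and the fixed-point details of the FPGA implementation are omitted.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b with weight t in [0, 1]."""
    return a * (1 - t) + b * t

def bilinear(line0, line1, x_new, y_frac):
    """Bilinear interpolation in the order used by the scaling logic:
    two Y-direction interpolations on the buffered lines, then one in X.

    line0, line1 -- the two buffered image lines (lists of pixel values)
    x_new        -- fractional X coordinate in the original image
    y_frac       -- fractional part of the Y coordinate between the two lines
    """
    x0 = int(x_new)
    x1 = min(x0 + 1, len(line0) - 1)
    x_frac = x_new - x0
    # First pass: interpolate between the two lines (Y direction) at x0 and x1.
    fy0 = lerp(line0[x0], line1[x0], y_frac)
    fy1 = lerp(line0[x1], line1[x1], y_frac)
    # Second pass: interpolate between the two Y results (X direction).
    return lerp(fy0, fy1, x_frac)
```

A value exactly between four neighbors, e.g. `bilinear([0, 10], [20, 30], 0.5, 0.5)`, averages all four of them.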
In summary, the image registration approach relies mainly on parallel-optical-axis registration, assisted by digital image registration; because digital registration requires only cropping and scaling, the image processing speed is greatly increased and real-time performance is improved.
An image fusion unit: used for fusing the two registered images with the MIT pseudo-color image fusion algorithm and outputting a fused color image. In the embodiment of the application, as shown in fig. 6, the image fusion framework implements the MIT algorithm with 6 convolvers, 4 dividers, 4 normalization stages, and one delay output unit; because addition and subtraction are simple to implement on an FPGA, the adders and subtractors are omitted from the figure. The fusion is pixel-level, so the infrared and low-light pixels must correspond one to one and the two images must be processed simultaneously. The images are extracted synchronously from the infrared image frame buffer and the low-light image frame buffer; each extracted image enters its corresponding convolver and is convolved with the corresponding template; the convolution results are fed to the dividers; and finally a normalization stage maps the results into the interval 0 to 255 (exactly an 8-bit binary number), a process that also enhances the images. Image fusion is then performed with the two enhanced images as input; the fusion stage is essentially the same as the enhancement stage. Finally, the outputs are mapped to the R, G, and B channels to obtain the fused color image.
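A drastically simplified software sketch of this pixel pipeline is given below: per-pixel processing, a convolver per template, a divider, normalization into 0 to 255, and RGB mapping. The 3x3 center and surround templates and the channel mapping are assumptions for illustration; the patent does not disclose the actual MIT templates here.

```python
def convolve3x3(img, kernel):
    """Valid-region 3x3 convolution on a 2D list (one of the 6 convolvers)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += img[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

def normalize(img):
    """Map values into 0..255 so each channel is exactly an 8-bit number."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi != lo else 0.0
    return [[int((v - lo) * scale) for v in row] for row in img]

# Assumed templates: a center tap and a surround average (not from the patent).
CENTER = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
SURROUND = [[1 / 8.0] * 3, [1 / 8.0, 0, 1 / 8.0], [1 / 8.0] * 3]

def fuse(ir, lowlight):
    """Toy pixel-level pseudo-color fusion: enhance each input with a
    center/surround ratio (convolver plus divider), normalize, map to RGB."""
    def enhance(img):
        c = convolve3x3(img, CENTER)
        s = convolve3x3(img, SURROUND)
        ratio = [[c[y][x] / (1.0 + s[y][x]) for x in range(len(c[0]))]
                 for y in range(len(c))]
        return normalize(ratio)
    r = enhance(ir)         # infrared drives the red channel
    g = enhance(lowlight)   # low light drives the green channel
    b = normalize([[r[y][x] - g[y][x] for x in range(len(r[0]))]
                   for y in range(len(r))])  # their difference drives blue
    return r, g, b

# Small 4x4 synthetic inputs for a smoke test.
ir = [[10 * (x + y) for x in range(4)] for y in range(4)]
ll = [[5 * x for x in range(4)] for y in range(4)]
```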
In summary, this application implements the MIT pseudo-color image fusion algorithm on an FPGA. Thanks to the FPGA's computational parallelism, the fusion speed is greatly increased, the image output latency is very low, and real-time performance is strong. Furthermore, the MIT algorithm outputs a color image, which has a stronger sense of depth and better supports observation and interpretation of the scene.
An A/D conversion module: used for converting the image data output by the FPGA chip into a VGA analog signal with the A/D conversion chip, which is then displayed by the display module. In the embodiment of the present application, the display module is a liquid crystal display, and the resolution of the displayed image is preferably 1920 x 1080.
An external storage module: consists mainly of two parts. One is an SDRAM memory chip that serves as a buffer during image processing; used as a frame buffer device, it caches the real-time images and exchanges data with the FPGA in real time. The other is a FLASH memory chip that stores information which must survive power-down, such as the FPGA configuration data.
An external control module: adjusts the internal mode of the system through buttons and controls the output of different mode signals to suit different operating scenarios.
Please refer to fig. 7, which is a flowchart of an infrared low-light image fusion method according to an embodiment of the present application. The infrared low-light image fusion method comprises the following steps:
step 100: respectively collecting an infrared image and a low-light image;
In step 100, the infrared image is acquired with a UWA384CX-H42 infrared sensor core fitted with an infrared lens. The lens surface carries a filter coating that reflects visible light while passing infrared wavelengths, reducing the amount of visible light transmitted and increasing the infrared transmittance. The lens also condenses light: infrared light radiated or reflected from the scene is converged by the lens and projected onto the IRFPA (infrared focal plane array) behind it, which senses the infrared light and forms the image. The low-light image is acquired with a 1XC18/18WHS-CL image intensifier core and an optical lens with an adjustable aperture, and is output in PAL video format; external light is converged onto the photosensitive array after passing through the optical lens, and the image intensifier amplifies the image converged on the focal plane, increasing the light sensitivity of the system and enhancing low-light imaging.
Step 200: the collected infrared image and low-light image are decoded and converted into digital signals by a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
In step 200, the image signals output by the acquisition devices (the infrared optical lens, visible-light optical lens, infrared image detector, and low-light image detector) are all in PAL (Phase Alternating Line) format, which the downstream FPGA chip cannot process directly; a video decoding chip therefore converts the infrared image and the low-light image from analog video signals into digital signals before they are passed to the FPGA for processing.
Step 300: filtering, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA processing module, so that a fused color image is obtained;
In step 300, the image registration specifically includes: performing optical registration, image cropping, and image scaling on the filtered infrared image and low-light image. The registration of the front-end optical system uses a dual-channel parallel-optical-axis arrangement, so the two channels do not interfere with each other, which facilitates subsequent image fusion. Fig. 3 shows the optical path of the parallel-optical-axis system. Because the horizontal and vertical field angles of the infrared lens and the low-light lens differ only slightly, the elevation angles of the rays each lens can receive in the horizontal and vertical directions can be treated as approximately equal; on this premise, the size and area of the overlapping portion of the acquired infrared image and low-light image can be calculated. Both the infrared and the low-light detectors use a 2/3-inch target surface with 640 x 480 pixels.
The fraction of the whole image occupied by the overlapping portion of the infrared image and the low-light image can be obtained from the following formula:
S = (2u·tan(θ/2) - b) / (2u·tan(θ/2))    (1)
In formula (1), u is the distance between the target and the objective lenses, b is the baseline distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens.
Because parallel optical axes are used, the two image sources differ in spatial position, so the infrared image and the low-light image each contain a non-shared portion. These non-shared portions must be cropped so that the cropped infrared and low-light images have essentially the same pixel width and height. As shown in fig. 4, the cropping process first extracts the data and counts the input data, including the number of rows and columns, to determine the cropping range of the input image; it then checks whether each input datum lies within that range: if so, the datum is written to the frame buffer; if not, the datum is left unprocessed and the next one is examined.
Once only the shared portions of the two images remain, both images are scaled so that their aspect ratios match and their pixel coordinates correspond as closely as possible. As shown in fig. 5, the image scaling logic consists of a data buffer unit, a bilinear interpolation unit, and a coefficient-generation and logic-control unit. The data buffer unit uses two FIFOs for line buffering and writes the buffered lines in ping-pong fashion. During interpolation, the logic control unit manages the reads and writes of the data buffer unit and simultaneously generates the interpolation parameters fed to the bilinear interpolation unit. The bilinear interpolation unit first interpolates the two buffered data lines in the Y direction; two Y-direction interpolations yield two intermediate results, after which a single X-direction interpolation produces the output value. When an image is enlarged, the coordinates (X0, Y0) in the original image corresponding to a new pixel at (Xnew, Ynew) are generally fractional, depending on the magnification; the four original data values nearest (X0, Y0) are therefore used as the interpolation inputs, and the new value F is inferred from their distances to (X0, Y0).
In the embodiment of the present application, the image fusion specifically includes: fusing the two registered images with the MIT pseudo-color image fusion algorithm. As shown in fig. 6, the image fusion framework implements the MIT algorithm with 6 convolvers, 4 dividers, 4 normalization stages, and one delay output unit; because addition and subtraction are simple to implement on an FPGA, the adders and subtractors are omitted from the figure. The fusion is pixel-level, so the infrared and low-light pixels must correspond one to one and the two images must be processed simultaneously. The images are extracted synchronously from the infrared image frame buffer and the low-light image frame buffer; each extracted image enters its corresponding convolver and is convolved with the corresponding template; the convolution results are fed to the dividers; and finally a normalization stage maps the results into the interval 0 to 255 (exactly an 8-bit binary number), a process that also enhances the images. Image fusion is then performed with the two enhanced images as input; the fusion stage is essentially the same as the enhancement stage. Finally, the outputs are mapped to the R, G, and B channels to obtain the fused color image.
Step 400: and converting the fused video data output by the FPGA chip into VGA analog signals by using an A/D conversion chip, and displaying the VGA analog signals by using a display.
In step 400, the display is a liquid crystal display, and the resolution of the displayed image is preferably 1920 × 1080.
Fig. 8 is a schematic structural diagram of the hardware for the infrared low-light image fusion method provided in an embodiment of the present application. As shown in fig. 8, the device includes one or more processors and a memory (one processor is taken as an example), and may further include an input system and an output system.
The processor, memory, input system, and output system may be connected by a bus or other means, as exemplified by the bus connection in fig. 8.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules. The processor executes various functional applications and data processing of the electronic device, i.e., implements the processing method of the above-described method embodiment, by executing the non-transitory software program, instructions and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processing system over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input system may receive input numeric or character information and generate a signal input. The output system may include a display device such as a display screen.
The one or more modules are stored in the memory and, when executed by the one or more processors, perform the following for any of the above method embodiments:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: and converting the color image into a VGA analog signal by using an A/D conversion chip, and displaying the VGA analog signal by using a display module.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application provide a non-transitory (non-volatile) computer storage medium having stored thereon computer-executable instructions that may perform the following operations:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: and converting the color image into a VGA analog signal by using an A/D conversion chip, and displaying the VGA analog signal by using a display module.
Embodiments of the present application provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the following:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: and converting the color image into a VGA analog signal by using an A/D conversion chip, and displaying the VGA analog signal by using a display module.
The infrared low-light image fusion system, infrared low-light image fusion method, and electronic device of the embodiments of the present application build the acquisition and fusion system for infrared and low-light images from an infrared/low-light image acquisition device and an FPGA data-processing control device. Compared with the prior art, the system has at least the following advantages:
1. the image registration stage uses parallel-optical-axis registration as the primary method, assisted by digital image registration; since digital registration requires only image cropping and scaling, the image processing speed is greatly increased and real-time performance is greatly improved;
2. the image fusion stage implements the MIT pseudo-color image fusion algorithm on an FPGA; thanks to the FPGA's computational parallelism, the fusion speed is greatly increased, the output latency is very low, and real-time performance is strong; furthermore, the MIT algorithm outputs a color image, which has a stronger sense of depth and better supports observation and interpretation of the scene;
3. the whole image fusion system uses the FPGA as the core chip for signal control and computation; the circuitry is simple to build and easy to implement without the assistance of other processors.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. An infrared low-light image fusion system, comprising:
the infrared image acquisition module: the infrared image acquisition module is used for acquiring an infrared image;
the low-light-level image acquisition module: used for collecting low-light level images;
the video data conversion module: the multi-channel video decoding chip is used for decoding the infrared image and the low-light-level image from analog signals to digital signals, and outputting the two paths of digital signals to the FPGA processing module after frame synchronization and frame buffer processing;
an FPGA processing module: the system is used for outputting a fused color image after preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip;
an A/D conversion module: the A/D conversion chip is used for converting the color image into a VGA analog signal and displaying the VGA analog signal through the display module;
the FPGA processing module specifically comprises:
an image preprocessing unit: the infrared image and the low-light image are subjected to filtering processing;
an image registration unit: the image processing device is used for carrying out optical registration, image shearing and image scaling operation on the filtered infrared image and the low-light-level image;
the image registration unit performs optical registration, image shearing and image scaling operations on the infrared image and the low-light-level image, and specifically comprises the following steps:
carrying out image registration by adopting a dual-channel parallel optical axis system; assuming that the infrared lens and the low-light-level lens can receive light in the horizontal direction and the vertical direction at the same elevation angle, calculating the size and the area occupied by the overlapped part of the acquired infrared image and the acquired low-light-level image according to the elevation angle:
S = (2u·tan(θ/2) - b) / (2u·tan(θ/2))
in the above formula, u is the distance between the target and the objective lenses, b is the baseline distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens;
shearing the non-shared parts in the infrared image and the low-light-level image, firstly extracting data, counting input data, including the number of rows and the number of columns, determining the shearing range of the input image, judging whether each input data is in the shearing range, inputting the data into a frame buffer memory when the input data is in the shearing range, and starting to judge the next data when the input data is not in the shearing range;
when the rest of the two images are the common parts, carrying out zooming processing on the two images, wherein the zooming processing on the two images is realized by a data cache unit, a bilinear interpolation operation unit and a coefficient generation and logic control unit together; the data cache unit adopts two FIFOs to perform line cache of data and adopts ping-pong operation to perform line cache write-in of the data; when the interpolation operation of the data is carried out, the logic control unit controls the data reading and writing of the data cache unit, simultaneously generates interpolation parameters and inputs the interpolation parameters into the bilinear interpolation operation unit, the bilinear interpolation operation unit carries out interpolation in the Y direction on two input cache data lines, two interpolation results in the Y direction can be obtained after two times of interpolation in the Y direction, then the interpolation in the X direction is carried out, and the interpolation data in the X direction can be obtained after one time of interpolation in the X direction.
2. The infrared low-light image fusion system of claim 1, wherein the FPGA processing module further comprises:
an image fusion unit: and the image fusion module is used for performing fusion processing on the registered infrared image and low-light image by adopting an MIT pseudo-color image fusion algorithm and outputting a fused color image.
3. The infrared low-light-level image fusion system as claimed in claim 2, wherein the image fusion unit adopts an MIT pseudo-color image fusion algorithm to perform fusion processing on the two registered images, specifically: the MIT fusion algorithm is realized through 6 convolvers, 4 dividers, 4 normalization processes and a delay output unit; extracting images from an infrared image frame cache and a low-light-level image frame cache respectively in a synchronous extraction mode, enabling the extracted images to enter corresponding convolvers respectively for convolution, inputting convolution results into a divider, and finally performing normalization processing by a normalization process to enable the interval of the results to be between 0 and 256, so that the images are enhanced; and then inputting the infrared image and the low-light-level image for image fusion, and finally mapping the fusion result to the RGB three channels respectively to obtain a fused color image.
4. An infrared low-light image fusion system according to any one of claims 1 to 3, further comprising:
an external storage module: the image processing method comprises a storage chip SDRAM chip and a storage chip FLASH chip, wherein the storage chip SDRAM chip is used for caching real-time images and performing real-time data interaction with an FPGA chip in the image processing process;
an external control module: the FPGA chip is used for adjusting the internal mode of the FPGA chip through the button and controlling and outputting different mode signals.
5. An infrared low-light image fusion method is characterized by comprising the following steps:
step a: respectively collecting an infrared image and a low-light image;
step b: the infrared image and the low-light-level image are decoded and converted into digital signals from analog signals through a multi-channel video decoding chip, and the two paths of digital signals are subjected to frame synchronization and frame buffer processing and then output to an FPGA processing module;
step c: preprocessing, image registration and image fusion operations are carried out on the infrared image and the low-light-level image through an FPGA chip, and then a fused color image is output;
step d: converting the color image into a VGA analog signal by using an A/D conversion chip, and displaying the VGA analog signal by using a display module;
in the step c, the preprocessing, image registration and image fusion operations performed on the infrared image and the low-light-level image by the FPGA chip specifically include:
step c 1: filtering the infrared image and the low-light-level image;
step c 2: carrying out optical registration, image shearing and image scaling operation on the filtered infrared image and the low-light-level image;
in step c2, the performing optical registration, image cropping and image scaling operations on the infrared image and the low-light-level image specifically includes: carrying out image registration by adopting a dual-channel parallel optical axis system; assuming that the infrared lens and the low-light-level lens can receive light in the horizontal direction and the vertical direction at the same elevation angle, calculating the size and the area occupied by the overlapped part of the acquired infrared image and the acquired low-light-level image according to the elevation angle:
S = (2u·tan(θ/2) - b) / (2u·tan(θ/2))
in the above formula, u is the distance between the target and the objective lenses, b is the baseline distance between the two objective lenses of the parallel optical axes, and θ is the horizontal field angle of the lens;
shearing the non-shared parts in the infrared image and the low-light-level image, firstly extracting data, counting input data, including the number of rows and the number of columns, determining the shearing range of the input image, judging whether each input data is in the shearing range, inputting the data into a frame buffer memory when the input data is in the shearing range, and starting to judge the next data when the input data is not in the shearing range;
when the rest of the two images are the common parts, carrying out zooming processing on the two images, wherein the zooming processing on the two images is realized by a data cache unit, a bilinear interpolation operation unit and a coefficient generation and logic control unit; the data cache unit adopts two FIFOs to perform line cache of data and adopts ping-pong operation to perform line cache write-in of the data; when the interpolation operation of the data is carried out, the logic control unit controls the data reading and writing of the data cache unit, simultaneously generates interpolation parameters and inputs the interpolation parameters into the bilinear interpolation operation unit, the bilinear interpolation operation unit carries out interpolation in the Y direction on two input cache data lines, two interpolation results in the Y direction can be obtained after two times of interpolation in the Y direction, then the interpolation in the X direction is carried out, and the interpolation data in the X direction can be obtained after one time of interpolation in the X direction.
6. The infrared low-light image fusion method according to claim 5, wherein in the step c, the pre-processing, image registration and image fusion operations of the infrared image and the low-light image by the FPGA chip further comprise:
step c3: performing fusion processing on the registered infrared image and low-light image by adopting an MIT pseudo-color image fusion algorithm, and outputting a fused color image.
7. The infrared low-light image fusion method according to claim 6, wherein in the step c3, the fusion process of the registered infrared image and low-light image by using the MIT false-color image fusion algorithm is specifically: the MIT fusion algorithm is realized through 6 convolvers, 4 dividers, 4 normalization processes and a delay output unit; extracting images from an infrared image frame cache and a low-light-level image frame cache respectively in a synchronous extraction mode, enabling the extracted images to enter corresponding convolvers respectively for convolution, inputting convolution results into a divider, and finally performing normalization processing by a normalization process to enable the interval of the results to be between 0 and 256, so that the images are enhanced; and then inputting the infrared image and the low-light-level image for image fusion, and finally mapping the fusion result to the RGB three channels respectively to obtain a fused color image.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the infrared low-light image fusion method of any one of claims 5 to 7.
CN201910993282.2A 2019-10-18 2019-10-18 Infrared low-light-level image fusion system and method and electronic equipment Active CN110620885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910993282.2A CN110620885B (en) 2019-10-18 2019-10-18 Infrared low-light-level image fusion system and method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910993282.2A CN110620885B (en) 2019-10-18 2019-10-18 Infrared low-light-level image fusion system and method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110620885A CN110620885A (en) 2019-12-27
CN110620885B true CN110620885B (en) 2022-04-26

Family

ID=68925881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910993282.2A Active CN110620885B (en) 2019-10-18 2019-10-18 Infrared low-light-level image fusion system and method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110620885B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784621A (en) * 2020-07-10 2020-10-16 深圳市中江天华科技有限公司 Image acquisition signal enhancement method by using low-light-level and infrared fusion
CN113112440A (en) * 2021-04-23 2021-07-13 华北电力大学 Ultraviolet and visible light image fusion system and method based on FPGA
CN113727028B (en) * 2021-09-03 2022-03-25 中国人民解放军32802部队 Modular night vision imaging camera

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390281A (en) * 2013-07-29 2013-11-13 西安科技大学 Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN106454216A (en) * 2016-11-02 2017-02-22 南京理工大学 Night driving system based on uncooled infrared and low-light-level fusion
CN106500852A (en) * 2016-09-28 2017-03-15 北方夜视技术股份有限公司 System and method for infrared and visible light image registration and fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10277838B2 (en) * 2016-07-28 2019-04-30 BAE Systems Imaging Solutions Inc. Monolithic visible/IR fused low light level imaging sensor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390281A (en) * 2013-07-29 2013-11-13 西安科技大学 Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method
WO2015157058A1 (en) * 2014-04-07 2015-10-15 Bae Systems Information & Electronic Systems Integration Inc. Contrast based image fusion
CN106500852A (en) * 2016-09-28 2017-03-15 北方夜视技术股份有限公司 System and method for infrared and visible light image registration and fusion
CN106454216A (en) * 2016-11-02 2017-02-22 南京理工大学 Night driving system based on uncooled infrared and low-light-level fusion

Also Published As

Publication number Publication date
CN110620885A (en) 2019-12-27

Similar Documents

Publication Publication Date Title
CN110620885B (en) Infrared low-light-level image fusion system and method and electronic equipment
CN110708513B (en) 8K video multi-core heterogeneous processing device
CN101888487B (en) High dynamic range video imaging system and image generating method
CN106385530B (en) Double-spectrum camera
US20140347439A1 (en) Mobile device and system for generating panoramic video
CN108848354B (en) VR content camera system and working method thereof
CN107820066A (en) A kind of low-luminance color video camera
WO2019042034A1 (en) Intelligent three-light fusion imager and method therefor
CN105635720A (en) Stereo vision camera with double-lens single sensor
WO2022134957A1 (en) Camera occlusion detection method and system, electronic device, and storage medium
US20240054613A1 (en) Image processing method, imaging processing apparatus, electronic device, and storage medium
CN104463774B (en) A kind of three tunnel image co-registration processor design methods based on DM642
CN109089048A (en) More camera lens full-view cooperative device and methods
CN101610355A (en) Day and night camera and filter thereof, optical system
CN111783563A (en) Double-spectrum-based face snapshot and monitoring method, system and equipment
CN113066011B (en) Image processing method, device, system, medium and electronic equipment
CN110708443B (en) Single-optical-axis camera device and electronic equipment
CN109698897B (en) All-in-one optical system of dynamic zoom lens
CN109688314B (en) Camera system and method with low delay, less cache and controllable data output mode
CN216596288U (en) Image fusion system based on AM5728 chip
CN207369151U (en) A kind of panoramic picture harvester
CN111787198A (en) Double-camera system and double-camera
Yang et al. Research and implementation of color night vision imaging system based on FPGA and CMOS
JPH06105224A (en) Dynamic range expansion device
CN210201926U (en) Double-fisheye panoramic image acquisition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant