CN110619593B - Double-exposure video imaging system based on dynamic scene


Info

Publication number
CN110619593B
CN110619593B (application CN201910693033.1A)
Authority
CN
China
Prior art keywords
exposure
image
image data
images
sequence
Prior art date
Legal status
Active
Application number
CN201910693033.1A
Other languages
Chinese (zh)
Other versions
CN110619593A (en)
Inventor
赵小明
宗靖国
李英
王文超
王子
吴昌辉
王星量
高苗
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201910693033.1A
Publication of CN110619593A
Application granted
Publication of CN110619593B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0007 Image acquisition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a double-exposure video imaging system based on a dynamic scene. The system comprises a double-exposure image acquisition module, which acquires first sequence image data by controlling the exposure time, the first sequence image data comprising a plurality of frames of first exposure images; an image transmission module, connected with the double-exposure image acquisition module, which acquires the first sequence image data and performs fixed-interval frame extraction on it to obtain second sequence image data; and an image processing module, connected with the image transmission module, which acquires the second sequence image data and fuses every two adjacent second exposure images in it to obtain fused images. The double-exposure video imaging system provided by the invention can sequentially and alternately acquire high-exposure and low-exposure sequence images, thereby solving the problems that existing HDR photographing or video systems cannot handle moving-object information in dynamic scenes, that fused images develop pseudo contours when the brightness information of differently exposed images differs too much, and that detail information of the fused image is lost.

Description

Double-exposure video imaging system based on dynamic scene
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a double-exposure video imaging system based on a dynamic scene.
Background
The dynamic range is typically defined as the ratio of the luminance of the brightest part of a scene to that of its darkest part. When pictures are taken with common photographic devices such as cameras or mobile phones, the dynamic range of the resulting picture is far lower than the dynamic range of the real scene. Referring to FIGS. 1a-1b, FIG. 1a is a low dynamic range (Low Dynamic Range, LDR) image, whose dynamic range is typically on the order of 10², while FIG. 1b is a high dynamic range (High Dynamic Range, HDR) scene image, whose dynamic range can reach 10⁶. Consequently, when an image is photographed, darker or brighter areas of the real scene saturate in the photographed image, i.e. appear fully black or fully white (commonly referred to as underexposure and overexposure), causing a loss of image information and seriously degrading image quality. Although specialized image acquisition devices that capture HDR data directly have appeared in recent years, the dynamic range they can capture is still lower than that of a real scene, and they are too expensive to popularize. Therefore, to bridge the gap between the dynamic range of the real scene and that of the photographed image, and to better capture the details of the real scene, high dynamic range imaging technology has emerged.
The main principle of high dynamic range imaging is to obtain scene information over different brightness ranges by continuously changing the exposure time of the camera and then to merge that information, so that the photograph better approximates the real scene as observed by the human eye. There are two capture strategies: hardware-based single-exposure capture, and sequential multiple-exposure capture at different times. Hardware-based single-exposure capture brackets the exposures simultaneously on a single imaging sensor and therefore sacrifices the spatial resolution of the image; the dynamic range it captures is also far below what the human visual system perceives. Sequential multi-exposure fusion at different times is therefore an important research topic in the field of HDR imaging. This technique controls, via the shutter time, the luminous flux of scene brightness information entering the camera and shoots a sequence of multi-exposure images so that the sequence contains detail information over the scene's different brightness ranges; the information is then fused to obtain an HDR image. The shooting process is not completed instantaneously, however, and a real scene may contain obviously moving objects when it is photographed; such a scene is called a dynamic scene. Once the scene changes during shooting, the final fused image exhibits blurred or semi-transparent content in the changed area, commonly referred to as "ghosting". For example, referring to FIGS. 2a-2c, FIGS. 2a, 2b and 2c show a high exposure image, a low exposure image, and a fused image exhibiting "ghosting", respectively.
However, since most outdoor scenes are dynamic scenes in which moving objects are difficult to avoid, how to obtain all the brightness information of a real-world dynamic scene is a problem to be solved.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a double-exposure video imaging system based on dynamic scenes. The technical problems to be solved by the invention are realized by the following technical scheme:
a dynamic scene-based dual exposure video imaging system, comprising:
the double-exposure image acquisition module is used for acquiring first sequence image data by controlling exposure time, wherein the first sequence image data comprises a plurality of frames of first exposure images, the exposure degrees of two adjacent first exposure images are respectively a first exposure degree and a second exposure degree, and the first exposure degree and the second exposure degree are unequal;
the image transmission module is connected with the double-exposure image acquisition module and is used for acquiring the first sequence image data, and performing fixed-interval frame extraction processing on the first sequence image data to obtain second sequence image data, wherein the second sequence image data comprises a plurality of frames of second exposure images, and the exposure degrees of two adjacent second exposure images are respectively a first exposure degree and a second exposure degree;
The image processing module is connected with the image transmission module and is used for acquiring the first sequence image data or the second sequence image data, and carrying out fusion processing on two adjacent first exposure images in the first sequence image data or two adjacent second exposure images in the second sequence image data to obtain a fused image sequence.
In one embodiment of the invention, the dual exposure image acquisition module includes a first processor and a visible light camera, wherein
The first processor is used for setting the exposure-time register in the visible light camera so that the camera exposes alternately with the first exposure time and the second exposure time;
the visible light camera is used for exposing according to the first exposure time and the second exposure time held in the register to obtain the first sequence image data.
In one embodiment of the present invention, the first processor is an FPGA.
In one embodiment of the present invention, the image transmission module includes a second processor, a first memory, a second memory, a GPS timing module, and a third processor, where the visible light camera, the first memory, the second memory, the GPS timing module, and the third processor are all connected to the second processor,
The second processor is used for receiving the first sequence image data and performing frame extraction processing on the first sequence image data to obtain second sequence image data;
the first memory is used for caching and forwarding the first sequence image data and the second sequence image data;
the second memory is used for storing the first sequence image data buffered and forwarded by the first memory;
the GPS time service module is used for time-stamping the plurality of frames of first exposure images;
the third processor is configured to receive the second sequence image data buffered and forwarded by the first memory and to compress and transmit it.
In one embodiment of the invention, the visible light camera and the second processor are connected by a CAMERALINK transmission cable.
In one embodiment of the present invention, the second processor is an FPGA and the third processor is an ARM.
In one embodiment of the present invention, the image processing module includes a PC host computer, the PC host computer being connected to the third processor, wherein:
the PC upper computer is used for decompressing the compressed first sequence image data or second sequence image data; processing two adjacent first exposure images, or two adjacent decompressed second exposure images, according to an adaptive threshold to obtain a binary image including the motion area; performing brightness balance processing on the two adjacent exposure images to obtain two frames of brightness-balanced images; combining the two adjacent exposure images with the binary image including the motion area to obtain two frames of basic weight images; and finally processing the two brightness-balanced images and the basic weight images according to the enhanced Laplacian pyramid to obtain a fused image.
In one embodiment of the present invention, the system further includes a wireless transmitting module and a wireless receiving module, where the third processor and the PC upper computer are wirelessly connected through the wireless transmitting module and the wireless receiving module.
In one embodiment of the present invention, the formula of the enhanced laplacian pyramid is:
L{F′}_d(x,y) = G_d(x,y)·L{F}_d(x,y)

where L{F′} is the enhanced Laplacian pyramid, G is the gain coefficient matrix tower, and L{F} is the fused Laplacian pyramid.
In one embodiment of the present invention, the calculation formula of the gain coefficient is:
G_d(x,y) = G_L + (G_H − G_L)·((D − d)/D)^γ·(1 − N_d(x,y))

where G is the gain coefficient matrix tower, G_L is the minimum gain coefficient, G_H is the maximum gain coefficient, D is the layer number of the highest layer of the fused Laplacian pyramid, d is the layer number of the fused Laplacian pyramid, γ is an adjustable parameter, and N_d(x,y) is the noise visibility at coordinate position (x,y) in the d-th layer of the fused Laplacian pyramid.
The invention has the beneficial effects that:
the double-exposure video imaging system provided by the invention can sequentially and alternately acquire high-exposure and low-exposure images, thereby solving the problems that existing HDR photographing or video systems cannot handle moving-object information in dynamic scenes, that fused images develop pseudo contours when the brightness information of differently exposed images differs too much, and that detail information of the fused image is lost.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIGS. 1 a-1 b are effect contrast diagrams of a high dynamic range scene imaging diagram and a scene diagram captured directly by a common camera according to an embodiment of the present invention;
FIGS. 2 a-2 c are schematic diagrams illustrating a phenomenon of "ghosting" generated by fusing a high exposure image and a low exposure image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a dual exposure video imaging system based on dynamic scenes according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another dynamic scene-based dual exposure video imaging system according to an embodiment of the present invention;
FIG. 5 is a schematic workflow diagram of a dual exposure image acquisition module according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a visible light camera according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a control box for a visible light camera according to an embodiment of the present invention;
FIGS. 8a-8b are a low exposure image and a high exposure image obtained by the dual exposure video imaging system provided by an embodiment of the present invention;
FIG. 9 is a schematic flow chart of a dual exposure image fusion method based on dynamic scene provided by the embodiment of the invention;
FIGS. 10a to 10b are graphs showing the effects of a low exposure image and a histogram equalization processed image according to an embodiment of the present invention;
FIGS. 11a to 11b are graphs showing the effects of a high exposure image and an image subjected to histogram equalization according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a differential image provided by an embodiment of the present invention;
FIG. 13 is a schematic diagram of a binary image according to an embodiment of the present invention;
FIGS. 14 a-14 b are schematic illustrations of a balanced image provided by embodiments of the present invention;
FIG. 15 is a schematic illustration of a fused image provided by an embodiment of the present invention;
FIGS. 16 a-16 c are diagrams illustrating a fused image obtained by a dynamic scene-based dual-exposure video imaging system according to embodiments of the present invention;
FIGS. 17 a-17 c are contrast graphs of fused images obtained by another dynamic scene-based dual-exposure video imaging system according to embodiments of the present invention;
fig. 18a to 18c are contrast diagrams of fused images obtained by a dynamic scene-based dual-exposure video imaging system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
At present, pu Yong Jie et al adopts a multi-exposure mode to acquire a high dynamic range video, needs to quickly fuse a plurality of low dynamic range images, proposes an improved quick multi-resolution tower-shaped decomposition fusion algorithm, and greatly reduces the calculated amount on the premise of not reducing the fusion quality of the original algorithm. The algorithm is improved in the aspects of pyramid decomposition convolution kernel, gaussian coefficient expansion interpolation method, laplace coefficient fusion rule, gaussian coefficient fusion rule and the like, and the tower decomposition, fusion and reconstruction processes are simplified. The method is applied to a carrier rocket video acquisition system, has strong algorithm stability, and reduces the calculated amount in the image fusion process. The system is a fusion algorithm for static scenes, and the algorithm can indistinguish images with moving objects in the fusion scenes, so that a 'ghost' phenomenon is generated in the fusion images. Meanwhile, the algorithm does not have a brightness balancing step, and if the brightness difference of the exposure images is too large, the fusion effect of the exposure images can generate artifacts, so that the quality of the images is seriously affected.
In addition, sun Yanyan et al devised a multi-exposure image fusion system for a camera moving scene, which can process multi-exposure images captured by a moving camera by combining two techniques of image registration and image fusion. The algorithm matching module adopts a SURF feature extraction algorithm to perform coarse matching, a RANSAC algorithm to perform fine matching, and then projection transformation is performed to correct the image; and then, fusing the registered images by adopting a pyramid fusion algorithm, and the experimental result proves that the system can effectively shoot the multi-exposure images shot in the moving scene of the camera and obtain good effects. However, the method is only aimed at a camera moving scene, if a moving object exists in the scene, the accuracy of the detection result of the SURF feature point detection algorithm may be affected, and thus the image cannot be corrected correctly, and the system can only shoot an HDR image due to the limitation of the image size, and cannot shoot an HDR video. Meanwhile, the pyramid fusion algorithm used fuses images, so that the images are subjected to a 'ghosting' phenomenon, and meanwhile, plaque artifacts exist in the fused images.
Therefore, for the above reasons, this embodiment provides a dual-exposure video imaging system based on dynamic scene, please refer to fig. 3, fig. 3 is a schematic diagram of a dual-exposure video imaging system based on dynamic scene provided in an embodiment of the present invention, where the dual-exposure video imaging system based on dynamic scene provided in the embodiment includes a dual-exposure image acquisition module, an image transmission module and an image processing module,
the double-exposure image acquisition module is used for acquiring first sequence image data by controlling exposure time, the first sequence image data comprises a plurality of frames of first exposure images, the exposure degrees of two adjacent first exposure images are respectively a first exposure degree and a second exposure degree, and the first exposure degree and the second exposure degree are unequal;
specifically, in order to obtain a video image with a high dynamic range, the image acquisition module of the present embodiment serves as a data input source of the entire system. When the visible light camera shoots a scene by using the double exposure mode, the visible light camera is firstly electrified and initialized, and the normal double exposure visible light camera working mode is entered. The exposure time of the visible light camera is actually the integration time of scene brightness, so that the integration time of the visible light camera is continuously adjusted in real time through a control channel, so that the visible light camera continuously outputs double exposure video data with alternating brightness, namely first sequence image data, the short exposure time of the first sequence image data can be adjusted according to scene brightness information, the first sequence image data comprises a plurality of frames of first exposure images which are arranged in sequence, the exposure degrees of two adjacent first exposure images are respectively the first exposure degree and the second exposure degree, namely, if the exposure degree of one frame of first exposure image is the first exposure degree, the exposure degree of the other frame of first exposure image is the second exposure degree, wherein the first exposure degree and the second exposure degree are unequal, for example, the first exposure degree is the high exposure image, the second exposure degree is the low exposure degree, the first exposure image corresponding to the second exposure degree is the low exposure image, for example, the first exposure degree is the low exposure degree, the first exposure degree corresponding to the second exposure degree is the low exposure image, the second exposure degree is the high exposure degree corresponding to the second exposure degree is the low exposure image, and the high exposure degree is the high exposure degree corresponding to the low exposure degree, and the high exposure degree is different from the high exposure degree.
To further explain the working principle of the dual-exposure image acquisition module, referring to fig. 4, the module may specifically include a first processor and a visible light camera, the first processor being connected to the visible light camera. The first processor is configured to make the exposure time held in a register of the visible light camera alternate between a first exposure time and a second exposure time, the two being unequal. That is, the first processor continuously adjusts the camera's integration-time register in real time through a control path: while the current frame is being output, if the next frame requires a long exposure the processor sets the register to the long exposure time, and if the next frame requires a short exposure it sets the register to the short exposure time. For example, the first exposure time may correspond to the long exposure time and the second to the short exposure time, or vice versa; the specific values of the two exposure times are not limited here.
To better explain the dual-exposure image acquisition module of this embodiment, the first processor is taken to be an FPGA. The specific output process of the double-exposure image may be as follows: when the visible light camera is powered on, the FPGA performs initialization configuration of the camera over the IIC bus; after the configuration succeeds, the camera enters the normal working state, and the FPGA can receive the image data sent by the camera. While the camera is in the normal working state, the FPGA modifies the register controlling the exposure time in the camera through the IIC bus. Referring to fig. 5 and 6, the actual working procedure is as follows:
1) First, assume that the n-th frame image output by the visible light camera is a dark image (i.e. a low exposure image). During the period in which the camera outputs the blanking region of the n-th frame, the value of the register controlling the camera's exposure time is changed to the long exposure parameter (i.e. the long exposure time);
2) The (n+1)-th frame image is exposed while the camera outputs the n-th frame image. The value of the exposure time register is read before exposure; because the register has been modified to the long exposure parameter, the integration time of the image being exposed is lengthened, i.e. the shutter stays open longer;
3) During the period in which the camera outputs the blanking region of the (n+1)-th frame, the register controlling the exposure time is modified to the short exposure parameter (i.e. the short exposure time); the (n+1)-th frame image output by the camera is a bright image (i.e. a high exposure image);
4) The (n+2)-th frame image is exposed while the camera outputs the (n+1)-th frame image. The value of the exposure time register is read before exposure; because the register has been modified to the short exposure parameter, the integration time of the image being exposed is shortened. These steps are then executed cyclically and alternately, and the visible light camera outputs images of alternating brightness, as sketched below.
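For illustration, the alternating register-update logic of steps 1) to 4) can be expressed in Python driver-style pseudocode. This is a minimal sketch, not the patent's FPGA implementation: the camera object, its write_exposure_register() and read_frame() methods, and the microsecond values are assumptions made for the sketch.

```python
LONG_EXPOSURE_US = 20000    # long (high) exposure time, assumed value
SHORT_EXPOSURE_US = 1000    # short (low) exposure time, assumed value

def run_dual_exposure(camera, num_frames):
    """Reprogram the exposure register during each frame's blanking interval
    so the camera outputs a sequence of alternating bright/dark frames."""
    frames = []
    next_exposure = LONG_EXPOSURE_US
    for n in range(num_frames):
        # While frame n is being output (its blanking region), program the
        # exposure time that frame n+1 will integrate with.
        camera.write_exposure_register(next_exposure)
        frames.append(camera.read_frame())
        # Toggle between the long and short exposure parameters.
        next_exposure = (SHORT_EXPOSURE_US
                         if next_exposure == LONG_EXPOSURE_US
                         else LONG_EXPOSURE_US)
    return frames
```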
Preferably, the first processor is a chip with processing capability, such as an FPGA, which may be of model EP4CE6F17C8.
Table 1 lists the performance parameters satisfied by the hardware portion of the visible light camera system and the exposure parameters used in scene shooting.
Table 1 Visible light camera parameters
The image transmission module is connected with the double-exposure image acquisition module and is used for acquiring first sequence image data, and performing fixed-interval frame extraction processing on the first sequence image data to obtain second sequence image data, wherein the second sequence image data comprises a plurality of frames of second exposure images, and the exposure degrees of two adjacent second exposure images are respectively the first exposure degree and the second exposure degree;
Specifically, after the double-exposure image acquisition module acquires the first sequence image data, the data are transmitted to the image transmission module. Because the visible light camera shoots on the order of a hundred frames per second, the data volume is huge, which increases the difficulty of real-time transmission and image processing; the image transmission module therefore performs fixed-interval frame extraction on the first sequence image data to obtain the second sequence image data, as sketched below.
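A minimal sketch of the fixed-interval frame extraction, assuming the first sequence arrives as a list of frames ordered low/high/low/high and that the (even) extraction interval is a free parameter; the patent text does not fix its value.

```python
def extract_frames(first_sequence, interval=4):
    """Keep one adjacent low/high exposure pair every `interval` frames, so
    the second sequence still alternates between the two exposure degrees."""
    second_sequence = []
    for i in range(0, len(first_sequence) - 1, interval):
        second_sequence.extend(first_sequence[i:i + 2])  # adjacent pair
    return second_sequence
```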
Further, referring to fig. 4, the image transmission module may specifically include a second processor, a first memory, a second memory, a GPS timing module, and a third processor, with the visible light camera, the first memory, the second memory, the GPS timing module, and the third processor all connected to the second processor. The double-exposure image acquisition module is connected to the image transmission module (the embedded system) by a CAMERALINK transmission cable, and the embedded system receives the serial video data after double-exposure processing, where:
the second processor is used for receiving the first sequence image data and performing frame extraction processing on it to obtain the second sequence image data. The first memory is used for buffering and forwarding the first sequence image data and the second sequence image data: the first sequence image data buffered by the first memory is transferred to the second memory, and the first memory also forwards the second sequence image data to the third processor. The second memory is used for storing the first sequence image data without compression, so that the first sequence image data can later be recorded, deleted, downloaded and so on; for example, when the PC upper computer needs image data stored in the second memory, it can obtain the required data from the second memory through the first memory via a gigabit network port. The GPS time service module is used for time-stamping the plurality of frames of first exposure images. The third processor is used for receiving the second sequence image data and compressing it so that it can be transmitted to the image processing module in time: upon receiving the second sequence image data, the third processor compresses it and transmits it to the image processing module in real time.
Preferably, the second processor is an FPGA with processing capability, for example of model XC6SLX16; the communication interface may be, for example, a gigabit network port or USB 3.0; the third processor is an ARM with processing capability, for example a HiSilicon ARM with compression support; the first memory is DDR3 SDRAM, and the second memory is eMMC.
Referring to fig. 7, the embedded system end adopts an FPGA and an ARM as the main control module of the system; a deserializer parallelizes the serial data transmitted by the double-exposure image acquisition module, and the FPGA processes it. The FPGA module stores and forwards the acquired data using the DDR3 SDRAM and eMMC storage modules, uses the GPS timing module to precisely time-stamp each captured image, and uses the ARM end to process and respond to instructions sent by the remote upper computer over the network; the image data compressed in real time by the ARM is sent to the upper-computer software over the TCP/IP network protocol. The FPGA of the embedded system is connected with the ARM through an FFC cable, and the ARM receives the image data transmitted by the FPGA.
The image processing module is connected with the image transmission module and is used for acquiring the first sequence image data or the second sequence image data and fusing two adjacent first exposure images in the first sequence image data, or two adjacent second exposure images in the second sequence image data, to obtain a fused image sequence. That is, for real-time display the image processing module can acquire the second sequence image data in real time and fuse the exposure images it contains; it can also obtain the first sequence image data directly from the second memory and fuse the exposure images contained there.
Specifically, the image processing module includes a PC upper computer connected to the third processor. Fusion software installed on the PC upper computer fuses the first sequence exposure images from the second memory, or the second sequence image data transmitted by the ARM, in the following manner: decompress the compressed second sequence image data; process two adjacent first exposure images, or two adjacent decompressed second exposure images, according to the adaptive threshold to obtain a binary image including the motion area; perform brightness balance processing on the two adjacent exposure images to obtain two frames of brightness-balanced images; combine the two adjacent exposure images with the binary image including the motion area to obtain two frames of basic weight images; and finally process the two brightness-balanced images and the basic weight images according to the enhanced Laplacian pyramid to obtain a fused image.
Considering the portability and operability of the double-exposure video imaging system, in this embodiment the embedded system end and the computer end are connected through the wireless transmitting module and the wireless receiving module, for example over Wi-Fi. The embedded system end extracts frames from the collected image data, compresses them, and transmits them to the PC upper computer; through the PC upper computer, remote control of high-dynamic-range scene shooting and real-time data display can be achieved. The software developed for the PC displays the image frames of different exposure times in two separate video controls, which makes it convenient to adjust the aperture parameters of the visible light camera so that the system captures all the information of scene regions with different brightness ranges, helping to capture high-dynamic-range scenes and their detail. Because the PC upper computer and the embedded main control module are connected by wireless signals, an operator can remotely control the imaging equipment from a location far from the shooting scene, which greatly improves the portability and operability of the system. Two frames of a video shot by the double-exposure visible light camera are shown in figs. 8a and 8b, which are a low exposure image and a high exposure image respectively.
To better explain how the software in the PC upper computer fuses two adjacent first exposure images in the first sequence image data, or two adjacent second exposure images in the second sequence image data, this embodiment describes the process in detail:
the PC host computer of this embodiment firstly obtains first sequence image data or second sequence image data, where there are high exposure image and low exposure image in the first sequence image data or the second sequence image data, and processes the high exposure image and the low exposure image that need to be fused, and the method is applicable to all the high exposure image and the low exposure image that need to be fused in the video image sequence, where the fusion method of two adjacent frames of first exposure image in the first sequence image data and two adjacent frames of second exposure image in the second sequence image data is the same, for convenience of understanding, in this embodiment, two adjacent frames of second exposure image in the second sequence image data are taken as examples, for convenience of distinguishing, one frame of second exposure image is taken as a first sub exposure image, and the other frame of second exposure image is taken as a second sub exposure image, where the first sub exposure image and the second sub exposure image are images with different exposure degrees, for example, the first sub exposure image is the high exposure image, the second sub exposure image is the low exposure image, and for convenience of distinguishing.
Referring to fig. 9, fig. 9 is a flow chart of the dual-exposure image fusion method based on a dynamic scene according to an embodiment of the present invention. Based on the above embodiment, this embodiment specifically describes the method:
Step 1, processing the decompressed adjacent two second exposure images according to the adaptive threshold to obtain a binary image including the motion area;
step 101, respectively carrying out histogram equalization processing on a first sub-exposure image and a second sub-exposure image to correspondingly obtain a third sub-exposure image and a fourth sub-exposure image;
First, because the exposure degrees of the first and second sub-exposure images differ, their brightness is inconsistent. The two images are therefore preprocessed so that, after differencing, the static and dynamic areas can be screened out correctly: histogram equalization is applied to both images so that their brightness agrees. After histogram equalization the first sub-exposure image becomes the third sub-exposure image and the second sub-exposure image becomes the fourth sub-exposure image, and the brightness of the third and fourth sub-exposure images is the same. The histogram equalization formula can be expressed as:
I′_k = Histeq(I_k)   (1)

where I_k is the original image at one of the exposure degrees and k is the serial number of the exposure image; in this embodiment k is 1 or 2, with 1 corresponding to the first sub-exposure image and 2 to the second sub-exposure image. Histeq(·) is the histogram equalization transformation, and I′_k is the corresponding histogram-equalized image.
For example, referring to fig. 10a to 10b and 11a to 11b, fig. 10a is a low exposure image, fig. 10b is a low exposure image after histogram equalization processing, fig. 11a is a high exposure image corresponding to fig. 10a, and fig. 11b is a high exposure image after histogram equalization processing, whereby it can be seen that the low exposure image and the high exposure image become uniform in brightness after histogram equalization processing.
Step 102, calculating pixel difference values between the third sub-exposure image and the fourth sub-exposure image by using a frame difference method to obtain a difference image;
in this embodiment, a frame difference method is used to perform a difference process on the third sub-exposure image and the fourth sub-exposure image, and a pixel difference value between the third sub-exposure image and the fourth sub-exposure image is calculated, so as to obtain a difference image, where the calculation formula of the frame difference method is as follows:
ΔI′ = |I′_1 − I′_2|   (2)

where ΔI′ is the differential image, I′_1 is the third sub-exposure image, and I′_2 is the fourth sub-exposure image.
For example, referring to fig. 12, fig. 12 is a differential image obtained according to fig. 10b and 11 b.
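Steps 101-102 can be sketched with OpenCV as follows; `low` and `high` stand for the two adjacent single-channel 8-bit sub-exposure images, and the function name is illustrative rather than the patent's own code.

```python
# A minimal sketch of equations (1) and (2): histogram equalization followed
# by an absolute frame difference.
import cv2

def difference_image(low, high):
    eq_low = cv2.equalizeHist(low)        # equation (1): I'_1
    eq_high = cv2.equalizeHist(high)      # equation (1): I'_2
    return cv2.absdiff(eq_low, eq_high)   # equation (2): |I'_1 - I'_2|
```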
Step 103, performing threshold segmentation processing on the differential image according to the adaptive threshold to obtain an initial binary image.
After the differential image is obtained, a suitable threshold must be selected to segment it. This embodiment performs threshold segmentation on the differential image with an adaptive threshold to obtain an initial binary image: the part above the adaptive threshold is regarded as the motion area, and the part below it as the static area. The binary image is calculated as follows:
M(x,y) = 1 if ΔI′(x,y) > T, and M(x,y) = 0 otherwise   (3)
wherein M is a binary image after threshold segmentation, and T is an adaptive threshold.
To better perform threshold segmentation on the differential image, this embodiment provides a method for determining the adaptive threshold, comprising the following steps:
Step 1031, obtaining a threshold array according to the percentage of the differential image's total pixels occupied by the static areas of the first and second sub-exposure images, the width of the differential image in pixels, and the height of the differential image in pixels;
By observing the histogram of the differential image, it is found that most of its pixels are concentrated near gray value 0; that is, most pixel differences between the third and fourth sub-exposure images are close to 0, and these pixels correspond to the static region. Assume that the percentage of static-area pixels in the total number of image pixels is fixed, and denote it by P. Pixel counts are accumulated starting from the pixels of the differential image with gray value 0 until the critical value of the total number of static-area pixels is reached, which gives the threshold array:

T_set = { t | Σ_{i=0}^{t} n_i ≥ P·W·H, 0 ≤ t ≤ 255 }   (4)

where T_set is the threshold array, n_i is the number of pixels with gray value i in the differential image, P is the percentage of differential-image pixels occupied by the static areas of the first and second sub-exposure images, W and H are the width and height of the differential image in pixels, and t, an integer between 0 and 255, is a threshold satisfying the condition.
In practice, the minimum value of the threshold array is the required threshold. However, if this minimum is too close to gray value 0, the probability that a motion region exists in the differential image is small; this embodiment therefore sets another threshold so that motion-region pixels are not detected in a static image. The adaptive threshold is calculated as:

T = max(min(T_set), mt)   (5)

where T is the adaptive threshold, min(·) is the minimum value of the threshold array, and mt is a predetermined threshold.
The predetermined threshold mt is the additional threshold for separating static-area pixels provided by this embodiment and can generally be taken as 18; that is, when the computed adaptive threshold has a gray value smaller than 18, the two frames of first and second sub-exposure images are considered to contain no motion area. The threshold can be adjusted for different scene conditions. After the adaptive threshold is determined, the differential image can be segmented with equation (3) to obtain the initial binary image, as sketched below.
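The adaptive-threshold selection of equations (4)-(5) and the segmentation of equation (3) might be sketched as follows. The static-area percentage P is not specified numerically in the text, so the default below is an assumption, while mt = 18 follows the value suggested above.

```python
# A sketch of equations (3)-(5); `delta` is the 8-bit differential image.
import numpy as np

def adaptive_threshold(delta, p=0.9, mt=18):
    h, w = delta.shape
    hist = np.bincount(delta.ravel(), minlength=256)  # n_i for i = 0..255
    cumulative = np.cumsum(hist)
    # Threshold array T_set: gray levels whose cumulative count reaches the
    # static-area pixel budget P * W * H (equation (4)).
    candidates = np.nonzero(cumulative >= p * w * h)[0]
    t_min = int(candidates[0]) if candidates.size else 255
    return max(t_min, mt)                             # equation (5)

def segment_motion(delta, threshold):
    return (delta > threshold).astype(np.uint8)       # equation (3)
```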
Step 104, performing morphological dilation and erosion on the initial binary image to obtain a binary image including the motion area;
The initial binary image obtained in Step 103 still contains holes and isolated noise points, so morphological erosion and dilation are applied: isolated noise areas and hole areas are eroded away, while the pixels at the edge of the motion area are dilated, so that the determined motion area completely contains the motion-area pixels. The erosion and dilation are calculated as follows:
M′ = (M ⊖ B₁) ⊕ B₂   (6)

where ⊖ denotes the erosion operation, ⊕ denotes the dilation operation, B₁ is the erosion filter template (for example, a circle with a radius of 4 pixels), B₂ is the dilation filter template (for example, a circle with a radius of 20 pixels), and M′ is the binary image including the motion region after the erosion and dilation operations.
For example, referring to fig. 13, fig. 13 is the binary image including the motion region obtained after the erosion and dilation operations, derived from the differential image of fig. 12.
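A minimal OpenCV sketch of the erosion-then-dilation of equation (6), using elliptical structuring elements sized to approximate the radius-4 and radius-20 circular templates mentioned above.

```python
import cv2

def clean_motion_mask(binary):
    b1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))    # ~radius 4
    b2 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (41, 41))  # ~radius 20
    eroded = cv2.erode(binary, b1)   # remove isolated noise points and holes
    return cv2.dilate(eroded, b2)    # grow the motion region past its edges
```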
Step 2, carrying out brightness balance processing on two adjacent second exposure images to correspondingly obtain two frames of brightness balance images, wherein a first sub-exposure image is set to correspond to the first brightness balance image, and a second sub-exposure image corresponds to the second brightness balance image;
step 201, filtering illuminance components of the first sub-exposure image and the second sub-exposure image respectively by using a Retinex theory, and correspondingly obtaining a fifth sub-exposure image and a sixth sub-exposure image;
the Retinex image enhancement algorithm considers that the image is composed of an illumination component and a reflection component, the illumination component of the image reflects the brightness of the whole image, and the reflection component reflects the original face information of the image scene. Based on this, in this embodiment, the original information of the first sub-exposure image and the second sub-exposure image is restored by filtering the illuminance component information of the first sub-exposure image and the second sub-exposure image and retaining the reflection component information of the first sub-exposure image and the second sub-exposure image, and the Retinex model may be expressed as:
S_k(x,y) = L_k(x,y)·R_k(x,y)   (7)

where S_k is the k-th original image (an original image being one of the images to be fused in the video sequence); in this embodiment S_1 is the first sub-exposure image and S_2 the second sub-exposure image; (x,y) is the coordinate position of a pixel; L_k is the illuminance component of the k-th original image; and R_k is its reflection component. For a gray-scale image, in the single-scale case the reflection component of the k-th original image can be expressed as:
R′_k(x,y) = log S_k(x,y) − log[F(x,y) * S_k(x,y)]   (8)

where R′_k is the reflection component of the k-th original image and F(x,y) is a center-surround function, typically a Gaussian, which can be expressed as:

F(x,y) = K·exp(−(x² + y²)/σ²)   (9)

where σ is the standard deviation and K is a normalization constant.
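A single-scale Retinex sketch of equations (8)-(9); the Gaussian blur plays the role of F(x,y) * S(x,y). The σ value and the epsilon guarding the logarithm are assumptions.

```python
import cv2
import numpy as np

def reflection_component(image, sigma=80.0, eps=1.0):
    img = image.astype(np.float64) + eps
    # Gaussian blur estimates the illuminance term F(x, y) * S(x, y).
    illuminance = cv2.GaussianBlur(img, (0, 0), sigma)
    return np.log(img) - np.log(illuminance)   # equation (8): R'_k
```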
Step 202, respectively processing the fifth sub-exposure image and the sixth sub-exposure image according to a brightness mapping model to correspondingly obtain a first brightness balance image and a second brightness balance image;
Directly stretching the dynamic range of the reflection component R′_k by exponential transformation cannot guarantee that the brightness of the fifth and sixth sub-exposure images lies in the same range, and therefore cannot balance their brightness. A luminance mapping range is therefore defined for the fifth and sixth sub-exposure images so that their brightness is balanced. First, the maximum and minimum values of the mapping for the fifth and sixth sub-exposure images are respectively:
R_max = ave + α·V,  R_min = ave − β·V   (10)

where ave is the mean of the luminance averages of the reflection components R′_k, α and β are adjustable parameters usually taken as α = β = 3, and V is the maximum of the variances of the fifth and sixth sub-exposure images. The specific calculation formulas of ave and V are:

ave = Mean(Mean(R′_1), Mean(R′_2)),  V = max(Var(R′_1), Var(R′_2))   (11)

where Mean(·) is the matrix average, Var(·) is the matrix variance, and max(·) takes the maximum of the array. The luminance mapping model is therefore:
Ŝ_k(x,y) = ((R′_k(x,y) − min(R′_k)) / (max(R′_k) − min(R′_k)))·(R_max − R_min) + R_min   (12)

where Ŝ_k is the result after luminance balancing of the k-th original image; in this embodiment Ŝ_1 is the first brightness-balanced image obtained from the fifth sub-exposure image, and Ŝ_2 is the second brightness-balanced image obtained from the sixth sub-exposure image.
For example, please refer to fig. 14 a-14 b, wherein fig. 14a is a balanced image corresponding to fig. 10a after the brightness balancing process, and fig. 14b is a balanced image corresponding to fig. 11a after the brightness balancing process.
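A sketch of the brightness balancing of equations (10)-(12), assuming (as in the reconstruction above, since the original equations survive only as images) that each reflection component is linearly stretched into the common range [ave − βV, ave + αV].

```python
import numpy as np

def balance_luminance(r1, r2, alpha=3.0, beta=3.0):
    ave = np.mean([r1.mean(), r2.mean()])             # equation (11)
    v = max(r1.var(), r2.var())
    r_max, r_min = ave + alpha * v, ave - beta * v    # equation (10)
    def stretch(r):
        # Linear mapping of r into the common range [r_min, r_max].
        return (r - r.min()) / (r.max() - r.min()) * (r_max - r_min) + r_min
    return stretch(r1), stretch(r2)                   # equation (12)
```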
Step 3, combining two adjacent second exposure images with the binary image comprising the motion area respectively to obtain two frames of basic weight images, wherein a first sub-exposure image is set to correspond to the first basic weight image, and a second sub-exposure image is set to correspond to the second basic weight image;
step 301, respectively combining the first sub-exposure image and the second sub-exposure image with the motion area to obtain a first basic weight image and a second basic weight image;
Step 3011, obtaining a first initial weight image according to the image contrast and the exposure moderation degree of the first sub-exposure image, and obtaining a second initial weight image according to the image contrast and the exposure moderation degree of the second sub-exposure image;
for single-channel gray scale images, no consideration is required for the saturation information of the images. The initial weighted image of the image is thus obtained by multiplying the following two factors:
Figure BDA0002148475630000205
wherein C is k,x,y For the kth original image I k Image contrast, E, at (x, y) coordinates k,x,y For the kth original image I k Exposure moderation at (x, y) coordinates, w c And w is equal to e As a weight index, generally w c =w e =1,W k,x,y For the pixel value of the kth Zhang Chushi weighted image at coordinates (x, y), then W 1,x,y For the pixel value, W, of the first initial weight image at coordinates (x, y) 2,x,y The pixel values at coordinates (x, y) for the second initial weight image.
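The text does not spell out how the contrast and exposure-moderation factors of equation (13) are computed; a common choice (assumed here) is the Laplacian-filter magnitude for contrast and a Gaussian well-exposedness curve centered at mid-gray.

```python
import cv2
import numpy as np

def initial_weight(image, w_c=1.0, w_e=1.0):
    img = image.astype(np.float64) / 255.0
    contrast = np.abs(cv2.Laplacian(img, cv2.CV_64F))            # C_k
    exposedness = np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))   # E_k
    return (contrast ** w_c) * (exposedness ** w_e)              # eq. (13)
```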
Step 3012, combining the first initial weight image and the motion area and performing normalization processing to obtain a first basic weight image, and combining the second initial weight image and the motion area and performing normalization processing to obtain a second basic weight image;
in order to make the weighted image of the ' ghost ' region have no abrupt change, the M ' after being subjected to Gaussian filtering is determined as a final motion region mask. Therefore, the initial weight image and the binary image containing the motion area need to be combined, then the basic weight image can be obtained after normalization, and the calculation formula of the combination of the initial weight image and the binary image containing the motion area is as follows:
W′_{k,x,y} = W_{k,x,y} if k = ref, and W′_{k,x,y} = W_{k,x,y}·(1 − M′_{x,y}) otherwise   (14)

where W′_{k,x,y} is the pixel value at coordinates (x,y) of the k-th initial weight image combined with the binary image (so W′_{1,x,y} corresponds to the first initial weight image and W′_{2,x,y} to the second), and ref is the serial number of the reference image, 1 or 2 in this embodiment; the motion state of the fused image should remain consistent with that of the selected reference image. W′_{k,x,y} is then normalized as follows:

Ŵ_{k,x,y} = W′_{k,x,y} / (W′_{1,x,y} + W′_{2,x,y})   (15)

where Ŵ_{k,x,y} is the pixel value of the k-th basic weight map at coordinates (x,y); thus Ŵ_{1,x,y} is the pixel value of the first basic weight image at (x,y) and Ŵ_{2,x,y} that of the second basic weight image.
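Equations (14)-(15) might be sketched as follows, with the Gaussian-filtered motion mask suppressing the non-reference weights inside the motion region; the exact combination rule survives only as an image in the original, so this follows the reconstruction above, and the filter size is an assumption.

```python
import cv2
import numpy as np

def combine_weights(w1, w2, motion_mask, ref=1):
    # Gaussian-filtered mask M' avoids abrupt changes in the weight maps.
    soft = cv2.GaussianBlur(motion_mask.astype(np.float64), (21, 21), 0)
    if ref == 1:
        w2 = w2 * (1.0 - soft)     # keep only the reference's motion state
    else:
        w1 = w1 * (1.0 - soft)
    total = w1 + w2 + 1e-12        # guard against division by zero
    return w1 / total, w2 / total  # equation (15): normalized basic weights
```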
Step 4, processing the two frames of brightness balance images and the basic weight image according to the enhanced Laplacian pyramid to obtain a fused image;
step 4.1, carrying out Laplacian pyramid transformation on the first brightness balance image to obtain a first Laplacian pyramid, and carrying out Laplacian pyramid transformation on the second brightness balance image to obtain a second Laplacian pyramid;
Step 4.2, carrying out Gaussian pyramid transformation on the first basic weight image to obtain a first weight image Gaussian pyramid, and carrying out Gaussian pyramid transformation on the second basic weight image to obtain a second weight image Gaussian pyramid;
step 4.3, obtaining a fused image according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid;
and 4.31, obtaining a fused Laplacian pyramid according to the first Laplacian pyramid, the second Laplacian pyramid, the first weight map Gaussian pyramid and the second weight map Gaussian pyramid, wherein the calculation formula of the Laplacian pyramid is as follows:
L{F}_d(x,y) = Σ_{k=1}^{2} G{Ŵ}_{k,d}(x,y)·L{Ŝ}_{k,d}(x,y)   (16)

where L{Ŝ}_{k,d}(x,y) is the pixel value at coordinate (x,y) of the d-th layer of the Laplacian pyramid corresponding to the k-th brightness-balanced image, G{Ŵ}_{k,d}(x,y) is the pixel value at coordinate (x,y) of the d-th layer of the Gaussian pyramid corresponding to the k-th weight image, and L{F}_d is the fused Laplacian pyramid.
Step 4.32, obtaining an enhanced Laplacian pyramid according to the gain coefficient matrix and the fused Laplacian pyramid;
Because of the fusion of the Laplacian pyramids and the excessive filtering of the illuminance component, the fused image obtained in step 4.31 loses part of its detail information. To address this, this embodiment restores the detail information of the image by applying detail enhancement to the fused Laplacian pyramid, while also reducing the image's noise information. Since the human visual system is more sensitive to the high-frequency information of an image, the gain coefficient should decrease as the layer number of the Laplacian pyramid increases. The calculation formula of the gain coefficient is as follows:
$$G^{d}_{x,y} = G_L + (G_H - G_L)\left(1 - \frac{d}{D}\right)^{\gamma}\left(1 - V^{d}_{x,y}\right)$$

where G is the gain coefficient matrix pyramid, G_L is the minimum gain coefficient, G_H is the maximum gain coefficient, and D is the layer number of the highest layer of the fused Laplacian pyramid; this example takes D = log2(min(H, W)) − log2(min(H, W)/2). d is the layer number within the fused Laplacian pyramid, and γ is an adjustable parameter, taken as 0.5 in this example. $V^{d}_{x,y}$ is the noise visibility at coordinate position (x, y) in the d-th layer of the fused Laplacian pyramid, calculated as:

$$V^{d}_{x,y} = \frac{1}{1 + \omega \cdot e_d(x, y)}$$

where e_d(x, y) is the entropy of the local image (for example, a 3×3 neighborhood) at coordinate position (x, y) in the d-th layer of the fused Laplacian pyramid, and ω is an adjustable parameter, typically taken as 1. In general, the more detail a local image contains, the lower its noise visibility; conversely, the higher the noise visibility, the less detail the local image contains.
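Since the original gain-formula images were not preserved in this text, the Python sketch below encodes only the stated constraints (gain bounded by G_L and G_H, decaying with layer number d, and suppressed where noise visibility is high); the exact combination of terms, the entropy binning, and the helper names are assumptions:

```python
import numpy as np
from scipy.ndimage import generic_filter

def local_entropy(layer, size=3, bins=16):
    # Entropy e_d(x, y) of each size x size neighborhood (3x3 per the text).
    # generic_filter is slow but keeps the sketch self-contained.
    def entropy(values):
        hist, _ = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return generic_filter(layer.astype(np.float32), entropy, size=size)

def gain_matrix(layer, d, D, g_low=1.0, g_high=2.0, gamma=0.5, omega=1.0):
    # Noise visibility V: high in flat (low-entropy) regions, low where detail exists.
    v = 1.0 / (1.0 + omega * local_entropy(layer))
    layer_term = (1.0 - d / D) ** gamma  # gain decays as the layer number grows
    return g_low + (g_high - g_low) * layer_term * (1.0 - v)
```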
Finally, the gain coefficient matrix is multiplied element-wise with the corresponding layer of the fused Laplacian pyramid to obtain the enhanced Laplacian pyramid, calculated as:

$$L\{F'\}^{d}_{x,y} = G^{d}_{x,y} \cdot L\{F\}^{d}_{x,y}$$

where $L\{F'\}^{d}_{x,y}$ is the pixel value at coordinate position (x, y) of the d-th layer of the enhanced Laplacian pyramid.
Step 4.33, inversely transforming the enhanced Laplacian pyramid to form the fused image.
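A matching sketch of steps 4.32 and 4.33, applying the per-layer gains and collapsing the pyramid, is shown below; the 8-bit output clipping is an assumption:

```python
import cv2
import numpy as np

def enhance_and_collapse(fused_lp, gains):
    # Step 4.32: multiply each fused layer by its gain matrix.
    enhanced = [g * l for g, l in zip(gains, fused_lp)]
    # Step 4.33: inverse pyramid transform, coarsest layer first.
    img = enhanced[-1]
    for d in range(len(enhanced) - 2, -1, -1):
        img = cv2.pyrUp(img, dstsize=(enhanced[d].shape[1], enhanced[d].shape[0]))
        img = img + enhanced[d]
    return np.clip(img, 0, 255).astype(np.uint8)
```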
For example, see fig. 15, which shows the fused image of fig. 10a and fig. 11a.
The invention achieves effective ghost removal and fusion of two exposure frames. The processing result effectively recovers the detail information of the over-exposed and under-exposed regions of the images, while the brightness balance algorithm of this embodiment avoids the blotch and artifact phenomena seen in conventional fused images. Because only two exposure frames are fused, the method of this embodiment preserves the temporal resolution of image capture to the greatest extent, laying a solid foundation for implementing the method on a hardware platform and acquiring high dynamic range video in real time.
To give the visible light camera a continuous two-frame high/low alternating exposure mode (double exposure mode), the dynamic-scene double-exposure video imaging system provided by the invention adopts an embedded system based on an FPGA architecture to sample high dynamic range dynamic scene information. The FPGA of the double-exposure image acquisition module controls the camera's exposure time by rewriting the camera's register configuration in real time, adjusting the exposure time of every frame so that scene information is imaged with continuously alternating exposure times. This yields double-exposure video data with alternating brightness, allowing the camera to continuously sample a dynamic scene in high/low alternating exposure mode. The video stream acquired by the double-exposure camera is compressed by the ARM and transmitted to the PC host for real-time display; the developed software displays the differently exposed images separately, so the user can select the optimal aperture size for a specific scene.

The dynamic-scene ghost-removal algorithm synthesizes a video sequence with alternating high and low exposures into a high dynamic range video, solving two problems of traditional exposure-fusion systems: the inability to handle dynamic scenes and the difficulty of balancing the brightness of high- and low-exposure frames. Fusing only two exposure frames preserves the temporal resolution of the video to the greatest extent, and the detail-enhancement-based multi-exposure fusion algorithm lets all information in over-bright and over-dark regions of the scene appear clearly in the generated video. For example, in landscape photography there is often a huge contrast between the sky and the land; the double-exposure video imaging system of the invention captures the detail of both simultaneously, producing a more vivid result.

Figs. 16a to 16c, 17a to 17c, and 18a to 18c show video frames shot by the double-exposure camera together with the results of the fusion algorithm: figs. 16a, 17a and 18a are low-exposure images, figs. 16b, 17b and 18b are high-exposure images, and figs. 16c, 17c and 18c are the fused images. These three sets of results show that the double-exposure video imaging system of this embodiment fuses the captured images with high quality: the results recover the detail of the low- and high-exposure regions well, and the ghosting produced by motion regions between the two frames is eliminated, so the fusion result retains the motion state of only one frame. In summary, the present double-exposure video imaging system is an effective system for acquiring high-quality HDR video of dynamic scenes.
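The per-frame alternation can be pictured with the following host-side sketch; the camera API (set_exposure, grab_frame) is hypothetical, since on the actual system the FPGA performs this register rewrite in hardware:

```python
def double_exposure_stream(camera, t_low, t_high):
    # Alternate the exposure setting before every frame grab, yielding a
    # video stream whose frames alternate between low and high exposure.
    use_high = False
    while True:
        camera.set_exposure(t_high if use_high else t_low)  # hypothetical API
        frame = camera.grab_frame()                          # hypothetical API
        yield ("high" if use_high else "low", frame)
        use_high = not use_high
```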
In the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present specification, a description referring to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Further, those skilled in the art may join and combine the different embodiments or examples described in this specification.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (7)

1. A dynamic scene-based dual exposure video imaging system, comprising:
the double-exposure image acquisition module is used for acquiring first sequence image data by controlling exposure time, wherein the first sequence image data comprises a plurality of frames of first exposure images, the exposure degrees of two adjacent first exposure images are respectively a first exposure degree and a second exposure degree, and the first exposure degree and the second exposure degree are unequal;
the image transmission module is connected with the double-exposure image acquisition module and is used for acquiring the first sequence image data, and performing fixed-interval frame extraction processing on the first sequence image data to obtain second sequence image data, wherein the second sequence image data comprises a plurality of frames of second exposure images, and the exposure degrees of two adjacent second exposure images are respectively a first exposure degree and a second exposure degree;
The image processing module is connected with the image transmission module and is used for acquiring the first sequence image data or the second sequence image data, and carrying out fusion processing on two adjacent first exposure images in the first sequence image data or two adjacent second exposure images in the second sequence image data, so as to finally acquire a fused image sequence;
the double exposure image acquisition module comprises a first processor and a visible light camera, wherein:
the first processor is used for configuring a register in the visible light camera to control the exposure time, so that exposures alternate between the first exposure time and the second exposure time;
the visible light camera is used for exposing according to the first exposure time and the second exposure time of the register to obtain the first sequence image data;
the image transmission module comprises a second processor, a first memory, a second memory, a GPS timing module and a third processor, wherein the visible light camera, the first memory, the second memory, the GPS timing module and the third processor are all connected with the second processor, and wherein:
the second processor is used for receiving the first sequence image data and performing frame extraction processing on the first sequence image data to obtain second sequence image data;
The first memory is used for caching and forwarding the first sequence image data and the second sequence image data;
the second memory is used for storing the first sequence image data buffered and forwarded by the first memory;
the GPS timing module is used for time-stamping the plurality of frames of first exposure images;
the third processor is configured to receive the second sequence image data buffered and forwarded by the first memory, and perform compression transmission processing on the second sequence image data;
the image processing module comprises a PC upper computer, and the PC upper computer is connected with the third processor, wherein:
the PC upper computer is used for decompressing the compressed second sequence image data, processing two adjacent first exposure images or two adjacent decompressed second exposure images according to an adaptive threshold to obtain a binary image comprising a motion area, performing brightness balance processing on the two adjacent first exposure images or the two adjacent second exposure images to obtain two frames of brightness balance images, combining the two adjacent first exposure images or the two adjacent second exposure images with the binary image comprising the motion area respectively to obtain two frames of basic weight images, and finally processing the two frames of brightness balance images and the basic weight images according to the enhanced Laplacian pyramid to obtain a fused image.
2. The dual exposure video imaging system of claim 1, wherein the first processor is an FPGA.
3. The dual exposure video imaging system of claim 1, wherein the visible light camera and the second processor are connected by a CAMERALINK transmission cable.
4. The dual exposure video imaging system of claim 1, wherein the second processor is an FPGA and the third processor is an ARM.
5. The dual exposure video imaging system of claim 1, further comprising a wireless transmit module and a wireless receive module, wherein the third processor and the PC host computer are wirelessly connected through the wireless transmit module and the wireless receive module.
6. The dual exposure video imaging system of claim 1, wherein the enhanced Laplacian pyramid is formulated as:

$$L\{F'\}^{d}_{x,y} = G^{d}_{x,y} \cdot L\{F\}^{d}_{x,y}$$

wherein $L\{F'\}^{d}_{x,y}$ is the enhanced Laplacian pyramid, $G^{d}_{x,y}$ is the gain coefficient matrix pyramid, $L\{F\}^{d}_{x,y}$ is the fused Laplacian pyramid, d is the layer number of the fused Laplacian pyramid, and (x, y) is the coordinate position.
7. The dual exposure video imaging system of claim 6, wherein the gain factor is calculated as:

$$G^{d}_{x,y} = G_L + (G_H - G_L)\left(1 - \frac{d}{D}\right)^{\gamma}\left(1 - V^{d}_{x,y}\right)$$

wherein $G^{d}_{x,y}$ is the gain coefficient matrix pyramid, $G_L$ is the minimum gain coefficient, $G_H$ is the maximum gain coefficient, D is the layer number of the highest layer of the fused Laplacian pyramid, d is the layer number of the fused Laplacian pyramid, γ is an adjustable parameter, and $V^{d}_{x,y}$ is the noise visibility at coordinate position (x, y) in the d-th layer of the fused Laplacian pyramid.
CN201910693033.1A 2019-07-30 2019-07-30 Double-exposure video imaging system based on dynamic scene Active CN110619593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910693033.1A CN110619593B (en) 2019-07-30 2019-07-30 Double-exposure video imaging system based on dynamic scene


Publications (2)

Publication Number Publication Date
CN110619593A CN110619593A (en) 2019-12-27
CN110619593B true CN110619593B (en) 2023-07-04

Family

ID=68921630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910693033.1A Active CN110619593B (en) 2019-07-30 2019-07-30 Double-exposure video imaging system based on dynamic scene

Country Status (1)

Country Link
CN (1) CN110619593B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113364964B (en) * 2020-03-02 2023-04-07 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
CN113347368B (en) * 2020-03-03 2023-04-18 华为技术有限公司 Video acquisition method and device based on exposure control
CN114630053B (en) * 2020-12-11 2023-12-12 青岛海信移动通信技术有限公司 HDR image display method and display device
CN112528944A (en) * 2020-12-23 2021-03-19 杭州海康汽车软件有限公司 Image identification method and device, electronic equipment and storage medium
CN114697537A (en) * 2020-12-31 2022-07-01 浙江清华柔性电子技术研究院 Image acquisition method, image sensor, and computer-readable storage medium
CN112887623B (en) * 2021-01-28 2022-11-29 维沃移动通信有限公司 Image generation method and device and electronic equipment
CN113012070B (en) * 2021-03-25 2023-09-26 常州工学院 High dynamic scene image sequence acquisition method based on fuzzy control
CN113079323B (en) * 2021-03-31 2022-02-11 中国科学院长春光学精密机械与物理研究所 Space remote sensing load automatic exposure method based on two-dimensional entropy
CN113222870B (en) * 2021-05-13 2023-07-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN115706870B (en) * 2021-08-12 2023-12-26 荣耀终端有限公司 Video processing method, device, electronic equipment and storage medium
CN115086567B (en) * 2021-09-28 2023-05-19 荣耀终端有限公司 Time delay photographing method and device
CN113822819B (en) * 2021-10-15 2023-10-27 Oppo广东移动通信有限公司 HDR scene detection method and device, terminal and readable storage medium
CN114071023B (en) * 2021-11-18 2023-06-02 成都微光集电科技有限公司 Image sensor exposure time sequence switching method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106375675A (en) * 2016-08-30 2017-02-01 中国科学院长春光学精密机械与物理研究所 Aerial camera multi-exposure image fusion method
WO2019071613A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Image processing method and device
CN109754377A (en) * 2018-12-29 2019-05-14 重庆邮电大学 A kind of more exposure image fusion methods

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8406569B2 (en) * 2009-01-19 2013-03-26 Sharp Laboratories Of America, Inc. Methods and systems for enhanced dynamic range images and video from multiple exposures
US10148888B2 (en) * 2016-05-18 2018-12-04 Texas Instruments Incorporated Image data processing for multi-exposure wide dynamic range image data


Also Published As

Publication number Publication date
CN110619593A (en) 2019-12-27

Similar Documents

Publication Publication Date Title
CN110619593B (en) Double-exposure video imaging system based on dynamic scene
US11558558B1 (en) Frame-selective camera
CN110072051B (en) Image processing method and device based on multi-frame images
CN110599433B (en) Double-exposure image fusion method based on dynamic scene
CN109636754B (en) Extremely-low-illumination image enhancement method based on generation countermeasure network
US11854167B2 (en) Photographic underexposure correction using a neural network
CN108419023B (en) Method for generating high dynamic range image and related equipment
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
KR101699919B1 (en) High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
US11132771B2 (en) Bright spot removal using a neural network
CN110033418B (en) Image processing method, image processing device, storage medium and electronic equipment
US7463296B2 (en) Digital cameras with luminance correction
JP4234195B2 (en) Image segmentation method and image segmentation system
WO2020207262A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN110191291B (en) Image processing method and device based on multi-frame images
CN109218627B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110248106B (en) Image noise reduction method and device, electronic equipment and storage medium
EP1583032A2 (en) Luminance Correction
WO2020207261A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN111064904A (en) Dark light image enhancement method
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108156369B (en) Image processing method and device
KR20210118233A (en) Apparatus and method for shooting and blending multiple images for high-quality flash photography using a mobile electronic device
CN110047060B (en) Image processing method, image processing device, storage medium and electronic equipment
WO2014093048A1 (en) Determining an image capture payload burst structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant