CN111986129A - HDR image generation method and device based on multi-shot image fusion and storage medium - Google Patents

HDR image generation method and device based on multi-shot image fusion and storage medium

Info

Publication number
CN111986129A
Authority
CN
China
Prior art keywords
image
exposure
fusion
hdr
shot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010617015.8A
Other languages
Chinese (zh)
Other versions
CN111986129B (en)
Inventor
符顺
牛永岭
许楚萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202010617015.8A priority Critical patent/CN111986129B/en
Publication of CN111986129A publication Critical patent/CN111986129A/en
Application granted granted Critical
Publication of CN111986129B publication Critical patent/CN111986129B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an HDR image generation method based on multi-shot image fusion, a terminal device and a computer storage medium. The method acquires, from a plurality of cameras, a plurality of images of the same scene shot simultaneously with different exposure levels, where the current exposure parameters of the cameras are generated from the result of the previous frame; then at least one abnormally exposed image is aligned with the normally exposed image as the reference; finally, the luminance channels and chrominance channels of the aligned images are fused separately to generate the HDR image. The technical scheme of the invention enables multiple cameras to generate HDR images and avoids the ghosting and color-cast problems that successive exposures with a single camera may cause.

Description

HDR image generation method and device based on multi-shot image fusion and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a storage medium for generating an HDR image based on multi-shot image fusion.
Background
Scenes in the real world often have a very high dynamic range in luminance. However, the dynamic range sensed by the sensor in a common digital imaging device (e.g., a camera) is much smaller, so when a scene with a large dynamic range is captured it is difficult to show all of its details in a single exposure, and regions that are too bright or too dark become overexposed or underexposed. To make the imaging result show rich color detail and tonal levels and better match the way human eyes perceive real-world scenes, High Dynamic Range (HDR) imaging has become an increasingly popular imaging technology in digital imaging equipment. The image obtained by HDR imaging is referred to as an HDR image, in contrast to a Low Dynamic Range (LDR) image. HDR images can provide a high dynamic range between the darker and the fully illuminated areas of a scene. A video composed of HDR images may be referred to as an HDR video.
Because mature HDR imaging sensors are still lacking, an HDR image cannot be captured in a single shot, and current HDR video generation methods are mainly based on the same principle: a single camera exposes the same scene multiple times with different exposure amounts so as to cover the brightness range of the whole scene, the images with different exposure values are synthesized into an HDR image, and an HDR video is then composed of multiple HDR images.
In the prior art, for example, patent CN105163047A proposes a method, a system and a shooting terminal for generating an HDR image based on color space conversion: it obtains at least three continuously shot images with different exposure levels, converts them through color space conversion to separate the image luminance and color components, applies camera response function mapping, weighted-sum fusion and tone mapping to the luminance components, fuses the color components, and obtains the HDR image by inverse conversion. However, when a scene containing a moving object is shot, the object appears at different positions in different pictures, and directly fusing the luminance produces a ghosting effect, which shows up as ghost images or blur in the composite image. Secondly, the color components (channels U and V of the YUV color space, or channels A and B of the LAB color space) are processed separately during fusion, so the fused color components no longer obey the original difference relationship between them, and a color cast appears in the picture.
Disclosure of Invention
The invention provides a method, a device and a storage medium for generating an HDR image based on multi-camera image fusion, which enable multiple cameras to generate an HDR image and avoid the ghosting and color-cast problems that may be caused by successive exposures of a single camera.
In order to solve the above technical problem, the present invention provides a method for generating an HDR image based on multi-shot image fusion, including:
acquiring a plurality of images with different exposure degrees, which are shot at the same time in the same scene, from a plurality of cameras; wherein the different exposure level images include: a normal exposure image and at least one abnormal exposure image; the exposure parameters corresponding to the camera for shooting the abnormal exposure image are obtained by calculation according to the image shot by the first camera in the last frame; the first camera is used for shooting the normal exposure image;
aligning the at least one abnormal exposure image by taking the normal exposure image as a reference;
and respectively fusing the brightness channel and the chrominance channel of the aligned image to generate the HDR image.
As an improvement of the above scheme, the exposure parameter corresponding to the camera that captures the abnormal exposure image is obtained by calculation according to the image captured by the first camera in the previous frame, and specifically includes:
taking the normally exposed image shot by the first camera in the previous frame as a first image;
down-sampling the first image, and generating an abnormal exposure binary image according to the pixel value of the brightness channel of the down-sampled first image at each position point;
and respectively calculating first exposure parameters corresponding to other cameras except the first camera in the next frame according to the abnormal exposure binary image, and taking the first exposure parameters as the exposure parameters corresponding to the camera for shooting the abnormal exposure image.
As an improvement of the above scheme, the calculating, according to the abnormal exposure binary image, first exposure parameters corresponding to the next frame of the other cameras except the first camera respectively includes:
adjusting, according to the proportion of abnormally exposed pixels in the abnormal exposure binary image to the total number of pixels and in combination with a preset proportion interval, the first exposure parameters of the other cameras for the next frame, so that, with the first camera as the center, the exposure parameters of the other cameras as a whole increase or decrease step by step.
As an improvement of the above scheme, when the sensor models of all the cameras are consistent, the exposure parameter is the exposure time.
As an improvement of the above, the aligning the at least one abnormal exposure image with the normal exposure image as a reference specifically includes:
and aligning the at least one abnormal exposure image by taking the normal exposure image as a reference according to a threshold bitmap alignment method, a feature point alignment method or a block matching alignment method.
As an improvement of the above scheme, the merging of the luminance channel and the chrominance channel is performed on the aligned images, respectively, to generate the HDR image, specifically:
performing brightness channel fusion on the aligned images by adopting a Laplace pyramid decomposition multi-exposure fusion method;
carrying out chroma channel fusion on the aligned images according to the corresponding chroma value when the absolute value of the color difference of the multiple images at the same position is maximum;
and generating the HDR image according to the image after the fusion of the brightness channel and the chrominance channel.
As an improvement of the above scheme, the performing luminance channel fusion on the aligned images by using a multiple exposure fusion method using laplacian pyramid decomposition specifically includes:
calculating a weight map of each image to be fused;
performing L-level Laplacian pyramid decomposition on a brightness channel of each image to be fused, and performing L-level Gauss pyramid decomposition on each weight map; l is a positive integer;
and calculating the Laplace component of each fused layer, and reconstructing to obtain a brightness fused image.
The embodiment of the invention also provides an HDR video generation method based on multi-shot image fusion: several HDR images are generated according to the above HDR image generation method, and an HDR video is generated from them.
An embodiment of the present invention further provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, and when the processor executes the computer program, the processor implements the HDR image generation method based on multi-shot image fusion as described in any one of the above.
The embodiment of the invention also provides a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, an apparatus where the computer-readable storage medium is located is controlled to execute the HDR image generation method based on multi-shot image fusion as described in any one of the above.
The embodiment of the invention has the following beneficial effects:
the invention provides a HDR image generation method based on multi-shot image fusion, terminal equipment and a computer storage medium, wherein the method acquires a plurality of images with different exposure degrees shot simultaneously in the same scene from a plurality of cameras, and the current exposure parameters of the cameras are generated by the result of the last frame; then, taking the normal exposure image as a reference, and aligning at least one abnormal exposure image; and finally, fusing the brightness channel and the chrominance channel of the aligned image respectively to generate the HDR image. Compared with the prior art that images with different exposure degrees are continuously photographed by using a single camera and then fused to generate an HDR image, the technical scheme of the invention can realize that multiple cameras generate the HDR image, and avoid the problems of ghost and color cast possibly caused by continuous exposure of a single camera.
Furthermore, during chroma fusion the chroma value corresponding to the maximum absolute value of the color difference is selected, which preserves the difference relationship between the color channels and further avoids the color-cast problem of the prior art.
Furthermore, during luminance fusion, downsampling is performed before Gaussian blurring, which shortens the data processing time and improves fusion efficiency.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of an HDR image generation method based on multi-shot image fusion provided by the present invention;
FIG. 2 is a schematic diagram of an embodiment of a filtering template provided by the present invention;
FIG. 3 is a flow chart illustrating an embodiment of luminance channel fusion provided by the present invention;
fig. 4 is a schematic structural diagram of an embodiment of a terminal device provided by the present invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of an HDR image generation method based on multi-shot image fusion, as shown in fig. 1, the method includes steps 101 to 103, where each step is as follows:
step 101: acquiring a plurality of images with different exposure degrees, which are shot at the same time in the same scene, from a plurality of cameras; wherein the different exposure level images include: a normal exposure image and at least one abnormal exposure image; the exposure parameters corresponding to the camera shooting the abnormal exposure image are obtained by calculation according to the image shot by the first camera in the last frame.
In the present embodiment, a plurality of cameras shoot the same scene simultaneously, and the current exposure parameter of each camera is generated from the result of the previous frame. The shot images have different exposure levels and include a normal exposure image and at least one abnormal exposure image, where the exposure levels are classified into overexposure, underexposure and normal exposure. The camera that shoots the normal exposure is the first camera; the other cameras shoot the abnormally exposed images.
In this embodiment, the exposure parameters corresponding to the camera that captures the abnormal exposure image are obtained by calculation according to the image captured by the first camera in the previous frame, and specifically include the following steps:
step 1011: the first camera takes a first image of a normal exposure image shot in the last frame.
Step 1012: and performing down-sampling on the first image, and generating an abnormal exposure binary image according to the pixel value of the brightness channel of the down-sampled first image at each position point.
Step 1013: and respectively calculating first exposure parameters corresponding to other cameras except the first camera in the next frame according to the abnormal exposure binary image, and taking the first exposure parameters as the exposure parameters corresponding to the camera for shooting the abnormal exposure image.
To better illustrate the principles of this step, the following example is given.
1) The normally exposed image shot by the first camera in the previous frame is the first image, and the abnormally exposed images shot by the other cameras in the previous frame are the second images. In this example, the second images represent all the abnormally exposed images taken by the other cameras and include at least one image, so the first image and the second images together contain K images: the normal exposure image img_normal and the abnormal exposure images img_abnormal_k (k = 1, ..., K−1). If an image is not in YUV format, it is converted into YUV format.
2) The normal exposure image img_normal is downsampled to obtain a downsampled image img_normal_down, whose width and height are reduced to 1/n1 and 1/n2 of the original, where n1 and n2 are positive integers, e.g. n1 = 8, n2 = 8. The pixel value y_(i,j) of the luminance channel Y of img_normal_down is examined at each position (i, j): if y_(i,j) is less than a threshold T1 (e.g. 25) or greater than a threshold T2 (e.g. 230), the pixel at that position is an abnormally exposed pixel and the value at (i, j) is set to y_max (e.g. 1); if T1 ≤ y_(i,j) ≤ T2, the pixel is a normally exposed pixel and the value at (i, j) is set to 0. After all pixels have been examined and set, the abnormal exposure binary image img_mask is obtained.
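As an illustrative sketch only (not the patent's reference implementation), step 2) could look roughly like the following Python/OpenCV code; the downsampling factors n1 = n2 = 8, the thresholds T1 = 25 and T2 = 230, and y_max = 1 are the example values given above, and the function name is hypothetical.

```python
import cv2
import numpy as np

def abnormal_exposure_mask(img_normal_yuv, n1=8, n2=8, t1=25, t2=230, y_max=1):
    """Build the abnormal-exposure binary map img_mask from the previous
    frame's normally exposed YUV image (illustrative sketch)."""
    y = img_normal_yuv[:, :, 0]                         # luminance channel Y
    h, w = y.shape
    # Downsample width and height to 1/n1 and 1/n2 of the original size.
    y_down = cv2.resize(y, (w // n1, h // n2), interpolation=cv2.INTER_AREA)
    # Pixels darker than T1 or brighter than T2 are abnormally exposed.
    return np.where((y_down < t1) | (y_down > t2), y_max, 0).astype(np.uint8)
```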
3) Before the next step is executed, a morphological erosion operation may be performed on the abnormal exposure binary image img_mask to remove noise, giving an image img_mask2. The erosion can be implemented by filtering the image with a template, for example the 3 × 3 template shown in FIG. 2, where 0 and 1 are weight values and 0 is the weight of the current position; filtering is centered on the pixel at the 0-weight position, and the filter response is the sum of the surrounding pixels. When the filter response at a pixel position is not equal to 8 × y_max, the pixel value at that position is set to 0; otherwise it is set to y_max.
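A minimal sketch of the erosion in step 3), assuming the FIG. 2 template is the 8-neighbour pattern the text implies (centre weight 0, surrounding weights 1); the helper name and the use of cv2.filter2D are illustrative choices, not the patent's implementation.

```python
import cv2
import numpy as np

def erode_mask(img_mask, y_max=1):
    """Morphological erosion of img_mask as in step 3): a pixel keeps the
    value y_max only if all 8 of its neighbours are y_max (sketch)."""
    # Assumed template: centre weight 0, surrounding weights 1, so the filter
    # response at each pixel is the sum of its 8 neighbours.
    template = np.array([[1, 1, 1],
                         [1, 0, 1],
                         [1, 1, 1]], dtype=np.float32)
    response = cv2.filter2D(img_mask.astype(np.float32), -1, template,
                            borderType=cv2.BORDER_CONSTANT)
    # Keep the pixel only when the response equals 8 * y_max.
    return np.where(response == 8 * y_max, y_max, 0).astype(np.uint8)
```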
4) After the noise is removed, step 1013 becomes: according to the proportion of abnormally exposed pixels in img_mask2 to the total number of pixels, and in combination with a preset proportion interval, adjust the first exposure parameters of the other cameras for the next frame, so that, with the first camera as the center, the exposure parameters of the other cameras as a whole increase or decrease step by step.
The exposure parameters include the exposure time, exposure gain, aperture, and so on. The adjustment may differ because different lens models use different sensors. The overall exposure parameters are adjusted so that they increase or decrease step by step, giving the shot images a stepped effect; for example, five images would be a maximally overexposed image, a moderately overexposed image, a normally exposed image, a moderately underexposed image and a maximally underexposed image.
The adjusted exposure parameter may be, but is not limited to, the exposure time when the sensor models of all cameras are identical. For example, when the proportion of abnormally exposed pixels to the whole image is 1/2 and the exposure time of the normally exposed camera is t, the maximum exposure time for the next frame is 2t for the most overexposed camera, the exposure times of the remaining overexposed cameras are taken proportionally within [t, 2t], and the exposure times of the underexposed cameras are taken proportionally within [t/2, t]. Likewise, when the proportion is 1/3, then 1 − 1/3 = 2/3, the overexposure interval is [t, 3t/2] and the underexposure interval is [2t/3, t].
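The interval rule suggested by the two examples above (over-exposure spread over [t, t/(1 − r)] and under-exposure over [(1 − r)·t, t] for an abnormal-pixel ratio r) can be sketched as follows; the evenly spaced sampling and the function name are assumptions for illustration.

```python
import numpy as np

def next_frame_exposure_times(t_normal, abnormal_ratio, n_over, n_under):
    """Spread exposure times for the over- and under-exposed cameras of the
    next frame around the normal exposure time t_normal (sketch; the interval
    rule [t, t/(1-r)] and [(1-r)*t, t] is inferred from the examples above)."""
    r = min(max(abnormal_ratio, 0.0), 0.99)             # guard the ratio
    t_max = t_normal / (1.0 - r)                        # longest exposure
    t_min = (1.0 - r) * t_normal                        # shortest exposure
    # Sample the intervals evenly so the exposures step up/down around t_normal.
    over = np.linspace(t_normal, t_max, n_over + 1)[1:]     # exclude t_normal
    under = np.linspace(t_min, t_normal, n_under + 1)[:-1]  # exclude t_normal
    return under.tolist() + [t_normal] + over.tolist()
```

For instance, with t_normal = 10 ms, abnormal_ratio = 0.5 and two cameras on each side, this yields exposure times of 5, 7.5, 10, 15 and 20 ms, matching the stepped effect described above.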
The next frame is then shot with the adjusted exposure parameters, so that control and shooting form an adaptive, self-adjusting loop, which provides a feasible way for multiple cameras to generate HDR images.
Step 102: and aligning at least one abnormal exposure image by taking the normal exposure image as a reference.
In this embodiment, step 102 specifically includes: and aligning at least one abnormal exposure image by taking the normal exposure image as a reference according to a threshold bitmap alignment method, a feature point alignment method or a block matching alignment method.
Taking a block matching alignment method as an example, the alignment method comprises the following steps:
step 1021: the normal-exposure image img _ normal is divided into m × n blocks. m and n are positive integers.
Step 1022: in each differently exposed image img_abnormal_k (k = 1, ..., K−1), search for the block block_abnormal_(j,k) (j = 1, ..., m × n) that best matches each block block_normal_j (j = 1, ..., m × n) of img_normal; the matched block has the same width and height as block_normal_j. The best match may be the block of the same size within img_abnormal_k for which the sum of the absolute differences of the corresponding pixels with respect to block_normal_j is minimal. When searching for a matching block, the matching cost can be computed with the following formula:
$$D(d_1,d_2)=\sum_{n_1=0}^{N_1-1}\sum_{n_2=0}^{N_2-1}\bigl|\,b(n_1,n_2)-b'(n_1+d_1,\;n_2+d_2)\bigr|$$
where the parameters n_1, n_2 are the pixel coordinates within a block b of the img_normal image, and n_1 + d_1, n_2 + d_2 are the pixel coordinates within a block b′ of the img_abnormal image; the two blocks have a global positional offset (d_1, d_2), and N_1, N_2 are the height and width of the block, with blocks b and b′ having the same size.
Step 1023: by searching img_abnormal_k for the best match of every block_normal_j, a matching image img_abnormal_align_k with the same width and height as img_normal can be assembled from the matched blocks, thereby aligning the images.
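A brute-force sketch of the block matching in steps 1021 to 1023, operating on the luminance channel only; the search radius is an assumed parameter (the patent does not give one).

```python
import numpy as np

def block_match_align(y_normal, y_abnormal, m=8, n=8, radius=16):
    """Align one abnormally exposed luminance image to the reference image by
    per-block search for the minimum sum of absolute differences (sketch;
    `radius` is an assumed search range, not a value given in the patent)."""
    H, W = y_normal.shape
    bh, bw = H // m, W // n
    aligned = np.zeros_like(y_normal)
    for bi in range(m):
        for bj in range(n):
            y0, x0 = bi * bh, bj * bw
            block_ref = y_normal[y0:y0 + bh, x0:x0 + bw].astype(np.int32)
            best_cost, best = None, (0, 0)
            for d1 in range(-radius, radius + 1):
                for d2 in range(-radius, radius + 1):
                    ys, xs = y0 + d1, x0 + d2
                    if ys < 0 or xs < 0 or ys + bh > H or xs + bw > W:
                        continue            # candidate block falls outside
                    cand = y_abnormal[ys:ys + bh, xs:xs + bw].astype(np.int32)
                    cost = np.abs(block_ref - cand).sum()
                    if best_cost is None or cost < best_cost:
                        best_cost, best = cost, (d1, d2)
            d1, d2 = best
            aligned[y0:y0 + bh, x0:x0 + bw] = \
                y_abnormal[y0 + d1:y0 + d1 + bh, x0 + d2:x0 + d2 + bw]
    return aligned
```

In practice the chrominance channels of img_abnormal_k would be shifted by the same per-block offsets so that Y, U and V stay consistent.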
In this embodiment, threshold bitmap alignment and feature point alignment are prior art and are not described herein again.
Step 103: and respectively fusing the brightness channel and the chrominance channel of the aligned image to generate the HDR image.
In this embodiment, step 103 includes steps 1031 to 1033, and each step is as follows:
step 1031: and performing brightness channel fusion on the aligned images by adopting a Laplace pyramid decomposition multi-exposure fusion method.
Step 1032: and carrying out chroma channel fusion on the aligned images according to the corresponding chroma values of the multiple images at the same position when the absolute value of the color difference is maximum.
Step 1033: and generating the HDR image according to the image after the fusion of the brightness channel and the chrominance channel.
In this embodiment, step 1031 specifically includes: calculating a weight map of each image to be fused; performing L-level Laplacian pyramid decomposition on a brightness channel of each image to be fused, and performing L-level Gauss pyramid decomposition on each weight map; l is a positive integer; and calculating the Laplace component of each fused layer, and reconstructing to obtain a brightness fused image.
To better explain step 1031 of the present invention, please refer to FIG. 3, which is a schematic flowchart of an embodiment of luminance channel fusion provided by the present invention. As shown in FIG. 3:
a) For the K aligned images, calculate the weight map W_k of each image to be fused. Here the weight map W_k holds the weight of each position of the luminance channel Y of image k, and is calculated as follows:
[Equation images in the original: the per-pixel weight W_k(i, j) is computed from the luminance value y_k(i, j) ∈ [0, 255] of image k at position (i, j).]
After the weight of each position is calculated, weight normalization is carried out:
$$\bar{W}_k(i,j)=\frac{W_k(i,j)}{\sum_{m=1}^{K} W_m(i,j)}$$
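Since the exact weight formula is only available as an equation image above, the sketch below substitutes a common well-exposedness weight (a Gaussian centred on mid-grey) purely as a stand-in, then applies the normalisation formula; treat the weight itself and the sigma value as assumptions.

```python
import numpy as np

def luminance_weights(y_stack, sigma=0.2):
    """Weights W_k for K aligned luminance images (shape K x H x W), followed
    by the normalisation above. The Gaussian well-exposedness weight is only a
    stand-in; the patent's actual weight formula is in the equation images."""
    y = y_stack.astype(np.float32) / 255.0              # map [0, 255] to [0, 1]
    w = np.exp(-((y - 0.5) ** 2) / (2.0 * sigma ** 2))  # favour mid-tones
    return w / (w.sum(axis=0) + 1e-12)                  # normalised weights
```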
b) Perform L-level Laplacian pyramid decomposition on the luminance channels of the K images, and L-level Gaussian pyramid decomposition on each weight map.
i. First calculate the L Laplacian components of the luminance channel of each image, lpY_1(0) ~ lpY_1(L−1), ..., lpY_K(0) ~ lpY_K(L−1), and the highest-level Gaussian components gsY_1(L), ..., gsY_K(L). The calculation is as follows:
y-channel image g for each image0M × N, 1/2 downsampling, and passing the downsampled image through a symmetric low-pass filter ω (e.g., 3 × 3) to obtain an (M/2) × (N/2) gaussian image g1This process is repeated L times to obtain a series of images g of progressively halved size0…gL. Image g1An image EXPAND (g) of size M N is obtained by 2-fold upsampling1) Where EXPAND (-) represents a 2-fold upsampling operation. Then the 0 th layer has laplace lpY (0) ═ g0-EXPAND(g1) (ii) a This process was performed L-1 times to obtain laplace component lpy (i) ═ g for each layeri-EXPAND(g1+1) (i 1.., N), gaussian component gsy (l) of the highest layer, (l) gL. Where the low-pass filter ω is at 3 × 3 as follows:
[Equation image in the original: the coefficients of the 3 × 3 low-pass filter ω.]
then, calculate the L +1 Laplace components in weight for each graph gsW1(0)~gsW1(L-1),……,gsWK(0)~gsWK(L-1). The calculation method is as follows: the weight map W (i.e., gsW (0)) for each map is M × N, down-sampled by 1/2, and then the down-sampled image is passed through a symmetric low-pass filter ω (e.g., 3 × 3) to obtain a secondary gaussian image gsW (1), which is repeated L times to obtain a series of images gsW (0) with sizes halved step by step.
In this step, downsampling is performed before Gaussian blurring, which cuts the computation time by more than half. In addition, the Gaussian blur template used here is 3 × 3 rather than the usual 5 × 5; once the pyramid has enough levels the two give visually similar results, but the 3 × 3 template is computed in roughly one fifth of the time.
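A sketch of the decomposition in step b), following the order described in the text (downsample first, then low-pass filter); the 3 × 3 kernel coefficients are an assumed standard binomial kernel, since the patent's actual filter ω is shown only as an equation image.

```python
import cv2
import numpy as np

# Assumed 3x3 binomial low-pass filter; the patent's actual coefficients of
# the filter omega are shown only as an equation image above.
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float32) / 16.0

def expand(img, shape_hw):
    """EXPAND(.): 2x upsampling back to a given (height, width)."""
    h, w = shape_hw
    return cv2.resize(img, (w, h), interpolation=cv2.INTER_LINEAR)

def gaussian_pyramid(img, levels):
    """Downsample first, then low-pass with the 3x3 template, as in the text."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels):
        prev = pyr[-1]
        small = cv2.resize(prev, (prev.shape[1] // 2, prev.shape[0] // 2),
                           interpolation=cv2.INTER_AREA)
        pyr.append(cv2.filter2D(small, -1, GAUSS_3X3))
    return pyr                                          # g_0 ... g_L

def laplacian_pyramid(img, levels):
    """Return lpY(0..L-1) and the top Gaussian component gsY(L)."""
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - expand(gp[i + 1], gp[i].shape[:2]) for i in range(levels)]
    return lp, gp[-1]
```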
c) Calculate the fused Laplacian component of each level.
The components lpY_1(0) ~ lpY_1(L−1), gsY_1(L), ..., lpY_K(0) ~ lpY_K(L−1), gsY_K(L) are fused level by level using the weights gsW_1(0) ~ gsW_1(L), ..., gsW_K(0) ~ gsW_K(L). The value of the fused i-th Laplacian level at coordinates (x, y) is denoted lpF(i)(x, y), and the value of the fused highest-level Gaussian component at (x, y) is denoted gsF(L)(x, y):
$$\mathrm{lpF}(i)(x,y)=\sum_{k=1}^{K}\mathrm{gsW}_k(i)(x,y)\cdot\mathrm{lpY}_k(i)(x,y),\qquad i=0,\dots,L-1$$
$$\mathrm{gsF}(L)(x,y)=\sum_{k=1}^{K}\mathrm{gsW}_k(L)(x,y)\cdot\mathrm{gsY}_k(L)(x,y)$$
d) Reconstruct to obtain the fused luminance channel. The fused Laplacian pyramid is collapsed recursively, layer by layer from the top layer down to the bottom layer, as follows; when the recursion finishes, the final fusion result G_0 is obtained:
$$G_L=\mathrm{gsF}(L),\qquad G_i=\mathrm{lpF}(i)+\mathrm{EXPAND}(G_{i+1}),\quad i=L-1,\dots,0$$
where EXPAND(·) denotes the 2-fold upsampling operation.
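Putting steps c) and d) together, and reusing the helper sketches above (luminance_weights, gaussian_pyramid, laplacian_pyramid, expand), the weighted per-level fusion and the top-down reconstruction could be sketched as follows.

```python
import numpy as np

def fuse_luminance(y_stack, levels=5):
    """Steps c) and d): weight each Laplacian level by the Gaussian pyramid of
    the normalised weights, then collapse from the top level down (sketch,
    reusing luminance_weights / gaussian_pyramid / laplacian_pyramid / expand
    from the earlier sketches)."""
    K = y_stack.shape[0]
    weights = luminance_weights(y_stack)                # normalised W_k maps
    lp_fused = [0.0] * levels                           # lpF(0..L-1)
    gs_top = 0.0                                        # gsF(L)
    for k in range(K):
        lp_k, top_k = laplacian_pyramid(y_stack[k], levels)
        w_pyr = gaussian_pyramid(weights[k], levels)    # gsW_k(0..L)
        for i in range(levels):
            lp_fused[i] = lp_fused[i] + w_pyr[i] * lp_k[i]
        gs_top = gs_top + w_pyr[levels] * top_k
    # Reconstruction: G_L = gsF(L); G_i = lpF(i) + EXPAND(G_{i+1})
    g = gs_top
    for i in range(levels - 1, -1, -1):
        g = lp_fused[i] + expand(g, lp_fused[i].shape[:2])
    return np.clip(g, 0, 255).astype(np.uint8)
```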
In this embodiment, the chroma fusion in step 1032 is specifically: for the K aligned images, the fused chroma value (u_f(x, y), v_f(x, y)) at coordinates (x, y) should be the chroma value of the image whose color-difference absolute value at that position is maximum. In YUV space, the fused chroma value at coordinates (x, y) is calculated as follows:
[Equation images in the original: the first formula sets pos = argmax over i of the chroma-difference magnitude formed from abs(u_i(x, y)) and abs(v_i(x, y)); the second sets (u_f(x, y), v_f(x, y)) = (u_pos(x, y), v_pos(x, y)).]
where u_i(x, y), v_i(x, y) are the u and v values at coordinates (x, y) of the i-th of the K aligned images, abs is the absolute-value operation, and argmax returns the value of the parameter i at which the expression is maximal. Thus the left-hand side of the first formula, pos, is the index i at which the maximum is attained when the K images are compared at coordinates (x, y). In the second formula, (u_f(x, y), v_f(x, y)) is the fused value.
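A sketch of the chroma selection, assuming 8-bit YUV where 128 is the neutral chroma value and using |u − 128| + |v − 128| as the colour-difference magnitude; the exact magnitude used in the patent's formula (shown only as an equation image) may differ.

```python
import numpy as np

def fuse_chroma(u_stack, v_stack):
    """Per pixel, keep the (u, v) pair of the image whose colour-difference
    magnitude is largest (sketch; |u-128| + |v-128| is an assumed magnitude,
    128 being the neutral chroma value in 8-bit YUV)."""
    diff = (np.abs(u_stack.astype(np.int32) - 128) +
            np.abs(v_stack.astype(np.int32) - 128))     # shape K x H x W
    pos = np.argmax(diff, axis=0)                       # winning image index
    rows, cols = np.indices(pos.shape)
    return u_stack[pos, rows, cols], v_stack[pos, rows, cols]
```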
Finally, in step 1033, once the fusion results of the luminance channel and the chrominance channels are obtained, they are combined into the HDR image, which has the same width and height as the normally exposed image.
From the above, the invention provides an HDR image generation method based on multi-shot image fusion, which acquires from multiple cameras a plurality of images of the same scene shot simultaneously with different exposure levels, where the current exposure parameters of the cameras are generated from the result of the previous frame; then at least one abnormally exposed image is aligned with the normally exposed image as the reference; finally, the luminance and chrominance channels of the aligned images are fused separately to generate the HDR image. Compared with the prior art, in which a single camera continuously shoots images with different exposure levels that are then fused into an HDR image, the technical scheme of the invention enables multiple cameras to generate HDR images and avoids the ghosting and color cast that successive exposures of a single camera may cause.
Furthermore, during chroma fusion the chroma value corresponding to the maximum absolute value of the color difference is selected, which preserves the difference relationship between the color channels and further avoids the color-cast problem of the prior art.
Furthermore, during luminance fusion, downsampling is performed before Gaussian blurring, which shortens the data processing time and improves fusion efficiency.
Correspondingly, the invention also provides an HDR video generation method based on multi-shot image fusion, which comprises: generating several HDR images according to the HDR image generation method of the present invention, and generating an HDR video from the generated HDR images. Generating an HDR video from HDR images is prior art and is not described again here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of a terminal device provided in the present invention.
The terminal device provided by the embodiment of the present invention includes a processor 71, a memory 72, and a computer program stored in the memory 72 and configured to be executed by the processor 71, where the processor 71 executes the computer program to implement the steps of the HDR image generation method based on multi-shot image fusion in the embodiment, for example, all the steps of the HDR image generation method shown in fig. 1.
In addition, the embodiment of the present invention also provides a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, an apparatus where the computer-readable storage medium is located is controlled to execute the HDR image generation method based on multi-shot image fusion as described in any one of the above embodiments.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a terminal device and does not constitute a limitation of the terminal device, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The Processor 71 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor 71 is the control center of the terminal device and connects the various parts of the whole terminal device through various interfaces and lines.
The memory 72 can be used for storing the computer programs and/or modules, and the processor 71 can implement various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory 72 and calling the data stored in the memory 72. The memory 72 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other volatile solid state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (10)

1. An HDR image generation method based on multi-shot image fusion, characterized by comprising the following steps:
acquiring a plurality of images with different exposure degrees, which are shot at the same time in the same scene, from a plurality of cameras; wherein the different exposure level images include: a normal exposure image and at least one abnormal exposure image; the exposure parameters corresponding to the camera for shooting the abnormal exposure image are obtained by calculation according to the image shot by the first camera in the last frame; the first camera is used for shooting the normal exposure image;
aligning the at least one abnormal exposure image by taking the normal exposure image as a reference;
and respectively fusing the brightness channel and the chrominance channel of the aligned image to generate the HDR image.
2. The method as claimed in claim 1, wherein the exposure parameters corresponding to the camera capturing the abnormal exposure image are obtained by calculating from the image captured by the first camera in the previous frame, and specifically:
the first camera takes a first image as a normal exposure image shot in the last frame;
down-sampling the first image, and generating an abnormal exposure binary image according to the pixel value of the brightness channel of the down-sampled first image at each position point;
and respectively calculating first exposure parameters corresponding to other cameras except the first camera in the next frame according to the abnormal exposure binary image, and taking the first exposure parameters as the exposure parameters corresponding to the camera for shooting the abnormal exposure image.
3. The method for generating an HDR image based on multi-shot image fusion as claimed in claim 2, wherein the calculating, according to the binarized image with abnormal exposure, first exposure parameters corresponding to the next frame of other cameras except the first camera, specifically:
and adjusting first exposure parameters corresponding to other cameras in the next frame by combining a preset proportion interval according to the proportion of the number of abnormal exposure pixels in the abnormal exposure binary image to the number of full image pixels, so that all the exposure parameters are formed by taking the first camera as the center, and the overall exposure parameters of other cameras are gradually increased or decreased.
4. The method as claimed in claim 3, wherein when the sensor models of all cameras are the same, the exposure parameter is the exposure time.
5. The method as claimed in any one of claims 1 to 4, wherein the aligning the at least one abnormal exposure image with the normal exposure image as a reference is specifically:
and aligning the at least one abnormal exposure image by taking the normal exposure image as a reference according to a threshold bitmap alignment method, a feature point alignment method or a block matching alignment method.
6. The method for generating an HDR image based on multi-shot image fusion as claimed in claim 5, wherein the fusion of the luminance channel and the chrominance channel is performed on the aligned images, respectively, to generate the HDR image, specifically:
performing brightness channel fusion on the aligned images by adopting a Laplace pyramid decomposition multi-exposure fusion method;
carrying out chroma channel fusion on the aligned images according to the corresponding chroma value when the absolute value of the color difference of the multiple images at the same position is maximum;
and generating the HDR image according to the image after the fusion of the brightness channel and the chrominance channel.
7. The method for generating an HDR image based on multi-shot image fusion as claimed in claim 6, wherein the multi-exposure fusion method using laplacian pyramid decomposition performs luminance channel fusion on the aligned images, specifically:
calculating a weight map of each image to be fused;
performing L-level Laplacian pyramid decomposition on a brightness channel of each image to be fused, and performing L-level Gauss pyramid decomposition on each weight map; l is a positive integer;
and calculating the Laplace component of each fused layer, and reconstructing to obtain a brightness fused image.
8. An HDR video generation method based on multi-shot image fusion, characterized in that, according to the HDR image generation method of any one of claims 1 to 7, several HDR images are generated, and an HDR video is generated from the generated HDR images.
9. A terminal device comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the method for HDR image generation based on multi-shot image fusion of any one of claims 1-7 when executing the computer program.
10. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program runs, the computer-readable storage medium controls an apparatus to execute the HDR image generation method based on multi-shot image fusion according to any one of claims 1 to 7.
CN202010617015.8A 2020-06-30 2020-06-30 HDR image generation method, equipment and storage medium based on multi-shot image fusion Active CN111986129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010617015.8A CN111986129B (en) 2020-06-30 2020-06-30 HDR image generation method, equipment and storage medium based on multi-shot image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010617015.8A CN111986129B (en) 2020-06-30 2020-06-30 HDR image generation method, equipment and storage medium based on multi-shot image fusion

Publications (2)

Publication Number Publication Date
CN111986129A true CN111986129A (en) 2020-11-24
CN111986129B CN111986129B (en) 2024-03-19

Family

ID=73438433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010617015.8A Active CN111986129B (en) 2020-06-30 2020-06-30 HDR image generation method, equipment and storage medium based on multi-shot image fusion

Country Status (1)

Country Link
CN (1) CN111986129B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233049A (en) * 2020-12-14 2021-01-15 成都中轨轨道设备有限公司 Image fusion method for improving image definition
CN112651918A (en) * 2021-01-15 2021-04-13 北京小米松果电子有限公司 Image processing method and device, electronic device and storage medium
CN112651911A (en) * 2020-12-01 2021-04-13 广东工业大学 High dynamic range imaging generation method based on polarization image
CN112837254A (en) * 2021-02-25 2021-05-25 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN113012081A (en) * 2021-01-28 2021-06-22 北京迈格威科技有限公司 Image processing method, device and electronic system
CN113038025A (en) * 2021-02-26 2021-06-25 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN113222869A (en) * 2021-05-06 2021-08-06 杭州海康威视数字技术股份有限公司 Image processing method
CN113706429A (en) * 2021-07-30 2021-11-26 上海智砹芯半导体科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114418912A (en) * 2021-12-27 2022-04-29 杭州意象科技有限公司 Multi-angle illumination reflection elimination and multi-frame multi-angle illumination image fusion algorithm
CN114529477A (en) * 2022-02-28 2022-05-24 山东威高手术机器人有限公司 Binocular endoscope with high dynamic range, system and imaging method
CN117135468A (en) * 2023-02-21 2023-11-28 荣耀终端有限公司 Image processing method and electronic equipment
EP4369728A1 (en) * 2022-11-10 2024-05-15 Beijing Xiaomi Mobile Software Co., Ltd. Photographing method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105163047A (en) * 2015-09-15 2015-12-16 厦门美图之家科技有限公司 HDR (High Dynamic Range) image generation method and system based on color space conversion and shooting terminal
CN105578042A (en) * 2015-12-18 2016-05-11 深圳市金立通信设备有限公司 Image data transmission method and terminal
CN107395998A (en) * 2017-08-24 2017-11-24 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
US9955084B1 (en) * 2013-05-23 2018-04-24 Oliver Markus Haynold HDR video camera
CN108063902A (en) * 2018-01-08 2018-05-22 信利光电股份有限公司 HDR image pickup methods, filming apparatus and the mobile terminal and readable storage medium storing program for executing of multi-cam
WO2018137267A1 (en) * 2017-01-25 2018-08-02 华为技术有限公司 Image processing method and terminal apparatus
WO2020038069A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Exposure control method and device, and electronic apparatus
CN110930440A (en) * 2019-12-09 2020-03-27 Oppo广东移动通信有限公司 Image alignment method and device, storage medium and electronic equipment
CN111147755A (en) * 2020-01-02 2020-05-12 普联技术有限公司 Zoom processing method and device for double cameras and terminal equipment

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9955084B1 (en) * 2013-05-23 2018-04-24 Oliver Markus Haynold HDR video camera
CN105163047A (en) * 2015-09-15 2015-12-16 厦门美图之家科技有限公司 HDR (High Dynamic Range) image generation method and system based on color space conversion and shooting terminal
CN105578042A (en) * 2015-12-18 2016-05-11 深圳市金立通信设备有限公司 Image data transmission method and terminal
WO2018137267A1 (en) * 2017-01-25 2018-08-02 华为技术有限公司 Image processing method and terminal apparatus
CN107395998A (en) * 2017-08-24 2017-11-24 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN108063902A (en) * 2018-01-08 2018-05-22 信利光电股份有限公司 HDR image pickup methods, filming apparatus and the mobile terminal and readable storage medium storing program for executing of multi-cam
WO2020038069A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Exposure control method and device, and electronic apparatus
CN110930440A (en) * 2019-12-09 2020-03-27 Oppo广东移动通信有限公司 Image alignment method and device, storage medium and electronic equipment
CN111147755A (en) * 2020-01-02 2020-05-12 普联技术有限公司 Zoom processing method and device for double cameras and terminal equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ASHISH V. VANMALI et al., "Multi-exposure image fusion for dynamic scenes without ghost effect", 2015 Twenty First National Conference on Communications, pages 1-6
KALPANA SESHADRINATHAN et al., "High dynamic range imaging using camera arrays", 2017 IEEE International Conference on Image Processing, pages 725-729
VLADAN POPOVIC et al., "Multi-camera platform for panoramic real-time HDR video construction and rendering", Journal of Real-Time Image Processing, pages 697-708
周燕琴 et al., "A multi-exposure HDR image generation method with an improved pyramid", Modern Computer, no. 15, pages 130-136
徐桂忠 et al., "A synthesis method for high dynamic range images based on YUV space", Journal of Communication University of China (Natural Science Edition), vol. 24, no. 3, pages 11-13
杜永生 et al., "Multi-exposure image fusion algorithm based on quality measurement and color correction", Journal of Electronic Measurement and Instrumentation, vol. 33, no. 1, pages 90-98

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651911A (en) * 2020-12-01 2021-04-13 广东工业大学 High dynamic range imaging generation method based on polarization image
CN112651911B (en) * 2020-12-01 2023-10-13 广东工业大学 High dynamic range imaging generation method based on polarized image
CN112233049A (en) * 2020-12-14 2021-01-15 成都中轨轨道设备有限公司 Image fusion method for improving image definition
CN112651918A (en) * 2021-01-15 2021-04-13 北京小米松果电子有限公司 Image processing method and device, electronic device and storage medium
CN113012081A (en) * 2021-01-28 2021-06-22 北京迈格威科技有限公司 Image processing method, device and electronic system
CN112837254B (en) * 2021-02-25 2024-06-11 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN112837254A (en) * 2021-02-25 2021-05-25 普联技术有限公司 Image fusion method and device, terminal equipment and storage medium
CN113038025A (en) * 2021-02-26 2021-06-25 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN113038025B (en) * 2021-02-26 2023-06-20 Oppo广东移动通信有限公司 Image processing method, terminal and storage medium
CN113222869A (en) * 2021-05-06 2021-08-06 杭州海康威视数字技术股份有限公司 Image processing method
CN113222869B (en) * 2021-05-06 2024-03-01 杭州海康威视数字技术股份有限公司 Image processing method
CN113706429B (en) * 2021-07-30 2023-08-04 爱芯元智半导体(上海)有限公司 Image processing method, device, electronic equipment and storage medium
CN113706429A (en) * 2021-07-30 2021-11-26 上海智砹芯半导体科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114418912A (en) * 2021-12-27 2022-04-29 杭州意象科技有限公司 Multi-angle illumination reflection elimination and multi-frame multi-angle illumination image fusion algorithm
CN114418912B (en) * 2021-12-27 2024-05-14 杭州意象科技有限公司 Multi-angle illumination image fusion algorithm for eliminating reflection and multi-frame multi-angle illumination
CN114529477A (en) * 2022-02-28 2022-05-24 山东威高手术机器人有限公司 Binocular endoscope with high dynamic range, system and imaging method
EP4369728A1 (en) * 2022-11-10 2024-05-15 Beijing Xiaomi Mobile Software Co., Ltd. Photographing method and device
CN117135468A (en) * 2023-02-21 2023-11-28 荣耀终端有限公司 Image processing method and electronic equipment
CN117135468B (en) * 2023-02-21 2024-06-07 荣耀终端有限公司 Image processing method and electronic equipment

Also Published As

Publication number Publication date
CN111986129B (en) 2024-03-19

Similar Documents

Publication Publication Date Title
CN111986129B (en) HDR image generation method, equipment and storage medium based on multi-shot image fusion
CN110428366B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108335279B (en) Image fusion and HDR imaging
US20200134787A1 (en) Image processing apparatus and method
US9613408B2 (en) High dynamic range image composition using multiple images
US8189960B2 (en) Image processing apparatus, image processing method, program and recording medium
KR101663227B1 (en) Method and apparatus for processing image
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108717530B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
JP2009506688A (en) Image segmentation method and image segmentation system
Ko et al. Artifact-free low-light video enhancement using temporal similarity and guide map
JP2009282979A (en) Image processor and image processing method
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN109413335B (en) Method and device for synthesizing HDR image by double exposure
CN109493283A (en) A kind of method that high dynamic range images ghost is eliminated
Lee et al. Image contrast enhancement using classified virtual exposure image fusion
CN111127303A (en) Background blurring method and device, terminal equipment and computer readable storage medium
CN111953893B (en) High dynamic range image generation method, terminal device and storage medium
CN110660090A (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN111932587A (en) Image processing method and device, electronic equipment and computer readable storage medium
US9020269B2 (en) Image processing device, image processing method, and recording medium
CN111986106A (en) High dynamic image reconstruction method based on neural network
WO2022066726A1 (en) Saliency based capture or image processing
CN110365897B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN108122218B (en) Image fusion method and device based on color space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant