WO2021077963A1 - Image fusion method and apparatus, electronic device, and readable storage medium - Google Patents
Image fusion method and apparatus, electronic device, and readable storage medium
- Publication number
- WO2021077963A1 (PCT/CN2020/116487)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- processed
- brightness
- frame
- supplementary
- Prior art date
Classifications
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
- G06T2207/10016—Video; Image sequence
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20221—Image fusion; Image merging
Definitions
- This application relates to the field of image processing technology, and in particular to an image fusion method and apparatus, an electronic device, and a readable storage medium.
- Multi-exposure high-dynamic-range (HDR) synthesis means that a camera simultaneously or continuously captures a group of images with different exposure parameters; a typical shooting strategy is to capture an overexposed image, an underexposed image, and a normally exposed image, and then fuse the captured images with an algorithm to obtain an image with a wider dynamic range.
- However, in practice, because the images to be fused contain image information at multiple brightness levels, the fused image is prone to unnatural brightness transitions.
- The purpose of this application is to solve at least one of the above technical defects, in particular the defect that the final fused image is prone to unnatural brightness transitions.
- In a first aspect, an image fusion method is provided, which includes: acquiring at least two RAW images to be processed of the same scene; using one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and separately determining the brightness relationship between each supplementary frame and the reference frame; for each supplementary frame, linearly adjusting the brightness of the pixels in the supplementary frame based on the brightness relationship to obtain an adjusted supplementary frame; and fusing the adjusted supplementary frames and the reference frame to obtain a fused image.
- In an optional embodiment of the first aspect, the method further includes: acquiring a weight feature map of each RAW image to be processed, where the weight feature map includes the weight value of each pixel in the RAW image to be processed. Performing image fusion on the adjusted supplementary frames and the reference frame then includes: fusing the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
- In an optional embodiment of the first aspect, when the RAW images to be processed are high dynamic range images, acquiring the weight feature map of each RAW image to be processed includes: converting each RAW image to be processed into a low dynamic range image to obtain converted RAW images to be processed; and inputting each converted RAW image to be processed into a neural network to obtain the weight feature map of each RAW image to be processed.
- In an optional embodiment of the first aspect, the neural network is obtained by training in the following manner: acquiring a training sample set, where the training sample set includes training images corresponding to at least one scene, the training images of each scene are at least two images, and for the at least two images of each scene, one image is used as a sample reference frame and the other images are used as sample supplementary frames; performing linear brightness transformation on each training image to obtain transformed training images, and training an initial network on the transformed training images until the loss function of the initial network converges, at which point the initial network is determined to be the neural network. The initial network is a neural network that takes an image as input and outputs the weight feature map of the image; the loss function characterizes the error between the sample fusion image corresponding to a scene and the sample reference frame of that scene; and the sample fusion image is obtained by fusing the transformed training images of the scene according to the weight feature maps of the training images of that scene.
- In an optional embodiment of the first aspect, acquiring the training sample set includes: acquiring an initial training sample set, where the initial training sample set includes initial images corresponding to at least one scene, and the initial images of each scene are at least two images; when the initial images are low dynamic range images, using the initial images as the training images of each scene; and when the initial images are high dynamic range images, converting each initial image into a corresponding low dynamic range image and using the low dynamic range image corresponding to each initial image of each scene as the training images of that scene.
- In an optional embodiment of the first aspect, separately determining the brightness relationship between each supplementary frame and the reference frame includes: acquiring the exposure parameters of each RAW image to be processed; and, for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the exposure parameters of the supplementary frame.
- In an optional embodiment of the first aspect, the exposure parameters include aperture size, shutter time, and sensor gain, and determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the supplementary frame includes: determining the correlation between the supplementary frame and the reference frame corresponding to each exposure parameter; and determining the brightness relationship between the supplementary frame and the reference frame according to the correlations corresponding to the exposure parameters.
- In an optional embodiment of the first aspect, if the exposure parameter is the aperture size, the correlation is the ratio of the square of the aperture size of the supplementary frame to the square of the aperture size of the reference frame; if the exposure parameter is the shutter time, the correlation is the ratio of the shutter time of the reference frame to the shutter time of the supplementary frame; and if the exposure parameter is the sensor gain, the correlation is the ratio of the sensor gain of the supplementary frame to the sensor gain of the reference frame.
- In an optional embodiment of the first aspect, separately determining the brightness relationship between each supplementary frame and the reference frame includes: determining a weight mask based on the brightness of each pixel in the reference frame; adjusting the brightness of each pixel in each RAW image to be processed based on the weight mask; for each RAW image to be processed, determining the brightness of the RAW image to be processed based on the adjusted brightness of each of its pixels; and, for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
- In a second aspect, an image fusion apparatus is provided, which includes: an image acquisition module, configured to acquire at least two RAW images to be processed of the same scene; a brightness relationship determination module, configured to use one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and to separately determine the brightness relationship between each supplementary frame and the reference frame; a brightness adjustment module, configured to linearly adjust, for each supplementary frame, the brightness of the pixels in the supplementary frame based on the brightness relationship to obtain an adjusted supplementary frame; and an image fusion module, configured to fuse the adjusted supplementary frames and the reference frame to obtain a fused image.
- In an optional embodiment of the second aspect, the apparatus further includes a weight feature map acquisition module, configured to acquire a weight feature map of each RAW image to be processed, where the weight feature map includes the weight value of each pixel in the RAW image to be processed. When performing image fusion on the adjusted supplementary frames and the reference frame, the image fusion module is specifically configured to fuse the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
- In an optional embodiment of the second aspect, when acquiring the weight feature map of each RAW image to be processed, the weight feature map acquisition module is specifically configured to: when the RAW images to be processed are high dynamic range images, convert each RAW image to be processed into a low dynamic range image to obtain converted RAW images to be processed; and input each converted RAW image to be processed into the neural network to obtain the weight feature map of each RAW image to be processed.
- In an optional embodiment of the second aspect, the apparatus further includes a training module, where the training module obtains the neural network by training in the following manner: acquiring a training sample set, where the training sample set includes training images corresponding to at least one scene, the training images of each scene are at least two images, and for the at least two images of each scene, one image is used as a sample reference frame and the other images are used as sample supplementary frames; performing linear brightness transformation on each training image to obtain transformed training images, and training an initial network on the transformed training images until the loss function of the initial network converges, at which point the initial network is determined to be the neural network. The initial network is a neural network that takes an image as input and outputs the weight feature map of the image; the loss function characterizes the error between the sample fusion image corresponding to a scene and the sample reference frame of that scene; and the sample fusion image is obtained by fusing the transformed training images of the scene according to the weight feature maps of the training images of that scene.
- In an optional embodiment of the second aspect, when obtaining the training sample set, the training module is specifically configured to: obtain an initial training sample set, where the initial training sample set includes initial images corresponding to at least one scene, and the initial images of each scene are at least two images; when the initial images are low dynamic range images, use the initial images as the training images of each scene; and when the initial images are high dynamic range images, convert each initial image into a corresponding low dynamic range image and use the low dynamic range image corresponding to each initial image of each scene as the training images of that scene.
- In an optional embodiment of the second aspect, when separately determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module is specifically configured to: acquire the exposure parameters of each RAW image to be processed; and, for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the exposure parameters of the supplementary frame.
- In an optional embodiment of the second aspect, the exposure parameters include aperture size, shutter time, and sensor gain, and determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the supplementary frame includes: determining the correlation between the supplementary frame and the reference frame corresponding to each exposure parameter; and determining the brightness relationship between the supplementary frame and the reference frame according to the correlations corresponding to the exposure parameters. If the exposure parameter is the aperture size, the correlation is the ratio of the square of the aperture size of the supplementary frame to the square of the aperture size of the reference frame; if the exposure parameter is the shutter time, the correlation is the ratio of the shutter time of the reference frame to the shutter time of the supplementary frame; and if the exposure parameter is the sensor gain, the correlation is the ratio of the sensor gain of the supplementary frame to the sensor gain of the reference frame.
- In an optional embodiment of the second aspect, when separately determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module is specifically configured to: determine a weight mask based on the brightness of each pixel in the reference frame; adjust the brightness of each pixel in each RAW image to be processed based on the weight mask; for each RAW image to be processed, determine the brightness of the RAW image to be processed based on the adjusted brightness of each of its pixels; and, for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
- In a third aspect, an electronic device is provided, which includes a processor and a memory, where the memory is configured to store machine-readable instructions that, when executed by the processor, cause the processor to perform any one of the methods of the first aspect.
- In a fourth aspect, a computer-readable storage medium is provided, which is used to store computer instructions that, when run on a computer, enable the computer to execute any one of the methods of the first aspect.
- In a fifth aspect, a computer program is provided, which includes computer-readable code that, when run on an electronic device, causes the electronic device to execute any one of the methods of the first aspect.
- In the embodiments of this application, after the RAW images to be processed are acquired, the brightness of each supplementary frame can be linearly adjusted based on the brightness relationship between that supplementary frame and the reference frame, and the adjusted supplementary frames are fused with the reference frame to obtain a fused image. Because the RAW images to be processed have a linear brightness relationship, each supplementary frame can be linearly transformed according to the brightness of the reference frame, so that the difference between the brightness of each adjusted supplementary frame and the brightness of the reference frame is further reduced. The brightness of the processed RAW images to be processed is then almost equal, and the pixel values of the resulting fused image still retain a linear relationship with the actual object brightness, which effectively solves the problem that the resulting image is prone to unnatural brightness transitions caused by the presence of multiple brightness levels in the images.
- FIG. 1 is a schematic flowchart of an image fusion method provided by an embodiment of this application;
- FIG. 2a is a schematic diagram of a reference frame provided by an embodiment of this application;
- FIG. 2b is a schematic diagram of a weight mask provided by an embodiment of this application;
- FIG. 3a is a schematic diagram of a RAW image to be processed provided by an embodiment of this application;
- FIG. 3b is a schematic diagram of another RAW image to be processed provided by an embodiment of this application;
- FIG. 4 is a schematic diagram of the complete flow of an image fusion method provided by an embodiment of this application;
- FIG. 5 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of this application;
- FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
- An embodiment of the present application provides an image fusion method. As shown in FIG. 1, the method includes:
- Step S101: Acquire at least two RAW images to be processed of the same scene.
- A RAW image, also called an original image, is the raw data produced when the image sensor of a digital camera, scanner, terminal camera, or similar device converts the captured light signal into a digital signal. A RAW image has not lost information through image processing (such as sharpening or increasing color contrast) or compression, so a linear brightness relationship holds between RAW images, for example between adjacent frame images in a video.
- RAW images to be processed of the same scene means that the image content of the different RAW images to be processed is basically the same, i.e., the difference in image content between the RAW images to be processed is less than a certain threshold (the degree of difference between the images satisfies a preset condition). For example, if a user shoots two RAW images at the same place and in the same pose, the two RAW images are merely taken at different times, but the image content they include is almost the same (the degree of difference satisfies the preset condition), so the two RAW images are images of the same scene. The manner of obtaining the RAW images to be processed of the same scene is not limited in the embodiments of this application: for example, several adjacent frames of the same video can be used as the RAW images to be processed, images whose collection interval is less than a set interval can be used as the RAW images to be processed, or images obtained through continuous (burst) shooting can be used as the RAW images to be processed.
- Step S102: Use one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and separately determine the brightness relationship between each supplementary frame and the reference frame.
- The method of selecting the reference frame from the RAW images to be processed is not limited in the embodiments of the present application. One of the RAW images to be processed can be selected arbitrarily as the reference frame, or a set condition can be used to determine the reference frame, with the RAW image to be processed that satisfies the set condition used as the reference frame. The set condition can be, for example, that the exposure parameters are within a specific range. If multiple RAW images to be processed have exposure parameters within the specific range, any frame that meets the condition can be selected as the reference frame, or the RAW image to be processed whose exposure parameters are closest to preset parameters can be used as the reference frame. After an image that meets the set condition is selected as the reference frame, the other RAW images to be processed are used as supplementary frames.
- For example, suppose there are 10 RAW images to be processed (image 1, image 2, ..., image 10), the set condition for selecting the reference frame is that the exposure parameters of the image satisfy the set condition, and image 2 satisfies it; then image 2 is used as the reference frame, and image 1, image 3, ..., and image 10 are used as supplementary frames.
- The image information of an image may include brightness, and after the reference frame and the supplementary frames are determined, the brightness relationship between each supplementary frame and the reference frame can be determined separately. The brightness relationship can be expressed in multiple ways; for example, the brightness relationship between each supplementary frame and the reference frame can be expressed as a ratio. For instance, when the brightness of a supplementary frame is equal to the brightness of the reference frame, the brightness relationship between the supplementary frame and the reference frame is 1:1; when the brightness of the supplementary frame is 1/2 of the brightness of the reference frame, the brightness relationship between the supplementary frame and the reference frame is 1:2.
- Step S103: For each supplementary frame, linearly adjust the brightness of the pixels in the supplementary frame based on the brightness relationship to obtain an adjusted supplementary frame.
- After the brightness relationship between each supplementary frame and the reference frame is determined, the brightness of each supplementary frame can be linearly adjusted according to that relationship; the specific implementation of the adjustment is not limited in the embodiments of the present application.
- For example, suppose the supplementary frames include supplementary frame 1 and supplementary frame 2, the brightness relationship between supplementary frame 1 and the reference frame is 1:4, and the brightness relationship between supplementary frame 2 and the reference frame is 1:2. The brightness of the pixels in supplementary frame 1 can be linearly adjusted based on the brightness relationship between supplementary frame 1 and the reference frame, for example by multiplying the brightness of every pixel in supplementary frame 1 by 4, to obtain the adjusted supplementary frame 1. Likewise, the brightness of the pixels in supplementary frame 2 is linearly adjusted based on the brightness relationship between supplementary frame 2 and the reference frame, for example by multiplying the brightness of every pixel in supplementary frame 2 by 2, to obtain the adjusted supplementary frame 2.
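- The adjustment itself is a single multiplication per pixel. Below is a minimal sketch of this step, assuming the RAW frames are held as NumPy arrays; the function name and the sample values are illustrative, not taken from the patent:

```python
import numpy as np

def adjust_supplementary_frame(supp_raw: np.ndarray, factor: float) -> np.ndarray:
    """Linearly scale the brightness of a supplementary RAW frame.

    factor is the multiplier implied by the brightness relationship,
    e.g. 4 when the supplementary:reference relationship is 1:4.
    """
    return supp_raw.astype(np.float64) * factor

# Illustrative usage with the 1:4 and 1:2 relationships from the example above.
supp1 = np.array([[10, 20], [30, 40]], dtype=np.float64)
supp2 = np.array([[50, 60], [70, 80]], dtype=np.float64)
adjusted1 = adjust_supplementary_frame(supp1, 4.0)  # every pixel multiplied by 4
adjusted2 = adjust_supplementary_frame(supp2, 2.0)  # every pixel multiplied by 2
```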
- Step S104: Fuse the adjusted supplementary frames and the reference frame to obtain a fused image.
- In an optional embodiment, the image sizes of the adjusted supplementary frames and the reference frame are the same. If there are images of different sizes, they can be processed so that all images have the same size. In practical applications, if images of different sizes are present when the RAW images to be processed are acquired, those images can also be preprocessed first before the subsequent steps are performed; the manner of processing the image size is not limited in the embodiments of the present application.
- After each adjusted supplementary frame is obtained, the adjusted supplementary frames and the reference frame can be fused to obtain a fused image; the specific fusion manner is not limited in the embodiments of this application.
- In the embodiments of this application, each supplementary frame can be linearly transformed toward the brightness of the reference frame, so that the difference between the brightness of each adjusted supplementary frame and the brightness of the reference frame is further reduced and the brightness of the processed RAW images to be processed is almost equal. This effectively solves the problem that the resulting image is prone to unnatural brightness transitions due to the presence of multiple brightness levels in the images, and ensures that the pixel values of the resulting fused image still retain a linear relationship with the actual object brightness.
- In an optional embodiment, the method further includes: acquiring a weight feature map of each RAW image to be processed, where the weight feature map includes the weight value of each pixel in the RAW image to be processed; and fusing the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed to obtain the fused image.
- The weight feature map characterizes the weight of each pixel in a RAW image to be processed; that is, the weight of every pixel in the RAW image can be obtained from its weight feature map. When acquiring the weight feature maps, each RAW image to be processed can be input into a neural network to obtain the weight feature map of each RAW image to be processed.
- The RAW images to be processed can then be fused according to their corresponding weight feature maps to obtain the fused image. The specific fusion method is not limited in the embodiments of the present application; for example, alpha fusion, pyramid fusion, or gradient fusion can be used.
- For example, suppose the RAW images to be processed include a reference frame, supplementary frame 1, and supplementary frame 2. The reference frame, supplementary frame 1, and supplementary frame 2 can be input into the neural network separately to obtain the weight feature map of the reference frame, the weight feature map of supplementary frame 1, and the weight feature map of supplementary frame 2. Then the brightness relationship between supplementary frame 1 and the reference frame and the brightness relationship between supplementary frame 2 and the reference frame can be determined; the brightness of the pixels in supplementary frame 1 is adjusted based on the brightness relationship between supplementary frame 1 and the reference frame to obtain the adjusted supplementary frame 1, and the brightness of the pixels in supplementary frame 2 is adjusted based on the brightness relationship between supplementary frame 2 and the reference frame to obtain the adjusted supplementary frame 2. Finally, based on the weight feature map of the reference frame, the weight feature map of supplementary frame 1, and the weight feature map of supplementary frame 2, the reference frame, the adjusted supplementary frame 1, and the adjusted supplementary frame 2 are fused, as sketched below.
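- The patent does not commit to a particular fusion operator (alpha fusion, pyramid fusion, and gradient fusion are all mentioned as options). The sketch below shows the simplest of these, a per-pixel weighted (alpha-style) average driven by the network's weight feature maps; all names are illustrative:

```python
import numpy as np

def fuse_with_weight_maps(frames, weight_maps, eps=1e-8):
    """Fuse aligned frames (reference plus adjusted supplementary frames)
    using per-pixel weight feature maps.

    frames:      list of HxW float arrays
    weight_maps: list of HxW float arrays output by the neural network
    Returns the per-pixel weighted average of the frames.
    """
    stacked = np.stack(frames).astype(np.float64)       # N x H x W
    weights = np.stack(weight_maps).astype(np.float64)  # N x H x W
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)  # normalize per pixel
    return (stacked * weights).sum(axis=0)              # H x W fused image
```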
- In practical applications, moving objects may be at different positions in different images, so a moving object may appear as a semi-transparent artifact, called a "ghost", in the fused image.
- Because the weight feature map comes from the semantic recognition of the neural network, reasonable weights can be assigned to regions that are difficult to judge and process with traditional methods, such as motion regions, so that the fused image is free of ghosts.
- In an optional embodiment, the neural network is obtained by training in the following manner: acquiring a training sample set, where the training sample set includes training images corresponding to at least one scene, the training images of each scene are at least two images, and for the at least two images of each scene, one image is used as a sample reference frame and the other images are used as sample supplementary frames; performing linear brightness transformation on each training image to obtain transformed training images, and training an initial network on the transformed training images until the loss function of the initial network converges, at which point the initial network is determined to be the neural network. The initial network is a neural network that takes an image as input and outputs the weight feature map of the image; the loss function characterizes the error between the sample fusion image corresponding to a scene and the sample reference frame of that scene; and the sample fusion image is obtained by fusing the transformed training images of the scene according to the weight feature maps of the training images of that scene.
- The initial network may be a fully convolutional network (FCN), a convolutional neural network (CNN), a deep neural network (DNN), or the like; the embodiments of this application do not limit the type of network. The network structure of the initial network can be designed according to the computer vision task, or can adopt at least part of an existing network structure, such as a deep residual network (ResNet) or a dense convolutional network (DenseNet); the embodiments of the present application do not limit the network structure of the initial network.
- The training images in the training sample set are the sample data used to train the neural network. The training images correspond to at least one scene, and the training images of each scene are at least two images; for the at least two images of each scene, one image is selected as the sample reference frame, and the other images are used as sample supplementary frames. The method of selecting the sample reference frame is not limited in the embodiments of this application: one of the training images can be selected arbitrarily as the sample reference frame, or a training image whose image information satisfies set conditions can be used as the sample reference frame. The manner of obtaining training images of the same scene is likewise not limited; for example, several adjacent frames of the same video can be selected as the training images of the same scene.
- After the training sample set is obtained, linear brightness transformation can be performed on each training image to obtain the transformed training images, and the initial network can be trained on the transformed training images until the loss function of the initial network converges; the initial network at the point of convergence is determined to be the neural network. Here the initial network is, for example, a fully convolutional neural network that takes images as input and outputs image weight feature maps, and the linear brightness transformation can convert the training images into images with a reduced dynamic range.
- During training, the transformed training images in the training sample set can be input into the initial network separately to obtain the weight feature map of each training image. For each scene, the transformed training images of the scene are fused according to these weight feature maps to obtain a sample fusion image, and it is judged whether the error between the sample fusion image and the sample reference frame of the scene meets the condition (that is, whether the loss function value computed from the sample fusion image and the sample reference frame converges). If the condition is not met, the parameters of the initial network are adjusted, the training images are input into the initial network again to obtain new weight feature maps, the transformed training images of the scene are fused again to obtain a new sample fusion image, and the error between the current sample fusion image and the reference frame of the scene is judged again. This is repeated until the error between the sample fusion image corresponding to each scene and the sample reference frame of that scene meets the condition, as sketched below.
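- Below is a sketch of this training loop in PyTorch. It is an assumption-laden illustration rather than the patent's exact procedure: the patent only specifies that the loss characterizes the error between the sample fusion image and the sample reference frame and that training runs until the loss converges, so the mean-squared error, the softmax weight normalization, and the fixed epoch count used here are all illustrative choices:

```python
import torch

def train_weight_network(model, optimizer, scenes, num_epochs=10):
    """Train a network that maps an image to its weight feature map.

    scenes yields (transformed, reference) pairs per scene:
      transformed: N x 1 x H x W brightness-transformed training images
      reference:   1 x H x W sample reference frame of the scene
    """
    loss_fn = torch.nn.MSELoss()  # assumed form of the "error"
    for _ in range(num_epochs):
        for transformed, reference in scenes:
            weight_maps = model(transformed)             # N x 1 x H x W
            weights = torch.softmax(weight_maps, dim=0)  # normalize across frames (assumed)
            fused = (weights * transformed).sum(dim=0)   # sample fusion image, 1 x H x W
            loss = loss_fn(fused, reference)             # error vs. sample reference frame
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```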
- In an optional embodiment, obtaining the training sample set includes: obtaining an initial training sample set, where the initial training sample set includes initial images corresponding to at least one scene, and the initial images of each scene are at least two images; when the initial images are low dynamic range images, using the initial images as the training images of each scene; and when the initial images are high dynamic range images, converting each initial image into a corresponding low dynamic range image and using the low dynamic range image corresponding to each initial image of each scene as the training images of that scene.
- In practical applications, an initial training sample set can be obtained, where the initial images in the initial training sample set also correspond to at least one scene and the initial images of each scene are also at least two images. If the acquired initial images are low dynamic range images, they are used directly as the training images of each scene. If the acquired initial images are high dynamic range images, each high dynamic range image can be converted into a corresponding low dynamic range image, and the low dynamic range images corresponding to the high dynamic range images of each scene are then used as the training images of that scene; the manner of converting a high dynamic range image into a low dynamic range image is not limited in the embodiments of the present application.
- In the embodiments of this application, the corresponding weight feature map is obtained from the neural network, which can assign reasonable weights to regions that are difficult to judge and process with traditional methods, such as motion regions, so that the fused image is free of "ghosts".
- In an optional embodiment, when the RAW images to be processed are high dynamic range images, acquiring the weight feature map of each RAW image to be processed includes: converting each RAW image to be processed into a low dynamic range image, and inputting each converted RAW image to be processed into the neural network to obtain the weight feature map of each RAW image to be processed.
- The neural network is trained on the training images in the training sample set, and those training images are low dynamic range images; that is to say, the input images of the trained neural network should also be low dynamic range images. Therefore, if an acquired RAW image to be processed is a high dynamic range image, it can first be converted into a low dynamic range image, and each converted RAW image to be processed is then input into the neural network to obtain its weight feature map.
- Correspondingly, for each supplementary frame, the brightness of the pixels in the converted supplementary frame can be adjusted to obtain the adjusted supplementary frame; then, based on the output weight feature map of each RAW image to be processed, the adjusted supplementary frames and the converted reference frame are fused to obtain the fused image.
- In an optional embodiment, separately determining the brightness relationship between each supplementary frame and the reference frame includes: acquiring the exposure parameters of each RAW image to be processed; and, for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the exposure parameters of the supplementary frame.
- In practical applications, the exposure parameters of each RAW image to be processed can be acquired, where the exposure parameters are those set when each RAW image to be processed was captured; that is, if different exposure parameters were set when different RAW images to be processed were captured, the exposure parameters of the different RAW images to be processed differ. The method of obtaining the exposure parameters of each RAW image to be processed is not limited in the embodiments of the present application; for example, they can be obtained through an algorithm for obtaining exposure parameters. Further, for each supplementary frame, the brightness relationship between the supplementary frame and the reference frame can be obtained from the acquired exposure parameters of the supplementary frame and the exposure parameters of the reference frame.
- In an optional embodiment, the exposure parameters include aperture size, shutter time, and sensor gain, and determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the supplementary frame includes: determining the correlation between the supplementary frame and the reference frame corresponding to each exposure parameter; and determining the brightness relationship between the supplementary frame and the reference frame according to those correlations.
- In practical applications, the exposure parameters of each RAW image to be processed may include the aperture size, shutter time, and sensor gain set when the RAW image to be processed was captured. Further, for each supplementary frame, the correlation between the supplementary frame and the reference frame corresponding to each exposure parameter can be determined; that is, for each supplementary frame there may be a correlation between the aperture size of the supplementary frame and the aperture size of the reference frame, a correlation between their shutter times, and a correlation between their sensor gains. The manner in which the correlation of each exposure parameter is expressed is not limited in the embodiments of the present application; for example, the correlation corresponding to the aperture size can be denoted R_{aperture size}, the correlation corresponding to the shutter time R_{shutter time}, and the correlation corresponding to the sensor gain R_{sensor gain}.
- After the correlations between the supplementary frame and the reference frame corresponding to each type of exposure parameter are determined, the brightness relationship between the supplementary frame and the reference frame can be determined from them; the specific implementation of this determination is not limited in the embodiments of the present application.
- In an optional embodiment, if the exposure parameter is the aperture size, the correlation is the ratio of the square of the aperture size of the supplementary frame to the square of the aperture size of the reference frame; if the exposure parameter is the shutter time, the correlation is the ratio of the shutter time of the reference frame to the shutter time of the supplementary frame; and if the exposure parameter is the sensor gain, the correlation is the ratio of the sensor gain of the supplementary frame to the sensor gain of the reference frame.
- Since the correlation for each exposure parameter is expressed as a ratio, the product of the correlations between the supplementary frame and the reference frame corresponding to the exposure parameters can be used as the brightness relationship between the supplementary frame and the reference frame.
- For example, denote the aperture size of reference frame a by fa, its shutter time by sa, and its sensor gain by ga, and denote the aperture size of supplementary frame b by fb, its shutter time by sb, and its sensor gain by gb; the brightness relationship between supplementary frame b and reference frame a can then be computed from the ratios fb^2/fa^2, sa/sb, and gb/ga, as sketched below.
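- Putting the three ratios together, a sketch of the computation follows; the variable names mirror the notation above, and the formula reproduces the ratios exactly as stated in the text (aperture and gain as supplementary over reference, shutter time as reference over supplementary):

```python
def brightness_relationship(fa, sa, ga, fb, sb, gb):
    """Brightness relationship between supplementary frame b and reference frame a,
    taken as the product of the per-parameter correlations stated above."""
    r_aperture = (fb ** 2) / (fa ** 2)  # square of aperture size: supplementary / reference
    r_shutter = sa / sb                 # shutter time: reference / supplementary
    r_gain = gb / ga                    # sensor gain: supplementary / reference
    return r_aperture * r_shutter * r_gain
```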
- In an optional embodiment, separately determining the brightness relationship between each supplementary frame and the reference frame includes: determining a weight mask based on the brightness of each pixel in the reference frame; adjusting the brightness of each pixel in each RAW image to be processed based on the weight mask; for each RAW image to be processed, determining the brightness of the RAW image to be processed based on the adjusted brightness of each of its pixels; and, for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
- A mask is a pre-made image that marks a region of interest; a mask can be used to block all or part of an image to be processed, so that the blocked part does not participate in processing, or so that only the masked region is processed. The mask in the embodiments of the present application includes the brightness weight of the corresponding pixel in each RAW image to be processed, so the mask is referred to as a weight mask.
- The weight mask can be determined according to the brightness of each pixel in the reference frame: if the brightness of a pixel in the reference frame meets a preset condition, the pixel value of the corresponding pixel in the weight mask is 1; if the brightness of the pixel does not meet the preset condition, the pixel value of the corresponding pixel in the weight mask is 0. The preset condition is not limited in the embodiments of the present application; for example, the condition may be that the brightness of the pixel is between 20% and 80% (including 20% and 80%) of a preset saturation value.
- For example, suppose the reference frame is a 2*2 image (4 pixels) as shown in FIG. 2a, and the brightness of the first pixel and the fourth pixel is between 20% and 80% of the preset saturation value, while the brightness of the second pixel and the third pixel is not; then the pixel value corresponding to the first pixel and the fourth pixel in the weight mask is 1, and the pixel value corresponding to the second pixel and the third pixel is 0, as shown in FIG. 2b. A sketch of this mask construction is given below.
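- The sketch below assumes the preset condition is the 20%-80% band described above; the saturation value of 250 in the usage lines is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def weight_mask_from_reference(reference: np.ndarray, saturation: float,
                               low: float = 0.2, high: float = 0.8) -> np.ndarray:
    """Mask pixel is 1 where the reference brightness lies between low*saturation
    and high*saturation (inclusive), and 0 elsewhere."""
    return ((reference >= low * saturation) &
            (reference <= high * saturation)).astype(np.float64)

# Illustrative 2*2 reference frame with an assumed saturation value of 250,
# so the valid band is [50, 200]: only the first and fourth pixels qualify.
ref = np.array([[50, 210], [220, 60]], dtype=np.float64)
mask = weight_mask_from_reference(ref, saturation=250.0)
# mask == [[1, 0], [0, 1]]
```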
- After the weight mask is determined, the brightness of each pixel in each RAW image to be processed can be adjusted according to the weight mask, i.e., an element-wise operation is performed between the brightness of each pixel in the RAW image to be processed and the weight mask. If the pixel value in the weight mask corresponding to a pixel of the RAW image to be processed is 1, the brightness of that pixel keeps its original value; if the corresponding pixel value in the weight mask is 0, the brightness of that pixel becomes 0.
- For example, suppose the RAW image to be processed is a 2*2 image (4 pixels) as shown in FIG. 3a, where the brightness of the first pixel and the fourth pixel is 50 and the brightness of the second pixel and the third pixel is 100, and the weight mask is as shown in FIG. 2b. The first pixel and the fourth pixel of the RAW image to be processed correspond to pixels with value 1 in the weight mask, so their brightness keeps the original value (i.e., 50); the second pixel and the third pixel correspond to pixels with value 0 in the weight mask, so their brightness becomes 0. The adjusted brightness of each pixel in the RAW image to be processed is shown in FIG. 3b.
- Further, the brightness of each RAW image to be processed may be determined based on the adjusted brightness of each pixel in the RAW image to be processed; the specific implementation of this determination is not limited in the embodiments of the present application. For example, a weighted average of the adjusted pixel brightnesses can be taken as the brightness of the RAW image to be processed. The specific formula is as follows:
- L(X) = sum(X * mask) / sum(mask)
- where L(X) represents the brightness of the RAW image X to be processed, X represents the RAW image to be processed, mask represents the weight mask, and X * mask represents the element-wise adjustment of the brightness of each pixel in the RAW image X according to the weight mask.
- For example, suppose the RAW image G to be processed is a 2*2 image (4 pixels) whose adjusted brightness is 50 for the first pixel and the fourth pixel and 0 for the second pixel and the third pixel; then the brightness of G is (50 + 50) / 2 = 50.
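- A direct translation of this formula into code, reusing the 2*2 example (the masked-out second and third pixels do not contribute, and the denominator counts only the pixels the mask keeps):

```python
import numpy as np

def frame_brightness(raw: np.ndarray, mask: np.ndarray) -> float:
    """L(X) = sum(X * mask) / sum(mask): weighted-average brightness over
    the pixels that the weight mask marks as valid."""
    return float((raw * mask).sum() / mask.sum())

# RAW image G from the text: original brightness 50, 100, 100, 50 with the
# mask from FIG. 2b, so the adjusted brightness is 50, 0, 0, 50.
g = np.array([[50, 100], [100, 50]], dtype=np.float64)
mask = np.array([[1, 0], [0, 1]], dtype=np.float64)
print(frame_brightness(g, mask))  # (50 + 50) / 2 = 50.0
```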
- After the brightness of each RAW image to be processed is determined, the brightness relationship between each supplementary frame and the reference frame can be determined based on the determined brightness of the supplementary frame and the determined brightness of the reference frame; how the brightness relationship is determined is not limited in the embodiments of the present application. For example, the ratio between the brightness of each supplementary frame and the brightness of the reference frame can be used directly as the brightness relationship between that supplementary frame and the reference frame. For instance, if the brightness of a supplementary frame is 50 and the brightness of the reference frame is 100, the ratio between the supplementary frame and the reference frame is 1:2, and the brightness relationship between the supplementary frame and the reference frame is 1:2.
- In an example of training data preparation, each HDR video corresponds to a scene, and several adjacent frames (F1, F2, ..., Fn) are randomly extracted from each HDR video as the initial training images of that scene, with F1 selected from F1, F2, ..., Fn as the sample reference frame. Linear brightness transformation is then performed on the n frames (F1, F2, ..., Fn) to obtain n frames of low dynamic range images (LF1, LF2, ..., LFn), and inverse conversion is performed on the n frames of low dynamic range images to obtain n frames of images to be fused (FF1, FF2, ..., FFn).
- In an example of the overall flow (see FIG. 4), N RAW images to be processed are acquired, and the N RAW images to be processed are low dynamic range images (if they are high dynamic range images, they are first converted into low dynamic range images). The N RAW images to be processed are input into the neural network to obtain the weight feature map corresponding to each RAW image to be processed (weight feature map 1, weight feature map 2, ..., weight feature map N). A RAW image to be processed whose exposure parameters meet the set requirement is selected from the N RAW images to be processed as the reference frame, and the other RAW images to be processed are used as supplementary frames (supplementary frame 1, ..., supplementary frame N-1). The brightness relationship between each supplementary frame and the reference frame is determined, and based on the determined brightness relationships the brightness of the pixels of each supplementary frame is adjusted to obtain the adjusted supplementary frames (adjusted supplementary frame 1, ..., adjusted supplementary frame N-1). Finally, based on the weight feature map of each RAW image to be processed, the adjusted supplementary frames and the reference frame are fused to obtain the fused image. The sketch below strings these steps together.
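- The following sketch combines the steps above. The per-parameter ratio product and the scaling direction (dividing by the supplementary-to-reference brightness factor so that, e.g., a 1:4 relationship multiplies the frame by 4) are assumptions consistent with the examples above, and weight_net stands in for the trained neural network:

```python
import numpy as np

def fuse_raw_burst(raw_frames, exposure_params, weight_net, ref_idx, eps=1e-8):
    """raw_frames:      list of HxW low-dynamic-range RAW arrays of one scene
    exposure_params: list of (aperture, shutter, gain) tuples, one per frame
    weight_net:      callable mapping an HxW array to its HxW weight feature map
    ref_idx:         index of the frame whose exposure meets the set condition
    """
    weight_maps = [weight_net(f) for f in raw_frames]
    fa, sa, ga = exposure_params[ref_idx]

    adjusted = []
    for i, frame in enumerate(raw_frames):
        frame = frame.astype(np.float64)
        if i != ref_idx:
            fb, sb, gb = exposure_params[i]
            # supplementary-to-reference brightness factor (product of the ratios above)
            rel = (fb ** 2 / fa ** 2) * (sa / sb) * (gb / ga)
            frame = frame / rel  # e.g. a 1:4 relationship multiplies by 4
        adjusted.append(frame)

    weights = np.stack(weight_maps).astype(np.float64)
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)
    return (np.stack(adjusted) * weights).sum(axis=0)  # fused RAW image
```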
- the image fusion system may include a RAW acquisition module, a neural network module, a software fusion module, and a post-processing module.
- The RAW acquisition module communicates with the sensor through interfaces provided by the operating system and the driver to issue the exposure strategy (that is, which exposure parameters are used to capture the RAW images to be processed) and to obtain RAW images to be processed with different exposure parameters. The RAW images to be processed with different exposure parameters are then input into the neural network module (which can run on a CPU (central processing unit), a GPU (graphics processing unit), an NPU (neural-network processing unit), a DSP (digital signal processor), etc.) to obtain the weight feature map of each RAW image to be processed. The software fusion module then performs fusion processing on the RAW images to be processed to obtain the fused image (that is, a RAW image with a high dynamic range), and the fused image is input into the post-processing module to obtain a visualized image.
- In another optional embodiment, the image fusion system can include a sensor interface, a memory interface, a neural network accelerator, a fusion processing module, an ISP (image signal processing) interface, and/or a post-processing module. The sensor interface is used for data communication with the image sensor; the communication can be direct, or indirect through the memory interface. After multiple frames of RAW images to be processed with different exposure parameters are obtained, they are input into the neural network accelerator to obtain the weight feature map of each RAW image to be processed; then each RAW image to be processed and its weight feature map are input into the fusion processing module to obtain a RAW image with a high dynamic range, which is input to the ISP through the ISP interface, or input to the post-processing module for further processing to obtain a visualized image.
- FIG. 5 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of the application.
- As shown in FIG. 5, the image fusion apparatus 60 may include: an image acquisition module 601, a brightness relationship determination module 602, a brightness adjustment module 603, and an image fusion module 604, where:
- the image acquisition module 601 is used to acquire at least two RAW images to be processed in the same scene;
- the brightness relationship determination module 602 is configured to use one of the at least two RAW images to be processed as a reference frame, and other images as supplementary frames, and respectively determine the brightness relationship between each supplementary frame and the reference frame;
- the brightness adjustment module 603 is configured to linearly adjust the brightness of pixels in the supplementary frame based on the brightness relationship for each supplementary frame to obtain an adjusted supplementary frame;
- the image fusion module 604 is used for fusing the adjusted supplementary frames and reference frames to obtain a fused image.
- In an optional embodiment, the apparatus further includes a weight feature map acquisition module 605, which is specifically configured to acquire the weight feature map of each RAW image to be processed, where the weight feature map includes the weight value of each pixel in the RAW image to be processed. When performing image fusion on the adjusted supplementary frames and the reference frame, the image fusion module is specifically configured to fuse the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
- In an optional embodiment, when acquiring the weight feature map of each RAW image to be processed, the weight feature map acquisition module is specifically configured to: when the RAW images to be processed are high dynamic range images, convert each RAW image to be processed into a low dynamic range image to obtain converted RAW images to be processed, and input each converted RAW image to be processed into the neural network to obtain the weight feature map of each RAW image to be processed.
- In an optional embodiment, the apparatus further includes a training module 606, where the training module 606 obtains the neural network by training in the following manner: acquiring a training sample set, where the training sample set includes training images corresponding to at least one scene, the training images of each scene are at least two images, and for the at least two images of each scene, one image is used as a sample reference frame and the other images are used as sample supplementary frames; performing linear brightness transformation on each training image to obtain transformed training images, and training an initial network on the transformed training images until the loss function of the initial network converges, at which point the initial network is determined to be the neural network. The initial network is a neural network that takes an image as input and outputs the weight feature map of the image; the loss function characterizes the error between the sample fusion image corresponding to a scene and the sample reference frame of that scene; and the sample fusion image is obtained by fusing the transformed training images of the scene according to the weight feature maps of the training images of that scene.
- In an optional embodiment, when obtaining the training sample set, the training module 606 is specifically configured to: obtain an initial training sample set, where the initial training sample set includes initial images corresponding to at least one scene, and the initial images of each scene are at least two images; when the initial images are low dynamic range images, use the initial images as the training images of each scene; and when the initial images are high dynamic range images, convert each initial image into a corresponding low dynamic range image and use the low dynamic range image corresponding to each initial image of each scene as the training images of that scene.
- In an optional embodiment, when separately determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module 602 is specifically configured to: acquire the exposure parameters of each RAW image to be processed; and, for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the exposure parameters of the supplementary frame.
- In an optional embodiment, the exposure parameters include aperture size, shutter time, and sensor gain, and determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the supplementary frame includes: determining the correlation between the supplementary frame and the reference frame corresponding to each exposure parameter; and determining the brightness relationship between the supplementary frame and the reference frame according to those correlations. If the exposure parameter is the aperture size, the correlation is the ratio of the square of the aperture size of the supplementary frame to the square of the aperture size of the reference frame; if the exposure parameter is the shutter time, the correlation is the ratio of the shutter time of the reference frame to the shutter time of the supplementary frame; and if the exposure parameter is the sensor gain, the correlation is the ratio of the sensor gain of the supplementary frame to the sensor gain of the reference frame.
- In an optional embodiment, when separately determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module 602 is specifically configured to: determine a weight mask based on the brightness of each pixel in the reference frame; adjust the brightness of each pixel in each RAW image to be processed based on the weight mask; for each RAW image to be processed, determine the brightness of the RAW image to be processed based on the adjusted brightness of each of its pixels; and, for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
- the image fusion apparatus of this embodiment can execute the image fusion method shown in the embodiment of this application, and its implementation principles are similar, and will not be repeated here.
- The electronic device 2000 shown in FIG. 6 includes a processor 2001 and a memory 2003, which are connected, for example, through a bus 2002.
- Optionally, the electronic device 2000 may further include a transceiver 2004. It should be noted that in practical applications the transceiver 2004 is not limited to one, and the structure of the electronic device 2000 does not constitute a limitation on the embodiments of the present application.
- the processor 2001 is applied in the embodiment of the present application, and is used to implement the functions of the modules shown in FIG. 5.
- the processor 2001 may be a CPU, a general-purpose processor, DSP, ASIC, FPGA or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
- the processor 2001 may also be a combination that implements computing functions, for example, includes a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
- the bus 2002 may include a path for transferring information between the above-mentioned components.
- the bus 2002 may be a PCI bus, an EISA bus, or the like.
- the bus 2002 can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 6, but it does not mean that there is only one bus or one type of bus.
- The memory 2003 can be a ROM or another type of static storage device that can store static information and instructions, a RAM or another type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
- the memory 2003 is used to store application program codes for executing the solutions of the present application, and is controlled by the processor 2001 to execute.
- the processor 2001 is configured to execute application program codes stored in the memory 2003 to implement the actions of the image fusion apparatus provided in the embodiment shown in FIG. 5.
- An embodiment of the application provides an electronic device. The electronic device in the embodiment of the application includes a processor and a memory, where the memory is configured to store machine-readable instructions that, when executed by the processor, cause the processor to perform the image fusion method described above.
- An embodiment of the present application provides a computer-readable storage medium, which is used to store computer instructions; when the computer instructions are run on a computer, the computer can execute the image fusion method described above.
- An embodiment of the present application provides a computer program, which includes computer-readable code; when the computer-readable code runs on an electronic device, the electronic device executes the image fusion method described above.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
An image fusion method and apparatus, an electronic device, and a readable storage medium. The method includes: acquiring at least two RAW images to be processed of the same scene; using one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and separately determining the brightness relationship between each supplementary frame and the reference frame; for each supplementary frame, linearly adjusting the brightness of the pixels in the supplementary frame based on the brightness relationship to obtain an adjusted supplementary frame; and fusing the adjusted supplementary frames and the reference frame to obtain a fused image. Since each adjusted supplementary frame is adjusted based on the brightness of the reference frame, the difference between the brightness of each adjusted supplementary frame and the brightness of the reference frame can be further reduced, which effectively solves the problem that the resulting image is prone to unnatural brightness transitions caused by the presence of multiple brightness levels in the images.
Description
This application claims priority to the Chinese patent application with application number 201911024851.9, filed with the Chinese Patent Office on October 25, 2019 and entitled "Image fusion method and apparatus, electronic device, and readable storage medium", the entire contents of which are incorporated herein by reference.
This application relates to the field of image processing technology, and in particular to an image fusion method and apparatus, an electronic device, and a readable storage medium.
Multi-exposure high-dynamic-range (HDR) synthesis means that a camera simultaneously or continuously captures a group of images with different exposure parameters; a typical shooting strategy is to capture an overexposed image, an underexposed image, and a normally exposed image, and then fuse the captured images with an algorithm to obtain an image with a wider dynamic range.
However, in practical applications, when fusing images with multiple exposures, the images to be fused contain image information at multiple brightness levels, so the fused image is prone to unnatural brightness transitions.
SUMMARY
The purpose of this application is to solve at least one of the above technical defects, in particular the defect that the final fused image is prone to unnatural brightness transitions.
In a first aspect, an image fusion method is provided, which includes:
acquiring at least two RAW images to be processed of the same scene;
using one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and separately determining the brightness relationship between each supplementary frame and the reference frame;
for each supplementary frame, linearly adjusting the brightness of the pixels in the supplementary frame based on the brightness relationship to obtain an adjusted supplementary frame;
fusing the adjusted supplementary frames and the reference frame to obtain a fused image.
In an optional embodiment of the first aspect, the method further includes:
acquiring a weight feature map of each RAW image to be processed, where the weight feature map includes the weight value of each pixel in the RAW image to be processed;
performing image fusion on the adjusted supplementary frames and the reference frame includes:
fusing the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
In an optional embodiment of the first aspect, when the RAW images to be processed are high dynamic range images, acquiring the weight feature map of each RAW image to be processed includes:
converting each RAW image to be processed into a low dynamic range image to obtain converted RAW images to be processed;
inputting each converted RAW image to be processed into a neural network to obtain the weight feature map of each RAW image to be processed.
In an optional embodiment of the first aspect, the neural network is obtained by training in the following manner:
acquiring a training sample set, where the training sample set includes training images corresponding to at least one scene, the training images of each scene are at least two images, and for the at least two images of each scene, one image is used as a sample reference frame and the other images are used as sample supplementary frames;
performing linear brightness transformation on each training image to obtain transformed training images, and training an initial network on the transformed training images until the loss function of the initial network converges, and determining the initial network at convergence to be the neural network;
where the initial network is a neural network that takes an image as input and outputs the weight feature map of the image, the loss function characterizes the error between the sample fusion image corresponding to the same scene and the sample reference frame, and the sample fusion image is obtained by fusing the transformed training images according to the weight feature maps of the training images corresponding to the same scene.
In an optional embodiment of the first aspect, acquiring the training sample set includes:
acquiring an initial training sample set, the initial training sample set including initial images corresponding to at least one scene, with at least two initial images per scene;
when the initial images are low dynamic range images, taking the initial images as the training images of each scene;
when the initial images are high dynamic range images, converting each initial image into a low dynamic range image corresponding to the initial image;
taking the low dynamic range images corresponding to the initial images of each scene as the training images of that scene.
In an optional embodiment of the first aspect, determining the brightness relationship between each supplementary frame and the reference frame includes:
acquiring the exposure parameters of each RAW image to be processed;
for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame.
In an optional embodiment of the first aspect, the exposure parameters include aperture size, shutter time, and sensor gain;
determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame includes:
determining an association relationship between the supplementary frame and the reference frame for each exposure parameter;
determining the brightness relationship between the supplementary frame and the reference frame according to the association relationships of the supplementary frame and the reference frame for the individual exposure parameters.
In an optional embodiment of the first aspect, if the exposure parameter is aperture size, the association relationship is the ratio of the square of the supplementary frame's aperture size to the square of the reference frame's aperture size;
if the exposure parameter is shutter time, the association relationship is the ratio of the reference frame's shutter time to the supplementary frame's shutter time;
if the exposure parameter is sensor gain, the association relationship is the ratio of the supplementary frame's sensor gain to the reference frame's sensor gain.
In an optional embodiment of the first aspect, determining the brightness relationship between each supplementary frame and the reference frame includes:
determining a weight mask based on the brightness of each pixel in the reference frame;
adjusting the brightness of the pixels in each RAW image to be processed based on the weight mask;
for each RAW image to be processed, determining the brightness of the RAW image to be processed based on the adjusted brightness of its pixels;
for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
In a second aspect, an image fusion apparatus is provided, the apparatus including:
an image acquisition module, configured to acquire at least two RAW images to be processed of the same scene;
a brightness relationship determination module, configured to take one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and to determine the brightness relationship between each supplementary frame and the reference frame;
a brightness adjustment module, configured to, for each supplementary frame, linearly adjust the brightness of the pixels in the supplementary frame based on the brightness relationship, to obtain an adjusted supplementary frame;
an image fusion module, configured to fuse the adjusted supplementary frames and the reference frame, to obtain a fused image.
In an optional embodiment of the second aspect, the apparatus further includes a weight feature map acquisition module, specifically configured to:
acquire a weight feature map of each RAW image to be processed, where the weight feature map includes a weight value for each pixel of the RAW image to be processed;
when fusing the adjusted supplementary frames and the reference frame, the image fusion module is specifically configured to:
fuse the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
In an optional embodiment of the second aspect, when acquiring the weight feature map of each RAW image to be processed, the weight feature map acquisition module is specifically configured to:
when the RAW images to be processed are high dynamic range images, convert each RAW image to be processed into a low dynamic range image, to obtain converted RAW images to be processed;
input each converted RAW image to be processed into a neural network, to obtain the weight feature map of each RAW image to be processed.
In an optional embodiment of the second aspect, the apparatus further includes a training module, where the training module obtains the neural network by training as follows:
acquiring a training sample set, the training sample set including training images corresponding to at least one scene, with at least two training images per scene; for the at least two images of each scene, one image is taken as a sample reference frame and the other images as sample supplementary frames;
performing a linear brightness transformation on each training image to obtain transformed training images, and training an initial network based on the transformed training images until the loss function of the initial network converges, the initial network at loss function convergence being determined as the neural network;
where the initial network is a neural network that takes an image as input and outputs the image's weight feature map; the loss function characterizes the error between a sample fused image and the sample reference frame corresponding to the same scene; and the sample fused image is obtained by fusing the transformed training images of the same scene according to the weight feature maps of those training images.
In an optional embodiment of the second aspect, when acquiring the training sample set, the training module is specifically configured to:
acquire an initial training sample set, the initial training sample set including initial images corresponding to at least one scene, with at least two initial images per scene;
when the initial images are low dynamic range images, take the initial images as the training images of each scene;
when the initial images are high dynamic range images, convert each initial image into a low dynamic range image corresponding to the initial image;
take the low dynamic range images corresponding to the initial images of each scene as the training images of that scene.
In an optional embodiment of the second aspect, when determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module is specifically configured to:
acquire the exposure parameters of each RAW image to be processed;
for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame.
In an optional embodiment of the second aspect, the exposure parameters include aperture size, shutter time, and sensor gain;
determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame includes:
determining an association relationship between the supplementary frame and the reference frame for each exposure parameter;
determining the brightness relationship between the supplementary frame and the reference frame according to the association relationships of the supplementary frame and the reference frame for the individual exposure parameters.
In an optional embodiment of the second aspect, if the exposure parameter is aperture size, the association relationship is the ratio of the square of the supplementary frame's aperture size to the square of the reference frame's aperture size;
if the exposure parameter is shutter time, the association relationship is the ratio of the reference frame's shutter time to the supplementary frame's shutter time;
if the exposure parameter is sensor gain, the association relationship is the ratio of the supplementary frame's sensor gain to the reference frame's sensor gain.
In an optional embodiment of the second aspect, when determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module is specifically configured to:
determine a weight mask based on the brightness of each pixel in the reference frame;
adjust the brightness of the pixels in each RAW image to be processed based on the weight mask;
for each RAW image to be processed, determine the brightness of the RAW image to be processed based on the adjusted brightness of its pixels;
for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
In a third aspect, an electronic device is provided, the electronic device including:
a processor and a memory, the memory being configured to store machine-readable instructions which, when executed by the processor, cause the processor to perform any one of the methods of the first aspect.
In a fourth aspect, a computer-readable storage medium storing a computer program is provided; the computer storage medium is used to store computer instructions which, when run on a computer, enable the computer to perform any one of the methods of the first aspect.
In a fifth aspect, a computer program is provided, including computer-readable code which, when run on an electronic device, causes the electronic device to perform any one of the methods of the first aspect.
The technical solutions provided by the embodiments of this application bring the following beneficial effects:
In the embodiments of this application, after the RAW images to be processed are acquired, the brightness of each supplementary frame can be linearly adjusted based on the brightness relationship between that supplementary frame and the reference frame, and the adjusted supplementary frames and the reference frame can be fused to obtain a fused image. Because the RAW images to be processed have a linear brightness relationship, each supplementary frame can be given a linear brightness transformation against the brightness of the reference frame, so that the difference between the brightness of each adjusted supplementary frame and that of the reference frame is further reduced, the brightness levels in the processed RAW images to be processed become nearly equal, and the pixel values of the resulting fused image retain a linear relationship with the actual scene brightness. This effectively solves the problem that the final image tends to show unnatural brightness transitions when multiple brightness levels exist in the images.
The above description is only an overview of the technical solutions of this application. In order to understand the technical means of this application more clearly, it can be implemented in accordance with the contents of the specification; and in order to make the above and other objects, features, and advantages of this application more apparent and understandable, specific embodiments of this application are set forth below.
To explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an image fusion method provided by an embodiment of this application;
FIG. 2a is a schematic diagram of a reference frame provided by an embodiment of this application;
FIG. 2b is a schematic diagram of a weight mask provided by an embodiment of this application;
FIG. 3a is a schematic diagram of a RAW image to be processed provided by an embodiment of this application;
FIG. 3b is a schematic diagram of another RAW image to be processed provided by an embodiment of this application;
FIG. 4 is a schematic diagram of the complete flow of an image fusion method provided by an embodiment of this application;
FIG. 5 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of this application;
FIG. 6 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
Embodiments of this application are described in detail below, examples of which are shown in the drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain this application; they cannot be construed as limiting this application.
Those skilled in the art will understand that, unless specifically stated, the singular forms "a", "an", and "the" used herein may also include plural forms. It should be further understood that the word "comprising" used in the specification of this application refers to the presence of features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The term "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
To make the objects, technical solutions, and advantages of this application clearer, embodiments of this application are further described in detail below with reference to the drawings.
The technical solution of this application and how it solves the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of this application are described below with reference to the drawings.
An embodiment of this application provides an image fusion method. As shown in FIG. 1, the method includes:
Step S101: acquiring at least two RAW images to be processed of the same scene.
A RAW image, also called an original image, is the raw data produced when the image sensor of a digital camera, scanner, terminal camera, or similar device converts the captured light signal into a digital signal. In practical applications, a RAW image does not lose information to image processing (such as sharpening or color-contrast enhancement) or compression, and RAW images have a linear brightness relationship with one another. That is, when the frames of a video are RAW images, adjacent frames of the video have a linear brightness relationship. RAW images to be processed of the same scene means that the image content of the different RAW images to be processed is substantially the same, i.e., the degree of difference of the image content between the RAW images to be processed is below a certain threshold; in other words, the difference in the depicted scene satisfies a preset condition. For example, if a user takes two RAW images at the same location and in the same pose, the two RAW images differ only in capture time but include almost exactly the same image content (the difference in the depicted scene satisfies the preset condition); the two RAW images are then images of the same scene. The manner of acquiring the RAW images to be processed of the same scene is not limited in the embodiments of this application; for example, several adjacent frames of the same video may be taken as the RAW images to be processed, or images whose acquisition interval is smaller than a set interval may be taken as the RAW images to be processed, such as images obtained by burst shooting.
Step S102: taking one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and determining the brightness relationship between each supplementary frame and the reference frame.
The manner of selecting the reference frame from the RAW images to be processed is not limited in the embodiments of this application; for example, any one of the RAW images to be processed may be chosen as the reference frame, or a condition may be set for determining the reference frame, e.g., when the exposure parameters of a RAW image to be processed satisfy a set condition, that RAW image is taken as the reference frame. For instance, the set condition may be that the exposure parameters lie within a specific range; if multiple RAW images to be processed have exposure parameters within that range, any frame satisfying the condition may be selected as the reference frame, or the RAW image to be processed whose exposure parameters are closest to preset parameters may be selected as the reference frame. Further, after a frame satisfying the set condition has been selected as the reference frame, the other RAW images to be processed may be taken as supplementary frames.
In one example, suppose the acquired RAW images to be processed include image 1, image 2, ..., and image 10, and the condition for selecting the reference frame is that the image's exposure parameters satisfy a set condition. If only the exposure parameters of image 2 satisfy the set condition, then image 2 is taken as the reference frame, and image 1, image 3, ..., and image 10 are taken as supplementary frames.
Further, in practical applications, the image information of an image may include brightness, so after the reference frame and the supplementary frames have been determined, the brightness relationship between each supplementary frame and the reference frame can also be determined.
The brightness relationship can be expressed in several ways; for example, it can be expressed as a ratio between each supplementary frame and the reference frame. When a supplementary frame has the same brightness as the reference frame, the brightness relationship between them is 1:1; when the brightness of a supplementary frame is 1/2 of the brightness of the reference frame, the brightness relationship between them is 1:2.
Step S103: for each supplementary frame, linearly adjusting the brightness of the pixels in the supplementary frame based on the brightness relationship, to obtain an adjusted supplementary frame.
In practical applications, after the brightness relationship between each supplementary frame and the reference frame has been determined, the brightness of the pixels in each supplementary frame can be linearly adjusted according to that relationship, to obtain the adjusted supplementary frame. The specific implementation of the adjustment is not limited in the embodiments of this application.
In one example, suppose the supplementary frames include supplementary frame 1 and supplementary frame 2, the brightness relationship between supplementary frame 1 and the reference frame is 1:4, and the brightness relationship between supplementary frame 2 and the reference frame is 1:2. The brightness of the pixels in supplementary frame 1 can then be linearly adjusted based on its brightness relationship to the reference frame, e.g., by multiplying the brightness of each pixel in supplementary frame 1 by 4, to obtain the adjusted supplementary frame 1; and the brightness of the pixels in supplementary frame 2 can be linearly adjusted based on its brightness relationship to the reference frame, e.g., by multiplying the brightness of each pixel in supplementary frame 2 by 2, to obtain the adjusted supplementary frame 2. A code sketch of this adjustment follows.
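As a concrete illustration of the linear adjustment in step S103, the following is a minimal sketch in Python/NumPy, assuming the RAW data is held as a floating-point array; the function name and the clipping against a white level are illustrative assumptions, not part of the specification:

```python
import numpy as np

def adjust_supplementary_frame(supp: np.ndarray, ratio: float,
                               white_level: float = 1.0) -> np.ndarray:
    # `ratio` is the reference-to-supplementary brightness factor,
    # e.g. 4.0 for a brightness relationship of 1:4. Clipping to the
    # sensor white level keeps saturated pixels valid (an assumption).
    return np.clip(supp.astype(np.float32) * ratio, 0.0, white_level)

# For the example above (relationships 1:4 and 1:2):
# adjusted_supp1 = adjust_supplementary_frame(supp1, 4.0)
# adjusted_supp2 = adjust_supplementary_frame(supp2, 2.0)
```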
Step S104: fusing the adjusted supplementary frames and the reference frame, to obtain a fused image.
When the adjusted supplementary frames and the reference frame are fused, they are of the same image size; if images of different sizes exist, those images can be processed so that all images have the same size. In practical applications, if images of different sizes already exist when the RAW images to be processed are acquired, these images may also be preprocessed before the subsequent steps are carried out. The manner of handling image size is not limited in the embodiments of this application.
In practical applications, after the supplementary frames have been adjusted, the adjusted supplementary frames and the reference frame can be fused to obtain the fused image. The specific fusion method is not limited in the embodiments of this application.
In the embodiments of this application, each supplementary frame can be given a linear brightness transformation against the brightness of the reference frame, so that the difference between the brightness of each adjusted supplementary frame and that of the reference frame is further reduced and the brightness levels in the processed RAW images to be processed become nearly equal. This effectively solves the problem that the final image tends to show unnatural brightness transitions when multiple brightness levels exist in the images, and ensures that the pixel values of the resulting fused image retain a linear relationship with the actual scene brightness.
In an optional implementation of this application, the method further includes:
acquiring a weight feature map of each RAW image to be processed, where the weight feature map includes a weight value for each pixel of the RAW image to be processed;
the fusing of the adjusted supplementary frames and the reference frame to obtain a fused image includes:
fusing the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed, to obtain the fused image.
The weight feature map characterizes the weight of each pixel in each RAW image to be processed; that is, the weight of every pixel of a RAW image to be processed can be read off its weight feature map. To acquire the weight feature map of each RAW image to be processed, each RAW image to be processed can be input into a neural network, which outputs the weight feature map of that RAW image to be processed.
Further, when fusing the images, the RAW images to be processed can be fused according to their corresponding weight feature maps, to obtain the fused image. The specific fusion method is not limited in the embodiments of this application; for example, alpha blending, pyramid fusion, or gradient fusion may be used.
In one example, the RAW images to be processed include a reference frame, supplementary frame 1, and supplementary frame 2. The reference frame, supplementary frame 1, and supplementary frame 2 can each be input into the neural network to obtain the weight feature map of the reference frame, the weight feature map of supplementary frame 1, and the weight feature map of supplementary frame 2. Further, the brightness relationship between supplementary frame 1 and the reference frame and the brightness relationship between supplementary frame 2 and the reference frame can be determined; the brightness of the pixels in supplementary frame 1 is adjusted based on its brightness relationship to the reference frame, giving the adjusted supplementary frame 1, and the brightness of the pixels in supplementary frame 2 is adjusted based on its brightness relationship to the reference frame, giving the adjusted supplementary frame 2. The reference frame, the adjusted supplementary frame 1, and the adjusted supplementary frame 2 can then be fused according to the three weight feature maps, as sketched below.
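One simple way to realize such a weight-map-driven fusion is a per-pixel weighted average; the sketch below assumes single-channel frames of equal size and normalizes the weights per pixel, which is an assumption on our part, since the embodiment leaves the concrete fusion method open (alpha blending, pyramid fusion, and gradient fusion are all named as options):

```python
import numpy as np

def fuse_with_weight_maps(frames, weight_maps, eps=1e-8):
    # `frames` and `weight_maps` are lists of same-sized 2-D float
    # arrays (one weight map per frame, as output by the network).
    stack = np.stack(frames).astype(np.float32)            # (N, H, W)
    weights = np.stack(weight_maps).astype(np.float32)
    weights /= weights.sum(axis=0, keepdims=True) + eps    # per-pixel normalization
    return (stack * weights).sum(axis=0)                   # fused image

# fused = fuse_with_weight_maps([ref, supp1_adj, supp2_adj], [w_ref, w1, w2])
```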
In real life, if images with multiple exposure parameters are captured, a moving object may appear at different positions in different images, so in the fused image the moving object may show up as a semi-transparent artifact, also known as "ghosting". In the embodiments of this application, because the weight feature maps come from the semantic recognition of the neural network, reasonable weights can be given to regions, such as motion regions, that are hard for traditional methods to judge and handle, so the fused image can be free of ghosting.
In an optional implementation of this application, the neural network is obtained by training as follows:
acquiring a training sample set, the training sample set including training images corresponding to at least one scene, with at least two training images per scene; for the at least two images of each scene, one image is taken as a sample reference frame and the other images as sample supplementary frames;
performing a linear brightness transformation on each training image to obtain transformed training images, and training an initial network based on the transformed training images until the loss function of the initial network converges, the initial network at loss function convergence being determined as the neural network;
where the initial network is a neural network that takes an image as input and outputs the image's weight feature map; the loss function characterizes the error between a sample fused image and the sample reference frame corresponding to the same scene; and the sample fused image is obtained by fusing the transformed training images of the same scene according to the weight feature maps of those training images.
The initial network may be a fully convolutional neural network (Fully Convolutional Neural Network, FCN), a convolutional neural network (Convolutional Neural Network, CNN), a deep neural network (Deep Neural Network, DNN), or the like; the embodiments of this application do not limit the type of the initial network. In addition, the network structure of the initial network may be designed according to the computer vision task, or may adopt at least part of an existing network structure, such as a deep residual network (Deep Residual Network, ResNet) or a dense convolutional network (Dense Convolutional Network, DenseNet); the embodiments of this application do not limit the network structure of the initial network. The embodiments of this application are described below taking a fully convolutional neural network as the initial network.
The training images in the training sample set are the sample data used to train the neural network. The training images correspond to at least one scene, and each scene has at least two training images; for the images of each scene, one image is selected as the sample reference frame and the other images are taken as sample supplementary frames. The manner of selecting the sample reference frame is not limited in the embodiments of this application; for example, any one of the training images may be chosen as the sample reference frame, or a training image whose image information meets a set condition may be taken as the sample reference frame. The manner of acquiring training images of the same scene is likewise not limited; for example, several adjacent frames of the same video may be selected as training images of the same scene.
Accordingly, a linear brightness transformation can be applied to each training image to obtain the transformed training images, and the acquired initial network is trained with the transformed training images; when the loss function of the initial network converges, the initial network at convergence is determined as the neural network. The initial network here is a fully convolutional neural network that takes an image as input and outputs the image's weight feature map, and the linear brightness transformation can convert a training image into an image with a reduced dynamic range.
During training, the training images in the training sample set are input into the initial network to obtain the weight feature map of each training image. For the training images of each scene, the transformed training images of the scene are fused according to the output weight feature maps of that scene's training images, giving a sample fused image, and it is judged whether the error between the sample fused image and the scene's sample reference frame satisfies a condition (i.e., whether the loss function value computed from the sample fused image and the scene's sample reference frame converges). If the condition is not satisfied, the parameters of the initial network are adjusted, the training images are again input into the initial network to obtain their weight feature maps, the transformed training images of each scene are again fused according to the output weight feature maps to obtain a sample fused image, and it is again judged whether the error between the current sample fused image and the scene's reference frame satisfies the condition. If not, the initial network is trained again on the training images, until the error between the sample fused image of each scene and that scene's sample reference frame satisfies the condition. A schematic training iteration is sketched below.
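A training iteration for one scene might look as follows in PyTorch-style Python; `initial_network`, the softmax normalization across frames, the L1 loss, and the optimizer are illustrative assumptions, since the embodiment only requires that the loss measure the error between the sample fused image and the sample reference frame:

```python
import torch
import torch.nn.functional as F

def train_step(initial_network, optimizer, transformed_frames, reference):
    # transformed_frames: (N, C, H, W) linearly brightness-transformed
    # training images of one scene; reference: (C, H, W) sample
    # reference frame of the same scene.
    weights = initial_network(transformed_frames)        # one weight map per frame, (N, 1, H, W)
    weights = torch.softmax(weights, dim=0)              # normalize across frames (assumption)
    fused = (transformed_frames * weights).sum(dim=0)    # sample fused image
    loss = F.l1_loss(fused, reference)                   # error vs. sample reference frame
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```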
In an optional implementation of this application, acquiring the training sample set includes:
acquiring an initial training sample set, the initial training sample set including initial images corresponding to at least one scene, with at least two initial images per scene;
when the initial images are low dynamic range images, taking the initial images as the training images of each scene;
when the initial images are high dynamic range images, converting each initial image into a low dynamic range image corresponding to the initial image;
taking the low dynamic range images corresponding to the initial images of each scene as the training images of that scene.
In practical applications, an initial training sample set can be acquired; the initial images in the initial training sample set likewise correspond to at least one scene, and each scene has at least two initial images. If the acquired initial images are low dynamic range images, they can be used directly as the training images in the training sample set; if the acquired initial images are high dynamic range images, each high dynamic range image can be converted into a corresponding low dynamic range image, and the low dynamic range images corresponding to the high dynamic range images of each scene are then used as the training images of that scene. The manner of converting a high dynamic range image into a low dynamic range image is not limited in the embodiments of this application.
In the embodiments of this application, by converting high dynamic range images into low dynamic range images and then obtaining the corresponding weight feature maps from the neural network, reasonable weights can further be given to regions, such as motion regions, that are hard for traditional methods to judge and handle, so the fused image can be free of ghosting.
In an optional implementation of this application, when the RAW images to be processed are high dynamic range images, acquiring the weight feature map of each RAW image to be processed includes:
converting each RAW image to be processed into a low dynamic range image to obtain converted RAW images to be processed, and inputting each converted RAW image to be processed into the neural network to obtain the weight feature map of each RAW image to be processed.
In practical applications, the neural network is trained on the training images of the training sample set, and these training images are low dynamic range images; that is, the input images of the trained neural network are also low dynamic range images. Consequently, if the acquired RAW images to be processed are high dynamic range images, they can first be converted into low dynamic range images, and the converted RAW images to be processed can then each be input into the neural network, yielding the weight feature map of each RAW image to be processed.
Further, for each supplementary frame, the brightness of the pixels in the converted supplementary frame can be adjusted based on the determined brightness relationship between that supplementary frame and the reference frame, giving the adjusted supplementary frame; then, based on the output weight feature maps of the RAW images to be processed, the adjusted supplementary frames and the converted reference frame are fused to obtain the fused image.
In an optional implementation of this application, determining the brightness relationship between each supplementary frame and the reference frame includes:
acquiring the exposure parameters of each RAW image to be processed;
for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame.
In practical applications, there can be several ways to determine the brightness relationship between each supplementary frame and the reference frame. As one option, the exposure parameters of each RAW image to be processed can be acquired; these are the exposure parameters in effect when the RAW image to be processed was captured, so if different exposure parameters were set when capturing different RAW images to be processed, their exposure parameters differ. The manner of acquiring the exposure parameters of each RAW image to be processed is not limited in the embodiments of this application; for example, they may be obtained by an algorithm for acquiring exposure parameters.
Further, when determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship can be obtained, for each supplementary frame, from the acquired exposure parameters of that supplementary frame and of the reference frame.
In an optional implementation of this application, the exposure parameters include aperture size, shutter time, and sensor gain;
determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame includes:
determining an association relationship between the supplementary frame and the reference frame for each exposure parameter;
determining the brightness relationship between the supplementary frame and the reference frame according to the association relationships of the supplementary frame and the reference frame for the individual exposure parameters.
In practical applications, the exposure parameters of each RAW image to be processed may include the aperture size, shutter time, sensor gain, and so on that were set when the RAW image to be processed was captured. Further, for each supplementary frame, the association relationship between the supplementary frame and the reference frame for each exposure parameter can be determined; that is, for each supplementary frame there may be an association relationship between its aperture size and that of the reference frame, between its sensor gain and that of the reference frame, and between its shutter time and that of the reference frame. The way of expressing the association relationships of the individual exposure parameters is not limited in the embodiments of this application. For example, they may be expressed as ratios, in which case the association relationship of the supplementary frame and the reference frame for aperture size can be denoted R_{aperture size}, the association relationship for shutter time can be denoted R_{shutter time}, and the association relationship for sensor gain can be denoted R_{sensor gain}.
Accordingly, for each supplementary frame, the brightness relationship between the supplementary frame and the reference frame can be determined from its association relationships with the reference frame for the individual exposure parameters. The specific way of determining the brightness relationship from these association relationships is not limited in the embodiments of this application.
In an optional embodiment of this application, if the exposure parameter is aperture size, the association relationship is the ratio of the square of the supplementary frame's aperture size to the square of the reference frame's aperture size;
if the exposure parameter is shutter time, the association relationship is the ratio of the reference frame's shutter time to the supplementary frame's shutter time;
if the exposure parameter is sensor gain, the association relationship is the ratio of the supplementary frame's sensor gain to the reference frame's sensor gain.
In practical applications, if the association relationships of the exposure parameters are expressed as ratios, one option is to take the product of the association relationships of the supplementary frame and the reference frame over all exposure parameters as the brightness relationship between the supplementary frame and the reference frame.
In one example, suppose there are a reference frame a and a supplementary frame b; the aperture size, shutter time, and sensor gain of reference frame a are denoted fa, sa, and ga, and those of supplementary frame b are denoted fb, sb, and gb. For supplementary frame b, the association relationship with reference frame a for aperture size is R_{aperture size}(fa, fb) = (fb)²/(fa)², the association relationship for shutter time is R_{shutter time}(sa, sb) = sa/sb, and the association relationship for sensor gain is R_{sensor gain}(ga, gb) = gb/ga. The brightness relationship between supplementary frame b and reference frame a can then be Ratio(a, b) = R_{aperture size} * R_{shutter time} * R_{sensor gain}, as in the sketch below.
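Under the ratio conventions just given, the brightness relationship can be computed directly from the exposure parameters; the dict-based layout below is an illustrative assumption:

```python
def brightness_ratio(ref, supp):
    # ref and supp hold the exposure parameters of the reference frame
    # and a supplementary frame: 'aperture' (f), 'shutter' (s), 'gain' (g).
    r_aperture = supp['aperture'] ** 2 / ref['aperture'] ** 2   # (fb)^2 / (fa)^2
    r_shutter = ref['shutter'] / supp['shutter']                # sa / sb
    r_gain = supp['gain'] / ref['gain']                         # gb / ga
    return r_aperture * r_shutter * r_gain                      # Ratio(a, b)

# Example: halving only the shutter time of the supplementary frame
# gives Ratio(a, b) = 2, i.e. a brightness relationship of 1:2.
```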
In an optional implementation of this application, determining the brightness relationship between each supplementary frame and the reference frame further includes:
determining a weight mask based on the brightness of each pixel in the reference frame;
adjusting the brightness of the pixels in each RAW image to be processed based on the weight mask;
for each RAW image to be processed, determining the brightness of the RAW image to be processed based on the adjusted brightness of its pixels;
for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
In practical applications, if the brightness of a pixel in the reference frame satisfies a preset condition, the corresponding pixel value in the weight mask is 1; if the brightness of a pixel in the reference frame does not satisfy the preset condition, the corresponding pixel value in the weight mask is 0.
A mask is a pre-made image containing a region of interest; a mask can be used to occlude all or part of the image to be processed, so that the occluded part is excluded from processing or only the masked region is processed. In the embodiments of this application, the mask may include the brightness weights of the corresponding pixels in the RAW images to be processed, so the mask is called a weight mask.
In practical applications, if the exposure parameters of the RAW images to be processed cannot be acquired, the weight mask can also be determined from the brightness of each pixel in the reference frame. The weight mask includes the brightness weights of the corresponding pixels in the RAW images to be processed: when the brightness of a pixel in the reference frame satisfies the preset condition, the corresponding pixel value in the weight mask is 1; when it does not, the corresponding pixel value in the weight mask is 0. The preset condition is not limited in the embodiments of this application; for example, it may be that the pixel brightness lies between 20% and 80% (inclusive) of a preset saturation value.
In one example, as shown in FIG. 2a, suppose the reference frame is a 2*2 image (containing 4 pixels) in which the brightness of the 1st and 4th pixels lies between 20% and 80% of the preset saturation value. Then, as shown in FIG. 2b, the pixel values in the weight mask corresponding to the 1st and 4th pixels are 1, and those corresponding to the 2nd and 3rd pixels are 0.
Further, the brightness of each pixel of every RAW image to be processed can be adjusted according to the weight mask, i.e., the pixel brightness of the RAW image to be processed is combined element-wise with the weight mask: where a pixel of the RAW image to be processed corresponds to a position with value 1 in the weight mask, its brightness keeps its original value; where it corresponds to a position with value 0 in the weight mask, its brightness becomes 0.
In one example, as shown in FIG. 3a, suppose a RAW image to be processed is a 2*2 image (containing 4 pixels) in which the brightness of the 1st and 4th pixels is 50 and the brightness of the 2nd and 3rd pixels is 100, and the weight mask is as shown in FIG. 2b. Since the 1st and 4th pixels of the RAW image to be processed correspond to mask pixels with value 1, their brightness keeps its original value (i.e., 50), while the 2nd and 3rd pixels correspond to mask pixels with value 0, so their brightness becomes 0; the resulting adjusted pixel brightness of the RAW image to be processed is shown in FIG. 3b.
Further, for each RAW image to be processed, the brightness of the RAW image to be processed can be determined from the adjusted brightness of its pixels. The specific way of determining the brightness of a RAW image to be processed is not limited in the embodiments of this application; for example, a weighted average of the adjusted pixel brightness may be used, with the following formula:
L(X) = average(X * mask)
where L(X) denotes the brightness of the RAW image to be processed, X denotes the RAW image to be processed, mask denotes the weight mask, X * mask denotes adjusting the brightness of each pixel of X based on the weight mask to obtain the adjusted brightness of each pixel of X, and average(X * mask) denotes averaging the adjusted brightness over the pixels of X.
In one example, suppose a RAW image to be processed G is a 2*2 image (containing 4 pixels) in which the adjusted brightness of the 1st and 4th pixels is 50 and the adjusted brightness of the 2nd and 3rd pixels is 0. The brightness of G is then L(G) = (50 + 50 + 0 + 0) / 4 = 25.
Accordingly, for each supplementary frame, the brightness relationship between the supplementary frame and the reference frame can be determined from the determined brightness of the supplementary frame and of the reference frame. How the brightness relationship is determined is not limited in the embodiments of this application; for example, the ratio between each supplementary frame and the reference frame may be taken directly as their brightness relationship. For instance, if the brightness of a supplementary frame is 50 and that of the reference frame is 100, the ratio between them is 1:2, so the brightness relationship between the supplementary frame and the reference frame is 1:2. The following sketch gathers these steps.
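The mask-based route can be put together in a few lines; this sketch assumes single-channel frames and the 20%-80% saturation condition named above, with the brightness ratio returned as the scalar multiplier applied in step S103:

```python
import numpy as np

def weight_mask(reference, saturation, lo=0.2, hi=0.8):
    # 1 where the reference pixel brightness lies within [lo, hi] of
    # the preset saturation value, 0 elsewhere.
    return ((reference >= lo * saturation) &
            (reference <= hi * saturation)).astype(np.float32)

def frame_brightness(frame, mask):
    # L(X) = average(X * mask): masked-in pixels keep their value,
    # the rest become 0, and the mean is taken over all pixels.
    return float(np.mean(frame.astype(np.float32) * mask))

# mask = weight_mask(reference, saturation=255.0)
# ratio = frame_brightness(reference, mask) / frame_brightness(supp, mask)
# e.g. reference brightness 100 and supplementary brightness 50 give
# ratio = 2, matching the 1:2 relationship in the example above.
```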
As shown in FIG. 4, the solution provided by the embodiments of this application is described in detail below with reference to a specific application scenario.
More than 1000 HDR videos are captured with equipment capable of outputting high-dynamic-range RAW images, such as single-lens reflex cameras or high-dynamic industrial cameras, where each HDR video corresponds to one scene. From each HDR video, several adjacent frames (F1, F2, ..., Fn) are randomly extracted as initial training images of the same scene, and F1 is selected from F1, F2, ..., Fn as the sample reference frame. Further, the frames (F1, F2, ..., Fn) are each given a linear brightness transformation, yielding n low dynamic range frames (LF1, LF2, ..., LFn), and the n low dynamic range frames (LF1, LF2, ..., LFn) are each given the inverse brightness transformation, yielding n frames to be fused (FF1, FF2, ..., FFn).
Further, a fully convolutional neural network with an encoder-decoder (Encoder-Decoder) structure is designed; the n low dynamic range frames (LF1, LF2, ..., LFn) are input into the fully convolutional neural network, which outputs n weight feature maps (W1, W2, ..., Wn) of the same size as (LF1, LF2, ..., LFn). The frames to be fused (FF1, FF2, ..., FFn) are then fused according to the output weight feature maps, giving the fused image Y (i.e., Y = FF1*W1 + FF2*W2 + ... + FFn*Wn), and the error loss function value is computed from the Y corresponding to each scene and that scene's reference frame. Training is complete when the count of converged loss function values reaches a threshold, yielding the neural network described in the embodiments of this application.
Further, N RAW images to be processed are acquired, the N RAW images to be processed being low dynamic range images (if the N RAW images to be processed are high dynamic range images, they are each converted into low dynamic range images). The N RAW images to be processed are input into the neural network, giving the weight feature map corresponding to each RAW image to be processed (weight feature map 1, weight feature map 2, ..., and weight feature map N). Among the N RAW images to be processed, one whose exposure parameters satisfy a set requirement is selected as the reference frame and the other RAW images to be processed are taken as supplementary frames (i.e., supplementary frame 1, ..., and supplementary frame N-1). The brightness relationship between each supplementary frame and the reference frame is determined, and the pixel brightness of each supplementary frame is adjusted based on the determined brightness relationship, giving the adjusted supplementary frames (i.e., adjusted supplementary frame 1, ..., and adjusted supplementary frame N-1). Based on the weight feature map of each RAW image to be processed, the adjusted supplementary frames and the reference frame are fused to obtain the fused image.
The solution provided by the embodiments of this application is described below with reference to an image fusion system, which may include a RAW acquisition module, a neural network module, a software fusion module, and a post-processing module.
The RAW acquisition module communicates with the sensor through the interfaces provided by the operating system and driver to issue the exposure strategy (i.e., which exposure parameter variations to use for capturing the RAW images to be processed) and acquires the RAW images to be processed with different exposure parameters. The RAW images to be processed with different exposure parameters are then fed to the neural network module (which may run on hardware such as a CPU (central processing unit), GPU (Graphics Processing Unit), NPU (neural-network processing unit), or DSP (digital signal processor)) to obtain the weight feature map of each RAW image to be processed. The software fusion module then fuses the RAW images to be processed, giving the fused image (i.e., a high-dynamic-range RAW image), which is fed to the post-processing module to obtain a visualized image.
In practical applications, the system realizing this image fusion may include a sensor interface, a memory interface, a neural network accelerator, a fusion processing module, and an ISP (Image Signal Processing) interface or post-processing module. The sensor interface is used for data communication with the image sensor; the communication may be direct, or indirect via the memory interface. After multiple frames of RAW images to be processed with different exposure parameters have been obtained, they are fed to the neural network accelerator to obtain the weight feature maps of the RAW images to be processed; the RAW images to be processed and their weight feature maps are then fed to the fusion processing module, giving a high-dynamic-range RAW image, which is fed to the ISP through the ISP interface, or to the post-processing module for subsequent processing, to obtain a visualized image.
FIG. 5 is a schematic structural diagram of an image fusion apparatus provided by an embodiment of this application. As shown in FIG. 5, the image fusion apparatus 60 may include an image acquisition module 601, a brightness relationship determination module 602, a brightness adjustment module 603, and an image fusion module 604, where:
the image acquisition module 601 is configured to acquire at least two RAW images to be processed of the same scene;
the brightness relationship determination module 602 is configured to take one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and to determine the brightness relationship between each supplementary frame and the reference frame;
the brightness adjustment module 603 is configured to, for each supplementary frame, linearly adjust the brightness of the pixels in the supplementary frame based on the brightness relationship, to obtain an adjusted supplementary frame;
the image fusion module 604 is configured to fuse the adjusted supplementary frames and the reference frame, to obtain a fused image.
In an optional embodiment of this application, the apparatus further includes a weight feature map acquisition module 605, specifically configured to:
acquire a weight feature map of each RAW image to be processed, where the weight feature map includes a weight value for each pixel of the RAW image to be processed;
when fusing the adjusted supplementary frames and the reference frame, the image fusion module is specifically configured to:
fuse the adjusted supplementary frames and the reference frame based on the weight feature map of each RAW image to be processed.
In an optional embodiment of this application, when acquiring the weight feature map of each RAW image to be processed, the weight feature map acquisition module is specifically configured to:
when the RAW images to be processed are high dynamic range images, convert each RAW image to be processed into a low dynamic range image, to obtain converted RAW images to be processed;
input each converted RAW image to be processed into a neural network, to obtain the weight feature map of each RAW image to be processed.
In an optional embodiment of this application, the apparatus further includes a training module 606, where the training module 606 obtains the neural network by training as follows:
acquiring a training sample set, the training sample set including training images corresponding to at least one scene, with at least two training images per scene; for the at least two images of each scene, one image is taken as a sample reference frame and the other images as sample supplementary frames;
performing a linear brightness transformation on each training image to obtain transformed training images, and training an initial network based on the transformed training images until the loss function of the initial network converges, the initial network at loss function convergence being determined as the neural network;
where the initial network is a neural network that takes an image as input and outputs the image's weight feature map; the loss function characterizes the error between a sample fused image and the sample reference frame corresponding to the same scene; and the sample fused image is obtained by fusing the transformed training images of the same scene according to the weight feature maps of those training images.
In an optional embodiment of this application, when acquiring the training sample set, the training module 606 is specifically configured to:
acquire an initial training sample set, the initial training sample set including initial images corresponding to at least one scene, with at least two initial images per scene;
when the initial images are low dynamic range images, take the initial images as the training images of each scene;
when the initial images are high dynamic range images, convert each initial image into a low dynamic range image corresponding to the initial image;
take the low dynamic range images corresponding to the initial images of each scene as the training images of that scene.
In an optional embodiment of this application, when determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module 602 is specifically configured to:
acquire the exposure parameters of each RAW image to be processed;
for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame.
In an optional embodiment of this application, the exposure parameters include aperture size, shutter time, and sensor gain;
determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and of the supplementary frame includes:
determining an association relationship between the supplementary frame and the reference frame for each exposure parameter;
determining the brightness relationship between the supplementary frame and the reference frame according to the association relationships of the supplementary frame and the reference frame for the individual exposure parameters.
In an optional embodiment of this application, if the exposure parameter is aperture size, the association relationship is the ratio of the square of the supplementary frame's aperture size to the square of the reference frame's aperture size;
if the exposure parameter is shutter time, the association relationship is the ratio of the reference frame's shutter time to the supplementary frame's shutter time;
if the exposure parameter is sensor gain, the association relationship is the ratio of the supplementary frame's sensor gain to the reference frame's sensor gain.
In an optional embodiment of this application, when determining the brightness relationship between each supplementary frame and the reference frame, the brightness relationship determination module 602 is specifically configured to:
determine a weight mask based on the brightness of each pixel in the reference frame;
adjust the brightness of the pixels in each RAW image to be processed based on the weight mask;
for each RAW image to be processed, determine the brightness of the RAW image to be processed based on the adjusted brightness of its pixels;
for each supplementary frame, determine the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
The image fusion apparatus of this embodiment can perform the image fusion method shown in the embodiments of this application; the implementation principles are similar and are not repeated here.
An embodiment of this application provides an electronic device. As shown in FIG. 6, the electronic device 2000 includes a processor 2001 and a memory 2003, the processor 2001 and the memory 2003 being connected, for example via a bus 2002. Optionally, the electronic device 2000 may further include a transceiver 2004. It should be noted that in practical applications the transceiver 2004 is not limited to one, and the structure of the electronic device 2000 does not constitute a limitation of the embodiments of this application.
The processor 2001 is applied in the embodiments of this application to realize the functions of the modules shown in FIG. 5.
The processor 2001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA, or another programmable logic device, transistor logic device, hardware component, or any combination thereof. It can realize or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of this application. The processor 2001 may also be a combination realizing computing functions, e.g., a combination including one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 2002 may include a path for transferring information between the above components. The bus 2002 may be a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in FIG. 6, but this does not mean that there is only one bus or one type of bus.
The memory 2003 may be a ROM or another type of static storage device capable of storing static information and instructions, a RAM or another type of dynamic storage device capable of storing information and instructions, an EEPROM, a CD-ROM or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 2003 is used to store the application program code for executing the solutions of this application, and execution is controlled by the processor 2001. The processor 2001 is configured to execute the application program code stored in the memory 2003, to realize the actions of the image fusion apparatus provided by the embodiment shown in FIG. 5.
An embodiment of this application provides an electronic device. The electronic device in this embodiment of this application includes: a processor; and a memory, the memory being configured to store machine-readable instructions which, when executed by the processor, cause the processor to perform the image fusion method.
An embodiment of this application provides a computer-readable storage medium for storing computer instructions which, when run on a computer, enable the computer to execute the method for realizing image fusion.
For the terms and implementation principles involved in the computer-readable storage medium of this application, reference can be made to the image fusion method in the embodiments of this application, which is not repeated here.
An embodiment of this application provides a computer program, including computer-readable code which, when run on an electronic device, causes the electronic device to execute the method for realizing image fusion.
It should be understood that although the steps in the flowcharts of the drawings are shown sequentially in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the drawings may include multiple sub-steps or multiple stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The above are only some embodiments of this application. It should be noted that a person of ordinary skill in the art can make several improvements and refinements without departing from the principles of this application, and these improvements and refinements should also be regarded as within the scope of protection of this application.
Claims (11)
- An image fusion method, characterized by comprising: acquiring at least two RAW images to be processed of the same scene; taking one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and determining a brightness relationship between each of the supplementary frames and the reference frame; for each of the supplementary frames, linearly adjusting the brightness of pixels in the supplementary frame based on the brightness relationship, to obtain an adjusted supplementary frame; and fusing the adjusted supplementary frames and the reference frame, to obtain a fused image.
- The method according to claim 1, characterized in that the method further comprises: acquiring a weight feature map of each of the RAW images to be processed, wherein the weight feature map comprises a weight value for each pixel of the RAW image to be processed; and the fusing of the adjusted supplementary frames and the reference frame comprises: fusing the adjusted supplementary frames and the reference frame based on the weight feature map of each of the RAW images to be processed.
- The method according to claim 2, characterized in that, when the RAW images to be processed are high dynamic range images, the acquiring of the weight feature map of each of the RAW images to be processed comprises: converting each of the RAW images to be processed into a low dynamic range image, to obtain converted RAW images to be processed; and inputting each of the converted RAW images to be processed into a neural network, to obtain the weight feature map of each of the RAW images to be processed.
- The method according to claim 3, characterized in that the neural network is obtained by training as follows: acquiring a training sample set, the training sample set comprising training images corresponding to at least one scene, the training images of each scene being at least two images, and for the at least two images of each scene, one image being taken as a sample reference frame and the other images as sample supplementary frames; performing a linear brightness transformation on each of the training images to obtain transformed training images, and training an initial network based on the transformed training images until a loss function of the initial network converges, the initial network at loss function convergence being determined as the neural network; wherein the initial network is a neural network taking an image as input and outputting the image's weight feature map, the loss function characterizes the error between a sample fused image and the sample reference frame corresponding to the same scene, and the sample fused image is obtained by fusing the transformed training images according to the weight feature maps of the training images corresponding to the same scene.
- The method according to claim 4, characterized in that the acquiring of the training sample set comprises: acquiring an initial training sample set, the initial training sample set comprising initial images corresponding to at least one scene, the initial images of each scene being at least two images; when the initial images are low dynamic range images, taking the initial images as the training images of each scene; when the initial images are high dynamic range images, converting each of the initial images into a low dynamic range image corresponding to the initial image; and taking the low dynamic range images corresponding to the initial images of each scene as the training images of that scene.
- The method according to claim 1, characterized in that the determining of the brightness relationship between each of the supplementary frames and the reference frame comprises: acquiring exposure parameters of each RAW image to be processed; and for each supplementary frame, determining the brightness relationship between the supplementary frame and the reference frame according to the exposure parameters of the reference frame and the exposure parameters of the supplementary frame.
- The method according to claim 1, characterized in that the determining of the brightness relationship between each of the supplementary frames and the reference frame comprises: determining a weight mask based on the brightness of each pixel in the reference frame; adjusting the brightness of the pixels in each of the RAW images to be processed based on the weight mask; for each of the RAW images to be processed, determining the brightness of the RAW image to be processed based on the adjusted brightness of its pixels; and for each of the supplementary frames, determining the brightness relationship between the supplementary frame and the reference frame based on the brightness of the supplementary frame and the brightness of the reference frame.
- An image fusion apparatus, characterized by comprising: an image acquisition module, configured to acquire at least two RAW images to be processed of the same scene; a brightness relationship determination module, configured to take one of the at least two RAW images to be processed as a reference frame and the other images as supplementary frames, and to determine a brightness relationship between each of the supplementary frames and the reference frame; a brightness adjustment module, configured to, for each of the supplementary frames, linearly adjust the brightness of pixels in the supplementary frame based on the brightness relationship, to obtain an adjusted supplementary frame; and an image fusion module, configured to fuse the adjusted supplementary frames and the reference frame, to obtain a fused image.
- An electronic device, characterized by comprising a processor and a memory, the memory being configured to store machine-readable instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1-7.
- A computer-readable storage medium storing a computer program, characterized in that the computer storage medium is used to store computer instructions which, when run on a computer, enable the computer to perform the method according to any one of claims 1-7.
- A computer program, comprising computer-readable code which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/768,143 US20240296534A1 (en) | 2019-10-25 | 2020-09-21 | Image fusion method and apparatus, electronic device, and readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911024851.9 | 2019-10-25 | ||
CN201911024851.9A CN110728648B (zh) | 2019-10-25 | 2019-10-25 | Image fusion method, apparatus, electronic device and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021077963A1 true WO2021077963A1 (zh) | 2021-04-29 |
Family
ID=69223253
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/116487 WO2021077963A1 (zh) | 2019-10-25 | 2020-09-21 | 图像融合的方法、装置、电子设备及可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240296534A1 (zh) |
CN (1) | CN110728648B (zh) |
WO (1) | WO2021077963A1 (zh) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110728648B (zh) * | 2019-10-25 | 2022-07-19 | 北京迈格威科技有限公司 | Image fusion method, apparatus, electronic device and readable storage medium |
CN111311532B (zh) * | 2020-03-26 | 2022-11-11 | 深圳市商汤科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113744120A (zh) * | 2020-05-29 | 2021-12-03 | Oppo广东移动通信有限公司 | Multimedia processing chip, electronic device, and image processing method |
CN114078102A (zh) * | 2020-08-11 | 2022-02-22 | 北京芯海视界三维科技有限公司 | Image processing apparatus and virtual reality device |
CN112418279A (zh) * | 2020-11-05 | 2021-02-26 | 北京迈格威科技有限公司 | Image fusion method and apparatus, electronic device, and readable storage medium |
CN112561847B (zh) * | 2020-12-24 | 2024-04-12 | Oppo广东移动通信有限公司 | Image processing method and apparatus, computer-readable medium, and electronic device |
CN113313661B (zh) * | 2021-05-26 | 2024-07-26 | Oppo广东移动通信有限公司 | Image fusion method and apparatus, electronic device, and computer-readable storage medium |
CN113744257A (zh) * | 2021-09-09 | 2021-12-03 | 展讯通信(上海)有限公司 | Image fusion method and apparatus, terminal device, and storage medium |
CN115409754B (zh) * | 2022-11-02 | 2023-03-24 | 深圳深知未来智能有限公司 | Multi-exposure image fusion method and system based on image region validity |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1520580A (zh) | 2000-07-06 | 2004-08-11 | The Trustees of Columbia University in the City of New York | Method and apparatus for enhancing data resolution |
CN101394487B (zh) * | 2008-10-27 | 2011-09-14 | 华为技术有限公司 | Method and system for synthesizing images |
CN102970549B (zh) * | 2012-09-20 | 2015-03-18 | 华为技术有限公司 | Image processing method and apparatus |
CN108288253B (zh) * | 2018-01-08 | 2020-11-27 | 厦门美图之家科技有限公司 | HDR image generation method and apparatus |
CN108989699B (zh) * | 2018-08-06 | 2021-03-23 | Oppo广东移动通信有限公司 | Image synthesis method and apparatus, imaging device, electronic device, and computer-readable storage medium |
CN109194872B (zh) * | 2018-10-24 | 2020-12-11 | 深圳六滴科技有限公司 | Panoramic image pixel brightness correction method and apparatus, panoramic camera, and storage medium |
CN110060213B (zh) * | 2019-04-09 | 2021-06-15 | Oppo广东移动通信有限公司 | Image processing method and apparatus, storage medium, and electronic device |
CN110062160B (zh) * | 2019-04-09 | 2021-07-02 | Oppo广东移动通信有限公司 | Image processing method and apparatus |
CN110166707B (zh) * | 2019-06-13 | 2020-09-25 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN110248098B (zh) * | 2019-06-28 | 2021-08-24 | Oppo广东移动通信有限公司 | Image processing method and apparatus, storage medium, and electronic device |
- 2019-10-25 CN CN201911024851.9A patent/CN110728648B/zh active Active
- 2020-09-21 WO PCT/CN2020/116487 patent/WO2021077963A1/zh active Application Filing
- 2020-09-21 US US17/768,143 patent/US20240296534A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204513A (zh) * | 2016-08-15 | 2016-12-07 | 厦门美图之家科技有限公司 | Image processing method, apparatus, and system |
WO2018136373A1 (en) * | 2017-01-20 | 2018-07-26 | Microsoft Technology Licensing, LLC | Image fusion and HDR imaging |
CN108510560A (zh) * | 2018-04-11 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, storage medium, and computer device |
CN109886906A (zh) * | 2019-01-25 | 2019-06-14 | 武汉大学 | Detail-sensitive real-time low-light video enhancement method and system |
CN110728648A (zh) * | 2019-10-25 | 2020-01-24 | 北京迈格威科技有限公司 | Image fusion method, apparatus, electronic device and readable storage medium |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344803A (zh) * | 2021-05-08 | 2021-09-03 | 浙江大华技术股份有限公司 | Image adjustment method and apparatus, electronic apparatus, and storage medium |
CN113344803B (zh) * | 2021-05-08 | 2024-03-19 | 浙江大华技术股份有限公司 | Image adjustment method and apparatus, electronic apparatus, and storage medium |
CN115314627B (zh) * | 2021-05-08 | 2024-03-01 | 杭州海康威视数字技术股份有限公司 | Image processing method, system, and camera |
CN115314627A (zh) * | 2021-05-08 | 2022-11-08 | 杭州海康威视数字技术股份有限公司 | Image processing method, system, and camera |
CN115696059A (zh) * | 2021-07-28 | 2023-02-03 | Oppo广东移动通信有限公司 | Image processing method and apparatus, storage medium, and electronic device |
CN113781370A (zh) * | 2021-08-19 | 2021-12-10 | 北京旷视科技有限公司 | Image enhancement method and apparatus, and electronic device |
CN113706583A (zh) * | 2021-09-01 | 2021-11-26 | 上海联影医疗科技股份有限公司 | Image processing method and apparatus, computer device, and storage medium |
CN113706583B (zh) * | 2021-09-01 | 2024-03-22 | 上海联影医疗科技股份有限公司 | Image processing method and apparatus, computer device, and storage medium |
CN113888455A (zh) * | 2021-11-05 | 2022-01-04 | Oppo广东移动通信有限公司 | Image generation method and apparatus, electronic device, and computer-readable storage medium |
CN114708173A (zh) * | 2022-02-22 | 2022-07-05 | 北京旷视科技有限公司 | Image fusion method, computer program product, storage medium, and electronic device |
WO2023246392A1 (zh) * | 2022-06-22 | 2023-12-28 | 京东方科技集团股份有限公司 | Image acquisition method, apparatus, device, and non-transitory computer storage medium |
CN115115518A (zh) * | 2022-07-01 | 2022-09-27 | 腾讯科技(深圳)有限公司 | High dynamic range image generation method, apparatus, device, medium, and product |
CN115115518B (zh) * | 2022-07-01 | 2024-04-09 | 腾讯科技(深圳)有限公司 | High dynamic range image generation method, apparatus, device, medium, and product |
CN115293994A (zh) * | 2022-09-30 | 2022-11-04 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, computer device, and storage medium |
CN115689963A (zh) * | 2022-11-21 | 2023-02-03 | 荣耀终端有限公司 | Image processing method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN110728648A (zh) | 2020-01-24 |
CN110728648B (zh) | 2022-07-19 |
US20240296534A1 (en) | 2024-09-05 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20879298; Country of ref document: EP; Kind code of ref document: A1
| WWE | Wipo information: entry into national phase | Ref document number: 17768143; Country of ref document: US
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20879298; Country of ref document: EP; Kind code of ref document: A1