WO2021189733A1 - Image processing method and apparatus, electronic device, and storage medium
- Publication number
- WO2021189733A1 (PCT/CN2020/103632)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- processed
- pixel
- difference
- processing
- Prior art date
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T5/00—Image enhancement or restoration › G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F18/00—Pattern recognition › G06F18/20—Analysing › G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation › G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20212—Image combination › G06T2207/20221—Image fusion; Image merging
Description
- the present disclosure relates to the field of image processing technology, and in particular to an image processing method and device, electronic equipment, and storage medium.
- EV is short for exposure values, which characterize the exposure level of an image.
- the embodiments of the present disclosure provide an image processing method and device, electronic equipment, and storage medium.
- an image processing method includes:
- the first weight of the first pixel and the second weight of the second pixel are obtained, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed with the same name as the first pixel;
- fusion processing is performed on the first image to be processed and the second image to be processed to obtain a fused image.
- the light and dark information of the pixels in the first image to be processed and the light and dark information of the pixels in the second image to be processed are obtained.
- based on this light and dark information, the weights of the pixels in the first image to be processed and the weights of the pixels in the second image to be processed are obtained, so that pixels with different degrees of brightness can receive different weights; by performing fusion processing on the first image to be processed and the second image to be processed based on these weights, the quality of the obtained fused image can be improved.
- the performing feature extraction processing on the first to-be-processed image and the second to-be-processed image to obtain a feature image includes:
- Non-linear transformation processing is performed on the third characteristic image to obtain the first characteristic image.
- the performing nonlinear transformation processing on the third feature image to obtain the first feature image includes:
- Up-sampling processing is performed on the fourth characteristic image to obtain the first characteristic image.
- before the splicing processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, the method further includes:
- the splicing processing of the first image to be processed and the second image to be processed to obtain a third image to be processed includes:
- the obtaining the first weight of the first pixel and the second weight of the second pixel according to the first characteristic image includes:
- the first weight is obtained according to the pixel value of the third pixel, wherein the third pixel is a pixel in the first characteristic image, and the position of the third pixel in the first characteristic image is the same as the position of the first pixel in the third image to be processed;
- the second weight is obtained according to the pixel value of the fourth pixel, wherein the fourth pixel is a pixel in the first characteristic image, and the position of the fourth pixel in the first characteristic image is the same as the position of the second pixel in the third image to be processed.
- the image processing method is implemented through an image processing network
- the training process of the image processing network includes:
- the first sample image, the second sample image, the supervision data, and the network to be trained are acquired, wherein the content of the first sample image is the same as the content of the second sample image, the exposure level of the first sample image is different from the exposure level of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image;
- the parameters of the network to be trained are adjusted to obtain the image processing network.
- before obtaining the loss of the network to be trained based on the difference between the fused sample image and the supervision data, the training process further includes:
- the obtaining the loss of the network to be trained based on the difference between the fused sample image and the supervision data includes:
- the loss of the network to be trained is obtained.
- before obtaining the loss of the network to be trained based on the first difference and the second difference, the training process further includes:
- a third difference is obtained according to the difference between the highlight pixel and the third pixel, wherein the highlight pixel and the third pixel are points with the same name;
- the obtaining the loss of the network to be trained based on the first difference and the second difference includes:
- the loss of the network to be trained is obtained.
- before obtaining the loss of the network to be trained based on the first difference, the second difference, and the third difference, the training process further includes:
- the obtaining the loss of the network to be trained according to the first difference, the second difference, and the third difference includes:
- the loss of the network to be trained is obtained.
- an image processing device in a second aspect, includes:
- the acquiring part is configured to acquire a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure level of the first image to be processed is different from the exposure level of the second image to be processed;
- the first processing part is configured to perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a feature image
- the second processing part is configured to obtain a first weight of a first pixel and a second weight of a second pixel according to the first characteristic image, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed with the same name as the first pixel;
- the third processing part is configured to perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight to obtain a fused image.
- the first processing part is further configured to:
- Non-linear transformation is performed on the third characteristic image to obtain the first characteristic image.
- the first processing part is further configured to:
- Up-sampling processing is performed on the fourth characteristic image to obtain the first characteristic image.
- the device further includes:
- the fourth processing part is configured to perform splicing processing on the first image to be processed and the second image to be processed to obtain the third image to be processed, and to perform normalization processing on the pixel values in the first image to be processed and in the second image to be processed.
- the first processing part is also configured to:
- the third processing part is further configured to:
- the first weight is obtained according to the pixel value of the third pixel, wherein the third pixel is a pixel in the first characteristic image, and the position of the third pixel in the first characteristic image is the same as the position of the first pixel in the third image to be processed;
- the second weight is obtained according to the pixel value of the fourth pixel, wherein the fourth pixel is a pixel in the first characteristic image, and the position of the fourth pixel in the first characteristic image is the same as the position of the second pixel in the third image to be processed.
- the image processing method executed by the device is applied to an image processing network
- the device further includes: a training part configured to train the image processing network, and the training process of the image processing network includes:
- acquire the first sample image, the second sample image, the supervision data, and the network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure level of the first sample image is different from the exposure level of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image;
- the parameters of the network to be trained are adjusted to obtain the image processing network.
- the training part is further configured to:
- the loss of the network to be trained is obtained.
- the training part is further configured to:
- the pixel points in the fused sample image whose pixel values are greater than or equal to the highlight pixel point threshold are determined as highlight pixels;
- a third difference is obtained according to the difference between the highlight pixel and the third pixel, wherein the highlight pixel and the third pixel are points with the same name;
- the loss of the network to be trained is obtained.
- the training part is further configured to:
- before obtaining the loss of the network to be trained based on the first difference, the second difference, and the third difference, obtain a fourth difference based on the difference between the gradient of the fused sample image and the gradient of the supervision data;
- the loss of the network to be trained is obtained.
- a processor is provided, and the processor is configured to execute a method as described in the above first aspect and any one of its possible implementation manners.
- an electronic device, including: a processor, a sending device, an input device, an output device, and a memory, where the memory is configured to store computer program code, the computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method as described in the first aspect and any one of its possible implementation manners.
- a computer-readable storage medium stores a computer program.
- the computer program includes program instructions.
- the processor executes the method as described in the first aspect and any one of its possible implementation manners.
- a computer program including computer-readable code, which, when the computer-readable code runs in an electronic device, causes a processor in the electronic device to execute the method as described in the above first aspect and any one of its possible implementation manners.
- Figures 1a and 1b are schematic diagrams of exemplary bracketed exposure images provided by an embodiment of the disclosure;
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the disclosure
- FIG. 3 is an exemplary schematic diagram of pixels at the same position provided by an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of an exemplary point with the same name provided by an embodiment of the present disclosure.
- FIG. 5 is a schematic flowchart of another image processing method provided by an embodiment of the disclosure.
- FIG. 6 is an exemplary schematic diagram of splicing images in the channel dimension provided by the embodiments of the disclosure.
- FIG. 7 is an exemplary schematic diagram of processing a third image to be processed to obtain a first characteristic image according to an embodiment of the disclosure
- Fig. 8 is a schematic structural diagram of an exemplary image processing network provided by an embodiment of the present disclosure.
- FIG. 9 is a schematic flowchart of another image processing method provided by an embodiment of the disclosure.
- FIG. 10 is a schematic structural diagram of an image processing device provided by an embodiment of the disclosure.
- FIG. 11 is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the disclosure.
- the processed image is obtained by adjusting the exposure of the reference image, and fusion processing is performed on the reference image and the processed image to improve the quality of the reference image and obtain the fused image. For example (Example 1), suppose the exposure of the reference image is 2EV. By adjusting the exposure of the reference image down by 1EV, a processed image with an exposure of 1EV is obtained. The reference image and the processed image are fused to obtain a fused image whose exposure lies within [1EV, 2EV].
- [ ⁇ , ⁇ ] represents a value interval greater than or equal to ⁇ and less than or equal to ⁇ .
- in Example 1, the content of the reference image is the same as the content of the processed image, but the exposure level of the reference image is different from the exposure level of the processed image.
- the content of the resulting fused image is the same as the content of the reference image, but the exposure of the fused image is different from the exposure of the reference image. In this way, by fusing the reference image and the processed image, the effect of adjusting the exposure of the reference image can be achieved, thereby improving the quality of the reference image.
- the reference image and the processed image in Example 1 are the bracketed images.
- the image type may be a RAW image or a YUV image or RGB image after image signal processing (Image Signal Processing, ISP), etc., or may also be other image types, which are not limited here.
- the content of image a, the content of image b, and the content of image c are all the same, the exposure of image a is 1EV, the exposure of image b is -1EV, and the exposure of image c is 2EV, then image a, image b and image c are bracketed images.
- the image shown in FIG. 1a and the image shown in FIG. 1b are two images with the same content and different exposures, that is, the image shown in FIG. 1a and the image shown in FIG. 1b are bracketed exposure images.
- in the process of bracketed image fusion, by setting different weights for different images and performing a weighted summation of the bracketed images based on those weights, an image with appropriate exposure can be obtained without changing the image content (see Example 2 below).
- the adjustment range of the exposure required for different pixels is different.
- for example, pixel point A is dark because its exposure amount is small, while pixel point B is bright because its exposure amount is large; the exposure therefore needs to be increased to raise the brightness of pixel point A, and decreased to lower the brightness of pixel point B.
- the traditional method does not consider the brightness of different pixels in the image, resulting in a low-quality fused image.
- in Example 2, in the process of fusing the bracketed images, whether a pixel is bright or dark, the weight of the pixel in the reference image is always 0.6 and the weight of the pixel in the processed image is always 0.4.
- the embodiments of the present disclosure provide a technical solution that can determine the weight of the pixel based on the brightness of the pixel during the process of fusing the bracketed image, thereby improving the quality of the fused image.
- the execution subject of the embodiments of the present disclosure is an image processing device.
- the image processing device may be one of the following: a mobile phone, a computer, a server, and a tablet computer.
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
- the first image to be processed and the second image to be processed are bracketed exposure images.
- the image processing apparatus receives the first image to be processed and the second image to be processed that are input by the user through the input component.
- the above-mentioned input components include: a keyboard, a mouse, a touch screen, a touch pad, and an audio input device.
- the image processing apparatus receives the first image to be processed and the second image to be processed sent by the first terminal.
- the first terminal may be any of the following: a mobile phone, a computer, a tablet computer, a server, and a wearable device.
- after acquiring the first image to be processed, the image processing device can adjust the exposure of the first image to be processed to obtain the second image to be processed. For example, the EV of the first image to be processed acquired by the image processing apparatus is 2; the image processing device reduces the EV of the first image to be processed by one to obtain the second image to be processed, whose EV is 1.
- the feature extraction processing may be convolution processing, pooling processing, a combination of convolution processing and pooling processing, or other processing that can extract features, but is not limited thereto.
- the feature extraction processing can be implemented by a convolutional neural network, or can be implemented by a feature extraction model, which is not limited in the present disclosure.
- the feature extraction process is implemented through a convolutional neural network.
- the convolutional neural network is trained by using the bracketed image with the annotation information as the training data, so that the trained convolutional neural network can complete the feature extraction processing of the first image to be processed and the second image to be processed.
- the annotation information of the image in the training data may be the brightness information of the pixels in the bracketed exposure image.
- during training, the convolutional neural network extracts the characteristic image of the bracketed image as the training result; the annotation information is taken as the supervision information, the training result is supervised, and the parameters of the convolutional neural network are adjusted to complete the training of the convolutional neural network.
- the trained convolutional neural network can be used to process the first image to be processed and the second image to be processed to obtain a first feature image, where the first feature image carries the light and dark information of the pixels in the first image to be processed and the light and dark information of the pixels in the second image to be processed.
- the first image to be processed and the second image to be processed are convolved layer by layer through at least two convolutional layers to complete the feature extraction processing and obtain feature images of the first image to be processed and the second image to be processed.
- the convolutional layers in the at least two convolutional layers are serially connected in sequence, that is, the output of the previous convolutional layer is the input of the next convolutional layer, and during the feature extraction processing of the first image to be processed and the second image to be processed, the content and semantic information extracted by each convolutional layer are different.
- specifically, the feature extraction processing abstracts the features of the first image to be processed step by step and gradually discards relatively secondary feature information, where relatively secondary feature information refers to feature information other than the light and dark information of the pixels; therefore, the later a feature image is extracted, the smaller its size and the more concentrated its content and semantic information.
- by convolving the first image to be processed and the second image to be processed step by step, a first feature image carrying the light and dark information of the pixels in the first image to be processed and in the second image to be processed can be obtained, while the sizes of the first image to be processed and the second image to be processed are reduced, which lowers the data processing volume of the image processing device and increases its processing speed.
- the implementation process of the above convolution processing is as follows: the convolution kernel slides over the first image to be processed and the second image to be processed, and the pixel corresponding to the central element of the convolution kernel is taken as the target pixel; the pixel values covered by the kernel are multiplied by the corresponding values of the convolution kernel, and the products are summed to obtain the convolved pixel value, which is used as the pixel value of the target pixel; by sliding over the whole of the first image to be processed and the second image to be processed, the pixel values of all pixels are updated and the convolution processing is completed, yielding the characteristic images of the first image to be processed and the second image to be processed.
- the convolution kernels in the above at least two convolutional layers are all of size 3×3, and the stride of the convolution processing is 2.
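As a concrete illustration of the sliding-window convolution described above, the following is a minimal single-channel sketch in Python; the helper name and test data are hypothetical, and padding and multi-channel handling are omitted:

```python
import numpy as np

def conv2d_single_channel(image, kernel, stride=2):
    """Slide the kernel over the image; at each position, multiply the
    overlapping pixel values by the kernel values, sum the products, and
    use the sum as the output pixel value (hypothetical helper)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride:i * stride + kh,
                           j * stride:j * stride + kw]
            out[i, j] = np.sum(window * kernel)
    return out

# 3x3 kernel with stride 2, matching the parameters stated above
image = np.arange(36, dtype=np.float32).reshape(6, 6)
kernel = np.ones((3, 3), dtype=np.float32) / 9.0
feature = conv2d_single_channel(image, kernel, stride=2)  # shape (2, 2)
```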
- the first pixel is any pixel in the first image to be processed
- the second pixel is the pixel in the second image to be processed
- the first pixel and the second pixel are points with the same name, which means that the physical point represented by the first pixel is the same as the physical point represented by the second pixel.
- the two images shown in FIG. 4 are bracketed exposure images, in which pixel point A and pixel point B have the same name for each other, and pixel point C and pixel point D have the same name for each other.
- the first weight is the weight of the first pixel in the subsequent process of fusing the first image to be processed and the second image to be processed.
- the second weight is the weight of the second pixel in the subsequent process of fusing the first image to be processed and the second image to be processed.
- the pixel value in the first feature image carries the light and dark information of the pixel. Therefore, the weight of the first pixel can be determined as the first weight according to the pixel value of the pixel corresponding to the first pixel in the first characteristic image (hereinafter referred to as the first reference pixel). According to the pixel value of the pixel point corresponding to the second pixel point in the first characteristic image (hereinafter referred to as the second reference pixel point), the weight of the second pixel point is determined as the second weight.
- the third image to be processed is an image obtained by concatenating the first image to be processed and the second image to be processed in the channel dimension.
- Performing feature extraction processing on the first image to be processed and the second image to be processed can be implemented by performing feature extraction processing on the third image to be processed.
- the feature extraction process is performed on the third image to be processed, and the size of the first feature image obtained is the same as the size of the third image to be processed.
- the position of the first reference pixel in the first feature image is the same as the position of the first pixel in the first image to be processed, and the position of the second reference pixel in the first feature image is the same as the position of the second pixel in the second image to be processed.
- the first feature image includes a first feature sub-image and a second feature sub-image, where the first feature sub-image is obtained by performing feature extraction processing on the first image to be processed, and the second feature sub-image is obtained by performing feature extraction processing on the second image to be processed.
- the pixel point corresponding to the first pixel point in the first feature sub-image is called the first reference pixel point.
- the position of the second reference pixel in the second feature sub-image is the same as the position of the second pixel in the second image to be processed.
- the pixels at the same position in two images can be seen in Fig. 3: the position of pixel A11 in image A is the same as the position of pixel B11 in image B, and likewise for A12 and B12, A13 and B13, A21 and B21, A22 and B22, A23 and B23, A31 and B31, A32 and B32, and A33 and B33.
- the first weight is w 1
- the second weight is w 2
- the pixel value of the pixel corresponding to the first pixel in the first feature image is p1, and the pixel value of the pixel corresponding to the second pixel in the first feature image is p2.
- w1, w2, p1, and p2 satisfy a formula that normalizes the feature responses p1 and p2 into the weights w1 and w2, such that w1 + w2 = 1.
- the first weight and the second weight are used to perform a weighted summation of the pixel value of the first pixel and the pixel value of the second pixel, so as to achieve the fusion of the first pixel and the second pixel. Specifically, the fused pixel value can be written as O = Σi Wi·Ii, where:
- O represents the fused image
- W i represents the weight of pixel i
- I i represents the pixel value of pixel i.
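A minimal sketch of the per-pixel weighted fusion follows, assuming the weights are obtained by normalizing the feature responses so that w1 + w2 = 1; the softmax form of that normalization is an assumption, since the patent's exact weight formula is not reproduced in this text:

```python
import numpy as np

def fuse_pixelwise(img1, img2, p1, p2):
    # Softmax-style normalisation of the feature responses into weights
    # (assumption: the source's own weight formula is not reproduced here).
    w1 = np.exp(p1) / (np.exp(p1) + np.exp(p2))
    w2 = 1.0 - w1  # weights of points with the same name sum to 1
    # Weighted summation, matching O = sum_i W_i * I_i above
    return w1 * img1 + w2 * img2
```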
- suppose the pixel value of the first pixel is 130, the pixel value of the second pixel is 30, the first weight is 0.4, and the second weight is 0.6. Using the first weight and the second weight to perform a weighted summation of the pixel value of the first pixel and the pixel value of the second pixel gives the pixel value of the fourth pixel in the fused image: 0.4 × 130 + 0.6 × 30 = 70.
- this embodiment takes the first pixel and the second pixel as the processing objects and describes how the pixel value of the fourth pixel is obtained from the pixel value of the first pixel and the pixel value of the second pixel; in practical applications, the pixel values of all pixels in the fused image can be obtained in the same way from the pixel values of all points with the same name in the first image to be processed and the second image to be processed.
- suppose the first image to be processed includes pixel point a and pixel point b, and the second image to be processed includes pixel point c and pixel point d, where pixel point a and pixel point c are points with the same name, pixel point b and pixel point d are points with the same name, the pixel value of pixel point a is 40, the pixel value of pixel point b is 60, the pixel value of pixel point c is 80, and the pixel value of pixel point d is 30.
- the fused image includes pixel point e and pixel point f, where pixel point e has the same name as pixel point a and pixel point c, and pixel point f has the same name as pixel point b and pixel point d.
- both step 202 and step 203 can be implemented by a convolutional neural network.
- the convolutional neural network is trained by using the bracketed images as training data and a supervised image as supervision data, so that the trained convolutional neural network can complete the feature extraction processing of the first image to be processed and the second image to be processed, where the content of the supervised image is the same as the content of the training data but the exposure of the supervised image is more appropriate than the exposure of the training data.
- the convolutional neural network extracts feature images from the bracketed image, and determines the weights of pixels in the bracketed image based on the feature images.
- based on these weights, the bracketed images are fused to obtain the image produced during training.
- the loss of the convolutional neural network is determined, and the parameters of the convolutional neural network are adjusted based on the loss to complete the training of the convolutional neural network.
- the trained convolutional neural network can be used to process the first image to be processed and the second image to be processed to obtain the first weight of the first pixel and the second weight of the second pixel; based on the first weight and the second weight, the first image to be processed and the second image to be processed are fused to obtain the fused image.
- the bracketed image includes two images, that is, the first to-be-processed image and the second to-be-processed image.
- the bracketed images can also include three or more images; in that case, the three or more images can be processed to obtain a fused image whose exposure is more appropriate than the exposure of any one of the bracketed images.
- for example, the bracketed images include image a, image b, and image c.
- feature extraction processing is performed on the first image to be processed and the second image to be processed to obtain the light and dark information of the pixels in the first image to be processed and the light and dark information of the pixels in the second image to be processed.
- based on this light and dark information, the weights of the pixels in the first image to be processed and the weights of the pixels in the second image to be processed are obtained, so that pixels with different degrees of brightness can receive different weights; by performing fusion processing on the first image to be processed and the second image to be processed based on these weights, the quality of the obtained fused image can be improved.
- FIG. 5 is a schematic flowchart of a possible implementation method of step 202 according to an embodiment of the present disclosure.
- the stitching processing is stitching in the channel dimension, that is, the width (i.e. the number of columns) of the third image to be processed is the same as the width of the first image to be processed and the width of the second image to be processed, the height (i.e. the number of rows) of the third image to be processed is the same as the height of the first image to be processed and the height of the second image to be processed, and the number of channels of the third image to be processed is the sum of the number of channels of the first image to be processed and the number of channels of the second image to be processed.
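A minimal sketch of the channel-dimension splicing, assuming two 3-channel images of the same spatial size (the array shapes are illustrative):

```python
import numpy as np

# Channel-dimension splicing: two H x W x C images become one H x W x 2C image.
first = np.random.rand(9, 9, 3).astype(np.float32)   # first image to be processed
second = np.random.rand(9, 9, 3).astype(np.float32)  # second image to be processed
third = np.concatenate([first, second], axis=-1)     # third image to be processed
assert third.shape == (9, 9, 6)  # width and height unchanged, channels summed
```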
- since the value range of the pixel values in the first image to be processed may be different from the value range of the pixel values in the second image to be processed, the same pixel value may represent different brightness levels in the two images, which brings difficulties to the image processing device when it processes the first image to be processed and the second image to be processed together.
- for example (Example 3), the first image to be processed is an image captured by imaging device A
- the pixel value range of the first image to be processed is [0,255]
- the second image to be processed is an image captured by imaging device B
- the value range of the pixel values of the second image to be processed is [0,1000], where imaging device A and imaging device B may each be a camera or a video camera.
- continuing with Example 3 above, the brightness level represented by a pixel with pixel value 200 in the first image to be processed is different from the brightness level represented by a pixel with pixel value 200 in the second image to be processed.
- therefore, before splicing the first image to be processed and the second image to be processed, the pixel values of the first image to be processed and the pixel values of the second image to be processed can each be normalized to [0,1], obtaining the normalized first image to be processed and the normalized second image to be processed.
- for example, the first image to be processed includes pixel point a, the pixel value of pixel point a is 153, and the value range of the pixel values in the first image to be processed is [0,255]; after normalization, the pixel value of pixel point a is 153/255 = 0.6.
- the second image to be processed includes pixel point b, the pixel value of pixel point b is 320, and the value range of the pixel values in the second image to be processed is [0,800]; after normalization, the pixel value of pixel point b is 320/800 = 0.4.
- step 501 specifically includes:
- the first image to be processed after the normalization process and the second image to be processed after the normalization process are spliced to obtain a third image to be processed.
- the splicing processing here is likewise splicing in the channel dimension, that is, the width and height of the third image to be processed are the same as the width and height of the normalized first image to be processed and of the normalized second image to be processed, and the number of channels of the third image to be processed is the sum of their numbers of channels.
- the feature information of the pixels in the third image to be processed can be extracted by performing convolution processing on the third image to be processed.
- for the convolution processing, refer to the implementation process of the convolution processing in step 202, where the third image to be processed corresponds to the first image to be processed and the second image to be processed in step 202, and the second characteristic image corresponds to the first characteristic image in step 202.
- after the convolution processing, the data distribution changes, that is, the data distribution in the second feature image is different from the data distribution in the third image to be processed, which brings difficulties to the subsequent processing of the second feature image; therefore, before the next processing is performed on the second feature image, the second feature image may be normalized so that the data distribution in the second feature image is close to the data distribution in the third image to be processed.
- the BN layer processes the second feature image as follows: first the mean of the second feature image is determined, μ = (1/m)·Σᵢ xᵢ; then the variance of the second feature image is determined, σ² = (1/m)·Σᵢ (xᵢ − μ)²; each value is normalized as x̂ᵢ = (xᵢ − μ)/√(σ² + ε); finally, based on the zoom variable γ and the translation variable β, the third feature image is obtained as yᵢ = γ·x̂ᵢ + β.
- the normalized image is non-linearly transformed by an activation function so that complex mappings can be processed.
- the third feature image is substituted into a parametric rectified linear unit (PReLU) to implement the nonlinear transformation of the third feature image and obtain the first feature image, where PReLU(x) = x for x > 0 and PReLU(x) = a·x otherwise, with a a learnable parameter.
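A minimal NumPy sketch of the normalization and nonlinear transformation just described; the epsilon term and the initial PReLU slope are standard assumptions, not values given in the source:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalisation as in the formulas above: subtract the mean, divide by
    the standard deviation, then scale by gamma and shift by beta."""
    mu = x.mean()
    var = x.var()
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def prelu(x, a=0.25):
    """Parametric ReLU: identity for positive inputs, slope a for negative
    ones; a = 0.25 is an assumed initial value, learned during training."""
    return np.where(x > 0, x, a * x)

second_feature = np.random.randn(4, 4).astype(np.float32)
third_feature = batch_norm(second_feature, gamma=1.0, beta=0.0)
first_feature = prelu(third_feature)
```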
- the pixel value of each pixel in the first feature image contains light and dark information. According to the pixel value of a pixel in the first feature image, the weight of a pixel in the first image to be processed or the weight of a pixel in the second image to be processed can be obtained. The weight of a pixel.
- during the feature extraction processing, the size of the third image to be processed may be reduced, so that the size of the second feature image is smaller than the size of the third image to be processed; in that case, the number of weights obtained based on the resulting characteristic image is smaller than the number of pixels in the third image to be processed, and the weights of some pixels in the third image to be processed cannot be determined.
- the size of the first characteristic image obtained is smaller than the size of the third to-be-processed image.
- for example, the first feature image includes 4 pixels, so 4 weights can be obtained according to the pixel values of these 4 pixels, while the first image to be processed and the second image to be processed shown in Figure 6 each include 9 pixels; obviously, the weights of all pixels in the first image to be processed and the second image to be processed cannot be determined based on the first feature image.
- when the size of the first characteristic image is smaller than the size of the third image to be processed, step 504 specifically includes the following steps:
- for the implementation process of this step, refer to the implementation process of "performing nonlinear transformation on the third characteristic image to obtain the first characteristic image" in step 404; it should be understood that in this step, the nonlinear transformation of the third characteristic image yields the fourth characteristic image rather than the first characteristic image.
- the size of the fourth feature image is the same as the size of the first feature image, and the size of the fourth feature image is also smaller than the third image to be processed. Therefore, the size of the fourth feature image needs to be increased so that the size of the fourth feature image is the same as the size of the third image to be processed.
- up-sampling processing is performed on the fourth characteristic image to obtain the first characteristic image.
- the above-mentioned up-sampling processing may be one of the following: bilinear interpolation processing, nearest neighbor interpolation processing, higher-order interpolation, and deconvolution processing.
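A quick way to try the up-sampling step is interpolation via scipy; the zoom factor of 2 matches the halving caused by the stride-2 first layer (the array sizes are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

# Fourth feature image at half resolution (after the stride-2 convolution)
fourth_feature = np.random.randn(5, 5).astype(np.float32)

# order=0 -> nearest-neighbour interpolation, order=1 -> (bi)linear
first_feature = zoom(fourth_feature, 2, order=1)  # shape (10, 10)
```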
- the feature information of the pixels in the third to-be-processed image is extracted to obtain the second feature image.
- Normalization processing and nonlinear transformation are sequentially performed on the second feature image to improve the effectiveness of obtaining the information in the second feature image.
- FIG. 8 is a schematic structural diagram of an exemplary image processing network provided by an embodiment of the present disclosure. As shown in Figure 8, the network layers in the image processing network are connected in series, including twelve convolutional layers and one upsampling layer.
- the convolution kernels in the first, third, fifth, seventh, ninth, and eleventh convolutional layers are all of size 3×3, and the convolution kernels in the second, fourth, sixth, eighth, tenth, and twelfth convolutional layers are all of size 1×1.
- the number of convolution kernels in the first, second, fourth, sixth, eighth, and tenth convolutional layers is 6, and the number of convolution kernels in the third, fifth, seventh, ninth, and eleventh convolutional layers is also 6.
- the number of convolution kernels in the twelfth convolutional layer is K, where K is a positive integer; that is, the embodiment of the present disclosure does not limit the number of convolution kernels in the twelfth convolutional layer.
- the stride of the convolution kernel in the first convolutional layer is 2, and the stride of the convolution kernels in the remaining eleven convolutional layers is 1.
- each convolutional layer except the twelfth convolutional layer is connected with a normalization (batchnorm, BN) layer and an activation layer (not shown in Figure 8)
- the BN layer is used to normalize the input data
- the activation layer is used to activate the input data.
- for example, the data output by the first convolutional layer is input to the BN layer and processed by it to obtain the first intermediate data; the first intermediate data is input to the activation layer and processed by it to obtain the second intermediate data; and the second intermediate data is input to the second convolutional layer.
- the image processing network performs splicing processing on the input first to-be-processed image and the second to-be-processed image to obtain a third to-be-processed image.
- the third to-be-processed image is sequentially processed by the first convolutional layer, the second convolutional layer, ..., and the twelfth convolutional layer to obtain a fourth characteristic image.
- the fourth feature image is input to the up-sampling layer, and the up-sampling process is performed on the fourth feature image through the up-sampling layer to obtain the first feature image.
- the weight of each pixel in the first image to be processed can be determined, and the weight of each pixel in the second image to be processed can be determined.
- the first image to be processed and the second image to be processed are fused to obtain a fused image.
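Under the stated parameters (twelve serial convolutional layers alternating 3×3 and 1×1 kernels, six kernels per layer except K in the twelfth, stride 2 only in the first layer, BN and PReLU after every layer except the twelfth, and a final up-sampling layer), a hedged PyTorch sketch of the Figure 8 network might look as follows; the input channel count (two spliced 3-channel images) and K = 2 are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionWeightNet(nn.Module):
    """Sketch of the image processing network of Figure 8; layer widths,
    kernel sizes, and strides follow the text, while in_channels and k
    are assumptions."""
    def __init__(self, in_channels=6, k=2):
        super().__init__()
        layers = []
        c = in_channels
        for idx in range(12):
            ksize = 3 if idx % 2 == 0 else 1   # odd-numbered layers 3x3, even-numbered 1x1
            stride = 2 if idx == 0 else 1      # only the first layer has stride 2
            out_c = k if idx == 11 else 6      # twelfth layer has K kernels
            layers.append(nn.Conv2d(c, out_c, ksize, stride=stride,
                                    padding=ksize // 2))
            if idx < 11:                       # no BN/activation after the twelfth layer
                layers.append(nn.BatchNorm2d(out_c))
                layers.append(nn.PReLU())
            c = out_c
        self.body = nn.Sequential(*layers)

    def forward(self, third_image):
        fourth_feature = self.body(third_image)
        # up-sampling layer restores the spatial size halved by the stride-2 layer
        return F.interpolate(fourth_feature, scale_factor=2,
                             mode='bilinear', align_corners=False)
```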
- the embodiments of the present disclosure also provide a training method for an image processing network.
- FIG. 9 is a schematic flowchart of an image processing neural network training method provided by an embodiment of the present disclosure.
- the execution subject of this embodiment may be an image processing device, or may be a device other than an image processing device; that is, the execution subject of the training method of the image processing neural network may be the same as or different from the execution subject that uses the image processing network to process the image to be processed.
- the embodiments of the present disclosure do not limit the execution subject of this embodiment.
- the execution subject of this embodiment is referred to as the training device below.
- the training device can be any of the following: mobile phones, computers, tablets, and servers.
- the first sample image and the second sample image are bracketed exposure images.
- the above-mentioned supervision data is an image obtained by fusing the first sample image and the second sample image (hereinafter referred to as a reference image), wherein the content of the reference image is the same as the content of the first sample image and the second sample image, However, the exposure of the reference image is more appropriate than the exposure of the first sample image and the second sample image.
- the network structure of the network to be trained is the same as the network structure of the image processing network. For details, refer to FIG. 8.
- the training device receives the network to be trained input by the user through the input component.
- the above-mentioned input components include: a keyboard, a mouse, a touch screen, a touch pad, and an audio input device.
- the training device receives the network to be trained sent by the second terminal.
- the foregoing second terminal may be any one of the following: a mobile phone, a computer, a tablet computer, a server, and a wearable device.
- the network to be trained is used to process the first sample image and the second sample image to obtain a fused sample image.
- the content of the fused sample image is the same as the content of the first sample image and the second sample image, but the exposure of the fused sample image is different from the exposure of the first sample image and from the exposure of the second sample image.
- ‖y1 − y2‖1 is the 1-norm of y1 − y2; ‖y1 − y2‖2 is the 2-norm of y1 − y2; ‖y1 − y2‖F is the F-norm of y1 − y2.
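For reference, these are the standard definitions of the three norms (writing y for y1 − y2):

```latex
\|y\|_1 = \sum_i \lvert y_i \rvert, \qquad
\|y\|_2 = \Bigl(\sum_i y_i^2\Bigr)^{1/2}, \qquad
\|Y\|_F = \Bigl(\sum_i \sum_j Y_{ij}^2\Bigr)^{1/2}
```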
- the loss of the network to be trained can be determined based on the difference between the fused sample image and the supervised data.
- for example, the difference may take the form k·‖y1 − y2‖n, where n is a real number and k is a positive number.
- the loss of the network to be trained is determined.
- the parameters of the network to be trained are adjusted based on the loss of the network to be trained to obtain an image processing network, which can reduce the difference between the fused sample image and the reference image obtained through the image processing network, thereby improving the image The quality of the fused image.
- before step 903 is performed, the following steps may be performed:
- suppose the gradient of the reference image is ∇y1 and the gradient of the fused sample image is ∇y2; the second difference is L2, where L2 is obtained from the difference between ∇y1 and ∇y2 (for example, L2 = k·‖∇y1 − ∇y2‖n, with n a real number and k a positive number).
- step 903 specifically includes the following steps:
- for an implementation manner of determining the difference between the fused sample image and the supervision data, refer to step 903.
- in these formulas too, n is a real number and k is a positive number.
- to obtain the loss of the network to be trained, suppose that the first difference is L1, the second difference is L2, and the loss of the network to be trained is Lt; Lt is obtained from L1 and L2 (for example, as their weighted sum).
- the loss of the network to be trained is determined.
- the parameters of the network to be trained are adjusted based on the loss of the network to be trained to obtain an image processing network, which can reduce the difference between the fused sample image and the reference image obtained through the image processing network.
- the loss of the network to be trained is determined.
- adjusting the parameters of the network to be trained based on the loss of the network to be trained to obtain the image processing network, and using the image processing network to process the first sample image and the second sample image to obtain the fused sample image, can make the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel areas whose gradient points in the opposite direction can be adjusted to match the gradient direction of the reference image, which makes the edges in the fused sample image smoother and the fusion effect more natural, thereby improving the quality of the fused image obtained by using the image processing network.
- before step 93 is performed, the following steps may be performed:
- the highlight pixel threshold is a positive integer, and the specific value can be adjusted according to the user's usage requirements. In some possible implementation manners, the highlight pixel threshold is 200.
- the third pixel is a pixel in the reference image, and the third pixel and the highlighted pixel have the same name as each other. According to the difference between the highlighted pixel and the third pixel, the third difference can be obtained.
- step 93 specifically includes the following steps:
- suppose the first difference is L1, the second difference is L2, the third difference is L3, and the loss of the network to be trained is Lt; Lt is obtained from L1, L2, and L3 (for example, as their weighted sum).
- the loss of the network to be trained is determined.
- the parameters of the network to be trained are adjusted based on the loss of the network to be trained to obtain an image processing network, which can reduce the difference between the fused sample image and the reference image obtained through the image processing network.
- based on the first difference, the second difference, and the third difference, the loss of the network to be trained is determined.
- adjusting the parameters of the network to be trained based on this loss to obtain the image processing network, and using the image processing network to process the first sample image and the second sample image to obtain the fused sample image, can make the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel areas whose gradient points in the opposite direction can be adjusted to match the gradient direction of the reference image, which makes the edges in the fused sample image smoother and the fusion effect more natural. Determining the loss based on the third difference additionally allows the highlight pixel area in the fused sample image to be adjusted, so that the quality of the highlight pixel area in the fused sample image is higher, thereby improving the quality of the fused image obtained by using the image processing network.
- before step 96 is performed, the following steps may be performed:
- the fourth difference is obtained.
- suppose the gradient of the fused sample image is ∇y2 and the gradient of the reference image is ∇y1; the fourth difference is L4, where L4 is obtained from the difference between ∇y2 and ∇y1 (in particular from the difference between their magnitudes, in keeping with the gradient-size discussion below).
- step 96 specifically includes the following steps:
- suppose the first difference is L1, the second difference is L2, the third difference is L3, the fourth difference is L4, and the loss of the network to be trained is Lt; Lt is obtained from L1, L2, L3, and L4 (for example, as their weighted sum).
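Putting the four differences together, a hedged PyTorch sketch of the composite training loss might look as follows; the finite-difference gradient operator, the highlight-threshold scaling, and the equal weighting of the terms are all assumptions, since the patent's own formulas are not reproduced in this text:

```python
import torch
import torch.nn.functional as F

def image_gradient(img):
    """Forward finite-difference gradients (the gradient operator is an
    assumption; the source does not specify it)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def total_loss(fused, reference, highlight_threshold=200 / 255.0,
               w2=1.0, w3=1.0, w4=1.0):
    # First difference: pixel-wise difference between fused image and supervision data
    l1 = F.l1_loss(fused, reference)
    # Second difference: difference between the gradients (direction-sensitive)
    fdx, fdy = image_gradient(fused)
    rdx, rdy = image_gradient(reference)
    l2 = F.l1_loss(fdx, rdx) + F.l1_loss(fdy, rdy)
    # Third difference: difference on highlight pixels only
    # (threshold 200 from the text, rescaled to [0,1] images - an assumption)
    mask = (fused >= highlight_threshold).float()
    l3 = (mask * (fused - reference).abs()).mean()
    # Fourth difference: difference between gradient magnitudes
    l4 = F.l1_loss(fdx.abs(), rdx.abs()) + F.l1_loss(fdy.abs(), rdy.abs())
    return l1 + w2 * l2 + w3 * l3 + w4 * l4
```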
- the loss of the network to be trained is determined.
- the parameters of the network to be trained are adjusted based on the loss of the network to be trained to obtain an image processing network, which can reduce the difference between the fused sample image and the reference image obtained through the image processing network.
- based on the first difference, the second difference, the third difference, and the fourth difference, the loss of the network to be trained is determined.
- adjusting the parameters of the network to be trained based on this loss to obtain the image processing network, and using the image processing network to process the first sample image and the second sample image to obtain the fused sample image, can make the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel areas whose gradient points in the opposite direction can be adjusted to match the gradient direction of the reference image, which makes the edges in the fused sample image smoother and the fusion effect more natural.
- the subsequent processing adjust the parameters of the network to be trained based on the loss of the network to be trained to obtain an image processing network.
- the gradient direction of the fused sample image be compared with the reference image
- the same gradient direction can also make the gradient size of the fused sample image the same as the gradient size of the reference image, further making the edges in the fused sample image smoother and the fusion effect more natural. Thereby improving the quality of the fused image obtained by using the image processing network.
- Based on the loss of the network to be trained, the network is trained by back-propagating gradients until convergence; the training of the network to be trained is then complete, and the image processing network is obtained.
- The embodiments of the present disclosure also provide a possible application scenario: three landscape images have the same content and different exposure levels. When the technical solution provided by the embodiments of the present disclosure is applied to a mobile phone, the phone can use it to process the three landscape images and obtain a fused landscape image whose exposure is more appropriate than the exposure of any of the three original landscape images.
- The writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- FIG. 10 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
- The device 1 includes: an acquiring part 11, a first processing part 12, a second processing part 13, a third processing part 14, a fourth processing part 15, and a training part 16, in which:
- the acquiring part 11 is configured to acquire a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure level of the first image to be processed is different from the exposure level of the second image to be processed;
- the first processing part 12 is configured to perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a feature image;
- the second processing part 13 is configured to obtain a first weight of a first pixel and a second weight of a second pixel according to the first feature image, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel;
- the third processing part 14 is configured to perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight to obtain a fused image.
- the first processing part 12 is further configured to: perform stitching processing on the first image to be processed and the second image to be processed to obtain a third image to be processed; extract feature information of the pixels in the third image to be processed to obtain a second feature image; perform normalization processing on the second feature image to obtain a third feature image; and perform non-linear transformation processing on the third feature image to obtain the first feature image.
- in the case where the size of the first feature image is smaller than the size of the third image to be processed, the first processing part 12 is further configured to: perform non-linear transformation processing on the third feature image to obtain a fourth feature image; and perform up-sampling processing on the fourth feature image to obtain the first feature image.
- the device 1 further includes: a fourth processing part 15, configured to, before the stitching processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, normalize the pixel values in the first image to be processed to obtain a normalized first image to be processed, and normalize the pixel values in the second image to be processed to obtain a normalized second image to be processed;
- the first processing part 12 is also configured to: perform stitching processing on the normalized first image to be processed and the normalized second image to be processed to obtain the third image to be processed;
- the third processing part 14 is further configured to: obtain the first weight according to the pixel value of a third pixel, wherein the third pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the first pixel in the third image to be processed; and obtain the second weight according to the pixel value of a fourth pixel, wherein the fourth pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the second pixel in the third image to be processed.
- the image processing method executed by the apparatus 1 is applied to an image processing network
- the device 1 further includes a training part 16 configured to train the image processing network, and the training process of the image processing network includes:
- acquiring a first sample image, a second sample image, supervision data, and a network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure of the first sample image is different from the exposure of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image; processing the first sample image and the second sample image using the network to be trained to obtain a fused sample image; obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data; and adjusting the parameters of the network to be trained based on the loss to obtain the image processing network.
- the training part 16 is further configured to: before the loss of the network to be trained is obtained according to the difference between the fused sample image and the supervision data, obtain a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data; obtain a second difference according to the difference between the fused sample image and the supervision data; and obtain the loss of the network to be trained according to the first difference and the second difference.
- the training part 16 is further configured to: before the loss of the network to be trained is obtained according to the first difference and the second difference, determine the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels; obtain a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data, wherein each highlight pixel and the corresponding third pixel are same-name points; and obtain the loss of the network to be trained according to the first difference, the second difference, and the third difference.
- the training part 16 is further configured to: before the loss of the network to be trained is obtained based on the first difference, the second difference, and the third difference, obtain a fourth difference based on the difference between the gradient in the fused sample image and the gradient in the supervision data; and obtain the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
- Feature extraction processing is performed on the first image to be processed and the second image to be processed to obtain the light-dark information of the pixels in the first image to be processed and the light-dark information of the pixels in the second image to be processed. Based on this information, the weights of the pixels in the first image to be processed and the weights of the pixels in the second image to be processed are obtained, which achieves the effect of giving pixels of different brightness different weights, so that when the first image to be processed and the second image to be processed are fused based on these weights, the quality of the resulting fused image can be improved.
- the functions or parts included in the device provided in the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments.
- FIG. 11 is a schematic diagram of the hardware structure of an image processing device provided by an embodiment of the disclosure.
- the image processing device 2 includes a processor 21, a memory 22, an input device 23 and an output device 24.
- the processor 21, the memory 22, the input device 23, and the output device 24 are coupled through a connector, and the connector includes various interfaces, transmission lines, or buses, etc., which are not limited in the embodiment of the present disclosure.
- coupling refers to mutual connection in a specific manner, including direct connection or indirect connection through other devices, for example, can be connected through various interfaces, transmission lines, buses, and the like.
- the processor 21 may be one or more graphics processing units (GPUs). When the processor 21 is a GPU, the GPU may be a single-core GPU or a multi-core GPU. In some possible implementation manners, the processor 21 may be a processor group composed of multiple GPUs, and the multiple processors are coupled to each other through one or more buses. In some possible implementation manners, the processor may also be other types of processors, etc., which is not limited in the embodiment of the present disclosure.
- the memory 22 may be used to store computer program instructions and various types of computer program codes including program codes used to execute the solutions of the embodiments of the present disclosure.
- the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or portable read-only memory (compact disc read-only memory, CD-ROM); the memory is used for related instructions and data.
- the input device 23 is used to input data and/or signals, and the output device 24 is used to output data and/or signals; the input device 23 and the output device 24 may be independent devices or an integral device.
- the memory 22 can be used not only to store related instructions, but also to store related data.
- the memory 22 can be used to store the first to-be-processed image and the second to-be-processed image acquired through the input device 23.
- the memory 22 may also be used to store the fused image obtained by the processor 21, etc.
- the embodiment of the present disclosure does not limit the specific data stored in the memory.
- FIG. 11 only shows a simplified design of an image processing device.
- in practical applications, the image processing device may also include other necessary components, including but not limited to any number of input/output devices, processors, and memories, and all image processing devices that can implement the embodiments of the present disclosure fall within the protection scope of the embodiments of the present disclosure.
- a computer program including computer-readable code, which, when the computer-readable code runs in an electronic device, causes a processor in the electronic device to execute the foregoing method.
- the disclosed system, device, and method may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the parts is only a logical function division, and there may be other divisions in actual implementation; for example, multiple parts or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or parts, and may be in electrical, mechanical or other forms.
- the parts described as separate components may or may not be physically separated, and a component displayed as a part may or may not be a physical part; that is, it may be located in one place or distributed over multiple network parts. Some or all of the parts may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional parts in the various embodiments of the present disclosure may be integrated into one processing part, or each part may exist alone physically, or two or more parts may be integrated into one part.
- parts may be parts of circuits, parts of processors, parts of programs or software, etc., of course, may also be units, modules, or non-modular.
- In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof.
- When software is used, implementation may be in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are produced in whole or in part.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium.
- the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
- all or part of the processes in the above method embodiments can be completed by a computer program instructing relevant hardware.
- the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above-mentioned method embodiments.
- the aforementioned storage media include: read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
- The embodiments of the present disclosure relate to an image processing method and apparatus, an electronic device, and a storage medium. Feature extraction processing is performed on the first image to be processed and the second image to be processed to obtain the light-dark information of the pixels in the first image to be processed and the light-dark information of the pixels in the second image to be processed; based on this light-dark information, the weights of the pixels in the first image to be processed and the weights of the pixels in the second image to be processed are obtained, which achieves the effect of giving pixels of different brightness different weights, so that during the fusion of the first image to be processed and the second image to be processed based on these weights, the quality of the resulting fused image can be improved.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Editing Of Facsimile Originals (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed; performing feature extraction processing on the first image to be processed and the second image to be processed to obtain a first feature image; obtaining, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel; and performing fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is filed on the basis of Chinese patent application No. 202010223122.2 with a filing date of March 26, 2020, and claims priority to that Chinese patent application, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Compared with photography with a film camera, in digital photography whether the exposure is correct is one of the important factors that determine the quality of the captured image. An appropriate exposure value (EV) gives the photographed subject in the image a suitable light-dark contrast, whereas a low exposure tends to make the image too dark and an excessive exposure tends to make it too bright. Determining an appropriate exposure for an image is therefore of great significance.
SUMMARY
The embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium.
In a first aspect, an image processing method is provided, the method including:
acquiring a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed;
performing feature extraction processing on the first image to be processed and the second image to be processed to obtain a first feature image;
obtaining, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel;
performing fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
In this aspect, feature extraction processing is performed on the first image to be processed and the second image to be processed to obtain the light-dark information of the pixels in the first image to be processed and the light-dark information of the pixels in the second image to be processed. Based on this light-dark information, the weights of the pixels in the first image to be processed and the weights of the pixels in the second image to be processed are obtained, which achieves the effect of giving pixels of different brightness different weights, so that in the process of fusing the first image to be processed and the second image to be processed based on these weights, the quality of the resulting fused image can be improved.
In some possible implementations, performing feature extraction processing on the first image to be processed and the second image to be processed to obtain the feature image includes:
performing stitching processing on the first image to be processed and the second image to be processed to obtain a third image to be processed;
extracting feature information of the pixels in the third image to be processed to obtain a second feature image;
performing normalization processing on the second feature image to obtain a third feature image;
performing non-linear transformation processing on the third feature image to obtain the first feature image.
With reference to any implementation of the embodiments of the present disclosure, in the case where the size of the first feature image is smaller than the size of the third image to be processed, performing non-linear transformation processing on the third feature image to obtain the first feature image includes:
performing non-linear transformation processing on the third feature image to obtain a fourth feature image;
performing up-sampling processing on the fourth feature image to obtain the first feature image.
In some possible implementations, before the stitching processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, the method further includes:
normalizing the pixel values in the first image to be processed to obtain a normalized first image to be processed;
normalizing the pixel values in the second image to be processed to obtain a normalized second image to be processed;
and the stitching processing then includes: performing stitching processing on the normalized first image to be processed and the normalized second image to be processed, to obtain the third image to be processed.
In some possible implementations, obtaining, according to the first feature image, the first weight of the first pixel and the second weight of the second pixel includes:
obtaining the first weight according to the pixel value of a third pixel, wherein the third pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the first pixel in the third image to be processed;
obtaining the second weight according to the pixel value of a fourth pixel, wherein the fourth pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the second pixel in the third image to be processed.
In some possible implementations, the image processing method is implemented by an image processing network;
the training process of the image processing network includes:
acquiring a first sample image, a second sample image, supervision data, and a network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure of the first sample image is different from the exposure of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image;
processing the first sample image and the second sample image using the network to be trained, to obtain a fused sample image;
obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data;
adjusting the parameters of the network to be trained based on the loss of the network to be trained, to obtain the image processing network.
In some possible implementations, before the loss of the network to be trained is obtained according to the difference between the fused sample image and the supervision data, the training process further includes:
obtaining a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data;
and obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data includes:
obtaining a second difference according to the difference between the fused sample image and the supervision data;
obtaining the loss of the network to be trained according to the first difference and the second difference.
In some possible implementations, before the loss of the network to be trained is obtained according to the first difference and the second difference, the training process further includes:
determining the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels;
obtaining a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data, wherein each highlight pixel and the corresponding third pixel are same-name points;
and obtaining the loss of the network to be trained according to the first difference and the second difference includes: obtaining the loss of the network to be trained according to the first difference, the second difference, and the third difference.
In some possible implementations, before the loss of the network to be trained is obtained according to the first difference, the second difference, and the third difference, the training process further includes:
obtaining a fourth difference according to the difference between the gradient in the fused sample image and the gradient in the supervision data;
and obtaining the loss of the network to be trained according to the first difference, the second difference, and the third difference includes: obtaining the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
In a second aspect, an image processing apparatus is provided, the apparatus including:
an acquiring part, configured to acquire a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed;
a first processing part, configured to perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a feature image;
a second processing part, configured to obtain, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel;
a third processing part, configured to perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
In some possible implementations, the first processing part is further configured to: perform stitching processing on the first image to be processed and the second image to be processed to obtain a third image to be processed; extract feature information of the pixels in the third image to be processed to obtain a second feature image; perform normalization processing on the second feature image to obtain a third feature image; and perform non-linear transformation on the third feature image to obtain the first feature image.
In some possible implementations, in the case where the size of the first feature image is smaller than the size of the third image to be processed, the first processing part is further configured to: perform non-linear transformation on the third feature image to obtain a fourth feature image; and perform up-sampling processing on the fourth feature image to obtain the first feature image.
In some possible implementations, the apparatus further includes: a fourth processing part, configured to, before the stitching processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, normalize the pixel values in the first image to be processed to obtain a normalized first image to be processed, and normalize the pixel values in the second image to be processed to obtain a normalized second image to be processed; the first processing part is further configured to: perform stitching processing on the normalized first image to be processed and the normalized second image to be processed to obtain the third image to be processed.
In some possible implementations, the third processing part is further configured to: obtain the first weight according to the pixel value of a third pixel, wherein the third pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the first pixel in the third image to be processed; and obtain the second weight according to the pixel value of a fourth pixel, wherein the fourth pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the second pixel in the third image to be processed.
In some possible implementations, the image processing method executed by the apparatus is applied to an image processing network; the apparatus further includes: a training part, configured to train the image processing network, the training process of the image processing network including: acquiring a first sample image, a second sample image, supervision data, and a network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure of the first sample image is different from the exposure of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image; processing the first sample image and the second sample image using the network to be trained to obtain a fused sample image; obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data; and adjusting the parameters of the network to be trained based on the loss to obtain the image processing network.
In some possible implementations, the training part is further configured to: before the loss of the network to be trained is obtained according to the difference between the fused sample image and the supervision data, obtain a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data; obtain a second difference according to the difference between the fused sample image and the supervision data; and obtain the loss of the network to be trained according to the first difference and the second difference.
In some possible implementations, the training part is further configured to: before the loss of the network to be trained is obtained according to the first difference and the second difference, determine the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels; obtain a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data, wherein each highlight pixel and the corresponding third pixel are same-name points; and obtain the loss of the network to be trained according to the first difference, the second difference, and the third difference.
In some possible implementations, the training part is further configured to: before the loss of the network to be trained is obtained according to the first difference, the second difference, and the third difference, obtain a fourth difference according to the difference between the gradient in the fused sample image and the gradient in the supervision data; and obtain the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
In a third aspect, a processor is provided, the processor being configured to execute the method of the first aspect and any possible implementation thereof.
In a fourth aspect, an electronic device is provided, including: a processor, a sending apparatus, an input apparatus, an output apparatus, and a memory, the memory being configured to store computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method of the first aspect and any possible implementation thereof.
In a fifth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing a computer program including program instructions that, when executed by a processor, cause the processor to execute the method of the first aspect and any possible implementation thereof.
In a sixth aspect, a computer program is provided, including computer-readable code that, when run in an electronic device, causes a processor in the electronic device to execute the method of the first aspect and any possible implementation thereof.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
In order to explain the technical solutions in the embodiments of the present disclosure or the background art more clearly, the drawings required in the embodiments or the background art are described below.
The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the embodiments of the present disclosure.
- FIG. 1a and FIG. 1b are schematic diagrams of exemplary bracketed-exposure images provided by an embodiment of the present disclosure;
- FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
- FIG. 3 is a schematic diagram of exemplary pixels at the same position provided by an embodiment of the present disclosure;
- FIG. 4 is a schematic diagram of exemplary same-name points provided by an embodiment of the present disclosure;
- FIG. 5 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure;
- FIG. 6 is a schematic diagram of exemplary stitching of images along the channel dimension provided by an embodiment of the present disclosure;
- FIG. 7 is a schematic diagram of exemplary processing of the third image to be processed to obtain the first feature image provided by an embodiment of the present disclosure;
- FIG. 8 is a schematic structural diagram of an exemplary image processing network provided by an embodiment of the present disclosure;
- FIG. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present disclosure;
- FIG. 10 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
- FIG. 11 is a schematic diagram of the hardware structure of an image processing apparatus provided by an embodiment of the present disclosure.
To enable those skilled in the art to better understand the solutions of the embodiments of the present disclosure, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously only some, not all, of the embodiments of the present disclosure; all other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the embodiments of the present disclosure.
The terms "first", "second", and the like in the specification, claims, and drawings of the embodiments of the present disclosure are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion: a process, method, system, product, or device that includes a series of steps or parts is not limited to the listed steps or parts, but optionally also includes unlisted steps or parts, or optionally also includes other steps or parts inherent to such a process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present disclosure. The appearance of the phrase at various places in the specification does not necessarily refer to the same embodiment, nor to independent or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Compared with photography with a film camera, in digital photography whether the exposure is correct is one of the important factors that determine the quality of the captured image. An appropriate EV gives the photographed subject a suitable light-dark contrast, whereas a low exposure tends to make the image too dark and an excessive exposure tends to make it too bright; an appropriate exposure therefore improves image quality.
Because the photographer cannot always determine an appropriate exposure, the captured image (hereinafter called the reference image) may be of low quality. In the traditional method, a processed image is obtained by adjusting the exposure of the reference image, and the reference image and the processed image are fused to improve the quality of the reference image and obtain a fused image. For example (Example 1), assume the exposure of the reference image is 2 EV. By adjusting the exposure of the reference image so that it decreases by 1 EV, a processed image with an exposure of 1 EV is obtained. Fusing the reference image and the processed image yields a fused image whose exposure lies in [1 EV, 2 EV].
For convenience, in the embodiments of the present disclosure [α, β] denotes the interval of values greater than or equal to α and less than or equal to β.
In Example 1, the content of the reference image is the same as the content of the processed image, but their exposures differ. By fusing the reference image with the processed image, the content of the fused image is the same as that of the reference image, but its exposure differs; in this way, fusing the reference image and the processed image achieves the effect of adjusting the exposure of the reference image and thus improves its quality.
For convenience, at least two images with the same content and different exposures are hereinafter called bracketed-exposure images; for example, the reference image and the processed image in Example 1 are bracketed-exposure images. In some embodiments, the image type may be a RAW image, a YUV image after image signal processing (ISP), an RGB image, or another image type, which is not limited here. As another example, if image a, image b, and image c have the same content, and the exposure of image a is 1 EV, that of image b is −1 EV, and that of image c is 2 EV, then images a, b, and c are bracketed-exposure images. As yet another example, the image shown in FIG. 1a and the image shown in FIG. 1b have the same content and different exposures, i.e., they are bracketed-exposure images.
In the process of fusing bracketed-exposure images, by setting different weights for different images and performing a weighted sum over the bracketed-exposure images based on these weights, an image with an appropriate exposure can be obtained without changing the image content. For example (Example 2), in Example 1, assume the weight of the reference image is 0.6 and the weight of the processed image is 0.4; the exposure of the fused image is 2×0.6 + 1×0.4 = 2.2 EV.
Because the brightness of different pixels in bracketed-exposure images is not uniform, different pixels require different exposure adjustments. For example, in the reference image, pixel A is dark because its exposure is small, while pixel B is bright because its exposure is large; clearly, the exposure should be increased for pixel A to raise its brightness, and decreased for pixel B to lower its brightness. Since the traditional fusion of the reference image and the processed image does not take the brightness of individual pixels into account, the quality of the fused image obtained by the traditional method is low. For example, in Example 2, during fusion the weight of every pixel of the reference image is 0.6 and the weight of every pixel of the processed image is 0.4, regardless of whether the pixel is bright or dark.
The embodiments of the present disclosure provide a technical solution that, during the fusion of bracketed-exposure images, determines the weight of each pixel based on its brightness, thereby improving the quality of the fused image.
The execution subject of the embodiments of the present disclosure is an image processing apparatus; in some possible implementations, the image processing apparatus may be one of the following: a mobile phone, a computer, a server, or a tablet computer. The embodiments of the present disclosure are described below with reference to the drawings.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
201. Acquire a first image to be processed and a second image to be processed.
In the embodiments of the present disclosure, the first image to be processed and the second image to be processed are bracketed-exposure images.
In one implementation of acquiring the two images, the image processing apparatus receives the first image to be processed and the second image to be processed input by a user through an input component; the input component includes a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation, the image processing apparatus receives the first image to be processed and the second image to be processed sent by a first terminal; in some possible implementations, the first terminal may be any of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
In yet another implementation, after acquiring the first image to be processed, the image processing apparatus processes it to adjust its exposure and thereby obtains the second image to be processed. For example, the EV of the acquired first image to be processed is 2; the apparatus processes the first image to be processed so that its EV decreases by one, obtaining a second image to be processed with an EV of 1.
202. Perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a first feature image.
In the embodiments of the present disclosure, the feature extraction processing may be convolution processing, pooling processing, a combination of convolution and pooling, or other processing capable of extracting features, without limitation. In some possible implementations, the feature extraction processing may be implemented by a convolutional neural network or by a feature extraction model, which the present disclosure does not limit.
In one possible implementation, the feature extraction processing is implemented by a convolutional neural network. By using bracketed-exposure images carrying annotation information as training data, the convolutional neural network is trained so that the trained network can complete the feature extraction processing of the first and second images to be processed. The annotation information of the images in the training data may be the light-dark information of the pixels in the bracketed-exposure images. During training, the network extracts a feature image from the bracketed-exposure images as the training result; using the annotation information as supervision information, the training result is supervised and the network parameters are adjusted, completing the training. The trained network can then process the first and second images to be processed to obtain the first feature image, which carries the light-dark information of the pixels in the first image to be processed and in the second image to be processed.
In another possible implementation, convolution processing is applied layer by layer through at least two serially connected convolutional layers, i.e., the output of one layer is the input of the next, to complete the feature extraction of the first and second images to be processed and obtain their feature image. During feature extraction, the content and semantic information extracted by each layer differ: the feature extraction abstracts the features of the first image to be processed step by step while gradually discarding relatively minor feature information, where relatively minor feature information means feature information other than the light-dark information of the pixels. The later feature images are therefore smaller in size but more condensed in content and semantics. Convolving the images step by step through multiple layers yields the first feature image carrying the pixel light-dark information of both images while shrinking their sizes, reducing the amount of data the image processing apparatus must handle and increasing its processing speed.
In some possible implementations, the convolution processing is implemented as follows: the convolution kernel slides over the first and second images to be processed; the pixel corresponding to the center of the kernel is taken as the target pixel; the pixel values under the kernel are multiplied by the corresponding kernel values, and all products are summed to give the convolved pixel value, which becomes the value of the target pixel. After sliding over the whole of both images and updating the values of all pixels, the convolution of the first and second images to be processed is complete, and their feature image is obtained. Exemplarily, the kernels of the at least two convolutional layers are all 3×3, and the convolution stride is 2.
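As an illustration of the sliding-kernel computation just described, the following NumPy sketch implements a single-channel convolution with a 3×3 kernel and stride 2; the input sizes and random values are placeholders rather than parameters from the disclosure:

```python
import numpy as np

def conv2d(image, kernel, stride=2):
    """Single-channel 2-D convolution as described above: the kernel slides
    over the image; at each position the overlapping pixel values are
    multiplied element-wise with the kernel and summed to give the output
    pixel value (illustrative sketch, no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A 3x3 kernel with stride 2, matching the sizes mentioned in the text.
feat = conv2d(np.random.rand(8, 8).astype(np.float32),
              np.random.rand(3, 3).astype(np.float32))
```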
203. Obtain, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel.
In the embodiments of the present disclosure, the first pixel is any pixel in the first image to be processed, the second pixel is a pixel in the second image to be processed, and the first pixel and the second pixel are same-name points of each other, i.e., the physical point represented by the first pixel is the same as the physical point represented by the second pixel. For example, the two images shown in FIG. 4 are bracketed-exposure images in which pixel A and pixel B are same-name points, and pixel C and pixel D are same-name points.
The first weight is the weight of the first pixel in the subsequent fusion of the first and second images to be processed; the second weight is the weight of the second pixel in that fusion.
The pixel values in the first feature image carry the light-dark information of the pixels. Therefore, the weight of the first pixel can be determined from the pixel value of the pixel in the first feature image corresponding to the first pixel (hereinafter called the first reference pixel), giving the first weight, and the weight of the second pixel can be determined from the pixel value of the pixel in the first feature image corresponding to the second pixel (hereinafter called the second reference pixel), giving the second weight.
For example, assume the third image to be processed is the image obtained by concatenating the first and second images to be processed along the channel dimension. The feature extraction of the first and second images to be processed can then be realized by feature extraction of the third image to be processed, and the resulting first feature image has the same size as the third image to be processed. The position of the first reference pixel in the first feature image is the same as the position of the first pixel in the first image to be processed, and the position of the second reference pixel in the first feature image is the same as the position of the second pixel in the second image to be processed.
As another example, the first feature image includes a first feature sub-image and a second feature sub-image, obtained by feature extraction of the first and second images to be processed respectively. The pixel in the first feature sub-image corresponding to the first pixel is called the first reference pixel; its position in the first feature sub-image is the same as the position of the first pixel in the first image to be processed, and the position of the second reference pixel in the second feature sub-image is the same as the position of the second pixel in the second image to be processed.
In the embodiments of the present disclosure, pixels at the same position in two images can be seen in FIG. 3: the position of pixel A11 in image A is the same as the position of pixel B11 in image B, and likewise A12 corresponds to B12, A13 to B13, A21 to B21, A22 to B22, A23 to B23, A31 to B31, A32 to B32, and A33 to B33.
Assume the first weight is w1, the second weight is w2, the pixel value of the pixel in the first feature image corresponding to the first pixel is p1, and the pixel value of the pixel in the first feature image corresponding to the second pixel is p2.
In one possible implementation, w1, w2, p1, and p2 satisfy a formula with positive parameters k and q; the formula itself is given only as an image in the source. In some possible implementations, k = q = 1.
In another possible implementation, the formula additionally involves real parameters a and b; in some possible implementations, k = q = 1 and a = b = 0. A normalized form consistent with these parameters would be, for example, w1 = (k·p1 + a) / ((k·p1 + a) + (q·p2 + b)) with w2 = 1 − w1, the first variant corresponding to a = b = 0, though the exact formulas cannot be recovered from the source.
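The following sketch shows one way such a weight computation could look; the normalized form and the helper name pixel_weights are assumptions, since the exact formulas appear only as images in the source:

```python
import numpy as np

# Hypothetical weight computation from feature-map responses p1 and p2.
# The normalized form with k = q = 1, a = b = 0 is an assumption consistent
# with the parameters stated in the text, not the disclosed formula itself.
def pixel_weights(p1, p2, k=1.0, q=1.0, a=0.0, b=0.0, eps=1e-8):
    s1, s2 = k * p1 + a, q * p2 + b
    w1 = s1 / (s1 + s2 + eps)   # weight of the pixel in the first image
    w2 = s2 / (s1 + s2 + eps)   # weight of its same-name pixel in the second image
    return w1, w2

w1, w2 = pixel_weights(np.array([0.8]), np.array([0.2]))
```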
204. Perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
After the first weight and the second weight are obtained, during the fusion of the first and second images to be processed, the pixel value of the first pixel and the pixel value of the second pixel can be combined by a weighted sum using the first and second weights, realizing the fusion of the first and second pixels. Concretely, the fused image satisfies O = Σi Wi·Ii, where O denotes the fused image, Wi denotes the weight of pixel i, and Ii denotes the pixel value of pixel i (the displayed formula is an image in the source; this weighted-sum form follows from the surrounding description).
For example, assume the pixel value of the first pixel is 130, the pixel value of the second pixel is 30, the first weight is 0.4, and the second weight is 0.6. The weighted sum of the two pixel values gives the pixel value of the fourth pixel in the fused image, where the fourth pixel is the same-name point of the first pixel and the second pixel: 130×0.4 + 30×0.6 = 70.
It should be understood that this embodiment takes the first and second pixels as the processing objects to describe how the pixel value of the fourth pixel is obtained from the pixel values of the first and second pixels; in practical applications, the pixel values of all pixels in the fused image can be obtained based on the pixel values of all same-name points in the first and second images to be processed.
For example, the first image to be processed includes pixel a and pixel b, and the second image to be processed includes pixel c and pixel d, where pixels a and c are same-name points and pixels b and d are same-name points; the pixel value of a is 40, of b is 60, of c is 80, and of d is 30. Feature extraction on the two images determines the weights: 0.4 for pixel a, 0.3 for pixel b, 0.6 for pixel c, and 0.7 for pixel d. Fusing the two images yields a fused image containing pixel e (same-name point of a and c) and pixel f (same-name point of b and d); the pixel value of e is 40×0.4 + 80×0.6 = 64, and the pixel value of f is 60×0.3 + 30×0.7 = 39.
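A minimal sketch reproducing the worked example above (the array layout is illustrative only):

```python
import numpy as np

# Weighted fusion of two same-content images. Reproduces the worked example:
# pixels (40, 80) with weights (0.4, 0.6) -> 64; (60, 30) with (0.3, 0.7) -> 39.
img1 = np.array([40.0, 60.0])   # pixel a, pixel b
img2 = np.array([80.0, 30.0])   # pixel c, pixel d (same-name points)
w1 = np.array([0.4, 0.3])
w2 = np.array([0.6, 0.7])
fused = w1 * img1 + w2 * img2   # -> array([64., 39.])
```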
In some possible implementations, both step 202 and step 203 can be implemented by a convolutional neural network. The network is trained with bracketed-exposure images as training data and a supervision image as supervision data, where the supervision image has the same content as the training data but a more appropriate exposure. During training, the network extracts a feature image from the bracketed-exposure images and determines the weights of the pixels in the bracketed-exposure images from it; based on these weights, the bracketed-exposure images are fused to obtain a training image. From the difference between the training image and the supervision image, the loss of the convolutional neural network is determined, and its parameters are adjusted based on that loss, completing the training. The trained network can then process the first and second images to be processed to obtain the first weight of the first pixel and the second weight of the second pixel and, based on them, fuse the two images into the fused image.
It should be understood that in the embodiments of the present disclosure the bracketed-exposure images include two images, namely the first image to be processed and the second image to be processed, and processing them yields the fused image. In practical applications, the bracketed-exposure images may also include three or more images, and based on the technical solution provided by the embodiments, three or more images can be processed to obtain a fused image whose exposure is more appropriate than that of any single image among the bracketed-exposure images. For example, the bracketed-exposure images include image a, image b, and image c. Feature extraction on them yields a first weight image, a second weight image, and a third weight image, containing the weight of every pixel in image a, image b, and image c respectively; fusing the three images according to the three weight images yields the fused image. In the embodiments of the present disclosure, feature extraction processing on the first and second images to be processed yields the light-dark information of the pixels in both images; based on that information, the weights of the pixels in both images are obtained, achieving the effect that pixels of different brightness receive different weights, so that fusing the two images based on these weights improves the quality of the resulting fused image.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of a possible implementation of step 202 provided by an embodiment of the present disclosure.
501. Perform stitching processing on the first image to be processed and the second image to be processed to obtain a third image to be processed.
In this embodiment, the stitching is stitching along the channel dimension; that is, the width (number of columns) of the third image to be processed is the sum of the widths of the first and second images to be processed, and its height (number of rows) is the sum of their heights. The stitching of the two images can be seen in FIG. 6.
Since the value range of the pixel values in the first image to be processed may differ from that in the second image to be processed, difficulties arise for the image processing apparatus when processing the two images. For example (Example 3), the first image to be processed is captured by imaging device A with a pixel-value range of [0, 255], while the second image to be processed is captured by imaging device B with a range of [0, 1000], where imaging devices A and B may each be a camera. Obviously, different value ranges make processing harder; continuing Example 3, a pixel value of 200 represents a different brightness in the first image to be processed than in the second image to be processed.
To reduce the difficulty that different value ranges bring to the processing, in some possible implementations, before the stitching, the pixel values of the first and second images to be processed can each be normalized to [0, 1], obtaining a normalized first image to be processed and a normalized second image to be processed.
In one implementation of normalizing the pixel values of an image (including the first and second images to be processed), assume the pixel value of a target pixel in the image is xr and the value range of the image's pixel values is [Kb, Kw]; after normalization, the resulting pixel value xi satisfies (reconstructed from the context; the displayed formula is an image in the source):
xi = (xr − Kb) / (Kw − Kb).
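A minimal sketch of this min-max normalization, using the reconstructed formula:

```python
import numpy as np

# Normalize pixel values to [0, 1] with x_i = (x_r - K_b) / (K_w - K_b),
# so images with different value ranges become directly comparable.
def normalize(image, k_b, k_w):
    return (image.astype(np.float32) - k_b) / float(k_w - k_b)

img_a = normalize(np.array([[0, 128, 255]]), 0, 255)     # range [0, 255]
img_b = normalize(np.array([[0, 500, 1000]]), 0, 1000)   # range [0, 1000]
```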
After the normalized first and second images to be processed are obtained, step 501 specifically includes:
performing stitching processing on the normalized first image to be processed and the normalized second image to be processed, to obtain the third image to be processed.
In this step, the stitching is again along the channel dimension; that is, the width (number of columns) of the third image to be processed is the sum of the widths of the normalized first and second images to be processed, and its height (number of rows) is the sum of their heights.
502. Extract feature information of the pixels in the third image to be processed to obtain a second feature image.
In this step, the feature information of the pixels in the third image to be processed can be extracted by convolving the third image to be processed; the convolution procedure can be found in step 202, where the third image to be processed corresponds to the first and second images to be processed in step 202, and the second feature image corresponds to the first feature image in step 202.
503. Perform normalization processing on the second feature image to obtain a third feature image.
During the feature extraction of the third image to be processed, the data distribution changes after each convolutional layer; that is, the data distribution in the second feature image differs from that in the third image to be processed, which makes the subsequent processing of the second feature image difficult. Therefore, before further processing, the second feature image can be normalized so that its data distribution is close to that of the third image to be processed.
In some possible implementations, the normalization of the second feature image proceeds as follows. Assume the second feature image is β = {x1, …, xm}, with m data values in total, and the output is yi = BN(xi); the BN layer processes the second feature image as follows (the displayed formulas are images in the source; the standard batch-normalization forms are reconstructed here):
compute the mean of the second feature image β, i.e., μβ = (1/m) Σi xi;
from the mean μβ, determine the variance of the second feature image, i.e., σ²β = (1/m) Σi (xi − μβ)²;
based on the scale variable γ and the shift variable δ, obtain the third feature image, i.e., yi = γ·(xi − μβ)/√(σ²β + ε) + δ,
where γ and δ are both known and ε denotes the usual small constant added for numerical stability.
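A sketch of this normalization step, using the standard batch-normalization form reconstructed above (eps is an assumption of the usual stabilizing constant):

```python
import numpy as np

# Batch-normalization step matching the formulas above: mean, variance,
# standardization, then scale (gamma) and shift (delta).
def batch_norm(x, gamma, delta, eps=1e-5):
    mu = x.mean()                       # mean of the feature values
    var = ((x - mu) ** 2).mean()        # variance of the feature values
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + delta        # scale and shift

y = batch_norm(np.random.rand(16).astype(np.float32), gamma=1.0, delta=0.0)
```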
504. Perform non-linear transformation on the third feature image to obtain the first feature image.
Since convolution and normalization cannot handle data with complex mappings, such as images, video, audio, and speech, a non-linear transformation must be applied to the normalized data in order to process data with complex mappings.
In some possible implementations, the normalized image is non-linearly transformed through an activation function to handle the complex mapping. In some possible implementations, the third feature image is passed through a parametric rectified linear unit (PReLU) to realize the non-linear transformation and obtain the first feature image. The pixel value of each pixel in the first feature image contains light-dark information; from the pixel value of one pixel in the first feature image, the weight of one pixel in the first image to be processed or in the second image to be processed can be obtained. Because convolving the third image to be processed to obtain the second feature image may shrink the size of the third image to be processed, the size of the second feature image may be smaller than that of the third image to be processed, which in turn makes the size of the weights obtained for the third image to be processed smaller than the size of the third image to be processed; in that case, the weights of some pixels in the third image to be processed cannot be determined.
For example, as shown in FIG. 7, convolving the third image to be processed shown in FIG. 6 yields a first feature image smaller than the third image to be processed. As shown in FIG. 7, the first feature image includes 4 pixels, from whose values 4 weights can be obtained, but the first and second images to be processed shown in FIG. 6 each include 9 pixels; clearly, the weights of all pixels in the first and second images to be processed cannot be determined from the first feature image.
In some possible implementations, when the size of the first feature image is smaller than the size of the third image to be processed, step 504 specifically includes the following steps:
51. Perform non-linear transformation on the third feature image to obtain a fourth feature image.
The implementation of this step can be found in the implementation of "performing non-linear transformation on the third feature image to obtain the first feature image" in step 504; it should be understood that in this step, the non-linear transformation of the third feature image yields the fourth feature image, not the first feature image.
52. Perform up-sampling processing on the fourth feature image to obtain the first feature image.
Since the size of the first feature image is smaller than that of the third image to be processed, and the fourth feature image has the same size as the first feature image, the fourth feature image is also smaller than the third image to be processed. Therefore, the size of the fourth feature image must be increased so that it matches the size of the third image to be processed.
In one possible implementation, up-sampling the fourth feature image yields the first feature image; the up-sampling may be one of the following: bilinear interpolation, nearest-neighbor interpolation, higher-order interpolation, or deconvolution.
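A sketch combining steps 51 and 52, assuming PReLU for the non-linear transformation and bilinear interpolation for the up-sampling (both named above as options):

```python
import torch
import torch.nn.functional as F

# PReLU non-linearity followed by bilinear up-sampling back to the size of
# the third image to be processed (toy sizes, for illustration only).
prelu = torch.nn.PReLU()
x = torch.randn(1, 1, 2, 2)            # fourth feature image (2x2, as in FIG. 7)
x = prelu(x)
first_feature = F.interpolate(x, size=(4, 4), mode='bilinear',
                              align_corners=False)
```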
In this embodiment, convolving the third image to be processed reduces the amount of data the image processing apparatus must handle while extracting the feature information of the pixels in the third image to be processed to obtain the second feature image; normalization and non-linear transformation are then applied to the second feature image in turn, improving the effectiveness of the information in the second feature image.
The embodiments of the present disclosure also provide an image processing network that can be used to implement the technical solution described above. Referring to FIG. 8, FIG. 8 is a schematic structural diagram of an exemplary image processing network provided by an embodiment of the present disclosure. As shown in FIG. 8, the network layers are serially connected and comprise twelve convolutional layers and one up-sampling layer in total.
Among the twelve convolutional layers, the kernels of the first, third, fifth, seventh, ninth, and eleventh layers are all 3×3, and the kernels of the second, fourth, sixth, eighth, tenth, and twelfth layers are all 1×1. The numbers of kernels in the first through eleventh layers are all 6, and the number of kernels in the twelfth layer is K, where K is a positive integer; that is, the embodiments of the present disclosure do not limit the number of kernels in the twelfth layer. The stride of the kernels in the first convolutional layer is 2, and the strides in the remaining eleven layers are all 1.
In some possible implementations, every convolutional layer except the twelfth is followed by a batch normalization (batchnorm, BN) layer and an activation layer (not shown in FIG. 8), where the BN layer normalizes the input data and the activation layer applies activation processing to the input data. For example, the data output by the first convolutional layer is input to the BN layer, which processes it to obtain first intermediate data; the first intermediate data is input to the activation layer, which processes it to obtain second intermediate data; and the second intermediate data is input to the second convolutional layer.
The image processing network stitches the input first and second images to be processed to obtain the third image to be processed. The third image to be processed is processed in turn by the first through twelfth convolutional layers to obtain the fourth feature image, which is input to the up-sampling layer; up-sampling the fourth feature image yields the first feature image. Based on the first feature image, the weight of every pixel in the first image to be processed and the weight of every pixel in the second image to be processed can be determined; based on these weights, the two images are fused to obtain the fused image.
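A sketch of a network with this layer layout; the input channel count (assuming two RGB images stacked along the channel dimension) and the use of PReLU as the activation are assumptions consistent with the description, not details fixed by the source:

```python
import torch
import torch.nn as nn

# Twelve convolutional layers: 3x3 and 1x1 kernels alternate, each followed
# by BatchNorm + PReLU except the last; only the first layer has stride 2,
# and an up-sampling layer restores the input size. Kernel counts follow the
# text (6 per layer, K for the twelfth layer).
class FusionWeightNet(nn.Module):
    def __init__(self, in_ch=6, K=2):
        super().__init__()
        layers = []
        ch = in_ch
        for i in range(11):
            k = 3 if i % 2 == 0 else 1          # odd layers 3x3, even layers 1x1
            stride = 2 if i == 0 else 1
            layers += [nn.Conv2d(ch, 6, k, stride=stride, padding=k // 2),
                       nn.BatchNorm2d(6), nn.PReLU()]
            ch = 6
        layers.append(nn.Conv2d(ch, K, 1))      # twelfth layer: K kernels, no BN/activation
        self.body = nn.Sequential(*layers)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)

    def forward(self, x):                       # x: the stitched pair of images
        return self.up(self.body(x))            # first feature image
```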
Before the image processing network shown in FIG. 8 is applied to the first and second images to be processed, it must be trained; for this purpose, the embodiments of the present disclosure also provide a method for training the image processing network.
Referring to FIG. 9, FIG. 9 is a schematic flowchart of a method for training the image processing neural network provided by an embodiment of the present disclosure. The execution subject of this embodiment may or may not be the image processing apparatus; that is, the execution subject of the training method and the execution subject that uses the image processing network to process images may be the same or different, and the embodiments of the present disclosure do not limit it. For convenience, the execution subject of this embodiment is hereinafter called the training apparatus; in some possible implementations, the training apparatus may be any of the following: a mobile phone, a computer, a tablet computer, or a server.
901. Acquire a first sample image, a second sample image, supervision data, and a network to be trained.
In the embodiments of the present disclosure, the first sample image and the second sample image are bracketed-exposure images. The supervision data is an image obtained by fusing the first and second sample images (hereinafter called the reference image); the content of the reference image is the same as that of the first and second sample images, but its exposure is more appropriate than the exposures of the first and second sample images.
In the embodiments of the present disclosure, the network structure of the network to be trained is the same as that of the image processing network; see FIG. 8.
In one implementation of acquiring the network to be trained, the training apparatus receives the network to be trained input by a user through an input component, the input component including a keyboard, a mouse, a touch screen, a touch pad, an audio input device, and the like.
In another implementation, the training apparatus receives the network to be trained sent by a second terminal; in some possible implementations, the second terminal may be any of the following: a mobile phone, a computer, a tablet computer, a server, or a wearable device.
902. Process the first sample image and the second sample image using the network to be trained, to obtain a fused sample image.
Processing the first and second sample images with the network to be trained yields the fused sample image, whose content is the same as that of the first and second sample images and whose exposure differs from the exposures of the first and second sample images.
在一种确定融合后的样本图像与监督数据之间的差异的实现方式中,假设参考图像为y
1,融合后的样本图像为y
2,融合后的样本图像与监督数据之间的差异为L
c,其中,y
1、y
2、L
1满足下式:
L
c=‖y
1-y
2‖
1 公式(8)
其中,‖y
1-y
2‖
1为y
1-y
2的1范数。
在另一种确定融合后的样本图像与监督数据之间的差异的实现方式中,假设参考图像为y
1,融合后的样本图像为y
2,融合后的样本图像与监督数据之间的差异为L
c,其中,y
1、y
2、L
1满足下式:
L
c=‖y
1-y
2‖
2 公式(9)
其中,‖y
1-y
2‖
2为y
1-y
2的2范数。
在又一种确定融合后的样本图像与监督数据之间的差异的实现方式中,假设参考图像为y
1,融合后的样本图像为y
2,融合后的样本图像与监督数据之间的差异为L
c,其中,y
1、y
2、L
c满足下式:
L
1=‖y
1-y
2‖
F 公式(10)
其中,‖y
1-y
2‖
F为y
1-y
2的F范数。
After the difference between the fused sample image and the supervision data is determined, the loss of the network to be trained can be determined according to that difference.
In one implementation of determining the loss of the network to be trained, assume the difference between the fused sample image and the supervision data is Lc and the loss of the network to be trained is Lt; then Lc and Lt satisfy:
Lt = k × Lc,  Formula (11)
where k is a positive number; in some possible implementations, k = 1.
In another implementation, Lc and Lt satisfy:
Lt = k × Lc + m,  Formula (12)
where m is a real number and k is a positive number; in some possible implementations, m = 0 and k = 1.
In yet another implementation, Lc and Lt satisfy Formula (13) (given only as an image in the source), where m is a real number and k is a positive number; in some possible implementations, m = 0 and k = 1.
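A sketch of the difference and loss variants in formulas (8) through (12); the function names are illustrative:

```python
import torch

# Candidate forms of the difference L_c between reference y1 and fused image
# y2 (formulas (8)-(10)) and the simplest loss variants (11)-(12).
def difference(y1, y2, norm='l1'):
    d = y1 - y2
    if norm == 'l1':
        return d.abs().sum()                     # 1-norm, formula (8)
    if norm == 'l2':
        return d.pow(2).sum().sqrt()             # 2-norm, formula (9)
    return torch.linalg.matrix_norm(d, ord='fro').sum()  # F-norm, formula (10)

y1, y2 = torch.rand(1, 3, 8, 8), torch.rand(1, 3, 8, 8)
L_c = difference(y1, y2, 'l1')
L_t = 1.0 * L_c + 0.0                            # formula (12) with k = 1, m = 0
```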
Based on the difference between the fused sample image and the supervision data, the loss of the network to be trained is determined. In subsequent processing, the parameters of the network to be trained are adjusted based on this loss to obtain the image processing network, which reduces the difference between the fused sample image obtained through the image processing network and the reference image, thereby improving the quality of the fused images obtained using the image processing network.
In some possible implementations, before step 903 is executed, the following step may be executed:
91. Obtain a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data.
Three variant formulas for the first difference are given only as images in the source: in the first, k is a positive number (in some possible implementations, k = 1); in the second and third, m is a real number and k is a positive number (in some possible implementations, m = 0 and k = 1).
After the first difference is obtained, step 903 specifically includes the following steps:
92. Obtain a second difference according to the difference between the fused sample image and the supervision data.
The implementation of determining the difference between the fused sample image and the supervision data can be found in step 903.
In one implementation of determining the second difference, assume the difference between the fused sample image and the supervision data is Lc and the second difference is L2; then Lc and L2 satisfy:
L2 = k × Lc,  Formula (18)
where k is a positive number; in some possible implementations, k = 1.
In another implementation, Lc and L2 satisfy:
L2 = k × Lc + m,  Formula (19)
where m is a real number and k is a positive number; in some possible implementations, m = 0 and k = 1.
In yet another implementation, Lc and L2 satisfy Formula (20) (given only as an image in the source), where m is a real number and k is a positive number; in some possible implementations, m = 0 and k = 1.
93. Obtain the loss of the network to be trained according to the first difference and the second difference.
In one implementation of determining the loss of the network to be trained, assume the first difference is L1, the second difference is L2, and the loss of the network to be trained is Lt; then L1, L2, and Lt satisfy:
Lt = k × L1 + r × L2,  Formula (21)
where k and r are positive numbers; in some possible implementations, k = r = 1.
In another implementation, L1, L2, and Lt satisfy:
Lt = k × L1 + r × L2 + m,  Formula (22)
where k and r are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = 1.
In yet another implementation, L1, L2, and Lt satisfy Formula (23) (given only as an image in the source), where k and r are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = 1.
Determining the loss of the network to be trained based on the second difference and, in subsequent processing, adjusting the parameters of the network to be trained based on that loss to obtain the image processing network reduces the difference between the fused sample image obtained through the image processing network and the reference image. Determining the loss based on the first difference and, in subsequent processing, adjusting the parameters accordingly and using the image processing network to process the first and second sample images makes the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel regions whose gradients point in the opposite direction can be adjusted to match the gradient direction of the reference image, so that the edges in the fused sample image are smoother and the fusion effect more natural, thereby improving the quality of the fused images obtained using the image processing network.
In some possible implementations, before step 93 is executed, the following steps may be executed:
94. Determine the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels.
In the embodiments of the present disclosure, the highlight pixel threshold is a positive integer whose specific value can be adjusted according to the user's needs; in some possible implementations, the highlight pixel threshold is 200.
95. Obtain a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data.
In this step, a third pixel is a pixel in the reference image, and each highlight pixel and the corresponding third pixel are same-name points; the third difference is obtained from the difference between the gradients of the highlight pixels and of the third pixels.
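A sketch of how the third difference could be computed, masking the gradient comparison to highlight pixels; forward differences stand in for the gradient operator, which the source does not specify:

```python
import torch

# Simple forward differences as a stand-in gradient operator.
def grad(img):
    gx = img[..., :, 1:] - img[..., :, :-1]
    gy = img[..., 1:, :] - img[..., :-1, :]
    return gx, gy

# Third difference (sketch): compare gradients only at highlight pixels,
# i.e. pixels of the fused image >= the threshold (200 in the example above).
def third_difference(fused, reference, threshold=200.0):
    mask = (fused >= threshold).float()
    fgx, fgy = grad(fused)
    rgx, rgy = grad(reference)
    mx, my = mask[..., :, 1:], mask[..., 1:, :]   # crop masks to gradient shapes
    return (mx * (fgx - rgx).abs()).sum() + (my * (fgy - rgy).abs()).sum()
```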
After the third difference is obtained, step 93 specifically includes the following step:
96. Obtain the loss of the network to be trained according to the first difference, the second difference, and the third difference.
In one implementation of determining the loss of the network to be trained, assume the first difference is L1, the second difference is L2, the third difference is L3, and the loss of the network to be trained is Lt; then L1, L2, L3, and Lt satisfy:
Lt = k × L1 + r × L2 + s × L3,  Formula (27)
where k, r, and s are positive numbers; in some possible implementations, k = r = s = 1.
In another implementation, L1, L2, L3, and Lt satisfy:
Lt = k × L1 + r × L2 + s × L3 + m,  Formula (28)
where k, r, and s are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = s = 1.
In yet another implementation, L1, L2, L3, and Lt satisfy Formula (29) (given only as an image in the source), where k, r, and s are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = s = 1.
Determining the loss based on the second difference and adjusting the parameters of the network to be trained accordingly reduces the difference between the fused sample image obtained through the image processing network and the reference image. Determining the loss based on the first difference makes the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel regions whose gradients point in the opposite direction can be adjusted to match the reference image, so that the edges in the fused sample image are smoother and the fusion effect more natural. Determining the loss based on the third difference enables adjustment of the highlight pixel regions in the fused sample image, so that the quality of those regions is higher, thereby improving the quality of the fused images obtained using the image processing network.
In some possible implementations, before step 96 is executed, the following step may be executed:
97. Obtain a fourth difference according to the difference between the gradient in the fused image and the gradient in the supervision data.
After the fourth difference is obtained, step 96 specifically includes the following step:
98. Obtain the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
In one implementation of determining the loss of the network to be trained, assume the first difference is L1, the second difference is L2, the third difference is L3, the fourth difference is L4, and the loss of the network to be trained is Lt; then L1, L2, L3, L4, and Lt satisfy:
Lt = k × L1 + r × L2 + s × L3 + u × L4,  Formula (33)
where k, r, s, and u are positive numbers; in some possible implementations, k = r = s = u = 1.
In another implementation, L1, L2, L3, L4, and Lt satisfy:
Lt = k × L1 + r × L2 + s × L3 + u × L4 + m,  Formula (34)
where k, r, s, and u are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = s = u = 1.
In yet another implementation, L1, L2, L3, L4, and Lt satisfy Formula (35) (given only as an image in the source), where k, r, s, and u are positive numbers and m is a real number; in some possible implementations, m = 0 and k = r = s = u = 1.
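A one-line sketch of the combined loss in formula (33); formula (34) corresponds to a non-zero m:

```python
# Combined loss: L_t = k*L1 + r*L2 + s*L3 + u*L4 (+ m), with all weights set
# to 1 and m = 0 as in the simplest implementation described above.
def total_loss(L1, L2, L3, L4, k=1.0, r=1.0, s=1.0, u=1.0, m=0.0):
    return k * L1 + r * L2 + s * L3 + u * L4 + m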
Determining the loss based on the second difference and adjusting the parameters of the network to be trained accordingly reduces the difference between the fused sample image obtained through the image processing network and the reference image. Determining the loss based on the first difference makes the gradient direction of the fused sample image the same as the gradient direction of the reference image; in particular, the gradients of pixel regions whose gradients point in the opposite direction can be adjusted to match the reference image, so that the edges in the fused sample image are smoother and the fusion effect more natural. Determining the loss based on the third difference enables adjustment of the highlight pixel regions in the fused sample image, raising their quality. Determining the loss based on the fourth difference and adjusting the parameters accordingly not only makes the gradient direction of the fused sample image the same as that of the reference image, but also makes the gradient magnitude of the fused sample image the same as that of the reference image, further smoothing the edges in the fused sample image and making the fusion effect more natural, thereby improving the quality of the fused images obtained using the image processing network.
904. Adjust the parameters of the network to be trained based on the loss of the network to be trained, to obtain the image processing network.
Based on the loss of the network to be trained, the network is trained by back-propagating gradients until convergence; the training of the network to be trained is then complete, and the image processing network is obtained.
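A minimal training-loop sketch tying the pieces together; net, loader, fuse_with, first_diff, and fourth_diff are hypothetical names standing in for components described above but not named in the source:

```python
import torch

# Back-propagate the combined loss and update the parameters until
# convergence. FusionWeightNet, difference, third_difference and total_loss
# are from the sketches above; loader, fuse_with, first_diff and fourth_diff
# are hypothetical stand-ins for the data pipeline and the remaining losses.
net = FusionWeightNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for sample1, sample2, reference in loader:        # bracketed pair + supervision
    fused = fuse_with(net, sample1, sample2)      # weights -> weighted sum
    loss = total_loss(first_diff(fused, reference),
                      difference(reference, fused, 'l1'),
                      third_difference(fused, reference),
                      fourth_diff(fused, reference))
    opt.zero_grad()
    loss.backward()
    opt.step()
```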
Based on the technical solution provided by the embodiments of the present disclosure, a possible application scenario is also provided.
While traveling, Zhang San captures three landscape images with a mobile phone; the three images have the same content and different exposures. Zhang San finds the exposure of all three landscape images inappropriate and therefore wants to process them to obtain an image with an appropriate exposure. Applying the technical solution provided by the embodiments of the present disclosure to the phone, the phone can use it to process the three landscape images and obtain a fused landscape image whose exposure is more appropriate than the exposures of the three original landscape images.
Those skilled in the art can understand that, in the above methods of the specific implementations, the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
The methods of the embodiments of the present disclosure have been described in detail above; the apparatuses of the embodiments of the present disclosure are provided below.
Referring to FIG. 10, FIG. 10 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure. The apparatus 1 includes: an acquiring part 11, a first processing part 12, a second processing part 13, a third processing part 14, a fourth processing part 15, and a training part 16, in which:
the acquiring part 11 is configured to acquire a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed;
the first processing part 12 is configured to perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a feature image;
the second processing part 13 is configured to obtain, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel;
the third processing part 14 is configured to perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
在一些可能的实现方式中,所述第一处理部分12,还被配置为:
对所述第一待处理图像和所述第二待处理图像进行拼接处理,得到三待处理图像;
提取所述第三待处理图像中的像素点的特征信息,得到第二特征图像;
对所述第二特征图像进行归一化处理,得到第三特征图像;
对所述第三特征图像进行非线性变换处理,得到所述第一特征图像。
在一些可能的实现方式中,在所述第一特征图像的尺寸小于所述第三待处理图像的尺寸的情况下,所述第一处理部分12,还被配置为:
对所述第三特征图像进行非线性变换处理,得到第四特征图像;
对所述第四特征图像进行上采样处理,得到所述第一特征图像。
In some possible implementations, the apparatus 1 further includes:
a fourth processing part 15, configured to, before the stitching processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, normalize the pixel values in the first image to be processed to obtain a normalized first image to be processed, and normalize the pixel values in the second image to be processed to obtain a normalized second image to be processed;
the first processing part 12 is further configured to:
perform stitching processing on the normalized first image to be processed and the normalized second image to be processed, to obtain the third image to be processed.
In some possible implementations, the third processing part 14 is further configured to:
obtain the first weight according to the pixel value of a third pixel, wherein the third pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the first pixel in the third image to be processed;
obtain the second weight according to the pixel value of a fourth pixel, wherein the fourth pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the second pixel in the third image to be processed.
In some possible implementations, the image processing method executed by the apparatus 1 is applied to an image processing network;
the apparatus 1 further includes: a training part 16, configured to train the image processing network, the training process of the image processing network including:
acquiring a first sample image, a second sample image, the supervision data, and a network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure of the first sample image is different from the exposure of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image;
processing the first sample image and the second sample image using the network to be trained, to obtain a fused sample image;
obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data;
adjusting the parameters of the network to be trained based on the loss of the network to be trained, to obtain the image processing network.
In some possible implementations, the training part 16 is further configured to:
before the loss of the network to be trained is obtained according to the difference between the fused sample image and the supervision data, obtain a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data;
obtain a second difference according to the difference between the fused sample image and the supervision data;
obtain the loss of the network to be trained according to the first difference and the second difference.
In some possible implementations, the training part 16 is further configured to:
before the loss of the network to be trained is obtained according to the first difference and the second difference, determine the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels;
obtain a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data, wherein each highlight pixel and the corresponding third pixel are same-name points;
obtain the loss of the network to be trained according to the first difference, the second difference, and the third difference.
In some possible implementations, the training part 16 is further configured to:
before the loss of the network to be trained is obtained according to the first difference, the second difference, and the third difference, obtain a fourth difference according to the difference between the gradient in the fused sample image and the gradient in the supervision data;
obtain the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
In the embodiments of the present disclosure, feature extraction processing is performed on the first image to be processed and the second image to be processed to obtain the light-dark information of the pixels in both images; based on that information, the weights of the pixels in both images are obtained, achieving the effect that pixels of different brightness receive different weights, so that fusing the two images based on these weights improves the quality of the resulting fused image.
In some embodiments, the functions or parts of the apparatus provided by the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
FIG. 11 is a schematic diagram of the hardware structure of an image processing apparatus provided by an embodiment of the present disclosure. The image processing apparatus 2 includes a processor 21, a memory 22, an input device 23, and an output device 24, which are coupled through connectors including various interfaces, transmission lines, buses, and the like, without limitation in the embodiments of the present disclosure. It should be understood that in the embodiments of the present disclosure, coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, or buses.
The processor 21 may be one or more graphics processing units (GPUs); when the processor 21 is one GPU, it may be a single-core or multi-core GPU. In some possible implementations, the processor 21 may be a processor group composed of multiple GPUs coupled to each other through one or more buses. In some possible implementations, the processor may also be another type of processor, which the embodiments of the present disclosure do not limit.
The memory 22 may be used to store computer program instructions and various types of computer program code, including program code for executing the solutions of the embodiments of the present disclosure. Optionally, the memory includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used for related instructions and data.
The input device 23 is used to input data and/or signals, and the output device 24 is used to output data and/or signals; they may be independent devices or an integral device.
It is understood that in the embodiments of the present disclosure, the memory 22 may be used not only to store related instructions but also related data; for example, it may store the first and second images to be processed acquired through the input device 23, or the fused image obtained by the processor 21, and the embodiments of the present disclosure do not limit the specific data stored in the memory.
It can be understood that FIG. 11 only shows a simplified design of an image processing apparatus. In practical applications, the image processing apparatus may also contain other necessary elements, including but not limited to any number of input/output devices, processors, and memories, and all image processing apparatuses that can implement the embodiments of the present disclosure fall within the protection scope of the embodiments of the present disclosure.
In some embodiments, a computer program is also provided, including computer-readable code that, when run in an electronic device, causes a processor in the electronic device to execute the above method.
Those of ordinary skill in the art may realize that the parts and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution; professionals may use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of the embodiments of the present disclosure.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and parts described above can refer to the corresponding processes in the foregoing method embodiments and are not repeated here. Those skilled in the art can also clearly understand that the embodiments of the present disclosure each have their own emphases; for convenience and brevity, the same or similar parts may not be repeated in different embodiments, so parts not described, or not described in detail, in one embodiment may refer to the records of other embodiments.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the parts is only a logical function division, and there may be other divisions in actual implementation, for example multiple parts or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, apparatuses, or parts, and may be electrical, mechanical, or in other forms.
The parts described as separate components may or may not be physically separated, and components displayed as parts may or may not be physical parts; that is, they may be located in one place or distributed over multiple network parts. Some or all of the parts may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional parts in the embodiments of the present disclosure may be integrated into one processing part, or each part may exist alone physically, or two or more parts may be integrated into one part.
In the embodiments of the present disclosure and other embodiments, a "part" may be part of a circuit, part of a processor, part of a program or software, and of course may also be a unit, a module, or non-modular.
In the above embodiments, implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, implementation may be in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present disclosure are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
Those of ordinary skill in the art can understand that all or part of the processes of the above method embodiments can be completed by a computer program instructing relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage media include: read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
The embodiments of the present disclosure relate to an image processing method and apparatus, an electronic device, and a storage medium. By performing feature extraction processing on the first image to be processed and the second image to be processed, the light-dark information of the pixels in both images is obtained; based on that information, the weights of the pixels in both images are obtained, achieving the effect that pixels of different brightness receive different weights, so that in the process of fusing the first and second images to be processed based on these weights, the quality of the resulting fused image can be improved.
Claims (13)
- An image processing method, the method comprising: acquiring a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed; performing feature extraction processing on the first image to be processed and the second image to be processed to obtain a first feature image; obtaining, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel; performing fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
- The method according to claim 1, wherein performing feature extraction processing on the first image to be processed and the second image to be processed to obtain the first feature image comprises: performing stitching processing on the first image to be processed and the second image to be processed to obtain a third image to be processed; extracting feature information of the pixels in the third image to be processed to obtain a second feature image; performing normalization processing on the second feature image to obtain a third feature image; performing non-linear transformation processing on the third feature image to obtain the first feature image.
- The method according to claim 2, wherein, in the case where the size of the first feature image is smaller than the size of the third image to be processed, performing non-linear transformation processing on the third feature image to obtain the first feature image comprises: performing non-linear transformation processing on the third feature image to obtain a fourth feature image; performing up-sampling processing on the fourth feature image to obtain the first feature image.
- The method according to claim 2 or 3, wherein, before the stitching processing is performed on the first image to be processed and the second image to be processed to obtain the third image to be processed, the method further comprises: normalizing the pixel values in the first image to be processed to obtain a normalized first image to be processed; normalizing the pixel values in the second image to be processed to obtain a normalized second image to be processed; and performing stitching processing on the first image to be processed and the second image to be processed to obtain the third image to be processed comprises: performing stitching processing on the normalized first image to be processed and the normalized second image to be processed, to obtain the third image to be processed.
- The method according to any one of claims 2 to 3, wherein obtaining, according to the first feature image, the first weight of the first pixel and the second weight of the second pixel comprises: obtaining the first weight according to the pixel value of a third pixel, wherein the third pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the first pixel in the third image to be processed; obtaining the second weight according to the pixel value of a fourth pixel, wherein the fourth pixel is a pixel in the first feature image whose position in the first feature image is the same as the position of the second pixel in the third image to be processed.
- The method according to any one of claims 1 to 3, wherein the image processing method is implemented by an image processing network; the training process of the image processing network comprises: acquiring a first sample image, a second sample image, supervision data, and a network to be trained, wherein the content of the first sample image is the same as the content of the second sample image, the exposure of the first sample image is different from the exposure of the second sample image, and the supervision data is obtained by fusing the first sample image and the second sample image; processing the first sample image and the second sample image using the network to be trained, to obtain a fused sample image; obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data; adjusting the parameters of the network to be trained based on the loss of the network to be trained, to obtain the image processing network.
- The method according to claim 6, wherein, before the loss of the network to be trained is obtained according to the difference between the fused sample image and the supervision data, the training process further comprises: obtaining a first difference according to the difference between the gradient direction in the fused sample image and the gradient direction in the supervision data; and obtaining the loss of the network to be trained according to the difference between the fused sample image and the supervision data comprises: obtaining a second difference according to the difference between the fused sample image and the supervision data; obtaining the loss of the network to be trained according to the first difference and the second difference.
- The method according to claim 7, wherein, before the loss of the network to be trained is obtained according to the first difference and the second difference, the training process further comprises: determining the pixels in the fused sample image whose pixel values are greater than or equal to a highlight pixel threshold as highlight pixels; obtaining a third difference according to the difference between the gradient of the highlight pixels and the gradient of third pixels in the supervision data, wherein each highlight pixel and the corresponding third pixel are same-name points; and obtaining the loss of the network to be trained according to the first difference and the second difference comprises: obtaining the loss of the network to be trained according to the first difference, the second difference, and the third difference.
- The method according to claim 8, wherein, before the loss of the network to be trained is obtained according to the first difference, the second difference, and the third difference, the training process further comprises: obtaining a fourth difference according to the difference between the gradient in the fused sample image and the gradient in the supervision data; and obtaining the loss of the network to be trained according to the first difference, the second difference, and the third difference comprises: obtaining the loss of the network to be trained according to the first difference, the second difference, the third difference, and the fourth difference.
- An image processing apparatus, the apparatus comprising: an acquiring part, configured to acquire a first image to be processed and a second image to be processed, wherein the content of the first image to be processed is the same as the content of the second image to be processed, and the exposure of the first image to be processed is different from the exposure of the second image to be processed; a first processing part, configured to perform feature extraction processing on the first image to be processed and the second image to be processed to obtain a first feature image; a second processing part, configured to obtain, according to the first feature image, a first weight of a first pixel and a second weight of a second pixel, wherein the first pixel is a pixel in the first image to be processed, and the second pixel is the pixel in the second image to be processed that is the same-name point of the first pixel; a third processing part, configured to perform fusion processing on the first image to be processed and the second image to be processed according to the first weight and the second weight, to obtain a fused image.
- An electronic device, comprising: a processor and a memory, the memory storing computer program code including computer instructions; when the processor executes the computer instructions, the electronic device executes the method according to any one of claims 1 to 9.
- A computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 9.
- A computer program, comprising computer-readable code that, when run in an electronic device, causes a processor in the electronic device to execute the method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010223122.2A CN111311532B (zh) | 2020-03-26 | 2020-03-26 | 图像处理方法及装置、电子设备、存储介质 |
CN202010223122.2 | 2020-03-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021189733A1 true WO2021189733A1 (zh) | 2021-09-30 |
Family
ID=71160932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/103632 WO2021189733A1 (zh) | 2020-03-26 | 2020-07-22 | 图像处理方法及装置、电子设备、存储介质 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN111311532B (zh) |
TW (1) | TWI769725B (zh) |
WO (1) | WO2021189733A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023131236A1 (zh) * | 2022-01-10 | 2023-07-13 | 北京字跳网络技术有限公司 | 一种图像处理方法、装置及电子设备 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311532B (zh) * | 2020-03-26 | 2022-11-11 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备、存储介质 |
CN111724404A (zh) * | 2020-06-28 | 2020-09-29 | 深圳市慧鲤科技有限公司 | 边缘检测方法及装置、电子设备及存储介质 |
CN111798497A (zh) * | 2020-06-30 | 2020-10-20 | 深圳市慧鲤科技有限公司 | 图像处理方法及装置、电子设备及存储介质 |
CN113780165A (zh) * | 2020-09-10 | 2021-12-10 | 深圳市商汤科技有限公司 | 车辆识别方法及装置、电子设备及存储介质 |
CN112614064B (zh) * | 2020-12-18 | 2023-04-25 | 北京达佳互联信息技术有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN113313661B (zh) * | 2021-05-26 | 2024-07-26 | Oppo广东移动通信有限公司 | 图像融合方法、装置、电子设备及计算机可读存储介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203985A (zh) * | 2017-05-18 | 2017-09-26 | 北京联合大学 | 一种端到端深度学习框架下的多曝光图像融合方法 |
US10248664B1 (en) * | 2018-07-02 | 2019-04-02 | Inception Institute Of Artificial Intelligence | Zero-shot sketch-based image retrieval techniques using neural networks for sketch-image recognition and retrieval |
CN110097528A (zh) * | 2019-04-11 | 2019-08-06 | 江南大学 | 一种基于联合卷积自编码网络的图像融合方法 |
CN110163808A (zh) * | 2019-03-28 | 2019-08-23 | 西安电子科技大学 | 一种基于卷积神经网络的单帧高动态成像方法 |
CN110717878A (zh) * | 2019-10-12 | 2020-01-21 | 北京迈格威科技有限公司 | 图像融合方法、装置、计算机设备和存储介质 |
CN111311532A (zh) * | 2020-03-26 | 2020-06-19 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备、存储介质 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102970549B (zh) * | 2012-09-20 | 2015-03-18 | 华为技术有限公司 | 图像处理方法及装置 |
CN103973958B (zh) * | 2013-01-30 | 2018-04-03 | 阿里巴巴集团控股有限公司 | 图像处理方法及设备 |
CN106157319B (zh) * | 2016-07-28 | 2018-11-02 | 哈尔滨工业大学 | 基于卷积神经网络的区域和像素级融合的显著性检测方法 |
CN106161979B (zh) * | 2016-07-29 | 2017-08-25 | 广东欧珀移动通信有限公司 | 高动态范围图像拍摄方法、装置和终端设备 |
JP6508496B2 (ja) * | 2017-06-16 | 2019-05-08 | 大日本印刷株式会社 | 図形パターンの形状推定装置 |
CN107800979B (zh) * | 2017-10-23 | 2019-06-28 | 深圳看到科技有限公司 | 高动态范围视频拍摄方法及拍摄装置 |
US20190335077A1 (en) * | 2018-04-25 | 2019-10-31 | Ocusell, LLC | Systems and methods for image capture and processing |
CN108694705B (zh) * | 2018-07-05 | 2020-12-11 | 浙江大学 | 一种多帧图像配准与融合去噪的方法 |
CN110084216B (zh) * | 2019-05-06 | 2021-11-09 | 苏州科达科技股份有限公司 | 人脸识别模型训练和人脸识别方法、系统、设备及介质 |
CN110602467B (zh) * | 2019-09-09 | 2021-10-08 | Oppo广东移动通信有限公司 | 图像降噪方法、装置、存储介质及电子设备 |
CN110751608B (zh) * | 2019-10-23 | 2022-08-16 | 北京迈格威科技有限公司 | 一种夜景高动态范围图像融合方法、装置和电子设备 |
CN110728648B (zh) * | 2019-10-25 | 2022-07-19 | 北京迈格威科技有限公司 | 图像融合的方法、装置、电子设备及可读存储介质 |
-
2020
- 2020-03-26 CN CN202010223122.2A patent/CN111311532B/zh active Active
- 2020-07-22 WO PCT/CN2020/103632 patent/WO2021189733A1/zh active Application Filing
-
2021
- 2021-03-04 TW TW110107768A patent/TWI769725B/zh active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203985A (zh) * | 2017-05-18 | 2017-09-26 | 北京联合大学 | 一种端到端深度学习框架下的多曝光图像融合方法 |
US10248664B1 (en) * | 2018-07-02 | 2019-04-02 | Inception Institute Of Artificial Intelligence | Zero-shot sketch-based image retrieval techniques using neural networks for sketch-image recognition and retrieval |
CN110163808A (zh) * | 2019-03-28 | 2019-08-23 | 西安电子科技大学 | 一种基于卷积神经网络的单帧高动态成像方法 |
CN110097528A (zh) * | 2019-04-11 | 2019-08-06 | 江南大学 | 一种基于联合卷积自编码网络的图像融合方法 |
CN110717878A (zh) * | 2019-10-12 | 2020-01-21 | 北京迈格威科技有限公司 | 图像融合方法、装置、计算机设备和存储介质 |
CN111311532A (zh) * | 2020-03-26 | 2020-06-19 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备、存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023131236A1 (zh) * | 2022-01-10 | 2023-07-13 | 北京字跳网络技术有限公司 | 一种图像处理方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN111311532A (zh) | 2020-06-19 |
TW202137133A (zh) | 2021-10-01 |
CN111311532B (zh) | 2022-11-11 |
TWI769725B (zh) | 2022-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021189733A1 (zh) | 图像处理方法及装置、电子设备、存储介质 | |
US10885608B2 (en) | Super-resolution with reference images | |
US11790497B2 (en) | Image enhancement method and apparatus, and storage medium | |
US20210232806A1 (en) | Image processing method and device, processor, electronic equipment and storage medium | |
CN111654594B (zh) | 图像拍摄方法、图像拍摄装置、移动终端及存储介质 | |
WO2020199931A1 (zh) | 人脸关键点检测方法及装置、存储介质和电子设备 | |
WO2019120110A1 (zh) | 图像重建方法及设备 | |
US10430075B2 (en) | Image processing for introducing blurring effects to an image | |
JP7142104B2 (ja) | ズーム方法及びそれを適用した電子機器 | |
WO2021164269A1 (zh) | 基于注意力机制的视差图获取方法和装置 | |
US10410327B2 (en) | Shallow depth of field rendering | |
WO2020233010A1 (zh) | 基于可分割卷积网络的图像识别方法、装置及计算机设备 | |
CN113034358B (zh) | 一种超分辨率图像处理方法以及相关装置 | |
WO2021114990A1 (zh) | 人脸畸变校正方法、装置、电子设备及存储介质 | |
US11004179B2 (en) | Image blurring methods and apparatuses, storage media, and electronic devices | |
CN112602088B (zh) | 提高弱光图像的质量的方法、系统和计算机可读介质 | |
WO2020186765A1 (zh) | 视频处理方法、装置以及计算机存储介质 | |
WO2020125229A1 (zh) | 特征融合方法、装置、电子设备及存储介质 | |
CN113688907B (zh) | 模型训练、视频处理方法,装置,设备以及存储介质 | |
WO2023030139A1 (zh) | 图像融合方法、电子设备和存储介质 | |
US20240046538A1 (en) | Method for generating face shape adjustment image, model training method, apparatus and device | |
CN110211017B (zh) | 图像处理方法、装置及电子设备 | |
WO2024027583A1 (zh) | 图像处理方法、装置、电子设备和可读存储介质 | |
WO2023125440A1 (zh) | 一种降噪方法、装置、电子设备及介质 | |
WO2020259123A1 (zh) | 一种调整图像画质方法、装置及可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20927652 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.01.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20927652 Country of ref document: EP Kind code of ref document: A1 |