WO2019019086A1 - Method, device and storage medium for enhancing image contrast - Google Patents
Method, device and storage medium for enhancing image contrast
- Publication number
- WO2019019086A1 (PCT/CN2017/094650)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- training
- neural network
- contrast
- network
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10144—Varying exposure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present application relates to the field of image processing technologies, and in particular, to a method, device, and storage medium for enhancing image contrast.
- the light-sensitive range of an image sensor such as a CCD (Charge Coupled Device) is lower than the dynamic range of the natural scene, so overexposure or underexposure is likely to occur.
- a single image enhancement algorithm can be used to enhance image contrast.
- an algorithm based on Retinex theory is used to enhance the contrast of an image.
- the principle of the algorithm is to decompose the image into a low-frequency light intensity portion and a high-frequency detail portion, and to enhance the contrast of the original image by optimizing the low-frequency light intensity portion.
- the above algorithm optimizes the illumination intensity portion based on a priori conditions; however, real images are often complicated, and a priori conditions can hardly reflect real-world colors well, so the contrast-enhanced image exhibits unrealistic effects, resulting in low image quality.
- the present application discloses methods, devices, and storage media for enhancing image contrast.
- a method of enhancing image contrast comprising:
- the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, the first image having a lower contrast than the second image;
- a fourth image output through the mapping of the neural network is obtained, the contrast of the fourth image being higher than the contrast of the third image.
- an apparatus for enhancing image contrast includes: an internal bus, and a memory and a processor connected by an internal bus; wherein
- the memory for storing machine readable instructions corresponding to control logic for enhancing image contrast
- the processor is configured to read the machine readable instructions on the memory and execute the instructions to implement the following operations:
- the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, the first image having a lower contrast than the second image;
- a machine readable storage medium having stored thereon a plurality of computer instructions which, when executed, perform the following processing:
- the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, the first image having a lower contrast than the second image;
- a fourth image output through the mapping of the neural network is obtained, the contrast of the fourth image being higher than the contrast of the third image.
- the embodiment of the present application provides a pre-trained neural network
- the training set of the neural network is a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image.
- the neural network trained on the above training set has the capability of enhancing image contrast, so in a practical application environment, when the third image is input into the neural network, the contrast of the third image can be enhanced,
- and a fourth image of higher quality is mapped and output. It can be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
- FIG. 1 is a flow chart of one embodiment of a method for enhancing image contrast of the present application
- FIG. 2 is a flow chart of another embodiment of a method for enhancing image contrast of the present application.
- FIG. 3 is a block diagram of one embodiment of an apparatus for enhancing image contrast of the present application.
- FIG. 4 is a block diagram of another embodiment of an apparatus for enhancing image contrast of the present application.
- FIG. 5 is a block diagram of one embodiment of a device for enhancing image contrast of the present application.
- when a digital camera device is shooting, if the light-sensitive range of its image sensor is lower than the dynamic range of the natural scene, the captured image may be overexposed or underexposed. In this case, contrast enhancement processing needs to be performed on the image to improve the display of detail information in the image.
- in some typical application scenarios of computer vision recognition, such as face recognition, scene recognition, and pedestrian detection, enhancing image contrast can provide a more reliable input image for computer vision recognition.
- the algorithm for enhancing image contrast can be embedded in the chip of the camera device to achieve real-time processing of image contrast enhancement during shooting.
- a single image enhancement algorithm may be used to improve image contrast, but such an algorithm easily causes the contrast-enhanced image to look unrealistic. Therefore, in order to improve the image contrast enhancement effect, the embodiments of the present application enhance image contrast through a pre-trained neural network.
- the neural network can abstract the human brain neural network from the perspective of information processing, establish a simple model, and form different networks according to different connection methods.
- a neural network is a computational model consisting of a large number of nodes (or neurons) connected to each other. Each node represents a specific output function, called an activation function. The output of a neural network differs according to the way the network is connected, the weight value of each node, and the activation function.
- a DNN (Deep Neural Network) may include a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like, and has adaptive, self-organizing, and real-time learning capabilities.
- the neural network used in the embodiments of the present application has a training set that is a set of image pairs, wherein each pair of images includes a first image for the same scene and a second image serving as a reference image, and the contrast of the first image is lower than
- the contrast of the second image; that is, the second image used to train the neural network has a high dynamic range and high contrast. An end-to-end neural network is thereby trained that has the capability of mapping a low-contrast image to a high-contrast image.
- FIG. 1 is a flowchart of an embodiment of a method for enhancing image contrast according to the present invention.
- the embodiment may include the following steps:
- Step 101: Call a neural network, the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image.
- the neural network in this embodiment may be a pre-built neural network, and the device for building the neural network may be different from the device that performs the image contrast enhancement of this embodiment; alternatively, when the device performing this embodiment has strong computing capability,
- the two devices may be the same. This is not limited in the embodiments of the present application.
- the neural network has the capability of mapping a low-contrast image to a high-contrast image, so its training set is a set of image pairs comprising a plurality of image pairs, wherein each pair of images is for the same scene and includes a low-contrast first image and a high-contrast second image, the second image being generated by a multi-exposure image fusion algorithm to ensure that the second image is higher than the first image in both dynamic range and contrast. When the image pairs are input into the neural network for learning, with the second image serving as the reference image,
- a neural network for enhancing image contrast can be obtained.
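- as a concrete illustration of how such training pairs could be assembled, the following Python sketch uses OpenCV's Mertens exposure fusion; the directory layout and file names are assumptions for illustration, not part of this disclosure.

```python
import glob
import os

import cv2  # OpenCV


def build_training_pair(scene_dir):
    """Build one (first image, second image) training pair for a scene.

    Assumes scene_dir contains 'normal.png' (the low-contrast first image)
    and 'exposure_*.png' (candidate images shot with different exposure
    parameters). These file names are illustrative only.
    """
    first_image = cv2.imread(os.path.join(scene_dir, "normal.png"))
    candidates = [cv2.imread(p)
                  for p in sorted(glob.glob(os.path.join(scene_dir, "exposure_*.png")))]
    # Mertens exposure fusion keeps the well-exposed regions of each input,
    # producing a reference image with stretched dynamic range and contrast.
    fused = cv2.createMergeMertens().process(
        [c.astype("float32") / 255.0 for c in candidates])
    second_image = (fused * 255.0).clip(0, 255).astype("uint8")
    return first_image, second_image
```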
- the neural network in this embodiment is used as an algorithm model, and when the contrast enhancement of the image is required, the neural network is called by the execution body of the algorithm model.
- the algorithm model may be embedded in the chip of the imaging device in advance, and during the imaging process the algorithm model is called in real time to perform contrast enhancement on the captured images; or the algorithm model may be pre-stored in the memory of a computing device, and when the computing device performs batch image processing,
- the algorithm model is called to enhance image contrast in batches.
- Step 102: Input a third image into the neural network.
- Step 103: Obtain a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
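- as a minimal sketch of steps 102 and 103, assuming the trained network is available as a PyTorch module (the disclosure does not prescribe a framework), the mapping could be invoked as follows; the helper name and normalization are illustrative assumptions.

```python
import torch


def enhance_contrast(model, third_image):
    """Map a low-contrast image (H x W x 3 uint8 array) to a higher-contrast one.

    `model` is assumed to be the pre-trained end-to-end network; this wrapper
    is an illustrative sketch, not the patented implementation itself.
    """
    model.eval()
    x = torch.from_numpy(third_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        fourth = model(x)  # the network maps the third image to the fourth image
    return (fourth.squeeze(0).permute(1, 2, 0).numpy() * 255.0).clip(0, 255).astype("uint8")
```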
- this embodiment provides a pre-trained neural network;
- the training set of the neural network is a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image;
- the neural network trained on the above training set has the capability of enhancing image contrast, so in a practical application environment, when the third image is input into the neural network, the contrast of the third image can be enhanced, and a fourth image of higher quality is mapped and output. It can be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
- FIG. 2 is a flow chart of another embodiment of a method for enhancing image contrast according to the present invention.
- the embodiment shows a process of training a deep convolutional neural network for enhancing image contrast, which may include the following steps:
- Step 201: Determine a plurality of training scenarios.
- different types of training scenarios may be determined before training, and the number of scenes may be flexibly set according to needs.
- more than 100 training scenarios may be set, and these scenarios may include the scenes involved in most real shooting environments, wherein each scene may further include multiple sub-scenes, for example, a forest scene, a river scene, and a plant scene in a natural environment;
- the plant scene may include plant sub-scenes in different seasons; for another example, a staircase scene, a living room scene, and a bedroom scene in an indoor environment;
- the staircase scenes may include a straight staircase scene, a turning staircase scene, and the like.
- Step 202: Acquire a first image and a preset number of qualified images in each training scenario.
- a first image captured in the training scenario is collected; usually the first image has a relatively low image contrast before being processed. In addition, multiple candidate images taken with different exposure parameters in the same training scenario are collected.
- because there is a time difference between the capture times of the candidate images, a moving object appearing in the scene would cause ghosting in the fused second image, so a screening condition may be set in advance.
- the candidate images may be filtered using the above screening condition, and the images containing moving objects are removed from the candidate images, thereby obtaining the qualified images.
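- the disclosure does not fix a particular moving-object test; as one hedged sketch, candidates could be screened by frame differencing against the median of the brightness-normalized stack, with purely illustrative thresholds:

```python
import cv2
import numpy as np


def screen_candidates(candidates, diff_threshold=25, ratio_threshold=0.01):
    """Drop candidate exposures that appear to contain a moving object.

    Each candidate is compared against the median of the stack after rough
    exposure normalization; a large fraction of changed pixels suggests
    motion (which would cause ghosting in the fused second image).
    """
    grays = []
    for img in candidates:
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        grays.append(g / max(float(g.mean()), 1e-6))  # normalize out exposure
    reference = np.median(np.stack(grays), axis=0)
    qualified = []
    for img, g in zip(candidates, grays):
        changed = np.abs(g - reference) * 255.0 > diff_threshold
        if changed.mean() < ratio_threshold:  # few changed pixels: keep image
            qualified.append(img)
    return qualified
```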
- Step 203: Invoke the target fusion algorithm configured for each training scenario.
- a preset number of fusion algorithms may be determined in advance, and the images in each training scenario are fused by each fusion algorithm respectively;
- a preset number of fused images are thereby obtained; the fused image with the best image quality is determined from the fused images, and the fusion algorithm that generated that fused image is determined as the target fusion algorithm corresponding to the training scenario.
- the correspondence between the training scenarios and the target fusion algorithms may be saved.
- the scene name of the target training scenario may be used as an index to look up the saved correspondence between training scenarios and configured fusion
- algorithms; after the algorithm name of the target fusion algorithm corresponding to the scene name is found, the target fusion algorithm is called from the pre-saved fusion algorithms.
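- a sketch of this scene-indexed lookup follows; the algorithm names and scene names are hypothetical placeholders, since the disclosure only requires that the best-scoring fusion algorithm per scenario be recorded and retrievable by scene name.

```python
import cv2
import numpy as np


def fuse_mertens(images):
    """Exposure fusion via OpenCV's Mertens algorithm."""
    fused = cv2.createMergeMertens().process(
        [img.astype(np.float32) / 255.0 for img in images])
    return (fused * 255.0).clip(0, 255).astype(np.uint8)


def fuse_average(images):
    """Naive fallback: per-pixel average of the exposure stack."""
    return np.mean(np.stack(images).astype(np.float32), axis=0).astype(np.uint8)


# Hypothetical saved correspondence between scene names and the fusion
# algorithm that produced the best-quality fused image for that scene.
FUSION_ALGORITHMS = {"mertens": fuse_mertens, "average": fuse_average}
SCENE_TO_ALGORITHM = {"forest": "mertens", "living_room": "average"}


def call_target_fusion(scene_name, qualified_images):
    """Look up the target fusion algorithm by scene name and apply it."""
    algorithm_name = SCENE_TO_ALGORITHM[scene_name]
    return FUSION_ALGORITHMS[algorithm_name](qualified_images)
```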
- Step 204: Fuse the preset number of qualified images using the target fusion algorithm to obtain, for each training scenario, a second image corresponding to the first image.
- the target fusion algorithm is used to fuse the qualified images of the target training scenario, and the specific execution process of the fusion algorithm is consistent with the prior art, and will not be described here.
- the fusion algorithm can select the high-quality regions in each image and fuse these high-quality regions together; therefore, in this step the qualified images of different exposure levels can be fused to obtain a second image with a stretched dynamic range and enhanced contrast compared with the first image.
- each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image.
- Step 205: Invoke a pre-established deep convolutional neural network model.
- the deep convolutional neural network model includes a plurality of network layers including an input layer, one or more hidden layers, and an output layer.
- a deep convolutional neural network model may be established in advance, and the model may include: an input layer, n hidden layers (also referred to as convolution layers), and an output layer, and each layer may be configured with multiple
- filters; the size of each filter may be k*k, for example, 9*9, and each filter is assigned an initial weight value.
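- a minimal PyTorch sketch of such a model is shown below; the number of hidden layers, channel width, and padding are assumptions for illustration, while the conv-plus-ReLU structure and the k*k (e.g., 9*9) filters follow the description above.

```python
import torch.nn as nn


class ContrastEnhanceCNN(nn.Module):
    """n hidden convolution layers with k x k filters, each followed by ReLU,
    plus a convolutional output layer mapping back to 3 image channels."""

    def __init__(self, n_hidden=3, channels=64, kernel_size=9):
        super().__init__()
        pad = kernel_size // 2  # keep the spatial size constant
        layers = [nn.Conv2d(3, channels, kernel_size, padding=pad), nn.ReLU()]
        for _ in range(n_hidden - 1):
            layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad), nn.ReLU()]
        layers.append(nn.Conv2d(channels, 3, kernel_size, padding=pad))  # output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Each hidden layer computes max(0, W * x + b), as in Formula (1).
        return self.net(x)
```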
- Step 206: Randomly extract a preset number of groups of images to be trained from the set of image pairs.
- a preset number of image pairs may be randomly extracted from the set of image pairs obtained in the foregoing step 204 as the images to be trained.
- assuming the first image in each image pair is x and the second image is y, each extracted group of images to be trained can be represented as (x, y).
- Step 207: Sequentially input the first images in the images to be trained into the plurality of network layers for training to obtain the trained first images.
- the first image x of each group of images to be trained may be input into the input layer of the plurality of network layers. If there are N groups of images to be trained in total, the first image x may be denoted x^{(i)},
- and the second image y may be denoted y^{(i)}, where i takes integer values from 1 to N.
- a predetermined number of filters W_l are convolved with the first image x^{(i)}, that is, W_l * x^{(i)}, to obtain a feature image.
- a preset nonlinear activation function is then applied; for example,
- the feature image is nonlinearly transformed by the ReLU function to obtain a transformed image, and the transformed image is output to the next network layer.
- the transformation process is given by the following formula:
- F(x^{(i)}, ω) = max[0, (W_l * x^{(i)} + b_i)]   Formula (1)
- in the above formula, F represents the ReLU function,
- ω represents the parameters of the network layer filters W,
- and b_i represents a constant;
- after the transformed images output by the output layer of the plurality of network layers are obtained, a set of first images F(x^{(i)}, ω) trained by the deep convolutional neural network is obtained.
- Step 208: Call the loss function to calculate the mean square error between the trained first images and the corresponding second images.
- the mean square error L between the transformed image F(x^{(i)}, ω) and the second image y^{(i)} serving as the reference image may be calculated using a loss function (Loss Function) of the following form:
- L(ω) = (1/N) Σ_{i=1}^{N} ||F(x^{(i)}, ω) − y^{(i)}||²   Formula (2)
- Step 209: Determine whether the mean square error is greater than the error threshold. If yes, execute step 210; otherwise, end the current flow.
- an error threshold may be preset, and the error threshold is used to determine whether the loss function has converged; that is, if the judgment result is that the mean square error L is greater than the error threshold, the loss function has not yet converged, and step 210 needs to be executed; if the mean square error L is less than or equal to the error threshold, the loss function has converged, and the parameters of each network layer, including the weights of the filters, are saved, thereby completing the training of the deep convolutional neural network.
- Step 210: Use the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the plurality of network layers, and return to step 206.
- since the loss function has not yet converged, the back propagation algorithm (Back Propagation) can be used: in the reverse direction from the output layer to the input layer, for each network layer,
- the partial derivative of the mean square error with respect to the weight of each filter of the network layer is calculated by formula (3), and the partial derivative of the mean square error with respect to the input x of the network layer is calculated by formula (4).
- the updated weight value of a filter is obtained by calculating the difference between the original weight value of the filter and the partial derivative value; the weight of the filter is updated with the updated weight value, and the original x is updated with the partial derivative value with respect to x; the process then returns to step 206.
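- putting steps 206 to 210 together, a minimal PyTorch training-loop sketch follows; the batch size, learning rate, and the use of plain SGD are assumptions (the description above states the update as the weight minus the partial derivative):

```python
import torch
import torch.nn as nn


def train(model, image_pairs, error_threshold=1e-3, lr=1e-4, batch_size=16):
    """image_pairs: list of (x, y) float tensors, x low-contrast, y the fused reference."""
    loss_fn = nn.MSELoss()  # mean square error between F(x, w) and y
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    while True:
        # Step 206: randomly extract a preset number of groups of training images.
        idx = torch.randperm(len(image_pairs))[:batch_size]
        x = torch.stack([image_pairs[i][0] for i in idx])
        y = torch.stack([image_pairs[i][1] for i in idx])
        # Step 207: forward the first images through the network layers.
        out = model(x)
        # Step 208: call the loss function on trained first vs. second images.
        loss = loss_fn(out, y)
        # Step 209: stop once the loss function has converged.
        if loss.item() <= error_threshold:
            break
        # Step 210: backpropagate and update each filter's weights.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```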
- this embodiment provides a pre-trained neural network;
- the training set of the neural network is a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image;
- the neural network trained on the above training set has the capability of enhancing image contrast, so in a practical application environment, when the third image is input into the neural network, the contrast of the third image can be enhanced, and a fourth image of higher quality is mapped and output. It can be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
- corresponding to the foregoing embodiments of the method for enhancing image contrast, the present invention also provides embodiments of an apparatus, a device, and a storage medium for enhancing image contrast.
- referring to FIG. 3, which is a block diagram of an embodiment of an apparatus for enhancing image contrast according to the present invention:
- the apparatus may include an invoking unit 310, an input unit 320, and an obtaining unit 330.
- the calling unit is configured to invoke a neural network, where the training set of the neural network is a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image;
- An input unit configured to input a third image into the neural network
- an obtaining unit configured to obtain a fourth image outputted through the neural network map, the contrast of the fourth image being higher than the contrast of the third image.
- FIG. 4 is a block diagram of another embodiment of an apparatus for enhancing image contrast according to the present invention.
- based on the embodiment shown in FIG. 3, this embodiment may further include: a building unit 340 and a training unit 350.
- the building unit 340 is configured to construct a set of image pairs by using a multi-exposure image fusion algorithm
- the training unit 350 is configured to use the set of image pairs as a training set to train the neural network.
- the building unit 340 can include (not shown in FIG. 4):
- a scene determination subunit configured to determine a plurality of training scenarios
- an image acquisition subunit configured to acquire a first image and a preset number of qualified images in each training scenario;
- an algorithm calling subunit configured to invoke the target fusion algorithm configured for each training scenario;
- an image fusion subunit configured to fuse the preset number of qualified images by using the target fusion algorithm to obtain, for each training scenario, a second image corresponding to the first image.
- the image acquisition subunit is specifically configured to collect a first image captured in each training scenario and multiple candidate images captured with different exposure parameters, and to obtain, from the candidate images, the screened
- qualified images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
- the algorithm calling subunit is specifically configured to use the scene name of each training scenario as an index to look up the pre-saved correspondence between training scenarios and configured fusion algorithms, and to invoke the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- the training unit may include (not shown in FIG. 4):
- a model calling subunit configured to invoke a pre-established deep convolutional neural network model, the model including a plurality of network layers, the plurality of network layers including an input layer, one or more hidden layers, and an output layer;
- an iterative processing subunit configured to repeatedly trigger the following subunits to perform the training operation until the loss function converges:
- An image extraction subunit configured to randomly extract a preset number of groups of images to be trained from the set of image pairs
- An image training subunit configured to sequentially input the first image in the image to be trained into the plurality of network layers for training, to obtain a first image after training
- An error calculation subunit configured to call a loss function to calculate a mean square error of the trained first image and the corresponding second image
- a back propagation subunit configured to, if the mean square error is greater than the error threshold, backpropagate the mean square error from the output layer to the input layer using a back propagation algorithm to update the parameters of the plurality of network layers.
- the image training subunit is specifically configured to input the first image of each group of images to be trained into the input layer of the plurality of network layers; at each network layer, perform a convolution operation between a preset number of filters and the first image to obtain a feature image, perform nonlinear transformation on the feature image to obtain a transformed image, and output the transformed image to the next network layer; and obtain the transformed image output by the output layer of the plurality of network layers to obtain the trained first image.
- the back propagation subunit is specifically configured to, in the reverse direction from the output layer to the input layer, for each network layer, calculate the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtain the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and update the weight of the filter with the updated weight value.
- referring to FIG. 5, which is a schematic diagram of an embodiment of a device for enhancing image contrast according to the present invention.
- the apparatus may include a memory 520 and a processor 530 connected through an internal bus 510.
- the memory 520 is configured to store machine readable instructions corresponding to control logic for enhancing image contrast
- the processor 530 is configured to read the machine readable instructions on the memory and execute the instructions to:
- the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, the first image having a lower contrast than the second image;
- a fourth image output through the mapping of the neural network is obtained, the contrast of the fourth image being higher than the contrast of the third image.
- the processor 530 is further configured to construct a set of image pairs by using a multi-exposure image fusion algorithm; and use the set of image pairs as a training set to train the neural network.
- when performing the operation of constructing a set of image pairs by using a multi-exposure image fusion algorithm, the processor 530 is specifically configured to: determine a plurality of training scenarios; acquire a first image and a preset number of qualified images in each training scenario; invoke the target fusion algorithm configured for each training scenario; and fuse the preset number of qualified images by the target fusion algorithm to obtain, for each training scenario, a second image corresponding to the first image.
- when performing the operation of acquiring a first image and a preset number of qualified images in each training scenario, the processor 530 is specifically configured to: collect a first image captured in each training scenario and a plurality of candidate images captured with different exposure parameters; and obtain, from the candidate images, the screened qualified images satisfying a preset condition, wherein the preset condition includes that the screened qualified images do not contain moving objects.
- when performing the operation of invoking the target fusion algorithm configured for each training scenario, the processor 530 is specifically configured to: use the scene name of each training scenario as an index to look up the pre-stored correspondence between training scenarios and configured fusion algorithms; and invoke the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- when performing the operation of using the set of image pairs as a training set to train the neural network, the processor 530 is specifically configured to invoke a pre-established deep convolutional neural
- network model, the model including a plurality of network layers, the plurality of network layers including an input layer, one or more hidden layers, and an output layer; and to repeatedly perform the following training operations until the loss function converges: randomly extract a preset number of groups of images to be trained from the set of image pairs; sequentially input the first images in the images to be trained into the plurality of network layers for training to obtain the trained first images; call the loss function to calculate the mean square error between the trained first images and the corresponding second images; and, if the mean square error is greater than the error threshold, backpropagate the mean square error from the output layer to the input layer using the back propagation algorithm to update the parameters of the plurality of network layers.
- when performing the operation of sequentially inputting the first images in the images to be trained into the plurality of network layers for training to obtain the trained first images, the processor 530 is specifically configured to:
- input the first image of each group of images to be trained into the input layer of the plurality of network layers; at each network layer, perform a convolution operation between a preset number of filters and the first image to obtain a feature image, perform nonlinear transformation on the feature image to obtain a transformed image, and output the transformed image to the next network layer; and obtain the transformed image output by the output layer of the plurality of network layers
- to obtain the trained first image.
- when performing the operation of backpropagating the mean square error from the output layer to the input layer by using a back propagation algorithm to update the parameters of the plurality of network layers, the processor 530 is specifically configured to:
- in the reverse direction from the output layer to the input layer, for each network layer, calculate the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtain the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and update the weight of the filter by using the updated weight value.
- the device may include: a drone, a handheld camera device, a terminal device, and the like.
- an embodiment of the present invention further provides a machine readable storage medium, the machine readable storage medium storing a plurality of computer instructions, and when the computer instructions are executed, the following processing is performed:
- the training set of the neural network being a set of image pairs, wherein each pair of images includes a first image and a second image for the same scene, the first image having a lower contrast than the second image;
- a fourth image output through the mapping of the neural network is obtained, the contrast of the fourth image being higher than the contrast of the third image.
- the set of image pairs is used as a training set, and the neural network is trained.
- the preset number of qualified images are fused by the target fusion algorithm to obtain a second image corresponding to the first image in each training scenario.
- the preset condition includes that the filtered qualified image does not include a moving object.
- the target fusion algorithm is invoked from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- when the computer instructions are executed to use the set of image pairs as a training set to train the neural network, the following processing is specifically performed:
- the deep convolutional neural network model including a plurality of network layers, the plurality of network layers including an input layer, one or more hidden layers, and an output layer;
- the mean square error is backpropagated from the output layer to the input layer using a back propagation algorithm to update parameters of the plurality of network layers.
- when the computer instructions are executed to sequentially input the first images in the images to be trained into the plurality of network layers for training to obtain the trained first images, the following processing is specifically performed:
- the first image of each group of images to be trained is input into the input layer of the plurality of network layers; at each network layer, a predetermined number of filters are convolved with the first image to obtain a feature image, the feature image is nonlinearly transformed to obtain a transformed image, and the transformed image is output to the next network layer; the transformed image output by the output layer of the plurality of network layers is obtained, yielding the trained first image.
- when the computer instructions are executed to backpropagate the mean square error from the output layer to the input layer using a back propagation algorithm to update the parameters of the plurality of network layers, the following processing is specifically performed:
- in the reverse direction from the output layer to the input layer, for each network layer, the partial derivative value of the mean square error with respect to the weight of each filter of the network layer is calculated; the updated weight value of the filter is obtained by calculating the difference between the original weight value of the filter and the partial derivative value; and the weight of the filter is updated by using the updated weight value.
- as for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant parts.
- the device embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
Abstract
A method, device, and storage medium for enhancing image contrast. The method includes: calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image (101); inputting a third image into the neural network (102); and obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image (103). The method can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
Description
The present application relates to the field of image processing technologies, and in particular, to a method, a device, and a storage medium for enhancing image contrast.
When a digital camera device shoots an outdoor scene or a night scene, the light-sensitive range of the image sensor used, for example a CCD (Charge Coupled Device), is lower than the dynamic range of the natural scene, so overexposure or underexposure easily occurs. It is therefore necessary to enhance the contrast of the image to improve the display of detail information in the image, thereby providing a more reliable input image for computer vision recognition.
In the related art, a single-image enhancement algorithm can be used to improve image contrast; for example, an algorithm based on Retinex theory is used to enhance the contrast of an image. The principle of the algorithm is to decompose the image into a low-frequency illumination intensity portion and a high-frequency detail portion, and to enhance the contrast of the original image by optimizing the low-frequency illumination intensity portion. However, since the above algorithm optimizes the illumination intensity portion based on a priori conditions, while real images are often complicated and a priori conditions can hardly reflect real-world colors well, the contrast-enhanced image exhibits unrealistic effects, resulting in low image quality.
Summary of the Invention
The present application discloses a method, a device, and a storage medium for enhancing image contrast.
According to a first aspect of the present invention, a method for enhancing image contrast is provided, the method including:
calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image;
inputting a third image into the neural network;
obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
According to a second aspect of the present invention, a device for enhancing image contrast is provided, including: an internal bus, and a memory and a processor connected through the internal bus; where,
the memory is configured to store machine-readable instructions corresponding to control logic for enhancing image contrast;
the processor is configured to read the machine-readable instructions on the memory and execute the instructions to implement the following operations:
calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image;
inputting a third image into the neural network;
According to a third aspect of the present invention, a machine-readable storage medium is provided, the machine-readable storage medium storing a number of computer instructions, and when the computer instructions are executed, the following processing is performed:
calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image;
inputting a third image into the neural network;
obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
An embodiment of the present application provides a pre-trained neural network. The training set of the neural network is a set of image pairs, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image. A neural network trained on the above training set has the capability of enhancing image contrast. Therefore, in a practical application environment, after a third image is input into the neural network, the contrast of the third image can be enhanced, so that a fourth image of higher quality is output through the mapping. It can thus be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
FIG. 1 is a flowchart of an embodiment of a method for enhancing image contrast according to the present application;
FIG. 2 is a flowchart of another embodiment of a method for enhancing image contrast according to the present application;
FIG. 3 is a block diagram of an embodiment of an apparatus for enhancing image contrast according to the present application;
FIG. 4 is a block diagram of another embodiment of an apparatus for enhancing image contrast according to the present application;
FIG. 5 is a block diagram of an embodiment of a device for enhancing image contrast according to the present application.
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
When a digital camera device is shooting, if the light-sensitive range of its image sensor is lower than the dynamic range of the natural scene, the captured image will be overexposed or underexposed. In this case, contrast enhancement processing needs to be performed on the image to improve the display of detail information in the image. In some typical application scenarios of computer vision recognition, for example, face recognition, scene recognition, and pedestrian detection, enhancing image contrast can provide a more reliable input image for computer vision recognition. An algorithm for enhancing image contrast can be embedded in the chip of a camera device to achieve real-time contrast-enhancement processing of images during shooting. In the related art, a single-image enhancement algorithm can be used to improve image contrast, but such an algorithm easily causes the contrast-enhanced image to look unrealistic. Therefore, in order to improve the image contrast enhancement effect, the embodiments of the present application enhance image contrast through a pre-trained neural network.
A neural network abstracts the neuron network of the human brain from the perspective of information processing, establishes a simple model, and forms different networks according to different connection methods. A neural network is a computational model consisting of a large number of interconnected nodes (or neurons), where each node represents a specific output function, called an activation function. The output of a neural network differs according to the connection method of the network, the weight value of each node, and the activation function. A DNN (Deep Neural Network) may include a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like, and has adaptive, self-organizing, and real-time learning capabilities.
The training set of the neural network used in the embodiments of the present application is a set of image pairs, where each pair of images includes a first image for the same scene and a second image serving as a reference image, and the contrast of the first image is lower than the contrast of the second image; that is, the second image used to train the neural network has a high dynamic range and high contrast. An end-to-end neural network is thus obtained through training and learning, which has the capability of mapping a low-contrast image to a high-contrast image. The embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to FIG. 1, which is a flowchart of an embodiment of a method for enhancing image contrast according to the present invention, the embodiment may include the following steps:
Step 101: Call a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image.
The neural network in this embodiment may be a pre-built neural network. The device for building the neural network may be different from the device that performs the image contrast enhancement of this embodiment; alternatively, when the device performing this embodiment has strong computing capability, the two devices may be the same. This is not limited in the embodiments of the present application.
The neural network has the capability of mapping a low-contrast image to a high-contrast image; its training set is therefore a set of image pairs containing multiple image pairs, where each pair of images is for the same scene and includes a low-contrast first image and a high-contrast second image. The second image may be generated by a multi-exposure image fusion algorithm, which ensures that the second image is higher than the first image in both dynamic range and contrast, so that when the image pairs are input into the neural network for learning, with the second image serving as the reference image, a neural network for enhancing image contrast can be obtained.
The neural network in this embodiment serves as an algorithm model; when contrast enhancement of an image is required, the neural network is called by the execution body of the algorithm model. The above algorithm model may be embedded in advance in the chip of a camera device, and during shooting by the camera device the algorithm model is called in real time to perform contrast enhancement on the captured images; alternatively, the above algorithm model may be stored in advance in the memory of a computing device, and when the computing device performs batch image processing, the algorithm model is called to enhance image contrast in batches.
Step 102: Input a third image into the neural network.
Step 103: Obtain a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
As can be seen from the above, this embodiment provides a pre-trained neural network whose training set is a set of image pairs, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image. The neural network trained on the above training set has the capability of enhancing image contrast; therefore, in a practical application environment, after the third image is input into the neural network, the contrast of the third image can be enhanced, so that a fourth image of higher quality is output through the mapping. It can thus be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
Referring to FIG. 2, which is a flowchart of another embodiment of a method for enhancing image contrast according to the present invention, this embodiment shows the process of training a deep convolutional neural network for enhancing image contrast and may include the following steps:
Step 201: Determine multiple training scenes.
Since there are numerous scenes in the real world, in order to make the deep convolutional neural network generalize to different scenes, different types of training scenes may be determined before training in this embodiment, and the number of scenes can be set flexibly as needed. For example, more than 100 training scenes are set, and these scenes may include the scenes involved in most real shooting environments, where each scene may further include multiple sub-scenes. For example, a natural environment includes forest scenes, river scenes, plant scenes, and so on, where the plant scenes may include plant sub-scenes in different seasons; for another example, an indoor environment includes staircase scenes, living room scenes, bedroom scenes, and so on, where the staircase scenes may include straight staircase scenes, turning staircase scenes, and the like.
Step 202: Acquire a first image and a preset number of qualified images in each training scene.
In this step, for each training scene determined in step 201, a first image captured in the training scene is collected; usually the first image has a relatively low image contrast before being processed. In addition, multiple candidate images captured in the same training scene with different exposure parameters are collected. Although the above multiple candidate images are for the same scene, there is a certain time difference between the capture times of the different images; when a moving object appears in the scene at some moment, a ghost will appear when the candidate images are subsequently fused into the high-contrast second image. Therefore, in this embodiment a screening condition may be set in advance, and in this step the candidate images may be screened using the above screening condition, removing the images containing moving objects from the candidate images, thereby obtaining the qualified images.
Step 203: Call the target fusion algorithm configured for each training scene.
In the related art there are many kinds of image fusion algorithms suitable for constructing high-contrast images. In this embodiment, a preset number of fusion algorithms may be determined in advance, and the images in each training scene are fused by each fusion algorithm respectively to obtain a preset number of fused images; the fused image with the best image quality is determined from these fused images, and the fusion algorithm that generated this fused image is determined as the target fusion algorithm corresponding to the training scene. After the corresponding target fusion algorithm has been determined for each training scene in the above manner, the correspondence between the training scenes and the target fusion algorithms may be saved.
After the qualified images in each training scene have been acquired in step 202, when image fusion is performed for a certain target training scene, the scene name of the target training scene may be used as an index to look up the saved correspondence between training scenes and configured fusion algorithms; after the algorithm name of the target fusion algorithm corresponding to the scene name is found, the target fusion algorithm is called from the pre-saved fusion algorithms.
Step 204: Fuse the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
In this step, after the target fusion algorithm of the target training scene has been called, the qualified images of the target training scene are fused by the target fusion algorithm; the specific execution process of the fusion algorithm is consistent with the prior art and will not be repeated here. Since the fusion algorithm can select the high-quality regions in each image and fuse these high-quality regions together, in this step qualified images of different exposure levels can be fused to obtain a second image with a stretched dynamic range and enhanced contrast compared with the first image.
After the qualified images of all training scenes have been fused, a set of image pairs is obtained, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image.
Step 205: Call a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer.
In this embodiment, a deep convolutional neural network model may be established in advance. The model may include: an input layer, n hidden layers (which may also be called convolution layers), and an output layer. Each layer may be provided with multiple filters; the size of a filter may be k*k, for example, 9*9, and each filter is assigned an initial weight value.
Step 206: Randomly extract a preset number of groups of images to be trained from the set of image pairs.
In this step, when training of the deep convolutional neural network model starts, a preset number of groups of image pairs may be randomly extracted, as the images to be trained, from the set of image pairs obtained in the foregoing step 204. In this embodiment, assuming the first image in each image pair is x and the second image is y, each extracted group of images to be trained can be represented as (x, y).
Step 207: Sequentially input the first images in the images to be trained into the multiple network layers for training to obtain trained first images.
In this step, the first image x of each group of images to be trained (x, y) may be input into the input layer of the multiple network layers. Assuming there are N groups of images to be trained in total, the first image x can be denoted x^{(i)} and the second image y can be denoted y^{(i)}, where i takes integer values from 1 to N.
At each network layer, the following operations may be performed:
Convolve a preset number of filters W_l with the first image x^{(i)}, that is, W_l * x^{(i)}, to obtain a feature image.
Then apply a preset nonlinear activation function; for example, use the ReLU function to nonlinearly transform the feature image to obtain a transformed image, and output the transformed image to the next network layer. The transformation process is given by the following formula:
F(x^{(i)}, ω) = max[0, (W_l * x^{(i)} + b_i)]   Formula (1)
In the above formula (1), F denotes the ReLU function, ω denotes the parameters of the filters W of the network layer, and b_i denotes a constant.
After the transformed images output by the output layer of the multiple network layers are obtained, a group of first images F(x^{(i)}, ω) trained by the deep convolutional neural network is obtained.
Step 208: Call the loss function to calculate the mean square error between the trained first images and the corresponding second images.
In this step, a loss function (Loss Function) of the following form may be used to calculate the mean square error L between the transformed image F(x^{(i)}, ω) and the second image y^{(i)} serving as the reference image:
L(ω) = (1/N) Σ_{i=1}^{N} ||F(x^{(i)}, ω) − y^{(i)}||²   Formula (2)
Step 209: Determine whether the mean square error is greater than the error threshold; if yes, execute step 210; otherwise, end the current flow.
The smaller the mean square error L, the closer the transformed image F(x^{(i)}, ω) is to the second image y^{(i)}; when the mean square error L falls to a certain value, training of the deep convolutional neural network can be regarded as complete. Therefore, in this step an error threshold may be set in advance, and the error threshold is used to determine whether the loss function has converged. That is, if the judgment result is that the mean square error L is greater than the error threshold, the loss function has not yet converged, and step 210 needs to be executed; if the judgment result is that the mean square error L is less than or equal to the error threshold, the loss function has converged, and the parameters of each network layer at this time, including the weights of the filters, are saved, thereby completing the training of the deep convolutional neural network.
Step 210: Use the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers, and return to step 206.
In this step, since the loss function has not yet converged, the backpropagation (Back Propagation) algorithm can be used: in the reverse direction from the output layer to the input layer, for each network layer, formula (3) is used to calculate the partial derivative of the mean square error with respect to the weight of each filter of the network layer, and formula (4) is used to calculate the partial derivative of the mean square error with respect to the input x of the network layer.
For each network layer, the updated weight value of a filter is obtained by calculating the difference between the original weight value of the filter and the partial derivative value; the weight of the filter is updated with the updated weight value, and the original x is updated with the partial derivative value with respect to x; the process then returns to step 206.
As can be seen from the above, this embodiment provides a pre-trained neural network whose training set is a set of image pairs, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image. The neural network trained on the above training set has the capability of enhancing image contrast; therefore, in a practical application environment, after the third image is input into the neural network, the contrast of the third image can be enhanced, so that a fourth image of higher quality is output through the mapping. It can thus be seen that the embodiments of the present application can enhance any input low-contrast image so that it reaches the high dynamic range of a multi-exposure fused image; the contrast-enhanced image therefore looks realistic and has high image quality.
Corresponding to the foregoing embodiments of the method for enhancing image contrast, the present invention also provides embodiments of an apparatus, a device, and a storage medium for enhancing image contrast.
Referring to FIG. 3, which is a block diagram of an embodiment of an apparatus for enhancing image contrast according to the present invention:
The apparatus may include: a calling unit 310, an input unit 320, and an obtaining unit 330.
The calling unit is configured to call a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, and the contrast of the first image is lower than the contrast of the second image;
the input unit is configured to input a third image into the neural network;
the obtaining unit is configured to obtain a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
Referring to FIG. 4, which is a block diagram of another embodiment of an apparatus for enhancing image contrast according to the present invention; on the basis of the embodiment shown in FIG. 3, this embodiment may further include: a construction unit 340 and a training unit 350.
The construction unit 340 is configured to construct a set of image pairs through a multi-exposure image fusion algorithm;
the training unit 350 is configured to use the set of image pairs as a training set to train and obtain the neural network.
In an optional implementation, the construction unit 340 may include (not shown in FIG. 4):
a scene determination subunit configured to determine multiple training scenes;
an image acquisition subunit configured to acquire a first image and a preset number of qualified images in each training scene;
an algorithm calling subunit configured to call the target fusion algorithm configured for each training scene;
an image fusion subunit configured to fuse the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
In one example, the image acquisition subunit is specifically configured to collect a first image captured in each training scene and multiple candidate images captured with different exposure parameters, and to obtain qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
In another example, the algorithm calling subunit is specifically configured to use the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms, and to call the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
In another optional implementation, the training unit may include (not shown in FIG. 4):
a model calling subunit configured to call a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer;
an iterative processing subunit configured to repeatedly trigger the following subunits to perform the training operation until the loss function converges:
an image extraction subunit configured to randomly extract a preset number of groups of images to be trained from the set of image pairs;
an image training subunit configured to sequentially input the first images in the images to be trained into the multiple network layers for training to obtain trained first images;
an error calculation subunit configured to call the loss function to calculate the mean square error between the trained first images and the corresponding second images;
a backpropagation subunit configured to, if the mean square error is greater than the error threshold, use the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
In one example, the image training subunit is specifically configured to input the first image of each group of images to be trained into the input layer of the multiple network layers; at each network layer, convolve a preset number of filters with the first image to obtain a feature image, nonlinearly transform the feature image to obtain a transformed image, and output the transformed image to the next network layer; and obtain the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
In another example, the backpropagation subunit is specifically configured to, in the reverse direction from the output layer to the input layer, for each network layer, calculate the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtain the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and update the weight of the filter with the updated weight value.
Referring to FIG. 5, which is a schematic diagram of an embodiment of a device for enhancing image contrast according to the present invention, the device may include: a memory 520 and a processor 530 connected through an internal bus 510.
The memory 520 is configured to store machine-readable instructions corresponding to control logic for enhancing image contrast;
the processor 530 is configured to read the machine-readable instructions on the memory and execute the instructions to implement the following operations:
calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image;
inputting a third image into the neural network;
obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
In an optional implementation, the processor 530 is further configured to construct a set of image pairs through a multi-exposure image fusion algorithm, and to use the set of image pairs as a training set to train and obtain the neural network.
In another optional implementation, when performing the operation of constructing a set of image pairs through a multi-exposure image fusion algorithm, the processor 530 is specifically configured to: determine multiple training scenes; acquire a first image and a preset number of qualified images in each training scene; call the target fusion algorithm configured for each training scene; and fuse the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
In another optional implementation, when performing the operation of acquiring a first image and a preset number of qualified images in each training scene, the processor 530 is specifically configured to: collect a first image captured in each training scene and multiple candidate images captured with different exposure parameters; and obtain qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
In another optional implementation, when performing the operation of calling the target fusion algorithm configured for each training scene, the processor 530 is specifically configured to: use the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms; and call the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
In another optional implementation, when performing the operation of using the set of image pairs as a training set to train the neural network, the processor 530 is specifically configured to: call a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer; and
repeatedly perform the following training operations until the loss function converges:
randomly extract a preset number of groups of images to be trained from the set of image pairs; sequentially input the first images in the images to be trained into the multiple network layers for training to obtain trained first images; call the loss function to calculate the mean square error between the trained first images and the corresponding second images; and, if the mean square error is greater than the error threshold, use the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
In another optional implementation, when performing the operation of sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images, the processor 530 is specifically configured to: input the first image of each group of images to be trained into the input layer of the multiple network layers; at each network layer, convolve a preset number of filters with the first image to obtain a feature image, nonlinearly transform the feature image to obtain a transformed image, and output the transformed image to the next network layer; and obtain the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
In another optional implementation, when performing the operation of using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers, the processor 530 is specifically configured to: in the reverse direction from the output layer to the input layer, for each network layer, calculate the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtain the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and update the weight of the filter with the updated weight value.
In another optional implementation, the device may include: an unmanned aerial vehicle, a handheld camera device, a terminal device, and the like.
In addition, an embodiment of the present invention further provides a machine-readable storage medium storing a number of computer instructions, and when the computer instructions are executed, the following processing is performed:
calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image;
inputting a third image into the neural network;
obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
In an optional implementation, when the computer instructions are executed, the following processing is further performed:
constructing a set of image pairs through a multi-exposure image fusion algorithm;
using the set of image pairs as a training set to train and obtain the neural network.
In another optional implementation, when the computer instructions are executed to construct a set of image pairs through a multi-exposure image fusion algorithm, the following processing is specifically performed:
determining multiple training scenes;
acquiring a first image and a preset number of qualified images in each training scene;
calling the target fusion algorithm configured for each training scene;
fusing the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
In another optional implementation, when the computer instructions are executed to acquire a first image and a preset number of qualified images in each training scene, the following processing is specifically performed:
collecting a first image captured in each training scene and multiple candidate images captured with different exposure parameters;
obtaining qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
In another optional implementation, when the computer instructions are executed to call the target fusion algorithm configured for each training scene, the following processing is specifically performed:
using the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms;
calling the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
In another optional implementation, when the computer instructions are executed to use the set of image pairs as a training set to train the neural network, the following processing is specifically performed:
calling a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer;
repeatedly performing the following training operations until the loss function converges:
randomly extracting a preset number of groups of images to be trained from the set of image pairs;
sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images;
calling the loss function to calculate the mean square error between the trained first images and the corresponding second images;
if the mean square error is greater than the error threshold, using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
In another optional implementation, when the computer instructions are executed to sequentially input the first images in the images to be trained into the multiple network layers for training to obtain trained first images, the following processing is specifically performed:
inputting the first image of each group of images to be trained into the input layer of the multiple network layers;
at each network layer, convolving a preset number of filters with the first image to obtain a feature image, nonlinearly transforming the feature image to obtain a transformed image, and outputting the transformed image to the next network layer;
obtaining the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
In another optional implementation, when the computer instructions are executed to backpropagate the mean square error from the output layer to the input layer using the backpropagation algorithm to update the parameters of the multiple network layers, the following processing is specifically performed:
in the reverse direction from the output layer to the input layer, for each network layer, calculating the partial derivative value of the mean square error with respect to the weight of each filter of the network layer;
obtaining the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; updating the weight of the filter with the updated weight value.
As for the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for related parts. The apparatus embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
It should be noted that, herein, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device including the element.
The method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and application scope based on the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (25)
- A method for enhancing image contrast, wherein the method comprises: calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image; inputting a third image into the neural network; and obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
- The method according to claim 1, wherein the method further comprises: constructing a set of image pairs through a multi-exposure image fusion algorithm; and using the set of image pairs as a training set to train and obtain the neural network.
- The method according to claim 2, wherein constructing a set of image pairs through a multi-exposure image fusion algorithm comprises: determining multiple training scenes; acquiring a first image and a preset number of qualified images in each training scene; calling the target fusion algorithm configured for each training scene; and fusing the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
- The method according to claim 3, wherein acquiring a first image and a preset number of qualified images in each training scene comprises: collecting a first image captured in each training scene and multiple candidate images captured with different exposure parameters; and obtaining qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
- The method according to claim 3, wherein calling the target fusion algorithm configured for each training scene comprises: using the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms; and calling the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- The method according to claim 2, wherein using the set of image pairs as a training set to train the neural network comprises: calling a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer; and repeatedly performing the following training operations until the loss function converges: randomly extracting a preset number of groups of images to be trained from the set of image pairs; sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images; calling the loss function to calculate the mean square error between the trained first images and the corresponding second images; and, if the mean square error is greater than the error threshold, using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
- The method according to claim 6, wherein sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images comprises: inputting the first image of each group of images to be trained into the input layer of the multiple network layers; at each network layer, convolving a preset number of filters with the first image to obtain a feature image, nonlinearly transforming the feature image to obtain a transformed image, and outputting the transformed image to the next network layer; and obtaining the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
- The method according to claim 7, wherein using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers comprises: in the reverse direction from the output layer to the input layer, for each network layer, calculating the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtaining the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and updating the weight of the filter with the updated weight value.
- A device for enhancing image contrast, comprising: an internal bus, and a memory and a processor connected through the internal bus; wherein the memory is configured to store machine-readable instructions corresponding to control logic for enhancing image contrast; and the processor is configured to read the machine-readable instructions on the memory and execute the instructions to implement the following operations: calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image; inputting a third image into the neural network; and obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
- The device according to claim 9, wherein the processor is further configured to construct a set of image pairs through a multi-exposure image fusion algorithm, and to use the set of image pairs as a training set to train and obtain the neural network.
- The device according to claim 10, wherein, when performing the operation of constructing a set of image pairs through a multi-exposure image fusion algorithm, the processor is specifically configured to: determine multiple training scenes; acquire a first image and a preset number of qualified images in each training scene; call the target fusion algorithm configured for each training scene; and fuse the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
- The device according to claim 11, wherein, when performing the operation of acquiring a first image and a preset number of qualified images in each training scene, the processor is specifically configured to: collect a first image captured in each training scene and multiple candidate images captured with different exposure parameters; and obtain qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
- The device according to claim 11, wherein, when performing the operation of calling the target fusion algorithm configured for each training scene, the processor is specifically configured to: use the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms; and call the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- The device according to claim 10, wherein, when performing the operation of using the set of image pairs as a training set to train the neural network, the processor is specifically configured to: call a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer; and repeatedly perform the following training operations until the loss function converges: randomly extracting a preset number of groups of images to be trained from the set of image pairs; sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images; calling the loss function to calculate the mean square error between the trained first images and the corresponding second images; and, if the mean square error is greater than the error threshold, using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
- The device according to claim 14, wherein, when performing the operation of sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images, the processor is specifically configured to: input the first image of each group of images to be trained into the input layer of the multiple network layers; at each network layer, convolve a preset number of filters with the first image to obtain a feature image, nonlinearly transform the feature image to obtain a transformed image, and output the transformed image to the next network layer; and obtain the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
- The device according to claim 15, wherein, when performing the operation of using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers, the processor is specifically configured to: in the reverse direction from the output layer to the input layer, for each network layer, calculate the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtain the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and update the weight of the filter with the updated weight value.
- The device according to any one of claims 9 to 16, wherein the device comprises: an unmanned aerial vehicle, a handheld camera device, or a terminal device.
- A machine-readable storage medium, the machine-readable storage medium storing a number of computer instructions, wherein, when the computer instructions are executed, the following processing is performed: calling a neural network, the training set of the neural network being a set of image pairs, where each pair of images includes a first image and a second image for the same scene, the contrast of the first image being lower than the contrast of the second image; inputting a third image into the neural network; and obtaining a fourth image output through the mapping of the neural network, the contrast of the fourth image being higher than the contrast of the third image.
- The storage medium according to claim 18, wherein, when the computer instructions are executed, the following processing is further performed: constructing a set of image pairs through a multi-exposure image fusion algorithm; and using the set of image pairs as a training set to train and obtain the neural network.
- The storage medium according to claim 19, wherein, when the computer instructions are executed to construct a set of image pairs through a multi-exposure image fusion algorithm, the following processing is specifically performed: determining multiple training scenes; acquiring a first image and a preset number of qualified images in each training scene; calling the target fusion algorithm configured for each training scene; and fusing the preset number of qualified images using the target fusion algorithm to obtain, for each training scene, a second image corresponding to the first image.
- The storage medium according to claim 20, wherein, when the computer instructions are executed to acquire a first image and a preset number of qualified images in each training scene, the following processing is specifically performed: collecting a first image captured in each training scene and multiple candidate images captured with different exposure parameters; and obtaining qualified images screened from the candidate images that satisfy a preset condition, the preset condition including that the screened qualified images do not contain moving objects.
- The storage medium according to claim 20, wherein, when the computer instructions are executed to call the target fusion algorithm configured for each training scene, the following processing is specifically performed: using the scene name of each training scene as an index to look up the pre-saved correspondence between training scenes and configured fusion algorithms; and calling the target fusion algorithm from the pre-saved fusion algorithms according to the found algorithm name of the target fusion algorithm corresponding to the scene name.
- The storage medium according to claim 19, wherein, when the computer instructions are executed to use the set of image pairs as a training set to train the neural network, the following processing is specifically performed: calling a pre-established deep convolutional neural network model, the deep convolutional neural network model including multiple network layers, the multiple network layers including an input layer, one or more hidden layers, and an output layer; and repeatedly performing the following training operations until the loss function converges: randomly extracting a preset number of groups of images to be trained from the set of image pairs; sequentially inputting the first images in the images to be trained into the multiple network layers for training to obtain trained first images; calling the loss function to calculate the mean square error between the trained first images and the corresponding second images; and, if the mean square error is greater than the error threshold, using the backpropagation algorithm to backpropagate the mean square error from the output layer to the input layer to update the parameters of the multiple network layers.
- The storage medium according to claim 23, wherein, when the computer instructions are executed to sequentially input the first images in the images to be trained into the multiple network layers for training to obtain trained first images, the following processing is specifically performed: inputting the first image of each group of images to be trained into the input layer of the multiple network layers; at each network layer, convolving a preset number of filters with the first image to obtain a feature image, nonlinearly transforming the feature image to obtain a transformed image, and outputting the transformed image to the next network layer; and obtaining the transformed image output by the output layer of the multiple network layers to obtain the trained first image.
- The storage medium according to claim 24, wherein, when the computer instructions are executed to backpropagate the mean square error from the output layer to the input layer using the backpropagation algorithm to update the parameters of the multiple network layers, the following processing is specifically performed: in the reverse direction from the output layer to the input layer, for each network layer, calculating the partial derivative value of the mean square error with respect to the weight of each filter of the network layer; obtaining the updated weight value of the filter by calculating the difference between the original weight value of the filter and the partial derivative value; and updating the weight of the filter with the updated weight value.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780005603.2A CN108513672A (zh) | 2017-07-27 | 2017-07-27 | Method, device and storage medium for enhancing image contrast
PCT/CN2017/094650 WO2019019086A1 (zh) | 2017-07-27 | 2017-07-27 | Method, device and storage medium for enhancing image contrast
US16/742,145 US20200151858A1 (en) | 2017-07-27 | 2020-01-14 | Image contrast enhancement method and device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2017/094650 WO2019019086A1 (zh) | 2017-07-27 | 2017-07-27 | Method, device and storage medium for enhancing image contrast
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/742,145 Continuation US20200151858A1 (en) | 2017-07-27 | 2020-01-14 | Image contrast enhancement method and device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019019086A1 true WO2019019086A1 (zh) | 2019-01-31 |
Family
ID=63375795
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/094650 WO2019019086A1 (zh) | 2017-07-27 | 2017-07-27 | Method, device and storage medium for enhancing image contrast
Country Status (3)
Country | Link |
---|---|
US (1) | US20200151858A1 (zh) |
CN (1) | CN108513672A (zh) |
WO (1) | WO2019019086A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951199A (zh) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | Image fusion method and device |
WO2021029423A3 (en) * | 2019-08-15 | 2021-03-25 | Ricoh Company, Ltd. | Image processing method and apparatus and non-transitory computer-readable medium |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10783622B2 (en) * | 2018-04-25 | 2020-09-22 | Adobe Inc. | Training and utilizing an image exposure transformation neural network to generate a long-exposure image from a single short-exposure image |
CN109801224A (zh) * | 2018-12-04 | 2019-05-24 | 北京奇艺世纪科技有限公司 | Picture processing method and apparatus, server, and storage medium |
CN109618094A (zh) * | 2018-12-14 | 2019-04-12 | 深圳市华星光电半导体显示技术有限公司 | Image processing method and image processing system |
CN109712091B (zh) * | 2018-12-19 | 2021-03-23 | Tcl华星光电技术有限公司 | Picture processing method and apparatus, and electronic device |
CN110191291B (zh) * | 2019-06-13 | 2021-06-25 | Oppo广东移动通信有限公司 | Image processing method and apparatus based on multi-frame images |
CN110298810A (zh) * | 2019-07-24 | 2019-10-01 | 深圳市华星光电技术有限公司 | Image processing method and image processing system |
CN110443252A (zh) * | 2019-08-16 | 2019-11-12 | 广东工业大学 | Text detection method, apparatus, and device |
WO2021056304A1 (zh) * | 2019-09-26 | 2021-04-01 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, movable platform, and machine-readable storage medium |
CN112752011B (zh) * | 2019-10-29 | 2022-05-20 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, electronic apparatus, and storage medium |
CN111325698A (zh) * | 2020-03-17 | 2020-06-23 | 北京迈格威科技有限公司 | Image processing method, apparatus and system, and electronic device |
CN112365426B (zh) * | 2020-11-25 | 2022-06-07 | 兰州理工大学 | Infrared image edge enhancement method based on a dual-branch convolutional neural network |
CN112767259A (zh) * | 2020-12-29 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Image processing method and apparatus, computer device, and storage medium |
CN113034384A (zh) * | 2021-02-26 | 2021-06-25 | Oppo广东移动通信有限公司 | Video processing method and apparatus, electronic device, and storage medium |
CN113112418B (zh) * | 2021-03-26 | 2023-10-10 | 浙江理工大学 | Iterative enhancement method for low-illumination images |
CN113570499B (zh) * | 2021-07-21 | 2022-07-05 | 此刻启动(北京)智能科技有限公司 | Adaptive image color adjustment method, system, storage medium, and electronic device |
CN117437277B (zh) * | 2023-12-18 | 2024-03-12 | 聊城市至诚蔬果有限公司 | Method and device for detecting the liquid level in fruit and vegetable dehydration |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7288759B2 (en) * | 2004-09-09 | 2007-10-30 | Beth Israel Deaconess Medical Center, Inc. | Tissue-like phantoms |
CN101452575B (zh) * | 2008-12-12 | 2010-07-28 | 北京航空航天大学 | Image adaptive enhancement method based on a neural network |
CN104036474A (zh) * | 2014-06-12 | 2014-09-10 | 厦门美图之家科技有限公司 | Automatic adjustment method for image brightness and contrast |
CN106934426A (zh) * | 2015-12-29 | 2017-07-07 | 三星电子株式会社 | Method and device for a neural network based on image signal processing |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7215828B2 (en) * | 2002-02-13 | 2007-05-08 | Eastman Kodak Company | Method and system for determining image orientation |
US8098934B2 (en) * | 2006-06-29 | 2012-01-17 | Google Inc. | Using extracted image text |
CN105354589A (zh) * | 2015-10-08 | 2016-02-24 | 成都唐源电气有限责任公司 | Method and system for intelligently identifying insulator cracks in catenary images |
CN106169081B (zh) * | 2016-06-29 | 2019-07-05 | 北京工业大学 | Image classification and processing method based on different illuminance |
-
2017
- 2017-07-27 CN CN201780005603.2A patent/CN108513672A/zh active Pending
- 2017-07-27 WO PCT/CN2017/094650 patent/WO2019019086A1/zh active Application Filing
-
2020
- 2020-01-14 US US16/742,145 patent/US20200151858A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7288759B2 (en) * | 2004-09-09 | 2007-10-30 | Beth Israel Deaconess Medical Center, Inc. | Tissue-like phantoms |
CN101452575B (zh) * | 2008-12-12 | 2010-07-28 | 北京航空航天大学 | Image adaptive enhancement method based on a neural network |
CN104036474A (zh) * | 2014-06-12 | 2014-09-10 | 厦门美图之家科技有限公司 | Automatic adjustment method for image brightness and contrast |
CN106934426A (zh) * | 2015-12-29 | 2017-07-07 | 三星电子株式会社 | Method and device for a neural network based on image signal processing |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951199A (zh) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | Image fusion method and device |
WO2021029423A3 (en) * | 2019-08-15 | 2021-03-25 | Ricoh Company, Ltd. | Image processing method and apparatus and non-transitory computer-readable medium |
JP2022544665A (ja) * | 2022-10-20 | Image processing method, device, and non-transitory computer-readable medium |
JP7264310B2 (ja) | 2023-04-25 | Image processing method, device, and non-transitory computer-readable medium |
US12039698B2 (en) | 2019-08-15 | 2024-07-16 | Ricoh Company, Ltd. | Image enhancement model training data generation for panoramic images |
Also Published As
Publication number | Publication date |
---|---|
CN108513672A (zh) | 2018-09-07 |
US20200151858A1 (en) | 2020-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019019086A1 (zh) | Method, device and storage medium for enhancing image contrast | |
CN107529650B (zh) | Loop closure detection method and apparatus, and computer device | |
CN113065558A (zh) | Lightweight small-object detection method combining an attention mechanism | |
CN108230278B (zh) | Image raindrop removal method based on a generative adversarial network | |
WO2021164234A1 (zh) | Image processing method and image processing apparatus | |
CN111126412B (zh) | Image keypoint detection method based on a feature pyramid network | |
CN108492294B (zh) | Method and apparatus for evaluating the color harmony of an image | |
CN106462772A (zh) | Invariant-based dimensional reduction of object recognition features, systems and methods | |
CN109472193A (zh) | Face detection method and apparatus | |
WO2014201971A1 (zh) | Object detection method and apparatus with online training | |
CN111832592A (zh) | RGBD saliency detection method and related apparatus | |
CN110222718A (zh) | Image processing method and apparatus | |
CN114943773A (zh) | Camera calibration method, apparatus, device, and storage medium | |
CN111260687A (zh) | Aerial video object tracking method based on a semantic-aware network and correlation filtering | |
CN108875505A (zh) | Pedestrian re-identification method and apparatus based on a neural network | |
CN108197594A (zh) | Method and apparatus for determining pupil position | |
CN107194380A (zh) | Deep convolutional network and learning method for face recognition in complex scenes | |
CN116797504A (zh) | Image fusion method, electronic device, and storage medium | |
CN116546304A (zh) | Parameter configuration method, apparatus, device, storage medium, and product | |
CN112633113B (zh) | Cross-camera face liveness detection method and system | |
CN105488780A (zh) | Monocular vision ranging and tracking device for industrial production lines and tracking method thereof | |
CN111726592B (zh) | Method and apparatus for obtaining the architecture of an image signal processor | |
CN109801224A (zh) | Picture processing method and apparatus, server, and storage medium | |
JP2016148588A (ja) | Depth estimation model generation device and depth estimation device | |
CN117709409A (zh) | Neural network training method applied to image processing and related device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17919161 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17919161 Country of ref document: EP Kind code of ref document: A1 |