CN111383188A - Image processing method, system and terminal equipment - Google Patents
- Publication number
- CN111383188A (application number CN201811646727.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- size
- original image
- preset number
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The embodiment of the invention is suitable for the technical field of image processing and provides an image processing method, an image processing system and terminal equipment.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image processing method, an image processing system and terminal equipment.
Background
With the continuous popularization of camera devices such as single-lens reflex cameras, smartphones with photographing functions, and tablet computers, people can take photos anytime and anywhere to record the various scenes of daily life, which brings convenience and enjoyment.
However, in a dark-light environment, the image pickup apparatus obtains few photons, so the noise level is high and the signal-to-noise ratio is low; without increasing the exposure time, it is particularly difficult to obtain a sharp image.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, an image processing system, and a terminal device to solve the problem that, in a dark-light environment, the image capturing device obtains few photons, the noise level is high, and the signal-to-noise ratio is low, making it particularly difficult to obtain a clear image without increasing the exposure time.
A first aspect of an embodiment of the present invention provides an image processing method, including:
carrying out image preprocessing on an original image; the original image is an image acquired in an environment with brightness lower than a preset illuminance;
inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result;
and generating a target image according to the output result.
A second aspect of an embodiment of the present invention provides an image processing system, including:
the preprocessing module is used for preprocessing the original image; the original image is an image acquired in an environment with brightness lower than a preset illuminance;
the calculation module is used for inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result;
and the image processing module is used for generating a target image according to the output result.
A third aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above-described method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described method.
According to the embodiment of the invention, the original image acquired in the dark environment is subjected to image preprocessing, the original image subjected to image preprocessing is input into the deep neural network for forward calculation to obtain the output result, and then the target image is generated according to the output result, so that the original image acquired in the dark environment can be processed into the target image with high signal-to-noise ratio and low noise level, and the definition of the original image is effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a deep neural network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the variation of weighting of overlapping pixel points according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an image processing procedure according to an embodiment of the present invention;
FIG. 5 is an original image, an original image after image preprocessing, and a target image according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image processing system according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal device according to a third embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above drawings are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements that are not listed or that are inherent to such a process, method, article, or apparatus. Furthermore, the terms "first," "second," "third," etc. are used to distinguish between different objects, not to describe a particular order.
Example one
The present embodiment provides an image processing method, which can be applied to a device with an image capturing function, such as a camera, a mobile phone, a tablet computer, a personal digital assistant, or a monitoring device, and can also be applied to a computing device, such as a PC (personal computer) client or a server, that is communicatively connected to the image capturing device.
As shown in fig. 1, the image processing method provided by the present embodiment includes:
s101, performing image preprocessing on an original image; the original image is an image acquired under an environment with brightness lower than a preset illuminance.
In a specific application, the preset illuminance can be set according to actual needs. An environment with brightness lower than the preset illuminance generally includes a night environment, a weak-light environment, or a darkroom environment.
In one embodiment, the original image may also be an image acquired in an environment with a brightness lower than a preset illuminance and having an exposure time shorter than a preset exposure time.
In a specific application, the preset exposure time may be set according to actual needs, and the preset exposure time is generally less than or equal to the exposure time of the image pickup apparatus under a normal exposure condition.
In one embodiment, step S101 is preceded by:
and step S100, acquiring an original image under the environment that the brightness is lower than the preset illuminance.
In a specific application, step S100 may be performed by the image capturing apparatus itself, or by the communicatively connected computing apparatus controlling the image capturing apparatus.
In a particular application, image pre-processing may include color channel separation, black level correction, normalization, magnification, clamping, cropping, and the like.
In one embodiment, step S101 includes:
step S1011, performing color channel separation on the original image, and storing the original image as an image with a preset number of color channels according to the number and sequence of the color channels of the original image.
In a specific application, the value of the preset number is determined by the number of color channels of the photosensitive device of the image pickup apparatus. The image pickup apparatus capable of color imaging includes at least three color channels of R (red), G (green), and B (blue), and may further include a fourth color channel, where the fourth color channel may be any one of R, G, B color channels, and may also be a yellow or white color channel.
In one embodiment, the color channels of the original image are ordered R, G, B, G; the original image is then separated into 4 color-channel images and stored in the order R, G, B, G.
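As an illustrative sketch (not the patent's implementation), this channel-separation step can be written with NumPy, assuming a hypothetical Bayer-style mosaic with R at (0,0), G at (0,1), B at (1,0), and the second G at (1,1) in each 2×2 tile:

```python
import numpy as np

def separate_channels(raw):
    """Split a mosaic RAW image (H x W) into 4 color-channel planes
    (4 x H/2 x W/2), assuming R at (0,0), G at (0,1), B at (1,0),
    and G at (1,1) in each 2x2 tile (a hypothetical layout)."""
    return np.stack([
        raw[0::2, 0::2],  # R
        raw[0::2, 1::2],  # G
        raw[1::2, 0::2],  # B
        raw[1::2, 1::2],  # G (second green channel)
    ])

raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
planes = separate_channels(raw)   # shape (4, 2, 2)
```

The exact tile layout depends on the photosensitive device; only the strided slicing pattern is the point here.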
Step S1012, performing black level correction on the images of the preset number of color channels.
In a specific application, the black level correction is to subtract a black level value from an image of each color channel to correct the deviation of the pixel value of each pixel point in the image.
Step S1013, performing normalization processing on the black-level-corrected images of the preset number of color channels.
In a specific application, the normalization process normalizes the pixel value of each pixel in each black-level-corrected color channel image to [0, 1]. The normalization coefficient is defined as the maximum pixel value of the original image minus the black level value; the original image obtained by the image capturing device is usually 14-bit, so its maximum pixel value is 16383. The normalization operation divides each black-level-subtracted pixel value by the normalization coefficient.
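The black level correction (step S1012) and normalization (step S1013) can be sketched as follows; the black level value of 512 is an assumed, device-specific placeholder, while the 14-bit maximum of 16383 comes from the description:

```python
import numpy as np

BLACK_LEVEL = 512   # assumed placeholder; the actual value is device-specific
MAX_VALUE = 16383   # 14-bit RAW maximum, per the description

def black_level_and_normalize(planes):
    """Subtract the black level from every pixel, then divide by the
    normalization coefficient (maximum pixel value minus black level),
    mapping pixel values into [0, 1]."""
    coeff = MAX_VALUE - BLACK_LEVEL
    # Clipping negatives to 0 is a common safeguard (an assumption here).
    return np.maximum(planes.astype(np.float32) - BLACK_LEVEL, 0.0) / coeff

vals = black_level_and_normalize(np.array([512, 16383], dtype=np.uint16))
# a pure-black-level pixel maps to 0.0; a saturated pixel maps to 1.0
```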
Step S1014, performing amplification processing on the normalized images of the preset number of color channels.
In a specific application, the amplification processing multiplies the pixel value of each pixel in each normalized color channel image by an exposure coefficient, defined as the ratio of the desired long exposure time to the actual short exposure time. In the present embodiment, the short exposure time refers to an exposure time shorter than the preset exposure time; typically, with a short exposure time of 0.1 s in an environment below the preset illuminance and a desired long exposure time of 10 s, the exposure coefficient is 100.
Step S1015, performing clamping processing on the amplified images of the preset number of color channels.
In a specific application, the clamp processing is to clamp the pixel value of each pixel point in the image of each color channel after the amplification processing to [0,1], specifically, to set all the pixel values of the pixel points whose pixel values are greater than 1 in the image of each color channel after the amplification processing to 1, so as to prevent overexposure.
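A minimal sketch of the amplification (step S1014) and clamping (step S1015) steps, using the 0.1 s / 10 s example from the description:

```python
import numpy as np

def amplify_and_clamp(planes, short_exposure=0.1, long_exposure=10.0):
    """Multiply each normalized pixel value by the exposure coefficient
    (desired long exposure / actual short exposure, here 100), then clamp
    the result to [0, 1] so over-amplified pixels saturate at 1."""
    ratio = long_exposure / short_exposure
    return np.clip(planes * ratio, 0.0, 1.0)

out = amplify_and_clamp(np.array([0.001, 0.5]))
# 0.001 is amplified to 0.1; 0.5 would become 50 but is clamped to 1.0
```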
Step S1016, performing a cropping operation on the clamped images of the preset number of color channels.
In a specific application, the cropping operation cuts the preset number of side-by-side color channel images into equally sized image blocks, where adjacent image blocks must share a certain number of overlapping pixels to ensure a natural transition in the subsequent linear weighted fusion. The number of overlapping pixels can be set according to actual needs, for example, to 120 pixels.
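The overlapped cropping can be sketched along the width axis as follows; the block width and edge handling are simplified assumptions (a real pipeline would pad when the width does not tile evenly):

```python
import numpy as np

def crop_with_overlap(img, block_w, overlap=120):
    """Cut an image (C x H x W) into blocks of width block_w along the
    width axis, with `overlap` columns shared by adjacent blocks.
    Trailing columns that do not fill a block are dropped in this sketch."""
    blocks, x = [], 0
    step = block_w - overlap
    while x + block_w <= img.shape[-1]:
        blocks.append(img[..., x:x + block_w])
        if x + block_w == img.shape[-1]:
            break
        x += step
    return blocks

blocks = crop_with_overlap(np.zeros((4, 8, 1000)), block_w=560)
# two 560-wide blocks; columns 440-559 of the first reappear in the second
```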
Step S102, inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result.
In one embodiment, the deep neural network performs global optimization training on the long-exposure image and the short-exposure image after image preprocessing in advance until convergence.
In a specific application, the deep neural network can be trained in advance on a large number of preprocessed long-exposure and short-exposure images using a global optimization algorithm until convergence, so that, given an input original image, it outputs a clearly imaged target image that closely approximates the long-exposure image and is comparable to an image captured under normal illumination.
In one embodiment, the deep neural network is trained using an L1 cost function and the Adam optimizer, continually adjusting its parameters until convergence is achieved.
In a specific application, the short-exposure image refers to an image acquired under the same or equivalent conditions (i.e., the brightness is lower than the preset illuminance, and the exposure time is shorter than the preset exposure time) as the original image, and the long-exposure image refers to a clear image acquired under the normal illuminance environment and having an exposure time greater than or equal to the normal exposure time of the imaging device. The image pre-processing mode for the short exposure image is the same as the image pre-processing mode for the original image.
In one embodiment, the deep neural network comprises 1 feature extraction layer, a preset number of down-sampling layers, 1 intermediate processing layer, and (preset number + 2) up-sampling layers;
correspondingly, step S102 includes:
and S1021, inputting the original image after image preprocessing into the 1 feature extraction layer, and performing feature extraction.
In one embodiment, the feature extraction layer comprises 1 first convolution layer with a first step size and a convolution kernel of a first size, and 1 second convolution layer with a second step size and a convolution kernel of a second size.
In a specific application, the first step size is 2, the first size is 5 × 5, the second step size is 1, and the second size is 1 × 1; when the preset number is 4, the number of output channels of the first convolution layer is 32, and the number of output channels of the second convolution layer is 16.
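Under these parameters, the feature extraction layer can be sketched in PyTorch (the framework is an assumption; the patent does not name one). The ReLU activations reflect the later statement that all activation functions are ReLU:

```python
import torch
import torch.nn as nn

# First convolution: stride 2, 5x5 kernel, 4 -> 32 channels.
# Second convolution: stride 1, 1x1 kernel, 32 -> 16 channels.
feature_extraction = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=5, stride=2, padding=2),
    nn.ReLU(),
    nn.Conv2d(32, 16, kernel_size=1, stride=1),
    nn.ReLU(),
)

x = torch.zeros(1, 4, 64, 64)      # a preprocessed 4-channel image block
features = feature_extraction(x)   # spatial size halved, 16 channels
```

The padding of 2 on the 5×5 convolution is an assumption chosen so the stride alone controls the spatial size.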
Step S1022, down-sampling the feature-extracted original image sequentially through the preset number of down-sampling layers.
In one embodiment, each down-sampling layer includes 1 third convolution layer with a third step size and a convolution kernel of a third size, and 1 inverse residual block (inverted residual) whose expansion coefficient equals a preset coefficient.
In a specific application, the third step size is 2, and the third size is 3 × 3; when the preset number is 4, the number of output channels of the preset number of down-sampling layers is 32, 64, 128, 256 respectively.
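One down-sampling layer with these parameters might look as follows in PyTorch; the inner structure of the inverse residual block (1×1 expansion by 4, depthwise 3×3, 1×1 projection, residual add) follows the common MobileNetV2-style design and is an assumption, since the patent only names the block and its expansion coefficient:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Inverse residual block sketch: expand channels 4x with a 1x1
    convolution, apply a depthwise 3x3 convolution, project back with
    a 1x1 convolution, and add the input."""
    def __init__(self, ch, expand=4):
        super().__init__()
        hidden = ch * expand
        self.body = nn.Sequential(
            nn.Conv2d(ch, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden), nn.ReLU(),
            nn.Conv2d(hidden, ch, 1),
        )

    def forward(self, x):
        return x + self.body(x)

def downsampling_layer(in_ch, out_ch):
    # One 3x3 convolution with stride 2 followed by one inverse residual block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        InvertedResidual(out_ch),
    )

layer = downsampling_layer(16, 32)      # e.g. the first of the four layers
y = layer(torch.zeros(1, 16, 32, 32))   # spatial size halved, 32 channels
```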
Step S1023, performing inverse residual calculation on the down-sampled original image through the intermediate processing layer.
In one embodiment, the intermediate processing layer includes a preset number of inverse residual blocks whose expansion coefficient equals the preset coefficient.
In a specific application, the preset coefficient is 4; when the preset number is 4, the number of output channels of each of these inverse residual blocks is 256.
Step S1024, up-sampling the original image after the inverse residual calculation sequentially through the (preset number + 2) up-sampling layers to obtain an output result.
In one embodiment, the first preset-number up-sampling layers are constructed based on a bilinear interpolation algorithm and short (skip) connections, and each includes 1 fourth convolution layer with a fourth step size and a convolution kernel of a fourth size, and 1 inverse residual block whose expansion coefficient equals the preset coefficient;
the penultimate up-sampling layer comprises 1 first deconvolution layer with a fifth step size and a convolution kernel of a fifth size, 1 fifth convolution layer with a sixth step size and a convolution kernel of a sixth size, and 1 sixth convolution layer, containing no activation function, with a seventh step size and a convolution kernel of a seventh size;
the last up-sampling layer comprises 1 second deconvolution layer with an eighth step size and a convolution kernel of an eighth size.
In a specific application, the fourth step size is 1, the fourth size is 1 × 1, the fifth step size is 2, the fifth size is 2 × 2, the sixth step size is 1, the sixth size is 3 × 3, the seventh step size is 1, the seventh size is 1 × 1, the eighth step size is 2, and the eighth size is 2 × 2; when the preset number is 4, the number of output channels of the previous preset number of the upsampling layers is 128, 64, 32 and 16 respectively; the number of output channels of the first deconvolution layer is 16, the number of output channels of the fifth convolution layer is 16, and the number of output channels of the sixth convolution layer is 12; the number of output channels of the second deconvolution layer is 3.
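The two final up-sampling layers under these parameters can be sketched in PyTorch (an assumed framework); whether a ReLU follows each of the first two layers is an assumption, but the 1×1 convolution is left without an activation as the text specifies:

```python
import torch
import torch.nn as nn

# Penultimate up-sampling layer: 2x2 deconvolution with stride 2
# (16 channels), a 3x3 convolution (16 channels), and a 1x1
# convolution without an activation function (12 channels).
penultimate = nn.Sequential(
    nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
    nn.Conv2d(16, 12, kernel_size=1, stride=1),   # no activation
)

# Last up-sampling layer: a single 2x2 deconvolution with stride 2,
# producing the 3 output channels.
last = nn.ConvTranspose2d(12, 3, kernel_size=2, stride=2)

x = torch.zeros(1, 16, 16, 16)
out = last(penultimate(x))   # spatial size quadrupled, 3 channels
```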
In one embodiment, the short connection form of the up-sampling layers with the preset number is constructed in the following manner:
and adding the output result of the fourth convolution layer of each up-sampling layer in the up-sampling layers with the preset number with the output result with the same number of output channels in the feature extraction layer or the down-sampling layer.
In a specific application, when the preset number is 4, the output results of the first 4 up-sampling layers (with 128, 64, 32, and 16 output channels) are added in one-to-one correspondence with the output results of the third convolution layers with 128, 64, and 32 output channels and of the second convolution layer with 16 output channels.
In one embodiment, all activation functions in the deep neural network are ReLU activation functions.
As shown in fig. 2, a schematic diagram of a deep neural network is exemplarily shown; in the figure, the number on the network structure of each layer indicates the number of output channels of the layer, and the arrow direction indicates the transmission direction of image data.
Step S103, generating a target image according to the output result.
In one embodiment, step S103 includes:
and performing clamping processing, stretching processing and splicing processing on the output result to generate a target image.
In specific application, the clamp processing is to clamp the pixel value of each pixel point in the image of each color channel of the image output by the deep neural network to [0,1], specifically, to set all the pixel values of the pixel points whose pixel values are greater than 1 in the image of each color channel in the image to 1, so as to prevent overexposure.
In a specific application, the stretching process is to multiply the output result after the clamping process by a coefficient, which may be 255.
In a specific application, the stitching process can be implemented using a linear weighted fusion method.
In one embodiment, the model of the linear weighted fusion algorithm is as follows:
Wa=1-Wb;
Xmerge=Xa*Wa+Xb*Wb;
where X1 is the left boundary of the overlap between image a and image b, X2 is the right boundary of the overlap, and X is a column position within the overlap; Wa is the fusion weight of image a at position X, and Wb is the fusion weight of image b at position X (the weights vary linearly across the overlap, as shown in Fig. 3); Xa is the data of image a at position X, Xb is the data of image b at position X, and Xmerge is the fused image data at position X; [0, X2] is the extent of image a, [X1, X3] is the extent of image b, [X1, X2] is the overlapping region of image a and image b, and [0, X3] is the extent of the fused image.
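A sketch of the linear weighted fusion for two blocks overlapping along the width axis; the exact linear ramp is an implementation assumption consistent with Wa = 1 − Wb and the weight variation shown in Fig. 3:

```python
import numpy as np

def fuse_overlap(a, b, overlap):
    """Blend two image blocks (... x W) that share `overlap` columns:
    the weight of block a falls linearly from 1 to 0 across the overlap
    while the weight of block b rises from 0 to 1 (Wa = 1 - Wb)."""
    wb = np.linspace(0.0, 1.0, overlap)   # fusion weight of block b
    wa = 1.0 - wb                         # fusion weight of block a
    blended = a[..., -overlap:] * wa + b[..., :overlap] * wb
    return np.concatenate(
        [a[..., :-overlap], blended, b[..., overlap:]], axis=-1)

merged = fuse_overlap(np.ones((2, 8)), np.zeros((2, 8)), overlap=4)
# width 8 + 8 - 4 = 12; values ramp from 1 to 0 across the overlap
```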
Fig. 3 schematically shows a diagram of the change of the weighting weights for overlapping pixel points.
Fig. 4 exemplarily shows the process of generating a target image from an original image containing four color channels, through image preprocessing, the deep neural network, clamping processing, stretching processing, and stitching processing.
Fig. 5 exemplarily shows an original image, an original image after image preprocessing, and a target image. As can be seen from fig. 5, the image processing method provided by the embodiment can effectively improve the signal-to-noise ratio and the definition of the short-exposure image acquired in the dark environment.
According to the method, the original image acquired in the dark environment is subjected to image preprocessing, the original image subjected to image preprocessing is input into the deep neural network to be subjected to forward calculation, an output result is obtained, the target image is generated according to the output result, the original image acquired in the dark environment can be processed into the target image with high signal-to-noise ratio and low noise level, and the definition of the original image is effectively improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Example two
As shown in fig. 6, the present embodiment provides an image processing system 5 for executing the method steps in the first embodiment. The system may be a software program system in an image capturing apparatus or a computing apparatus, or a software program system that runs when a computer program is executed by a processor of an image capturing apparatus or a computing apparatus.
In a specific application, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
As shown in fig. 6, the image processing system 5 includes:
a preprocessing module 501, configured to perform image preprocessing on an original image; the original image is an image acquired in an environment with brightness lower than a preset illuminance;
a calculating module 502, configured to input the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result;
and an image processing module 503, configured to generate a target image according to the output result. In an embodiment, the image processing module is specifically configured to perform clamping, stretching, and stitching on the output result to generate a target image.
In one embodiment, the image processing system 5 further includes:
and the image acquisition module is used for acquiring an original image under the environment that the brightness is lower than the preset illuminance.
In a specific application, the modules may be implemented by independent processors, or may be integrated together into one processor.
According to the method, the original image acquired in the dark environment is subjected to image preprocessing, the original image subjected to image preprocessing is input into the deep neural network to be subjected to forward calculation, an output result is obtained, the target image is generated according to the output result, the original image acquired in the dark environment can be processed into the target image with high signal-to-noise ratio and low noise level, and the definition of the original image is effectively improved.
EXAMPLE III
As shown in fig. 7, the present embodiment provides a terminal device 6, which includes: a processor 60, a memory 61, and a computer program 62, such as an image processing program, stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the image processing method embodiments described above, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules in the above system embodiment, such as the functions of modules 501 to 503 shown in fig. 6.
Illustratively, the computer program 62 may be partitioned into one or more modules that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a preprocessing module, a computing module, and an image processing module, each of which functions specifically as follows:
the preprocessing module is used for preprocessing the original image; the original image is an image acquired in an environment with brightness lower than a preset illuminance;
the calculation module is used for inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result; and the image processing module is used for generating a target image according to the output result.
The terminal device 6 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 60, a memory 61. Those skilled in the art will appreciate that fig. 7 is merely an example of a terminal device 6 and does not constitute a limitation of terminal device 6 and may include more or fewer components than shown, or some components may be combined, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be distributed among different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.
Claims (10)
1. An image processing method, comprising:
performing image preprocessing on an original image, wherein the original image is an image acquired in an environment with a brightness lower than a preset illuminance;
inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result;
and generating a target image according to the output result.
2. The image processing method of claim 1, wherein the deep neural network comprises one feature extraction layer, a preset number of down-sampling layers, one intermediate processing layer, and (the preset number + 2) up-sampling layers;
wherein inputting the original image after image preprocessing into the deep neural network for forward calculation to obtain the output result comprises:
inputting the original image after image preprocessing into the feature extraction layer for feature extraction;
sequentially down-sampling the feature-extracted original image through the preset number of down-sampling layers;
performing inverted residual calculation on the down-sampled original image through the intermediate processing layer;
and sequentially up-sampling the original image after the inverted residual calculation through the (preset number + 2) up-sampling layers to obtain an output result.
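The data flow of claims 2 to 5 can be sketched end to end. The numpy sketch below assumes a preset number of 4 and the channel counts recited in claim 5; `conv_stub`, `downsample`, and `upsample` are illustrative stand-ins (random 1x1 weights, stride-2 subsampling, nearest-neighbour upsampling) for the learned convolutions, inverted residual blocks, bilinear interpolation, and deconvolutions, so only the channel and shape flow is meaningful:

```python
import numpy as np

def conv_stub(x, c_out):
    # stand-in for a learned convolution: fixes the channel count only
    rng = np.random.default_rng(0)
    w = rng.standard_normal((c_out, x.shape[0])) * 0.1
    return np.einsum('oc,chw->ohw', w, x)

def downsample(x, c_out):
    # stand-in for a stride-2 convolution plus inverted residual block
    return conv_stub(x[:, ::2, ::2], c_out)

def upsample(x, c_out):
    # nearest-neighbour stand-in for bilinear interpolation / deconvolution
    x = np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)
    return conv_stub(x, c_out)

def forward(x):
    # feature extraction layer: two convolutions (32 then 16 channels)
    x = conv_stub(conv_stub(x, 32), 16)
    skips = [x]
    for c in (32, 64, 128, 256):          # preset number = 4 down-sampling layers
        x = downsample(x, c)
        skips.append(x)
    # intermediate processing layer: inverted residual blocks keep 256 channels
    for c in (128, 64, 32, 16):           # first 4 up-sampling layers
        skip = next(s for s in skips if s.shape[0] == c)
        x = upsample(x, c) + skip         # short connection (claim 4)
    x = upsample(x, 16)                   # next layer: deconvolution...
    x = conv_stub(conv_stub(x, 16), 12)   # ...then two convolutions
    return upsample(x, 3)                 # last layer: deconvolution to RGB
```

With a 4-channel 16x16 packed input the sketch produces a 3-channel 64x64 output; the actual output resolution depends on the symbolic strides of the two deconvolution layers, which the claims leave unspecified.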
3. The image processing method of claim 2, wherein the feature extraction layer comprises one first convolution layer with a first stride and a convolution kernel of a first size, and one second convolution layer with a second stride and a convolution kernel of a second size;
each down-sampling layer comprises one third convolution layer with a third stride and a convolution kernel of a third size, and one inverted residual block whose expansion coefficient is a preset coefficient;
the intermediate processing layer comprises a preset number of inverted residual blocks whose expansion coefficients are the preset coefficient;
the first preset number of up-sampling layers are all constructed based on a bilinear interpolation algorithm and short connections, and each comprises one fourth convolution layer with a fourth stride and a convolution kernel of a fourth size, and one inverted residual block whose expansion coefficient is the preset coefficient;
the second-to-last up-sampling layer comprises one first deconvolution layer with a fifth stride and a convolution kernel of a fifth size, one fifth convolution layer with a sixth stride and a convolution kernel of a sixth size, and one sixth convolution layer with a seventh stride and a convolution kernel of a seventh size that contains no activation function;
the last up-sampling layer comprises one second deconvolution layer with an eighth stride and a convolution kernel of an eighth size.
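The "inverted residual block with an expansion coefficient" recited above matches the well-known MobileNetV2-style block: a 1x1 expansion, a depthwise convolution, and a linear 1x1 projection with a shortcut add. The numpy sketch below is illustrative only: the weights are random, a box filter stands in for the learned depthwise convolution, and the expansion ratio of 6 is an assumed value, not one from the claims:

```python
import numpy as np

def pointwise(x, w):
    # 1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)
    return np.einsum('oc,chw->ohw', w, x)

def depthwise3x3(x):
    # stand-in for a learned depthwise 3x3 convolution: a box filter
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += p[:, dy:dy + x.shape[1], dx:dx + x.shape[2]]
    return out / 9.0

def inverted_residual(x, expand_ratio=6):
    # expand -> depthwise -> linear projection, with a shortcut add
    c = x.shape[0]
    rng = np.random.default_rng(0)
    w_expand = rng.standard_normal((c * expand_ratio, c)) * 0.1
    w_project = rng.standard_normal((c, c * expand_ratio)) * 0.1
    h = np.maximum(pointwise(x, w_expand), 0.0)  # ReLU after expansion
    h = depthwise3x3(h)                          # per-channel spatial filtering
    h = pointwise(h, w_project)                  # linear bottleneck, no activation
    return x + h                                 # residual shortcut
```

The linear (activation-free) projection and the shortcut are what distinguish this block from a plain residual block; the block preserves the input channel count, consistent with the fixed 256 channels of the intermediate processing layer in claim 5.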
4. The image processing method of claim 3, wherein the short connections of the first preset number of up-sampling layers are constructed as follows:
the output of the fourth convolution layer of each of the first preset number of up-sampling layers is added to the output, in the feature extraction layer or the down-sampling layers, that has the same number of output channels.
5. The image processing method according to claim 3 or 4, wherein the preset number is 4;
the number of output channels of the first convolution layer is 32, and the number of output channels of the second convolution layer is 16;
the number of output channels of the preset number of downsampling layers is 32, 64, 128 and 256 respectively;
the number of output channels of each of the preset number of inverted residual blocks whose expansion coefficients are the preset coefficient is 256;
the numbers of output channels of the first preset number of up-sampling layers are 128, 64, 32, and 16, respectively;
the number of output channels of the first deconvolution layer is 16, the number of output channels of the fifth convolution layer is 16, and the number of output channels of the sixth convolution layer is 12;
the number of output channels of the second deconvolution layer is 3.
6. The image processing method according to any one of claims 1 to 4, wherein the image preprocessing of the original image comprises:
performing color channel separation on the original image, and storing the original image as images of a preset number of color channels according to the number and order of the color channels of the original image;
performing black level correction on the images of the preset number of color channels;
performing normalization processing on the black-level-corrected images of the preset number of color channels;
performing amplification processing on the normalized images of the preset number of color channels;
performing clamping processing on the amplified images of the preset number of color channels;
and cropping the clamped images of the preset number of color channels.
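For a Bayer-mosaic sensor, the six preprocessing steps of claim 6 can be sketched as below. The RGGB channel order, the black and white levels, the amplification ratio, and the crop to a multiple of 32 are illustrative assumptions, not values taken from the claims:

```python
import numpy as np

def preprocess_raw(raw, black_level=512, white_level=16383, ratio=100.0):
    # raw: (H, W) Bayer mosaic in sensor counts; RGGB pattern assumed
    # 1. channel separation: pack the 2x2 Bayer pattern into 4 planes
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    packed = np.stack([r, g1, g2, b]).astype(np.float64)
    # 2. black level correction
    packed = packed - black_level
    # 3. normalization to [0, 1]
    packed = packed / (white_level - black_level)
    # 4. amplification by the exposure ratio
    packed = packed * ratio
    # 5. clamping
    packed = np.clip(packed, 0.0, 1.0)
    # 6. cropping so repeated halving in the network stays integral
    _, h, w = packed.shape
    return packed[:, : h - h % 32, : w - w % 32]
```

The result is the 4-channel, half-resolution tensor that the deep neural network takes as input.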
7. The image processing method according to any one of claims 1 to 4, wherein the deep neural network is trained in advance, with global optimization, on long-exposure images and image-preprocessed short-exposure images until convergence.
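The claims do not name the training objective; pipelines of this kind (image-preprocessed short-exposure raw as input, long-exposure reference as target) commonly minimize an L1 distance. A minimal sketch of that assumed objective, with the model and optimizer left as placeholders:

```python
import numpy as np

def l1_loss(pred, target):
    # mean absolute error between the network output and the
    # long-exposure reference image
    return float(np.mean(np.abs(pred - target)))

# Sketch of the optimization loop (model / optimizer / training_pairs
# are hypothetical placeholders, not part of the patent text):
#   for short_raw, long_rgb in training_pairs:
#       pred = model(preprocess_raw(short_raw))
#       loss = l1_loss(pred, long_rgb)
#       optimizer.step(loss)   # repeat until convergence
```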
8. The image processing method according to any one of claims 1 to 4, wherein generating a target image from the output result comprises:
and performing clamping processing, stretching processing and splicing processing on the output result to generate a target image.
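A minimal sketch of claim 8's post-processing, assuming "stretching" means scaling the clamped output to the 8-bit range and "splicing" means recombining the channel planes into an HxWx3 image (both interpretations are assumptions):

```python
import numpy as np

def postprocess(output):
    # output: (3, H, W) network output, roughly in [0, 1]
    out = np.clip(output, 0.0, 1.0)         # clamping
    out = (out * 255.0).astype(np.uint8)    # stretching to the 8-bit range
    return np.transpose(out, (1, 2, 0))     # splicing channels into HxWx3
```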
9. An image processing system, comprising:
the preprocessing module is used for preprocessing the original image; the original image is an image acquired in an environment with brightness lower than a preset illuminance;
the calculation module is used for inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result;
and the image processing module is used for generating a target image according to the output result.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 8 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811646727.1A CN111383188B (en) | 2018-12-29 | 2018-12-29 | Image processing method, system and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383188A true CN111383188A (en) | 2020-07-07 |
CN111383188B CN111383188B (en) | 2023-07-14 |
Family
ID=71218361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811646727.1A Active CN111383188B (en) | 2018-12-29 | 2018-12-29 | Image processing method, system and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383188B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017084222A1 (en) * | 2015-11-22 | 2017-05-26 | 南方医科大学 | Convolutional neural network-based method for processing x-ray chest radiograph bone suppression |
CN108765319A (en) * | 2018-05-09 | 2018-11-06 | 大连理工大学 | A kind of image de-noising method based on generation confrontation network |
CN108965731A (en) * | 2018-08-22 | 2018-12-07 | Oppo广东移动通信有限公司 | A kind of half-light image processing method and device, terminal, storage medium |
CN108960257A (en) * | 2018-07-06 | 2018-12-07 | 东北大学 | A kind of diabetic retinopathy grade stage division based on deep learning |
CN108986050A (en) * | 2018-07-20 | 2018-12-11 | 北京航空航天大学 | A kind of image and video enhancement method based on multiple-limb convolutional neural networks |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349102A (en) * | 2019-06-27 | 2019-10-18 | 腾讯科技(深圳)有限公司 | Processing method, the processing unit and electronic equipment of image beautification of image beautification |
CN112104818A (en) * | 2020-08-28 | 2020-12-18 | 稿定(厦门)科技有限公司 | RGB channel separation method and system |
CN112104818B (en) * | 2020-08-28 | 2022-07-01 | 稿定(厦门)科技有限公司 | RGB channel separation method and system |
WO2022133874A1 (en) * | 2020-12-24 | 2022-06-30 | 京东方科技集团股份有限公司 | Image processing method and device and computer-readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111383188B (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11107205B2 (en) | Techniques for convolutional neural network-based multi-exposure fusion of multiple image frames and for deblurring multiple image frames | |
CN111194458B (en) | Image signal processor for processing images | |
CN107403421B (en) | Image defogging method, storage medium and terminal equipment | |
US10708525B2 (en) | Systems and methods for processing low light images | |
CN113168684B (en) | Method, system and computer readable medium for improving quality of low brightness images | |
US10270988B2 (en) | Method for generating high-dynamic range image, camera device, terminal and imaging method | |
CN111402258A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN112602088B (en) | Method, system and computer readable medium for improving quality of low light images | |
CN110675336A (en) | Low-illumination image enhancement method and device | |
CN111383188B (en) | Image processing method, system and terminal equipment | |
CN109214996B (en) | Image processing method and device | |
CN111372006B (en) | High dynamic range imaging method and system for mobile terminal | |
Nam et al. | Modelling the scene dependent imaging in cameras with a deep neural network | |
CN113168673A (en) | Image processing method and device and electronic equipment | |
CN113052768B (en) | Method, terminal and computer readable storage medium for processing image | |
CN113674193A (en) | Image fusion method, electronic device and storage medium | |
CN110717864B (en) | Image enhancement method, device, terminal equipment and computer readable medium | |
CN115147304A (en) | Image fusion method and device, electronic equipment, storage medium and product | |
CN111953888B (en) | Dim light imaging method and device, computer readable storage medium and terminal equipment | |
CN110838088B (en) | Multi-frame noise reduction method and device based on deep learning and terminal equipment | |
CN111724312A (en) | Method and terminal for processing image | |
CN116563190B (en) | Image processing method, device, computer equipment and computer readable storage medium | |
CN113298740A (en) | Image enhancement method and device, terminal equipment and storage medium | |
CN110971837B (en) | ConvNet-based dim light image processing method and terminal equipment | |
CN111383171B (en) | Picture processing method, system and terminal equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province; Applicant after: TCL Technology Group Co.,Ltd. Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District; Applicant before: TCL Corp. |
GR01 | Patent grant | |