CN111383188B - Image processing method, system and terminal equipment - Google Patents
- Publication number: CN111383188B (application CN201811646727.1A)
- Authority: CN (China)
- Prior art keywords: image, layer, original image, preset number, preset
- Legal status: Active (the listed status is an assumption, not a legal conclusion)
Classifications
- G06T5/00 — Image enhancement or restoration; G06T5/70 — Denoising; Smoothing
- G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/10004 — Still image; Photographic image
- G06T2207/20084 — Artificial neural networks [ANN]
- Y02T10/40 — Engine management systems (Y02: climate change mitigation technologies)
Abstract
Embodiments of the invention relate to the technical field of image processing and provide an image processing method, an image processing system, and a terminal device.
Description
Technical Field
The present invention belongs to the technical field of image processing, and in particular, relates to an image processing method, an image processing system, and a terminal device.
Background
With the growing popularity of camera devices such as single-lens reflex cameras, smartphones with photographing functions, and tablet computers, people can take photos anytime and anywhere, recording all kinds of scenes in life, which brings convenience and enjoyment to daily life.
However, in a dark environment the image capturing apparatus receives few photons, so the noise level is high and the signal-to-noise ratio is low; without increasing the exposure time, it is particularly difficult to obtain a clear image.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, system, and terminal device to address the problem that, in a dark environment, the image capturing apparatus receives few photons, the noise level is high, and the signal-to-noise ratio is low, making it particularly difficult to obtain a clear image without increasing the exposure time.
A first aspect of an embodiment of the present invention provides an image processing method, including:
performing image preprocessing on an original image; the original image is an image acquired in an environment with brightness lower than preset illuminance;
inputting the original image subjected to image preprocessing into a deep neural network for forward calculation to obtain an output result;
and generating a target image according to the output result.
A second aspect of an embodiment of the present invention provides an image processing system, including:
the preprocessing module is used for preprocessing the original image; the original image is an image acquired in an environment with brightness lower than preset illuminance;
the computing module is used for inputting the original image subjected to image preprocessing into a deep neural network for forward computation to obtain an output result;
and the image processing module is used for generating a target image according to the output result.
A third aspect of the embodiments of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
In the embodiments of the invention, an original image acquired in a dark environment is preprocessed, the preprocessed image is fed into a deep neural network for forward calculation to obtain an output result, and a target image is generated from that result. An original image acquired in a dark environment can thus be processed into a target image with a high signal-to-noise ratio and a low noise level, effectively improving its definition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a deep neural network according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a change in weighting of overlapping pixels according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image processing procedure according to a first embodiment of the present invention;
FIG. 5 shows an original image, an original image after image preprocessing, and a target image according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an image processing system according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal device according to a third embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions of its embodiments are described clearly below with reference to the accompanying drawings. Apparently, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the protection scope of the present invention.
The term "comprising" in the description, claims, and drawings of the invention, and any variants thereof, is intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally include additional steps or elements not listed or inherent to it. Furthermore, the terms "first," "second," "third," etc. are used to distinguish different objects, not to describe a particular order.
Example 1
The present embodiment provides an image processing method applicable to devices with an image capture function, such as cameras, mobile phones, tablet computers, personal digital assistants, and monitoring devices, and also to computing devices communicatively connected to such a device, such as a PC (personal computer) client or a server.
As shown in fig. 1, the image processing method provided in this embodiment includes:
s101, performing image preprocessing on an original image; the original image is an image acquired in an environment with brightness lower than preset illuminance.
In a specific application, the preset illuminance can be set according to actual needs. Environments with a brightness below the preset illuminance typically include a night time environment, a low light environment, or a darkroom environment.
In one embodiment, the original image may also be an image acquired in an environment with a brightness below a preset illuminance for an exposure time shorter than the preset exposure time.
In a specific application, the preset exposure time may be set according to actual needs, and the preset exposure time is generally smaller than or equal to the exposure time of the image capturing apparatus under the normal exposure condition.
In one embodiment, step S101 is preceded by:
step S100, acquiring an original image in an environment with the brightness lower than the preset illuminance.
In a specific application, step S100 may be performed by the image capturing apparatus, or may be performed by a computing apparatus communicatively connected to the image capturing apparatus controlling the image capturing apparatus.
In particular applications, the image preprocessing may include color channel separation, black level correction, normalization processing, magnification processing, clipping processing, and the like.
In one embodiment, step S101 includes:
step S1011, performing color channel separation on an original image, and storing the original image as images with a preset number of color channels according to the number and the sequence of the color channels of the original image.
In a specific application, the value of the preset number is determined by the number of color channels of the photosensitive device of the image capturing apparatus. An image capturing apparatus capable of color imaging has at least the three color channels R (red), G (green), and B (blue); it may further include a fourth color channel, which may repeat any one of the R, G, B channels or be a yellow or white channel.
In one embodiment, the original image is separated into images of 4 color channels in the order R, G, B, G and saved as 4 color channel images in that order.
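As an illustration of this channel-separation step, a Bayer-mosaic raw frame can be split into four half-resolution planes. The RGGB 2×2 cell layout assumed below is hypothetical; the actual mosaic order depends on the sensor:

```python
import numpy as np

def separate_channels(raw):
    """Split a Bayer-mosaic raw image (H, W) into 4 half-resolution
    channel planes stacked in R, G, B, G order.
    The RGGB cell layout assumed here is illustrative only."""
    r  = raw[0::2, 0::2]   # red sites
    g1 = raw[0::2, 1::2]   # first green sites
    b  = raw[1::2, 1::2]   # blue sites
    g2 = raw[1::2, 0::2]   # second green sites
    return np.stack([r, g1, b, g2], axis=0)  # shape (4, H/2, W/2)
```

Each plane keeps the raw pixel values of one color site, halving the spatial resolution per side, which is why the network's output can end up twice the input size per side after upsampling.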
Step S1012, performing black level correction on the images of the preset number of color channels.
In a specific application, the black level correction is to subtract the black level value from the image of each color channel to correct the deviation of the pixel value of each pixel point in the image.
Step S1013, performing normalization processing on the images of the preset number of color channels after the black level correction.
In a specific application, the normalization process normalizes the pixel value of each pixel in each color channel image (after black level subtraction) to between 0 and 1. The normalization coefficient is defined as the maximum pixel value of the original image minus the black level value; the original image produced by the image capturing apparatus is typically 14-bit, so its maximum pixel value is 16383. The normalization operation divides each black-level-corrected pixel value by the normalization coefficient.
Step S1014, performing an amplifying process on the normalized images of the preset number of color channels.
In a specific application, the amplification process multiplies the pixel value of each pixel in each normalized color channel image by an exposure coefficient, which is the ratio between the desired long exposure time and the short exposure time. In this embodiment, the short exposure time is an exposure time shorter than the preset exposure time, typically 0.1 s in an environment whose brightness is below the preset illuminance; if the desired long exposure time is 10 s, the exposure coefficient is 100.
Step S1015, performing clipping processing on the amplified images with the preset number of color channels.
In a specific application, the clamping process restricts the pixel value of each pixel in each amplified color channel image to [0, 1]; specifically, any pixel whose value exceeds 1 is set to 1 to prevent overexposure.
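Steps S1012–S1015 can be sketched as one numerical pipeline. The black level of 512 below is a hypothetical value for a 14-bit sensor (real devices report their own); 16383 is the 14-bit maximum stated above:

```python
import numpy as np

def preprocess(channels, black_level=512, exposure_ratio=100):
    """Black level correction, normalization, amplification and clamping
    (a sketch of steps S1012-S1015). black_level=512 is hypothetical;
    16383 is the maximum pixel value of a 14-bit raw image."""
    norm = 16383 - black_level                      # normalization coefficient
    x = channels.astype(np.float64) - black_level   # black level correction
    x = x / norm                                    # normalize to [0, 1]
    x = x * exposure_ratio                          # amplify the short exposure
    return np.clip(x, 0.0, 1.0)                     # clamp; values > 1 become 1
```

Note that the exposure coefficient of 100 corresponds to the 0.1 s short / 10 s long exposure example given above.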
Step S1016, performing clipping operation on the clamped images of the preset number of color channels.
In a specific application, the cropping operation cuts the preset number of side-by-side color channel images into a preset number of equal image blocks, with adjacent blocks overlapping by a certain number of pixels so that the subsequent linear weighted fusion produces a natural transition between blocks. The number of overlapping pixels can be set as needed, for example to 120 pixels.
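A sketch of this cropping step, cutting a channel stack into equal vertical blocks that share columns with their neighbours. The tile count and overlap below are illustrative; in this convention adjacent blocks share 2×`overlap` columns:

```python
import numpy as np

def tile_with_overlap(img, n_tiles=4, overlap=4):
    """Cut a (C, H, W) array into n_tiles equal vertical blocks,
    extending each block by `overlap` columns into its neighbours
    (illustrative convention; the embodiment mentions 120 shared pixels)."""
    _, _, w = img.shape
    step = w // n_tiles                       # width of each equal block
    tiles = []
    for k in range(n_tiles):
        x0 = max(k * step - overlap, 0)       # extend left, except first tile
        x1 = min((k + 1) * step + overlap, w) # extend right, except last tile
        tiles.append(img[:, :, x0:x1])
    return tiles
```

The shared columns are what the linear weighted fusion of step S103 later blends back together.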
Step S102, inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result.
In one embodiment, the deep neural network is globally optimized trained in advance with long-exposure images and short-exposure images after image preprocessing until convergence.
In a specific application, the deep neural network can be trained in advance on a large number of long exposure images and image-preprocessed short exposure images, using a global optimization algorithm until convergence, so that after the original image is input, the network outputs a clearly imaged target image comparable to an image shot under normal illumination and close to the long exposure image.
In one embodiment, the deep neural network is trained with an L1 cost function and an Adam optimizer, continuously adjusting its parameters until convergence.
In a specific application, a short exposure image refers to an image obtained under the same or equivalent conditions as an original image (i.e., brightness is lower than a preset illuminance, and exposure time is shorter than a preset exposure time), and a long exposure image refers to a clear image obtained under a normal illuminance environment and having an exposure time greater than or equal to a normal exposure time of an image capturing apparatus. The image preprocessing method for the short exposure image is the same as that for the original image.
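The L1-cost-plus-Adam setup mentioned above can be illustrated with a minimal numpy sketch of the Adam update rule applied to the L1 subgradient. All hyperparameter values below are the usual Adam defaults (scaled learning rate for the toy example), not values stated in the patent:

```python
import numpy as np

def l1_loss_grad(pred, target):
    """L1 cost and its subgradient with respect to pred."""
    return np.mean(np.abs(pred - target)), np.sign(pred - target) / pred.size

def adam_step(param, grad, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; state = (m, v, t) carries the running moments."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad          # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# toy usage: drive a scalar prediction toward a target under L1 loss
param, state = np.array(0.0), (0.0, 0.0, 0)
for _ in range(300):
    _, g = l1_loss_grad(param, np.array(1.0))
    param, state = adam_step(param, g, state)
```

In actual training the parameter would be the network's weight tensors and the prediction its output on a preprocessed short exposure image, with the long exposure image as the target.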
In one embodiment, the deep neural network comprises 1 feature extraction layer, a preset number of downsampling layers, 1 intermediate processing layer, and (the preset number + 2) upsampling layers;
correspondingly, step S102 includes:
step S1021, inputting the original image after image preprocessing into the 1 feature extraction layer for feature extraction.
In one embodiment, the 1 feature extraction layer includes 1 first convolution layer with a first step size and a convolution kernel of a first size, and 1 second convolution layer with a second step size and a convolution kernel of a second size.
In a specific application, the first step size is 2, the first size is 5×5, the second step size is 1, and the second size is 1×1; when the preset number = 4, the number of output channels of the first convolution layer is 32 and that of the second convolution layer is 16.
Step S1022, sequentially downsampling the original image after feature extraction through a preset number of downsampling layers.
In one embodiment, each downsampling layer includes 1 third convolution layer with a third step size and a convolution kernel of a third size, and 1 inverted residual block (Inverted residual) with an expansion coefficient equal to a preset coefficient.
In a specific application, the third step size is 2 and the third size is 3×3; when the preset number = 4, the numbers of output channels of the preset number of downsampling layers are 32, 64, 128, and 256, respectively.
Step S1023, performing inverted residual calculation on the downsampled image through the 1 intermediate processing layer.
In one embodiment, the 1 intermediate processing layer includes a preset number of inverted residual blocks whose expansion coefficients equal the preset coefficient.
In a specific application, the preset coefficient is 4; when the preset number = 4, the number of output channels of each of these inverted residual blocks is 256.
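An inverted residual block of the kind referenced above (expand channels with a 1×1 convolution, filter depthwise, project back linearly, add a shortcut) can be sketched in numpy. The ReLU placement follows the usual inverted-residual design and is an assumption here, as are the unit/zero test weights:

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))

def depthwise3x3(x, w):
    """Per-channel 3x3 convolution, stride 1, zero padding 1."""
    c, h, wid = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c, h, wid), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j][:, None, None] * xp[:, i:i + h, j:j + wid]
    return out

def inverted_residual(x, w_expand, w_dw, w_project):
    """Expand C -> 4C, filter depthwise, project 4C -> C, add shortcut."""
    relu = lambda t: np.maximum(t, 0.0)
    h = relu(conv1x1(x, w_expand))   # 1x1 expansion (coefficient 4)
    h = relu(depthwise3x3(h, w_dw))  # per-channel spatial filtering
    h = conv1x1(h, w_project)        # linear projection back to C channels
    return x + h                     # residual shortcut: same shape in/out
```

Because input and output shapes match, such blocks can be stacked in the intermediate processing layer without changing the feature map size.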
Step S1024, sequentially upsampling the image after the inverted residual calculation through the (preset number + 2) upsampling layers to obtain an output result.
In one embodiment, each of the first preset-number upsampling layers is constructed based on a bilinear interpolation algorithm and a short (skip) connection, and includes 1 fourth convolution layer with a fourth step size and a convolution kernel of a fourth size, and 1 inverted residual block with an expansion coefficient equal to the preset coefficient;
the penultimate upsampling layer includes 1 first deconvolution layer with a fifth step size and a convolution kernel of a fifth size, 1 fifth convolution layer with a sixth step size and a convolution kernel of a sixth size, and 1 sixth convolution layer with a seventh step size and a convolution kernel of a seventh size and no activation function;
the last upsampling layer includes 1 second deconvolution layer with an eighth step size and a convolution kernel of an eighth size.
In a specific application, the fourth step size is 1, the fourth size is 1×1, the fifth step size is 2, the fifth size is 2×2, the sixth step size is 1, the sixth size is 3×3, the seventh step size is 1, the seventh size is 1×1, the eighth step size is 2, and the eighth size is 2×2; when the preset number = 4, the numbers of output channels of the first preset-number upsampling layers are 128, 64, 32, and 16, respectively; the number of output channels of the first deconvolution layer is 16, that of the fifth convolution layer is 16, and that of the sixth convolution layer is 12; the number of output channels of the second deconvolution layer is 3.
In one embodiment, the short connections of the first preset-number upsampling layers are constructed as follows:
the output result of the fourth convolution layer of each of these upsampling layers is added to the output result of the layer with the same number of output channels in the feature extraction layer or the downsampling layers.
In a specific application, when the preset number = 4, the numbers of output channels of the first 4 upsampling layers are 128, 64, 32, and 16, respectively; their fourth convolution layer outputs are added, in one-to-one correspondence, to the outputs of the third convolution layers with 128, 64, and 32 output channels and to the output of the second convolution layer with 16 output channels.
In one embodiment, all activation functions in the deep neural network are ReLU activation functions.
As shown in fig. 2, a schematic diagram of a deep neural network is exemplarily shown; the numbers on the network structure of each layer in the figure indicate the number of output channels of the layer, and the arrow direction indicates the transmission direction of the image data.
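Under the strides and channel counts listed above (preset number = 4), the layer shapes can be traced with plain Python. Six ×2 upsamplings against five ÷2 reductions leave the output twice the input resolution per side; this matches turning the four half-resolution channel planes back into a full-resolution 3-channel image (that last interpretation is our reading, not stated verbatim in the text):

```python
def trace_shapes(h, w):
    """Trace (name, channels, height, width) through the described network
    for a (4, h, w) preprocessed input, preset number = 4 (sketch)."""
    shapes = []
    h, w = h // 2, w // 2                    # feature extraction: stride-2 5x5 conv
    shapes.append(("feature_extraction", 16, h, w))  # then stride-1 1x1 conv, 16 ch
    for ch in (32, 64, 128, 256):            # 4 stride-2 downsampling layers
        h, w = h // 2, w // 2
        shapes.append(("down", ch, h, w))
    shapes.append(("middle", 256, h, w))     # 4 inverted residuals, shape unchanged
    for ch in (128, 64, 32, 16):             # 4 bilinear x2 upsampling layers
        h, w = h * 2, w * 2
        shapes.append(("up", ch, h, w))
    h, w = h * 2, w * 2                      # penultimate layer: stride-2 deconv
    shapes.append(("penultimate", 12, h, w)) # ends in the 12-channel 1x1 conv
    h, w = h * 2, w * 2                      # last layer: stride-2 2x2 deconv
    shapes.append(("output", 3, h, w))
    return shapes
```

For a 512×512 four-channel input (packed from a 1024×1024 raw mosaic), the trace ends at a 3×1024×1024 output.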
And step S103, generating a target image according to the output result.
In one embodiment, step S103 includes:
and carrying out clamping processing, stretching processing and splicing processing on the output result to generate a target image.
In a specific application, the clamping process restricts the pixel value of each pixel in each color channel of the image output by the deep neural network to [0, 1]; specifically, any pixel value greater than 1 is set to 1 to prevent overexposure.
In a specific application, the stretching process multiplies the clamped output result by a coefficient, which may be 255.
In a specific application, the splicing process can be implemented by adopting a linear weighted fusion method.
In one embodiment, the model of the linear weighted fusion algorithm is as follows:

W_b = (X - X_1) / (X_2 - X_1);

W_a = 1 - W_b;

X_merge = X_a * W_a + X_b * W_b;

where X_1 is the left boundary of the region where image a and image b overlap, X_2 is the right boundary of that region, and X is a column position within the overlap; W_a is the fusion weight of image a at position X, W_b is the fusion weight of image b at position X, X_a is the data of image a at position X, X_b is the data of image b at position X, and X_merge is the fused image data at position X; [0, X_2] is the extent of image a, [X_1, X_3] is the extent of image b, [X_1, X_2] is the overlapping region of image a and image b, and [0, X_3] is the extent of the image after fusing image a and image b.
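A numpy sketch of this blending model for one overlapping pair of blocks; the endpoint convention of the weight ramp (weight of b rising linearly from 0 to 1 across the overlap, as in Fig. 3) is an assumption of this sketch:

```python
import numpy as np

def blend_pair(a, b, overlap):
    """Linearly blend image a (left) and image b (right), which share
    `overlap` columns, per the linear weighted fusion model."""
    w_b = np.linspace(0.0, 1.0, overlap)   # W_b rises across the overlap
    w_a = 1.0 - w_b                        # W_a = 1 - W_b
    seam = a[..., -overlap:] * w_a + b[..., :overlap] * w_b
    return np.concatenate([a[..., :-overlap], seam, b[..., overlap:]], axis=-1)
```

Applied left-to-right over all adjacent block pairs, this reassembles the tiled output with a natural transition at each seam.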
Fig. 3 schematically illustrates a diagram of a change in weighting of overlapping pixels.
Fig. 4 exemplarily shows the process by which an original image containing four color channels is turned into a target image through image preprocessing, the deep neural network, clamping, stretching, and stitching.
Fig. 5 exemplarily shows an original image, an original image after image preprocessing, and a target image. As can be seen from fig. 5, the image processing method provided in this embodiment can effectively improve the signal-to-noise ratio and the definition of the short exposure image acquired in the dark environment.
According to the embodiment, the original image obtained in the dark environment is subjected to image preprocessing, the original image after the image preprocessing is input into the deep neural network for forward calculation, an output result is obtained, then a target image is generated according to the output result, the original image obtained in the dark environment can be processed into the target image with high signal to noise ratio and low noise level, and the definition of the original image is effectively improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not limit the implementation of the embodiments of the present invention.
Example two
As shown in fig. 6, this embodiment provides an image processing system 5 for executing the method steps of the first embodiment. It may be a software system within an image capturing apparatus or a computing apparatus, or a software system executed by a processor of the image capturing apparatus or computing apparatus when the computer program runs.
In particular applications, the processor may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
As shown in fig. 6, the image processing system 5 includes:
a preprocessing module 501, configured to perform image preprocessing on an original image; the original image is an image acquired in an environment with brightness lower than preset illuminance;
the computing module 502 is configured to input the original image after image preprocessing into a deep neural network for forward computation, so as to obtain an output result;
an image processing module 503, configured to generate a target image according to the output result. In one embodiment, the image processing module is specifically configured to perform a clipping process, a stretching process, and a stitching process on the output result, so as to generate a target image.
In one embodiment, the image processing system 5 further comprises:
the image acquisition module is used for acquiring an original image in an environment with the brightness lower than the preset illuminance.
In a specific application, each module may be implemented by a processor independent of each other, or may be integrated together into one processor.
According to the embodiment, the original image obtained in the dark environment is subjected to image preprocessing, the original image after the image preprocessing is input into the deep neural network for forward calculation, an output result is obtained, then a target image is generated according to the output result, the original image obtained in the dark environment can be processed into the target image with high signal to noise ratio and low noise level, and the definition of the original image is effectively improved.
Example III
As shown in fig. 7, the present embodiment provides a terminal device 6 including: a processor 60, a memory 61, and a computer program 62, such as an image processing program, stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the image processing method embodiments described above, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules in the above system embodiment, such as the functions of modules 501 to 503 shown in fig. 6.
Illustratively, the computer program 62 may be partitioned into one or more modules that are stored in the memory 61 and executed by the processor 60 to complete the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a preprocessing module, a calculation module, and an image processing module, each of which functions specifically as follows:
the preprocessing module is used for preprocessing the original image; the original image is an image acquired in an environment with brightness lower than preset illuminance;
the computing module is used for inputting the original image subjected to image preprocessing into a deep neural network for forward computation to obtain an output result; and the image processing module is used for generating a target image according to the output result.
The terminal device 6 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, a processor 60 and a memory 61. It will be appreciated by those skilled in the art that fig. 7 is merely an example of the terminal device 6 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components, and may further include input-output devices, network access devices, buses, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical application, the above functions may be distributed among different functional units and modules as needed, i.e. the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program instructing related hardware. The computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included in the scope of the present invention.
Claims (9)
1. An image processing method, comprising:
performing image preprocessing on an original image; wherein the original image is an image acquired in an environment whose illuminance is lower than a preset illuminance;
inputting the original image subjected to image preprocessing into a deep neural network for forward calculation to obtain an output result;
generating a target image according to the output result;
wherein performing image preprocessing on the original image comprises:
performing color channel separation on an original image, and storing the original image as images with a preset number of color channels according to the number and the sequence of the color channels of the original image;
performing black level correction on the images of the preset number of color channels;
normalizing the images of the preset number of color channels after black level correction;
amplifying the normalized images of the preset number of color channels;
clamping the amplified images of the preset number of color channels;
and cutting the clamped images of the preset number of color channels.
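The six preprocessing steps above can be sketched as follows. This is a minimal illustration only: the RGGB Bayer layout, the black/white levels, the exposure ratio and the crop size are assumptions, since the claim does not fix any of these values.

```python
import numpy as np

def preprocess(raw, black_level=512, white_level=16383, ratio=100.0, crop=512):
    """Sketch of the claim-1 preprocessing chain for a Bayer RAW frame.
    All parameter values and the RGGB pattern are illustrative assumptions."""
    # 1. Color channel separation: pack the H x W Bayer mosaic into
    #    4 half-resolution channel planes, keeping the sensor's channel order.
    channels = np.stack([raw[0::2, 0::2],   # R
                         raw[0::2, 1::2],   # G1
                         raw[1::2, 0::2],   # G2
                         raw[1::2, 1::2]],  # B
                        axis=0).astype(np.float32)
    # 2. Black level correction: subtract the sensor's black offset.
    channels -= black_level
    # 3. Normalization to [0, 1] by the remaining dynamic range.
    channels /= (white_level - black_level)
    # 4. Amplification: scale by an exposure ratio.
    channels *= ratio
    # 5. Clamping to the valid range.
    channels = np.clip(channels, 0.0, 1.0)
    # 6. Cropping to a fixed patch size.
    return channels[:, :crop, :crop]
```

A 1024 x 1024 RAW frame yields a 4 x 512 x 512 input tensor for the network under these assumptions.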
2. The image processing method according to claim 1, wherein the deep neural network includes 1 feature extraction layer, a preset number of downsampling layers, 1 intermediate processing layer, and the preset number +2 of upsampling layers;
inputting the original image after image preprocessing into a deep neural network for forward calculation to obtain an output result, wherein the method comprises the following steps:
inputting the original image subjected to image preprocessing into the 1 feature extraction layer for feature extraction;
sequentially downsampling the original image after feature extraction through a preset number of downsampling layers;
performing reverse residual calculation on the downsampled original image through the 1 intermediate processing layer;
and sequentially upsampling the original image after the reverse residual calculation through the preset number + 2 upsampling layers to obtain an output result.
3. The image processing method of claim 2, wherein the 1 feature extraction layer comprises 1 first convolution layer with a first step size and a first convolution kernel size and 1 second convolution layer with a second step size and a second convolution kernel size;
each downsampling layer comprises 1 third convolution layer with a third step size and a third convolution kernel size, and 1 reverse residual block with an expansion coefficient equal to a preset coefficient;
the 1 intermediate processing layer comprises a preset number of reverse residual blocks whose expansion coefficients are the preset coefficient;
the first preset number of up-sampling layers are built based on a bilinear interpolation algorithm in a short-connection form, and each comprises 1 fourth convolution layer with a fourth step size and a fourth convolution kernel size and 1 reverse residual block with an expansion coefficient equal to the preset coefficient;
the penultimate up-sampling layer comprises 1 first deconvolution layer with a fifth step size and a fifth convolution kernel size, 1 fifth deconvolution layer with a sixth step size and a sixth convolution kernel size, and 1 sixth deconvolution layer with a seventh step size and a seventh convolution kernel size and without an activation function;
the last up-sampling layer comprises 1 second deconvolution layer with an eighth step size and an eighth convolution kernel size.
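Reading claims 2, 3 and 5 together (preset number = 4), the tensor shapes through the claimed encoder-decoder can be traced as below. The claims do not fix the step sizes, so a stride of 2 per down-sampling layer and in the final deconvolution is assumed here, which maps the 4-channel half-resolution packed input back to a 3-channel full-resolution image.

```python
def trace_shapes(h, w, in_ch=4, preset_number=4):
    """Trace (name, channels, height, width) through the claimed network.
    Channel counts follow claim 5; stride-2 sampling is an assumption."""
    shapes = [("input", in_ch, h, w)]
    # 1 feature extraction layer: first conv (32 ch) then second conv (16 ch)
    shapes.append(("feature_extraction", 16, h, w))
    for c in [32, 64, 128, 256][:preset_number]:   # preset number of down-sampling layers
        h, w = h // 2, w // 2
        shapes.append(("down", c, h, w))
    # 1 intermediate processing layer: reverse residual blocks, 256 channels
    shapes.append(("middle", 256, h, w))
    for c in [128, 64, 32, 16][:preset_number]:    # bilinear up-sampling + short connection
        h, w = h * 2, w * 2
        shapes.append(("up", c, h, w))
    shapes.append(("penultimate_up", 12, h, w))    # deconv stack, last conv has no activation
    h, w = h * 2, w * 2                            # final stride-2 second deconvolution layer
    shapes.append(("output", 3, h, w))
    return shapes
```

With a 4 x 256 x 256 packed input, this trace bottoms out at 256 x 16 x 16 in the intermediate layer and ends at a 3 x 512 x 512 output under the assumed strides.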
4. The image processing method according to claim 3, wherein the short connections of the first preset number of up-sampling layers are constructed as follows:
adding the output result of the fourth convolution layer of each of the first preset number of up-sampling layers to the output result with the same number of output channels in the feature extraction layer or the down-sampling layers.
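A minimal sketch of such a short connection, using nearest-neighbour duplication as a stand-in for the bilinear interpolation named in claim 3:

```python
import numpy as np

def up_with_skip(decoder_feat, encoder_feat):
    """Claim-4 short connection (sketch): upsample the decoder feature map
    2x (nearest-neighbour duplication standing in for bilinear interpolation),
    then add the encoder/down-sampling feature with the same channel count."""
    assert decoder_feat.shape[0] == encoder_feat.shape[0], "channel counts must match"
    # Duplicate each spatial element into a 2x2 block (channels unchanged).
    up = np.kron(decoder_feat, np.ones((1, 2, 2), dtype=decoder_feat.dtype))
    return up + encoder_feat  # element-wise addition, as in the claim
```

The element-wise addition (rather than channel concatenation) is what the claim specifies; it requires the matched encoder feature to have the same number of output channels.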
5. The image processing method according to claim 3 or 4, wherein the preset number is 4;
the number of output channels of the first convolution layer is 32, and the number of output channels of the second convolution layer is 16;
the output channel numbers of the preset number of downsampling layers are 32, 64, 128 and 256 respectively;
the number of output channels of each of the preset number of reverse residual blocks in the intermediate processing layer is 256;
the numbers of output channels of the first preset number of up-sampling layers are 128, 64, 32 and 16 respectively;
the number of output channels of the first deconvolution layer is 16, the number of output channels of the fifth deconvolution layer is 16, and the number of output channels of the sixth deconvolution layer is 12;
the number of output channels of the second deconvolution layer is 3.
6. The image processing method according to any one of claims 1 to 4, wherein the deep neural network is trained in advance by global optimization on image-preprocessed short-exposure images and corresponding long-exposure images until convergence.
7. The image processing method according to any one of claims 1 to 4, wherein generating a target image from the output result includes:
and carrying out clamping processing, stretching processing and splicing processing on the output result to generate a target image.
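The clamping, stretching and splicing chain of claim 7 can be sketched as below. The stretch to an 8-bit range and the channel-last splice are assumptions; the claim does not fix the target range or layout.

```python
import numpy as np

def postprocess(output, bit_depth=8):
    """Sketch of claim 7: clamp, stretch, splice the network output
    into a displayable image. Range and layout are assumptions."""
    # Clamp the network output to [0, 1].
    out = np.clip(output, 0.0, 1.0)
    # Stretch to the display range (e.g. 0..255 for 8-bit).
    out = out * (2 ** bit_depth - 1)
    # Splice: reorder the (C, H, W) channel planes into an (H, W, C) RGB image.
    return np.transpose(out, (1, 2, 0)).astype(np.uint8)
```

Applied to the 3-channel network output, this yields a standard 8-bit RGB target image under the stated assumptions.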
8. An image processing system, comprising:
the preprocessing module is used for performing image preprocessing on an original image; wherein the original image is an image acquired in an environment whose illuminance is lower than a preset illuminance;
the computing module is used for inputting the original image subjected to image preprocessing into a deep neural network for forward computation to obtain an output result;
the image processing module is used for generating a target image according to the output result;
the preprocessing module is specifically used for:
performing color channel separation on an original image, and storing the original image as images with a preset number of color channels according to the number and the sequence of the color channels of the original image;
performing black level correction on the images of the preset number of color channels;
normalizing the images of the preset number of color channels after black level correction;
amplifying the normalized images of the preset number of color channels;
clamping the amplified images of the preset number of color channels;
and cutting the clamped images of the preset number of color channels.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811646727.1A CN111383188B (en) | 2018-12-29 | 2018-12-29 | Image processing method, system and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111383188A CN111383188A (en) | 2020-07-07 |
CN111383188B true CN111383188B (en) | 2023-07-14 |
Family
ID=71218361
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811646727.1A Active CN111383188B (en) | 2018-12-29 | 2018-12-29 | Image processing method, system and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111383188B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110349102A (en) * | 2019-06-27 | 2019-10-18 | 腾讯科技(深圳)有限公司 | Image beautification processing method, processing apparatus and electronic device |
CN112104818B (en) * | 2020-08-28 | 2022-07-01 | 稿定(厦门)科技有限公司 | RGB channel separation method and system |
US20220398696A1 (en) * | 2020-12-24 | 2022-12-15 | Boe Technology Group Co., Ltd. | Image processing method and device, and computer-readable storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105447866A (en) * | 2015-11-22 | 2016-03-30 | 南方医科大学 | X-ray chest radiograph bone marrow suppression processing method based on convolution neural network |
CN108765319B (en) * | 2018-05-09 | 2020-08-14 | 大连理工大学 | Image denoising method based on generation countermeasure network |
CN108960257A (en) * | 2018-07-06 | 2018-12-07 | 东北大学 | A kind of diabetic retinopathy grade stage division based on deep learning |
CN108986050B (en) * | 2018-07-20 | 2020-11-10 | 北京航空航天大学 | Image and video enhancement method based on multi-branch convolutional neural network |
CN108965731A (en) * | 2018-08-22 | 2018-12-07 | Oppo广东移动通信有限公司 | A kind of half-light image processing method and device, terminal, storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province; Applicant after: TCL Technology Group Co.,Ltd. Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District; Applicant before: TCL Corp. |
| GR01 | Patent grant | |