CN109345487A - Image enhancement method and computing device - Google Patents

Image enhancement method and computing device

Info

Publication number
CN109345487A
Authority
CN
China
Prior art keywords
image
processed
output
enhancing
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811252617.7A
Other languages
Chinese (zh)
Other versions
CN109345487B (en)
Inventor
周铭柯
李志阳
张伟
李启东
许清泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN201811252617.7A (granted as CN109345487B)
Publication of CN109345487A
Application granted
Publication of CN109345487B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Abstract

The invention discloses an image enhancement method, comprising: inputting an image to be processed into a preset image enhancement model and outputting an image after multiple convolution operations; converting the image to be processed and the output image into a predetermined color space respectively; and fusing the image to be processed with the output image in the predetermined color space to generate an enhanced image. The invention also discloses a computing device for executing the above method.

Description

Image enhancement method and computing device
Technical field
The present invention relates to the technical field of image processing, and in particular to an image enhancement method and a computing device.
Background technique
With the development of Internet technology, people increasingly rely on networks to obtain information quickly, such as pictures and videos. However, the visual quality of many pictures transmitted over the Internet is mediocre, and Internet users often struggle to find pictures with both good content and good color. On the other hand, mobile terminals (such as mobile phones and tablet computers) have become everyday photographing devices, yet the photos they take rarely meet higher visual demands. For both of these reasons, improving the visual quality of images by means of an image enhancement method has a wide range of application scenarios.
Traditional image enhancement algorithms usually adjust the pixel values of each channel of an image with fixed parameter values to improve the image's clarity, saturation and contrast. However, the effect of such methods is monotonous, and the adjusted image is prone to problems such as an unnatural appearance and color blocks. The development of convolutional neural networks (CNN, Convolutional Neural Network) has brought new ideas to image processing, and in some respects the enhancement effect is better than that of traditional algorithms; still, CNN-based algorithms are prone to problems such as unnatural transitions and color cast.
Therefore, an image enhancement scheme that can overcome the above disadvantages is needed.
Summary of the invention
To this end, the present invention provides an image enhancement method and a computing device, in an effort to solve, or at least alleviate, at least one of the problems above.
According to one aspect of the invention, an image enhancement method is provided. The method is executed in a computing device and comprises: inputting an image to be processed into a preset image enhancement model and outputting an image after multiple convolution operations; converting the image to be processed and the output image into a predetermined color space respectively; and fusing the image to be processed with the output image in the predetermined color space to generate an enhanced image.
Optionally, in the method according to the invention, the step of converting the image to be processed and the output image into the predetermined color space respectively further comprises: converting the image to be processed into the predetermined color space to obtain a first to-be-processed map, a second to-be-processed map and a third to-be-processed map on three channels; and converting the output image into the predetermined color space to obtain a first output map, a second output map and a third output map on the three channels.
Optionally, in the method according to the invention, the step of fusing the image to be processed with the output image in the predetermined color space to generate the enhanced image comprises: judging the pixel values of the pixels in the first to-be-processed map and, according to the judgment result, combining the first output map to generate a first enhanced map; combining the second to-be-processed map with the second output map to generate a second enhanced map; combining the third to-be-processed map with the third output map to generate a third enhanced map; and fusing the first enhanced map, the second enhanced map and the third enhanced map to generate the enhanced image.
Optionally, in the method according to the invention, the step of combining the first output map according to the judgment result to generate the first enhanced map comprises: if the pixel value of a pixel in the first to-be-processed map is less than a first threshold, taking the pixel value of the corresponding pixel of the first output map as the pixel value of the corresponding pixel of the first enhanced map; if the pixel value of a pixel in the first to-be-processed map is greater than a second threshold, combining the pixel values of the corresponding pixels of the first to-be-processed map and the first output map in a first way to generate the pixel value of the corresponding pixel of the first enhanced map; and if the pixel value of a pixel in the first to-be-processed map is neither less than the first threshold nor greater than the second threshold, combining the pixel values of the corresponding pixels of the first to-be-processed map and the first output map in a second way to generate the pixel value of the corresponding pixel of the first enhanced map.
Optionally, in the method according to the invention, the step of combining the second to-be-processed map with the second output map to generate the second enhanced map comprises: computing a weighted sum of the second to-be-processed map and the second output map to generate the second enhanced map; and the step of combining the third to-be-processed map with the third output map to generate the third enhanced map comprises: computing a weighted sum of the third to-be-processed map and the third output map to generate the third enhanced map.
Optionally, in the method according to the invention, the predetermined color space is the Lab color space.
Optionally, in the method according to the invention, the image enhancement model comprises a plurality of sequentially connected intermediate processing blocks and one result processing block, wherein each intermediate processing block comprises at least two sequentially connected convolution-activation layers and one skip connection layer, and the skip connection layer adds the input of the first convolution-activation layer of its intermediate processing block to the output of the last convolution-activation layer; the result processing block comprises a plurality of convolution-activation layers; and a further convolution-activation layer precedes the first intermediate processing block of the image enhancement model.
Optionally, in the method according to the invention, the activation function of the convolution-activation layers in each intermediate processing block is the ReLU function, and the number of intermediate processing blocks is 4.
Optionally, in the method according to the invention, the result processing block comprises three convolution-activation layers, wherein the activation function of the first two convolution-activation layers is the ReLU function and the activation function of the third convolution-activation layer is the Tanh function.
Optionally, in the method according to the invention, before the step of inputting the image to be processed into the preset image enhancement model, the method further comprises the step of generating the preset image enhancement model by training: obtaining a plurality of training image pairs, each training image pair comprising an input image and a target image, wherein the input image is an image acquired by a single-lens reflex (SLR) camera and the target image is the image obtained by adjusting the input image; inputting the input image into the pre-training image enhancement model, outputting an image after multiple convolution operations, computing the loss value of the output image relative to the target image according to a preset loss function, and updating the parameters of the image enhancement model; when the loss value satisfies a predetermined condition, training ends, and the trained image enhancement model is taken as the preset image enhancement model.
Optionally, in the method according to the invention, the step of inputting the input image into the pre-training image enhancement model further comprises: for each training image pair, cutting at least one subimage of a predetermined size out of the input image as an input subimage; cutting each target subimage out of the target image according to the coordinate position of each input subimage in the input image; and inputting the input subimage into the pre-training image enhancement model, outputting a subimage after multiple convolution operations, computing the loss value of the output subimage relative to the target subimage according to the preset loss function and updating the parameters of the image enhancement model; when the loss value satisfies a predetermined condition, training ends, and the trained image enhancement model is taken as the preset image enhancement model.
Optionally, in the method according to the invention, the preset loss function is expressed by the following formula: loss = λ1*color_loss + λ2*vgg_loss, where loss denotes the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients.
Optionally, in the method according to the invention, the step of computing the first loss comprises: applying mean blur respectively to the output image and the corresponding target image to obtain a blurred output image and a blurred target image; and computing the pixel distance values of corresponding pixels in the blurred output image and the blurred target image as the first loss.
Optionally, in the method according to the invention, the step of computing the second loss comprises: inputting the output image and the target image respectively into a preset convolutional network to generate their respective feature maps; and computing the pixel distance values of corresponding pixels in the feature map of the output image and the feature map of the target image as the second loss.
According to another aspect of the present invention, a computing device is provided, comprising: at least one processor; and a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for executing any of the methods described above.
According to yet another aspect of the present invention, a readable storage medium storing program instructions is provided; when the program instructions are read and executed by a computing device, they cause the computing device to execute any of the methods described above.
According to the image enhancement scheme of the present invention, the image to be processed is input into the preset image enhancement model, and an output image is produced after processing by the model. Considering that an image processed by a convolutional network may exhibit problems such as color cast, the output image and the image to be processed are converted into a color space that matches human visual perception, where further fusion processing is performed, finally generating the enhanced image. The quality of the generated enhanced image is significantly better than that of the original image to be processed, and problems of over-exposure and color overflow are well repaired.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention can be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention can be more readily appreciated, specific embodiments of the present invention are set forth below.
Detailed description of the invention
To achieve the foregoing and related ends, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the disclosure will become apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout the disclosure, identical reference numerals generally refer to identical components or elements.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the invention;
Fig. 2 shows a flow chart of an image enhancement method 200 according to an embodiment of the invention;
Fig. 3 shows a structural diagram of an image enhancement model 300 according to an embodiment of the invention;
Fig. 4 shows a structural diagram of an intermediate processing block according to an embodiment of the invention;
Fig. 5 shows a structural diagram of a result processing block according to an embodiment of the invention;
Fig. 6A and Fig. 6B show a training image pair according to an embodiment of the invention, where Fig. 6A is the input image and Fig. 6B is the target image; and
Fig. 7A and Fig. 7B show an enhancement-effect comparison according to an embodiment of the invention, where Fig. 7A is the image to be processed and Fig. 7B is the output image processed by the preset image enhancement model.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be more thoroughly understood and so that the scope of the disclosure can be fully conveyed to those skilled in the art.
The image enhancement method of the invention is suitable for execution in one computing device or a group of computing devices; that is, the enhancement of the input image to be processed is completed in one or a group of computing devices. A computing device may be, for example, a server (such as a web server or application server), a personal computer such as a desktop or notebook computer, a mobile phone, a tablet computer, a smart wearable device or another portable mobile device, but is not limited to these. According to a preferred embodiment, the image enhancement method of the invention is executed in a computing device; for example, the computing device may be implemented as a distributed system with a parameter-server architecture.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the invention. As shown in Fig. 1, in a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP) or any combination of them. The processor 104 may include one or more levels of cache, such as a level-1 cache 110 and a level-2 cache 112, a processor core 114 and registers 116. An exemplary processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core) or any combination of them. An exemplary memory controller 118 may be used together with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to: volatile memory (RAM), non-volatile memory (ROM, flash memory, etc.) or any combination of them. The system memory 106 may include an operating system 120, one or more applications 122 and program data 124. In some embodiments, the applications 122 may be arranged to be executed with the program data 124 on the operating system by the one or more processors 104.
The computing device 100 may also include an interface bus 140 facilitating communication from various interface devices (for example, an output device 142, a peripheral interface 144 and a communication device 146) to the basic configuration 102 via a bus/interface controller 130. An exemplary output device 142 includes a graphics processing unit 148 and an audio processing unit 150, which may be configured to facilitate communication with various external devices such as a display or loudspeakers via one or more A/V ports 152. An exemplary peripheral interface 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication via one or more I/O ports 158 with input devices (for example, a keyboard, mouse, pen, voice input device or touch input device) or other peripherals (such as a printer or scanner). An exemplary communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. A communication medium may typically be embodied as computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. A "modulated data signal" may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. As a non-limiting example, communication media may include wired media such as a wired network or a private-line network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable medium as used herein may include both storage media and communication media.
In the computing device 100 according to the invention, the applications 122 include a plurality of program instructions for executing the image enhancement method 200, and the program data 124 may include data such as the training image pairs and the parameters of the preset image enhancement model generated by training.
Fig. 2 shows a flow chart of an image enhancement method 200 according to an embodiment of the invention. The method 200 is suitable for execution in a computing device (such as the aforementioned computing device 100). As shown in Fig. 2, the method 200 starts at step S210.
In step S210, the image to be processed is input into the preset image enhancement model, and an image is output after multiple convolution operations. The image to be processed may be an image shot by a mobile terminal or an image downloaded over a network. In an embodiment according to the invention, the preset image enhancement model is a fully convolutional neural network, so the image to be processed that is input into the model may be of arbitrary size; after processing by the preset image enhancement model, the output image has the same size as the input image to be processed.
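As a rough sketch of this step (assuming PyTorch and a trained model module such as the EnhanceNet sketch given after the architecture description below; the patent names no framework, and the [0, 1] value range is an assumption):

```python
import numpy as np
import torch

@torch.no_grad()
def enhance(model: torch.nn.Module, img: np.ndarray) -> np.ndarray:
    """Run an HxWx3 float32 RGB image in [0, 1] through the model.

    Because the model is fully convolutional, H and W are arbitrary and
    the output has exactly the same spatial size as the input.
    """
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)  # 1x3xHxW
    y = model(x)
    return y.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy()
```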
It should be pointed out that the structure of the preset image enhancement model can be set by those skilled in the art according to actual needs, and the invention is not limited in this respect. According to one embodiment, the structure of the image enhancement model comprises a plurality of sequentially connected intermediate processing blocks (B) and one result processing block (C). Note that embodiments of the invention place no restriction on the number of intermediate processing blocks included in the model. Each intermediate processing block (B) comprises at least two sequentially connected convolution-activation layers (A) and one skip connection layer (SKIP), where each convolution-activation layer in turn comprises a convolutional layer (CONV) and an activation layer (ACTI). The activation function of the activation layer can be chosen by those skilled in the art, for example as the ReLU, Tanh or Sigmoid function, and the invention is not limited in this respect. The skip connection layer adds the input of the first convolution-activation layer of its intermediate processing block to the output of the last convolution-activation layer before outputting the sum. The skip connections effectively preserve image detail, which helps improve training efficiency and model accuracy. The result processing block (C) comprises one or more convolution-activation layers; the invention places no restriction on the number of convolution-activation layers in the result processing block, nor on the activation functions they use. In particular, according to one embodiment, a further convolution-activation layer A0, likewise comprising a convolutional layer CONV0 and an activation layer ACTI0, precedes the first intermediate processing block B1.
As described above, Fig. 3 shows a structural schematic diagram of an image enhancement model 300 according to an embodiment of the invention. As shown in Fig. 3, the model 300 comprises a sequentially connected convolution-activation layer A0, four intermediate processing blocks B1~B4 and a result processing block C. The intermediate processing blocks B1~B4 have similar structures, and Fig. 4 illustrates the structure of each intermediate processing block in Fig. 3 by taking block B1 as an example. As shown in Fig. 4, the intermediate processing block B1 comprises sequentially connected convolution-activation layers A1 and A2 and a skip connection layer SKIP1. The convolution-activation layer A1 further comprises a convolutional layer CONV1 and an activation layer ACTI1 using the ReLU function; the convolution-activation layer A2 further comprises a convolutional layer CONV2 and an activation layer ACTI2 using the ReLU function. The skip connection layer SKIP1 adds the input of the convolution-activation layer A1 (namely the input of the convolutional layer CONV1) to the output of the convolution-activation layer A2 (namely the output of the activation layer ACTI2) and then outputs the sum.
Fig. 5 shows one structure of the result processing block C in Fig. 3. As shown in Fig. 5, the result processing block C comprises three convolution-activation layers A9~A11. The convolution-activation layer A9 further comprises a convolutional layer CONV9 and an activation layer ACTI9 using the ReLU function; A10 further comprises a convolutional layer CONV10 and an activation layer ACTI10 using the ReLU function; and A11 further comprises a convolutional layer CONV11 and an activation layer ACTI11 using the Tanh function.
It should be pointed out that after the structure of the image enhancement model 300 has been constructed, some parameters still need to be preset, for example the number and size of the convolution kernels used by each convolutional layer (CONV), the stride of the kernels, and the amount of padding. A table of example parameters for the model 300 shown in Fig. 3~Fig. 5 accompanies the specification (note that each convolution-activation layer comprises a convolutional layer and an activation layer, and the activation layer only needs its activation function chosen, with no other parameters to preset; the parameters of each convolution-activation layer A are therefore those of its convolutional layer CONV).
The image enhancement model 300 is an end-to-end convolutional network, so its output image has the same size as its input image.
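Below is a minimal PyTorch sketch of the architecture of Figs. 3~5, offered for illustration only: the layer sequence (A0, four residual intermediate processing blocks, a result processing block ending in Tanh) follows the text, while the channel count of 64 and the 3x3 kernels with stride 1 are assumptions, since the parameter table is not reproduced here.

```python
import torch
import torch.nn as nn

class IntermediateBlock(nn.Module):
    """Two conv+ReLU layers plus a skip connection (Fig. 4)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # SKIP: block input added to last layer's output

class EnhanceNet(nn.Module):
    """A0, then B1~B4, then result block C (ReLU, ReLU, Tanh)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.a0 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[IntermediateBlock(ch) for _ in range(4)])
        self.result = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),  # A9
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),  # A10
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),               # A11
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # every convolution preserves HxW, so the output matches the input size
        return self.result(self.blocks(self.a0(x)))
```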
The structure of the image enhancement model 300 above and the basic parameters of each convolutional layer are preset by those skilled in the art, forming the pre-training image enhancement model; the pre-training image enhancement model is then trained so that its output can achieve the desired effect. Training the preset image enhancement model determines the model parameters, which include the weights at each position of each convolution kernel, the bias parameters, and so on. According to an embodiment of the invention, the method 200 includes the step of generating the preset image enhancement model by training, which specifically comprises the following two steps.
In the first step, a plurality of training image pairs are obtained, each training image pair comprising an input image and a target image. In a preferred embodiment of the invention, considering that images shot by an SLR camera have little noise and rich detail, the input images are collected by an SLR camera (for example, 1000 pictures shot with an SLR camera); meanwhile, professionals adjust each of the collected input images (in this way obtaining 1000 adjusted pictures) so that they have better contrast and saturation, and each adjusted image is taken as the target image corresponding to its input image. Fig. 6A and Fig. 6B show a training image pair according to an embodiment of the invention, where Fig. 6A is the input image and Fig. 6B is the target image.
In the second step, the input images of all the training image pairs are input into the pre-training image enhancement model, and images are output after multiple convolution operations; the loss value of each output image relative to its target image is computed according to the preset loss function, and the parameters of the image enhancement model are updated. When the loss value satisfies a predetermined condition, training ends (during training, the loss value generally decreases as the number of training iterations increases; when the loss values converge, i.e., the absolute difference between the loss values of two adjacent training iterations is less than a preset threshold, the model is considered trained), and the trained image enhancement model is taken as the preset image enhancement model.
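A sketch of this training procedure follows, under stated assumptions: PyTorch, the Adam optimizer with learning rate 1e-4 (the patent names neither), the preferred loss weights λ1 = 10 and λ2 = 1, and the color_loss and VGGLoss helpers sketched further below.

```python
import torch

def train(model, loader, max_epochs=200, eps=1e-4, lam1=10.0, lam2=1.0):
    """Train until the losses of two adjacent epochs differ by less than eps."""
    vgg_loss = VGGLoss()                       # second loss, sketched below
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    prev = None
    for epoch in range(max_epochs):
        total = 0.0
        for inp, tgt in loader:                # batches of (input, target) pairs
            out = model(inp)
            loss = lam1 * color_loss(out, tgt) + lam2 * vgg_loss(out, tgt)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if prev is not None and abs(total - prev) < eps:
            break                              # convergence condition met
        prev = total
    return model
```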
According to another embodiment of the invention, in order to balance training effect and training speed, the training image pairs obtained in the first step are further processed, and the processed training images are then used to train the image enhancement model. In this embodiment, the further processing consists of cutting small-size images out of each training image pair: for each training image pair, at least one subimage of a predetermined size (for example 100x100) is first cut out of the input image as an input subimage; meanwhile, the corresponding target subimage is cut out of the target image at the same coordinate position as each input subimage. After all the training image pairs have been cropped in this way, a training image set with a larger number of samples is obtained. Note that embodiments of the invention place no restriction on the cropping scheme; subimages of the predetermined size may be cut out of the original image at any angle and any position. Then, all the input subimages are input into the pre-training image enhancement model (it is also possible to select only some of the input subimages for training; embodiments of the invention are not limited in this respect), and output subimages are produced after multiple convolution operations. As in the training process described above, the loss value of each output subimage relative to its target subimage is computed according to the preset loss function and the parameters of the image enhancement model are updated; when the loss value satisfies the predetermined condition, training ends, and the trained image enhancement model is taken as the preset image enhancement model.
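A sketch of the cropping step, assuming axis-aligned patches at random positions (the text allows any angle and position) and numpy HxWxC arrays; the patch count per pair is an arbitrary choice:

```python
import random
import numpy as np

def crop_subimage_pairs(input_img: np.ndarray, target_img: np.ndarray,
                        size: int = 100, count: int = 8):
    """Cut `count` aligned size x size patches out of one training image pair."""
    h, w = input_img.shape[:2]
    pairs = []
    for _ in range(count):
        y = random.randint(0, h - size)
        x = random.randint(0, w - size)
        # the target subimage is taken at the same coordinates as the input one
        pairs.append((input_img[y:y + size, x:x + size],
                      target_img[y:y + size, x:x + size]))
    return pairs
```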
The choice of the preset loss function affects the training effect of the image enhancement model. According to an embodiment of the invention, the preset loss function is set as a mixed loss, expressed by the following formula:
loss = λ1*color_loss + λ2*vgg_loss
where loss denotes the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients. It should be pointed out that the values of λ1 and λ2 can be set by those skilled in the art according to the training process, and the invention is not limited in this respect. According to a preferred embodiment, the values are λ1 = 10 and λ2 = 1.
A method of computing the first loss and the second loss is given separately below.
(a) The step of computing the first loss comprises: first, mean blur is applied respectively to the output image and the corresponding target image (that is, the target image of the training image pair to which the output image's input image belongs), obtaining a blurred output image and a blurred target image; blurring the images removes the interference of high-frequency information, allowing the model to learn more color information. Then, the pixel distance values of corresponding pixels in the blurred output image and the blurred target image are computed as the first loss. In one embodiment, the mean pooling (mean-pooling) operation of convolutional neural networks is used to implement the mean blur of the output image and the target image; of course, those skilled in the art may implement the blurring with other algorithms, and embodiments of the invention are not limited in this respect.
For one output image and its target image, the calculation of the first loss can be briefly described by the following formula:

color_loss = Σ_{i=1..W} Σ_{j=1..H} sqrt( (r_ij - r'_ij)² + (g_ij - g'_ij)² + (b_ij - b'_ij)² )

In the above formula, W and H are respectively the horizontal and vertical sizes of the output image and the target image, (i, j) denotes a coordinate position in the image, r_ij, g_ij and b_ij denote the R, G, B values of the pixel at coordinate (i, j) in the blurred output image, and r'_ij, g'_ij and b'_ij denote the R, G, B values of the pixel at coordinate (i, j) in the blurred target image. This is equivalent to traversing all the pixels of the image, computing the pixel distance value at each, and summing all the pixel distance values as the first loss. For N training image pairs, the mean of the first losses of the N output images can be taken as the first loss of one training pass.
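A sketch of the first loss for a batch of Nx3xHxW tensors, implementing the mean blur with average pooling as described above; the blur kernel size is an assumption:

```python
import torch
import torch.nn.functional as F

def color_loss(output: torch.Tensor, target: torch.Tensor,
               kernel: int = 7) -> torch.Tensor:
    """Mean-blur both Nx3xHxW images, then sum per-pixel RGB distances."""
    pad = kernel // 2
    out_blur = F.avg_pool2d(output, kernel, stride=1, padding=pad)
    tgt_blur = F.avg_pool2d(target, kernel, stride=1, padding=pad)
    # Euclidean distance over the R, G, B channels at every pixel
    dist = torch.sqrt(((out_blur - tgt_blur) ** 2).sum(dim=1) + 1e-8)
    return dist.sum(dim=(1, 2)).mean()  # sum over pixels, mean over the batch
```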
(b) The step of computing the second loss comprises: first, the output image and the target image are input respectively into the preset convolutional network to generate their respective feature maps (the feature maps of every layer of the preset convolutional network may be generated, or the feature maps of only some layers may be extracted; the invention is not limited in this respect). Then, the pixel distance values of corresponding pixels in the feature map of the output image and the corresponding feature map of the target image are computed as the second loss. In one embodiment, the preset convolutional network is a VGG-19 network initialized with parameters trained on the ImageNet dataset; the corresponding pixel distance value is computed for the feature maps generated by each layer as the second loss of that layer, and finally the mean of the second losses of all the layers is taken as the second loss of one training pass.
For the feature map of one output image and the corresponding feature map of the target image, the calculation of the second loss can be briefly described by the following formula:

vgg_loss = Σ_{i=1..W'} Σ_{j=1..H'} sqrt( (vr_ij - vr'_ij)² + (vg_ij - vg'_ij)² + (vb_ij - vb'_ij)² )

In the above formula, W' and H' are respectively the horizontal and vertical sizes of the feature map of the output image and the feature map of the target image, (i, j) denotes a coordinate position in the map, vr_ij, vg_ij and vb_ij denote the R, G, B values of the pixel at coordinate (i, j) in the feature map of the output image, and vr'_ij, vg'_ij and vb'_ij denote the R, G, B values of the pixel at coordinate (i, j) in the feature map of the target image. This is equivalent to traversing all the pixels of the map, computing the corresponding pixel distance values, and summing them as the second loss. For N training image pairs, the mean of the N second losses can be taken as the second loss of one training pass.
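A sketch of the second loss using torchvision's ImageNet-pretrained VGG-19. The text leaves open which layers' feature maps are compared, so the ReLU layer indices chosen here are assumptions, and ImageNet mean/std normalization of the inputs is omitted for brevity:

```python
import torch
import torchvision

class VGGLoss(torch.nn.Module):
    def __init__(self, layer_ids=(3, 8, 17, 26, 35)):
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # the loss network stays frozen
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        losses, x, y = [], output, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                # pixel distance between the two feature maps of this layer
                dist = torch.sqrt(((x - y) ** 2).sum(dim=1) + 1e-8)
                losses.append(dist.sum(dim=(1, 2)).mean())
        return torch.stack(losses).mean()  # mean of the per-layer losses
```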
It should be pointed out that, as described earlier, the model can also be trained with input subimages, in which case the loss values of the output subimages relative to the target subimages need to be computed. Output images are used here only for illustration; following the explanation here, those skilled in the art can compute the loss values from output subimages and target subimages, which is not repeated here.
In summary, the image to be processed is input into the well-trained image enhancement model, and a preliminarily enhanced output image is obtained after the convolution operations. Compared with the image to be processed, the output image is improved in resolution, contrast and saturation, but problems such as unnatural transitions and color cast (for example, locally over-bright or grayish areas) may remain.
According to the invention, the preliminarily enhanced output image is further improved. The image to be processed is usually an RGB image, and the RGB color space is designed from the principle of color light emission and does not match the visual characteristics of the human eye. Therefore, so that the adjusted image better matches human perception of luminance and chroma, in an embodiment according to the invention the output image is converted into a color space that matches human visual perception and is further processed there.
Therefore, in the subsequent step S220, the image to be processed and the output image are first converted into the predetermined color space respectively.
According to an embodiment of the invention, the predetermined color space is the Lab color space, mainly because it better matches human visual perception and is not affected by display devices; the L channel mainly expresses lightness, the a channel mainly affects red-green, and the b channel mainly affects yellow-blue. The image to be processed is usually an RGB image, so the image to be processed and the output image need to be converted from the RGB color space into the Lab color space. The detailed process of color space conversion is not expanded here; note that, depending on the desired processing effect, the images may also be converted into other color spaces (such as other luminance-chrominance separated color spaces), with no restriction here.
The image to be processed is converted into the Lab color space, obtaining on the three channels (namely the L, a and b channels) a first to-be-processed map (the L-channel map), a second to-be-processed map (the a-channel map) and a third to-be-processed map (the b-channel map); similarly, the output image is converted into the Lab color space, obtaining on the three channels a first output map (the L-channel output map), a second output map (the a-channel output map) and a third output map (the b-channel output map). For ease of description, in what follows the first, second and third to-be-processed maps obtained by converting the image to be processed into the Lab color space are denoted l1, a1 and b1 respectively, and the first, second and third output maps obtained by converting the output image are denoted l2, a2 and b2 respectively.
In the Lab color space, the value range of the L channel is usually 0~100, and the value ranges of the a channel and the b channel are both -128~+127. According to one embodiment of the invention, to facilitate the subsequent fusion calculation, the maps on the three channels are normalized after the color space conversion, so that the pixel value range of each channel's map lies between 0 and 255.
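A sketch of this conversion with OpenCV, whose 8-bit Lab representation already maps all three channels into the 0~255 range (L is scaled by 255/100, a and b are offset by 128), which matches the normalization described above; the file names are placeholders:

```python
import cv2
import numpy as np

def split_lab(img_bgr: np.ndarray):
    """Convert an 8-bit BGR image to Lab and return the L, a, b channel maps."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    return cv2.split(lab)  # three uint8 maps, each in the range 0~255

# the to-be-processed image and the model's output image are converted alike
l1, a1, b1 = split_lab(cv2.imread("to_be_processed.jpg"))
l2, a2, b2 = split_lab(cv2.imread("model_output.jpg"))
```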
Then, in step S230, the image to be processed and the output image are fused in the predetermined color space to generate the enhanced image.
According to an embodiment of the invention, a channel-wise fusion algorithm is used to process the output image. The processing on each of the three channels is described below, showing how the enhancement effect of the output image is improved.
(1) On the L channel, the pixel values of the pixels in the first to-be-processed map l1 are judged, and according to the judgment result the first output map l2 is combined to generate a first enhanced map (denoted l).
Specifically, when the pixel value of a pixel in the first to-be-processed map l1 is less than a first threshold (50 in one embodiment of the invention), the pixel value of the corresponding pixel of the first output map l2 is taken as the pixel value of the corresponding pixel of the first enhanced map l. When the pixel value of a pixel in the first to-be-processed map l1 is greater than a second threshold (200 in one embodiment of the invention), the pixel values of the corresponding pixels of the first to-be-processed map l1 and the first output map l2 are weighted in a first way to generate the pixel value of the corresponding pixel of the first enhanced map l. When the pixel value of a pixel in the first to-be-processed map l1 is neither less than the first threshold nor greater than the second threshold, the pixel values of the corresponding pixels of the first to-be-processed map l1 and the first output map l2 are weighted in a second way to generate the pixel value of the corresponding pixel of the first enhanced map l.
In a preferred embodiment, the judgment process can be expressed by the following formula:

l(x, y) = l2(x, y), if l1(x, y) < 50
l(x, y) = α1*l1(x, y) + (1 - α1)*l2(x, y), if l1(x, y) > 200
l(x, y) = α2*l1(x, y) + (1 - α2)*l2(x, y), otherwise

In the above formula, l(x, y) is the value of the pixel at coordinate (x, y) in the first enhanced map, l1(x, y) is the value of the pixel at coordinate (x, y) in the first to-be-processed map, l2(x, y) is the value of the pixel at coordinate (x, y) in the first output map, and α1 and α2 are two weighting coefficients. It should be noted that only a preferred embodiment is shown here; the invention is intended to protect the idea of weighting the first to-be-processed map and the first output map case by case according to the pixel values of the first to-be-processed map, so as to improve the enhancement effect of the first output map, and places no restriction on the values of the specific weighting coefficients.
(2) On the a channel, the second to-be-processed map a1 is combined with the second output map a2 to generate a second enhanced map (denoted a). In one embodiment, the second to-be-processed map a1 and the second output map a2 are weighted to generate the second enhanced map a. In a preferred embodiment, the calculation formula of the second enhanced map is as follows:
a(x, y) = 0.15*a1(x, y) + 0.85*a2(x, y)
In the above formula, a(x, y) is the value of the pixel at coordinate (x, y) in the second enhanced map, a1(x, y) is the value of the pixel at coordinate (x, y) in the second to-be-processed map, and a2(x, y) is the value of the pixel at coordinate (x, y) in the second output map. It should be noted that only a preferred embodiment is shown here; the invention is intended to protect the idea of weighting the second to-be-processed map and the second output map so as to improve the enhancement effect of the second output map, and places no restriction on the values of the specific weighting coefficients.
(3) On the b channel, the third to-be-processed map b1 is combined with the third output map b2 to generate a third enhanced map (denoted b). In one embodiment, the third to-be-processed map b1 and the third output map b2 are weighted to generate the third enhanced map b. In a preferred embodiment, the calculation formula of the third enhanced map is as follows:
b(x, y) = 0.15*b1(x, y) + 0.85*b2(x, y)
In the above formula, b(x, y) is the value of the pixel at coordinate (x, y) in the third enhanced map, b1(x, y) is the value of the pixel at coordinate (x, y) in the third to-be-processed map, and b2(x, y) is the value of the pixel at coordinate (x, y) in the third output map. It should be noted that only a preferred embodiment is shown here; the invention is intended to protect the idea of weighting the third to-be-processed map and the third output map so as to improve the enhancement effect of the third output map, and places no restriction on the values of the specific weighting coefficients.
In summary, the processing of the L-channel map ensures that the dark areas of the generated first enhanced map are not over-bright while the bright areas do not become over-exposed, and the processing of the a-channel and b-channel maps also avoids color overflow well. Finally, the first enhanced map l, the second enhanced map a and the third enhanced map b are fused to generate the improved enhanced image. Of course, in some embodiments, the improved enhanced image may also be converted back from the Lab color space to the RGB color space for display as the enhanced image.
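A sketch of the whole fusion step on the normalized 0~255 channel maps. The thresholds 50 and 200 and the a/b weights 0.15/0.85 come from the text; the two L-channel blend weights are placeholder assumptions, since the exact preferred formula is not reproduced above:

```python
import cv2
import numpy as np

def fuse(l1, a1, b1, l2, a2, b2, w_bright=0.3, w_mid=0.15):
    """Fuse the to-be-processed maps (l1, a1, b1) with the output maps (l2, a2, b2)."""
    l1f, l2f = l1.astype(np.float32), l2.astype(np.float32)
    l = np.where(l1f < 50, l2f,                          # dark: take the output map
        np.where(l1f > 200,                              # bright: the "first way"
                 w_bright * l1f + (1 - w_bright) * l2f,
                 w_mid * l1f + (1 - w_mid) * l2f))       # middle: the "second way"
    a = 0.15 * a1.astype(np.float32) + 0.85 * a2.astype(np.float32)
    b = 0.15 * b1.astype(np.float32) + 0.85 * b2.astype(np.float32)
    lab = cv2.merge([l, a, b]).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)  # back to BGR for display
```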
Fig. 7 A and Fig. 7 B show reinforcing effect comparison diagram according to an embodiment of the invention, and wherein Fig. 7 A is to be processed Image, Fig. 7 B are enhancing image.Comparison diagram 7A and Fig. 7 B, the enhancing image that image enhancement schemes according to the present invention generate, Quality is substantially better than original image to be processed, and can repair the problem of overexposure and color are overflowed well.
In addition, the method 200 according to the invention combines a deep learning algorithm with traditional image processing algorithms, drawing on their respective advantages: it exploits the powerful learning ability of deep learning, which can learn the various enhancement effects present in the training image set, while retaining the advantage of traditional algorithms in directly fine-tuning the effect, making the processing direct and fast. The enhanced image obtained by the method 200 has a smooth and attractive enhancement effect and overcomes problems such as unnatural transitions, so it has strong practical value.
The various techniques described herein may be implemented in connection with hardware or software, or a combination of them. Thus, the method and apparatus of the present invention, or certain aspects or parts of them, may take the form of program code (i.e., instructions) embedded in a tangible medium, such as a removable hard disk, a USB flash drive, a floppy disk, a CD-ROM or any other machine-readable storage medium, wherein when the program is loaded into a machine such as a computer and executed by the machine, the machine becomes an apparatus for practicing the invention.
In the case where program code executes on programmable computers, calculates equipment and generally comprise processor, processor Readable storage medium (including volatile and non-volatile memory and or memory element), at least one input unit, and extremely A few output device.Wherein, memory is configured for storage program code;Processor is configured for according to the memory Instruction in the said program code of middle storage executes method of the invention.
By way of example and not limitation, readable media include readable storage media and communication media. Readable storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in modulated data signals such as carrier waves or other transmission mechanisms, and include any information delivery media. Any combination of the above is also included within the scope of readable media.
The algorithms and displays provided in this description are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the examples of the invention. From the description above, the structure required to construct such systems is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein can be implemented with various programming languages, and the description of a specific language above is made to disclose the preferred embodiment of the invention.
Numerous specific details are set forth in this description. It should be understood, however, that embodiments of the invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid in understanding one or more of the various inventive aspects, in the description of exemplary embodiments of the invention above, the features of the invention are sometimes grouped together into a single embodiment, figure or description of them. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than those expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single embodiment disclosed above. Thus, the claims following the specific embodiments are hereby expressly incorporated into the specific embodiments, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiment, or alternatively located in one or more devices different from the device in the example. The modules in the foregoing examples may be combined into one module or further divided into multiple submodules.
Those skilled in the art will understand that the modules in the device of an embodiment may be changed adaptively and arranged in one or more devices different from the embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and furthermore they may be divided into multiple submodules, subunits or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
The present invention also discloses:
A9. The method of A8, wherein a further convolution-activation layer precedes the first intermediate processing block of the image enhancement model.
A10. The method of A8 or A9, wherein the activation function of the convolution-activation layers in each intermediate processing block is the ReLU function.
A11. The method of any one of A8-A10, wherein the number of intermediate processing blocks is 4.
A12. The method of any one of A8-A11, wherein the result processing block comprises three convolution-activation layers, the activation function of the first two convolution-activation layers being the ReLU function and the activation function of the third convolution-activation layer being the Tanh function.
A13. The method of any one of A1-A12, further comprising, before the step of inputting the image to be processed into the preset image enhancement model, the step of generating the preset image enhancement model by training: obtaining a plurality of training image pairs, each training image pair comprising an input image and a target image, wherein the input image is an image acquired by an SLR camera and the target image is the image obtained by adjusting the input image; inputting the input image into the pre-training image enhancement model, outputting an image after multiple convolution operations, computing the loss value of the output image relative to the target image according to a preset loss function, and updating the parameters of the image enhancement model; when the loss value satisfies a predetermined condition, training ends, and the trained image enhancement model is taken as the preset image enhancement model.
A14. The method of A13, wherein the step of inputting the input image into the pre-training image enhancement model further comprises: for each training image pair, cutting at least one subimage of a predetermined size out of the input image as an input subimage; cutting each target subimage out of the target image according to the coordinate position of each input subimage in the input image; and inputting the input subimage into the pre-training image enhancement model, outputting a subimage after multiple convolution operations, computing the loss value of the output subimage relative to the target subimage according to the preset loss function and updating the parameters of the image enhancement model; when the loss value satisfies the predetermined condition, training ends, and the trained image enhancement model is taken as the preset image enhancement model.
A15. The method of A13 or A14, wherein the preset loss function is expressed by the following formula:
loss = λ1*color_loss + λ2*vgg_loss
where loss denotes the loss value, color_loss is the first loss, vgg_loss is the second loss, and λ1 and λ2 are the corresponding weighting coefficients.
A16. The method of A15, wherein the step of computing the first loss comprises: applying mean blur respectively to the output image and the corresponding target image to obtain a blurred output image and a blurred target image; and computing the pixel distance values of corresponding pixels in the blurred output image and the blurred target image as the first loss.
A17. The method of A15, wherein the step of computing the second loss comprises: inputting the output image and the target image respectively into a preset convolutional network to generate their respective feature maps; and computing the pixel distance values of corresponding pixels in the feature map of the output image and the feature map of the target image as the second loss.
In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
In addition, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other devices performing the function. Thus, a processor with the necessary instructions for implementing the method or method element forms a device for implementing the method or method element. Moreover, the elements of device embodiments described here are examples of devices for implementing the functions performed by those elements for the purpose of implementing the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
Although the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be envisioned within the scope of the invention thus described. It should also be noted that the language used in this specification has been chosen principally for purposes of readability and instruction, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the present disclosure is illustrative rather than restrictive, and the scope of the invention is defined by the appended claims.

Claims (10)

1. An image enhancement method, executed in a computing device, the method comprising:
inputting an image to be processed into a preset image enhancement model, and outputting an output image after multiple convolution operations;
converting the image to be processed and the output image into a predetermined color space, respectively; and
merging the image to be processed and the output image in the predetermined color space to generate an enhanced image.
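Read as an algorithm, claim 1 is a three-step pipeline: model forward pass, color-space conversion, fusion. A minimal sketch, assuming OpenCV, Lab as the predetermined color space (claim 7), and hypothetical model and fuse helpers:

import cv2

def enhance(image_bgr, model, fuse):
    # Claim 1: run the model, convert both images to the predetermined
    # color space, and merge them there; model and fuse are stand-ins.
    output_bgr = model(image_bgr)                          # multiple convolutions
    lab_in = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)    # image to be processed
    lab_out = cv2.cvtColor(output_bgr, cv2.COLOR_BGR2LAB)  # output image
    enhanced_lab = fuse(lab_in, lab_out)                   # merge in Lab space
    return cv2.cvtColor(enhanced_lab, cv2.COLOR_LAB2BGR)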
2. The method of claim 1, wherein the step of converting the image to be processed and the output image into the predetermined color space respectively further comprises:
converting the image to be processed into the predetermined color space to obtain a first to-be-processed map, a second to-be-processed map and a third to-be-processed map on three channels;
converting the output image into the predetermined color space to obtain a first output map, a second output map and a third output map on three channels.
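The per-channel split of claim 2, sketched with OpenCV and again assuming Lab (claim 7); for 8-bit images OpenCV scales all three channels to 0-255:

import cv2

def split_into_maps(image_bgr, output_bgr):
    # First/second/third to-be-processed maps and output maps (claim 2).
    p1, p2, p3 = cv2.split(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB))
    o1, o2, o3 = cv2.split(cv2.cvtColor(output_bgr, cv2.COLOR_BGR2LAB))
    return (p1, p2, p3), (o1, o2, o3)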
3. The method of claim 2, wherein the step of merging the image to be processed and the output image in the predetermined color space to generate the enhanced image comprises:
judging the magnitudes of the pixel values of the pixels in the first to-be-processed map, and combining the first output map according to the judgment results to generate a first enhancement map;
combining the second to-be-processed map and the second output map to generate a second enhancement map;
combining the third to-be-processed map and the third output map to generate a third enhancement map; and
merging the first enhancement map, the second enhancement map and the third enhancement map to generate the enhanced image.
4. The method of claim 3, wherein the step of combining the first output map according to the judgment results to generate the first enhancement map comprises:
if the pixel value of a pixel in the first to-be-processed map is less than a first threshold, taking the pixel value of the corresponding pixel in the first output map as the pixel value of the corresponding pixel in the first enhancement map;
if the pixel value of a pixel in the first to-be-processed map is greater than a second threshold, generating the pixel value of the corresponding pixel in the first enhancement map in a first manner by combining the pixel values of the corresponding pixels in the first to-be-processed map and the first output map; and
if the pixel value of a pixel in the first to-be-processed map is neither less than the first threshold nor greater than the second threshold, generating the pixel value of the corresponding pixel in the first enhancement map in a second manner by combining the pixel values of the corresponding pixels in the first to-be-processed map and the first output map.
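A vectorized sketch of the claim-4 case analysis. The claim discloses neither the thresholds nor the "first manner" and "second manner" of combining pixel values, so the values and the two weighted blends below are assumptions:

import numpy as np

def first_enhancement_map(p1, o1, t1=40, t2=200, w1=0.7, w2=0.5):
    # p1/o1: first to-be-processed map and first output map, uint8.
    p = p1.astype(np.float32)
    o = o1.astype(np.float32)
    first_manner = w1 * o + (1 - w1) * p    # assumed form of the "first manner"
    second_manner = w2 * o + (1 - w2) * p   # assumed form of the "second manner"
    e1 = np.where(p < t1, o,                # below first threshold: output pixel
         np.where(p > t2, first_manner,     # above second threshold: first manner
                  second_manner))           # otherwise: second manner
    return np.clip(e1, 0, 255).astype(np.uint8)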
5. The method of claim 3 or 4, wherein the step of combining the second to-be-processed map and the second output map to generate the second enhancement map comprises:
weighting the second to-be-processed map and the second output map to generate the second enhancement map.
6. The method of any one of claims 3-5, wherein the step of combining the third to-be-processed map and the third output map to generate the third enhancement map comprises:
weighting the third to-be-processed map and the third output map to generate the third enhancement map.
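Claims 5 and 6 are plain weighted sums, after which claim 3 merges the three enhancement maps. A sketch that reuses the first_enhancement_map routine above; the single weighting coefficient alpha is an assumption:

import cv2

def fuse_maps(p_maps, o_maps, alpha=0.5):
    (p1, p2, p3), (o1, o2, o3) = p_maps, o_maps
    e1 = first_enhancement_map(p1, o1)                 # claim 4 case analysis
    e2 = cv2.addWeighted(p2, 1 - alpha, o2, alpha, 0)  # claim 5: weighted sum
    e3 = cv2.addWeighted(p3, 1 - alpha, o3, alpha, 0)  # claim 6: weighted sum
    return cv2.merge([e1, e2, e3])                     # claim 3: merge the maps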
7. The method of any one of claims 1-6, wherein the predetermined color space is the Lab color space.
8. The method of any one of claims 1-7, wherein the image enhancement model comprises a plurality of intermediate processing blocks connected in sequence and one result processing block, wherein:
each intermediate processing block comprises at least two convolution-activation layers connected in sequence and one skip connection layer, the skip connection layer being adapted to add the input of the first convolution-activation layer of the intermediate processing block to which it belongs to the output of the last convolution-activation layer; and
the result processing block comprises a plurality of convolution-activation layers.
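A PyTorch sketch of the claim-8 topology. The claim fixes only the block structure; channel width, kernel sizes, ReLU activations, the block count, and the input convolution (added so channel counts match the skip connections) are all assumptions:

import torch.nn as nn

class IntermediateBlock(nn.Module):
    # At least two convolution-activation layers plus a skip connection
    # that adds the block input to the last layer's output (claim 8).
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)  # skip connection layer

class EnhancementModel(nn.Module):
    # Several intermediate processing blocks in sequence, then a result
    # processing block made of convolution-activation layers.
    def __init__(self, n_blocks=4, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[IntermediateBlock(ch) for _ in range(n_blocks)])
        self.result = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.result(self.blocks(self.head(x)))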
9. A computing device, comprising:
at least one processor; and
a memory storing program instructions, wherein the program instructions are configured to be executed by the at least one processor and comprise instructions for performing the method of any one of claims 1-8.
10. A readable storage medium storing program instructions which, when read and executed by a computing device, cause the computing device to perform the method of any one of claims 1-8.
CN201811252617.7A 2018-10-25 2018-10-25 Image enhancement method and computing device Active CN109345487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811252617.7A CN109345487B (en) 2018-10-25 2018-10-25 Image enhancement method and computing device

Publications (2)

Publication Number Publication Date
CN109345487A (en) 2019-02-15
CN109345487B CN109345487B (en) 2020-12-25

Family

ID=65312412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811252617.7A Active CN109345487B (en) 2018-10-25 2018-10-25 Image enhancement method and computing device

Country Status (1)

Country Link
CN (1) CN109345487B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090136102A1 (en) * 2007-11-24 2009-05-28 Tom Kimpe Image processing of medical images
CN106683065A * 2012-09-20 2017-05-17 上海联影医疗科技有限公司 Image fusion method based on Lab space
CN107424124A * 2017-03-31 2017-12-01 北京臻迪科技股份有限公司 Image enhancement method and device
CN107563984A * 2017-10-30 2018-01-09 清华大学深圳研究生院 Image enhancement method and computer-readable storage medium
CN108648163A * 2018-05-17 2018-10-12 厦门美图之家科技有限公司 Face image enhancement method and computing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANDREY IGNATOV et al.: "DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks", 2017 IEEE International Conference on Computer Vision (ICCV) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161175A (en) * 2019-12-24 2020-05-15 苏州江奥光电科技有限公司 Method and system for removing image reflection component
CN112200747A (en) * 2020-10-16 2021-01-08 展讯通信(上海)有限公司 Image processing method and device and computer readable storage medium
CN112200747B (en) * 2020-10-16 2022-06-21 展讯通信(上海)有限公司 Image processing method and device and computer readable storage medium
CN114693548A (en) * 2022-03-08 2022-07-01 电子科技大学 Dark channel defogging method based on bright area detection
CN114693548B (en) * 2022-03-08 2023-04-18 电子科技大学 Dark channel defogging method based on bright area detection

Also Published As

Publication number Publication date
CN109345487B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN109584179A Convolutional neural network model generation method and image quality optimization method
CN109255769A Training method and training model of image enhancement network, and image enhancement method
He et al. Conditional sequential modulation for efficient global image retouching
CN109544482A Convolutional neural network model generation method and image enhancement method
CN110197463B High dynamic range image tone mapping method and system based on deep learning
CN106412547B Image white balance method and device based on convolutional neural network, and computing device
CN109345487A Image enhancement method and computing device
CN109191558B Image polishing method and device
CN109978788A Convolutional neural network generation method, image demosaicing method and related apparatus
CN109978063A Method for generating an alignment model of a target object
CN106778928A Image processing method and device
US20220172322A1 High resolution real-time artistic style transfer pipeline
CN109840912A Method and computing device for correcting abnormal pixels in an image
CN109003231A Image enhancement method and device, and display device
CN106255990A Image refocusing for camera arrays
CN111835983A Multi-exposure-image high dynamic range imaging method and system based on a generative adversarial network
Liu et al. Very lightweight photo retouching network with conditional sequential modulation
CN108717691A Image fusion method and device, electronic device and medium
CN108921810A Color transfer method and computing device
US10957092B2 Method and apparatus for distinguishing between objects
CN107169942B Underwater image enhancement method based on fish retina mechanism
CN107766803A Scene-segmentation-based video character dressing method, device and computing device
Wang et al. Neural color operators for sequential image retouching
CN107481203A Image directional filtering method and computing device
CN109727211A Image denoising method, device, computing device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant