CN110458754A - Image generating method and terminal device - Google Patents
- Publication number: CN110458754A
- Application number: CN201810427888.5A
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption by Google Patents, not a legal conclusion)
Classifications
- G06N3/045: Computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
- G06T3/4046: Scaling of whole images or parts thereof using neural networks
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Abstract
The present invention relates to the field of image processing and provides an image generating method and a terminal device. The method comprises: constructing a neural network model; establishing a loss function according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory; obtaining sample images and training the neural network model according to the sample images and the loss function; and inputting an image to be processed into the trained neural network model to generate a super-resolution image of the image to be processed. By incorporating the idea of structural similarity theory into the construction of the loss function, the present invention improves super-resolution technology through an improved loss function, which helps to improve image quality, yields super-resolution images that better conform to human visual perception, and stabilizes the quality of the generated super-resolution images.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image generating method and a terminal device.
Background art
Super-resolution image processing currently has very broad applications. In image transmission, for example, an image can be reduced in resolution at the input end and then restored at the output end by super-resolution technology; the amount of data transmitted is thereby significantly reduced, which helps improve transmission speed and relieve network transmission pressure. In image storage, images can be stored after resolution reduction, so the file size shrinks and storage pressure is relieved; when the images are to be viewed or used, super-resolution technology restores them to high resolution, supplementing picture detail to obtain high-resolution pictures.
The core of super-resolution methods is fitting a low-resolution image to a high-resolution image. With the continuing development of deep learning, some super-resolution methods apply neural networks to solve this fitting problem and have greatly improved super-resolution results. By minimizing the difference between the generated super-resolution image and the original high-resolution image during training, the parameters of the neural network are learned, and the trained network can then generate high-quality super-resolution images. How the difference between the two images is measured is therefore crucial, as it determines the quality of the generated super-resolution image.
Existing super-resolution methods mostly compare at the pixel level when measuring the difference between the super-resolution image and the original high-resolution image. For example, let the original high-resolution image be A, the low-resolution image after downscaling be B, and the super-resolution image generated by super-resolution technology be C. During training of the neural network, the loss function is defined as the mean squared error (MSE, Mean Square Error), calculated as:

MSE(A, C) = (1 / (a*b)) * Σ_(x,y) (A(x, y) - C(x, y))²

where a and b are the height and width of the image and (x, y) are the position coordinates of a pixel. Defining the loss function with MSE helps obtain a good PSNR (Peak Signal-to-Noise Ratio) value when the algorithm is evaluated. However, the following problems also exist:
Pixel-level comparison of images does not fully conform to the laws of human vision. Images with the same MSE may appear entirely different to the human visual system, and an image with a smaller MSE does not necessarily give people a better visual experience. As shown in Fig. 1, Fig. 1(a) is the original image, and Figs. 1(b) to 1(f) are distorted images produced by different distortion types; their MSE values are 144, 144, 144, 144, and 142 respectively. Although the MSE values of these distorted images are essentially identical, their visual impressions are clearly different.
This is because the calculation of MSE takes into account neither the spatial continuity of the picture nor the differing importance of each position; all pixels are treated uniformly. If a neural network is trained with MSE as the loss function, the quality of the super-resolution images the method generates cannot be guaranteed.
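As a concrete illustration (not part of the original patent text), the pixel-level MSE comparison described above can be sketched in a few lines of NumPy; the function name and the use of 2-D grayscale arrays are assumptions made for the example:

```python
import numpy as np

def mse_loss(original, generated):
    """Pixel-level mean squared error between two equally sized grayscale images.

    Mirrors the formula above: MSE = (1 / (a*b)) * sum over (x, y) of
    (A(x, y) - C(x, y))^2, where a and b are the image height and width.
    """
    a, b = original.shape
    diff = original.astype(np.float64) - generated.astype(np.float64)
    return float(np.sum(diff ** 2) / (a * b))
```

Identical images give a loss of zero; a uniform per-pixel offset of 1 gives an MSE of exactly 1, regardless of where in the image the error occurs, which is precisely the spatial indifference criticized above.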
Summary of the invention
In view of this, embodiments of the present invention provide an image generating method and a terminal device, to solve the problem that current super-resolution methods cannot guarantee the quality of the generated super-resolution images.
A first aspect of the embodiments of the present invention provides an image generating method, comprising:

constructing a neural network model;

establishing a loss function according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory;

obtaining sample images, and training the neural network model according to the sample images and the loss function; and

inputting an image to be processed into the trained neural network model to generate a super-resolution image of the image to be processed.
A second aspect of the embodiments of the present invention provides a super-resolution image generating apparatus, comprising:

a construction module, for constructing a neural network model;

a processing module, for establishing a loss function according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory;

a training module, for obtaining sample images and training the neural network model according to the sample images and the loss function; and

a generation module, for inputting an image to be processed into the trained neural network model to generate a super-resolution image of the image to be processed.
A third aspect of the embodiments of the present invention provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the super-resolution image generation method of the first aspect when executing the computer program.

A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the super-resolution image generation method of the first aspect.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: a loss function is established according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory; the neural network model is trained according to the sample images and this loss function; and an image to be processed is input into the trained neural network model to generate its super-resolution image, thereby realizing super-resolution processing of the image to be processed. By incorporating the idea of structural similarity theory into the construction of the loss function, the embodiments improve super-resolution technology through an improved loss function, which helps to improve image quality, yields super-resolution images that better conform to human visual perception, and stabilizes the quality of the generated super-resolution images.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of several differently distorted versions of the same original image according to an embodiment of the present invention;

Fig. 2 is an implementation flowchart of an image generating method according to an embodiment of the present invention;

Fig. 3 is an implementation flowchart of training the neural network model in the image generating method according to an embodiment of the present invention;

Fig. 4 is an implementation flowchart of comparing the second value with a preset threshold in the image generating method according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of an image generating apparatus according to an embodiment of the present invention;

Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted, lest unnecessary detail obscure the description of the present invention.

In order to illustrate the technical solutions of the present invention, specific embodiments are described below.
Fig. 2 is an implementation flowchart of an image generating method according to an embodiment of the present invention, detailed as follows.

In S201, a neural network model is constructed.
In this embodiment, a neural network model is used to perform super-resolution processing on images, so the neural network model is constructed first. The type of neural network model can be selected according to the pixel characteristics of the images to be processed; for example, it may be a convolutional neural network model (such as one based on ResNet, FSRCNN, GoogleNet, or other similar models), an RNN, an LSTM network, and so on, without limitation here.
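Since the patent leaves the architecture open, a deliberately minimal stand-in can make the interface concrete: upsample the input, then refine it with a single learnable 3x3 kernel. This sketch is illustrative only (the class and function names are invented for the example) and is far simpler than the ResNet- or FSRCNN-style models mentioned above:

```python
import numpy as np

def upsample_nearest(img, scale=2):
    """Nearest-neighbour upsampling of a 2-D grayscale image."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def conv2d_same(img, kernel):
    """Same-size 2-D correlation with edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

class TinySRModel:
    """Toy super-resolution model: upsample, then one learnable 3x3 kernel."""
    def __init__(self, scale=2):
        self.scale = scale
        self.kernel = np.zeros((3, 3))
        self.kernel[1, 1] = 1.0  # identity initialisation: output = upsampled input

    def forward(self, low_res):
        return conv2d_same(upsample_nearest(low_res, self.scale), self.kernel)
```

With the identity initialisation the forward pass simply reproduces the upsampled image; training would adjust the kernel entries so the refinement step adds detail.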
In S202, a loss function is established according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory.
In this embodiment, the structural similarity index (Structural Similarity Index, SSIM) is a method for measuring the similarity of two images. It is a full-reference image quality evaluation index that measures image similarity in three respects: luminance, contrast, and structure. A loss function is used to estimate the degree of inconsistency between the predicted value of a model and the true value; the smaller the loss function, the higher the accuracy of the model.
In S203, sample images are obtained, and the neural network model is trained according to the sample images and the loss function.
In this embodiment, the sample images are images selected for training the neural network model. The neural network model can be trained according to the sample images and the loss function, so that the network parameters of the model are corrected and the quality of the super-resolution images generated by the model is improved.
As an embodiment of the present invention, the sample images include an original image and a low-resolution image corresponding to the original image; the low-resolution image is obtained by reducing the resolution of the original image. The loss function is:

Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)    (1)

where A denotes the first image and C denotes the second image; l(A, C) is the luminance comparison function between the first and second images; c(A, C) is the contrast comparison function between the first and second images; and s(A, C) is the structure comparison function between the first and second images. Here, the first image is the original image, and the second image is the image output by the neural network model after the low-resolution image corresponding to the original image is input into the model during training.
In this embodiment, the sample images may include at least one group of images, each group comprising an original image and its corresponding low-resolution image. The original image is a high-resolution image used for comparison with the output of the neural network model, so that the effect of the model can be assessed. The low-resolution image, obtained by reducing the resolution of the original image, serves as the input to the neural network model. A lower-resolution image can be generated by a downscaling processing method, and the generated image is used as the low-resolution image corresponding to the original image. For example, if the resolution of the original image is 1600*900 and an image with a resolution of 800*450 is generated by reducing the resolution of the original image, then the 800*450 image can be used as the low-resolution image corresponding to that original image.
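A hedged sketch of how such (original, low-resolution) training pairs might be produced, here by simple block averaging; the patent does not specify the downscaling method, so this choice and the function name are assumptions for the example:

```python
import numpy as np

def make_training_pair(original, factor=2):
    """Return (cropped original, block-averaged low-resolution image).

    The original is cropped so both dimensions divide evenly by `factor`,
    then each factor x factor block is averaged into one low-res pixel;
    e.g. a 1600x900 image becomes an 800x450 image with factor=2.
    """
    h, w = original.shape
    h, w = h - h % factor, w - w % factor
    cropped = original[:h, :w].astype(np.float64)
    low = cropped.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return cropped, low
```

During training, `low` would be fed to the network and `cropped` would serve as the original image against which the output is compared.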
In this embodiment, the luminance comparison function characterizes the degree of luminance similarity between two images, the contrast comparison function characterizes the degree of contrast similarity between two images, and the structure comparison function characterizes the degree of structural similarity between two images. The loss function is obtained by negating the luminance, contrast, and structure comparison functions and summing them. In the loss function thus established, the weights of luminance, contrast, and structure are all 1, which allows the loss function to reflect the similarity of the two images in luminance, contrast, and structure in a balanced way, so that a neural network trained according to this loss function can generate high-quality super-resolution images. It is understood that the weights of luminance, contrast, and structure may also be adjusted.
As an embodiment of the present invention, the luminance comparison function is

l(A, C) = (2*μ_A*μ_C + K1) / (μ_A² + μ_C² + K1)

the contrast comparison function is

c(A, C) = (2*σ_A*σ_C + K2) / (σ_A² + σ_C² + K2)

and the structure comparison function is

s(A, C) = (σ_AC + K3) / (σ_A*σ_C + K3)

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, σ_AC is the pixel covariance of the first image and the second image, and K1, K2, and K3 are constants.
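For illustration, the three comparison functions and the loss of formula (1) can be computed globally over two images as follows. This is a sketch under assumptions: the constants K1, K2, K3 are placeholder values (the patent only requires them to be constants), and the statistics are taken over whole images rather than the local windows used in the standard SSIM index:

```python
import numpy as np

# Placeholder constants; the patent leaves their values unspecified.
K1, K2, K3 = 0.01, 0.03, 0.015

def ssim_components(a, c):
    """Global luminance, contrast, and structure comparison functions."""
    a = a.astype(np.float64)
    c = c.astype(np.float64)
    mu_a, mu_c = a.mean(), c.mean()
    sd_a, sd_c = a.std(), c.std()
    cov_ac = ((a - mu_a) * (c - mu_c)).mean()  # pixel covariance sigma_AC
    lum = (2 * mu_a * mu_c + K1) / (mu_a ** 2 + mu_c ** 2 + K1)
    con = (2 * sd_a * sd_c + K2) / (sd_a ** 2 + sd_c ** 2 + K2)
    struct = (cov_ac + K3) / (sd_a * sd_c + K3)
    return lum, con, struct

def loss_new(a, c):
    """Formula (1): Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)."""
    lum, con, struct = ssim_components(a, c)
    return -lum - con - struct
```

For identical images every component equals 1, so the loss reaches its minimum of -3 under equal weights, and it grows as the two images diverge in luminance, contrast, or structure.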
As an embodiment of the present invention, as shown in Fig. 3, "training the neural network model according to the sample images and the loss function" in S203 may include:

In S301, the low-resolution image is input into the neural network model to obtain a third image output by the neural network model.
In this embodiment, the third image is the image output after the neural network model performs super-resolution processing on the low-resolution image. For example, a high-quality high-resolution image (image A) can be downscaled to obtain a low-resolution image (image B), which is used as the input of the neural network model. After the neural network model produces a high-quality, high-resolution image (image C), the effect of the neural network model can be assessed by comparing the difference between image A and image C. Here, image A corresponds to the original image, image B corresponds to the low-resolution image of the original image, and image C corresponds to the third image.
In S302, a first value is calculated according to the image information of the original image, the image information of the third image, and the loss function; the first value is the loss-function value corresponding to the current neural network model.

In this embodiment, the image information may include the number of pixels of the image and the value of each pixel. From the image information of the original image and of the third image, the pixel mean and pixel standard deviation of the original image, the pixel mean and pixel standard deviation of the third image, and the pixel covariance between the two images can be calculated. Substituting these values into the loss function yields the first value. The first value characterizes the degree of similarity between the original image and the third image, and thus reflects the super-resolution effect of the current neural network model.
In S303, the parameters of the neural network model are adjusted according to the first value.

In this embodiment, the parameters of the neural network model can be adjusted according to the first value. For example, the loss function can be applied to each network parameter by gradient descent, so that the value of each parameter is corrected. In the back-propagation process, the correction of each parameter for the current error is computed by the gradient descent algorithm. Following this correction, the parameters of the whole network model are adjusted, and the error of the whole network model is continually reduced.
For example, if the original image and the third image are very close, the first value is small, and the corrections to the network parameters during error back-propagation will be correspondingly small; if the original image and the third image differ greatly, the first value is large, and the network parameters must be corrected substantially in the direction of decreasing loss during error back-propagation. Therefore, by training the neural network model, the original image and the third image can be brought closer together.
In this embodiment, the low-resolution image is input into the neural network model to obtain the third image, the value of the loss function is calculated from the original image and the third image, and the parameters of the neural network model are then adjusted. This makes the network parameters better suited to performing super-resolution processing on images and realizes the training of the neural network model, thereby improving the quality of the super-resolution images it generates.
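The adjust-and-correct loop of S301 to S303 can be illustrated with a toy one-parameter model trained by gradient descent via numerical differentiation. This is not the patent's full back-propagation through a neural network: the single gain parameter, the MSE-style fitting objective, and all names are simplifications invented for the example:

```python
import numpy as np

def numeric_grad(loss_fn, param, eps=1e-6):
    """Central-difference estimate of d(loss)/d(param)."""
    return (loss_fn(param + eps) - loss_fn(param - eps)) / (2 * eps)

def train_gain(original, model_output, lr=0.1, steps=200):
    """Fit a single gain g so that g * model_output approaches the original.

    Each iteration mirrors S301-S303: produce an output, evaluate the loss
    against the original image, and correct the parameter by gradient descent.
    """
    def loss(g):
        return float(np.mean((original - g * model_output) ** 2))

    g = 0.0
    for _ in range(steps):
        g -= lr * numeric_grad(loss, g)  # larger loss -> larger correction
    return g
```

As the text above describes, the size of each correction scales with the remaining error, so the fitted output steadily approaches the original.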
As an embodiment of the present invention, as shown in Fig. 4, after S203 the method may further include:

In S401, the low-resolution image is input into the adjusted neural network model to obtain a fourth image output by the neural network model.

In S402, a second value is calculated according to the image information of the original image, the image information of the fourth image, and the loss function; the second value is the loss-function value corresponding to the adjusted neural network model.
In S403, the second value is compared with a preset threshold.

In this embodiment, the preset threshold is used to assess whether the effect of the adjusted neural network model meets the requirement. The value of the preset threshold can be set according to the requirement on the super-resolution effect of the neural network model.

After the parameters of the neural network model are adjusted according to the first value, the low-resolution image is input into the adjusted neural network model again, and the loss-function value corresponding to the adjusted model is calculated from the fourth image it outputs. Whether the adjusted neural network model meets the requirement is assessed by comparing the second value with the preset threshold.
In S404, if the second value is greater than the preset threshold, the parameters of the neural network are adjusted according to the second value, and the process jumps back to S401.

In this embodiment, the smaller the loss function, the better the super-resolution effect of the neural network model. If the second value is greater than the preset threshold, the super-resolution effect of the adjusted neural network model has not yet met the requirement, and the parameters need to be adjusted again; therefore, the parameters of the neural network model are adjusted again according to the second value. The process then jumps back to S401, and the super-resolution effect of the neural network model is assessed again against the preset threshold.
In S405, if the second value is less than the preset threshold, the training of the neural network model is terminated.

In this embodiment, if the second value is less than the preset threshold, the super-resolution effect of the adjusted neural network model has met the requirement, so the training process can be terminated, and the current neural network model is used as the trained neural network model for performing super-resolution processing on images to be processed.
In this embodiment, the neural network model with adjusted parameters is assessed against the preset threshold to judge whether the super-resolution effect of the current neural network model meets the requirement. The preset threshold allows the effect of training the neural network model to be assessed accurately, which guarantees the super-resolution effect of the trained neural network model and hence the quality of the generated super-resolution images.
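The control flow of S401 to S405 (adjust, re-evaluate the loss value, and stop once it falls below the preset threshold) can be sketched generically; the function names and the max-rounds safeguard are additions for the example:

```python
def train_until_threshold(adjust_fn, eval_loss_fn, threshold, max_rounds=100):
    """Repeat adjustment until the evaluated loss value is below the threshold.

    adjust_fn    : performs one parameter adjustment (S404).
    eval_loss_fn : returns the current loss value, i.e. the "second value"
                   computed from the fourth image (S402).
    """
    for round_idx in range(max_rounds):
        value = eval_loss_fn()
        if value < threshold:        # S405: requirement met, stop training
            return round_idx, value
        adjust_fn()                  # S404: adjust and evaluate again
    return max_rounds, eval_loss_fn()
```

In practice `adjust_fn` would wrap one gradient-descent correction of the network parameters and `eval_loss_fn` would recompute the loss on the adjusted model.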
In S204, the image to be processed is input into the trained neural network model to generate a super-resolution image of the image to be processed.

In this embodiment, the image to be processed is an image that needs super-resolution processing. Inputting the image to be processed into the trained neural network model generates its super-resolution image and improves the quality of the image.
According to the embodiments of the present invention, a loss function is established according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory; the neural network model is trained according to the sample images and the loss function; and the image to be processed is input into the trained neural network model to generate its super-resolution image, realizing super-resolution processing of the image to be processed. By incorporating the idea of structural similarity theory into the construction of the loss function, the embodiments improve super-resolution technology through an improved loss function, which helps to improve image quality, yields super-resolution images that better conform to human visual perception, and stabilizes the quality of the generated super-resolution images.
The embodiments of the present invention, for the first time, jointly consider high-level quality-related features such as luminance, contrast, and structural similarity in constructing the loss function, and incorporate these factors into the construction of a machine-learning neural network model. A loss function constructed with the aid of these quality-related features can produce pictures that better conform to the laws of human vision, helps improve image quality, and extends the application range of super-resolution technology. The advantages of using SSIM to construct the loss function are twofold: because luminance, contrast, and structural similarity are high-level quality-related features, the images generated by the trained neural network model more easily remain spatially continuous; and a loss function defined with the aid of SSIM produces results that better match the laws of human vision.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Corresponding to the super-resolution image generation method described in the foregoing embodiments, Fig. 5 shows a schematic diagram of a super-resolution image generating apparatus provided by an embodiment of the present invention. For ease of description, only the parts related to this embodiment are shown.

Referring to Fig. 5, the apparatus includes a construction module 51, a processing module 52, a training module 53, and a generation module 54.
The construction module 51 is configured to construct a neural network model.

The processing module 52 is configured to establish a loss function according to the similarity relations of luminance, contrast, and structure between two images in structural similarity theory.

The training module 53 is configured to obtain sample images and train the neural network model according to the sample images and the loss function.

The generation module 54 is configured to input an image to be processed into the trained neural network model to generate a super-resolution image of the image to be processed.
Optionally, the sample images include an original image and a low-resolution image corresponding to the original image; the low-resolution image is obtained by reducing the resolution of the original image. The loss function is:

Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)

where A denotes the first image and C denotes the second image; l(A, C) is the luminance comparison function between the first and second images; c(A, C) is the contrast comparison function between the first and second images; and s(A, C) is the structure comparison function between the first and second images. Here, the first image is the original image, and the second image is the image output by the neural network model after the low-resolution image corresponding to the original image is input into the model during training.
Optionally, the luminance comparison function is

l(A, C) = (2μ_A μ_C + K1) / (μ_A² + μ_C² + K1)

the contrast comparison function is

c(A, C) = (2σ_A σ_C + K2) / (σ_A² + σ_C² + K2)

and the structure comparison function is

s(A, C) = (σ_AC + K3) / (σ_A σ_C + K3)

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, σ_AC is the pixel covariance between the first image and the second image, and K1, K2 and K3 are constants.
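A minimal NumPy sketch of this loss, evaluating the three comparison functions globally over whole images (the K values below are placeholder choices, and practical SSIM implementations usually compute these statistics over local windows rather than globally):

```python
import numpy as np

def luminance(a, c, k1=0.01):
    # l(A, C): compares the pixel means of the two images
    mu_a, mu_c = a.mean(), c.mean()
    return (2 * mu_a * mu_c + k1) / (mu_a ** 2 + mu_c ** 2 + k1)

def contrast(a, c, k2=0.03):
    # c(A, C): compares the pixel standard deviations
    s_a, s_c = a.std(), c.std()
    return (2 * s_a * s_c + k2) / (s_a ** 2 + s_c ** 2 + k2)

def structure(a, c, k3=0.015):
    # s(A, C): covariance normalised by the product of standard deviations
    cov = ((a - a.mean()) * (c - c.mean())).mean()
    return (cov + k3) / (a.std() * c.std() + k3)

def loss_new(a, c):
    # Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)
    return -(luminance(a, c) + contrast(a, c) + structure(a, c))
```

When the two images are identical, each comparison function equals 1, so the loss reaches its minimum of -3; minimising this loss therefore drives the network output toward the original image.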
Optionally, the training module 53 is configured to:
input the low-resolution image into the neural network model to obtain a third image output by the neural network model;
calculate a first value according to the image information of the original image, the image information of the third image and the loss function, the first value being the loss function value of the current neural network model; and
adjust the parameters of the neural network model according to the first value.
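This three-step iteration can be sketched generically as follows. Everything here is illustrative: the model is an arbitrary parameterised function, a finite-difference estimate stands in for the backpropagation a real framework would use, and `loss_fn` would be the Loss_new defined earlier in the patent's method (any loss with the same interface works):

```python
import numpy as np

def train_step(model, params, original, low_res, loss_fn, step=0.1, eps=1e-4):
    """One iteration: compute the loss of the current model (the 'first value'),
    then adjust the parameters to decrease it."""
    first_value = loss_fn(original, model(low_res, params))
    grad = np.zeros_like(params)
    for i in range(params.size):   # finite-difference gradient estimate
        bumped = params.copy()
        bumped.flat[i] += eps
        grad.flat[i] = (loss_fn(original, model(low_res, bumped)) - first_value) / eps
    return params - step * grad, first_value
```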
Optionally, the training module 53 is further configured to:
input the low-resolution image into the adjusted neural network model to obtain a fourth image output by the neural network model;
calculate a second value according to the image information of the original image, the image information of the fourth image and the loss function, the second value being the loss function value of the adjusted neural network model;
compare the second value with a preset threshold;
if the second value is greater than the preset threshold, adjust the parameters of the neural network according to the second value, and return to the step of "inputting the low-resolution image into the adjusted neural network model to obtain a fourth image output by the neural network model"; and
if the second value is less than the preset threshold, end the training of the neural network model.
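The optional loop just described (adjust, recompute the loss as the "second value", stop once it falls below the preset threshold) can be sketched as below. As before, the model, loss and finite-difference gradient are illustrative placeholders, not the patent's actual network or optimiser:

```python
import numpy as np

def train_until_threshold(model, params, original, low_res, loss_fn,
                          threshold, step=0.1, eps=1e-4, max_iters=500):
    """Adjust parameters repeatedly; stop once the loss value recomputed after
    an adjustment (the 'second value') drops below the preset threshold."""
    value = loss_fn(original, model(low_res, params))
    for _ in range(max_iters):            # guard against non-convergence
        grad = np.zeros_like(params)
        for i in range(params.size):      # finite-difference stand-in for backprop
            bumped = params.copy()
            bumped.flat[i] += eps
            grad.flat[i] = (loss_fn(original, model(low_res, bumped)) - value) / eps
        params = params - step * grad     # adjust using the current loss value
        value = loss_fn(original, model(low_res, params))  # the 'second value'
        if value < threshold:             # below the preset threshold: stop
            break
    return params, value
```

The `max_iters` cap is an addition not in the patent; it simply prevents an infinite loop if the threshold is never reached.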
In the embodiment of the present invention, a loss function is established according to the similarity relationships of luminance, contrast and structure between two images in structural similarity theory; the neural network model is trained according to the sample image and the loss function; and an image to be processed is input into the trained neural network model to generate a super-resolution image of the image to be processed, thereby realizing super-resolution processing of the image to be processed. By incorporating the idea of structural similarity theory into the construction of the loss function, the embodiment improves super-resolution technology through an improved loss function, which can help improve image quality, produce super-resolution images that better match human visual perception, and improve the quality and stability of the generated super-resolution images.
Fig. 6 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in Fig. 6, the terminal device 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps of each of the above method embodiments, such as steps 201 to 204 shown in Fig. 2; alternatively, the processor 60 implements the functions of each module/unit in each of the above apparatus embodiments, such as the functions of modules 51 to 54 shown in Fig. 5.
Illustratively, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into a building module, a processing module, a training module and a generation module, each module having the following specific functions:
a building module for constructing a neural network model;
a processing module for establishing a loss function according to the similarity relationships of luminance, contrast and structure between two images in structural similarity theory;
a training module for obtaining a sample image and training the neural network model according to the sample image and the loss function; and
a generation module for inputting an image to be processed into the trained neural network model, generating a super-resolution image of the image to be processed.
The terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is only an example of the terminal device 6 and does not constitute a limitation on the terminal device 6; it may include more or fewer components than shown, combine certain components, or have different components. For example, the terminal device may also include input/output devices, network access devices, buses, a display, and the like.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) provided on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the division into the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the protection scope of the present invention.
Claims (10)
1. An image generation method, characterized by comprising:
constructing a neural network model;
establishing a loss function according to the similarity relationships of luminance, contrast and structure between two images in structural similarity theory;
obtaining a sample image, and training the neural network model according to the sample image and the loss function; and
inputting an image to be processed into the trained neural network model, generating a super-resolution image of the image to be processed.
2. The image generation method according to claim 1, characterized in that the sample image includes an original image and a low-resolution image corresponding to the original image; the low-resolution image is an image obtained by reducing the resolution of the original image; and the loss function is:

Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)

where A denotes the first image, C denotes the second image, l(A, C) is the luminance comparison function between the first image and the second image, c(A, C) is the contrast comparison function between the first image and the second image, and s(A, C) is the structure comparison function between the first image and the second image; the first image is the original image; and the second image is the image output by the neural network model after the low-resolution image corresponding to the original image is input into the neural network model during training.
3. The image generation method according to claim 2, characterized in that the luminance comparison function is l(A, C) = (2μ_A μ_C + K1) / (μ_A² + μ_C² + K1), the contrast comparison function is c(A, C) = (2σ_A σ_C + K2) / (σ_A² + σ_C² + K2), and the structure comparison function is s(A, C) = (σ_AC + K3) / (σ_A σ_C + K3),

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, σ_AC is the pixel covariance between the first image and the second image, and K1, K2 and K3 are constants.
4. The image generation method according to claim 2 or 3, characterized in that training the neural network model according to the sample image and the loss function comprises:
inputting the low-resolution image into the neural network model to obtain a third image output by the neural network model;
calculating a first value according to the image information of the original image, the image information of the third image and the loss function, the first value being the loss function value of the current neural network model; and
adjusting the parameters of the neural network model according to the first value.
5. The image generation method according to claim 4, characterized by further comprising, after adjusting the parameters of the neural network model according to the first value:
inputting the low-resolution image into the adjusted neural network model to obtain a fourth image output by the neural network model;
calculating a second value according to the image information of the original image, the image information of the fourth image and the loss function, the second value being the loss function value of the adjusted neural network model;
comparing the second value with a preset threshold;
if the second value is greater than the preset threshold, adjusting the parameters of the neural network according to the second value, and returning to the step of "inputting the low-resolution image into the adjusted neural network model to obtain a fourth image output by the neural network model"; and
if the second value is less than the preset threshold, ending the training of the neural network model.
6. An image generation apparatus, characterized by comprising:
a building module for constructing a neural network model;
a processing module for establishing a loss function according to the similarity relationships of luminance, contrast and structure between two images in structural similarity theory;
a training module for obtaining a sample image and training the neural network model according to the sample image and the loss function; and
a generation module for inputting an image to be processed into the trained neural network model, generating a super-resolution image of the image to be processed.
7. The image generation apparatus according to claim 6, characterized in that the sample image includes an original image and a low-resolution image corresponding to the original image; the low-resolution image is an image obtained by reducing the resolution of the original image; and the loss function is:

Loss_new(A, C) = -l(A, C) - c(A, C) - s(A, C)

where A denotes the first image, C denotes the second image, l(A, C) is the luminance comparison function between the first image and the second image, c(A, C) is the contrast comparison function between the first image and the second image, and s(A, C) is the structure comparison function between the first image and the second image; the first image is the original image; and the second image is the image output by the neural network model after the low-resolution image corresponding to the original image is input into the neural network model during training.
8. The image generation apparatus according to claim 7, characterized in that the luminance comparison function is l(A, C) = (2μ_A μ_C + K1) / (μ_A² + μ_C² + K1), the contrast comparison function is c(A, C) = (2σ_A σ_C + K2) / (σ_A² + σ_C² + K2), and the structure comparison function is s(A, C) = (σ_AC + K3) / (σ_A σ_C + K3),

where μ_A is the pixel mean of the first image, μ_C is the pixel mean of the second image, σ_A is the pixel standard deviation of the first image, σ_C is the pixel standard deviation of the second image, σ_AC is the pixel covariance between the first image and the second image, and K1, K2 and K3 are constants.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810427888.5A CN110458754B (en) | 2018-05-07 | 2018-05-07 | Image generation method and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458754A true CN110458754A (en) | 2019-11-15 |
CN110458754B CN110458754B (en) | 2021-12-03 |
Family
ID=68472098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810427888.5A Active CN110458754B (en) | 2018-05-07 | 2018-05-07 | Image generation method and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458754B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308094A (en) * | 2020-11-25 | 2021-02-02 | 创新奇智(重庆)科技有限公司 | Image processing method and device, electronic equipment and storage medium |
GB2615849A (en) * | 2021-10-29 | 2023-08-23 | Nvidia Corp | Image upsampling using one or more neural networks |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9317761B2 (en) * | 2010-12-09 | 2016-04-19 | Nanyang Technological University | Method and an apparatus for determining vein patterns from a colour image |
WO2016132147A1 (en) * | 2015-02-19 | 2016-08-25 | Magic Pony Technology Limited | Enhancement of visual data |
CN106228512A (en) * | 2016-07-19 | 2016-12-14 | 北京工业大学 | Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method |
CN107767343A (en) * | 2017-11-09 | 2018-03-06 | 京东方科技集团股份有限公司 | Image processing method, processing unit and processing equipment |
CN107784296A (en) * | 2017-11-21 | 2018-03-09 | 中山大学 | A kind of face identification method of low-resolution image |
Non-Patent Citations (3)
Title |
---|
HANG ZHAO et al.: "Loss Functions for Image Restoration With Neural Networks", IEEE Transactions on Computational Imaging * |
FANG XIAOJING: "Research on Image Super-Resolution Applications Based on Convolutional Neural Networks", China Masters' Theses Full-text Database, Information Science and Technology * |
ZHU XIUCHANG et al.: "Digital Image Processing and Image Communication (4th Edition)", Beijing University of Posts and Telecommunications Press, 31 August 2016 * |
Also Published As
Publication number | Publication date |
---|---|
CN110458754B (en) | 2021-12-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province; Applicant after: TCL Technology Group Co., Ltd. Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District; Applicant before: TCL Group Co., Ltd. |
| GR01 | Patent grant | |