CN109360154A - Convolutional neural network generation method and image super-resolution method - Google Patents
- Publication number: CN109360154A
- Application number: CN201811269554.6A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses a convolutional neural network generation method for performing super-resolution processing on an image, a super-resolution method for an image, a computing device and a mobile terminal. The convolutional neural network generation method includes: constructing a first process block, a second process block and a third process block respectively, the first process block including a first convolutional layer, the second process block including a second convolutional layer, and the third process block including a third convolutional layer; constructing a fourth process block including a fourth convolutional layer; connecting one or more first process blocks, second process blocks and third process blocks with the fourth process block according to a preset concatenation rule, to generate a convolutional neural network whose input is a first process block and whose output is a third process block; and training the convolutional neural network on a pre-acquired image data set, so that the output of the convolutional neural network indicates the super-resolution image corresponding to the input image.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a convolutional neural network generation method for performing super-resolution processing on an image, a super-resolution method for an image, a computing device and a mobile terminal.
Background art
Super-resolution is a technique for increasing the resolution of an original image; the process of obtaining a high-resolution image from one or a series of low-resolution images is called super-resolution reconstruction. Super-resolution techniques based on deep neural networks have attracted wide attention because their results considerably exceed those of traditional super-resolution techniques.
Existing super-resolution techniques based on deep neural networks fall broadly into two kinds. In the first, the original image is enlarged to the target size with a traditional method, and the enlarged image is then processed with a deep neural network to supplement content details, finally yielding the target image. In the second, the original image is fed directly into a deep neural network to extract abstract image features, and the last layer produces the image of the target size using a technique known as Sub-Pixel (sub-pixel) convolution.
Each of the two approaches has advantages and disadvantages. The first can produce a target image of arbitrary size, but because the image is enlarged to the target size before being fed into the deep neural network, the computational cost of the network increases dramatically. The second extracts image features at the original image size, so the computational cost of the deep neural network can be smaller; however, owing to the limitation of the Sub-Pixel technique, it can only produce images at integer zoom ratios. If the target image size is not an integer multiple of the original image, subsequent extra processing is required, which is relatively cumbersome.
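As background for the integer-ratio limitation mentioned above, the channel rearrangement at the heart of the Sub-Pixel technique can be sketched in NumPy. This is the standard pixel-shuffle operation under an assumed (C·r², H, W) feature layout, not code from the patent; note that the scale factor r must be an integer for the reshape to work at all.

```python
import numpy as np

def pixel_shuffle(x, r):
    # x: feature map of shape (C*r*r, H, W); rearrange the channel dimension
    # into a (C, H*r, W*r) image -- the Sub-Pixel (pixel shuffle) step.
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)       # split the two scale factors out of the channels
    x = x.transpose(0, 3, 1, 4, 2)     # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.arange(4 * 2 * 3, dtype=float).reshape(4, 2, 3)  # C*r*r = 4 channels, r = 2
up = pixel_shuffle(feat, 2)
print(up.shape)  # (1, 4, 6)
```

Each output pixel at (h·r+i, w·r+j) is taken from input channel i·r+j at position (h, w), so the spatial resolution grows by exactly r in each direction.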
Summary of the invention
To this end, the present invention provides a convolutional neural network generation scheme for performing super-resolution processing on an image, and proposes a super-resolution scheme for an image based on this convolutional neural network, in an effort to solve, or at least alleviate, the problems described above.
According to one aspect of the present invention, a convolutional neural network generation method for performing super-resolution processing on an image is provided, suitable for execution in a computing device and comprising the following steps. First, a first process block, a second process block and a third process block are constructed respectively; the first process block includes a first convolutional layer, the second process block includes a second convolutional layer, and the third process block includes a third convolutional layer, and the first, second and third convolutional layers keep the size of their output images consistent with that of the corresponding input images. A fourth process block is constructed, which includes a fourth convolutional layer whose output image size relates to the corresponding input image size by a preset proportionality coefficient. According to a preset concatenation rule, one or more first process blocks, second process blocks and third process blocks are connected with the fourth process block, to generate a convolutional neural network whose input is a first process block and whose output is a third process block. The convolutional neural network is then trained on a pre-acquired image data set, so that its output indicates the super-resolution image corresponding to the input image; the image data set includes multiple image groups, each comprising an original image and its corresponding super-resolution image, the size ratio between the super-resolution image and the original image satisfying the proportionality coefficient.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the step of constructing the first process block, the second process block and the third process block respectively further includes: constructing a first activation layer and a first batch normalization layer, and adding the first activation layer and the first batch normalization layer, connected in sequence, after the first convolutional layer to form the first process block.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the step of constructing the first process block, the second process block and the third process block respectively further includes: constructing a second activation layer and a second batch normalization layer, and adding the second activation layer and the second batch normalization layer, connected in sequence, after the second convolutional layer to form the second process block.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the step of constructing the first process block, the second process block and the third process block respectively further includes: constructing a third activation layer and a third batch normalization layer, and adding the third activation layer and the third batch normalization layer, connected in sequence, after the third convolutional layer to form the third process block.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the step of constructing the fourth process block includes: generating corresponding convolution kernel parameters according to the preset proportionality coefficient, and constructing the fourth convolutional layer from the convolution kernel parameters to form the fourth process block including the fourth convolutional layer.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the step of training the convolutional neural network on the pre-acquired image data set so that its output indicates the super-resolution image corresponding to the input image includes: for each extracted image group, training the convolutional neural network with the original image included in that image group as the input of the first first process block in the network, and the super-resolution image included in that image group as the output of the last third process block in the network.
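The training step above can be sketched as follows. Here `network` and `update` are hypothetical stand-ins for the full convolutional neural network and its optimizer, and the mean squared error is an assumed loss — the patent does not name a loss function.

```python
import numpy as np

def train(network, image_groups, update):
    # For each image group: the original image feeds the first process block,
    # and the super-resolution image is the target of the last third process block.
    for group in image_groups:
        prediction = network(group["original"])                 # forward pass
        loss = ((prediction - group["super_resolution"]) ** 2).mean()
        update(loss)                                            # e.g. a gradient step

# Toy demonstration: a "network" that always predicts zeros.
losses = []
groups = [{"original": np.ones((1, 1)), "super_resolution": np.ones((2, 2))}]
train(lambda x: np.zeros((2, 2)), groups, losses.append)
print(losses[0])  # 1.0
```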
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the fourth convolutional layer is a transposed convolutional layer.
Optionally, in the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention, the numbers of first process blocks, third process blocks and fourth process blocks are each 1, and the number of second process blocks is 17.
Optionally, the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention further includes pre-generating the image data set, which includes: performing image processing on each picture to be processed to obtain a corresponding first image of a preset size; obtaining a corresponding down-sampling range according to the preset proportionality coefficient, and down-sampling each first image of the preset size according to the down-sampling range to generate a corresponding low-resolution second image; taking the second image as the original image and the first image as the super-resolution image corresponding to that original image, and associating the original image with its super-resolution image to form an image group; and collecting the image groups to form the image data set.
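One image group of the pre-generation step can be sketched as follows. The plain crop and the strided (nearest-neighbour) down-sampling are assumptions standing in for the unspecified "image processing" and "down-sampling range"; the real method may use other resizing and filtering.

```python
import numpy as np

def make_image_group(picture, preset_hw, coefficient):
    # First image: the picture processed to the preset size (a plain crop here).
    th, tw = preset_hw
    first = picture[:th, :tw]
    # Second image: strided down-sampling by `coefficient` yields the
    # corresponding low-resolution image.
    second = first[::coefficient, ::coefficient]
    # Associate the low-resolution original with its super-resolution target.
    return {"original": second, "super_resolution": first}

picture = np.random.rand(130, 140)
pair = make_image_group(picture, (120, 120), 4)
print(pair["original"].shape, pair["super_resolution"].shape)  # (30, 30) (120, 120)
```

The size ratio between the two images of the group is exactly the proportionality coefficient (120 / 30 = 4).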
According to a further aspect of the present invention, a computing device is provided, including one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for executing the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention.
According to a further aspect of the present invention, a computer-readable storage medium storing one or more programs is provided; the one or more programs include instructions which, when executed by a computing device, cause the computing device to execute the convolutional neural network generation method for performing super-resolution processing on an image according to the present invention.
According to a further aspect of the present invention, a super-resolution method for an image is provided, suitable for execution in a mobile terminal, which performs super-resolution processing on an image based on the convolutional neural network trained in the above convolutional neural network generation method. The method comprises the following steps: first, acquiring an image to be processed and its corresponding super-resolution coefficient; judging whether the super-resolution coefficient matches the trained convolutional neural network; if it matches, inputting the image to be processed into the trained convolutional neural network for super-resolution processing; and acquiring the output of the trained convolutional neural network and determining the super-resolution image corresponding to the image to be processed according to that output.
Optionally, the super-resolution method for an image according to the present invention further includes: if the super-resolution coefficient does not match the trained convolutional neural network, adjusting the fourth process block in the convolutional neural network according to the super-resolution coefficient; inputting the image to be processed into the adjusted convolutional neural network for super-resolution processing; and acquiring the output of the adjusted convolutional neural network and determining the super-resolution image corresponding to the image to be processed according to that output.
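The match-or-adjust dispatch described above can be sketched as simple control flow. `adjust_fourth_block` is a hypothetical helper standing in for rebuilding only the fourth process block at a new scale; the string stand-ins for images and networks are purely illustrative.

```python
def super_resolve(image, coefficient, trained_nets, adjust_fourth_block):
    # Use the trained network whose super-resolution coefficient matches;
    # otherwise adjust the fourth process block for the requested coefficient.
    net = trained_nets.get(coefficient)
    if net is None:
        base = next(iter(trained_nets.values()))
        net = adjust_fourth_block(base, coefficient)
    return net(image)

# Toy demonstration.
nets = {4: lambda im: im + " upscaled x4"}
adjust = lambda base, c: (lambda im: im + " upscaled x%d" % c)
print(super_resolve("img", 4, nets, adjust))  # img upscaled x4
print(super_resolve("img", 3, nets, adjust))  # img upscaled x3
```

Only the fourth process block depends on the scale, which is why the mismatch path can reuse everything else in the trained network.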
According to a further aspect of the present invention, a mobile terminal is provided, including one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and include instructions for executing the super-resolution method for an image according to the present invention.
According to a further aspect of the present invention, a computer-readable storage medium storing one or more programs is also provided; the one or more programs include instructions which, when executed by a mobile terminal, cause the mobile terminal to execute the super-resolution method for an image according to the present invention.
In the technical solution for generating a convolutional neural network for performing super-resolution processing on an image according to the present invention, a first process block, a second process block and a third process block are first constructed respectively, the first process block including a first convolutional layer, the second process block including a second convolutional layer and the third process block including a third convolutional layer; a fourth process block including a fourth convolutional layer is then constructed; according to a preset concatenation rule, one or more first process blocks, second process blocks and third process blocks are connected with the fourth process block to generate a convolutional neural network whose input is a first process block and whose output is a third process block; and finally the convolutional neural network is trained on a pre-acquired image data set so that its output indicates the super-resolution image corresponding to the input image. In the above scheme, the first, second and third convolutional layers keep the size of their output images consistent with that of the corresponding input images, while the size ratio between the output image of the fourth convolutional layer and the corresponding input image satisfies the preset proportionality coefficient. Therefore, in the generated convolutional neural network, the processing stages of the first process block and the second process blocks in the front part of the network extract the image features used for super-resolution from the input image; the processing in the fourth process block then enlarges these image features according to the preset proportionality coefficient; and finally the second process block and the third process block in the rear part of the network refine the enlarged image features to obtain the target image, i.e. the super-resolution image corresponding to the original input image. A target image of arbitrary size is thus obtained with a smaller amount of computation. After training of the convolutional neural network is completed, it can be transplanted to a mobile terminal as a super-resolution processing model for images.
Furthermore, in the super-resolution scheme for an image according to the present invention, if the super-resolution coefficient corresponding to the image to be processed matches the trained convolutional neural network, the image to be processed is input directly into that network and the corresponding super-resolution image is determined from the network's output; if it does not match, the fourth process block in the convolutional neural network is first adjusted according to the super-resolution coefficient, and the image to be processed is then input into the adjusted network for subsequent processing. The scheme combines speed with flexibility: it requires no large amount of computation and achieves a very good balance between speed and effect.
Brief description of the drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in conjunction with the following description and the drawings. These aspects are indicative of the various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent by reading the following detailed description in conjunction with the accompanying drawings. Throughout the disclosure, the same reference numerals generally refer to the same components or elements.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the invention;
Fig. 2 shows a flow chart of a convolutional neural network generation method 200 for performing super-resolution processing on an image according to an embodiment of the invention;
Fig. 3A shows a structural schematic diagram of a first process block according to an embodiment of the invention;
Fig. 3B shows a structural schematic diagram of a second process block according to an embodiment of the invention;
Fig. 3C shows a structural schematic diagram of a third process block according to an embodiment of the invention;
Fig. 3D shows a structural schematic diagram of a fourth process block according to an embodiment of the invention;
Fig. 4 shows a structural schematic diagram of a convolutional neural network according to an embodiment of the invention;
Fig. 5 shows a schematic diagram of a mobile terminal 500 according to an embodiment of the invention; and
Fig. 6 shows a flow chart of a super-resolution method 600 for an image according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, the computing device 100 typically includes a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level-1 cache 110 and a level-2 cache 112, a processor core 114, and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination thereof. An example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 106 may include an operating system 120, one or more programs 122, and program data 124. In some embodiments, the programs 122 may be arranged to be executed by the one or more processors 104 on the operating system using the program data 124.
The computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144 and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. The example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to facilitate communication with various external devices such as a display or loudspeakers via one or more A/V ports 152. The example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication, via one or more I/O ports 158, with external devices such as input devices (for example, a keyboard, mouse, pen, voice input device or touch input device) or other peripherals (such as a printer or scanner). The example communication devices 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied as computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media. A "modulated data signal" may be a signal one or more of whose characteristics are set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or a dedicated-line network, and various wireless media such as sound, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable media as used herein may include both storage media and communication media.
The computing device 100 may be implemented as a server, for example a file server, a database server, an application server or a WEB server, or as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal headset device, an application-specific device, or a hybrid device including any of the above functions. The computing device 100 may also be implemented as a personal computer, including desktop and notebook computer configurations.
In some embodiments, the computing device 100 is configured to execute the convolutional neural network generation method 200 for performing super-resolution processing on an image according to the present invention, and the one or more programs 122 of the computing device 100 include instructions for executing this method 200.
Fig. 2 shows a flow chart of a convolutional neural network generation method 200 for performing super-resolution processing on an image according to an embodiment of the invention. The method 200 is suitable for execution in a computing device (for example the computing device 100 shown in Fig. 1).
As shown in Fig. 2, the method 200 starts at step S210. In step S210, a first process block, a second process block and a third process block are constructed respectively; the first process block includes a first convolutional layer, the second process block includes a second convolutional layer, and the third process block includes a third convolutional layer, and the first, second and third convolutional layers keep the size of their output images consistent with that of the corresponding input images. To accelerate network convergence and alleviate over-fitting, according to an embodiment of the present invention, a first activation layer and a first batch normalization layer may also be constructed when constructing the first process block, and added in sequence after the first convolutional layer to form the first process block. Fig. 3A shows a structural schematic diagram of the first process block according to an embodiment of the invention. As shown in Fig. 3A, the first process block includes a first convolutional layer, a first activation layer and a first batch normalization (BN) layer connected in sequence. The first convolutional layer has 64 different convolution kernels, each with 3 × 3 × 3 parameters and a stride of 1; the ReLU (Rectified Linear Unit) function is used as the activation function of the first activation layer, to adjust the output of the first convolutional layer and prevent the output of the next layer from being a mere linear combination of the previous layer, which could not approximate an arbitrary function.
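The first process block just described can be sketched in NumPy as follows: a 3 × 3 convolution with 64 kernels at stride 1 and one pixel of zero padding (so the spatial size is preserved), a ReLU activation, then batch normalization. Normalizing per channel over a single image is a simplification of batch normalization, and the random kernels are placeholders for trained parameters.

```python
import numpy as np

def conv2d_same(x, kernels):
    # x: (C_in, H, W); kernels: (C_out, C_in, 3, 3); stride 1.
    # Fill one pixel unit outside every edge with 0 so output H, W match input H, W.
    c_out = kernels.shape[0]
    _, h, w = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))       # boundary filling with 0
    out = np.zeros((c_out, h, w))
    for o in range(c_out):
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * kernels[o])
    return out

def first_process_block(x, kernels, eps=1e-5):
    y = conv2d_same(x, kernels)                    # first convolutional layer
    y = np.maximum(y, 0.0)                         # first activation layer (ReLU)
    mean = y.mean(axis=(1, 2), keepdims=True)      # first batch normalization layer
    var = y.var(axis=(1, 2), keepdims=True)        # (per channel, single image)
    return (y - mean) / np.sqrt(var + eps)

np.random.seed(0)
img = np.random.rand(3, 8, 8)                      # small RGB input
kernels = np.random.randn(64, 3, 3, 3) * 0.1       # 64 kernels of 3 x 3 x 3
out = first_process_block(img, kernels)
print(out.shape)  # (64, 8, 8)
```

The second and third process blocks have the same structure; only the kernel shapes differ (64 × 3 × 3 and, for the third block, 3 output channels).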
In this embodiment, when constructing the second process block, a second activation layer and a second batch normalization layer may also be constructed and added in sequence after the second convolutional layer to form the second process block. Fig. 3B shows a structural schematic diagram of the second process block according to an embodiment of the invention. As shown in Fig. 3B, the second process block includes a second convolutional layer, a second activation layer and a second batch normalization layer connected in sequence. The second convolutional layer has 64 different convolution kernels, each with 64 × 3 × 3 parameters and a stride of 1; the ReLU (Rectified Linear Unit) function is likewise used as the activation function of the second activation layer, to adjust the output of the second convolutional layer and prevent the output of the next layer from being a mere linear combination of the previous layer, which could not approximate an arbitrary function.
And when constructing the third process block, a third activation layer and a third batch normalization layer may also be constructed and added in sequence after the third convolutional layer to form the third process block. Fig. 3C shows a structural schematic diagram of the third process block according to an embodiment of the invention. As shown in Fig. 3C, the third process block includes a third convolutional layer, a third activation layer and a third batch normalization layer connected in sequence. The third convolutional layer has 3 different convolution kernels, each with 64 × 3 × 3 parameters and a stride of 1; the ReLU (Rectified Linear Unit) function is also used as the activation function of the third activation layer, to adjust the output of the third convolutional layer and prevent the output of the next layer from being a mere linear combination of the previous layer, which could not approximate an arbitrary function.
To guarantee that the image size before and after convolution remains unchanged for the first, second and third convolutional layers, boundary filling is introduced: each row and column within one pixel unit outside the edges of the images input to these convolutional layers is filled with 0 before convolution, ensuring that the output images of the first, second and third convolutional layers are consistent in size with the corresponding input images. The boundary-filling process will be explained further below and is not repeated here.
Then, in step S220, a fourth process block is constructed; the fourth process block includes a fourth convolutional layer whose output image size relates to the corresponding input image size by the preset proportionality coefficient. According to an embodiment of the present invention, the fourth process block may be constructed as follows. First, corresponding convolution kernel parameters are generated according to the preset proportionality coefficient, and the fourth convolutional layer is then constructed from these parameters to form the fourth process block including the fourth convolutional layer. The proportionality coefficient can simply be understood as the magnification factor, i.e. the size ratio between the super-resolution image obtained by performing super-resolution processing on the original image and the original image. Specifically, if the size of the original image is 120 px × 120 px and the proportionality coefficient is preset to 4, then from 120 px × 4 = 480 px the size of the super-resolution image corresponding to that original image is 480 px × 480 px.
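The size arithmetic above can be stated as a one-line helper; the function name is illustrative, not from the patent.

```python
def super_resolution_size(original_hw, coefficient):
    # Both sides of the original image are multiplied by the proportionality coefficient.
    h, w = original_hw
    return (h * coefficient, w * coefficient)

print(super_resolution_size((120, 120), 4))  # (480, 480)
```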
In this embodiment, the side length of convolution kernel and the relationship of proportionality coefficient may be expressed as: in Volume Four laminationWhereinIt indicates to be rounded downwards.Convolution has been determined according to proportionality coefficient
After the side length of core, the center of convolution kernel is determined, then complete the generation of convolution nuclear parameter, then center and ratio based on convolution kernel
Pre-generated convolution coefficient correspondence is filled into convolution kernel, Volume Four lamination is constructed by convolution nuclear parameter, with shape by relationship
At the fourth process block including Volume Four lamination.When proportionality coefficient is 4, the side length of convolution kernel known to relation above formula is 7
Or 8.Herein, selecting the side length of convolution kernel is 7.Fig. 3 D shows the knot of fourth process block according to an embodiment of the invention
Structure schematic diagram.As shown in Figure 3D, in fourth process block, including Volume Four lamination, Volume Four lamination have 64 different convolution
Core, the number of parameters of each convolution kernel are 64 × 7 × 7, step-length 4.The treatment process of above-mentioned building Volume Four lamination, it is crucial
Property code is as follows:
Factor=(size+1) // 2% proportionality coefficient factor, the side length size of convolution kernel
If size%2==1:% calculates convolution kernel center center
Center=factor-1
else:
Center=factor-0.5
The position that og=np.ogrid [: size: size] % storage convolution nuclear phase is answered
return(1-abs(og[0]-center)/factor)*\(1-abs(og[1]-center)/factor)
%factor has reacted the proportionate relationship between Volume Four lamination input and output, can be used to calculate corresponding convolution
Core coefficient
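As a quick sanity check on the kernel generation, the snippet below computes the 7×7 kernel used when the proportionality coefficient is 4 (the variable names are ours, not the patent's): the kernel peaks at 1 at its centre, and its coefficients sum to factor squared, i.e. 16.

```python
import numpy as np

size = 7                  # kernel side length for proportionality coefficient 4
factor = (size + 1) // 2  # 4
center = factor - 1       # 3, since the side length is odd
og = np.ogrid[:size, :size]
k = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)

print(k.shape)   # (7, 7)
print(k[3, 3])   # 1.0  -- the kernel peaks at its centre
print(k.sum())   # 16.0 -- the coefficients sum to factor squared
```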
Next, in step S230, one or more first process blocks, second processing blocks and third process blocks are connected with the fourth process block according to a preset concatenation rule, so as to generate a convolutional neural network whose input is a first process block and whose output is a third process block. According to one embodiment of the present invention, the numbers of first process blocks, third process blocks and fourth process blocks are each 1, and the number of second processing blocks is 17. In this embodiment, according to the preset concatenation rule, 1 first process block, 16 second processing blocks, 1 fourth process block, 1 further second processing block and 1 third process block are connected in sequence, to generate a convolutional neural network with the first process block as input and the third process block as output.
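The ordering imposed by the preset concatenation rule can be sketched as a simple list of processing-unit labels (as in Fig. 4; this only illustrates the ordering, it is not executable network code):

```python
# A1, then B1..B16, then D1, B17 and C1, per the preset concatenation rule
blocks = ["A1"] + [f"B{i}" for i in range(1, 17)] + ["D1", "B17", "C1"]

print(len(blocks))                             # 20 processing units in total
print(blocks[0], blocks[-1])                   # A1 C1 -- input end and output end
print(sum(b.startswith("B") for b in blocks))  # 17 second processing blocks
```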
Fig. 4 shows a structural schematic diagram of a convolutional neural network according to an embodiment of the invention. As shown in Fig. 4, the network takes the first process block A1 as its input end, followed in sequence by second processing block B1, second processing block B2, second processing block B3, second processing block B4, second processing block B5, second processing block B6, second processing block B7, second processing block B8, second processing block B9, second processing block B10, second processing block B11, second processing block B12, second processing block B13, second processing block B14, second processing block B15, second processing block B16, fourth process block D1, second processing block B17 and third process block C1, where third process block C1 is the output end. The connection order of the processing units illustrated in Fig. 4 is arranged according to the preset concatenation rule. The preset concatenation rule may be adjusted as appropriate according to the practical application scenario, network training situation, system configuration, performance requirements and the like; such adjustments are readily conceivable to a person skilled in the art who understands the present solution, also fall within the protection scope of the invention, and are not repeated here.
Table 1 shows an example parameter setting of the convolutional neural network according to an embodiment of the invention. Regarding the values of the boundary-filling parameter in Table 1, "-" indicates that no boundary-filling operation is performed, and "1" indicates that each row and each column within 1 pixel unit outside the edge of the feature map input to the processing unit is filled with 0, and so on. Unless otherwise specified below, any reference to boundary filling is to be understood as described above. In addition, the channel parameter can be understood as the number of channels of the image to be processed by the convolution kernels, i.e. the number of feature maps. The content of Table 1 is as follows:
| Processing unit | Convolutional layer | Kernel size | Boundary filling | Stride | Channels | Kernel count |
| --- | --- | --- | --- | --- | --- | --- |
| First process block A1 | First convolutional layer | 3×3 | 1 | 1 | 3 | 64 |
| Second processing block B1 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B2 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B3 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B4 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B5 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B6 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B7 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B8 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B9 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B10 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B11 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B12 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B13 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B14 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B15 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Second processing block B16 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Fourth process block D1 | Fourth convolutional layer | 7×7 | - | 4 | 64 | 64 |
| Second processing block B17 | Second convolutional layer | 3×3 | 1 | 1 | 64 | 64 |
| Third process block C1 | Third convolutional layer | 3×3 | 1 | 1 | 64 | 3 |

Table 1
After the convolutional neural network is built, step S240 is executed: the network is trained according to a pre-obtained image data set, so that the output of the convolutional neural network indicates the super-resolution image corresponding to the input image. The image data set includes multiple image groups, each of which includes an original image and its corresponding super-resolution image; the size ratio between the super-resolution image and the original image satisfies the proportionality coefficient. According to one embodiment of the present invention, the convolutional neural network can be trained in the following way. For each extracted image group, the original image included in the group is taken as the input of the first first process block in the network, and the super-resolution image included in the group is taken as the output of the last third process block, and the network is trained accordingly. The original image included in an image group is an RGB three-channel image of size 120px × 120px, and its corresponding super-resolution image is also an RGB three-channel image, of size 480px × 480px.
The training process of the convolutional neural network is explained below by taking an image group X in the image data set as an example. Image group X includes an original image X1 and its corresponding super-resolution image X2; X1 and X2 are both RGB three-channel images, of sizes 120px × 120px and 480px × 480px respectively. During training, original image X1 serves as the input of the first process block A1 and super-resolution image X2 as the output of the third process block C1.
Specifically, original image X1, an RGB three-channel image of size 120px × 120px, is first input to the first process block A1. The first convolutional layer in A1 has 64 convolution kernels, each with 3 × 3 × 3 parameters, equivalent to 64 kernels of size 3 × 3 convolving over the 3 channels separately, with a stride of 1. Since boundary-filling processing is introduced, before the convolution each row and each column within 1 pixel unit outside the edge of each channel image of X1 input to the first convolutional layer is filled with 0. After the convolution of the first convolutional layer, from (120 − 3 + 2 × 1)/1 + 1 = 120 the size of the resulting image is 120px × 120px, i.e. 64 feature maps of size 120px × 120px are obtained. Within the first convolutional layer the three channels are combined during the convolution, so the input to the first activation layer in A1 is these 64 feature maps of 120px × 120px. After processing by the first activation layer and the first batch-normalization layer in A1, the output of the first process block A1 is 64 feature maps of 120px × 120px.
Then the data enters second processing block B1. The second convolutional layer in B1 has 64 convolution kernels, each with 64 × 3 × 3 parameters, equivalent to 64 kernels of size 3 × 3 convolving the 64 input feature maps, with a stride of 1. Using the boundary-filling mode, each row and each column within 1 pixel unit outside the edge of the image input to the second convolutional layer is filled with 0; after the convolution of the second convolutional layer, from (120 − 3 + 2 × 1)/1 + 1 = 120 the size of the resulting image is 120px × 120px, i.e. 64 feature maps of size 120px × 120px are obtained. Thereafter, through the processing of the second activation layer and the second batch-normalization layer in B1, the output of second processing block B1 is 64 feature maps of 120px × 120px. The output of B1 is input to second processing block B2, and after the related processing of the subsequent processing units, the output of second processing block B16 is 64 feature maps of 120px × 120px. It should be noted that the processing of images by second processing blocks B3 to B16 can refer to the processing of second processing block B2 described above, and details are not repeated here.
In turn, the output of second processing block B16 is input to the fourth process block D1. The fourth convolutional layer in D1 has 64 convolution kernels, each with 64 × 7 × 7 parameters and a stride of 4. This layer performs a transposed convolution on the input 64 feature maps of 120px × 120px, amplifying the image features in each feature map according to the proportionality coefficient, to obtain 64 feature maps of 480px × 480px.
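For the transposed convolution in the fourth convolutional layer, the usual output-size formula is (W − 1)·S − 2P + K. With kernel 7, stride 4 and no padding this gives 483 rather than 480, so reaching exactly 480px implies framework-specific padding or cropping that the text does not spell out; with a kernel side of 8 and padding 2 the formula comes out at 480 exactly. A small sketch (function name is ours):

```python
def deconv_out(w, k, s, p=0):
    """Spatial output size of a transposed convolution: (W - 1) * S - 2P + K."""
    return (w - 1) * s - 2 * p + k

print(deconv_out(120, 7, 4))        # 483 -- kernel 7, stride 4, no padding
print(deconv_out(120, 8, 4, p=2))   # 480 -- kernel 8, stride 4, padding 2
```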
The 64 feature maps of 480px × 480px output by the fourth process block D1 are then processed as the input of second processing block B17. The second convolutional layer in B17 has 64 convolution kernels, each with 64 × 3 × 3 parameters, equivalent to 64 kernels of size 3 × 3 convolving the 64 input feature maps, with a stride of 1. Using the boundary-filling mode, each row and each column within 1 pixel unit outside the edge of the image input to the second convolutional layer is filled with 0; after the convolution of the second convolutional layer, from (480 − 3 + 2 × 1)/1 + 1 = 480 the size of the resulting image is 480px × 480px, i.e. 64 feature maps of size 480px × 480px are obtained. Thereafter, through the processing of the second activation layer and the second batch-normalization layer in B17, the output of second processing block B17 is 64 feature maps of 480px × 480px.
Finally, the 64 feature maps of 480px × 480px output by second processing block B17 are input to the third process block C1. The third convolutional layer in C1 has 3 convolution kernels, each with 64 × 3 × 3 parameters, equivalent to 3 kernels of size 3 × 3 convolving the 64 input feature maps, with a stride of 1. Using the boundary-filling mode, each row and each column within 1 pixel unit outside the edge of the image input to the third convolutional layer is filled with 0; after the convolution of the third convolutional layer, from (480 − 3 + 2 × 1)/1 + 1 = 480 the size of the resulting image is 480px × 480px, i.e. 3 feature maps of size 480px × 480px are obtained. Thereafter, through the processing of the third activation layer and the third batch-normalization layer in C1, the output of the third process block C1 is one RGB three-channel image of 480px × 480px. To train the convolutional neural network, the super-resolution image X2 corresponding to the input original image X1 is taken as the expected result against which the output of C1 is compared, and the error is back-propagated to minimise it, adjusting each parameter in the network. After training on a large number of image groups in the image data set, a trained convolutional neural network is obtained.
The image data set used to train the convolutional neural network needs to be generated in advance. According to one embodiment of the present invention, the image data set can be generated in the following way. First, image processing is performed on each picture to be processed, to obtain a first image corresponding to each picture and meeting a pre-set dimension. Here the pre-set dimension is 480px × 480px; when processing a picture to be processed, it is usually scaled to the pre-set dimension to form the corresponding first image. Afterwards, a corresponding down-sampling range is obtained according to the preset proportionality coefficient; in this embodiment the proportionality coefficient is preset to 4, and from 1/4 = 0.25 the down-sampling range corresponding to the proportionality coefficient can be determined as [1, 0.25]. Then each first image meeting the pre-set dimension is down-sampled according to the above down-sampling range, to generate a corresponding low-resolution second image; from 480px × 0.25 = 120px the size of the generated second image is 120px × 120px. The second image is taken as the original image and the first image as the super-resolution image corresponding to that original image; the original image and the super-resolution image are associated to form an image group, and the image groups are collected to form the image data set.
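The pair-generation steps above can be sketched as follows. The resizing and down-sampling are done here with plain index sampling for illustration; an actual pipeline would presumably use proper interpolation, and the function name is ours:

```python
import numpy as np

def make_pair(picture, preset=480, factor=4):
    """Build one (original, super-resolution) training pair from a picture.

    `picture` is an H x W x 3 array. The resize to the pre-set dimension and
    the down-sampling are sketched with nearest-neighbour index sampling,
    purely for illustration.
    """
    h, w = picture.shape[:2]
    rows = np.arange(preset) * h // preset   # crude resize to preset x preset
    cols = np.arange(preset) * w // preset
    first = picture[rows][:, cols]           # first image, 480 x 480
    second = first[::factor, ::factor]       # second image, 120 x 120
    return second, first                     # (original, super-resolution)

lr, hr = make_pair(np.zeros((600, 800, 3)))
print(lr.shape, hr.shape)                    # (120, 120, 3) (480, 480, 3)
```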
It should furthermore be noted that, in order to further improve the image-processing capability of the convolutional neural network, corresponding down-sampling ranges can be generated based on different proportionality coefficients, and multiple image data sets can then be generated from the different down-sampling ranges, each image data set corresponding to one proportionality coefficient. Training the network with these image data sets, i.e. with images at a large number of different proportionality coefficients (in other words, amplification factors), helps the network achieve a balance of fast speed and good effect in super-resolution processing.
Fig. 5 shows a structural block diagram of a mobile terminal 500 according to an embodiment of the invention. The mobile terminal 500 may include a memory interface 502, one or more data processors, image processors and/or central processing units 504, and a peripheral interface 506. The memory interface 502, the one or more processors 504 and/or the peripheral interface 506 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 500, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 506 to help realise a variety of functions.
For example, a motion sensor 510, a light sensor 512 and a range sensor 514 may be coupled to the peripheral interface 506 to facilitate functions such as orientation, illumination and ranging. Other sensors 516 may likewise be connected to the peripheral interface 506, such as a positioning system (e.g. a GPS receiver), a temperature sensor, a biometric sensor or other sensor devices, thereby helping to implement related functions. A camera subsystem 520 and an optical sensor 522 may be used to facilitate camera functions such as recording photos and video clips, where the optical sensor may for example be a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor. Communication functions may be realised through one or more wireless communication subsystems 524, where a wireless communication subsystem may include a radio-frequency receiver and transmitter and/or an optical (e.g. infrared) receiver and transmitter. The particular design and embodiment of the wireless communication subsystem 524 may depend on the one or more communication networks supported by the mobile terminal 500. For example, the mobile terminal 500 may include a communication subsystem 524 designed to support LTE, 3G, GSM, GPRS, EDGE, Wi-Fi or WiMax networks and Bluetooth™ networks.
An audio subsystem 526 may be coupled with a loudspeaker 528 and a microphone 530 to help implement voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony. An I/O subsystem 540 may include a touch-screen controller 542 and/or one or more other input controllers 544. The touch-screen controller 542 may be coupled to a touch screen 546. For example, the touch screen 546 and the touch-screen controller 542 may use any of a variety of touch-sensing technologies to detect contact and movement or pauses made therewith, where the detection technologies include but are not limited to capacitive, resistive, infrared and surface acoustic wave technologies. The one or more other input controllers 544 may be coupled to other input/control devices 548, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the loudspeaker 528 and/or the microphone 530.
The memory interface 502 may be coupled with a memory 570. The memory 570 may include high-speed random-access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 570 may store an operating system 572, for example an operating system such as Android, iOS or Windows Phone. The operating system 572 may include instructions for handling basic system services and for performing hardware-dependent tasks. The memory 570 may also store one or more programs 574. When the mobile device runs, the operating system 572 is loaded from the memory 570 and executed by the processor 504. The programs 574, when run, are also loaded from the memory 570 and executed by the processor 504. The programs 574 run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to realise various functions desired by the user, such as instant messaging, web browsing and picture management. The programs 574 may be provided independently of the operating system or may be supplied with it. In addition, when a program 574 is installed in the mobile terminal 500, a driver module may also be added to the operating system. A program 574 may be arranged to have related instructions executed on the operating system by the one or more processors 504. In some embodiments, the mobile terminal 500 is configured to execute the super-resolution method 600 of an image according to the present invention, and the one or more programs 574 of the mobile terminal 500 include instructions for executing the super-resolution method 600 of an image according to the present invention.
Fig. 6 shows a flowchart of a super-resolution method 600 of an image according to an embodiment of the invention. The image super-resolution method 600 is suitable for execution in a mobile terminal (such as the mobile terminal 500 shown in Fig. 5) and performs super-resolution processing with the convolutional neural network trained in the above convolutional neural network generation method for performing super-resolution processing on images. As shown in Fig. 6, the method 600 starts at step S610. In step S610, an image to be processed and its corresponding super-resolution coefficient are obtained. According to one embodiment of the present invention, the image to be processed is Y1, of size 120px × 120px, and its corresponding super-resolution coefficient is 4; here, the super-resolution coefficient can be understood as the image amplification factor.
Then, in step S620, it is judged whether the super-resolution coefficient matches the trained convolutional neural network. According to one embodiment of the present invention, the proportionality coefficient corresponding to the trained convolutional neural network is 4, which is equal to the super-resolution coefficient, showing that the super-resolution coefficient and the convolutional neural network match. Next, in step S630, if they match, the image to be processed is input into the trained convolutional neural network for super-resolution processing. According to one embodiment of the present invention, from step S620 it is known that the super-resolution coefficient corresponding to image Y1 matches the trained convolutional neural network, so image Y1 is input into the network for super-resolution processing.
Finally, step S640 is executed: the output of the trained convolutional neural network is obtained, and the super-resolution image corresponding to the image to be processed is determined according to that output. According to one embodiment of the present invention, the output of the third process block C1 in the network is obtained; this output is one RGB three-channel image of 480px × 480px, denoted Y2, and it can then be determined that the super-resolution image corresponding to image Y1 is image Y2.
According to still another embodiment of the invention, the image to be processed is Z1, of size 120px × 120px, and its corresponding super-resolution coefficient is 3. After step S620 is executed, it is found that the super-resolution coefficient of image Z1 does not match the trained convolutional neural network, so the fourth process block in the network is adjusted according to the super-resolution coefficient. In this embodiment, the proportionality coefficient corresponding to the fourth process block D1 in the network is adjusted to this super-resolution coefficient, i.e. the proportionality coefficient of D1 is updated to 3; according to the new proportionality coefficient, corresponding convolution kernel parameters are generated, and the fourth convolutional layer is reconstructed from them to re-form the fourth process block D1 including the fourth convolutional layer. Regarding the forming process of the fourth process block D1, a detailed description has been given in step S220 of method 200 and is not repeated here. The above process of adjusting the convolutional neural network only needs to modify the fourth process block D1; it does not need to change the other processing units or to retrain the network, so a convolutional neural network that performs super-resolution processing on images for any super-resolution coefficient can be realised. After the network is adjusted, image Z1 is input into the adjusted convolutional neural network for super-resolution processing, and the output of the network is obtained: one RGB three-channel image of 360px × 360px, denoted Z2. It can then be determined that the super-resolution image corresponding to image Z1 is image Z2.
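The dispatch logic of method 600 can be sketched with a toy stand-in for the trained network. Every name here (`SRModel`, `rebuild_fourth_block`, and the side-length arithmetic standing in for actual inference) is illustrative only, not the patent's API:

```python
class SRModel:
    """Toy stand-in for the trained convolutional neural network."""
    def __init__(self, scale):
        self.scale = scale               # proportionality coefficient

    def rebuild_fourth_block(self, coeff):
        # only the fourth (transposed) convolutional layer is regenerated;
        # the other processing units are untouched and no retraining is done
        self.scale = coeff

    def run(self, size):
        return size * self.scale         # output side length in pixels

def super_resolve(size, coeff, model):
    if coeff != model.scale:             # step S620: coefficients differ
        model.rebuild_fourth_block(coeff)
    return model.run(size)               # steps S630/S640

m = SRModel(scale=4)
print(super_resolve(120, 4, m))   # 480 -- Y1 case, coefficients match
print(super_resolve(120, 3, m))   # 360 -- Z1 case, fourth block rebuilt
```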
In practical applications, a super-resolution model based on the above trained convolutional neural network can be encapsulated in applications that involve image processing. When such a mobile application is downloaded and installed, the super-resolution model is deployed directly on the mobile terminal 500; it occupies little storage space and few memory resources and responds quickly, so it can provide the user with a better experience.
Current super-resolution methods based on deep neural networks find it difficult to strike a balance between computation and flexibility: if the deep neural network is required to have a small amount of computation, generally only integer zoom ratios can be realised, while obtaining a target image of arbitrary size increases the computational burden of the network. In the technical solution for generating a convolutional neural network for super-resolution processing of images according to embodiments of the present invention, the first process block, second processing block and third process block are first constructed respectively, the first process block including the first convolutional layer, the second processing block including the second convolutional layer, and the third process block including the third convolutional layer; the fourth process block, including the fourth convolutional layer, is then constructed; according to a preset concatenation rule, one or more first process blocks, second processing blocks and third process blocks are connected with the fourth process block to generate a convolutional neural network with a first process block as input and a third process block as output; finally, the network is trained according to a pre-obtained image data set so that its output indicates the super-resolution image corresponding to the input image. In the above scheme, the first, second and third convolutional layers keep the size of their output images consistent with that of the corresponding input images, while the size ratio between the output and input images of the fourth convolutional layer satisfies the preset proportionality coefficient. For the generated network, therefore, the first process block and second processing blocks in the front part of the network extract the image features used for super-resolution from the input image; the fourth process block then amplifies those features according to the preset proportionality coefficient; and the second processing block and third process block in the rear part of the network refine the amplified features to finally obtain the target image, i.e. the super-resolution image corresponding to the original input image, realising a target image of arbitrary size with a smaller amount of computation. After training is completed, the convolutional neural network can be transplanted to a mobile terminal as an image super-resolution processing model. In turn, in the image super-resolution scheme according to embodiments of the present invention, if the super-resolution coefficient corresponding to the image to be processed matches the trained convolutional neural network, the image is input directly into the network and its super-resolution image is determined from the network output; if they do not match, the fourth process block in the network is first adjusted according to the super-resolution coefficient, and the image is then input into the adjusted network for subsequent processing. This has the advantages of both speed and flexibility, requires no large amount of computation, and achieves a very good balance of speed and effect.
A6. The method as described in any one of A1-5, wherein the step of training the convolutional neural network according to the pre-obtained image data set, so that the output of the convolutional neural network indicates the super-resolution image corresponding to the input image, includes:
for each extracted image group, training the convolutional neural network with the original image included in the image group as the input of the first first process block in the convolutional neural network, and the super-resolution image included in the image group as the output of the last third process block in the convolutional neural network.
A7. The method as described in any one of A1-6, wherein the fourth convolutional layer is a transposed convolutional layer.
A8. The method as described in any one of A1-7, wherein the numbers of the first process block, the third process block and the fourth process block are each 1, and the number of the second processing blocks is 17.
A9. The method as described in any one of A1-8, further including pre-generating an image data set, the step of pre-generating the image data set including:
performing image processing on each picture to be processed, to obtain a first image corresponding to each picture to be processed and meeting a pre-set dimension;
obtaining a corresponding down-sampling range according to the preset proportionality coefficient, and for each first image meeting the pre-set dimension, down-sampling the first image according to the down-sampling range, to generate a corresponding low-resolution second image;
taking the second image as the original image and the first image as the super-resolution image corresponding to that original image, and associating the original image with the super-resolution image to form an image group;
collecting the image groups to form the image data set.
B13. The method as described in B12, further including:
if the super-resolution coefficient and the trained convolutional neural network do not match, adjusting the fourth process block in the convolutional neural network according to the super-resolution coefficient;
inputting the image to be processed into the adjusted convolutional neural network for super-resolution processing; and
obtaining the output of the trained convolutional neural network, and determining the super-resolution image corresponding to the image to be processed according to the output.
In the specification provided here, numerous specific details are set forth. It is to be appreciated, however, that embodiments of the invention can be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to simplify the disclosure and to help understand one or more of the various inventive aspects, in the description of the exemplary embodiments above the features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed invention requires more features than are expressly recited in each claim. More precisely, as the following claims reflect, inventive aspects lie in fewer than all features of a single embodiment disclosed above. Therefore, the claims following the specific embodiments are hereby expressly incorporated into those specific embodiments, wherein each claim stands on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module, or may furthermore be divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules of the device in an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and furthermore they may be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings), and all processes or units of any method or apparatus so disclosed, may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to fall within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor having the instructions necessary for implementing such a method or method element forms a means for implementing the method or method element. Furthermore, an element of an apparatus embodiment described here is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
The various techniques described herein may be implemented in connection with hardware or software, or with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured, according to the instructions in the program code stored in the memory, to execute the convolutional neural network generation method for performing super-resolution processing on an image and/or the image super-resolution method of the invention.
By way of example, and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
As used herein, unless otherwise specified, the use of the ordinal terms "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
Although the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be envisaged within the scope of the invention thus described. It should additionally be noted that the language used in this specification has been selected principally for readability and instructional purposes, rather than to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.
Claims (10)
1. A convolutional neural network generation method for performing super-resolution processing on an image, adapted to be executed in a computing device, the method comprising the steps of:
constructing a first process block, a second process block and a third process block respectively, the first process block comprising a first convolutional layer, the second process block comprising a second convolutional layer, and the third process block comprising a third convolutional layer, wherein the first, second and third convolutional layers each keep the size of their output image consistent with that of the corresponding input image;
constructing a fourth process block comprising a fourth convolutional layer, wherein the size ratio of the output image of the fourth convolutional layer to the corresponding input image meets a preset proportionality coefficient;
connecting one or more first process blocks, second process blocks and third process blocks with the fourth process block according to a preset concatenation rule, so as to generate a convolutional neural network having a first process block as its input and a third process block as its output;
training the convolutional neural network according to a pre-acquired image data set, so that the output of the convolutional neural network indicates the super-resolution image corresponding to the input image, wherein the image data set comprises a plurality of image groups, each image group comprising an original image and the super-resolution image corresponding to it, the size ratio of the super-resolution image to the original image meeting the proportionality coefficient.
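The size constraints recited in claim 1 amount to simple bookkeeping: the first, second and third process blocks are size-preserving, while the fourth process block multiplies the spatial size by the proportionality coefficient. The sketch below illustrates this; the function names are hypothetical, and since the claim does not fix the concatenation order, the fourth block is placed last purely for illustration.

```python
# Size bookkeeping implied by claim 1 (block internals omitted).

def same_conv_size(h, w):
    """Blocks 1-3: output image size equals the input image size."""
    return h, w

def fourth_block_size(h, w, s):
    """Block 4: output/input size ratio equals the proportionality coefficient s."""
    return h * s, w * s

def network_output_size(h, w, s, n_same_blocks):
    """Propagate an input size through n size-preserving blocks and one fourth block."""
    for _ in range(n_same_blocks):   # e.g. first, second and third process blocks
        h, w = same_conv_size(h, w)
    return fourth_block_size(h, w, s)
```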
2. The method of claim 1, wherein the step of constructing the first process block, the second process block and the third process block respectively further comprises:
constructing a first activation layer and a first batch normalization layer;
adding the sequentially connected first activation layer and first batch normalization layer after the first convolutional layer to form the first process block.
3. The method of claim 1 or 2, wherein the step of constructing the first process block, the second process block and the third process block respectively further comprises:
constructing a second activation layer and a second batch normalization layer;
adding the sequentially connected second activation layer and second batch normalization layer after the second convolutional layer to form the second process block.
4. The method of any one of claims 1-3, wherein the step of constructing the first process block, the second process block and the third process block respectively further comprises:
constructing a third activation layer and a third batch normalization layer;
adding the sequentially connected third activation layer and third batch normalization layer after the third convolutional layer to form the third process block.
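Claims 2-4 each append an activation layer followed by a batch normalization layer after the convolutional layer. A minimal elementwise sketch of that tail is shown below; ReLU is an assumption (the claims do not name the activation), and the batch normalization is the standard inference-mode formula y = gamma * (x - mean) / sqrt(var + eps) + beta.

```python
import math

# Elementwise sketch of the post-convolution layers in claims 2-4:
# activation layer, then batch normalization layer.

def relu(x):
    """Assumed activation function; the claims leave the choice open."""
    return max(0.0, x)

def batch_norm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Inference-mode batch normalization of a single value."""
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

def process_block_tail(feature, mean, var):
    """Apply activation, then batch normalization, to one feature map."""
    activated = [[relu(v) for v in row] for row in feature]
    return [[batch_norm(v, mean, var) for v in row] for row in activated]
```

Placing the normalization after the activation mirrors the order recited in the claims ("adding the sequentially connected activation layer and batch normalization layer after the convolutional layer"), even though the reverse order is also common in practice.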
5. The method of any one of claims 1-4, wherein the step of constructing the fourth process block comprises:
generating corresponding convolution kernel parameters according to the preset proportionality coefficient;
constructing the fourth convolutional layer from the convolution kernel parameters, so as to form the fourth process block comprising the fourth convolutional layer.
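One common way to realize a convolutional layer whose output/input size ratio equals a coefficient s — assumed here for illustration, since claim 5 does not fix the construction — is ESPCN-style sub-pixel convolution: the layer emits s*s channels per output channel, and a pixel-shuffle rearranges them into an image s times larger in each dimension.

```python
# Pixel-shuffle sketch: rearrange s*s channels of size H x W into one
# image of size sH x sW, as used in sub-pixel (ESPCN-style) upscaling.

def pixel_shuffle(channels, s):
    assert len(channels) == s * s
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0.0] * (w * s) for _ in range(h * s)]
    for c, chan in enumerate(channels):
        dy, dx = c // s, c % s       # sub-pixel position encoded by channel index
        for y in range(h):
            for x in range(w):
                out[y * s + dy][x * s + dx] = chan[y][x]
    return out
```

Under this realization, "generating convolution kernel parameters according to the proportionality coefficient" amounts to choosing the layer's output-channel count as a function of s.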
6. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing any one of the methods of claims 1-5.
7. A computer-readable storage medium storing one or more programs, the one or more programs including instructions which, when executed by a computing device, cause the computing device to execute any one of the methods of claims 1-5.
8. An image super-resolution method, adapted to be executed in a mobile terminal, the method performing super-resolution processing on an image based on the trained convolutional neural network of any one of claims 1-5, comprising the steps of:
acquiring an image to be processed and its corresponding super-resolution coefficient;
judging whether the super-resolution coefficient matches the trained convolutional neural network;
if they match, inputting the image to be processed into the trained convolutional neural network for super-resolution processing;
obtaining the output of the trained convolutional neural network, and determining, according to that output, the super-resolution image corresponding to the image to be processed.
9. A mobile terminal, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for executing the method according to claim 8.
10. A computer-readable storage medium storing one or more programs, the one or more programs including instructions which, when executed by a mobile terminal, cause the mobile terminal to execute the method according to claim 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811269554.6A CN109360154B (en) | 2018-10-29 | 2018-10-29 | Convolutional neural network generation method and super-resolution method of image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811269554.6A CN109360154B (en) | 2018-10-29 | 2018-10-29 | Convolutional neural network generation method and super-resolution method of image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109360154A true CN109360154A (en) | 2019-02-19 |
CN109360154B CN109360154B (en) | 2022-09-20 |
Family
ID=65347203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811269554.6A Active CN109360154B (en) | 2018-10-29 | 2018-10-29 | Convolutional neural network generation method and super-resolution method of image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109360154B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109978788A (en) * | 2019-03-25 | 2019-07-05 | 厦门美图之家科技有限公司 | Convolutional neural networks generation method, image demosaicing methods and relevant apparatus |
CN110428378A (en) * | 2019-07-26 | 2019-11-08 | 北京小米移动软件有限公司 | Processing method, device and the storage medium of image |
CN111402139A (en) * | 2020-03-25 | 2020-07-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111737193A (en) * | 2020-08-03 | 2020-10-02 | 深圳鲲云信息科技有限公司 | Data storage method, device, equipment and storage medium |
CN111881927A (en) * | 2019-05-02 | 2020-11-03 | 三星电子株式会社 | Electronic device and image processing method thereof |
CN112183736A (en) * | 2019-07-05 | 2021-01-05 | 三星电子株式会社 | Artificial intelligence processor and method for executing neural network operation |
CN112788200A (en) * | 2020-12-04 | 2021-05-11 | 光大科技有限公司 | Method and device for determining frequency spectrum information, storage medium and electronic device |
CN114092337A (en) * | 2022-01-19 | 2022-02-25 | 苏州浪潮智能科技有限公司 | Method and device for super-resolution amplification of image at any scale |
CN117408881A (en) * | 2023-09-28 | 2024-01-16 | 上海纬而视科技股份有限公司 | Super-resolution image reconstruction method based on insect compound eye vision net nerve membrane |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017219263A1 (en) * | 2016-06-22 | 2017-12-28 | 中国科学院自动化研究所 | Image super-resolution enhancement method based on bidirectional recursion convolution neural network |
CN108376386A (en) * | 2018-03-23 | 2018-08-07 | 深圳天琴医疗科技有限公司 | A kind of construction method and device of the super-resolution model of image |
CN108596833A (en) * | 2018-04-26 | 2018-09-28 | 广东工业大学 | Super-resolution image reconstruction method, device, equipment and readable storage medium storing program for executing |
- 2018-10-29: application CN201811269554.6A filed in China; granted as patent CN109360154B (status: active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017219263A1 (en) * | 2016-06-22 | 2017-12-28 | 中国科学院自动化研究所 | Image super-resolution enhancement method based on bidirectional recursion convolution neural network |
CN108376386A (en) * | 2018-03-23 | 2018-08-07 | 深圳天琴医疗科技有限公司 | A kind of construction method and device of the super-resolution model of image |
CN108596833A (en) * | 2018-04-26 | 2018-09-28 | 广东工业大学 | Super-resolution image reconstruction method, device, equipment and readable storage medium storing program for executing |
Non-Patent Citations (1)
Title |
---|
Liu Pengfei et al., "Image Super-Resolution Reconstruction Based on Convolutional Neural Network", Computer Engineering and Applications * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109978788A (en) * | 2019-03-25 | 2019-07-05 | 厦门美图之家科技有限公司 | Convolutional neural networks generation method, image demosaicing methods and relevant apparatus |
CN109978788B (en) * | 2019-03-25 | 2020-11-27 | 厦门美图之家科技有限公司 | Convolutional neural network generation method, image demosaicing method and related device |
US11861809B2 (en) | 2019-05-02 | 2024-01-02 | Samsung Electronics Co., Ltd. | Electronic apparatus and image processing method thereof |
US11257189B2 (en) | 2019-05-02 | 2022-02-22 | Samsung Electronics Co., Ltd. | Electronic apparatus and image processing method thereof |
CN111881927A (en) * | 2019-05-02 | 2020-11-03 | 三星电子株式会社 | Electronic device and image processing method thereof |
CN112183736A (en) * | 2019-07-05 | 2021-01-05 | 三星电子株式会社 | Artificial intelligence processor and method for executing neural network operation |
US11189014B2 (en) | 2019-07-26 | 2021-11-30 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and device for processing image, and storage medium |
CN110428378B (en) * | 2019-07-26 | 2022-02-08 | 北京小米移动软件有限公司 | Image processing method, device and storage medium |
CN110428378A (en) * | 2019-07-26 | 2019-11-08 | 北京小米移动软件有限公司 | Processing method, device and the storage medium of image |
CN111402139A (en) * | 2020-03-25 | 2020-07-10 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111402139B (en) * | 2020-03-25 | 2023-12-05 | Oppo广东移动通信有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
CN111737193A (en) * | 2020-08-03 | 2020-10-02 | 深圳鲲云信息科技有限公司 | Data storage method, device, equipment and storage medium |
CN112788200A (en) * | 2020-12-04 | 2021-05-11 | 光大科技有限公司 | Method and device for determining frequency spectrum information, storage medium and electronic device |
CN112788200B (en) * | 2020-12-04 | 2022-11-01 | 光大科技有限公司 | Method and device for determining frequency spectrum information, storage medium and electronic device |
CN114092337A (en) * | 2022-01-19 | 2022-02-25 | 苏州浪潮智能科技有限公司 | Method and device for super-resolution amplification of image at any scale |
CN117408881A (en) * | 2023-09-28 | 2024-01-16 | 上海纬而视科技股份有限公司 | Super-resolution image reconstruction method based on insect compound eye vision net nerve membrane |
Also Published As
Publication number | Publication date |
---|---|
CN109360154B (en) | 2022-09-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109360154A (en) | A kind of super-resolution method of convolutional neural networks generation method and image | |
CN110363279B (en) | Image processing method and device based on convolutional neural network model | |
CN114127742A (en) | System and method for cross-channel, shift-based information mixing for a mixed-rank-like networking neural network | |
JP7096888B2 (en) | Network modules, allocation methods and devices, electronic devices and storage media | |
CN109800877A (en) | Parameter regulation means, device and the equipment of neural network | |
CN107424184B (en) | A kind of image processing method based on convolutional neural networks, device and mobile terminal | |
CN108197602A (en) | A kind of convolutional neural networks generation method and expression recognition method | |
CN108537283A (en) | A kind of image classification method and convolutional neural networks generation method | |
CN107851307A (en) | To the method and system of the Bayer types of image data demosaicing for image procossing | |
US11586903B2 (en) | Method and system of controlling computing operations based on early-stop in deep neural network | |
CN109584179A (en) | A kind of convolutional neural networks model generating method and image quality optimization method | |
JP2022502733A (en) | Data representation for dynamic accuracy in neural network cores | |
CN111695682B (en) | Data processing method and device | |
CN109118490A (en) | A kind of image segmentation network generation method and image partition method | |
CN106295707B (en) | Image-recognizing method and device | |
CN110288518A (en) | Image processing method, device, terminal and storage medium | |
KR20190107766A (en) | Computing device and method | |
CN109754359A (en) | A kind of method and system that the pondization applied to convolutional neural networks is handled | |
CN107369174A (en) | The processing method and computing device of a kind of facial image | |
CN110062176A (en) | Generate method, apparatus, electronic equipment and the computer readable storage medium of video | |
CN109934773A (en) | A kind of image processing method, device, electronic equipment and computer-readable medium | |
CN108960411A (en) | A kind of adjustment of convolutional neural networks and relevant apparatus | |
CN108288089A (en) | Method and apparatus for generating convolutional neural networks | |
KR20200132340A (en) | Electronic device and Method for controlling the electronic device thereof | |
CN103518227A (en) | Depth buffer compression for stochastic motion blur rasterization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||