CN109949225A - Image processing method and computing device - Google Patents
Image processing method and computing device
- Publication number
- CN109949225A CN109949225A CN201910180903.5A CN201910180903A CN109949225A CN 109949225 A CN109949225 A CN 109949225A CN 201910180903 A CN201910180903 A CN 201910180903A CN 109949225 A CN109949225 A CN 109949225A
- Authority
- CN
- China
- Prior art keywords
- image
- resolution
- component
- machine learning
- learning model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image processing method and a computing device. The image processing method comprises the steps of: obtaining a low-resolution image corresponding to an original image; and inputting the low-resolution image and the original image, as input images, into an image processing machine learning model component to obtain a high-resolution complete image that has a higher resolution and richer detail information than the original image. The image processing machine learning model component is trained using multiple pre-obtained image sets, the multiple image sets comprising a high-resolution complete image set, a high-resolution image set, and a low-resolution image set.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an image processing method and a computing device.
Background technique
In recent years, guided filtering methods that filter an image under a guidance image have attracted wide attention, because they preserve image edges while making the texture of the output approximate that of the guidance image; they are therefore widely used in scenarios such as image enhancement, image matting, and image dehazing. On this basis, an image processing method that couples a guided filtering unit with a convolutional neural network (for example, deep guided filtering) has been proposed. This method introduces a guided filtering layer to form a deep guided filtering network, so that a high-resolution image can serve as the guidance image for a low-resolution image and a full-resolution picture can be output. However, the processing results of this method suffer from problems such as loss of image detail and over-smoothed images. Therefore, a technical solution with better image processing effects (for example, rich image detail without over-smoothing) is needed.
Summary of the invention
To this end, the present invention provides an image processing method and a computing device, in an effort to solve, or at least alleviate, at least one of the problems described above.
According to one aspect of the invention, an image processing method is provided, suitable for execution in a computing device. The method includes the following steps: obtaining a low-resolution image corresponding to an original image; and inputting the low-resolution image and the original image, as input images, into an image processing machine learning model component to obtain a high-resolution complete image that has a higher resolution and richer detail information than the original image, wherein the image processing machine learning model component is trained using multiple pre-obtained image sets, the multiple image sets including a high-resolution complete image set, a high-resolution image set, and a low-resolution image set.
Optionally, the image processing machine learning model component is generated by coupling a deep guided filtering component with a recurrent neural network component.
Optionally, the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having a higher resolution and richer detail information than the original image includes: inputting the low-resolution image and the original image into the deep guided filtering component to obtain an output image corresponding to the original image; and inputting the output image into the recurrent neural network component to obtain the high-resolution complete image.
Optionally, training the image processing machine learning model component using the multiple pre-obtained image sets includes the step of: jointly training the deep guided filtering component and the recurrent neural network component using the multiple image sets, thereby completing the training of the image processing machine learning model component.
Optionally, the step of training the image processing machine learning model component using the multiple image sets includes: obtaining a high-resolution complete image set, a high-resolution image set, and a low-resolution image set; constructing the image processing machine learning model component, in which training parameters are provided; and training the image processing machine learning model component using the correspondences between the high-resolution image set, the low-resolution image set, and the high-resolution complete images, adjusting the training parameters until the image processing machine learning model component meets a preset requirement.
Optionally, the structure of the image processing machine learning model component includes a guided filtering network and a recurrent neural network.
Optionally, the multiple image sets are formed by respectively extracting, from the multiple images corresponding to each picture, a high-resolution complete image, a high-resolution image, and a low-resolution image, thereby forming the high-resolution complete image set, the high-resolution image set, and the low-resolution image set.
Optionally, the preset condition is that a loss value computed with a loss function reaches a threshold.
Optionally, the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having a higher resolution and richer detail information than the original image includes: according to the image channels that make up the low-resolution image and the original image, inputting the image data corresponding to each image channel separately into the image processing machine learning model component to obtain final image data corresponding to each image channel; and generating the high-resolution complete image from the final image data of all channels.
Optionally, the image processing machine learning model component is generated by coupling a deep guided filtering component, a recurrent neural network component, and a fully convolutional neural network component.
Optionally, training the image processing machine learning model component using the multiple pre-obtained image sets includes: jointly training the deep guided filtering component, the recurrent neural network component, and the fully convolutional neural network component using the multiple image sets, thereby completing the training of the image processing machine learning model component.
Optionally, the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having a higher resolution and richer detail information than the original image includes: inputting the low-resolution image and the original image into the deep guided filtering component to obtain an output image corresponding to the original image; inputting the output image into the recurrent neural network component and obtaining multiple recursive images through outputs at different recursion counts; and inputting the multiple recursive images into the fully convolutional network component to obtain an intermediate image as the high-resolution complete image.
Optionally, the step of inputting the multiple recursive images into the fully convolutional network component to obtain an intermediate image as the high-resolution complete image includes: inputting the multiple recursive images into the fully convolutional network component to obtain the intermediate image; and performing superposition processing on the intermediate image and the output image to obtain the high-resolution complete image.
According to a further aspect of the invention, a computing device is provided, including one or more processors, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for executing the image processing method according to the present invention.
According to a further aspect of the invention, a computer-readable storage medium storing one or more programs is provided. The one or more programs include instructions which, when executed by a computing device, cause the computing device to execute the image processing method according to the present invention.
In conclusion the image processing method of an exemplary embodiment of the present invention learns mould using image processing machine
Type component obtains the high-resolution complete image of original image, and realizing enhances the details of original image and image point can be improved
Resolution.Further, depth guiding filtering component can be coupled with recurrent neural network, so as to draw using depth
It leads after filtering unit realizes the effect enhanced the details of original image and utilizes recurrent neural network component acquisition high resolution
Image.Further, disturbance operation can be increased to each image set, to improve network adaptability in the training process.More
Further, the high-frequency information in image can be learnt using recurrent neural network component, determined followed by full convolutional network high
The weight of frequency information, to enhance the detailed information in image.Further, each iteration diagram is determined by full convolutional network
The weight of picture, to keep image apparent.
Detailed description of the invention
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate the various ways in which the principles disclosed herein can be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the disclosure will become apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout the disclosure, identical reference numerals generally refer to identical components or elements.
Fig. 1 shows a schematic diagram of a computing device 100 according to an embodiment of the invention;
Fig. 2 shows a flow chart of an image processing method 200 according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of performing image processing on an original image using a deep guided filtering component according to an embodiment of the invention;
Fig. 4 shows a schematic diagram of performing image processing on an original image using the image processing method according to an embodiment of the invention;
Fig. 5 shows a schematic diagram of performing image processing on an original image using the image processing method according to an embodiment of the invention;
Fig. 6 shows a structural block diagram of a mobile terminal 600 according to an embodiment of the invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided to facilitate a more thorough understanding of the disclosure and to fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 can be any type of processor, including but not limited to: a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination of them. The processor 104 may include one or more levels of cache, such as a level-one cache 110 and a level-two cache 112, a processor core 114, and registers 116. An exemplary processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core), or any combination of them. An exemplary memory controller 118 can be used together with the processor 104, or, in some implementations, the memory controller 118 can be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 can be any type of memory, including but not limited to: volatile memory (such as RAM), nonvolatile memory (such as ROM and flash memory), or any combination of them. The system memory 106 may include an operating system 120, one or more programs 122, and program data 124. In some embodiments, the programs 122 may be arranged to be executed by the one or more processors 104 on the operating system, using the program data 124.
The computing device 100 can also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. Example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which can be configured to facilitate communication with various external devices, such as a display or loudspeakers, via one or more A/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which can be configured to facilitate communication, via one or more I/O ports 158, with external devices such as input devices (for example, a keyboard, mouse, pen, voice-input device, or touch-input device) or other peripherals (such as printers and scanners). An exemplary communication device 146 may include a network controller 160, which can be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
The network communication link can be an example of a communication medium. Communication media can typically be embodied as computer-readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and can include any information delivery medium. A "modulated data signal" can be a signal in which one or more of its characteristics are set or changed in such a manner as to encode information in the signal. As a non-limiting example, communication media can include wired media such as a cable network or a dedicated-line network, and various wireless media such as sound, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer-readable medium as used herein may include both storage media and communication media.
The computing device 100 can be implemented as a server, such as a file server, database server, application server, or web server, and can also be implemented as part of a small-sized portable (or mobile) electronic device, such as a cellular phone, a personal digital assistant (PDA), a personal media player device, a wireless web-browsing device, a personal headset device, an application-specific device, or a hybrid device that includes any of the above functions. The computing device 100 can also be implemented as a personal computer, including desktop and notebook computer configurations.
In some embodiments, the computing device 100 is configured to execute the image processing method according to the present invention, and the one or more programs 122 of the computing device 100 include instructions for executing that image processing method.
Before describing Fig. 2, it should be made clear that the deep guided filtering component involved in the present invention refers to a component in which a guided filtering unit is coupled with a convolutional neural network. The guided filtering unit can filter an input image using a guidance image, so that the output image is generally similar to the initial picture while its texture is similar to the guidance image. On this basis, the deep guided filtering component introduces a convolutional neural network: before guided filtering is executed, the low-resolution image is input into the convolutional network, which likewise outputs a low-resolution image; finally, these images are input into the guided filtering layer, and after the guided filtering processing the final image is obtained. That is, the deep guided filtering component relies entirely on the convolutional neural network's processing result for the low-resolution image.
Fig. 2 shows a flow chart of an image processing method 200 according to an embodiment of the present invention. The image processing method 200 is suitable for execution in a computing device (such as the computing device 100 shown in Fig. 1).
For ease of description, the images that appear in the present invention are named herein as follows: the original image can be denoted M or I_h; the low-resolution image corresponding to the original image M can be denoted I_l; and the output high-resolution complete image can be denoted O_h.
As shown in Fig. 2, the method 200 starts at step S210. In step S210, a low-resolution image corresponding to the original image M is obtained. Resolution refers to the amount of information stored in an image, that is, how many pixels there are per inch of image; resolution is therefore often called pixels per inch. For example, the resolution of some image may be 1280*960; a low-resolution image is then an image of lower resolution.
According to an exemplary embodiment of the application, the low-resolution image corresponding to the original image can be obtained by performing down-sampling processing on the original image, where down-sampling can be simply understood as a reduction operation; the down-sampling processing may include, but is not limited to, nearest-neighbor interpolation, bilinear interpolation, mean interpolation, and median interpolation.
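As an illustration of this down-sampling step, the following minimal numpy sketch shows two of the reduction operations mentioned above, nearest-neighbor and mean down-sampling; the function names and the fixed factor are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def downsample_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor reduction: keep every `factor`-th pixel."""
    return img[::factor, ::factor]

def downsample_mean(img: np.ndarray, factor: int) -> np.ndarray:
    """Mean reduction: average over non-overlapping factor x factor blocks."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

original = np.arange(16, dtype=np.float64).reshape(4, 4)  # stand-in original image
low_res = downsample_mean(original, 2)                    # 2x2 low-resolution image
```

Either routine maps the original image to a lower-resolution version; in the patent's pipeline the result would play the role of I_l.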
Then, in step S220, the low-resolution image I_l and the original image M are input, as input images, into the image processing machine learning model component, and a high-resolution complete image O_h having a higher resolution and richer detail information than the original image M is obtained. The image processing machine learning model component is trained using multiple pre-obtained image sets, the multiple image sets including a high-resolution complete image set, a high-resolution image set, and a low-resolution image set.
It should be noted that the multiple image sets are formed by respectively extracting, from the multiple images corresponding to each picture, a high-resolution complete image, a high-resolution image, and a low-resolution image, thereby forming the high-resolution complete image set, the high-resolution image set, and the low-resolution image set. The high-resolution complete image has high resolution and rich detail information, and is thus the image the user expects to obtain; the high-resolution image has high resolution but lacks rich detail; and the low-resolution image has both low resolution and poor detail information.
In the present embodiment, these training sets (the high-resolution complete image set, high-resolution image set, and low-resolution image set described above) can be obtained through the following steps: multiple high-resolution, detail-rich images are obtained as the high-resolution complete image set; noise is then added to the high-resolution complete image set to obtain the high-resolution image set; and finally the high-resolution image set is down-sampled to obtain the low-resolution image set.
That is, the above image sets are input into the image processing machine learning model, and the model is trained using the correspondences that exist among these image sets until it meets a predetermined requirement. According to one embodiment of the present invention, a reserved part of these image sets in the training set can be used as a test set; it should be noted that the size of the test set can be determined empirically by the technician.
For example, in the training process, continuous training can make a loss function such as the following decrease continuously until convergence; the preset requirement can then be that the loss value falls below a predetermined threshold:

Loss = Σ_{i=1}^{n} (y_i − f(x_i))²    (Formula 1)

where n denotes the total number of pixels in the image, f(·) represents the gray value of the output image, y represents the gray value of the low-resolution complete image, and x represents a pixel.
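Formula 1 can be sketched directly as a sum of squared per-pixel gray-value differences; the helper name `training_loss` is an illustrative assumption, and the two gray-value arrays are assumed to be already aligned.

```python
import numpy as np

def training_loss(output: np.ndarray, reference: np.ndarray) -> float:
    """Formula 1: sum over all pixels of (y_i - f(x_i))^2."""
    return float(np.sum((reference - output) ** 2))

pred = np.array([[0.5, 1.0], [0.0, 0.25]])  # f(x): output gray values
ref = np.array([[1.0, 1.0], [0.0, 0.75]])   # y: reference gray values
loss = training_loss(pred, ref)
```

During training, this value would be driven down until it falls below the predetermined threshold.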
According to an exemplary embodiment of the application, the image processing machine learning model component can be generated by coupling a deep guided filtering component with a recurrent neural network component. That is, in this application, the deep guided filtering component and the recurrent neural network can be trained in combination, thereby achieving the purpose of training the image processing machine learning model component. Generally speaking, after the deep guided filtering component has been trained using the multiple image sets, the recurrent neural network component is then trained, thereby completing the training of the image processing machine learning model component. Figuratively, the data stream corresponding to the image passes through the deep filtering component and then through the recurrent neural network, producing the final data stream and thereby achieving end-to-end processing.
According to an exemplary embodiment of the application, the step of training the image processing machine learning model component using the multiple image sets includes: obtaining the high-resolution complete image set, the high-resolution image set, and the low-resolution image set; constructing the image processing machine learning model component, with training parameters provided in the model component; and training the image processing machine learning model component using the correspondences between the high-resolution image set, the low-resolution image set, and the high-resolution complete images, adjusting the training parameters until the image processing machine learning model component meets the preset requirement.
Specifically, the process of training the deep guided filtering component may include the following steps: obtaining the high-resolution image set, the low-resolution image set, and the high-resolution complete image set; constructing the deep guided filtering component, with training parameters provided in the guided filtering model; and training the deep guided filtering component using the correspondences between the low-resolution image set, the high-resolution image set, and the high-resolution complete image set, adjusting the training parameters until the deep guided filtering component meets the preset requirement, where the preset condition can also be the loss value represented by Formula 1 above.
The deep guided filtering component is described in detail below with reference to Fig. 3. As shown in Fig. 3, the input image I_h is down-sampled to obtain a low-resolution image I_l corresponding to the input image I_h; the convolutional neural network C_l(I_l) is then applied to produce a low-resolution output image G_l corresponding to the low-resolution image; then, using I_h, I_l, and G_l as inputs, the guided filtering layer yields a high-resolution image G_h. It should be noted that, in this description, each image is treated as a single-channel image; if each image consists of multiple channels (for example, 3 channels), the above operations can be executed on each channel separately. Therefore, when training the deep guided filtering component shown in Fig. 3, the correspondences between the high-resolution image set, the low-resolution image set, and the high-resolution complete image set allow the training of the deep guided filtering component to be realized.
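The internals of the guided filtering layer are not spelled out in the text. As a hedged sketch, the following numpy implementation follows the standard single-channel guided filter formulation (a local linear model q = a·I + b fitted over a box window); the window radius `r`, the regularization `eps`, and the edge-padded box mean are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def box_mean(x: np.ndarray, r: int) -> np.ndarray:
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via separable cumulative sums."""
    k = 2 * r + 1
    p = np.pad(x, r, mode="edge")
    c = np.cumsum(p, axis=0)
    rows = (c[k - 1:, :] - np.vstack([np.zeros((1, p.shape[1])), c[:-k, :]])) / k
    c = np.cumsum(rows, axis=1)
    return (c[:, k - 1:] - np.hstack([np.zeros((rows.shape[0], 1)), c[:, :-k]])) / k

def guided_filter(I: np.ndarray, p: np.ndarray, r: int = 2, eps: float = 1e-4) -> np.ndarray:
    """Filter p under guidance I: a = cov(I,p)/(var(I)+eps), b = mean(p) - a*mean(I)."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    # q = mean(a)*I + mean(b): output follows I's structure, p's overall content
    return box_mean(a, r) * I + box_mean(b, r)
```

In the component of Fig. 3, a layer of this kind would take the role of the guided filtering layer that maps I_h, I_l, and G_l to G_h.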
According to one embodiment of the present invention, the recurrent neural network is an artificial neural network with a tree-like hierarchical structure whose network nodes process input information recursively in their connection order; that is to say, the recurrent neural network is a neural network with memory. Accordingly, the nodes between the hidden layers of the recurrent neural network are connected, and the input of a hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous moment.
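The "memory" described here, a hidden layer whose input combines the current input-layer output with the previous hidden state, can be sketched in a few lines; the weight shapes, the tanh activation, and the random inputs are illustrative assumptions only.

```python
import numpy as np

def recurrent_step(x, h_prev, W_x, W_h, b):
    """Hidden state sees both the current input x and the previous hidden state h_prev."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))   # input-to-hidden weights (shapes assumed)
W_h = rng.normal(size=(4, 4))   # hidden-to-hidden (memory) weights
b = np.zeros(4)

h = np.zeros(4)
for t in range(3):              # unroll three recursion steps
    h = recurrent_step(rng.normal(size=3), h, W_x, W_h, b)
```

Each step's output depends on the whole input history through `h`, which is the memory property the text relies on.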
The process of training the recurrent neural network component may include the following steps: constructing the recurrent neural network component, with training parameters provided in it; and training the recurrent neural network component using the correspondence between the high-resolution image set output by the deep guided filtering (serving as the high-resolution output image set) and the higher-quality high-resolution complete image set, adjusting the training parameters until the recurrent neural network component meets the preset requirement, where the preset condition can also be the loss value represented by Formula 1 above.
In addition, when training the deep guided filtering component and the recurrent neural network, perturbation processing can be applied to the training set (namely, the multiple image sets described above). The perturbation processing includes, but is not limited to, random noise processing and random blur processing, where random noise processing may include, but is not limited to, Gaussian noise and salt-and-pepper noise, and random blur processing may include, but is not limited to, Gaussian blur and mean blur; random numbers generated by the above methods can serve as the parameters of the perturbation processing. It should further be noted that, although training the image processing machine learning object in two training passes has been described, in actual practice a single training pass may be used.
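A hedged sketch of the perturbation processing described above, combining Gaussian noise and salt-and-pepper noise with randomly drawn parameters; the probability thresholds and noise levels are illustrative assumptions, since the patent leaves them unspecified.

```python
import numpy as np

def perturb(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly apply Gaussian noise or salt-and-pepper noise to a [0,1] image."""
    out = img.copy()
    if rng.random() < 0.5:
        # Gaussian noise with a randomly drawn standard deviation
        out += rng.normal(0.0, rng.uniform(0.01, 0.05), size=img.shape)
    else:
        mask = rng.random(img.shape)
        out[mask < 0.02] = 0.0          # pepper
        out[mask > 0.98] = 1.0          # salt
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(7)
clean = np.full((8, 8), 0.5)            # stand-in training image
noisy = perturb(clean, rng)             # perturbed copy; `clean` is unchanged
```

Applying such a perturbation to each training image set is what the text suggests for improving the network's adaptability.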
Using the above method, after the training of the image processing machine learning model component is completed, the low-resolution image and the original image can be jointly input into the trained image processing machine learning model component, and the resulting high-resolution output image is input into the recurrent neural network component to obtain the high-resolution complete image corresponding to the high-resolution output image.
Inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having a higher resolution and richer detail information than the original image includes: inputting the low-resolution image and the original image into the deep guided filtering component to obtain a high-resolution output image corresponding to the original image; and inputting the high-resolution output image into the recurrent neural network component to obtain the high-resolution complete image.
According to one embodiment of the present invention, according to the image channels that make up the low-resolution image and the original image, the image data corresponding to each image channel is input separately into the image processing machine learning model component to obtain final image data corresponding to each image channel; the high-resolution complete image is then generated from the final image data of all channels.
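The per-channel processing described here can be sketched as follows; the `model` argument stands in for the image processing machine learning model component and is an illustrative assumption.

```python
import numpy as np

def process_per_channel(image: np.ndarray, model) -> np.ndarray:
    """Run the model component on each channel separately, then restack the results."""
    channels = [model(image[..., c]) for c in range(image.shape[-1])]
    return np.stack(channels, axis=-1)

rgb = np.zeros((4, 4, 3))                                # stand-in 3-channel image
result = process_per_channel(rgb, lambda ch: ch + 1.0)   # trivial stand-in model
```

Each channel is handled as a single-channel image, matching the note for Fig. 3 that multi-channel images are processed one channel at a time.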
In addition, according to one embodiment of the present invention, the image processing machine learning model component is generated by coupling a deep guided filtering component, a recurrent neural network component, and a fully convolutional neural network component. That is, a fully convolutional neural network component can be added to the structure described above; in this way, multiple recursive images are obtained through different recursion counts using the above operations, and these recursive images are then superposed to obtain the high-resolution complete image. Alternatively, these recursive images, after passing through the fully convolutional neural network component, can be superposed on the output image obtained by the above operations to obtain the final high-resolution complete image.
Specifically, the deep guided filtering component, the recurrent neural network component, and the fully convolutional neural network component are jointly trained using the multiple image sets, thereby completing the training of the image processing machine learning model component. After the training of the image processing machine learning model component is completed, the original image can be input into the image processing machine learning model component.
Then, a low-resolution image corresponding to the original image is obtained; the low-resolution image and the original image are input into the deep guided filtering component to obtain an output image corresponding to the original image; the output image is input into the recurrent neural network component, and multiple recursive images are obtained through outputs at different recursion counts; and the multiple recursive images are input into the fully convolutional network component to obtain an intermediate image as the high-resolution complete image.
Optionally, the multiple recursive images are input into the fully convolutional network component to obtain an intermediate image, and superposition processing is performed on the intermediate image and the output image to obtain the high-resolution complete image.
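The superposition processing described here reads as an element-wise (residual-style) addition of the intermediate image and the output image; the clipping to a [0, 1] value range is an illustrative assumption.

```python
import numpy as np

def superpose(intermediate: np.ndarray, output: np.ndarray) -> np.ndarray:
    """Element-wise superposition of the FCN intermediate image and the output image."""
    return np.clip(intermediate + output, 0.0, 1.0)

detail = np.full((2, 2), 0.2)   # stand-in intermediate image (learned detail)
base = np.full((2, 2), 0.6)     # stand-in output image from the guided filtering path
final = superpose(detail, base) # high-resolution complete image
```

Under this reading, the fully convolutional component only needs to learn the detail to add on top of the guided-filtering output.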
In order to describe the process of the application more clearly, it is described in detail below with reference to Fig. 4. Fig. 4 shows a schematic diagram of performing image processing on an original image using the image processing method according to an embodiment of the invention.
As shown in Fig. 4, for ease of description, the original image may be regarded as a high-resolution image I_h. Down-sampling is then performed on the high-resolution image I_h to obtain a low-resolution image I_l. The low-resolution image I_l is applied to a convolutional neural network C_l(I_l) to produce a low-resolution output image G_l corresponding to the low-resolution image. Then, using I_h, I_l and G_l as inputs, a complete image G_h is obtained through the guided filtering layer; that is, the image G_h is obtained using the depth guided filtering component shown in Fig. 3. Finally, the image G_h is input to the recurrent neural network layer to obtain an output image O_h. In this way, the output image O_h not only has enhanced details but also improved resolution.
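The guided filtering layer in Fig. 4 can be illustrated with a deliberately simplified variant: fit a single global linear model G_l ≈ a·I_l + b at low resolution, then transfer the coefficients to the high-resolution guide I_h. A real guided filter fits such a model per local window; this global form is only a sketch, and `G_l` below is a hand-made stand-in for the network output C_l(I_l).

```python
import numpy as np

def guided_filter_layer(I_h, I_l, G_l, eps=1e-8):
    # Fit G_l ≈ a * I_l + b at low resolution (least squares for a
    # single global linear model), then apply the same coefficients
    # to the high-resolution guide I_h.
    a = ((I_l * G_l).mean() - I_l.mean() * G_l.mean()) / (I_l.var() + eps)
    b = G_l.mean() - a * I_l.mean()
    return a * I_h + b

I_h = np.arange(16.0).reshape(4, 4)
I_l = I_h[::2, ::2]            # naive downsampling for illustration
G_l = 2.0 * I_l + 1.0          # pretend low-resolution network output
G_h = guided_filter_layer(I_h, I_l, G_l)
```

Because G_l is here an exact linear function of I_l, the layer recovers that mapping and applies it to I_h, so G_h ≈ 2·I_h + 1: the low-resolution result is transferred to full resolution while the guide's structure is preserved.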
That is, according to an embodiment of the present invention, the original image is converted into a high-resolution complete image after passing through the guided filtering network and the recurrent neural network in Fig. 4. In this embodiment, the guided filtering network and the recurrent neural network each include a data input layer, a convolution layer, an activation layer, a pooling layer and a fully connected layer.
In one embodiment of the invention, the parameters of the convolution layer in the guided filtering network may be set as follows: the convolution kernel may be set to 3*3, and a value of 1 may be used for the boundary zero-padding parameter, meaning that each row and each column of the one-pixel border outside the edge of the layer's input image is filled with 0; the stride is set to 1; and the grouping (group) parameter, which indicates the grouping of inputs and outputs, defaults to 1, meaning that all input and output channels form one group and each output channel is obtained by convolving over all input channels.
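These settings (3*3 kernel, zero-padding of 1, stride 1) preserve the spatial size of the input, which the standard convolution output-size formula confirms:

```python
def conv_output_size(n, kernel=3, padding=1, stride=1):
    # Standard convolution arithmetic: output = (n + 2p - k) // s + 1.
    # With a 3*3 kernel, padding 1 and stride 1 the size is unchanged.
    return (n + 2 * padding - kernel) // stride + 1
```

For example, `conv_output_size(64)` is 64, while dropping the padding (`conv_output_size(64, padding=0)`) shrinks the output to 62.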
In one embodiment of the invention, the parameters of the convolution layer in the recurrent neural network may be set in the same way: the convolution kernel may be set to 3*3, with a boundary zero-padding value of 1 (each row and each column of the one-pixel border outside the edge of the input image is filled with 0), a stride of 1, and a default grouping of 1, so that all input and output channels form one group and each output channel is obtained by convolving over all input channels.
The activation layers in the above guided filtering network and recurrent neural network may use PReLU (Parametric Rectified Linear Unit), Leaky ReLU (leaky rectified linear unit), the Tanh function, or the like as the activation function, so as to adjust the output of the convolution and batch normalization layers and prevent the next layer's output from being a mere linear combination of the previous layer, which would be unable to approximate arbitrary functions. It should be noted that the embodiments of the present invention do not limit the specific manner in which the activation layer function is realized; any activation function with the above capability can be combined with the embodiments of the present invention.
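The activation functions named above are easy to state concretely. In the sketch below the PReLU slope is fixed at 0.25 for illustration, whereas in practice it is a learned parameter:

```python
import numpy as np

def prelu(x, alpha=0.25):
    # PReLU: identity for non-negative inputs; slope alpha (learned in
    # practice, fixed here) for negative inputs.
    return np.where(x >= 0, x, alpha * x)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: like PReLU but with a small fixed negative slope.
    return np.where(x >= 0, x, slope * x)

x = np.array([-2.0, 0.0, 3.0])
```

All three choices (PReLU, Leaky ReLU, Tanh via `np.tanh`) are nonlinear, which is exactly what prevents stacked convolutions from collapsing into a single linear map.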
Fig. 5 shows a schematic diagram of performing image processing on an original image using an image processing method according to an embodiment of the invention. After a recursive image Res_1 is obtained through the guided filtering network layer and the recurrent neural network layer shown in Fig. 4, this recursive image is input to the recurrent neural network layer again to obtain a recursive image Res_2. After several such recursions, multiple recursive images Res_1, Res_2, ..., Res_n are obtained. Assuming the number of recursions is 5, five recursive images are obtained; if the processed image has three channels, each channel corresponds to five recursive images, giving 15 images in total.
At this point, the five recursive images corresponding to each channel can be superposed to obtain the final image. It should be noted that the superposition may be followed by mean removal, or a weight may be assigned to each recursive image as needed before superposition.
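The recursion and per-channel superposition described above can be sketched as follows, with a simple smoothing step standing in for the recurrent neural network layer. Running `recurse` on each of three channels with five recursions would yield the 15 images mentioned in the text.

```python
import numpy as np

def recurse(step, image, n=5):
    # Re-apply the same network (here a placeholder refinement step)
    # n times, keeping each intermediate result Res_1 ... Res_n.
    results, current = [], image
    for _ in range(n):
        current = step(current)
        results.append(current)
    return results

def superpose(recursive_images, weights=None, remove_mean=False):
    # Weighted superposition of one channel's recursive images; mean
    # removal after the sum is optional, as described above.
    if weights is None:
        weights = [1.0 / len(recursive_images)] * len(recursive_images)
    out = sum(w * r for w, r in zip(weights, recursive_images))
    return out - out.mean() if remove_mean else out

channel = np.arange(9.0).reshape(3, 3)               # one toy channel
res = recurse(lambda img: 0.5 * img + 1.0, channel)  # Res_1 ... Res_5
final = superpose(res)
```

With `weights=None` the superposition is a plain average; passing explicit weights corresponds to the per-image weighting the text mentions.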
In order to determine the weight of each recursive image, according to an embodiment of the present invention, a full convolutional network may be introduced. That is, the obtained recursive images and the obtained output image may be input, channel by channel, to the full convolutional neural network shown in Fig. 5. The weight of the high-frequency information in each recursive image is then determined through training, which finally determines the parameters of the full convolutional network.
In one embodiment of the invention, the parameters of the convolution layer of the full convolutional neural network may be set as follows: the convolution kernel may be set to 1*1, and a value of 1 may be used for the boundary zero-padding parameter, meaning that each row and each column of the one-pixel border outside the edge of the layer's input image is filled with 0; the stride is set to 1; and the grouping (group) parameter, which indicates the grouping of inputs and outputs, defaults to 5, meaning that one output is obtained from five inputs, i.e. the input-to-output ratio is 5. The activation layer in the full convolutional neural network may use the PReLU activation function.
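The grouped 1*1 combination described here (five inputs per output) amounts to a weighted sum within each group of channels. The sketch below uses toy weights rather than trained ones, and omits padding, which a 1*1 kernel does not need for this illustration:

```python
import numpy as np

def grouped_1x1_conv(x, weights):
    # 1*1 convolution with grouping: channels are split into groups of
    # five (one group per image channel, five recursive images each),
    # and each output channel is a weighted sum of the five channels in
    # its group -- the "five inputs, one output" setting in the text.
    groups, per_group = weights.shape
    x = x.reshape(groups, per_group, *x.shape[1:])
    return np.einsum('gk,gkhw->ghw', weights, x)

x = np.ones((15, 4, 4))       # 3 image channels x 5 recursive images
w = np.full((3, 5), 0.2)      # uniform stand-in for learned weights
y = grouped_1x1_conv(x, w)    # 3 output channels, one per group
```

In the trained network, `w` is exactly what the training procedure described above determines: the learned contribution of each recursive image's high-frequency information to its channel's output.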
In conclusion the image processing method of an exemplary embodiment of the present invention learns mould using image processing machine
Type component obtains the high-resolution complete image of original image, and realizing enhances the details of original image and image point can be improved
Resolution.Further, depth guiding filtering component can be coupled with recurrent neural network, so as to draw using depth
It leads after filtering unit realizes the effect enhanced the details of original image and utilizes recurrent neural network component acquisition high resolution
Image.Further, disturbance operation can be increased to each image set, to improve network adaptability in the training process.More
Further, the high-frequency information in image can be learnt using the recurrent neural network component, it is true followed by full convolutional network
The weight of high-frequency information is determined, to enhance the detailed information in image.Further, each change is determined by full convolutional network
For the weight of image, to keep image apparent.
Fig. 6 shows a structural block diagram of a mobile terminal 600 according to an embodiment of the invention. The mobile terminal 600 may include a memory interface 602, one or more data processors, image processors and/or central processing units 604, and a peripheral interface 606.
The memory interface 602, the one or more processors 604 and/or the peripheral interface 606 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 600, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 606 to help realize a variety of functions.
For example, a motion sensor 610, a light sensor 612 and a range sensor 614 may be coupled to the peripheral interface 606 to facilitate functions such as orientation, illumination and ranging. Other sensors 616 may likewise be connected to the peripheral interface 606, such as a positioning system (e.g. a GPS receiver), a temperature sensor, a biometric sensor or other sensing devices, thereby helping to implement related functions.
A camera subsystem 620 and an optical sensor 622 may be used to facilitate camera functions such as recording photographs and video clips, where the camera subsystem and the optical sensor may be, for example, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor. Communication functions may be facilitated by one or more wireless communication subsystems 624, where a wireless communication subsystem may include a radio-frequency receiver and transmitter and/or an optical (e.g. infrared) receiver and transmitter. The particular design and implementation of the wireless communication subsystem 624 may depend on the one or more communication networks supported by the mobile terminal 600. For example, the mobile terminal 600 may include a communication subsystem 624 designed to support LTE, 3G, GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth™ networks.
An audio subsystem 626 may be coupled with a speaker 628 and a microphone 630 to help implement voice-enabled functions such as speech recognition, speech reproduction, digital recording and telephony. An I/O subsystem 640 may include a touch-screen controller 642 and/or one or more other input controllers 644. The touch-screen controller 642 may be coupled to a touch screen 646. For example, the touch screen 646 and the touch-screen controller 642 may use any of a variety of touch-sensing technologies to detect contact and movement or pauses made therewith, where the sensing technologies include but are not limited to capacitive, resistive, infrared and surface acoustic wave technologies. The one or more other input controllers 644 may be coupled to other input/control devices 648, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 628 and/or the microphone 630.
The memory interface 602 may be coupled with a memory 650. The memory 650 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 650 may store an operating system 672, for example an operating system such as Android, iOS or Windows Phone. The operating system 672 may include instructions for handling basic system services and for performing hardware-dependent tasks. The memory 650 may also store one or more programs 674. When the mobile device runs, the operating system 672 is loaded from the memory 650 and executed by the processor 604. The programs 674, when run, are likewise loaded from the memory 650 and executed by the processor 604. The programs 674 run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to implement various functions desired by the user, such as instant messaging, web browsing and picture management. The programs 674 may be provided independently of the operating system or may be bundled with the operating system. In addition, when a program 674 is installed in the mobile terminal 600, a driver module may also be added to the operating system. The programs 674 may be arranged to have relevant instructions executed on the operating system by the one or more processors 604. In some embodiments, the mobile terminal 600 is configured to perform the image processing method 200 according to the present invention, and the one or more programs 674 of the mobile terminal 600 include instructions for performing the image processing method 200.
In conclusion the image processing method of an exemplary embodiment of the present invention learns mould using image processing machine
Type component obtains the high-resolution complete image of original image, and realizing enhances the details of original image and image point can be improved
Resolution.Further, depth guiding filtering component can be coupled with recurrent neural network, so as to draw using depth
It leads after filtering unit realizes the effect enhanced the details of original image and utilizes recurrent neural network component acquisition high resolution
Image.Further, disturbance operation can be increased to each image set, to improve network adaptability in the training process.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the disclosure and aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art should understand that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or may furthermore be divided into multiple sub-modules.
Those skilled in the art will understand that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
The present invention also discloses:
A9. The method as described in A1, wherein the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having higher resolution and richer detail information than the original image comprises: according to the image channels constituting the low-resolution image and the original image, inputting the image data corresponding to each image channel into the image processing machine learning model component respectively, to obtain final image data corresponding to each image channel; and generating the high-resolution complete image using the final image data of each channel.
A10. The method as described in A1, wherein the image processing machine learning model component is generated by coupling a depth guided filtering component, a recurrent neural network component and a full convolutional neural network component.
A11. The method as described in A10, wherein the step of training the image processing machine learning model component using the multiple image sets obtained in advance comprises: jointly training the depth guided filtering component, the recurrent neural network component and the full convolutional neural network component using the multiple image sets, thereby completing the training of the image processing machine learning model component.
A12. The method as described in A11, wherein the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having higher resolution and richer detail information than the original image comprises: inputting the low-resolution image and the original image into the depth guided filtering component to obtain an output image corresponding to the original image; inputting the output image into the recurrent neural network component to obtain multiple recursive images produced by different numbers of recursions; and inputting the multiple recursive images into the full convolutional network component to obtain an intermediate image as the high-resolution complete image.
A13. The method as described in A12, wherein the step of inputting the multiple recursive images into the full convolutional network component to obtain an intermediate image as the high-resolution complete image comprises: inputting the multiple recursive images into the full convolutional network component to obtain an intermediate image; and performing superposition processing on the intermediate image and the output image to obtain the high-resolution complete image.
In addition, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments but not other features included therein, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor having the necessary instructions for implementing such a method or method element forms a means for implementing the method or method element. Moreover, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
Various technologies described herein are realized together in combination with hardware or software or their combination.To the present invention
Method and apparatus or the process and apparatus of the present invention some aspects or part can take insertion tangible media, such as it is soft
The form of program code (instructing) in disk, CD-ROM, hard disk drive or other any machine readable storage mediums,
Wherein when program is loaded into the machine of such as computer etc, and is executed by the machine, the machine becomes to practice this hair
Bright equipment.
In the case where program code executes on programmable computers, calculates equipment and generally comprise processor, processor
Readable storage medium (including volatile and non-volatile memory and or memory element), at least one input unit, and extremely
A few output device.Wherein, memory is configured for storage program code;Processor is configured for according to the memory
Instruction in the said program code of middle storage executes the convolution for being used to carry out the face in image Expression Recognition of the invention
Neural network generation method and/or expression recognition method.
By way of example and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
Although the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, rather than to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, and the scope of the invention is defined by the appended claims.
Claims (10)
1. An image processing method, adapted to be executed in a computing device, the method comprising the steps of:
obtaining a low-resolution image corresponding to an original image; and
inputting the low-resolution image and the original image as input images into an image processing machine learning model component, to obtain a high-resolution complete image having higher resolution and richer detail information than the original image, wherein the image processing machine learning model component is trained using multiple image sets obtained in advance, and wherein the multiple image sets include a high-resolution complete image set, a high-resolution image set and a low-resolution image set.
2. The method of claim 1, wherein the image processing machine learning model component is generated by coupling a depth guided filtering component with a recurrent neural network component.
3. The method of claim 2, wherein the step of inputting the low-resolution image and the original image as input images into the image processing machine learning model component to obtain a high-resolution complete image having higher resolution and richer detail information than the original image comprises:
inputting the low-resolution image and the original image into the depth guided filtering component to obtain an output image corresponding to the original image; and
inputting the output image into the recurrent neural network component to obtain the high-resolution complete image.
4. The method of claim 2, wherein training the image processing machine learning model component using the multiple image sets obtained in advance comprises:
jointly training the depth guided filtering component and the recurrent neural network component using the multiple image sets, thereby completing the training of the image processing machine learning model component.
5. The method of claim 4, wherein the step of training the image processing machine learning model component using the multiple image sets comprises:
obtaining a high-resolution complete image set, a high-resolution image set and a low-resolution image set;
constructing the image processing machine learning model component, wherein training parameters are provided in the image processing machine learning model component; and
training the image processing machine learning model component using the correspondence among the high-resolution image set, the low-resolution image set and the high-resolution complete image set, and adjusting the training parameters until the image processing machine learning model component meets a preset requirement.
6. The method of claim 1, wherein the structure of the image processing machine learning model component includes a guided filtering network and a recurrent neural network.
7. The method of claim 1, wherein the multiple image sets refer to image sets formed by extracting a high-resolution complete image, a high-resolution image and a low-resolution image respectively from the multiple images corresponding to each picture, so as to form the high-resolution complete image set, the high-resolution image set and the low-resolution image set.
8. The method of claim 5, wherein the preset requirement refers to a loss function value, obtained using a loss function, reaching a threshold.
9. A computing device, comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for executing any one of the methods according to claims 1-8.
10. A computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a computing device, cause the computing device to execute any one of the methods according to claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910180903.5A CN109949225B (en) | 2019-03-11 | 2019-03-11 | Image processing method and computing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910180903.5A CN109949225B (en) | 2019-03-11 | 2019-03-11 | Image processing method and computing device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109949225A true CN109949225A (en) | 2019-06-28 |
CN109949225B CN109949225B (en) | 2021-05-04 |
Family
ID=67009552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910180903.5A Active CN109949225B (en) | 2019-03-11 | 2019-03-11 | Image processing method and computing device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109949225B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093442A (en) * | 2011-09-14 | 2013-05-08 | 联发科技(新加坡)私人有限公司 | Method for reconfiguring high-resolution image based on a plurality of low-resolution images, and device of method |
CN106682664A (en) * | 2016-12-07 | 2017-05-17 | 华南理工大学 | Water meter disc area detection method based on full convolution recurrent neural network |
CN108874145A (en) * | 2018-07-04 | 2018-11-23 | 深圳美图创新科技有限公司 | A kind of image processing method calculates equipment and storage medium |
CN109064396A (en) * | 2018-06-22 | 2018-12-21 | 东南大学 | A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103093442A (en) * | 2011-09-14 | 2013-05-08 | 联发科技(新加坡)私人有限公司 | Method for reconfiguring high-resolution image based on a plurality of low-resolution images, and device of method |
CN106682664A (en) * | 2016-12-07 | 2017-05-17 | 华南理工大学 | Water meter disc area detection method based on full convolution recurrent neural network |
CN109064396A (en) * | 2018-06-22 | 2018-12-21 | 东南大学 | A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network |
CN108874145A (en) * | 2018-07-04 | 2018-11-23 | 深圳美图创新科技有限公司 | A kind of image processing method calculates equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
HUIKAI WU 等: "Fast End-to-End Trainable Guided Filter", 《ARXIV》 * |
JIWON KIM 等: "Deeply-Recursive Convolutional Network for Image Super-Resolution", 《ARXIV》 * |
YANG WEN 等: "Deep Color Guided Coarse-to-Fine Convolutional Network Cascade for Depth Image Super-Resolution", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 * |
Also Published As
Publication number | Publication date |
---|---|
CN109949225B (en) | 2021-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109978764A (en) | A kind of image processing method and calculate equipment | |
CN108010031B (en) | Portrait segmentation method and mobile terminal | |
CN106295533B (en) | A kind of optimization method, device and the camera terminal of self-timer image | |
CN109102483B (en) | Image enhancement model training method and device, electronic equipment and readable storage medium | |
CN109584179A (en) | A kind of convolutional neural networks model generating method and image quality optimization method | |
CN110163827A (en) | Training method, image de-noising method, device and the medium of image denoising model | |
CN108197602A (en) | A kind of convolutional neural networks generation method and expression recognition method | |
CN107145902B (en) | A kind of image processing method based on convolutional neural networks, device and mobile terminal | |
CN107578453A (en) | Compressed image processing method, apparatus, electronic equipment and computer-readable medium | |
JP7096888B2 (en) | Network modules, allocation methods and devices, electronic devices and storage media | |
CN109118490A (en) | A kind of image segmentation network generation method and image partition method | |
CN108537283A (en) | A kind of image classification method and convolutional neural networks generation method | |
CN108537193A (en) | Ethnic attribute recognition approach and mobile terminal in a kind of face character | |
CN105141827B (en) | Distortion correction method and terminal | |
CN107424184A (en) | A kind of image processing method based on convolutional neural networks, device and mobile terminal | |
CN109544482A (en) | A kind of convolutional neural networks model generating method and image enchancing method | |
US20220222872A1 (en) | Personalized Machine Learning System to Edit Images Based on a Provided Style | |
CN109636712A (en) | Image Style Transfer and date storage method, device and electronic equipment | |
CN107392933B (en) | Image segmentation method and mobile terminal | |
US20240062054A1 (en) | Storage of input values across multiple cores of neural network inference circuit | |
CN109754359A (en) | A kind of method and system that the pondization applied to convolutional neural networks is handled | |
CN109241437A (en) | A kind of generation method, advertisement recognition method and the system of advertisement identification model | |
CN109949226A (en) | A kind of image processing method and calculate equipment | |
CN107808394A (en) | A kind of image processing method and mobile terminal based on convolutional neural networks | |
CN109727211A (en) | A kind of image de-noising method, calculates equipment and medium at device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |