CN107578054A - Image processing method and device - Google Patents
Image processing method and device
- Publication number
- CN107578054A (application CN201710888763.8A)
- Authority
- CN
- China
- Prior art keywords
- convolution
- convolutional layer
- module
- convolution kernel
- size
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The disclosure relates to an image processing method and device. The method includes: performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer; up-sampling the first feature map using a second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride (step length) of the convolution kernel movement of the second convolutional layer is a proper fraction; and obtaining, from the second feature map, a processed feature map equal in size to the image to be processed. The disclosed method enlarges the first feature map while effectively improving image processing efficiency.
Description
Technical field
This disclosure relates to the technical field of image processing, and in particular to an image processing method and device.
Background
Since 2012, convolutional neural networks (Convolutional Neural Network, CNN for short) have achieved great success and wide application in image classification, image detection, and related tasks. The power of a CNN lies in its multi-layer structure, which can learn features automatically and at many levels: shallower convolutional layers have smaller receptive fields and learn features of local regions, while deeper convolutional layers have larger receptive fields and can learn more abstract features. These abstract features are less sensitive to the size, position, and orientation of objects, which helps improve recognition performance.
These abstract features are helpful for classification: they can determine well which classes of objects an image contains. However, because some object details are lost, they cannot give the precise contour of an object or indicate which object each pixel belongs to, so accurate segmentation is difficult.
Traditional CNN-based segmentation methods typically classify each pixel by feeding an image block around that pixel to the CNN for training and prediction. This approach has several drawbacks. First, the storage overhead is large: if the image block used for each pixel is 15 × 15, the required storage is 225 times that of the original image. Second, computation is inefficient: adjacent blocks largely overlap, yet the convolution is computed block by block, so much of the computation is repeated. Third, the block size limits the size of the sensed region: the block is usually much smaller than the whole image, so only local features can be extracted, and classification performance is restricted.
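The storage and redundancy figures above can be checked with a little arithmetic. This is an illustrative sketch only; the 15 × 15 block size comes from the text, while the overlap fraction for adjacent blocks is our own derived illustration, not a figure from the patent.

```python
# Overheads of the patch-based (per-pixel) segmentation approach.
block_h, block_w = 15, 15

# Each pixel of the original image becomes the centre of one block,
# so the stored data grows by a factor of block_h * block_w.
overhead_factor = block_h * block_w
assert overhead_factor == 225  # "225 times of the original image"

# Redundant computation: blocks centred on two horizontally adjacent
# pixels share all but one column, i.e. 14/15 of their area.
shared_fraction = (block_w - 1) / block_w
assert round(shared_fraction, 2) == 0.93
```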
To address this problem, Jonathan Long et al. of UC Berkeley proposed fully convolutional networks (Fully Convolutional Networks, FCN for short) for image segmentation. The network attempts to recover, from abstract features, the class to which each pixel belongs, extending classification from the image level to the pixel level. However, after several convolutional and pooling layers in the FCN, the image keeps shrinking; that is, its resolution becomes lower and lower. Measures are therefore needed to restore the image to its original size.
Summary of the invention
To overcome problems in the related art, the embodiments of the present disclosure provide an image processing method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, including:
performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
up-sampling the first feature map using a second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and
obtaining, from the second feature map, a processed feature map equal in size to the image to be processed.
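The three steps above can be sketched end to end. This is a minimal stand-in, not the patent's implementation: real convolutions are replaced by stride-2 subsampling (first module) and nearest-neighbour 2x enlargement (second module, modelling a 1/2 kernel stride); the layer counts and the 16 × 16 input are hypothetical.

```python
import numpy as np

def first_module(img, n_layers=2):
    """Toy first convolution module: each 'layer' halves the spatial
    size, standing in for a stride-2 first convolutional layer."""
    for _ in range(n_layers):
        img = img[::2, ::2]
    return img

def second_module(feat, n_layers=2):
    """Toy second convolution module: each layer moves its kernel with
    a proper-fraction stride of 1/2, i.e. doubles the spatial size."""
    for _ in range(n_layers):
        feat = np.kron(feat, np.ones((2, 2)))  # nearest-neighbour 2x upsample
    return feat

img = np.arange(16 * 16, dtype=float).reshape(16, 16)  # image to be processed
f1 = first_module(img)    # first feature map: smaller than the input
f2 = second_module(f1)    # second feature map: restored to the input size
assert f1.shape == (4, 4)
assert f2.shape == img.shape
```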
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects. After feature extraction by the first convolution module, the resulting first feature map is smaller than the image to be processed. So that the output image is restored to its original size and the user can see the processed image more clearly, the disclosure up-samples the first feature map with a second convolution module containing a second convolutional layer whose convolution kernel moves with a proper-fraction stride, thereby enlarging the first feature map. A deconvolution operation requires more data during processing and reduces processing efficiency; the present disclosure achieves the same enlargement without using deconvolution, and thus effectively improves the efficiency of image processing.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including:
a first acquisition module, configured to perform feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
a second acquisition module, configured to up-sample the first feature map obtained by the first acquisition module using a second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and
a third acquisition module, configured to obtain, from the second feature map obtained by the second acquisition module, a processed feature map equal in size to the image to be processed.
According to a third aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
up-sample the first feature map using a second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and
obtain, from the second feature map, a processed feature map equal in size to the image to be processed.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer instructions are stored; when executed by a processor, the instructions implement the following steps:
performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
up-sampling the first feature map using a second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and
obtaining, from the second feature map, a processed feature map equal in size to the image to be processed.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the specification, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a preset network model according to exemplary embodiment one.
Fig. 3 is a schematic diagram of a preset network model according to exemplary embodiment two.
Fig. 4 is a schematic diagram of a preset network model according to exemplary embodiment three.
Fig. 5 is a schematic diagram of an element group of the first feature map according to an exemplary embodiment.
Fig. 6 is a schematic diagram of the convolution kernel of the second convolutional layer according to an exemplary embodiment.
Fig. 7 is a schematic diagram of a convolution operation according to exemplary embodiment one.
Fig. 8 is a schematic diagram of a convolution operation according to exemplary embodiment two.
Fig. 9 is a schematic diagram of a convolution operation according to exemplary embodiment three.
Fig. 10 is a schematic diagram of a convolution operation according to exemplary embodiment four.
Fig. 11 is an application scenario diagram of the method of the embodiments of the present disclosure according to exemplary embodiment one.
Fig. 12 is an application scenario diagram of the method of the embodiments of the present disclosure according to exemplary embodiment two.
Fig. 13 is a block diagram of an image processing apparatus according to exemplary embodiment one.
Fig. 14 is a block diagram of an image processing apparatus according to exemplary embodiment two.
Fig. 15 is a block diagram of an image processing apparatus according to exemplary embodiment three.
Fig. 16 is a block diagram of an image processing apparatus according to exemplary embodiment four.
Fig. 17 is a block diagram of an image processing apparatus according to exemplary embodiment five.
Fig. 18 is a block diagram of an image processing apparatus according to exemplary embodiment six.
Fig. 19 is a block diagram of an image processing apparatus according to exemplary embodiment seven.
Fig. 20 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment.
Fig. 21 is a block diagram of a device 90 for image processing according to an exemplary embodiment.
Embodiments
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; on the contrary, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure, as detailed in the appended claims.
In the embodiments of the present disclosure, feature extraction is performed on an image to be processed using the first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed and the first convolution module includes at least one first convolutional layer; the first feature map is then up-sampled using the second convolution module of the preset network model to obtain a second feature map, where the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer is a proper fraction; and from the second feature map, a processed feature map equal in size to the image to be processed is obtained.
After feature extraction by the first convolution module, the resulting first feature map is smaller than the image to be processed. So that the output image is restored to its original size and the user can see the processed image more clearly, the disclosure up-samples the first feature map with a second convolution module containing a second convolutional layer whose convolution kernel moves with a proper-fraction stride, thereby enlarging the first feature map. For example, if the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is 1/2, then after the second convolution module up-samples the first feature map, the size of the resulting second feature map will be 2 times that of the first feature map. A deconvolution operation requires more data during processing and reduces processing efficiency; the present disclosure achieves the same enlargement without using deconvolution, effectively improving the efficiency of image processing.
Fig. 1 is a flowchart of an image processing method according to exemplary embodiment one. As shown in Fig. 1, the method includes the following steps S101-S103.
In step S101, feature extraction is performed on the image to be processed using the first convolution module in the preset network model to obtain a first feature map of the image to be processed; the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer.
For example, the size of the first feature map can be expressed by its horizontal length and vertical width, in units of pixels or centimeters. For instance, in pixels, a first feature map of size 480 × 800 has 480 pixels in the horizontal direction and 800 pixels in the vertical direction.
When feature extraction is performed on the image to be processed using the first convolution module, the image is input to first convolutional layer 1, which applies convolution and outputs the convolved image to pooling layer 1; pooling layer 1 applies pooling to its input and outputs the pooled image to first convolutional layer 2, and so on; the output of the last first convolutional layer N after convolution is the first feature map.
Because the first feature map is smaller than the image to be processed, directly outputting it to the user would not let the user see the desired processing result clearly, resulting in low user satisfaction.
In step S102, the first feature map is up-sampled using the second convolution module in the preset network model to obtain a second feature map; the size of the second feature map is larger than that of the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction.
To improve user satisfaction, that is, to enlarge the first feature map, after the first convolution module outputs the first feature map, the second convolution module can up-sample it. As shown in Fig. 2, when the second convolution module includes multiple second convolutional layers, the first feature map is input to second convolutional layer 1, which performs up-sampling and outputs the result to second convolutional layer 2, and so on; the output of the last second convolutional layer M after up-sampling is the second feature map.
In one implementation, the above up-sampling of the first feature map by the second convolution module of the preset network model can be implemented as follows: convolve the convolution kernel of the second convolutional layer with the first feature map. During the convolution operation in each second convolutional layer of the second convolution module, the convolution kernel is moved with a proper-fraction stride. For example, if the kernel moves with a stride of 1/2, then after convolving the first feature map, the output second feature map will be 2 times the size of the first feature map, thus achieving up-sampling, that is, enlarging the first feature map.
Because convolution involves less computation than deconvolution, the disclosure uses convolution instead of deconvolution, effectively improving processing efficiency.
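The relation between stride and output size can be checked with the textbook convolution size formula; a proper-fraction stride makes the output larger than the input rather than smaller. The formula and the padding values below are standard assumptions of ours, not taken from the patent, which simply states that a stride of 1/2 roughly doubles the feature-map size.

```python
def conv_output_size(n, kernel, stride, pad):
    """Textbook formula: out = floor((n + 2*pad - kernel) / stride) + 1.
    With a proper-fraction stride the division enlarges the output."""
    return int((n + 2 * pad - kernel) / stride) + 1

# stride 1: size preserved (kernel 3, pad 1)
assert conv_output_size(8, 3, 1, 1) == 8
# integer stride 2 (first convolution module): the map shrinks
assert conv_output_size(8, 3, 2, 1) == 4
# proper-fraction stride 1/2 (second convolution module): the map grows
assert conv_output_size(8, 3, 0.5, 1) == 15
```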
In step S103, a processed feature map equal in size to the image to be processed is obtained from the second feature map.
It is worth noting that the above preset network model includes, but is not limited to, a fully convolutional network (Fully Convolutional Networks, FCN for short) model.
The FCN is a general deep convolutional network architecture for image segmentation. The network attempts to recover, from abstract features, the class to which each pixel belongs, i.e., it extends classification from the image level to the pixel level. The FCN converts the fully connected layers of a traditional CNN into convolutional layers one by one. In a traditional CNN structure, the first 5 layers are convolutional layers, the 6th and 7th layers are each a one-dimensional vector of length 4096, and the 8th layer is a one-dimensional vector of length 1000 corresponding to the probabilities of 1000 classes. The FCN expresses the 6th, 7th, and 8th layers as convolutional layers whose kernel sizes (channels, width, height) are (4096, 1, 1), (4096, 1, 1), and (1000, 1, 1), respectively. Because all layers are convolutional, it is called a fully convolutional network. When the FCN is used for image segmentation, the image keeps shrinking after several convolutional and pooling layers; to restore the output image to its original size, up-sampling is currently realized by a deconvolution operation. Deconvolution is similar to convolution in that both are multiply-and-add operations; when implementing the forward and backward propagation of the deconvolution operation, the backward and forward propagation of the reverse convolution can be used. However, using the deconvolution operation leads to relatively low processing efficiency.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects. Feature extraction is performed on the image to be processed using the first convolution module of the preset network model to obtain a first feature map smaller than the image to be processed; the first feature map is then up-sampled using the second convolution module, whose second convolutional layer has a proper-fraction kernel stride, to obtain a larger second feature map; and from the second feature map, a processed feature map equal in size to the image to be processed is obtained. Because the enlargement of the first feature map is achieved by convolution with a proper-fraction stride rather than by deconvolution, which requires more data during processing and reduces processing efficiency, the efficiency of image processing is effectively improved.
In one embodiment, feature extraction can be realized by convolution operations alone; in this case, the first convolution module of the preset network model performs only convolution on the image to be processed. As shown in Fig. 3, the first convolution module then includes only at least one first convolutional layer, and these first convolutional layers are connected in sequence. When the first convolution module extracts features from the image to be processed, the image is input to first convolutional layer 1, which applies convolution and outputs the result to first convolutional layer 2, and so on; the output of the last first convolutional layer N after convolution is the first feature map.
Because the image quality of a first feature map obtained by convolution alone is relatively low, in another embodiment feature extraction can also be realized by convolution together with pooling; in this case, the first convolution module of the preset network model performs both convolution and pooling on the image to be processed, so the first convolution module includes not only at least one first convolutional layer but also at least one pooling layer. The number of pooling layers does not exceed the number of first convolutional layers; as shown in Fig. 4, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernel movement in the first convolution module is an integer greater than or equal to 1.
For example, the stride of the convolution kernel movement in the first convolution module can refer to the stride of the convolution kernel movement of the first convolutional layers in the first convolution module.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: by performing both convolution and pooling on the image to be processed, the image quality of the resulting first feature map is higher, effectively improving the quality of image processing.
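The two first-module variants above (Fig. 3: convolution only; Fig. 4: pooling layers between first convolutional layers) can be sketched as follows. This is a toy stand-in under stated assumptions: convolutions are mimicked by subsampling or identity, pooling by 2 × 2 max-pooling, and the layer counts are hypothetical.

```python
import numpy as np

def conv_only_module(img, n_layers=3):
    """Fig. 3 variant: only first convolutional layers, connected in
    sequence (each mimicked here as a stride-2 subsampling)."""
    for _ in range(n_layers):
        img = img[::2, ::2]
    return img

def conv_pool_module(img, n_layers=2):
    """Fig. 4 variant: a 2x2 max-pooling layer between consecutive
    first convolutional layers (the conv mimicked as identity)."""
    for _ in range(n_layers):
        h, w = img.shape
        img = img[:h - h % 2, :w - w % 2]  # crop to even size
        img = np.maximum.reduce([img[0::2, 0::2], img[0::2, 1::2],
                                 img[1::2, 0::2], img[1::2, 1::2]])
    return img

x = np.arange(64, dtype=float).reshape(8, 8)
assert conv_only_module(x).shape == (1, 1)   # 8 -> 4 -> 2 -> 1
assert conv_pool_module(x).shape == (2, 2)   # 8 -> 4 -> 2
```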
Because each convolution kernel in the second convolutional layer moves with a fractional stride, during up-sampling with the convolution kernel of the second convolutional layer, the elements of the kernel may misalign with the element group to be sampled. The above method therefore further includes step A1, and step S102 may then be implemented as step A2:
In step A1, an operation is performed on the elements of the first feature map using bilinear interpolation to obtain an interpolated first feature map.
In step A2, the interpolated first feature map is up-sampled using the second convolution module.
For example, Fig. 5 is a schematic diagram of an element group of the first feature map according to an exemplary embodiment; the element group of the first feature map is a 5 × 5 matrix. Fig. 6 is a schematic diagram of the convolution kernel of the second convolutional layer according to an exemplary embodiment; the convolution kernel of the second convolutional layer is a 3 × 3 matrix.
Taking convolution as the up-sampling operation as an example: when the convolution kernel of the second convolutional layer moves with stride 1 and performs a convolution operation with the first element group of the first feature map, as shown in Fig. 7, the convolution operation is in fact a dot product. As shown in Fig. 8, the value after convolution is 1×1+0×1+1×1+0×0+1×1+0×1+1×0+0×0+1×1 = 4. The convolution kernel of the second convolutional layer then moves by the above stride, that is, one pixel (one rectangular block in the figure) at a time, and performs the dot product of Fig. 8 after each move.
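The worked dot product above can be reproduced directly. Since Figs. 5-8 are not shown here, the particular 3 × 3 matrices below are an assumption inferred from the order of factors in the sum; only the multiply-and-sum mechanics and the result of 4 come from the text.

```python
import numpy as np

# 3x3 kernel of the second convolutional layer (values assumed)
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])
# 3x3 window of the first feature map's element group (values assumed)
patch = np.array([[1, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]])

# One convolution step is an elementwise multiply followed by a sum,
# i.e. a dot product of the flattened kernel and window.
value = int(np.sum(kernel * patch))
assert value == 4  # matches 1*1+0*1+1*1+0*0+1*1+0*1+1*0+0*0+1*1
```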
When the convolution kernel of the second convolutional layer moves with stride 1, it always aligns with the element group to be convolved in the first feature map, so the convolution operation can be performed directly. When it moves with a fractional stride, however, the kernel may misalign with the element group to be convolved, so the convolution operation cannot be performed directly.
Suppose that, starting from Fig. 7, moving the kernel of the second convolutional layer with stride 1 gives the arrangement shown in Fig. 9, while moving it with stride 1/2 gives the arrangement shown in Fig. 10.
As shown in Fig. 10, the convolution kernel of the second convolutional layer is misaligned with the element group to be convolved. To perform the convolution operation, bilinear interpolation can be applied within the element group to be convolved to obtain the interpolated element group of the first feature map, which is then convolved with the convolution kernel of the second convolutional layer.
It is worth noting that the above bilinear interpolation may be replaced by linear interpolation or the like; the disclosure does not limit this.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: by applying bilinear interpolation to the elements of the first feature map to obtain an interpolated first feature map, the problem that up-sampling cannot proceed when the elements of the convolution kernel of the second convolutional layer misalign with the element group of the first feature map is avoided, effectively improving the accuracy of the values obtained after up-sampling.
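Step A1 can be sketched as follows: bilinear interpolation supplies the half-step positions that a 1/2-stride kernel lands on. This is a minimal separable sketch with our own example values (edge handling and the patent's exact interpolation scheme are not specified here).

```python
import numpy as np

def bilinear_upsample_2x(a):
    """Insert the half-step positions a 1/2-stride kernel needs:
    known samples land on even indices, midpoints are averages."""
    h, w = a.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[::2, ::2] = a
    out[1::2, ::2] = (out[:-1:2, ::2] + out[2::2, ::2]) / 2  # row midpoints
    out[:, 1::2] = (out[:, :-1:2] + out[:, 2::2]) / 2        # column midpoints
    return out

a = np.array([[0., 2.],
              [4., 6.]])
u = bilinear_upsample_2x(a)
# each inserted midpoint is the average of its neighbours
assert u[0, 1] == 1.0 and u[1, 0] == 2.0 and u[1, 1] == 3.0
# a stride-1 convolution over u then realises the 1/2-stride convolution over a
```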
The number of second convolutional layers and the stride of the convolution kernel movement in each second convolutional layer may be adjusted according to the actual application. In one embodiment, the number of second convolutional layers and the stride of the convolution kernel movement in each second convolutional layer may be determined according to the following formula:

log E_1 + … + log E_M + log F_1 + … + log F_N = 0

where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel movement in the j-th second convolutional layer, and E_i is the stride of the convolution kernel movement in the i-th first convolutional layer.
Since the stride E_i of the convolution kernel movement in each first convolutional layer and the number M of first convolutional layers are already determined, there are two unknowns in the above formula. When determining the number of second convolutional layers and the stride of the convolution kernel movement in each second convolutional layer, the following constraint must be satisfied:

E_1 × … × E_M × F_1 × … × F_N = 1
For example, assume M = 4. When the number of second convolutional layers is N = 2:

log E1 + log E2 + log E3 + log E4 + log F1 + log F2 = 0

When the stride of the convolution kernel of each first convolutional layer is 2:

log 2 + log 2 + log 2 + log 2 + log F1 + log F2 = 0

That is, the values of F1 and F2 must satisfy: 2 × 2 × 2 × 2 × F1 × F2 = 1, i.e. F1 × F2 = 1/16.

Since F1 and F2 are proper fractions, the values of F1 and F2 may be, for example, 1/2 and 1/8, or 1/4 and 1/4, or 1/8 and 1/2.
When the stride E1 of the convolution kernel movement in the 1st first convolutional layer is 2, the stride E2 in the 2nd first convolutional layer is 3, the stride E3 in the 3rd first convolutional layer is 2, and the stride E4 in the 4th first convolutional layer is 4:

log 2 + log 3 + log 2 + log 4 + log F1 + log F2 = 0

That is, the values of F1 and F2 must satisfy: 2 × 3 × 2 × 4 × F1 × F2 = 1

It follows that F1 × F2 = 1/48.

Since F1 and F2 are proper fractions, the values of F1 and F2 may be, for example, 1/6 and 1/8, or 1/8 and 1/6, or 1/2 and 1/24, or F1 = 1 and F2 = 1/48, and so on.
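The constraint worked through above can be sketched numerically (helper name is illustrative): the product of all first-layer strides E_i and second-layer strides F_j must equal 1, so the F_j must jointly supply the reciprocal of the product of the E_i:

```python
from fractions import Fraction
from functools import reduce

def required_f_product(e_strides):
    """Given the integer strides E_i of the first convolutional layers,
    return the product the fractional strides F_j must reach so that
    prod(E_i) * prod(F_j) == 1 (i.e. sum(log E) + sum(log F) == 0)."""
    return Fraction(1, reduce(lambda a, b: a * b, e_strides, 1))
```

With E = (2, 2, 2, 2) this gives 1/16, matching the pairs 1/2 × 1/8 and 1/4 × 1/4 above; with E = (2, 3, 2, 4) it gives 1/48.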
The technical solution provided by this embodiment of the present disclosure may have the following beneficial effect: from the above constraint, when the number of second convolutional layers takes some value, the stride of the convolution kernel movement in each second convolutional layer can be solved for, so that the convolution operation with a fractionally-strided kernel can be performed accordingly with the obtained strides. In this way, the finally obtained second feature map has the same size as the to-be-processed image, and the speed of image processing is effectively improved.
After the stride of the convolution kernel movement in each second convolutional layer has been obtained, the convolution kernel size in each second convolutional layer also needs to be known in order to perform the convolution.

When the second convolution module includes one second convolutional layer, in one embodiment, the convolution kernel of the second convolutional layer may be redefined, or may take the convolution kernel size of any one first convolutional layer.
For example, there are 3 first convolutional layers in the first convolution module: the first convolutional layer 1, the first convolutional layer 2 and the first convolutional layer 3, whose convolution kernel sizes are 3 × 3, 3 × 3 and 5 × 5 respectively. There is 1 second convolutional layer in the second convolution module; its convolution kernel size may then be 3 × 3 or 5 × 5.

However, selecting a convolution kernel for the second convolutional layer at random in this way may lead to a poor-quality second feature image after the final convolution.
In another embodiment, the convolution kernel size of the second convolutional layer is the average of the convolution kernel sizes of a predetermined number of first convolutional layers.

For example: there are 3 first convolutional layers in the first convolution module, namely the first convolutional layer 1, the first convolutional layer 2 and the first convolutional layer 3, and there is 1 second convolutional layer in the second convolution module, namely the second convolutional layer 1. Then the convolution kernel size of the second convolutional layer 1 may be the average of the convolution kernel sizes of the first convolutional layers 1, 2 and 3, or the average of the convolution kernel sizes of the first convolutional layers 1 and 3.
In another embodiment, the convolution kernel corresponding to each first convolutional layer in the first convolution module may first be obtained, and the size of the largest convolution kernel among the first convolutional layers is then determined as the convolution kernel size of the second convolutional layer.

When determining the convolution kernel size of the second convolutional layer in the second convolution module, the convolution kernel of each first convolutional layer may first be obtained. Since each first convolutional layer's kernel has its own size, the kernel sizes of these first convolutional layers may be compared with one another, and the size of the largest kernel is selected as the convolution kernel size of the second convolutional layer.
For example, the width and the height of the convolution kernel of each first convolutional layer may be compared respectively, and the kernel whose width and height are both the largest is selected as the convolution kernel of the second convolutional layer.

Continuing the above example, among the 3 first convolutional layers of the first convolution module, the convolution kernel of the first convolutional layer 3 is the largest, so the convolution kernel size of the second convolutional layer is 5 × 5.
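A minimal sketch of this selection rule (function name illustrative), comparing height and width and returning the largest first-layer kernel size:

```python
def largest_kernel_size(kernel_sizes):
    """Select the largest (height, width) among the first-layer kernel sizes
    as the second convolutional layer's kernel size."""
    return max(kernel_sizes, key=lambda hw: (hw[0], hw[1]))
```

For kernel sizes (3, 3), (3, 3) and (5, 5) this returns (5, 5), matching the example above.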
The technical solution provided by this embodiment of the present disclosure may have the following beneficial effect: by determining the size of the largest convolution kernel among the first convolutional layers as the convolution kernel size of the second convolutional layer, the quality of the second feature map obtained after convolving the first feature map with the second convolutional layer's kernel can be effectively improved.
When the second convolution module includes at least two second convolutional layers, the convolution kernel of each second convolutional layer in the second convolution module may be determined according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule.

In one achievable mode, the preset rule may be: sort the convolution kernels of the first convolutional layers in the first convolution module in a preset order, and use the sorted kernels, in turn, as the convolution kernels of the second convolutional layers in the second convolution module.
For example, the convolution kernels of the first convolutional layers in the first convolution module are sorted in sequential order, and the sorted kernels are used in turn as the convolution kernels of the second convolutional layers in the second convolution module; that is, the convolution kernels of the second convolutional layers follow the same order as the convolution kernels of the first convolutional layers.

Assume there are 3 first convolutional layers in the first convolution module, namely the first convolutional layers 1, 2 and 3, and 3 second convolutional layers in the second convolution module, namely the second convolutional layers 1, 2 and 3. Then the convolution kernel of the second convolutional layer 1 is that of the first convolutional layer 1, the kernel of the second convolutional layer 2 is that of the first convolutional layer 2, and the kernel of the second convolutional layer 3 is that of the first convolutional layer 3.
For another example, the convolution kernels of the first convolutional layers in the first convolution module are sorted in reverse order, and the reverse-sorted kernels are used in turn as the convolution kernels of the second convolutional layers in the second convolution module; that is, the convolution kernels of the second convolutional layers are the kernels of the first convolutional layers in reverse order.

Assume there are 3 first convolutional layers in the first convolution module, namely the first convolutional layers 1, 2 and 3, and 3 second convolutional layers in the second convolution module, namely the second convolutional layers 1, 2 and 3. Then the convolution kernel of the second convolutional layer 1 is that of the first convolutional layer 3, the kernel of the second convolutional layer 2 is that of the first convolutional layer 2, and the kernel of the second convolutional layer 3 is that of the first convolutional layer 1.
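The two ordering rules above amount to copying the first-layer kernel list either as-is or mirrored (a sketch; names are illustrative):

```python
def assign_second_layer_kernels(first_kernels, reverse=False):
    """Assign each second convolutional layer a first-layer kernel, either in
    the original order or in reverse order (a mirrored encoder/decoder)."""
    return list(reversed(first_kernels)) if reverse else list(first_kernels)
```

With three first-layer kernels, the reverse mode pairs the last first layer with the first second layer, as in the example above.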
In another achievable mode, the preset rule may be: the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.

For example: there are 4 first convolutional layers in the first convolution module, namely the first convolutional layers 1, 2, 3 and 4, and 2 second convolutional layers in the second convolution module, namely the second convolutional layers 1 and 2. Then the kernel size of the second convolutional layer 1 may be the average of the kernel sizes of the first convolutional layers 1 and 2, and the kernel size of the second convolutional layer 2 the average of the kernel sizes of the first convolutional layers 3 and 4. Alternatively, the kernel size of the second convolutional layer 1 may be the average of the kernel sizes of the first convolutional layers 1 and 3, and the kernel size of the second convolutional layer 2 the average of the kernel sizes of the first convolutional layers 2 and 4. Alternatively, the kernel size of the second convolutional layer 1 may be the kernel size of the first convolutional layer 1, and the kernel size of the second convolutional layer 2 the average of the kernel sizes of the first convolutional layers 2, 3 and 4. Alternatively, the kernel size of the second convolutional layer 1 may be the kernel size of the first convolutional layer 2, and the kernel size of the second convolutional layer 2 the average of the kernel sizes of the first convolutional layers 1 and 4.
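A minimal sketch of the averaging rule (names illustrative), treating each kernel as square and averaging one side length per chosen group of first layers:

```python
def averaged_kernel_sizes(first_sizes, groups):
    """For each group of first-layer indices, average the (square) kernel
    sizes of those layers to get one second-layer kernel size."""
    return [sum(first_sizes[i] for i in g) / len(g) for g in groups]
```

Grouping 4 first layers with kernel sizes 3, 3, 5, 5 as (1, 2) and (3, 4) gives second-layer kernel sizes 3 and 5.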
In another achievable mode, the preset rule may be that the convolution kernel of each second convolutional layer is preset in advance.

For example: there are 2 second convolutional layers, namely the second convolutional layer 1 and the second convolutional layer 2, and the convolution kernels of the second convolutional layer 1 and the second convolutional layer 2 may each be preset in advance; when each second convolutional layer performs the convolution operation, only the kernel preset for that second convolutional layer need be used.
In another achievable mode, the preset rule may be: the convolution kernel of each second convolutional layer may be the convolution kernel of any one first convolutional layer.

For example: there are 3 first convolutional layers in the first convolution module, namely the first convolutional layers 1, 2 and 3, each with its own convolution kernel, and 2 second convolutional layers in the second convolution module, namely the second convolutional layers 1 and 2. Then the convolution kernel of the second convolutional layer 1 may be any one of the kernels of the 3 first convolutional layers, and likewise the convolution kernel of the second convolutional layer 2 may be any one of the kernels of the 3 first convolutional layers.
The technical solution provided by this embodiment of the present disclosure may have the following beneficial effect: by determining the convolution kernel of each second convolutional layer according to a preset rule, the display quality of the second feature map can be effectively improved.
During the convolution performed by the first convolutional layers or the second convolutional layers, the stride of the convolution kernel movement may differ from one first convolutional layer to another, and likewise from one second convolutional layer to another. However, performing convolution with such varying strides may make the amount of computation larger, so that the image processing efficiency is lower.

In one embodiment, in order to reduce the amount of computation and improve the image processing efficiency, the stride of the convolution kernel movement is the same for every first convolutional layer, and the stride of the convolution kernel movement is the same for every second convolutional layer.

The technical solution provided by this embodiment of the present disclosure may have the following beneficial effect: by making the stride of the convolution kernel movement the same for every first convolutional layer, and the same for every second convolutional layer, the amount of computation can be effectively reduced and the image processing efficiency improved.
Since the fully convolutional network model is an important model for realizing the image segmentation function, in one embodiment, when the default network model in each of the above embodiments is a fully convolutional network model, the above method further includes: classifying, pixel by pixel, the processed feature map obtained in step S103, to obtain the image after image segmentation.

It is worth noting that the embodiments of the present disclosure include, but are not limited to, classifying the processed feature map pixel by pixel by means of a softmax classifier.
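A minimal sketch of per-pixel classification with a softmax (names illustrative): each pixel's per-class scores are turned into probabilities, and the arg-max gives that pixel's segmentation label:

```python
import math

def classify_pixel(logits):
    """Softmax over one pixel's per-class scores; return (label, probabilities)."""
    m = max(logits)                         # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs.index(max(probs)), probs
```

Applying this to every pixel of the processed feature map yields the segmented image, e.g. a portrait/background mask in the portrait-segmentation scenario below.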
The technical solution provided by this embodiment of the present disclosure may have the following beneficial effect: the purpose of upsampling is achieved by moving the convolution kernel with a fractional stride, yielding the processed feature map, and the processed feature map is then classified pixel by pixel to obtain the image after image segmentation. Since achieving upsampling by moving the convolution kernel with a fractional stride is more efficient than doing so by deconvolution, the efficiency of image segmentation is effectively improved.
Portrait segmentation is an important subfield of image segmentation; a mobile phone camera can simulate the background-blurring effect of an SLR camera through two steps: portrait segmentation and background blurring. The fully convolutional network (FCN) is an important algorithm for realizing the portrait segmentation function. In order to address the low efficiency that the use of deconvolution brings to FCN networks, the present disclosure proposes an implementation of the convolution operation based on a fractional stride. This operation still falls within the category of convolution operations, but achieves the same function as the deconvolution operation while avoiding the low-efficiency problem of deconvolution. An efficient portrait segmentation algorithm significantly improves the photographing experience of a mobile phone camera.
Figure 11 is an application scenario diagram of the method of embodiment one of the present disclosure according to an exemplary embodiment. As shown in Fig. 11, assume that the number M of first convolutional layers in the first convolution module is 5, namely the first convolutional layers 1 to 5, and that the stride E_i of the convolution kernel movement in each first convolutional layer is 2. The number N of second convolutional layers is 1; that is, in this embodiment a single fractionally-strided convolution operation makes the resolution of the second feature map identical to that of the to-be-processed image. From the constraint E_1 × … × E_5 × F_1 = 1 it follows that the stride of the convolution kernel of the second convolutional layer in the second convolution module is 1/32.

Among the convolution kernels of the first convolutional layers of the first convolution module, the largest kernel is selected as the convolution kernel size of the second convolutional layer in the second convolution module.
After the 5 convolutions in the first convolution module, the size of the first feature map is reduced by a factor of 32 relative to the to-be-processed image. When the convolution kernel of the second convolutional layer in the second convolution module performs the convolution operation on the first feature map, the kernel is moved with a stride of 1/32, and the size of the resulting second feature map is enlarged 32 times relative to the first feature map; that is, the resulting second feature map has the same size as the to-be-processed image.
Since multiple convolution operations were performed in the first convolution module, if a second feature image of the same size as the to-be-processed image is obtained by a single fractionally-strided convolution operation, the resulting second feature image, although of the same size as the to-be-processed image, may be relatively rough. Therefore, Figure 12 is an application scenario diagram of the method of embodiment two of the present disclosure according to an exemplary embodiment. As shown in Fig. 12, on the basis of Fig. 11, the number of second convolutional layers in the second convolution module in this embodiment is N = 5, namely the second convolutional layers 1, 2, 3, 4 and 5.
Since the number M of first convolutional layers is 5 and the stride E_i of the convolution kernel of each first convolutional layer is 2, and assuming that the stride of the convolution kernel movement is the same for every second convolutional layer in the second convolution module, the constraint E_1 × … × E_5 × F_1 × … × F_5 = 1 shows that the stride of the convolution kernel movement of each second convolutional layer in the second convolution module may be 1/2.
It is also determined that the convolution kernel of the second convolutional layer 1 is identical to that of the first convolutional layer 5, the kernel of the second convolutional layer 2 to that of the first convolutional layer 4, the kernel of the second convolutional layer 3 to that of the first convolutional layer 3, the kernel of the second convolutional layer 4 to that of the first convolutional layer 2, and the kernel of the second convolutional layer 5 to that of the first convolutional layer 1.
After the 5 convolution operations in the first convolution module, the size of the first feature image is reduced by a factor of 32 relative to the to-be-processed image. When the convolution kernel of each second convolutional layer performs the convolution operation on the first feature image, the kernel is moved with a stride of 1/2 in each layer. During the convolution with the kernel of each second convolutional layer, the elements of the first feature map are operated on with the bilinear interpolation algorithm to obtain the first feature map after interpolation processing, and the second convolution module performs the convolution operation on the first feature map after interpolation processing. The size of the finally obtained second feature image is then enlarged 32 times relative to the first feature image; that is, the resulting second feature image has the same size as the to-be-processed image.
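The size bookkeeping of the two scenarios can be sketched as follows (a sketch under the assumption of "same"-style padding, so that a layer of stride s scales the spatial size by 1/s; names are illustrative):

```python
from fractions import Fraction

def output_size(input_size, strides):
    """Track the spatial size through a chain of convolutional layers, where a
    layer of stride s scales the size by 1/s: integer strides shrink the map,
    fractional strides enlarge it."""
    size = Fraction(input_size)
    for s in strides:
        size /= Fraction(s)
    return size
```

Five stride-2 layers reduce a 224-pixel side to 7 (a factor of 32); one stride-1/32 layer, or five stride-1/2 layers, restores it to 224, so the second feature map matches the to-be-processed image.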
The following are apparatus embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure.
Figure 13 is a block diagram of an image processing apparatus according to an exemplary embodiment one. As shown in Fig. 13, the image processing apparatus includes:

a first acquisition module 11, configured to perform feature extraction on a to-be-processed image using the first convolution module of a default network model, to obtain a first feature map of the to-be-processed image, where the size of the first feature map is smaller than the to-be-processed image, and the first convolution module includes at least one first convolutional layer;

a second acquisition module 12, configured to upsample, using the second convolution module of the default network model, the first feature map obtained by the first acquisition module 11, to obtain a second feature map, where the size of the second feature map is larger than the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and

a third acquisition module 13, configured to obtain, according to the second feature map obtained by the second acquisition module 12, a processed feature map equal in size to the to-be-processed image.
In one embodiment, the first convolution module further includes at least one pooling layer; the number of pooling layers is not larger than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernel movement in the first convolution module is an integer greater than or equal to 1.
In one embodiment, as shown in Fig. 14, the apparatus further includes an interpolation module 14, and the second acquisition module 12 includes an upsampling submodule 121;

the interpolation module 14 is configured to operate, using the bilinear interpolation algorithm, on the elements of the first feature map obtained by the first acquisition module 11, to obtain the first feature map after interpolation processing;

the upsampling submodule 121 is configured to upsample, using the second convolution module, the first feature map after interpolation processing obtained by the interpolation module 14.
In one embodiment, as shown in Fig. 15, the second acquisition module 12 includes a convolution submodule 122;

the convolution submodule 122 is configured to convolve the convolution kernel of the second convolutional layer with the first feature map obtained by the first acquisition module.
In one embodiment, as shown in Fig. 16, the apparatus further includes a first determining module 15;

the first determining module 15 is configured to determine the number of second convolutional layers and the stride of the convolution kernel movement in each second convolutional layer according to the following formula:

log E_1 + … + log E_M + log F_1 + … + log F_N = 0

where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel movement in the j-th second convolutional layer, and E_i is the stride of the convolution kernel movement in the i-th first convolutional layer.
In one embodiment, as shown in Fig. 17, when the second convolution module includes one second convolutional layer, the apparatus further includes a fourth acquisition module 16 and a second determining module 17;

the fourth acquisition module 16 is configured to obtain the convolution kernel corresponding to each first convolutional layer in the first convolution module;

the second determining module 17 is configured to determine the size of the largest convolution kernel among the first convolutional layers obtained by the fourth acquisition module 16 as the convolution kernel size of the second convolutional layer.
In one embodiment, as shown in Fig. 18, when the second convolution module includes at least two second convolutional layers, the apparatus further includes a third determining module 18;

the third determining module 18 is configured to determine the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule;

where the preset rule includes at least one of the following rules:

sorting the convolution kernels of the first convolutional layers in the first convolution module in a preset order, and using the sorted kernels, in turn, as the convolution kernels of the second convolutional layers in the second convolution module;

or, the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
In one embodiment, as shown in Fig. 19, the default network model is a fully convolutional network, and the apparatus further includes a classification module 19;

the classification module 19 is configured to classify, pixel by pixel, the processed feature map obtained by the third acquisition module, to obtain the image after image segmentation.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, including:

a processor; and

a memory for storing processor-executable instructions;

wherein the processor is configured to:

perform feature extraction on a to-be-processed image using the first convolution module of a default network model, to obtain a first feature map of the to-be-processed image, where the size of the first feature map is smaller than the to-be-processed image, and the first convolution module includes at least one first convolutional layer;

upsample the first feature map using the second convolution module of the default network model, to obtain a second feature map, where the size of the second feature map is larger than the first feature map, the second convolution module includes a second convolutional layer, and the stride of the convolution kernel movement of the second convolutional layer in the second convolution module is a proper fraction; and

obtain, according to the second feature map, a processed feature map equal in size to the to-be-processed image.
The above processor is further configured such that:

the first convolution module further includes at least one pooling layer; the number of pooling layers is not larger than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernel movement in the first convolution module is an integer greater than or equal to 1.

Before the first feature map is upsampled, the method further includes:

operating on the elements of the first feature map using the bilinear interpolation algorithm, to obtain the first feature map after interpolation processing;

and upsampling the first feature map using the second convolution module of the default network model includes: upsampling the first feature map after interpolation processing using the second convolution module.
Upsampling the first feature map using the second convolution module of the default network model includes: convolving the convolution kernel of the second convolutional layer with the first feature map.

The method further includes: determining the number of second convolutional layers and the stride of the convolution kernel movement in each second convolutional layer according to the following formula:

log E_1 + … + log E_M + log F_1 + … + log F_N = 0

where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel movement in the j-th second convolutional layer, and E_i is the stride of the convolution kernel movement in the i-th first convolutional layer.
When the second convolution module includes one second convolutional layer, the method further includes:

obtaining the convolution kernel corresponding to each first convolutional layer in the first convolution module; and

determining the size of the largest convolution kernel among the first convolutional layers as the convolution kernel size of the second convolutional layer.

When the second convolution module includes at least two second convolutional layers, the method further includes:

determining the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule;

where the preset rule includes at least one of the following rules:

sorting the convolution kernels of the first convolutional layers in the first convolution module in a preset order, and using the sorted kernels, in turn, as the convolution kernels of the second convolutional layers in the second convolution module;

or, the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
The default network model is a fully convolutional network, and the method further includes: classifying the processed feature map pixel by pixel, to obtain the image after image segmentation.

With regard to the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Figure 20 is a block diagram of an image processing apparatus 80 according to an exemplary embodiment; the apparatus is applicable to a terminal device. For example, the apparatus 80 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Device 80 can include following one or more assemblies:Processing component 802, memory 804, power supply module 806 are more
Media component 808, audio-frequency assembly 810, the interface 812 of input/output (I/O), sensor cluster 814, and communication component
816。
The processing component 802 typically controls the overall operation of apparatus 80, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of apparatus 80. Examples of such data include instructions for any application or method operated on apparatus 80, contact data, phonebook data, messages, pictures, video, and so on. The memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 806 provides power to the various components of apparatus 80. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for apparatus 80.
The multimedia component 808 includes a screen that provides an output interface between apparatus 80 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When apparatus 80 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when apparatus 80 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of apparatus 80. For example, the sensor component 814 may detect the open/closed state of apparatus 80 and the relative positioning of components, such as the display and keypad of apparatus 80; the sensor component 814 may also detect a change in position of apparatus 80 or of a component of apparatus 80, the presence or absence of user contact with apparatus 80, the orientation or acceleration/deceleration of apparatus 80, and a change in temperature of apparatus 80. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between apparatus 80 and other devices. Apparatus 80 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, apparatus 80 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions, executable by the processor 820 of apparatus 80 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of apparatus 80, apparatus 80 is enabled to perform the above image processing method, the method including:
performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed; the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
upsampling the first feature map using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map; the second convolution module includes a second convolutional layer, and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction (a positive fraction less than 1);
obtaining, according to the second feature map, a processed feature map equal in size to the image to be processed.
The first convolution module also includes at least one pooling layer; the number of pooling layers is no greater than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernels in the first convolution module is an integer greater than or equal to 1.
Before the first feature map is upsampled, the method also includes:
applying a bilinear interpolation algorithm to the elements of the first feature map to obtain an interpolated first feature map;
and upsampling the first feature map using the second convolution module of the preset network model then includes:
upsampling the interpolated first feature map using the second convolution module.
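A minimal bilinear-interpolation sketch, written in plain Python on nested lists, shows the kind of operation applied to the feature-map elements before upsampling. It uses align-corners coordinate mapping; this is one common convention and an assumption here, not necessarily the patent's exact variant:

```python
# Bilinear interpolation of a 2-D feature map to a new size (align-corners).
def bilinear_resize(img, new_h, new_w):
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        for j in range(new_w):
            # Map output coordinates back into input space.
            y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # Weighted average of the four surrounding elements.
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

feat = [[0.0, 1.0], [2.0, 3.0]]
up = bilinear_resize(feat, 3, 3)
print(up[1][1])  # 1.5: the centre is the average of the four corners
```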
Upsampling the first feature map using the second convolution module of the preset network model includes:
convolving the convolution kernel of the second convolutional layer with the first feature map.
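One common way to realise a convolution whose kernel moves with a proper-fraction stride 1/s is to insert s − 1 zeros between neighbouring input elements and then slide the kernel with ordinary stride 1 (the transposed-convolution construction). The 1-D sketch below illustrates the idea; the construction and kernel values are illustrative assumptions, not the patent's exact algorithm:

```python
# 1-D convolution with effective stride 1/s via zero insertion.
def fractional_stride_conv1d(x, kernel, s):
    # Insert s-1 zeros after every sample; relative to the original samples,
    # the kernel now advances by 1/s per output element.
    up = []
    for v in x:
        up.append(v)
        up.extend([0.0] * (s - 1))
    up = up[:len(up) - (s - 1)]  # drop the trailing inserted zeros
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + up + [0.0] * pad  # "same" zero padding
    return [sum(padded[i + t] * kernel[t] for t in range(k))
            for i in range(len(up))]

out = fractional_stride_conv1d([1.0, 2.0, 3.0], [0.5, 1.0, 0.5], 2)
print(out)  # [1.0, 1.5, 2.0, 2.5, 3.0] -- the feature map roughly doubles
```

With this particular triangular kernel the result happens to coincide with linear interpolation, which is why fractional-stride convolutions are a natural learned upsampler.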
The method also includes: determining the number of second convolutional layers and the stride of the convolution kernel in each second convolutional layer according to the following formula:

Σ_{i=1}^{M} log E_i + Σ_{j=1}^{N} log F_j = 0

where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel in the j-th second convolutional layer, and E_i is the stride of the convolution kernel in the i-th first convolutional layer.
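Since a sum of logarithms equalling zero means the product of all strides is 1, the constraint above says the fractional strides of the second module must exactly undo the integer strides of the first module. A numeric check with illustrative strides (not values from the patent):

```python
import math

E = [2, 2, 2]        # illustrative integer strides of the first convolutional layers
F = [0.5, 0.5, 0.5]  # proper-fraction strides of the second convolutional layers

# sum_i log E_i + sum_j log F_j = 0  <=>  product of all strides = 1
total = sum(math.log(e) for e in E) + sum(math.log(f) for f in F)
print(abs(total) < 1e-12)  # True: the net scale factor of the network is 1
```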
When the second convolution module includes one second convolutional layer, the method also includes:
obtaining each convolution kernel of the first convolutional layers in the first convolution module; and
determining the size of the largest convolution kernel among the first convolutional layers as the convolution kernel size of the second convolutional layer.
When the second convolution module includes at least two second convolutional layers, the method also includes:
determining the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule;
wherein the preset rule includes at least one of the following:
the convolution kernels of the first convolutional layers in the first convolution module are sorted in a preset order, and the sorted convolution kernels are used, in turn, as the convolution kernels of the second convolutional layers in the second convolution module; or
the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
The preset network model is a fully convolutional network, and the method also includes:
classifying the processed feature map pixel by pixel to obtain an image after image segmentation.
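Pixel-by-pixel classification of the processed feature map amounts to taking, for each pixel, the class channel with the highest score. A small illustrative sketch (scores and class labels are made-up assumptions, not the patent's data):

```python
# Pixel-wise classification: for a C-channel score map the size of the input
# image, the class of each pixel is the channel with the highest score.
scores = [  # scores[c][y][x] for C=2 classes over a 2x2 image
    [[0.9, 0.2], [0.4, 0.1]],  # class 0 (e.g. background)
    [[0.1, 0.8], [0.6, 0.9]],  # class 1 (e.g. foreground)
]
h, w, c = 2, 2, 2
segmentation = [[max(range(c), key=lambda ch: scores[ch][y][x]) for x in range(w)]
                for y in range(h)]
print(segmentation)  # [[0, 1], [1, 1]]
```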
Figure 21 is a block diagram of an apparatus 90 for image processing according to an exemplary embodiment. For example, apparatus 90 may be provided as a server. Apparatus 90 includes a processing component 902, which further includes one or more processors, and memory resources represented by a memory 903, for storing instructions, such as applications, executable by the processing component 902. An application stored in the memory 903 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 902 is configured to execute the instructions to perform the above method.
Apparatus 90 may also include a power component 906 configured to perform power management for apparatus 90, a wired or wireless network interface 905 configured to connect apparatus 90 to a network, and an input/output (I/O) interface 908. Apparatus 90 may operate based on an operating system stored in the memory 903, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of apparatus 90, apparatus 90 is enabled to perform the above image processing method, the method including:
performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed; the size of the first feature map is smaller than that of the image to be processed, and the first convolution module includes at least one first convolutional layer;
upsampling the first feature map using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map; the second convolution module includes a second convolutional layer, and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction;
obtaining, according to the second feature map, a processed feature map equal in size to the image to be processed.
The first convolution module also includes at least one pooling layer; the number of pooling layers is no greater than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernels in the first convolution module is an integer greater than or equal to 1.
Before the first feature map is upsampled, the method also includes:
applying a bilinear interpolation algorithm to the elements of the first feature map to obtain an interpolated first feature map;
and upsampling the first feature map using the second convolution module of the preset network model then includes:
upsampling the interpolated first feature map using the second convolution module.
Upsampling the first feature map using the second convolution module of the preset network model includes:
convolving the convolution kernel of the second convolutional layer with the first feature map.
The method also includes: determining the number of second convolutional layers and the stride of the convolution kernel in each second convolutional layer according to the following formula:

Σ_{i=1}^{M} log E_i + Σ_{j=1}^{N} log F_j = 0

where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel in the j-th second convolutional layer, and E_i is the stride of the convolution kernel in the i-th first convolutional layer.
When the second convolution module includes one second convolutional layer, the method also includes:
obtaining each convolution kernel of the first convolutional layers in the first convolution module; and
determining the size of the largest convolution kernel among the first convolutional layers as the convolution kernel size of the second convolutional layer.
When the second convolution module includes at least two second convolutional layers, the method also includes:
determining the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule;
wherein the preset rule includes at least one of the following:
the convolution kernels of the first convolutional layers in the first convolution module are sorted in a preset order, and the sorted convolution kernels are used, in turn, as the convolution kernels of the second convolutional layers in the second convolution module; or
the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
The preset network model is a fully convolutional network, and the method also includes:
classifying the processed feature map pixel by pixel to obtain an image after image segmentation.
Those skilled in the art will readily conceive of other embodiments of the disclosure after considering the specification and practicing the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or customary technical means in the art not disclosed by the disclosure. The specification and examples are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (18)
- 1. An image processing method, characterized by including: performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed and the first convolution module includes at least one first convolutional layer; upsampling the first feature map using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map, where the second convolution module includes a second convolutional layer and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction; and obtaining, according to the second feature map, a processed feature map equal in size to the image to be processed.
- 2. The method according to claim 1, wherein the first convolution module also includes at least one pooling layer, the number of pooling layers is no greater than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernels in the first convolution module is an integer greater than or equal to 1.
- 3. The method according to claim 1, wherein before the first feature map is upsampled, the method also includes: applying a bilinear interpolation algorithm to the elements of the first feature map to obtain an interpolated first feature map; and upsampling the first feature map using the second convolution module of the preset network model includes: upsampling the interpolated first feature map using the second convolution module.
- 4. The method according to any one of claims 1 to 3, wherein upsampling the first feature map using the second convolution module of the preset network model includes: convolving the convolution kernel of the second convolutional layer with the first feature map.
- 5. The method according to any one of claims 1 to 3, wherein the method also includes: determining the number of second convolutional layers and the stride of the convolution kernel in each second convolutional layer according to the following formula: Σ_{i=1}^{M} log E_i + Σ_{j=1}^{N} log F_j = 0, where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel in the j-th second convolutional layer, and E_i is the stride of the convolution kernel in the i-th first convolutional layer.
- 6. The method according to any one of claims 1 to 3, wherein when the second convolution module includes one second convolutional layer, the method also includes: obtaining each convolution kernel of the first convolutional layers in the first convolution module; and determining the size of the largest convolution kernel among the first convolutional layers as the convolution kernel size of the second convolutional layer.
- 7. The method according to any one of claims 1 to 3, wherein when the second convolution module includes at least two second convolutional layers, the method also includes: determining the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule; wherein the preset rule includes at least one of the following: the convolution kernels of the first convolutional layers in the first convolution module are sorted in a preset order, and the sorted convolution kernels are used, in turn, as the convolution kernels of the second convolutional layers in the second convolution module; or the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
- 8. The method according to any one of claims 1 to 3, wherein the preset network model is a fully convolutional network, and the method also includes: classifying the processed feature map pixel by pixel to obtain an image after image segmentation.
- 9. An image processing apparatus, characterized by including: a first acquisition module, configured to perform feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed and the first convolution module includes at least one first convolutional layer; a second acquisition module, configured to upsample the first feature map obtained by the first acquisition module using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map, where the second convolution module includes a second convolutional layer and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction; and a third acquisition module, configured to obtain, according to the second feature map obtained by the second acquisition module, a processed feature map equal in size to the image to be processed.
- 10. The apparatus according to claim 9, wherein the first convolution module also includes at least one pooling layer, the number of pooling layers is no greater than the number of first convolutional layers, each pooling layer is arranged between two first convolutional layers, and the stride of the convolution kernels in the first convolution module is an integer greater than or equal to 1.
- 11. The apparatus according to claim 9, wherein the apparatus also includes an interpolation module, and the second acquisition module includes an upsampling submodule; the interpolation module is configured to apply a bilinear interpolation algorithm to the elements of the first feature map obtained by the first acquisition module to obtain an interpolated first feature map; and the upsampling submodule is configured to upsample, using the second convolution module, the interpolated first feature map obtained by the interpolation module.
- 12. The apparatus according to any one of claims 9 to 11, wherein the second acquisition module includes a convolution submodule; the convolution submodule is configured to convolve the convolution kernel of the second convolutional layer with the first feature map obtained by the first acquisition module.
- 13. The apparatus according to any one of claims 9 to 11, wherein the apparatus also includes a first determining module; the first determining module is configured to determine the number of second convolutional layers and the stride of the convolution kernel in each second convolutional layer according to the following formula: Σ_{i=1}^{M} log E_i + Σ_{j=1}^{N} log F_j = 0, where M is the number of first convolutional layers, N is the number of second convolutional layers, F_j is the stride of the convolution kernel in the j-th second convolutional layer, and E_i is the stride of the convolution kernel in the i-th first convolutional layer.
- 14. The apparatus according to any one of claims 9 to 11, wherein when the second convolution module includes one second convolutional layer, the apparatus also includes a fourth acquisition module and a second determining module; the fourth acquisition module is configured to obtain each convolution kernel of the first convolutional layers in the first convolution module; and the second determining module is configured to determine the size of the largest convolution kernel among the first convolutional layers obtained by the fourth acquisition module as the convolution kernel size of the second convolutional layer.
- 15. The apparatus according to any one of claims 9 to 11, wherein when the second convolution module includes at least two second convolutional layers, the apparatus also includes a third determining module; the third determining module is configured to determine the convolution kernel of each second convolutional layer in the second convolution module according to the convolution kernels of the first convolutional layers in the first convolution module and a preset rule; wherein the preset rule includes at least one of the following: the convolution kernels of the first convolutional layers in the first convolution module are sorted in a preset order, and the sorted convolution kernels are used, in turn, as the convolution kernels of the second convolutional layers in the second convolution module; or the convolution kernel size of each second convolutional layer in the second convolution module is the average of the convolution kernel sizes of a predetermined number of first convolutional layers in the first convolution module.
- 16. The apparatus according to any one of claims 9 to 11, wherein the preset network model is a fully convolutional network and the apparatus also includes a classification module; the classification module is configured to classify, pixel by pixel, the processed feature map obtained by the third acquisition module, to obtain an image after image segmentation.
- 17. An image processing apparatus, characterized by including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: perform feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed and the first convolution module includes at least one first convolutional layer; upsample the first feature map using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map, where the second convolution module includes a second convolutional layer and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction; and obtain, according to the second feature map, a processed feature map equal in size to the image to be processed.
- 18. A computer-readable storage medium storing computer instructions, characterized in that the instructions, when executed by a processor, implement the following steps: performing feature extraction on an image to be processed using a first convolution module of a preset network model to obtain a first feature map of the image to be processed, where the size of the first feature map is smaller than that of the image to be processed and the first convolution module includes at least one first convolutional layer; upsampling the first feature map using a second convolution module of the preset network model to obtain a second feature map whose size is larger than that of the first feature map, where the second convolution module includes a second convolutional layer and the stride of the convolution kernel of the second convolutional layer in the second convolution module is a proper fraction; and obtaining, according to the second feature map, a processed feature map equal in size to the image to be processed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710888763.8A CN107578054A (en) | 2017-09-27 | 2017-09-27 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710888763.8A CN107578054A (en) | 2017-09-27 | 2017-09-27 | Image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107578054A true CN107578054A (en) | 2018-01-12 |
Family
ID=61038893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710888763.8A Pending CN107578054A (en) | 2017-09-27 | 2017-09-27 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107578054A (en) |
- 2017-09-27: CN application CN201710888763.8A filed; published as CN107578054A; status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017031630A1 (en) * | 2015-08-21 | 2017-03-02 | 中国科学院自动化研究所 | Deep convolutional neural network acceleration and compression method based on parameter quantification |
CN106611148A (en) * | 2015-10-21 | 2017-05-03 | 北京百度网讯科技有限公司 | Image-based offline formula identification method and apparatus |
CN106530227A (en) * | 2016-10-27 | 2017-03-22 | 北京小米移动软件有限公司 | Image restoration method and device |
CN106650690A (en) * | 2016-12-30 | 2017-05-10 | 东华大学 | Night vision image scene identification method based on deep convolution-deconvolution neural network |
CN106709532A (en) * | 2017-01-25 | 2017-05-24 | 京东方科技集团股份有限公司 | Image processing method and device |
Non-Patent Citations (1)
Title |
---|
LDY: "Transposed Convolution, Fractionally Strided Convolution or Deconvolution", https://buptldy.github.io/2016/10/29/2016-10-29-deconv/ * |
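The non-patent reference above describes the upsampling operation at the heart of this application's second convolution module: a convolution whose kernel moves with a proper-fraction step length, i.e. a transposed ("fractionally strided") convolution. A minimal pure-Python sketch, assuming the common zero-interleaving view of fractional stride (function name, padding choice, and kernel weights are illustrative assumptions, not taken from the patent):

```python
def fractionally_strided_conv(x, kernel, stride=2):
    """Transposed convolution viewed as convolution with stride 1/stride:
    insert (stride-1) zeros between input pixels, pad, then slide the
    kernel with unit stride. Illustrative sketch only; real frameworks
    implement this as a single fused operation."""
    h, w = len(x), len(x[0])
    k = len(kernel)
    p = k - 1  # pad so the plain convolution enlarges the feature map
    uh = stride * h - (stride - 1) + 2 * p
    uw = stride * w - (stride - 1) + 2 * p
    # Zero-interleaved, padded copy of the input
    up = [[0.0] * uw for _ in range(uh)]
    for i in range(h):
        for j in range(w):
            up[p + stride * i][p + stride * j] = x[i][j]
    # Ordinary unit-stride convolution over the enlarged grid
    oh, ow = uh - k + 1, uw - k + 1
    out = [[sum(up[i + a][j + b] * kernel[a][b]
                for a in range(k) for b in range(k))
            for j in range(ow)] for i in range(oh)]
    return out

x = [[1.0] * 4 for _ in range(4)]        # 4x4 feature map
k = [[1.0 / 9.0] * 3 for _ in range(3)]  # 3x3 averaging kernel (assumed weights)
y = fractionally_strided_conv(x, k, stride=2)
print(len(y), len(y[0]))  # 9 9 -- the 4x4 map is enlarged
```

With stride 1/2, a 4x4 map becomes 9x9 under this padding choice, matching the abstract's claim that the second feature map is larger than the first.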
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108305236A (en) * | 2018-01-16 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Image enhancement processing method and device |
CN108364019A (en) * | 2018-01-30 | 2018-08-03 | 上海大学 | Image convolution outsourcing method based on DCTR features |
CN108364019B (en) * | 2018-01-30 | 2021-12-03 | 上海大学 | Image convolution outsourcing method based on DCTR features |
CN108288075B (en) * | 2018-02-02 | 2019-06-14 | 沈阳工业大学 | Lightweight small-target detection method based on improved SSD |
CN108288075A (en) * | 2018-02-02 | 2018-07-17 | 沈阳工业大学 | Lightweight small-target detection method based on improved SSD |
CN111712853A (en) * | 2018-02-16 | 2020-09-25 | 松下知识产权经营株式会社 | Processing method and processing device using the same |
CN111712853B (en) * | 2018-02-16 | 2023-11-07 | 松下知识产权经营株式会社 | Processing method and processing device using same |
CN108875904A (en) * | 2018-04-04 | 2018-11-23 | 北京迈格威科技有限公司 | Image processing method, image processing apparatus and computer readable storage medium |
CN109036522A (en) * | 2018-06-28 | 2018-12-18 | 深圳视见医疗科技有限公司 | Image processing method, device, equipment and readable storage medium |
CN109036522B (en) * | 2018-06-28 | 2021-08-17 | 深圳视见医疗科技有限公司 | Image processing method, device, equipment and readable storage medium |
CN110088777A (en) * | 2018-07-18 | 2019-08-02 | 深圳鲲云信息科技有限公司 | Deconvolution implementation method and related products |
CN110088777B (en) * | 2018-07-18 | 2023-05-05 | 深圳鲲云信息科技有限公司 | Deconvolution implementation method and related products |
CN108921806B (en) * | 2018-08-07 | 2020-08-07 | Oppo广东移动通信有限公司 | Image processing method, image processing device and terminal equipment |
CN108921806A (en) * | 2018-08-07 | 2018-11-30 | Oppo广东移动通信有限公司 | Image processing method, image processing device and terminal equipment |
CN109685750A (en) * | 2018-12-14 | 2019-04-26 | 厦门美图之家科技有限公司 | Image enhancement method and computing device |
CN109583576A (en) * | 2018-12-17 | 2019-04-05 | 上海联影智能医疗科技有限公司 | Medical image processing device and method |
US11836925B2 (en) | 2018-12-17 | 2023-12-05 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for image segmentation |
CN109583576B (en) * | 2018-12-17 | 2020-11-06 | 上海联影智能医疗科技有限公司 | Medical image processing device and method |
US11341734B2 (en) | 2018-12-17 | 2022-05-24 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for image segmentation |
CN110211017B (en) * | 2019-05-15 | 2023-12-19 | 北京字节跳动网络技术有限公司 | Image processing method and device and electronic equipment |
CN110211017A (en) * | 2019-05-15 | 2019-09-06 | 北京字节跳动网络技术有限公司 | Image processing method, device and electronic equipment |
CN110348411B (en) * | 2019-07-16 | 2024-05-03 | 腾讯科技(深圳)有限公司 | Image processing method, device and equipment |
CN110348411A (en) * | 2019-07-16 | 2019-10-18 | 腾讯科技(深圳)有限公司 | Image processing method, device and equipment |
CN110378976B (en) * | 2019-07-18 | 2020-11-13 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110378976A (en) * | 2019-07-18 | 2019-10-25 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112581414A (en) * | 2019-09-30 | 2021-03-30 | 京东方科技集团股份有限公司 | Convolutional neural network, image processing method and electronic equipment |
CN112581414B (en) * | 2019-09-30 | 2024-04-23 | 京东方科技集团股份有限公司 | Convolutional neural network, image processing method and electronic equipment |
CN111144310A (en) * | 2019-12-27 | 2020-05-12 | 创新奇智(青岛)科技有限公司 | Face detection method and system based on multi-layer information fusion |
CN111340049B (en) * | 2020-03-06 | 2023-06-09 | 清华大学 | Image processing method and device based on wide-area dynamic convolution |
CN111340049A (en) * | 2020-03-06 | 2020-06-26 | 清华大学 | Image processing method and device based on wide-area dynamic convolution |
CN113408325A (en) * | 2020-03-17 | 2021-09-17 | 北京百度网讯科技有限公司 | Method and device for identifying surrounding environment of vehicle and related equipment |
CN113344884A (en) * | 2021-06-11 | 2021-09-03 | 广州逅艺文化科技有限公司 | Video image area detection and compression method, device and medium |
CN113887542A (en) * | 2021-12-06 | 2022-01-04 | 深圳小木科技有限公司 | Target detection method, electronic device, and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578054A (en) | Image processing method and device | |
CN113743535B (en) | Neural network training method and device and image processing method and device | |
CN108629354B (en) | Target detection method and device | |
TWI766286B (en) | Image processing method and image processing device, electronic device and computer-readable storage medium | |
EP3825923A1 (en) | Hypernetwork training method and device, electronic device and storage medium | |
CN107798669A (en) | Image defogging method, device and computer-readable recording medium | |
CN109859096A (en) | Image Style Transfer method, apparatus, electronic equipment and storage medium | |
CN107492115A (en) | Target object detection method and device | |
CN106650575A (en) | Face detection method and device | |
WO2018113512A1 (en) | Image processing method and related device | |
CN106778773A (en) | Method and device for locating an object in a picture | |
US20200294249A1 (en) | Network module and distribution method and apparatus, electronic device, and storage medium | |
CN107145904A (en) | Method, device and storage medium for determining image category | |
CN107563994A (en) | Image saliency detection method and device | |
CN107992848A (en) | Method, apparatus and computer-readable recording medium for obtaining a depth image | |
CN107480665A (en) | Character detection method, device and computer-readable recording medium | |
CN107911641A (en) | Video watermark generation method, device and terminal | |
CN108062547A (en) | Character detection method and device | |
US20210089913A1 (en) | Information processing method and apparatus, and storage medium | |
CN107748867A (en) | Target object detection method and device | |
CN107948510A (en) | Focus adjustment method, apparatus and storage medium | |
CN107729880A (en) | Face detection method and device | |
CN106791014A (en) | Message display method and device | |
CN107133354A (en) | Method and device for acquiring image description information | |
CN106295707A (en) | Image recognition method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-01-12 |