CN110276389A - Edge-correction-based mine mobile inspection image reconstruction method - Google Patents
Edge-correction-based mine mobile inspection image reconstruction method - Download PDF / Info
- Publication number: CN110276389A
- Application number: CN201910513747.XA
- Authority: CN (China)
- Prior art keywords: image, formula, reconstruction, edge
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
Abstract
The invention discloses an edge-correction-based image reconstruction method for mine mobile inspection, comprising the following steps. Step 1: image preprocessing, in which the input image is down-sampled to different degrees and split into sub-images. Step 2: image feature extraction and representation, with an improved activation function after each convolutional layer. Step 3: image reconstruction, in which the Y channel is rebuilt with the method of the invention. Step 4: correction of edge-error region information. In the preprocessing stage the training-set images are scaled at several factors to enable later cross-training, so that one network model handles reconstruction at zoom factors such as 2, 3 and 4. The network is moderately deepened to extract richer image feature information. An edge correction coefficient is extracted, and the reconstructed HR image is corrected with edge information, solving the problem of blurred edge details. The improved activation function raises the non-linear expressive power and activates more non-linear regional features.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an edge-correction-based mine mobile inspection image reconstruction method.
Background art
A modern mine contains many coal-conveying belts carrying large coal flows, so it is difficult to check manually whether belt conveying is intact. In daily production, mobile inspection is therefore widely used to detect the state of coal transport automatically and efficiently. During inspection, however, the imaging environment can make the captured video unclear or distorted, which degrades detection; the monitoring images from mobile inspection therefore need super-resolution reconstruction.
Mobile inspection images allow the transport condition of the coal flow and the safety of underground workers to be checked in real time, so the legibility of the images is essential. Super-resolution reconstruction improves the clarity and resolution of the images; in sum, super-resolution reconstruction of mobile inspection images has lasting research value.
Image super-resolution reconstruction recovers a high-resolution (HR) image directly from a low-resolution (LR) one. Current methods fall into three classes: interpolation-based, reconstruction-based and learning-based. Interpolation is very simple and easy to implement. Reconstruction-based methods build on a degradation model and reconstruct using image priors. In recent years, learning-based methods have attracted the most attention. Yang et al. studied sparse coding (SC), learning the mapping between LR and HR images by training a pair of high/low-resolution dictionaries. Timofte et al. combined sparse-coding dictionaries with neighborhood embedding, proposing anchored neighborhood regression (ANR) and adjusted anchored neighborhood regression (A+), among others.
Later, with the rise of deep learning, super-resolution methods based on deep networks achieved notable results. Dong C et al. first proposed super resolution using convolutional neural networks (SRCNN), which directly learns an end-to-end mapping between LR and HR images: dictionary learning and the spatial model are established in the hidden layers, and patch extraction and upscaling are carried out by convolutional layers, avoiding most pre- and post-processing. Chen Y et al. proposed the trainable nonlinear reaction diffusion (TNRD) network, in which the filter parameters and influence functions of every stage are learned jointly. Kim et al., inspired by ResNet, proposed deeper networks for super-resolution, lightening the learning "burden" of the network and building convolutional neural networks under different magnification scales. In 2016, Dong et al. improved on SRCNN with fast super resolution using convolutional neural networks (FSRCNN), which feeds the un-preprocessed LR image directly into the network, upsamples with a deconvolution layer at the end of the network, and adjusts the convolution kernel sizes, reducing the parameter count of the whole network structure. Experiments showed that FSRCNN trains faster than SRCNN and reconstructs more sharply. In 2017, Xiao Jinsheng et al. adjusted the kernel sizes of the three-layer SRCNN network and added pooling layers to reduce dimensionality and computation.
Interpolation-based methods struggle to reproduce texture and other detail, so the generated images are blurry. Reconstruction-based methods usually require computationally heavy image registration and fusion stages whose accuracy directly limits the quality of the result. Among learning-based methods, SC needs a large number of high/low-resolution image patches to train the dictionaries and is time-consuming; SRCNN has too few convolutional layers, a small receptive field and limited image features, and a trained network only serves one magnification factor; the deeper convolutional networks of Kim et al. have a more complex overall architecture, longer training times and heavier computation; FSRCNN trades reconstruction quality for its reduced parameter count and computation; and the pooling layers used by Xiao Jinsheng's method to reduce dimensionality discard much image detail, hurting super-resolution accuracy.
Summary of the invention
In order to overcome the above deficiencies of the prior art, the present invention provides an edge-correction-based mine mobile inspection image reconstruction method.
The technical scheme adopted by the invention is an edge-correction-based mine mobile inspection image reconstruction method comprising the following steps:
Step 1: image preprocessing, in which the input image is down-sampled to different degrees and split into sub-images;
Step 2: image feature extraction and representation, in which seven convolutional layers (Conv.1, Conv.2, ..., Conv.7) map the image features non-linearly, with zero-padding applied and an improved activation function after each convolutional layer, six activation layers in all (Active.1, Active.2, ..., Active.6);
Step 3: image reconstruction, in which the Y channel is rebuilt with the method of the invention, the CbCr channels are rebuilt with bicubic interpolation, and the rebuilt channels are fused into the final HR image;
Step 4: correction of edge-error region information, in which an edge coefficient is extracted from the HR sample images of the training database and used to correct the unreliable edge information of the reconstructed image.
Further, step 2 comprises the three processes of feature-block extraction, non-linear mapping and reconstruction:
(1) Feature-block extraction: the required feature blocks are extracted from the preprocessed input image Y, and each block is expressed as a high-dimensional vector:
F1(Y) = max(0, W1*Y + B1)   (1)
where W1 denotes the convolution kernels and B1 the bias vector; the kernel size is f1×f1, and n1 kernels convolve the c image channels into n1 LR feature maps. Since the luminance channel affects image quality most, only it is usually considered, so c is taken as 1; the convolutional layer is followed by the rectified linear unit ReLU as the activation function processing the extracted LR image blocks;
(2) Non-linear mapping: the LR image blocks are mapped to HR image blocks, i.e. the n1-dimensional feature matrix is mapped to an n2-dimensional feature matrix, forming a new group of feature maps, namely n2 HR feature blocks:
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 denotes the convolution kernels and B2 the bias vector; the kernel size is f2×f2, and n2 kernels convolve the n1 LR feature maps into n2 HR feature maps;
(3) Image reconstruction: the n2 HR feature maps produced by the previous layer are convolved and merged into the final HR image:
F(Y) = W3*F2(Y) + B3   (3)
where W3 denotes the convolution kernels and B3 the bias vector; the kernel size is f3×f3, and c kernels convolve the n2 HR feature maps into the reconstructed HR image, with c = 1 as in (1). The crux of the CNN-based image reconstruction algorithm is training an optimal set of network parameters Θ = {W1, W2, W3, B1, B2, B3}, i.e. continually optimizing the error loss function between the reconstructed image F(Y, Θ) and the true HR image X until it falls within the prescribed minimum range:
L(Θ) = (1/n) Σi ||F(Yi, Θ) - Xi||^2   (4)
where n is the number of chosen samples, Xi the chosen HR samples and Yi the LR samples; the formula is optimized by stochastic gradient descent.
Further, in step 1 above, the images are preprocessed: the training-set images are first converted from the RGB color space to the YCbCr format and the Y channel is extracted; then bicubic interpolation scales the LR images at different factors, giving LR network inputs of different sizes; finally the HR and processed LR images are split into patches of size 24/48, 24/72 and 24/96 respectively.
Further, in step 2 above, a new activation function, PReLU-Softplus, is constructed with the expression:
f(x) = ln(1 + e^x) - ln2,  x ≥ 0;  f(x) = αx,  x < 0
where α is taken as 0.01.
Further, in step 2 above, the output of a convolutional layer is:
x^l_{i,j} = f((x^{l-1} × W^l)_{i,j} + b^l)   (8)
where x is the value of a pixel in the output feature map, l the convolutional-layer index, i and j the position coordinates of the pixel, f(·) the activation function, W the weight parameters and b the bias constant; this can also be written as
x^l = f(μ^l)   (9)
where μ^l = x^{l-1} × W^l + b^l. The SGD algorithm minimizes the loss function
L(Θ) = (1/N) Σi ||F(Xi, Θ) - Yi||^2   (10)
continually updating the weight parameters, where N is the number of training samples, Yi the standard HR images, Xi the input LR images, Θ = (W1, W2, ..., W7, b1, b2, ..., b7) the weight parameters and F(·) the mapping function of the network model. Letting E denote the network's output error loss function and δ^l the error contributed to E by the bias constant b of convolutional layer l, the relationship
δ^l = ∂E/∂b^l   (11)
holds, with the recurrence and gradient formulas
δ^l = (δ^{l+1} × W^{l+1}) ⊙ f'(μ^l)  (element-wise product)   (12)
∂E/∂W^l = δ^l × x^{l-1}   (13)
Multiplying (11) and (13) by the negative learning rate η gives the weight update of the current convolutional layer:
b^l ← b^l - η δ^l,  W^l ← W^l - η δ^l × x^{l-1}   (14)(15)
Further, in step 4 above, the edge information of the image is obtained to fuse with and edge-correct the reconstructed image: the reconstructed HR image is corrected by a linear operation on neighborhood pixel differences, and edge information from different directions is fused with the reconstructed image. The gradient information of the four directions x+, x-, y+ and y- is computed; each directional gradient is given by formulas (16)-(19):
sx+(x, y) = f(x+1, y) - f(x, y)   (16)
sx-(x, y) = f(x-1, y) - f(x, y)   (17)
sy+(x, y) = f(x, y+1) - f(x, y)   (18)
sy-(x, y) = f(x, y-1) - f(x, y)   (19)
where f(x, y) is the gray value of the pixel at coordinate (x, y) in the image. These formulas yield the gradient images sx+, sx-, sy+ and sy- of the four directions, and the edge information gi is given by formula (20):
gi = ki · si  (i = 1, 2, 3, ..., n×n)   (20)
where ki = [a, b, c, d] is a coefficient matrix and si = [sx+, sx-, sy+, sy-]^T; the coefficient matrix K = [k1, k2, ..., kn×n] is then obtained by solving formula (21):
min ||G + HR - I||   (21)
where G = {g1, g2, ..., gn×n} = {k1·s1, k2·s2, ..., kn·sn} = KS is the set of edge information, S = [s1, s2, ..., sn×n] the set of gradient information, HR the high-resolution image reconstructed by the convolutional layers and I the sample image. With the coefficient matrix K obtained, the fusion process selects the matching edge-fusion coefficient matrix by the minimum-Euclidean-distance method.
Compared with the prior art, the beneficial effects of the present invention are:
(1) Training-set images are scaled at several factors in the preprocessing stage to enable later cross-training, so that one network model serves reconstruction at zoom factors such as 2, 3 and 4.
(2) The network is moderately deepened from the original three convolutional layers to seven convolutional layers and six activation layers, with the kernel size set to 3×3, extracting richer image feature information.
(3) An edge correction coefficient is learned, and the reconstructed HR image is corrected with edge information, solving the problem of blurred edge details.
(4) The improved activation function raises the non-linear expressive power and activates more non-linear regional features.
Description of the drawings
Fig. 1 is a structural schematic diagram of the edge-correction-based mine mobile inspection image reconstruction method of the present invention;
Fig. 2 is a structure chart of the SRCNN model;
Fig. 3 shows the ReLU and Softplus functions;
Fig. 4 shows the improved non-linear activation function of the present invention.
Specific embodiment
In order to deepen understanding of the present invention, the invention is further explained below with reference to the attached drawings and an embodiment; the embodiment serves only to explain the invention and does not limit its protection scope.
As shown in Fig. 1, the steps of the present invention are as follows:
Step 1: image preprocessing, which mainly down-samples the input image to different degrees and splits it into sub-images;
Step 2: image feature extraction and representation, with seven convolutional layers (Conv.1, Conv.2, ..., Conv.7) and six activation layers (Active.1, Active.2, ..., Active.6); convolution performs the non-linear mapping of the image features; zero-padding is applied so that the feature maps after convolution do not shrink and lose edge information as the network deepens; the kernel size is 3×3, and the improved activation function used after each convolutional layer gives a better regularization effect;
Step 3: image reconstruction, in which the Y channel is rebuilt with the method of the invention, the other two color channels are rebuilt directly with bicubic interpolation, and the rebuilt channels are then fused into the final HR image;
Step 4: correction of edge-error region information, in which an edge coefficient is extracted from the HR sample images of the training database and used to correct the unreliable edge information of the reconstructed image.
In the above embodiment, the SRCNN model proposed by Dong et al. first introduced deep learning into the image SR field. The model consists of three convolutional layers, as shown in Fig. 2: convolution learns the non-linear mapping between HR and LR image blocks, replacing the over-complete dictionary-pair learning of sparse-coding algorithms, and every part of the reconstruction process from input to output is handled within the convolution optimization, realizing the distinctive performance of a convolutional neural network (CNN).
Fig. 2 shows that the algorithm model comprises three processes (feature-block extraction, non-linear mapping and reconstruction), with a simple structure and a clear line of thought. Each stage is analyzed in detail below:
(1) Feature-block extraction: the required feature blocks are extracted from the preprocessed input image Y, and each block is expressed as a high-dimensional vector:
F1(Y) = max(0, W1*Y + B1)   (1)
where W1 denotes the convolution kernels and B1 the bias vector; the kernel size is f1×f1, and n1 kernels convolve the c image channels into n1 LR feature maps. Since the luminance channel affects image quality most, only it is usually considered, so c is taken as 1; the convolutional layer is followed by the rectified linear unit ReLU as the activation function processing the extracted LR image blocks.
(2) Non-linear mapping: the LR image blocks are mapped to HR image blocks, i.e. the n1-dimensional feature matrix is mapped to an n2-dimensional feature matrix, forming a new group of feature maps, namely n2 HR feature blocks:
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 denotes the convolution kernels and B2 the bias vector; the kernel size is f2×f2, and n2 kernels convolve the n1 LR feature maps into n2 HR feature maps.
(3) Image reconstruction: the n2 HR feature maps produced by the previous layer are convolved and merged into the final HR image:
F(Y) = W3*F2(Y) + B3   (3)
where W3 denotes the convolution kernels and B3 the bias vector; the kernel size is f3×f3, and c kernels convolve the n2 HR feature maps into the reconstructed HR image, with c = 1 as in (1).
The crux of the CNN-based image reconstruction algorithm is training an optimal set of network parameters Θ = {W1, W2, W3, B1, B2, B3}, i.e. continually optimizing the error loss function between the reconstructed image F(Y, Θ) and the true HR image X until it falls within the prescribed minimum range:
L(Θ) = (1/n) Σi ||F(Yi, Θ) - Xi||^2   (4)
where n is the number of chosen samples, Xi the chosen HR samples and Yi the LR samples; the formula is optimized by stochastic gradient descent (SGD).
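The three-stage SRCNN pipeline of formulas (1)-(3) can be sketched as a naive NumPy forward pass. The layer widths and 3×3 kernels below are illustrative assumptions, not the patent's trained parameters, and `conv2d` is a plain "valid" cross-correlation rather than an optimized convolution:

```python
import numpy as np

def conv2d(x, w, b):
    """Plain 'valid' cross-correlation: x has shape (n_in, H, W),
    w has shape (n_out, n_in, k, k), b has shape (n_out,)."""
    n_out, n_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((n_out, H, W))
    for o in range(n_out):
        for i in range(n_in):
            for r in range(H):
                for c in range(W):
                    out[o, r, c] += np.sum(x[i, r:r + k, c:c + k] * w[o, i])
        out[o] += b[o]
    return out

def srcnn_forward(y, params):
    """Three-stage SRCNN pipeline: feature-block extraction F1, non-linear
    mapping F2 and reconstruction, with ReLU after the first two stages."""
    (w1, b1), (w2, b2), (w3, b3) = params
    f1 = np.maximum(0.0, conv2d(y, w1, b1))   # F1(Y) = max(0, W1*Y + B1)
    f2 = np.maximum(0.0, conv2d(f1, w2, b2))  # F2(Y) = max(0, W2*F1(Y) + B2)
    return conv2d(f2, w3, b3)                 # F(Y)  = W3*F2(Y) + B3
```

With random weights this only exercises the shapes; actual use would first train the parameter set Θ = {W1, W2, W3, B1, B2, B3} by SGD as described above.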
In the above embodiment, the edge-correction-based mine mobile inspection image reconstruction method includes the following steps. 1. Preprocessing: in the image SR field the input image needs simple processing before it can be fed to the SR model, after which feature extraction, non-linear mapping and reconstruction yield the final restored HR image. The preprocessing of the invention first converts the training-set images from RGB to the YCbCr color format and extracts the Y channel; then bicubic interpolation scales the LR images at different factors (2, 3, 4), giving LR network inputs of different sizes; finally the HR and processed LR images are split into patches of size 24/48, 24/72 and 24/96 respectively, so that the network model can be cross-trained at multiple scales, improving its generalization ability.
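A minimal sketch of this preprocessing, assuming ITU-R BT.601 YCbCr coefficients for the Y channel and substituting a simple box average for bicubic interpolation for brevity (patch size and stride are illustrative):

```python
import numpy as np

def rgb_to_y(rgb):
    """Y (luma) channel of YCbCr for an RGB image in [0, 255],
    using the BT.601 'studio' coefficients (an assumed convention)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 16.0 + 0.257 * r + 0.504 * g + 0.098 * b

def downscale(y, factor):
    """Stand-in for bicubic downscaling: average over factor x factor blocks."""
    h = (y.shape[0] // factor) * factor
    w = (y.shape[1] // factor) * factor
    return y[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def split_patches(img, size, stride):
    """Cut an image into size x size training patches with the given stride."""
    return [img[r:r + size, c:c + size]
            for r in range(0, img.shape[0] - size + 1, stride)
            for c in range(0, img.shape[1] - size + 1, stride)]
```

For scale factor 2, for example, a 48×48 HR sub-image would pair with the 24×24 patch of its downscaled counterpart, matching the 24/48 patch pairing described above.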
In the above embodiment, 2. Optimized activation function: as shown in Fig. 3, PReLU and Softplus are combined to construct a new activation function, PReLU-Softplus, which has non-linear characteristics yet retains sparsity; its curve is the starred dashed line in Fig. 4. The dot-dashed line in Fig. 4 is the Softplus function, with expression:
f(x) = ln(1 + e^x)   (5)
At x = 0, f(x) = ln2. The dashed line is the ReLU activation function, with expression:
f(x) = max(0, x)   (6)
For x < 0, f(x) = 0. The improvement here is as follows: for x ≥ 0, the Softplus function is translated downward by ln2, i.e. ln(1 + e^x) - ln2; for x < 0, the negative branch of PReLU, αx, is used with α taken as 0.01, which can be learned and adjusted during training rather than being clamped to 0 and killing neurons. This improvement retains the sparsity of the function while strengthening its non-linearity, comes very close to the biological activation behavior of a neuron, raises the network's capacity to express information, lets the network fit functions better, strengthens its mapping ability and improves reconstruction. The full expression of PReLU-Softplus is given by formula (7):
f(x) = ln(1 + e^x) - ln2,  x ≥ 0;  f(x) = αx,  x < 0   (7)
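The PReLU-Softplus activation of formula (7) can be implemented directly; the numerically stable rewriting of Softplus below is an implementation choice, not part of the patent:

```python
import numpy as np

ALPHA = 0.01  # slope of the negative branch, per the text

def prelu_softplus(x):
    """PReLU-Softplus: Softplus shifted down by ln2 for x >= 0 (so f(0) = 0),
    PReLU's leaky branch alpha*x for x < 0."""
    x = np.asarray(x, dtype=float)
    # log1p(exp(-|x|)) + max(x, 0) == ln(1 + e^x), without overflow
    pos = np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0) - np.log(2.0)
    return np.where(x >= 0, pos, ALPHA * x)
```

Note the function is continuous at 0 (both branches give 0), which is the point of the ln2 shift.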
In the above embodiment, 3. Network training: image feature extraction is a linear convolution performed by a convolutional layer followed by the non-linear operation of the activation function, a process mapping n1-dimensional vector features to n2 dimensions. From the convolution process, the output of a convolutional layer is:
x^l_{i,j} = f((x^{l-1} × W^l)_{i,j} + b^l)   (8)
where x is the value of a pixel in the output feature map, l the convolutional-layer index, i and j the position coordinates of the pixel, f(·) the activation function, W the weight parameters and b the bias constant; this can also be written as
x^l = f(μ^l)   (9)
where μ^l = x^{l-1} × W^l + b^l.
During network training, the mean square error (MSE) is used as the loss function to update W and b, as shown in formula (10):
L(Θ) = (1/N) Σi ||F(Xi, Θ) - Yi||^2   (10)
and the SGD algorithm minimizes this loss, continually updating the weight parameters, where N is the number of training samples, Yi the standard HR images, Xi the input LR images, Θ = (W1, W2, ..., W7, b1, b2, ..., b7) the weight parameters and F(·) the mapping function of the network model.
Letting E denote the network's output error loss function and δ^l the error contributed to E by the bias constant b of convolutional layer l, the following relationship holds:
δ^l = ∂E/∂b^l   (11)
The recurrence and gradient formulas are as follows:
δ^l = (δ^{l+1} × W^{l+1}) ⊙ f'(μ^l)  (element-wise product)   (12)
∂E/∂W^l = δ^l × x^{l-1}   (13)
Multiplying (11) and (13) by the negative learning rate η gives the weight update of the current convolutional layer:
b^l ← b^l - η δ^l,  W^l ← W^l - η δ^l × x^{l-1}   (14)(15)
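A toy sketch of this MSE-plus-SGD training loop, using a single linear "layer" in place of the seven-layer network (the model, data and learning rate are illustrative assumptions, but the negative-learning-rate update structure is the one described above):

```python
import numpy as np

def mse_loss(pred, target):
    """MSE training objective: mean squared error between output and HR target."""
    return np.mean((pred - target) ** 2)

def sgd_step(w, b, x, y, lr=0.05):
    """One SGD update for a toy linear model pred = w*x + b: the gradients of
    the loss w.r.t. w and b are scaled by the negative learning rate."""
    err = (w * x + b) - y           # dL/dpred, up to the constant factor 2/N
    grad_w = 2.0 * np.mean(err * x) # dL/dw
    grad_b = 2.0 * np.mean(err)     # dL/db
    return w - lr * grad_w, b - lr * grad_b
```

Iterating `sgd_step` drives the loss down, which is all formula-level training does at scale (with per-layer backpropagated errors in place of the scalar gradient here).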
In the above embodiment, 4. Edge correction coefficient: the edge information of the image is obtained to fuse with and edge-correct the reconstructed image. The difference between a neighborhood pixel value and the pixel itself expresses the edge information of an image well, so the reconstructed HR image is corrected with a linear operation on neighborhood pixel differences. Because different gradient directions of the image yield different edge information, edge information from several directions is fused with the reconstructed image, making the edge information of the image richer and the visual effect better. The gradient information of the four directions x+, x-, y+ and y- is therefore computed; each directional gradient is given by formulas (16)-(19):
sx+(x, y) = f(x+1, y) - f(x, y)   (16)
sx-(x, y) = f(x-1, y) - f(x, y)   (17)
sy+(x, y) = f(x, y+1) - f(x, y)   (18)
sy-(x, y) = f(x, y-1) - f(x, y)   (19)
where f(x, y) is the gray value of the pixel at coordinate (x, y) in the image. These formulas yield the gradient images sx+, sx-, sy+ and sy- of the four directions. The edge information gi is given by formula (20):
gi = ki · si  (i = 1, 2, 3, ..., n×n)   (20)
where ki = [a, b, c, d] is a coefficient matrix and si = [sx+, sx-, sy+, sy-]^T. The coefficient matrix K = [k1, k2, ..., kn×n] is then obtained by solving formula (21):
min ||G + HR - I||   (21)
where G = {g1, g2, ..., gn×n} = {k1·s1, k2·s2, ..., kn·sn} = KS is the set of edge information, S = [s1, s2, ..., sn×n] the set of gradient information, HR the high-resolution image reconstructed by the convolutional layers and I the sample image. The formula is solved with an improved orthogonal matching pursuit (OMP) algorithm to obtain the coefficient matrix K. During fusion, the matching edge-fusion coefficient matrix is selected by the minimum-Euclidean-distance method.
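A sketch of the four-direction gradient extraction and the linear edge-information combination of formula (20). The exact difference stencils of formulas (16)-(19) are not legible in this text, so simple forward/backward pixel differences are assumed here:

```python
import numpy as np

def directional_gradients(f):
    """Gradient images of a grayscale image f in the four directions
    x+, x-, y+ and y- as plain pixel differences (borders left at 0).
    The precise stencils are an assumption, not the patent's formulas."""
    sxp = np.zeros_like(f); sxp[:, :-1] = f[:, 1:] - f[:, :-1]   # x+
    sxm = np.zeros_like(f); sxm[:, 1:]  = f[:, :-1] - f[:, 1:]   # x-
    syp = np.zeros_like(f); syp[:-1, :] = f[1:, :] - f[:-1, :]   # y+
    sym = np.zeros_like(f); sym[1:, :]  = f[:-1, :] - f[1:, :]   # y-
    return sxp, sxm, syp, sym

def edge_info(f, k):
    """Linear combination g = k . s of the four directional gradient
    images with coefficients k = [a, b, c, d], as in formula (20)."""
    grads = directional_gradients(f)
    return sum(ki * si for ki, si in zip(k, grads))
```

On a vertical step edge only the horizontal gradients respond, which is why combining several directions gives richer edge information than any single one.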
The embodiment disclosed above is a preferred one, but the invention is not limited to it; those of ordinary skill in the art can readily grasp the spirit of the invention from the embodiment and make various extensions and variations, all of which fall within the protection scope of the invention as long as they do not depart from its spirit.
Claims (6)
1. one kind is based on the modified mine movable inspection image rebuilding method in edge, which comprises the following steps:
Step 1: image preprocessing carries out different degrees of down-sampling and image dividing processing to input picture;
Step 2: image characteristics extraction and expression;Using 7 layers of convolutional layer (Conv.1, Conv.2 ..., Conv.7) to image spy
Sign carries out Nonlinear Mapping, and parameter padding is set as 0, uses improved activation primitive respectively behind each convolutional layer, and totally 6
Layer activation primitive layer (Active.1, Active.2 ..., Active.6), until use finishes;
Step 3: image reconstruction;Rebuilding the channel Y using the method for the present invention will be rebuild using the channel Bicubic interpolation reconstruction CbCr
Good image co-registration is final HR image;
Step 4: the amendment of marginal error area information extracts the fringing coefficient of HR sample image by training sample database, with amendment
The insecure marginal information of image after reconstruction.
2. The edge-correction-based mine mobile inspection image reconstruction method according to claim 1, characterized in that said step 2 comprises the three processes of feature-block extraction, non-linear mapping and reconstruction:
(1) Feature-block extraction: the required feature blocks are extracted from the preprocessed input image Y, and each block is expressed as a high-dimensional vector:
F1(Y) = max(0, W1*Y + B1)   (1)
where W1 denotes the convolution kernels and B1 the bias vector; the kernel size is f1×f1, and n1 kernels convolve the c image channels into n1 LR feature maps; since the luminance channel affects image quality most, only it is usually considered, so c is taken as 1; the convolutional layer is followed by the rectified linear unit ReLU as the activation function processing the extracted LR image blocks;
(2) Non-linear mapping: the LR image blocks are mapped to HR image blocks, i.e. the n1-dimensional feature matrix is mapped to an n2-dimensional feature matrix, forming a new group of feature maps, namely n2 HR feature blocks:
F2(Y) = max(0, W2*F1(Y) + B2)   (2)
where W2 denotes the convolution kernels and B2 the bias vector; the kernel size is f2×f2, and n2 kernels convolve the n1 LR feature maps into n2 HR feature maps;
(3) Image reconstruction: the n2 HR feature maps produced by the previous layer are convolved and merged into the final HR image:
F(Y) = W3*F2(Y) + B3   (3)
where W3 denotes the convolution kernels and B3 the bias vector; the kernel size is f3×f3, and c kernels convolve the n2 HR feature maps into the reconstructed HR image, with c = 1 as in (1); the crux of the CNN-based image reconstruction algorithm is training an optimal set of network parameters Θ = {W1, W2, W3, B1, B2, B3}, i.e. continually optimizing the error loss function between the reconstructed image F(Y, Θ) and the true HR image X until it falls within the prescribed minimum range:
L(Θ) = (1/n) Σi ||F(Yi, Θ) - Xi||^2   (4)
where n is the number of chosen samples, Xi the chosen HR samples and Yi the LR samples; the formula is optimized by stochastic gradient descent.
3. The edge-correction-based mine mobile inspection image reconstruction method according to claim 1, characterized in that: in step 1 above, the images are preprocessed: the training-set images are first converted from the RGB color space to the YCbCr color format and the Y channel is extracted; the LR images are then scaled at different scales using the bicubic interpolation method, and LR images of different sizes are obtained as the network input; finally, the HR images and the processed LR images are divided into 24/48, 24/72 and 24/96, respectively.
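The preprocessing described in this claim can be sketched as follows. The luma weights are the standard BT.601 coefficients behind the RGB→YCbCr conversion; the bicubic interpolation itself would normally be delegated to a library such as OpenCV or Pillow, so a simple box-average downscaler stands in for it here purely for illustration.

```python
import numpy as np

def rgb_to_y(rgb):
    """Extract the Y (luma) channel of the YCbCr representation,
    using the BT.601 weights (rgb: H x W x 3 float array)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def downscale(y, factor):
    """Stand-in for bicubic downsampling: average over factor x factor
    blocks (a real pipeline would use e.g. PIL's Image.resize with
    Image.Resampling.BICUBIC)."""
    h, w = y.shape
    h2, w2 = h // factor * factor, w // factor * factor
    y = y[:h2, :w2]
    return y.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))
```

Scaling the Y-channel image by several factors in this way yields the differently sized LR inputs the claim pairs with the HR images.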
4. The edge-correction-based mine mobile inspection image reconstruction method according to claim 1, characterized in that: in step 2 above, a new activation function, PReLU-Softplus, is constructed, with its parameter α taken as 0.01.
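The claim names the activation but its expression is not reproduced in this extract, so the blended form below is purely a hypothetical illustration of how PReLU (linear slope α on the negative side) and Softplus (smooth ln(1+eˣ)) could be combined; only the value α = 0.01 comes from the claim.

```python
import math

ALPHA = 0.01  # the claim fixes α = 0.01

def prelu(x, alpha=ALPHA):
    # PReLU: identity for x > 0, small linear slope alpha otherwise
    return x if x > 0 else alpha * x

def softplus(x):
    # Softplus: smooth approximation of ReLU, ln(1 + e^x)
    return math.log1p(math.exp(x))

def prelu_softplus(x, alpha=ALPHA):
    """Hypothetical combination (the patent's exact formula is not shown
    here): Softplus on the positive branch, PReLU slope on the negative."""
    return softplus(x) if x > 0 else alpha * x
```

Unlike plain ReLU, both branches keep a nonzero gradient, which is the usual motivation for such variants.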
5. The edge-correction-based mine mobile inspection image reconstruction method according to claim 1, characterized in that: in step 2 above, the output of a convolutional layer is:
x_{ij}^l = f((x^{l-1} * W^l)_{ij} + b^l)   (8)
In the above formula, x is the value of a pixel in the output feature map, l is the index of the convolutional layer, i and j denote the position coordinates of the pixel, f(·) denotes the activation function, W denotes the weight parameters, and b denotes the offset constant; this can also be expressed as:
x^l = f(μ^l)   (9)
where μ^l = x^{l-1} × W^l + b^l. The SGD algorithm is used to minimize the loss function and thereby continually optimize the weight parameters:
L(Θ) = (1/N) Σ_{i=1}^{N} ||F(X_i; Θ) − Y_i||²   (10)
where N is the number of training samples, Y_i is the standard HR image, X_i is the input LR image, Θ = (W_1, W_2, ..., W_7, b_1, b_2, ..., b_7) denotes the weight parameters, and F(·) is the mapping function of the network model.
Let E denote the output error loss function of the network, and let δ^l denote the error contributed by the offset constant b of convolutional layer l; then the following relationship holds:
δ^l = ∂E/∂b^l   (11)
The recurrence formula is as follows:
δ^l = (δ^{l+1} * rot180(W^{l+1})) ⊙ f'(μ^l)   (12)
∂E/∂W^l = x^{l-1} * δ^l   (13)
Multiplying formulas (11) and (13) by the negative learning rate gives the update of the current convolutional layer, as shown below:
b^l ← b^l − η δ^l,  W^l ← W^l − η ∂E/∂W^l   (14)(15)
6. The edge-correction-based mine mobile inspection image reconstruction method according to claim 1, characterized in that: in step 4 above, the edge information of the image is obtained and used to fuse and edge-correct the reconstructed image; the reconstructed HR image is corrected by a linear operation on neighbouring-pixel difference values, and the edge information in different directions is fused with the reconstructed image. The gradient information in the four directions x+, x−, y+ and y− is computed separately, as shown in formulas (16)–(19):
s_x+ = f(x+1, y) − f(x, y)   (16)
s_x− = f(x−1, y) − f(x, y)   (17)
s_y+ = f(x, y+1) − f(x, y)   (18)
s_y− = f(x, y−1) − f(x, y)   (19)
where f(x, y) denotes the gray value of the pixel at coordinate (x, y) in the image. From the gradient formulas, the gradient-information images s_x+, s_x−, s_y+ and s_y− of the four directions are computed, and the edge information g_i is given by formula (20):
g_i = k_i · s_i   (i = 1, 2, 3, ..., n×n)   (20)
where k_i = [a, b, c, d] is the coefficient matrix and s_i = [s_x+, s_x−, s_y+, s_y−]^T; the coefficient matrix K = [k_1, k_2, ..., k_{n×n}] can then be obtained by solving formula (21):
min ||G + HR − I||   (21)
where G = {g_1, g_2, ..., g_{n×n}} = {k_1·s_1, k_2·s_2, ..., k_{n×n}·s_{n×n}} = KS is the set of edge information, S = [s_1, s_2, ..., s_{n×n}] is the set of gradient information, HR is the high-resolution image reconstructed by the convolutional layers, and I is the sample image. Having obtained the coefficient matrix K, the matching edge-fusion coefficient matrix is selected during fusion by the minimum-Euclidean-distance method.
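The four directional gradients can be sketched as neighbouring-pixel differences. This is one plausible reading of the claim's formulas (16)–(19) (the exact boundary handling is not specified in this extract); here the borders are simply zero-padded.

```python
import numpy as np

def directional_gradients(img):
    """Neighbouring-pixel differences in the x+, x-, y+ and y- directions,
    returning the four gradient-information images s_x+, s_x-, s_y+, s_y-."""
    f = img.astype(float)
    sxp = np.zeros_like(f); sxp[:, :-1] = f[:, 1:] - f[:, :-1]   # s_x+ = f(x+1,y) - f(x,y)
    sxm = np.zeros_like(f); sxm[:, 1:]  = f[:, :-1] - f[:, 1:]   # s_x- = f(x-1,y) - f(x,y)
    syp = np.zeros_like(f); syp[:-1, :] = f[1:, :] - f[:-1, :]   # s_y+ = f(x,y+1) - f(x,y)
    sym = np.zeros_like(f); sym[1:, :]  = f[:-1, :] - f[1:, :]   # s_y- = f(x,y-1) - f(x,y)
    return sxp, sxm, syp, sym
```

Stacking these four responses per pixel gives the vector s_i of formula (20), which the coefficient vectors k_i then weight into the edge information g_i.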
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910513747.XA CN110276389B (en) | 2019-06-14 | 2019-06-14 | Mine mobile inspection image reconstruction method based on edge correction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110276389A (en) | 2019-09-24
CN110276389B CN110276389B (en) | 2023-04-07 |
Family
ID=67960871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910513747.XA Active CN110276389B (en) | 2019-06-14 | 2019-06-14 | Mine mobile inspection image reconstruction method based on edge correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110276389B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108550115A (en) * | 2018-04-25 | 2018-09-18 | China University of Mining and Technology | Image super-resolution reconstruction method
Non-Patent Citations (1)
Title |
---|
YU XINA: "Research on Learning-based Image Super-resolution Reconstruction Methods", China Master's Theses Full-text Database *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113496468A (en) * | 2020-03-20 | 2021-10-12 | 北京航空航天大学 | Method and device for restoring depth image and storage medium |
CN111767928A (en) * | 2020-06-28 | 2020-10-13 | 中国矿业大学 | Method and device for extracting image characteristic information based on convolutional neural network |
CN111767928B (en) * | 2020-06-28 | 2023-08-08 | 中国矿业大学 | Method and device for extracting image characteristic information based on convolutional neural network |
CN112215525A (en) * | 2020-11-04 | 2021-01-12 | 安徽农业大学 | Lake and reservoir water quality inversion and visual evaluation method |
CN112215525B (en) * | 2020-11-04 | 2023-06-23 | 安徽农业大学 | Lake and reservoir water quality inversion and visual evaluation method |
CN112925932A (en) * | 2021-01-08 | 2021-06-08 | 浙江大学 | High-definition underwater laser image processing system |
CN115239564A (en) * | 2022-08-18 | 2022-10-25 | 中国矿业大学 | Mine image super-resolution reconstruction method combining semantic information |
CN115239564B (en) * | 2022-08-18 | 2023-06-16 | 中国矿业大学 | Mine image super-resolution reconstruction method combining semantic information |
Also Published As
Publication number | Publication date |
---|---|
CN110276389B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110276389A (en) | Mine mobile inspection image reconstruction method based on edge correction | |
CN108734659B (en) | Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label | |
Ma et al. | STDFusionNet: An infrared and visible image fusion network based on salient target detection | |
CN111062872B (en) | Image super-resolution reconstruction method and system based on edge detection | |
CN109741256A (en) | Image super-resolution reconstruction method based on sparse representation and deep learning | |
CN109671023A (en) | Two-stage face image super-resolution reconstruction method | |
CN108921786A (en) | Image super-resolution reconstruction method based on residual convolutional neural network | |
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN110276721A (en) | Image super-resolution reconstruction method based on cascaded residual convolutional neural networks | |
CN107977932A (en) | Face image super-resolution reconstruction method based on generative adversarial network with discriminable attribute constraints | |
CN109615582A (en) | Face image super-resolution reconstruction method based on attribute-description generative adversarial network | |
CN110119780A (en) | Hyperspectral image super-resolution reconstruction method based on generative adversarial network | |
CN110189253A (en) | Image super-resolution reconstruction method based on improved generative adversarial network | |
CN110232661A (en) | Low-illumination color image enhancement method based on Retinex and convolutional neural networks | |
CN103871041B (en) | Image super-resolution reconstruction method based on cognitive regularization parameters | |
CN110675462B (en) | Gray image colorization method based on convolutional neural network | |
CN110232653A (en) | Fast lightweight dense residual network for super-resolution reconstruction | |
CN109685716A (en) | Image super-resolution reconstruction method based on generative adversarial network with Gaussian encoder feedback | |
CN113012172A (en) | AS-UNet-based medical image segmentation method and system | |
CN110211038A (en) | Super-resolution reconstruction method based on Dirac residual deep neural network | |
CN109325915A (en) | Super-resolution reconstruction method for low-resolution surveillance video | |
CN106157249A (en) | Single-image super-resolution reconstruction algorithm based on optical flow and sparse neighborhood embedding | |
CN109949223A (en) | Image super-resolution reconstruction method based on densely connected deconvolution | |
CN107833182A (en) | Infrared image super-resolution reconstruction method based on feature extraction | |
CN116664397B (en) | TransSR-Net structured image super-resolution reconstruction method
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||