CN108765338A - Space target image restoration method based on a convolutional autoencoder neural network - Google Patents

Space target image restoration method based on a convolutional autoencoder neural network

Info

Publication number
CN108765338A
CN108765338A
Authority
CN
China
Prior art keywords
convolution
image
neural network
layer
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810523868.8A
Other languages
Chinese (zh)
Inventor
谢春芝 (Xie Chunzhi)
高志升 (Gao Zhisheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xihua University
Original Assignee
Xihua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN201810523868.8A priority Critical patent/CN108765338A/en
Publication of CN108765338A publication Critical patent/CN108765338A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a space target image restoration method based on a convolutional autoencoder (CAE) neural network, including: constructing degraded images with different degrees of degradation as input data, so as to learn and build a more robust CAE neural network model; and, exploiting the facts that prior knowledge of space targets is limited in quantity and that some target models are highly similar to one another, constructing simulated image sets of different types and different blur levels for training the convolutional network. The advantages of the invention are that images with clear edge structure can be restored; the method has outstanding turbulence-deblurring ability; the restored images have high edge contrast; the noise resistance is outstanding; the internal structure of the restored image is rendered more stably and clearly; and the method is more efficient.

Description

Space target image restoration method based on a convolutional autoencoder neural network
Technical field
The present invention relates to the technical field of image processing, and more particularly to a space target image restoration method based on a convolutional autoencoder neural network.
Background art
The general idea of a neural network is to define an effective objective model and a loss function that measures its quality, and then, by successively optimizing the objective model and minimizing its loss, to learn the internal relation between input data and predicted data, so that the neural network model can accomplish various tasks. Learning methods for image restoration assume local correlation within and between images; on this basis, by learning the degradation model of an image and the features of the degraded image, restoration of the degraded image can be achieved. The main approaches are those based on sparse representation and those based on deep neural networks; the latter, owing to their powerful nonlinear fitting ability, have already achieved breakthrough progress in super-resolution research.
In the paper (Dong C, Loy C C, He K, et al. Image Super-Resolution Using Deep Convolutional Networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2):295-307), the research team led by Professor Xiaoou Tang proposed an image super-resolution network based on 3 convolutional layers (SRCNN, Super-Resolution Convolutional Neural Network), which exploits the nonlinear mapping capability of the network to learn an end-to-end mapping from low-resolution to high-resolution images. The article analyzes the equivalence between traditional sparse-coding-based SR methods and convolutional networks, and shows that a simple convolutional network can realize the idea of sparse-coding restoration. Unlike traditional sparse coding, which handles encoding, mapping, and decoding separately, the neural network learns the sparse-coding representation automatically and optimizes all three parts jointly. The SRCNN structure is lightweight, and with moderate training it surpasses traditional sparse-coding methods in super-resolution restoration quality while remaining efficient enough for real online use. Its drawback is that the structure is too simple: with only the 3 layers of SRCNN, the network cannot learn the sparse mapping between turbulence-blurred images with complex degradation and clear images.
In the paper (Hradis M, Kotera J, Zemcik P, et al. Convolutional Neural Networks for Direct Text Deblurring [C]. British Machine Vision Conference, 2015), Michal Hradis et al. worked on restoring the quality of text document images. They trained a deeper 15-layer convolutional network (L15-CNN) on sufficient blurred and noisy text image data, so that high-quality images can be reconstructed directly from low-quality inputs; experiments on real pictures shot with various devices demonstrate that a deep convolutional network can learn to repair out-of-focus blur and camera-shake blur in text document images. However, because the construction and rationale of the convolutional layers are not made explicit and the network is too deep, the training difficulty multiplies and overfitting is easily induced; moreover, for different page orientations, font styles, and text languages the performance of the neural network is not good.
Summary of the invention
In view of the drawbacks of the prior art, the present invention provides a space target image restoration method based on a convolutional autoencoder neural network, which can effectively solve the above problems of the prior art.
In order to achieve the above objects of the invention, the technical solution adopted by the present invention is as follows:
A space target image restoration method based on a convolutional autoencoder neural network, including:
Building a CAE neural network model, where f1, f2, f3, f4, f5 are the respective convolution kernel sizes of the five convolutional layers, and n1, n2, n3, n4, n5 are the numbers of convolution kernels of the five convolutional layers. The CAE neural network model contains 9 layers of neurons in total; layers 1~5 are encoding convolutions and layers 6~9 are decoding convolutions. The CAE network built takes 32 × 32 grayscale images as input, and the convolution operation is expressed by the following weighted-sum formula:

X_j = ReLU( Σ_{i=1}^{N} x_i * w_ij + b_j )    (1)
N represents the number of neurons in the current layer; X_j represents the j-th output of the current node over the preceding i inputs; x_i is the input image data; w_ij is the convolution kernel corresponding to the j-th output; * is the convolution operation; and b_j is a bias term. ReLU (Rectified Linear Units) is the activation function used by the network built in this invention; the ReLU activation is a piecewise linear function formed by a positive and a negative part, which sets all negative values to 0 and leaves positive values unchanged. The effect of ReLU is one-sided suppression in gradient transmission. In order to ensure that the encoding convolutional layers of the CAE network correspond one-to-one with the decoding convolutional layers, and that the encoded-decoded image can be restored to the same size as the input image, zero padding must be applied to the borders of the convolved input image, so that the feature map after convolution has the same size as the input image.
The calculation process of the entire neural network is as follows:
The 1st convolutional layer convolves the input image and outputs n1 feature maps of size 32 × 32; after rectification by the ReLU activation function and a 2 × 2 max-pooling operation, one round of image feature extraction and selection is complete, and n1 feature maps of size 16 × 16 are output after pooling. Pooling is usually applied as feature selection after a convolution; the purpose of max pooling is to obtain more salient local feature statistics while compressing the feature map size and reducing computation. The 2nd convolutional layer takes the feature maps from the 1st convolution and pooling as input; this layer performs convolution with n2 kernels of size f2 × f2 followed by the ReLU activation, with the convolution carried out as above. A 2 × 2 max pooling is applied to the resulting feature maps, so the feature map size is again halved and n2 feature maps of size 8 × 8 are output. The 3rd convolutional layer performs convolution with n3 kernels of size f3 × f3 followed by the ReLU activation. In this way, convolutional layers 1~5 together with pooling extract the low-level features of the original image, completing the encoding (Encode) stage for the input image.
Next is the decoding (Decode) stage, layers 6~9. First an un-pooling, also called an upsampling operation, is performed: by replicating values vertically and horizontally, the 8 × 8 feature maps are expanded to 16 × 16. Then the 4th convolutional layer takes the un-pooled feature maps as input; this layer performs convolution with n4 kernels of size f4 × f4 and applies the activation function. Following the output of the 4th convolutional layer, another un-pooling operation expands the 16 × 16 feature maps to 32 × 32. The 5th convolutional layer takes the un-pooled feature maps as input and, after convolution with kernels of size f5 × f5 and the activation-function transformation, finally produces the decoded restored image.
Further, MSE is selected as the loss function of the neural network; MSE correctly assesses the correspondence between the pixels of the output image and the predicted image, with the following formula:

MSE = (1/m) Σ_{i=1}^{m} (y_i - x_i)^2    (2)
m indicates the number of samples, x is the input image, and y is the output image. The mean squared error MSE and the peak signal-to-noise ratio PSNR are inversely related: a higher PSNR value indicates smaller distortion of the repaired image and a result closer to the original. The target of the optimization function is therefore to make MSE as small as possible.
Further, the present invention employs the Adam optimization algorithm for backpropagation training of the network weights.
Its mode of operation is similar to momentum. The parameter update formulas are:

m_t = β1·m_{t-1} + (1 - β1)·g_t
v_t = β2·v_{t-1} + (1 - β2)·g_t^2
θ_t = θ_{t-1} - α·m̂_t / (√v̂_t + ε),  with m̂_t = m_t / (1 - β1^t), v̂_t = v_t / (1 - β2^t)    (3)
Since Adam further improves the algorithm speed and converges faster, and avoids the defects of other optimization algorithms such as vanishing learning rates and excessive variance in parameter updates, after comparing the performance of different optimizers the present invention adopts the Adam optimization algorithm for backpropagation training of the network weights.
Further, the present invention selects the following simple normalization algorithm to process the input and output data.
The simplest normalization algorithm has the following formula:
y = (x - min) / (max - min)    (4)
x is the input data, min and max are the minimum and maximum of x respectively, and y is the normalized result.
Compared with the prior art, the advantages of the invention are:
It can exploit the facts that prior knowledge of space targets is limited in quantity and that some target models are highly similar to one another, construct simulated image sets of different types and different blur levels, and use them for training the convolutional network. After sufficient training, the neural network can restore images with clear edge structure from the degraded images in the test set.
It has outstanding turbulence-deblurring ability; the restored images have high edge contrast, the noise resistance is better than the prior art, and the internal structure of the restored image is rendered more stably and clearly.
It is more efficient: through end-to-end mapping learning, the trained network effectively learns the low-dimensional features of turbulence-degraded images.
Description of the drawings
Fig. 1 is the convolutional autoencoder neural network structure of the embodiment of the present invention;
Fig. 2 is a schematic diagram of part of the training set of the neural network of the embodiment of the present invention;
Fig. 3 shows PSNR training curves under different CAE neural network structures of the embodiment of the present invention;
Fig. 4 shows PSNR training curves under different numbers of convolution kernels of the embodiment of the present invention;
Fig. 5 is a schematic diagram of some of the 1st-layer convolution kernels in the convolutional autoencoder network of the embodiment of the present invention;
Fig. 6 shows one group of restoration results on the simulated moderately turbulence-degraded images of the embodiment of the present invention;
Fig. 7 shows one group of restoration results on the simulated severely turbulence-degraded images of the embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
A convolutional autoencoder network compresses and extracts the low-dimensional data in an image set through encoding, and then decodes it back to the original image, so as to automatically learn the relevant features in the image samples. The convolutional autoencoder first transforms the input data into a low-dimensional space, then expands it again to recover an approximation of the original image. This unsupervised learning method is usually used to capture the internal features of a series of related data sets and to remove the redundancy of the input data, so as to obtain low-dimensional image features with a certain robustness. The CAE neural network model built by the present invention is shown in Fig. 1.
Here f1, f2, f3, f4, f5 denote the respective convolution kernel sizes of the five convolutional layers, and n1, n2, n3, n4, n5 denote the numbers of convolution kernels of the five convolutional layers. In this embodiment, image restoration experiments are carried out on images blurred by atmospheric turbulence under short-time exposure. Such severely polluted images lose many fine details but retain the general outline of the observed target. In order to extract this important information, the experiment uses a convolutional autoencoder neural network model, letting the network learn the low-dimensional features of the original image, filter out the turbulence-polluted part, and reconstruct an image unaffected by turbulence. In Fig. 1, the first half encodes the image features and the second half decodes them back to the original image. The CAE neural network model designed by the present invention contains 9 layers of neurons in total; layers 1~5 are encoding convolutions and layers 6~9 are decoding convolutions. The CAE network built by the present invention takes 32 × 32 grayscale images as input, and the convolution operation is expressed by the following weighted-sum formula:

X_j = ReLU( Σ_{i=1}^{N} x_i * w_ij + b_j )    (1)
N represents the number of neurons in the current layer; X_j represents the j-th output of the current node over the preceding i inputs; x_i is the input image data; w_ij is the convolution kernel corresponding to the j-th output; * is the convolution operation; and b_j is a bias term. ReLU (Rectified Linear Units) is the activation function used by the network built in this invention; the ReLU activation is a piecewise linear function formed by a positive and a negative part, which sets all negative values to 0 and leaves positive values unchanged. The effect of ReLU is one-sided suppression, and compared with other activation functions it has stronger gradient-descent ability. Since the gradient of ReLU undergoes no compressive correction and keeps a sufficient magnitude over the non-negative region, the vanishing-gradient problem never arises; this advantage helps keep the convergence of the network model in a relatively stable state. For other activation functions, the gradient near 0 is very small, so the error propagated between predicted and actual values decays continually, and deeper neural networks become harder to train or even stop training prematurely. Therefore, when building a network, the hidden layers usually use the ReLU activation function, which guarantees that the gradient is always transmitted downward through the network. Since the principle of convolution is to take the product of the values inside the kernel window and sum them, the output feature map shrinks by one kernel size after convolution. To ensure that the encoding convolutional layers of the CAE network correspond one-to-one with the decoding convolutional layers, and that the encoded-decoded image can be restored to the same size as the input image, zero padding must be applied to the borders of the convolved input image, so that the feature map after convolution has the same size as the input image.
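As a concrete illustration of the zero-padded ("same") convolution of formula (1) and the ReLU rectification described above, the following minimal NumPy sketch convolves a single-channel image so that the output feature map keeps the input size. The function names and the scalar loop are illustrative assumptions, not the patent's actual Keras/Theano implementation; the kernel flip of true convolution is omitted, as is common in deep-learning frameworks.

```python
import numpy as np

def relu(x):
    """ReLU rectification: all negative values become 0, positives are kept."""
    return np.maximum(x, 0.0)

def conv2d_same(image, kernel, bias=0.0):
    """Single-channel 'same' convolution: zero-pad the borders so the
    output feature map has the same size as the input image (odd kernel
    sizes assumed; kernel flip omitted)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    h, w = image.shape
    out = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel) + bias
    return relu(out)
```

For a 3 × 3 kernel on a 32 × 32 input, one ring of border zeros is enough to keep the 32 × 32 output size, matching the encoder-decoder size symmetry required above.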
The calculation process of the entire neural network is as follows. The 1st convolutional layer convolves the input image and outputs n1 feature maps of size 32 × 32; after rectification by the ReLU activation function and a 2 × 2 max-pooling operation, one round of image feature extraction and selection is complete, and n1 feature maps of size 16 × 16 are output after pooling. Pooling is usually applied as feature selection after a convolution; the purpose of max pooling is to obtain more salient local feature statistics while compressing the feature map size and reducing computation. The 2nd convolutional layer takes the feature maps from the 1st convolution and pooling as input; it performs convolution with n2 kernels of size f2 × f2 followed by the ReLU activation, with the convolution carried out as above. A 2 × 2 max pooling is applied to the resulting feature maps, so the feature map size is again halved and n2 feature maps of size 8 × 8 are output. The 3rd convolutional layer performs convolution with n3 kernels of size f3 × f3 followed by the ReLU activation. In this way, convolutional layers 1~5 together with pooling extract the low-level features of the original image, completing the encoding (Encode) stage for the input image. Next is the decoding (Decode) stage, layers 6~9. First an un-pooling, also called an upsampling operation, is performed: by replicating values vertically and horizontally, the 8 × 8 feature maps are expanded to 16 × 16. Then the 4th convolutional layer takes the un-pooled feature maps as input; it performs convolution with n4 kernels of size f4 × f4 and applies the activation function. Following the output of the 4th convolutional layer, another un-pooling operation expands the 16 × 16 feature maps to 32 × 32. The 5th convolutional layer takes the un-pooled feature maps as input and, after convolution with kernels of size f5 × f5 and the activation-function transformation, finally produces the decoded restored image.
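The 2 × 2 max pooling of the encoding stage and the replication-based un-pooling (upsampling) of the decoding stage can be sketched in NumPy as follows; the helper names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling: keep the most salient local statistic, halve each side."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def unpool_2x2(x):
    """Un-pooling (upsampling) by replicating each value vertically and horizontally."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
```

For the 32 × 32 input of Fig. 1, two poolings give the 32 → 16 → 8 encoding path, and two un-poolings give the 8 → 16 → 32 decoding path.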
Design of the loss function
In order to compute the error tolerance of an individual training sample, a reasonable loss function must be set for the neural network. The loss function is the calculation formula for assessing the degree of deviation between the output value and the predicted value, and it is the training objective function of the entire neural network. For classification problems, the logistic regression loss is mainly used to obtain the maximum-likelihood prediction distribution over the training samples. In the image-to-image mapping network discussed in the present invention, MSE is generally selected as the loss function of the neural network; MSE correctly assesses the correspondence between the pixels of the output image and the predicted image, with the following formula:

MSE = (1/m) Σ_{i=1}^{m} (y_i - x_i)^2    (2)
m indicates the number of samples, x is the input image, and y is the output image. The mean squared error MSE and the peak signal-to-noise ratio PSNR are inversely related: a higher PSNR value indicates smaller distortion of the repaired image and a result closer to the original. The target of the optimization function is therefore to make MSE as small as possible.
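A minimal sketch of the MSE loss of formula (2) and its inverse relation to PSNR, under the assumption that pixel values are normalized to [0, 1] (peak = 1); the function names are illustrative.

```python
import numpy as np

def mse(y, x):
    """Mean squared error between output image y and reference image x (formula (2))."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    return float(np.mean((y - x) ** 2))

def psnr(y, x, peak=1.0):
    """Peak signal-to-noise ratio, inversely related to MSE:
    higher PSNR means less distortion in the repaired image."""
    e = mse(y, x)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

Minimizing MSE during training directly maximizes the PSNR scores reported on the validation set in the experiments below.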
Design of the optimizer
The optimizer is used to update the weights of the neural network model; choosing the right optimizer allows the neural network to find the optimal solution quickly with the fewest training iterations, i.e., to converge to the global minimum. The backpropagation algorithm in the optimizer is the core idea by which a neural network converts input signals into output signals, and it is also the basic method for training complex nonlinear functions. Backpropagation propagates the prediction error of the output backwards, computes the gradient of the error function with respect to the predictions, and updates the weight parameters of every layer in the negative gradient direction. Since the update direction of the weights is opposite to the gradient direction, the neural network descends along the gradient toward a local minimum.
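The negative-gradient update rule described above can be illustrated on a toy loss L(w) = (w - 3)^2: the step direction is opposite to the gradient, so w descends toward the minimum at w = 3. This is a generic sketch, not the patent's training code.

```python
def sgd_step(w, grad, lr):
    """Plain gradient descent: the weight moves opposite to the gradient."""
    return w - lr * grad

# toy example: minimize L(w) = (w - 3)^2, whose gradient is 2*(w - 3);
# repeated negative-gradient steps descend toward the minimum at w = 3
w = 0.0
for _ in range(100):
    w = sgd_step(w, 2 * (w - 3.0), lr=0.1)
```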
The present invention employs the Adam optimization algorithm for backpropagation training of the network weights.
The adaptive moment estimation algorithm (Adam, Adaptive Moment Estimation) improves on the way the learning rate is updated (Kingma D P, Ba J L. Adam: A Method for Stochastic Optimization [J]. International Conference on Learning Representations, 2015). The Adam algorithm not only stores a decaying average of the gradients but also simultaneously computes a decaying average of the past squared gradients; its mode of operation is similar to momentum. The parameter update formulas are:

m_t = β1·m_{t-1} + (1 - β1)·g_t
v_t = β2·v_{t-1} + (1 - β2)·g_t^2
θ_t = θ_{t-1} - α·m̂_t / (√v̂_t + ε),  with m̂_t = m_t / (1 - β1^t), v̂_t = v_t / (1 - β2^t)    (3)
Since Adam further improves the algorithm speed and converges faster, and avoids the defects of other optimization algorithms such as vanishing learning rates and excessive variance in parameter updates, after comparing the performance of different optimizers the present invention adopts the Adam optimization algorithm for backpropagation training of the network weights.
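One Adam update step, following formula (3) with the usual default decay rates; this is a generic sketch under the assumption of standard Adam hyperparameters, not the patent's Keras training loop.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam parameter update (formula (3)): decaying averages of the
    gradient (m) and squared gradient (v), bias correction, then a scaled
    step opposite to the corrected gradient estimate."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)   # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy use: minimize (theta - 3)^2, whose gradient is 2*(theta - 3)
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * (theta - 3.0), m, v, t, lr=0.1)
```

Early steps have magnitude close to lr regardless of the raw gradient scale, which is the adaptive behavior the text credits for the faster, more stable convergence.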
Preprocessing of the data set
The value range of a neural network's output-layer activation function is very small; input data that is too large or too small will cause the network output to deviate from the normal range. If the input values are too large, the convergence speed of the neural network decreases, so the training target data of the network must be remapped to a unified range; normalizing the data is one of the most common preprocessing methods.
If the sigmoid or ReLU activation function is used in the output layer of the neural network, then, since the output of the sigmoid function lies in [0, 1], the training data of the network must be normalized to the interval [0, 1]; if the tanh hyperbolic activation function is used, the training data must instead be normalized to [-1, 1]. The linear transformation formula is the simplest normalization algorithm:
y = (x - min) / (max - min)    (4)
x is the input data, min and max are the minimum and maximum of x respectively, and y is the normalized result; the above formula standardizes the image data onto the interval [0, 1]. Image-to-image neural network designs usually use ReLU as the activation function of the output layer, otherwise the training process is extremely slow; therefore, for training the convolutional autoencoder designed in the present invention, the simple normalization algorithm above is selected to process the input and output data.
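Formula (4) as a NumPy helper; note the division by (max - min), which is what maps the data linearly onto [0, 1]. The function name is an illustrative assumption.

```python
import numpy as np

def normalize01(x):
    """Min-max normalization of formula (4): maps data linearly onto [0, 1]."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)
```

For 8-bit grayscale patches, this sends pixel value min to 0 and max to 1 before they are fed to the network.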
Experiments and analysis
Experimental setup and parameter optimization
The experiments of the present invention were carried out on an Ubuntu system; the computer has a 3.3 GHz CPU and 8 GB of RAM, and the GPU is a GTX Titan X with 12 GB of video memory. The experiments use the Keras neural network development toolkit based on Theano. The data set was built from 233 satellite renderings generated with satellite modeling software; the images were scaled and rotated to augment the data set, and the turbulence-degraded image formula given in paper [48] was used in Matlab to compute the turbulence-blurred images produced by degradation. The effective image regions were cropped out (the black parts are background and must be removed when constructing the training set), yielding tens of thousands of blurred patches of size 32 × 32, with the corresponding clear originals as the training set of the experiment. The blurred and clear cropped patches serve respectively as the input data (input) and the label data (label), and are fed into the convolutional network for continual weight updating. Fig. 2 shows part of the training set constructed by the present invention; in each corresponding pair of patches, the left is the label data, corresponding to the original image, and the right is the input data, corresponding to the simulated blurred image, as in Fig. 2.
Although the neural network constructed in the experiment is trained only at the 32 × 32 size, when the trained weights are applied to the whole network on images of other sizes, only the input image size needs to be changed for the convolutional network to carry out deblurring. Note that the network performs two pooling downsampling operations, so the length and width of the input image must be multiples of 4.
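The multiple-of-4 constraint on the input size (from the two 2 × 2 poolings) can be enforced by border zero-padding, sketched below; this helper is an assumption for illustration, not part of the patent.

```python
import numpy as np

def pad_to_multiple_of_4(img):
    """Zero-pad an image so both sides are multiples of 4: the network pools
    twice by a factor of 2, so input length and width must be divisible by 4."""
    h, w = img.shape
    return np.pad(img, ((0, (-h) % 4), (0, (-w) % 4)), mode="constant")
```

The added border can be cropped off again after restoration to recover the original dimensions.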
In order to study how the convolutional layer sizes should be chosen under the same turbulence blur scale, the kernel size f and kernel number n of each layer were adjusted in the experiments, and several groups of different neural network configurations were compared. After each full pass of training (1 epoch equals one cycle through the entire training set), the neural network performs image restoration on the validation images and outputs the average peak signal-to-noise ratio of the restored validation images; the resulting PSNR curves are shown in Fig. 3.
Here f and n correspond to the kernel sizes and kernel numbers of the five convolutional layers in Fig. 1, where n is defined to take the same value for all layers. It can be seen from the curves that when the number of convolution kernels is set to 128, the curve essentially reaches its maximum at around epoch 300, with obvious jitter in the subsequent training; when the number of kernels is set to 64, the curve converges more gently, and a stable PSNR maximum is only obtained at around epoch 800. Models with different kernel sizes also differ in their final PSNR maxima: the CAE structure with kernel sizes f of 13, 3, 3, 3, 13 and kernel number n of 128 performs best after full training, with a PSNR higher than the remaining 5 model structures. The conclusion can therefore be drawn that larger kernel sizes f and kernel numbers n can in principle improve the restoration effect of the network, but the improvement is not large: given enough training epochs, the PSNR scores of the 6 model structures all eventually converge to around 26.0.
Next, the experiments analyze whether, with the kernel size fixed, the network can be further improved by using more convolution kernels; the training curves of the 4 constructed CAE model structures are shown in Fig. 4. It can be seen that as the number of kernels is raised from 32 to 128, the network converges faster, and after 1000 training cycles the PSNR maximum of the model with n = 128 is clearly higher than the results of the other two networks; when the number of kernels is raised to 256, however, the training score keeps declining after about epoch 200, which shows that at n = 256 the network overfits the training data, its generalization ability weakens, and it performs worse and worse on the test images.
The multiple groups of comparison tests show that increasing the filter kernel size and the number of kernels per layer improves the restoration quality of the image: the convolutional network can include more surrounding pixels in the feature map estimates at the Encoder end, and the Decoder can draw on more information when reconstructing the final image. But once the number of kernels exceeds a certain value, the network overlearns and can no longer reconstruct the original image well.
Table 1 compares the restoration times under the different CAE model structures.
Table 1. Average restoration time for a single image
In terms of restoration time, a more complex network requires a longer training time, yields no obvious improvement in restoration ability (Fig. 2), and is prone to overfitting (Fig. 3); the larger neural network is also less efficient when processing a single image.
The experiment extracted part of the first-layer convolution kernels of the trained CAE network with kernel sizes f of 9, 3, 3, 3, 9 and kernel number n of 128. After sufficient training, the network has learned well-shaped convolution kernels whose structure closely resembles the dictionary images learned in sparse coding, as shown in Fig. 5. The convolution kernels learned by the network of the present invention are sparse and regular in distribution, with different sizes and orientations, demonstrating that the network structure designed in the present invention has a good ability to extract the characteristic information of an image.
It is worth noting that only blurred images at a small blur scale were constructed as the training set in the above experiments, so the image-to-image relation can be regarded as a simple one-to-one mapping. Under real conditions, however, the scale of the short-exposure atmospheric-turbulence degradation kernel cannot be known exactly, so there is a many-to-one mapping between degraded images and the real image; if the blur set constructed in the experiment does not cover all possible turbulence blur scales, the disturbed degraded images are difficult to restore properly. The experiments therefore also require a larger and more diverse data set, so that the neural network can correctly extract low-dimensional image features at different scales and its generalization ability is improved. At the same time, adding noise interference to the training-set images helps prevent overfitting during network training.
Experimental results and analysis
The CAE neural network built by the present invention was trained on the noise-polluted image set. In the comparison experiment, two groups of neural-network algorithms were added to the selected blind image-restoration algorithms: the non-blind deconvolution neural network (DCNN) from Jia Jiaya's team (Zhang J, Pan J, Lai W, et al. Learning Fully Convolutional Networks for Iterative Non-Blind Deconvolution [J]. Computer Vision and Pattern Recognition, 2016: 3817-3825.), and the text-deblurring convolutional neural network (L15-CNN) proposed by Michal et al., both used to restore the turbulence-degraded reference images; the two neural-network algorithms were transfer-trained on the data set constructed by the present invention. The CAE network structure built by the present invention and the two comparison network structures (Xu L, Ren J S, Liu C, et al. Deep Convolutional Neural Network for Image Deconvolution [C]. Neural Information Processing Systems, 2014: 1790-1798.) (Hradis M, Kotera J, Zemcik P, et al. Convolutional Neural Networks for Direct Text Deblurring [C]. British Machine Vision Conference, 2015.) are listed in Table 2, where the input image is a grayscale image of size m × n.
Table 2. Structures of the three neural networks
The experiment compares five full-reference evaluation indices: mean absolute error, signal-to-noise ratio, peak signal-to-noise ratio, fidelity, and mean square error. To accurately evaluate the restoration performance of the various algorithms, the experiment also pays close attention to the visual effect of the restored images; therefore, in addition to analyzing the evaluation indices of the images, the overall perception of the restoration results must be observed in order to comprehensively evaluate the quality of each algorithm.
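The exact implementations behind the table indices are not given in the text; the following is a minimal NumPy sketch of four of the five full-reference indices (the fidelity index is omitted), where the peak value of 255 for 8-bit grayscale images is an assumption:

```python
import numpy as np

def reference_metrics(ref, out, peak=255.0):
    """Mean absolute error, mean square error, SNR and PSNR between a
    reference image and a restored image (values assumed in [0, peak])."""
    ref = ref.astype(float)
    out = out.astype(float)
    err = ref - out
    mae = np.mean(np.abs(err))                               # mean absolute error
    mse = np.mean(err ** 2)                                  # mean square error
    snr = 10.0 * np.log10(np.sum(ref ** 2) / np.sum(err ** 2))
    psnr = 10.0 * np.log10(peak ** 2 / mse)                  # peak signal-to-noise ratio
    return mae, mse, snr, psnr

ref = np.full((8, 8), 100.0)
out = ref + 10.0                      # constant error of 10 gray levels
mae, mse, snr, psnr = reference_metrics(ref, out)
print(mae, mse)                       # 10.0 100.0
```

A constant error of 10 gray levels gives MAE = 10 and MSE = 100, illustrating how the indices scale with the restoration error.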
The restored images at the medium degradation level and their full-reference evaluation indices are shown in Fig. 6 and Table 3.
Table 3. Full-reference evaluation results for a group of moderately turbulence-degraded images
Fig. 6 shows the results of the restoration experiment after the simulated image (b) is subjected to moderate short-exposure turbulence blur and Poisson noise. As can be seen from (a), the turbulence-blurred image has lost most of its internal structural information, and after the noise interference the sharpness of the edge contours declines severely. In terms of the visual effect of the restored images, the Jan algorithm in (d) produces severe ringing, heavy noise, and a chaotic image structure; the restored images of algorithms (c), (e), and (f) contain noise to varying degrees, with algorithm (c) the most severe and the noise of (f) appearing blocky; although (e) and (f) can improve the edge contrast to some extent, their noise resistance is poor; algorithm (g) removes most of the noise but suffers from severe blocking artifacts and blurred contour edges; the DCNN algorithm (h) shows distortion at the edges of the restored image, relatively severe internal noise, and an overall dark result; compared with the other algorithms, the algorithm of the present invention (j) and the L15-CNN algorithm (i) not only enhance the edge contrast of the image but also show outstanding noise resistance.
Comparing the evaluation indices of the eight algorithms in Table 3, the indices of the three neural-network algorithms all perform well. Among them, the indices of Dilip and Jan are the worst, followed by the BDTV and BDLIP algorithms; L0SR achieves the best peak signal-to-noise ratio among the five non-neural-network algorithms, demonstrating stronger noise resistance than the remaining algorithms. Among the three neural-network algorithms, DCNN performs worst: because its convolution kernels are chosen too large, it does not perform well in the turbulence-restoration task. The algorithm of the present invention and the L15-CNN algorithm both reach a peak signal-to-noise ratio close to 25.5, with the index of the algorithm of the present invention slightly higher than that of the L15-CNN algorithm.
On a group of severely degraded turbulence images, the restoration effects and evaluation indices of each algorithm are shown in Fig. 7 and Table 4.
Table 4. Full-reference evaluation results for a group of severely turbulence-degraded images
Fig. 7 shows the restoration results after the simulated image (b) is subjected to severe short-exposure turbulence blur and Poisson noise. As can be seen in Fig. 7(a), the blurred image is even more degraded than Fig. 6(a); only the outline of the blurred image can be seen. In terms of the overall restoration results, the Jan algorithm (d) fails entirely among the eight methods, and its result is unrecognizable; the L0SR algorithm (g) shows severe blocking artifacts and cannot accurately describe the edge structure of the object; the Dilip algorithm (c) and the BDLIP algorithm (e) suffer from severe noise; although algorithm (f) eliminates the noise, it also shows blocking artifacts, indicating that its denoising scheme is not ideal. Among the neural-network algorithms, the DCNN algorithm (h) distorts the image, and the restored solar-panel structure is slightly chaotic; the L15 algorithm (i) produces sharp edges but an overly smooth interior, erasing some of the detail information; the result of the algorithm of the present invention (j) preserves the details while keeping the overall image edges sharp, giving a better visual effect.
Among the comparison indices in Table 4, the three neural-network algorithms are on the whole higher than the five non-neural-network algorithms, showing that neural networks have a certain advantage in repairing turbulence-degraded images with noise interference and that their noise resistance is better than that of the non-neural-network algorithms. The index of the Jan algorithm is the worst; the signal-to-noise ratio of L0SR is about 10% higher than that of the next-ranked Dilip algorithm, showing good noise resistance. Among the comparison indices of the neural-network algorithms, the peak signal-to-noise ratio of L15-CNN is the lowest; the L15-CNN and DCNN algorithms both have a peak signal-to-noise ratio close to 23; the index of the algorithm of the present invention is more than 6% higher than that of the DCNN algorithm, and its mean absolute error is the smallest of all compared algorithms, showing a smaller restoration error.
Those of ordinary skill in the art will understand that the embodiments described herein are intended to help the reader understand the implementation of the present invention, and it should be understood that the protection scope of the present invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can, according to the technical teachings disclosed by the present invention, make various other specific variations and combinations that do not depart from the essence of the present invention, and these variations and combinations remain within the protection scope of the present invention.

Claims (4)

1. A spatial target image restoration method based on a convolutional auto-encoding convolutional neural network, characterized by comprising:
Building a CAE neural network model, where f1, f2, f3, f4, f5 are the respective convolution kernel sizes of the five convolutional layers, and n1, n2, n3, n4, n5 are the numbers of convolution kernels of the five convolutional layers; the CAE neural network model comprises 9 layers of neurons in total, layers 1-5 being the encoding convolutions and layers 6-9 the decoding convolutions; the constructed CAE network takes a grayscale image of size 32 × 32 as input, and the convolution operation is expressed by the following weighted-sum formula:
X_j = ReLU( Σ_{i=1..N} x_i * w_ij + b_j )   (1)
where N is the number of neurons in the current layer, X_j is the j-th output of the current node over the preceding i inputs, x_i is the input image data, w_ij is the convolution kernel corresponding to the j-th output, * is the convolution operation, and b_j is the bias term; ReLU is the activation function used by the network built in the present invention, a piecewise linear function composed of a positive part and a negative part, which sets all negative values to 0 and keeps positive values unchanged; the effect of ReLU is to suppress one-sided propagation of the gradient. To ensure that the encoding convolutional layers of the CAE network correspond one-to-one with the decoding convolutional layers and that the encoded-and-decoded image can be restored to the same size as the input image, zero padding is applied to the boundary of the convolved input image, so that the feature-map size after convolution is identical to the input image size;
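The weighted-sum convolution with ReLU activation and boundary zero padding described above can be sketched in NumPy (a minimal single-channel illustration, not the patent's trained model; as is usual for neural networks, the kernel is applied without flipping, and the random input and kernel are placeholders):

```python
import numpy as np

def relu(x):
    # ReLU sets all negative values to 0 and keeps positive values unchanged
    return np.maximum(x, 0.0)

def conv2d_same(image, kernel, bias=0.0):
    """Single-channel 2-D convolution with boundary zero padding, so the
    output feature map has the same size as the input image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel) + bias
    return relu(out)

img = np.random.rand(32, 32)     # 32 x 32 grayscale input, as in the claim
k = np.random.randn(3, 3)        # one f x f convolution kernel (f = 3 here)
feat = conv2d_same(img, k)
print(feat.shape)                # (32, 32): same size as the input
```

A real network applies n such kernels per layer and sums over input channels; the padding keeps the feature map at the input size so that encoder and decoder sizes match.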
The calculation process of the entire neural network is as follows:
The first convolutional layer convolves the input image and outputs n1 feature maps of size 32 × 32; after rectification by the ReLU activation function and a 2 × 2 max-pooling operation, one round of image feature extraction and selection is completed, and n1 feature maps of size 16 × 16 are output after pooling. Pooling usually serves as feature selection after the convolution operation; the purpose of max pooling is to obtain more significant local feature statistics while compressing the feature-map size and reducing the computational load. The second convolutional layer takes the feature maps produced by the first-layer convolution and pooling as its input; this layer performs the convolution operation with n2 convolution kernels of size f2 × f2, followed by the ReLU activation function, in the same convolution mode as above; a 2 × 2 max pooling is applied to the resulting feature maps, halving the feature-map size again and outputting n2 feature maps of size 8 × 8. The third convolutional layer performs the convolution operation with n3 convolution kernels of size f3 × f3, followed by the ReLU activation function. In this way, convolutional layers 1-5 together with pooling extract the low-level features of the original image and complete the encoding (Encode) process of the input image;
Next comes the decoding (Decode) part, layers 6-9. First an un-pooling, also called an up-sampling operation, is performed: by duplicating values vertically and horizontally, the 8 × 8 feature maps are expanded to 16 × 16. The fourth convolutional layer then takes the un-pooled feature maps as its input, performs the convolution operation with n4 convolution kernels of size f4 × f4, and applies the activation function; the output of the fourth convolutional layer then undergoes another un-pooling operation, expanding the 16 × 16 feature maps to 32 × 32. The fifth convolutional layer takes the un-pooled feature maps as its input; after convolution with the f5 × f5 kernels and the linear transformation of the activation function, the decoded restored image is finally obtained.
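The size bookkeeping of the encoder pooling and decoder un-pooling steps above can be traced with a small NumPy sketch (shapes only; the convolutions between the steps are omitted, and the un-pooling duplicates each value vertically and horizontally as described):

```python
import numpy as np

def max_pool2(x):
    # 2 x 2 max pooling: halves each spatial dimension
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def unpool2(x):
    # un-pooling / up-sampling by duplicating values vertically and horizontally
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

fm = np.random.rand(32, 32)   # one feature map after the first convolution
fm = max_pool2(fm)            # encoder pooling:   32x32 -> 16x16
fm = max_pool2(fm)            # encoder pooling:   16x16 ->  8x8
fm = unpool2(fm)              # decoder un-pooling: 8x8  -> 16x16
fm = unpool2(fm)              # decoder un-pooling: 16x16 -> 32x32
print(fm.shape)               # (32, 32)
```

This shows why the restored image can recover the 32 × 32 input size: two poolings shrink the maps by a factor of four, and two un-poolings expand them back.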
2. The method according to claim 1, characterized in that: MSE is selected as the loss function of the neural network; MSE correctly assesses the pixel-wise correspondence between the output image and the predicted image, with the following formula:
MSE = (1/m) Σ_{i=1..m} (y_i - x_i)^2   (2)
where m is the number of samples, x is the input image, and y is the output image; the mean square error MSE and the peak signal-to-noise ratio PSNR are inversely related in their calculation; a larger PSNR value indicates that the repaired image is less distorted and closer to the original, so the goal of the optimization function is to make the MSE as small as possible.
3. The method according to claim 2, characterized in that: the Adam optimization algorithm is used to train the network weights by back-propagation;
Its mode of operation is similar to momentum; the parameter update formula is:
m_t = β1·m_{t-1} + (1-β1)·g_t,  v_t = β2·v_{t-1} + (1-β2)·g_t²,  θ_t = θ_{t-1} - α·m̂_t/(√v̂_t + ε),  with m̂_t = m_t/(1-β1^t) and v̂_t = v_t/(1-β2^t)   (3)
Since Adam further improves the speed of the algorithm, converges faster, and avoids the defects of excessive learning-rate decay and parameter-update variance present in other optimization algorithms, after comparing the performance of different optimizers the present invention uses the Adam optimization algorithm to train the network weights by back-propagation.
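The Adam update rule is not written out in the text; the sketch below implements the standard Adam step (momentum-like first moment m, second moment v, bias correction, then a scaled gradient step) under the usual default hyperparameters, which are an assumption here, and checks it on a toy quadratic:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam parameter update at step t (t starts at 1)."""
    m = b1 * m + (1 - b1) * grad          # first-moment (momentum-like) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(w) = w^2 as a toy check; the gradient is 2w
w = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.01)
print(float(w[0]))   # converges toward the minimum at 0
```

In practice a deep-learning framework's built-in Adam optimizer would be used; this sketch only makes the per-parameter update explicit.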
4. The method according to claim 3, characterized in that: the following simple normalization algorithm is selected to process the input and output data;
The simplest normalization algorithm has the following formula:
y = (x - min)/(max - min)   (4)
where x is the input data, min and max are respectively the minimum and maximum values of x, and y is the result after normalization.
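Min-max normalization maps the data into [0, 1]; a minimal NumPy sketch:

```python
import numpy as np

def min_max_normalize(x):
    # y = (x - min) / (max - min), mapping the data into [0, 1]
    mn, mx = x.min(), x.max()
    return (x - mn) / (mx - mn)

data = np.array([10.0, 20.0, 30.0, 50.0])
y = min_max_normalize(data)
print(y.tolist())   # [0.0, 0.25, 0.5, 1.0]
```

The network's input images and target images would both be passed through this mapping before training, and the inverse mapping y·(max - min) + min recovers the original gray-level range.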
CN201810523868.8A 2018-05-28 2018-05-28 Spatial target images restored method based on convolution own coding convolutional neural networks Pending CN108765338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523868.8A CN108765338A (en) 2018-05-28 2018-05-28 Spatial target images restored method based on convolution own coding convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810523868.8A CN108765338A (en) 2018-05-28 2018-05-28 Spatial target images restored method based on convolution own coding convolutional neural networks

Publications (1)

Publication Number Publication Date
CN108765338A true CN108765338A (en) 2018-11-06

Family

ID=64003103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523868.8A Pending CN108765338A (en) 2018-05-28 2018-05-28 Spatial target images restored method based on convolution own coding convolutional neural networks

Country Status (1)

Country Link
CN (1) CN108765338A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509159A (en) * 2018-11-20 2019-03-22 湖南湖工电气有限公司 A kind of end-to-end restored method of UAV Fuzzy image based on deep learning
CN109543822A (en) * 2018-11-29 2019-03-29 北京理工大学 A kind of one-dimensional signal data recovery method based on convolutional neural networks
CN109671026A (en) * 2018-11-28 2019-04-23 浙江大学 Gray level image noise-reduction method based on empty convolution and automatic encoding and decoding neural network
CN109727209A (en) * 2018-12-13 2019-05-07 北京爱奇艺科技有限公司 A kind of method and device of determining incomplete historical relic complete image
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network
CN110007355A (en) * 2019-04-15 2019-07-12 中国科学院电子学研究所 The detection method and device of a kind of convolution self-encoding encoder and interior of articles exception
CN110070498A (en) * 2019-03-12 2019-07-30 浙江工业大学 A kind of image enchancing method based on convolution self-encoding encoder
CN110111251A (en) * 2019-04-22 2019-08-09 电子科技大学 A kind of combination depth supervision encodes certainly and perceives the image super-resolution rebuilding method of iterative backprojection
CN110211017A (en) * 2019-05-15 2019-09-06 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110599416A (en) * 2019-09-02 2019-12-20 太原理工大学 Non-cooperative target image blind restoration method based on space target image database
CN110674334A (en) * 2019-09-16 2020-01-10 南京信息工程大学 Near-repetitive image retrieval method based on consistency region deep learning features
CN110831106A (en) * 2019-11-14 2020-02-21 西安邮电大学 Clustering method based on convolution
CN110974217A (en) * 2020-01-03 2020-04-10 苏州大学 Dual-stage electrocardiosignal noise reduction method based on convolution self-encoder
CN111402175A (en) * 2020-04-07 2020-07-10 华中科技大学 High-speed scanning imaging system and method
CN111598964A (en) * 2020-05-15 2020-08-28 厦门大学 Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN111667526A (en) * 2019-03-07 2020-09-15 西门子医疗有限公司 Method and apparatus for determining size and distance of multiple objects in an environment
WO2020186888A1 (en) * 2019-03-21 2020-09-24 深圳先进技术研究院 Method and apparatus for constructing image processing model, and terminal device
CN112241668A (en) * 2019-07-18 2021-01-19 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN112261415A (en) * 2020-10-23 2021-01-22 青海民族大学 Image compression coding method based on overfitting convolution self-coding network
CN112669240A (en) * 2021-01-22 2021-04-16 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and device, electronic equipment and storage medium
WO2021093718A1 (en) * 2019-11-15 2021-05-20 北京金山云网络技术有限公司 Video processing method, video repair method, apparatus and device
CN114152596A (en) * 2021-11-30 2022-03-08 西华大学 Method and device for measuring atmospheric turbulence generalized index parameter based on steepness parameter
CN114580285A (en) * 2022-03-07 2022-06-03 哈尔滨理工大学 Hyperbolic system model reduction method based on CAE network
CN116152054A (en) * 2022-11-01 2023-05-23 海飞科(南京)信息技术有限公司 Image super-resolution method for improving storage capacity and recall precision by using time iteration mode

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512680A (en) * 2015-12-02 2016-04-20 北京航空航天大学 Multi-view SAR image target recognition method based on depth neural network
CN106056564A (en) * 2016-05-27 2016-10-26 西华大学 Edge sharp image fusion method based on joint thinning model
CN106251303A (en) * 2016-07-28 2016-12-21 同济大学 A kind of image denoising method using the degree of depth full convolutional encoding decoding network
CN106997581A (en) * 2017-03-01 2017-08-01 杭州电子科技大学 A kind of method that utilization deep learning rebuilds high spectrum image
CN107067396A (en) * 2017-04-26 2017-08-18 中国人民解放军总医院 A kind of nuclear magnetic resonance image processing unit and method based on self-encoding encoder
CN107180248A (en) * 2017-06-12 2017-09-19 桂林电子科技大学 Strengthen the hyperspectral image classification method of network based on associated losses
CN107251053A (en) * 2015-02-13 2017-10-13 北京市商汤科技开发有限公司 A kind of method and device for the compression artefacts for reducing lossy compression method image
US20180129974A1 (en) * 2016-11-04 2018-05-10 United Technologies Corporation Control systems using deep reinforcement learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NATHAN HUBENS: "Deep inside: Autoencoders", 《HTTPS://TOWARDSDATASCIENCE.COM/DEEP-INSIDE-AUTOENCODERS-7E41F319999F》 *
XIAO-JIAO MAO等: "Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections", 《ARXIV:1606.08921V3》 *
刘超等: "超低照度下微光图像的深度卷积自编码网络复原", 《光学精密工程》 *
张喆: "雾霾天气下的集装箱自动识别系统关键技术研究", 《计算机产品与流通》 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509159A (en) * 2018-11-20 2019-03-22 湖南湖工电气有限公司 A kind of end-to-end restored method of UAV Fuzzy image based on deep learning
CN109671026A (en) * 2018-11-28 2019-04-23 浙江大学 Gray level image noise-reduction method based on empty convolution and automatic encoding and decoding neural network
CN109543822A (en) * 2018-11-29 2019-03-29 北京理工大学 A kind of one-dimensional signal data recovery method based on convolutional neural networks
CN109727209A (en) * 2018-12-13 2019-05-07 北京爱奇艺科技有限公司 A kind of method and device of determining incomplete historical relic complete image
CN109919864A (en) * 2019-02-20 2019-06-21 重庆邮电大学 A kind of compression of images cognitive method based on sparse denoising autoencoder network
CN111667526A (en) * 2019-03-07 2020-09-15 西门子医疗有限公司 Method and apparatus for determining size and distance of multiple objects in an environment
CN110070498A (en) * 2019-03-12 2019-07-30 浙江工业大学 A kind of image enchancing method based on convolution self-encoding encoder
WO2020186888A1 (en) * 2019-03-21 2020-09-24 深圳先进技术研究院 Method and apparatus for constructing image processing model, and terminal device
CN110007355A (en) * 2019-04-15 2019-07-12 中国科学院电子学研究所 The detection method and device of a kind of convolution self-encoding encoder and interior of articles exception
CN110111251A (en) * 2019-04-22 2019-08-09 电子科技大学 A kind of combination depth supervision encodes certainly and perceives the image super-resolution rebuilding method of iterative backprojection
CN110111251B (en) * 2019-04-22 2023-04-28 电子科技大学 Image super-resolution reconstruction method combining depth supervision self-coding and perception iterative back projection
CN110211017A (en) * 2019-05-15 2019-09-06 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN110211017B (en) * 2019-05-15 2023-12-19 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN112241668A (en) * 2019-07-18 2021-01-19 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment
CN110599416A (en) * 2019-09-02 2019-12-20 太原理工大学 Non-cooperative target image blind restoration method based on space target image database
CN110599416B (en) * 2019-09-02 2022-10-11 太原理工大学 Non-cooperative target image blind restoration method based on spatial target image database
CN110674334B (en) * 2019-09-16 2020-08-11 南京信息工程大学 Near-repetitive image retrieval method based on consistency region deep learning features
CN110674334A (en) * 2019-09-16 2020-01-10 南京信息工程大学 Near-repetitive image retrieval method based on consistency region deep learning features
CN110831106A (en) * 2019-11-14 2020-02-21 西安邮电大学 Clustering method based on convolution
CN110831106B (en) * 2019-11-14 2021-08-20 西安邮电大学 Clustering method based on convolution
WO2021093718A1 (en) * 2019-11-15 2021-05-20 北京金山云网络技术有限公司 Video processing method, video repair method, apparatus and device
CN110974217A (en) * 2020-01-03 2020-04-10 苏州大学 Dual-stage electrocardiosignal noise reduction method based on convolution self-encoder
CN110974217B (en) * 2020-01-03 2022-08-09 苏州大学 Dual-stage electrocardiosignal noise reduction method based on convolution self-encoder
CN111402175B (en) * 2020-04-07 2022-04-08 华中科技大学 High-speed scanning imaging system and method
CN111402175A (en) * 2020-04-07 2020-07-10 华中科技大学 High-speed scanning imaging system and method
CN111598964A (en) * 2020-05-15 2020-08-28 厦门大学 Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN111598964B (en) * 2020-05-15 2023-02-14 厦门大学 Quantitative magnetic susceptibility image reconstruction method based on space adaptive network
CN112261415A (en) * 2020-10-23 2021-01-22 青海民族大学 Image compression coding method based on overfitting convolution self-coding network
CN112669240A (en) * 2021-01-22 2021-04-16 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and device, electronic equipment and storage medium
CN112669240B (en) * 2021-01-22 2024-05-10 深圳市格灵人工智能与机器人研究院有限公司 High-definition image restoration method and device, electronic equipment and storage medium
CN114152596A (en) * 2021-11-30 2022-03-08 西华大学 Method and device for measuring atmospheric turbulence generalized index parameter based on steepness parameter
CN114152596B (en) * 2021-11-30 2024-01-12 西华大学 Method and device for measuring generalized index parameter of atmospheric turbulence based on sharpness parameter
CN114580285B (en) * 2022-03-07 2022-11-01 哈尔滨理工大学 Hyperbolic system model reduction method based on CAE network
CN114580285A (en) * 2022-03-07 2022-06-03 哈尔滨理工大学 Hyperbolic system model reduction method based on CAE network
CN116152054A (en) * 2022-11-01 2023-05-23 海飞科(南京)信息技术有限公司 Image super-resolution method for improving storage capacity and recall precision by using time iteration mode
CN116152054B (en) * 2022-11-01 2024-03-01 海飞科(南京)信息技术有限公司 Image super-resolution method for improving storage capacity and recall precision by using time iteration mode

Similar Documents

Publication Publication Date Title
CN108765338A (en) Spatial target images restored method based on convolution own coding convolutional neural networks
Pathak et al. Context encoders: Feature learning by inpainting
CN113469356B (en) Improved VGG16 network pig identity recognition method based on transfer learning
Liu et al. Learning discriminative representations from RGB-D video data
CN110414498B (en) Natural scene text recognition method based on cross attention mechanism
CN110097519A (en) Double supervision image defogging methods, system, medium and equipment based on deep learning
CN110378208B (en) Behavior identification method based on deep residual error network
CN108765279A (en) A kind of pedestrian's face super-resolution reconstruction method towards monitoring scene
CN108520504A (en) A kind of blurred picture blind restoration method based on generation confrontation network end-to-end
CN109377459B (en) Super-resolution deblurring method of generative confrontation network
CN109410146A (en) A kind of image deblurring algorithm based on Bi-Skip-Net
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
CN106204499A (en) Single image rain removing method based on convolutional neural networks
CN113392711A (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN107563299A (en) A kind of pedestrian detection method using ReCNN integrating context informations
CN110503608A (en) The image de-noising method of convolutional neural networks based on multi-angle of view
Liu et al. Low-quality license plate character recognition based on CNN
Pires et al. Image denoising using attention-residual convolutional neural networks
CN113538258B (en) Mask-based image deblurring model and method
CN111242870A (en) Low-light image enhancement method based on deep learning knowledge distillation technology
CN111401209B (en) Action recognition method based on deep learning
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN117078553A (en) Image defogging method based on multi-scale deep learning
CN106886819A (en) A kind of improved method on restricted Boltzmann machine
CN114155560B (en) Light weight method of high-resolution human body posture estimation model based on space dimension reduction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181106

RJ01 Rejection of invention patent application after publication