CN107610192A - Adaptive observation compressed sensing image reconstruction method based on deep learning - Google Patents

Adaptive observation compressed sensing image reconstruction method based on deep learning

Info

Publication number
CN107610192A
CN107610192A (application number CN201710923137.8A; granted as CN107610192B)
Authority
CN
China
Prior art keywords
network
fully-connected layer
observation
reconstructed
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710923137.8A
Other languages
Chinese (zh)
Other versions
CN107610192B (en)
Inventor
谢雪梅
王禹翔
石光明
王陈业
杜江
赵至夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Original Assignee
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University, Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd filed Critical Xidian University
Priority to CN201710923137.8A (Critical)
Publication of CN107610192A (Critical)
Application granted (Critical)
Publication of CN107610192B (Critical)
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive observation compressed sensing image reconstruction method based on deep learning, which mainly solves the problem that the prior art cannot obtain observations adapted to the data set. The implementation is: 1. prepare the network structure file of the reconstruction network DR2 and its associated files; 2. on the basis of the DR2 network structure, add a second fully-connected layer whose output dimension is lower than its input dimension to obtain an adaptive observation network, modify the files needed for training, and train the adaptive observation network with the modified files to obtain a trained model; 3. perform image observation and reconstruction with the trained model. The reconstruction results of the invention are clearly better than those of the existing random Gaussian observation; the method adapts well to different data sets, runs in real time, and can be used for radar imaging.

Description

Adaptive observation compressed sensing image reconstruction method based on deep learning
Technical field
The invention belongs to the technical field of image processing and relates generally to a method for reconstructing adaptively observed compressed sensing images, which can be used for radar imaging.
Background art
In today's information age, the demand for information keeps growing. Real-world analog signals must be sampled and converted into digital signals before they can be processed, and the theoretical foundation of sampling is the Nyquist sampling theorem: to preserve all the information of the original signal when converting an analog signal into a digital one, the sampling frequency must be at least twice the highest frequency of the signal. As the demand for information grows, however, processing signals of ever wider bandwidth becomes difficult. In many applications the required sampling rate is so high that the sampled data have to be compressed before they can be stored and transmitted.
Compressed sensing theory shows that, under certain conditions, a signal sampled at a low rate can be reconstructed with high probability, which saves a large amount of resources. Many reconstruction algorithms have been proposed in recent years, such as orthogonal matching pursuit and basis pursuit, but because these algorithms reconstruct the signal by solving an optimization problem, they can hardly run in real time. Deep learning methods are trained offline, which gives them good real-time performance, so many deep-learning-based image reconstruction methods have been proposed, such as SDA and DR2, where:
SDA consists only of fully-connected layers, so it needs more parameters as the input grows, which leads to a larger amount of computation.
DR2 consists of fully-connected layers and convolutional layers; it reconstructs the image block by block, which reduces the amount of computation and makes over-fitting less likely, and the skip connections it uses improve reconstruction quality. However, because DR2 uses random Gaussian observations, it does not easily capture the useful information in the image, so the peak signal-to-noise ratio (PSNR) of the reconstructed image is low, which degrades its clarity.
Summary of the invention
The object of the invention is to overcome the shortcomings of the existing deep learning methods for compressed sensing image reconstruction described above, and to propose a deep learning compressed sensing image reconstruction method with adaptive observation that better extracts the useful information in the data, raises the peak signal-to-noise ratio (PSNR) of the reconstructed image, and thereby improves the quality of the reconstructed image.
To achieve the above object, the technical scheme of the invention includes the following:
1) Build the adaptive observation network:
1a) Download from the github website the network structure files of the reconstruction network DR2, the parameter setting files, the code for generating the training set, the code for generating the validation set, the test code and the test set pictures; download the training set pictures and validation set pictures from the SRCNN website;
1b) Build the caffe platform under the Linux system and install the matlab commercial software;
1c) Modify the network structure file on the basis of the reconstruction network DR2, that is, add a second fully-connected layer with low-dimensional output before the fully-connected layer of DR2, to obtain the adaptive observation network;
2) Convert the data format and start training:
2a) Put the training set pictures and validation set pictures from 1a) under the Training_Data/Train and Training_Data/Test folders respectively, then modify and run the code to generate the hdf5-format file of the training set and the hdf5-format file of the validation set;
2b) Use the network structure files and parameter setting files modified in 1c) together with the hdf5-format training set and validation set files generated in 2a);
2c) Train the adaptive observation network on the caffe platform:
2c1) First train only the fully-connected layer of the reconstruction network DR2 and the newly added second fully-connected layer;
2c2) On the basis of 2c1), train all layers of the reconstruction network DR2 and the newly added second fully-connected layer, that is, train the adaptive observation network, to obtain the trained model; the trained model includes the second fully-connected layer and the reconstruction network DR2;
3) Perform image observation and reconstruction with the trained model:
3a) Choose one image from the test set pictures in 1a) as the original image and divide it into several image blocks of fixed size; call the caffe functions through the matlab commercial software to feed these image blocks of the original image into the trained model, where each image block yields a group of observations after the second fully-connected layer;
3b) Each group of observations yields a reconstructed image block after the reconstruction network DR2; these reconstructed image blocks are merged in the order of the original image to obtain the reconstructed image.
Compared with the prior art, the present invention has the following advantages:
1. Strong adaptability:
A traditional random Gaussian observation matrix is chosen so that it satisfies the RIP condition with high probability, but it is severely limited because it is not designed from the training set. The observation used by the present invention is obtained by training on the training set, so it can be applied to different types of training sets and adapts well;
2. Good reconstruction quality:
The adaptive observation used by the present invention captures more information from the original image, and the peak signal-to-noise ratio (PSNR) of the image reconstructed from these observations is clearly improved;
3. Good real-time performance:
Because the present invention is a deep learning method, the testing time is reduced and it runs in real time in practical use.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the structure of the adaptive observation network in the present invention;
Fig. 3 is the original image used to test the adaptive observation network of the present invention;
Fig. 4 shows the images reconstructed from Fig. 3 at an observation rate of 25% with the present invention and with the existing random Gaussian observation method, together with the peak signal-to-noise ratio (PSNR) of each reconstructed image.
Embodiment
The present invention is described in detail below with an example in conjunction with the accompanying drawings.
Referring to Fig. 1, the present invention is implemented as follows:
Step 1: Prepare the network structure files of the reconstruction network DR2 and the associated files.
1a) Download the files related to the reconstruction network DR2 from the github website, including: the network structure files needed for training, DR2_stage1.prototxt and DR2_stage2.prototxt; the parameter setting files DR2_stage1_solver.prototxt and DR2_stage2_solver.prototxt; the code file store2hdf5.m needed to make hdf5-format files; the code files generate_train.m and generate_test.m needed to generate the training set and validation set; the test code file test_everything.m; the network structure files needed for testing, reconnet_0_10.prototxt and reconnet_0_25.prototxt; and the test set pictures;
1b) Download the training set pictures and validation set pictures from the SRCNN website;
1c) Build the caffe platform under the Linux system and install the matlab commercial software.
Step 2: Modify the files needed for training.
2a) Modify the network structure files of the reconstruction network DR2 to obtain the adaptive observation network structure shown in Fig. 2, as follows:
2a1) Open the saved network structure file DR2_stage2.prototxt. This file contains the fully-connected layer fc1, a reshape layer and 4 identical residual learning blocks: the first residual learning block contains the first convolutional layer conv1, the second convolutional layer conv2, the third convolutional layer conv3, the first BN layer, the first scale layer and the first relu layer; the second residual learning block contains the fourth convolutional layer conv4, the fifth convolutional layer conv5, the sixth convolutional layer conv6, the second BN layer, the second scale layer and the second relu layer; the third residual learning block contains the seventh convolutional layer conv7, the eighth convolutional layer conv8, the ninth convolutional layer conv9, the third BN layer, the third scale layer and the third relu layer; the fourth residual learning block contains the tenth convolutional layer conv10, the eleventh convolutional layer conv11 and the twelfth convolutional layer conv12. Add, before the fully-connected layer fc1, a second fully-connected layer whose output dimension is lower than its input dimension, that is, insert into this file a passage of text in the specified format describing the second fully-connected layer; in this network structure file every layer contains the layer's name (name) and the name of the layer before it (bottom);
2a2) Set the name of the second fully-connected layer to "fc1", change the name of the original fully-connected layer to "fc2" and its bottom to "fc1", and change the bottom of the reshape layer to "fc2"; because the fully-connected layer is the layer preceding the reshape layer, the bottom of the reshape layer is also "fc2";
2a3) Calculate the output dimension of the second fully-connected layer from the dimension of the input image block and the observation rate:
Following the requirement that the output dimension be lower than the input dimension, and for practicality, this example fixes the dimension of the input image block at 1089 and chooses an observation rate below 50%; multiplying the two gives the output dimension of the second fully-connected layer. For example, at an observation rate of 25% the output dimension is 1089 × 25% ≈ 272. A sketch of the resulting layer definitions is given after this step;
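For reference, the modification described in 2a1)-2a3) amounts to inserting and renaming prototxt layer definitions along the lines of the sketch below. This is a minimal illustration only: it assumes caffe's InnerProduct and Reshape layer types, an input blob named "data" and a 33×33 image block (33 × 33 = 1089); the filler settings and the reshape layer's own name are placeholders rather than the values in the downloaded DR2 files, and only the layer names fc1/fc2, the bottom connections and the output dimension 272 (for a 25% observation rate) follow from the description above.

layer {
  name: "fc1"                    # newly added second fully-connected layer (step 2a2)
  type: "InnerProduct"
  bottom: "data"                 # assumed name of the input image-block blob
  top: "fc1"
  inner_product_param {
    num_output: 272              # ≈ 1089 × 25% observation rate (step 2a3)
    weight_filler { type: "gaussian" std: 0.01 }   # illustrative filler only
  }
}
layer {
  name: "fc2"                    # original DR2 fully-connected layer, renamed from "fc1"
  type: "InnerProduct"
  bottom: "fc1"                  # now takes the observations as input
  top: "fc2"
  inner_product_param {
    num_output: 1089             # restores the block dimension ahead of the reshape layer
  }
}
layer {
  name: "reshape"                # reshape layer; its bottom is changed to "fc2" (step 2a2)
  type: "Reshape"
  bottom: "fc2"
  top: "reshape"
  reshape_param { shape { dim: 0 dim: 1 dim: 33 dim: 33 } }   # assumed 33×33 block layout
}

In caffe the bottom and top fields name the data blobs that connect layers, so editing only the name and bottom fields as in 2a2) is enough to re-route each image block through the new sensing layer without restructuring the rest of the file.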
2b) Open the saved network structure file DR2_stage1.prototxt, which contains the fully-connected layer fc1 and the reshape layer, and add the second fully-connected layer in the same way as in 2a);
2c) Open the code files generate_train.m and generate_test.m, which are needed to generate the training set and validation set, with the matlab software and modify them, that is, delete the observation-process code in both files, since the observation is now performed inside the network by the second fully-connected layer rather than on the data beforehand;
Step 3: Convert the data format and start training.
3a) Put the training set pictures from 1b) under the path Training_Data/Train and the validation set pictures under the path Training_Data/Test, then run the modified code files generate_train.m and generate_test.m from 2c) in the matlab software to generate the hdf5-format file of the training set and the hdf5-format file of the validation set;
3b) Train the adaptive observation network on the caffe platform:
3b1) First train only the fully-connected layer of the reconstruction network DR2 and the newly added second fully-connected layer:
Open a Linux terminal under the path caffe_dr2-master/DR2/train and enter the training command ../../../build/tools/caffe train --solver=DR2_stage1_solver.prototxt to obtain trained model 1;
3b2) On the basis of 3b1), train the reconstruction network DR2 and the newly added second fully-connected layer, that is, train the adaptive observation network:
Open a Linux terminal under the path caffe_dr2-master/DR2/train and enter the training command ../../../build/tools/caffe train --solver=DR2_stage2_solver.prototxt --weights=model/DR2_stage1_iter_1000000.caffemodel to obtain the trained adaptive observation network model, where --weights=model/DR2_stage1_iter_1000000.caffemodel means that training continues from trained model 1, i.e. from the fully-connected layer and the second fully-connected layer of the adaptive observation network trained in 3b1). An illustrative sketch of what these parameter setting files contain is given after this step.
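The parameter setting files named in these commands (DR2_stage1_solver.prototxt and DR2_stage2_solver.prototxt) are plain-text caffe solver definitions. The sketch below is illustrative only: the net path and the snapshot prefix are implied by the command line and by the model filename DR2_stage1_iter_1000000.caffemodel, while every numeric value (learning rate, momentum, iteration counts) is an assumption rather than the repository's actual setting.

# Illustrative sketch of a stage-1 solver file; numeric values are assumptions.
net: "DR2_stage1.prototxt"            # network structure file being trained
base_lr: 0.001                        # assumed initial learning rate
lr_policy: "step"                     # assumed decay policy
gamma: 0.1
stepsize: 200000
momentum: 0.9
weight_decay: 0.0001
display: 1000
max_iter: 1000000                     # consistent with DR2_stage1_iter_1000000.caffemodel
snapshot: 100000
snapshot_prefix: "model/DR2_stage1"   # snapshots are written under model/, as used in 3b2)
solver_mode: GPU                      # assumed; CPU also works in caffe

The stage-2 file would presumably differ mainly in pointing net at DR2_stage2.prototxt and in using its own snapshot prefix.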
Step 4: Perform image observation and reconstruction with the trained model.
4a) Pick one picture from the test set pictures in step 1a) as the original image, as shown in Fig. 3;
4b) Modify the files under the path test/deploy_prototxt_files/ in the same way as in 2a); which file is modified depends on the chosen observation rate: for example, modify reconnet_0_10.prototxt when the observation rate is 10% and reconnet_0_25.prototxt when the observation rate is 25%;
Open the matlab software under the path caffe_dr2-master/DR2/, open the code file test_everything.m under the path test/ in the matlab software, modify it, and run it after the modification to complete the test;
4c) After the test is finished, output the reconstructed image of the original image and the peak signal-to-noise ratio (PSNR) of the reconstructed image, as shown in Fig. 4(a);
4d) Enter the command net.blobs('fc1').get_data() in the matlab command window; fc1 in this command refers to the second fully-connected layer, and the command retrieves the output of the second fully-connected layer, i.e. the observations of the original image.
The effect of the present invention can be further illustrated by comparison with the existing method; the simulation experiment with the existing method proceeds as follows:
Step 1: Prepare the network structure files of the reconstruction network DR2 and the associated files.
1.1) Download the files related to the reconstruction network DR2 from the github website, including: the network structure files needed for training, DR2_stage1.prototxt and DR2_stage2.prototxt; the parameter setting files DR2_stage1_solver.prototxt and DR2_stage2_solver.prototxt; the code file store2hdf5.m needed to make hdf5-format files; the code files generate_train.m and generate_test.m needed to generate the training set and validation set; the test code file test_everything.m; the network structure files needed for testing, reconnet_0_10.prototxt and reconnet_0_25.prototxt; and the test set pictures;
1.2) Download the training set pictures and validation set pictures from the SRCNN website;
1.3) Build the caffe platform under the Linux system and install the matlab commercial software.
Step 2: Convert the data format and start training.
2.1) Put the training set pictures from 1.2) under the path Training_Data/Train and the validation set pictures under the path Training_Data/Test, then run the code files generate_train.m and generate_test.m from 1.1) in the matlab software to generate the hdf5-format file of the training set and the hdf5-format file of the validation set;
2.2) Train the reconstruction network DR2 on the caffe platform:
2.2.1) First train only the fully-connected layer of the reconstruction network DR2:
Open a Linux terminal under the path caffe_dr2-master/DR2/train and enter the training command ../../../build/tools/caffe train --solver=DR2_stage1_solver.prototxt to obtain trained model 1;
2.2.2) On the basis of 2.2.1), train the reconstruction network DR2:
Open a Linux terminal under the path caffe_dr2-master/DR2/train/ and enter the training command ../../../build/tools/caffe train --solver=DR2_stage2_solver.prototxt --weights=model/DR2_stage1_iter_1000000.caffemodel to obtain the trained reconstruction network DR2 model, where --weights=model/DR2_stage1_iter_1000000.caffemodel means that training continues from model 1, in which the fully-connected layer of the reconstruction network DR2 has already been trained.
Step 3: Perform image observation and reconstruction with the trained model.
3.1) Pick one picture from the test set pictures in step 1.1) as the original image, as shown in Fig. 3;
3.2) Open the matlab software under the path caffe_dr2-master/DR2/ and open the code file test_everything.m under the path test/ in the matlab software; this code file contains the random Gaussian observation of the original image and its reconstruction through the reconstruction network DR2. Run the code file to complete the test and output the reconstructed image of the original image and its peak signal-to-noise ratio (PSNR), as shown in Fig. 4(b);
Comparing the reconstruction result of the present invention in Fig. 4(a) with the reconstruction result of the existing method in Fig. 4(b), it can be seen that:
the reconstruction result of the adaptive observation network proposed by the present invention is clearly sharper than that of random Gaussian observation;
because the present invention adapts to the data set, it extracts the useful information in the data better, raises the peak signal-to-noise ratio (PSNR) of the reconstructed image and thereby improves the quality of the reconstructed image, while still offering good real-time performance on top of the higher reconstruction quality.

Claims (4)

1. An adaptive observation compressed sensing image reconstruction method based on deep learning, comprising:
1) building the adaptive observation network:
1a) downloading from the github website the network structure files of the reconstruction network DR2, the parameter setting files, the code for generating the training set, the code for generating the validation set, the test code and the test set pictures, and downloading the training set pictures and validation set pictures from the SRCNN website;
1b) building the caffe platform under the Linux system and installing the matlab commercial software;
1c) modifying the network structure file on the basis of the reconstruction network DR2, that is, adding a second fully-connected layer with low-dimensional output before the fully-connected layer of DR2, to obtain the adaptive observation network;
2) converting the data format and starting training:
2a) putting the training set pictures and validation set pictures from 1a) under the Training_Data/Train and Training_Data/Test folders respectively, then modifying and running the code to generate the hdf5-format file of the training set and the hdf5-format file of the validation set;
2b) using the network structure files and parameter setting files modified in 1c) together with the hdf5-format training set and validation set files generated in 2a);
2c) training the adaptive observation network on the caffe platform:
2c1) first training only the fully-connected layer of the reconstruction network DR2 and the newly added second fully-connected layer;
2c2) on the basis of 2c1), training all layers of the reconstruction network DR2 and the newly added second fully-connected layer, that is, training the adaptive observation network, to obtain the trained model, wherein the trained model comprises the second fully-connected layer and the reconstruction network DR2;
3) performing image observation and reconstruction with the trained model:
3a) choosing one image from the test set pictures in 1a) as the original image, dividing it into several image blocks of fixed size, and calling the caffe functions through the matlab commercial software to feed these image blocks of the original image into the trained model, wherein each image block yields a group of observations after the second fully-connected layer;
3b) obtaining a reconstructed image block from each group of observations after the reconstruction network DR2, and merging these reconstructed image blocks in the order of the original image to obtain the reconstructed image.
2. The method according to claim 1, wherein in step 1c) the second fully-connected layer is added on the basis of the DR2 reconstruction network structure to obtain the adaptive observation network, as follows:
1c1) opening the network structure file, which contains the fully-connected layer fc1, a reshape layer and 4 identical residual learning blocks: the first residual learning block contains the first convolutional layer conv1, the second convolutional layer conv2, the third convolutional layer conv3, the first BN layer, the first scale layer and the first relu layer; the second residual learning block contains the fourth convolutional layer conv4, the fifth convolutional layer conv5, the sixth convolutional layer conv6, the second BN layer, the second scale layer and the second relu layer; the third residual learning block contains the seventh convolutional layer conv7, the eighth convolutional layer conv8, the ninth convolutional layer conv9, the third BN layer, the third scale layer and the third relu layer; the fourth residual learning block contains the tenth convolutional layer conv10, the eleventh convolutional layer conv11 and the twelfth convolutional layer conv12; and adding, before the fully-connected layer fc1, a second fully-connected layer whose output dimension is lower than its input dimension, wherein every layer in the network structure file contains the layer's name (name) and the name of the layer before it (bottom);
1c2) setting the name of the second fully-connected layer to "fc1", changing the name of the original fully-connected layer to "fc2" and its bottom to "fc1", and changing the bottom of the reshape layer to "fc2", i.e. the fully-connected layer.
3. The method according to claim 2, wherein the output dimension of the second fully-connected layer in step 1c1) is determined by the dimension of the input image block and the observation rate: the dimension of the input image block is fixed at 1089 and an observation rate below 50% is chosen; multiplying the two gives the output dimension of the second fully-connected layer.
4. The method according to claim 1, wherein in step 2b) the network structure files and parameter setting files modified in 1c) are used to train the adaptive observation network on the caffe platform, as follows:
2b1) opening a Linux terminal;
2b2) switching the directory in the terminal from the current location to the location of the network structure files and parameter setting files;
2b3) entering the training command in the terminal to complete the training of the adaptive observation network.
CN201710923137.8A 2017-09-30 2017-09-30 Self-adaptive observation compressed sensing image reconstruction method based on deep learning Active CN107610192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710923137.8A CN107610192B (en) 2017-09-30 2017-09-30 Self-adaptive observation compressed sensing image reconstruction method based on deep learning

Publications (2)

Publication Number Publication Date
CN107610192A true CN107610192A (en) 2018-01-19
CN107610192B CN107610192B (en) 2021-02-12

Family

ID=61067960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710923137.8A Active CN107610192B (en) 2017-09-30 2017-09-30 Self-adaptive observation compressed sensing image reconstruction method based on deep learning

Country Status (1)

Country Link
CN (1) CN107610192B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123089A (en) * 2017-04-24 2017-09-01 中国科学院遥感与数字地球研究所 Remote sensing images super-resolution reconstruction method and system based on depth convolutional network
CN107123091A (en) * 2017-04-26 2017-09-01 福建帝视信息科技有限公司 A kind of near-infrared face image super-resolution reconstruction method based on deep learning
CN107154064A (en) * 2017-05-04 2017-09-12 西安电子科技大学 Natural image compressed sensing method for reconstructing based on depth sparse coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HANTAO YAO et al.: "DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing", https://arxiv.org/pdf/1702.05743v3.pdf *
XUEMEI XIE et al.: "Adaptive Measurement Network for CS Image Reconstruction", https://arxiv.org/pdf/1710.01244v1.pdf *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510464A (en) * 2018-01-30 2018-09-07 西安电子科技大学 Compressed sensing network and full figure reconstructing method based on piecemeal observation
CN108537104A (en) * 2018-01-30 2018-09-14 西安电子科技大学 Compressed sensing network based on full figure observation and perception loss reconstructing method
CN108510464B (en) * 2018-01-30 2021-11-30 西安电子科技大学 Compressed sensing network based on block observation and full-image reconstruction method
CN108810651A (en) * 2018-05-09 2018-11-13 太原科技大学 Wireless video method of multicasting based on depth-compression sensing network
CN108810651B (en) * 2018-05-09 2020-11-03 太原科技大学 Wireless video multicast method based on deep compression sensing network
CN109086819A (en) * 2018-07-26 2018-12-25 北京京东尚科信息技术有限公司 Caffemodel model compression method, system, equipment and medium
CN109086819B (en) * 2018-07-26 2023-12-05 北京京东尚科信息技术有限公司 Method, system, equipment and medium for compressing caffemul model
CN111192334A (en) * 2020-01-02 2020-05-22 苏州大学 Trainable compressed sensing module and image segmentation method

Also Published As

Publication number Publication date
CN107610192B (en) 2021-02-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant