CN107610192B - Self-adaptive observation compressed sensing image reconstruction method based on deep learning - Google Patents
- Publication number
- CN107610192B (application CN201710923137.8A)
- Authority
- CN
- China
- Prior art keywords
- layer
- network
- training
- image
- reconstructed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a deep-learning-based adaptive-observation compressed sensing image reconstruction method, which mainly solves the problem that the prior art cannot obtain an observation adapted to the data set. The implementation scheme is as follows: 1. prepare the network structure files and related files of the reconstruction network DR2; 2. on the basis of the DR2 network structure, add a second fully connected layer whose output dimension is lower than its input dimension to obtain an adaptive observation network, modify the files required for training, and train the adaptive observation network with the modified files to obtain a trained model; 3. observe and reconstruct images with the trained model. The reconstruction results of the invention are clearly superior to those of the existing random Gaussian observation; the method has strong adaptability and good real-time performance, and can be used for radar imaging.
Description
Technical Field
The invention belongs to the technical field of image processing, and relates to an adaptive-observation compressed sensing image reconstruction method, which can be used for radar imaging.
Background
In the information age, people's demand for information keeps growing. Sampling real-world analog signals and converting them into digital signals for processing is indispensable in modern times. Its theoretical basis is the Nyquist sampling theorem, which states that, to fully retain the information of the original signal during analog-to-digital conversion, the sampling frequency must be at least twice the highest frequency of the signal. However, as the demand for information rises, processing signals with ever wider bandwidths becomes difficult: in many application scenarios the required sampling rate is so high that the sampling results must be compressed before they can be stored and transmitted.
Compressed sensing theory shows that, under certain conditions, a signal can be sampled at a low rate and reconstructed with high probability, saving a large amount of resources. In recent years many reconstruction algorithms, such as orthogonal matching pursuit and basis pursuit, have been proposed, but because these algorithms reconstruct by solving an optimization problem, it is difficult for them to achieve good real-time performance. Deep learning methods shift the computational cost to offline training and therefore offer much better real-time performance, so many deep-learning-based image reconstruction methods have been proposed, such as SDA and DR2, where:
the SDA consists of only the fully connected layer, requiring more parameters as the input data increases, resulting in a larger computational load.
the DR2 consists of a fully connected layer and convolutional layers, supports block-wise reconstruction, which reduces the computation and makes overfitting less likely, and achieves a better reconstruction effect thanks to the skip connections it uses. However, since the observation used by DR2 is a random Gaussian observation, useful information in the data is hard to extract, so the peak signal-to-noise ratio (PSNR) of the reconstructed image is low, which affects the clarity of the reconstructed image.
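To make the DR2 structure concrete, the skip connection at its heart can be sketched numerically. The sketch below is a hypothetical numpy illustration with random placeholder weights (phi, W_fc) and a toy residual branch, not trained DR2 parameters; it only shows how a preliminary fully connected reconstruction is refined by an additive residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# DR2-style dimensions: 33x33 blocks (1089 pixels) observed at a 25% rate
# (272 measurements). All weights are random placeholders, not trained
# DR2 parameters -- this only illustrates the skip connection.
N, M = 1089, 272
phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random Gaussian observation matrix
W_fc = 0.01 * rng.standard_normal((N, M))         # fully connected "preliminary" mapping

def conv_residual(x):
    # placeholder for the convolutional residual branch
    return 0.1 * np.tanh(x)

x = rng.standard_normal(N)        # a vectorised image block
y = phi @ x                       # low-rate observation
x0 = W_fc @ y                     # preliminary reconstruction from the fully connected layer
x_rec = x0 + conv_residual(x0)    # skip connection: output = input + learned residual
```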
Disclosure of Invention
Aiming at the defects of the existing deep learning methods for compressed sensing image reconstruction, the invention provides a compressed sensing image reconstruction method based on adaptive observation, so that useful information in the data can be better extracted, the peak signal-to-noise ratio (PSNR) of the reconstructed image is raised, and the quality of the reconstructed image is thereby improved.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
1) constructing an adaptive observation network:
1a) downloading the network structure files, parameter setting files, code for generating the training set, code for generating the validation set, test code and test set pictures of the reconstruction network DR2 from the github website, and downloading the training set pictures and validation set pictures from the SRCNN website;
1b) building the caffe platform under a Linux system and installing the MATLAB software;
1c) modifying the network structure file of the reconstruction network DR2, namely adding, before the fully connected layer of DR2, a second fully connected layer with a low-dimensional output, to obtain the adaptive observation network;
2) converting the data format to start training:
2a) placing the training set pictures and validation set pictures of step 1a) into the Training_Data/Train and Training_Data/Test folders respectively, then modifying and running the code to generate the HDF5-format files of the training set and of the validation set;
2b) using the network structure file and parameter setting file modified in 1c) together with the HDF5-format training-set and validation-set files generated in 2a);
2c) Training the self-adaptive observation network on a caffe platform:
2c1) training only the fully connected layer of the reconstruction network DR2 and the newly added second fully connected layer;
2c2) training, on the basis of 2c1), all layers of the reconstruction network DR2 together with the newly added second fully connected layer, i.e., training the adaptive observation network, to obtain a trained model comprising the second fully connected layer and the reconstruction network DR2;
3) observing and reconstructing images with the trained model:
3a) selecting an image from the test set pictures in 1a) as the original image, dividing it into several image blocks of fixed size, calling the caffe interface through the MATLAB software, and inputting the image blocks of the original image into the trained model, each image block yielding a group of observed values after passing through the second fully connected layer;
3b) passing each group of observed values through the reconstruction network DR2 to obtain reconstructed image blocks, and combining the reconstructed image blocks in the original image order to obtain the reconstructed image.
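The block-wise observe-and-reconstruct pipeline of steps 3a)-3b) can be sketched as follows. In this hypothetical numpy illustration, W_obs stands in for the trained second fully connected (observation) layer and dr2_reconstruct (here just a least-squares pseudo-inverse) stands in for the trained reconstruction network DR2; both are placeholders, not the patent's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 33                                    # block side: 33 * 33 = 1089 pixels, as in the method
M = 272                                   # observations per block at a 25% rate
img = rng.random((99, 132))               # toy "original image"; both sides divisible by B

# Hypothetical stand-ins: W_obs plays the trained second fully connected
# (observation) layer, and dr2_reconstruct plays the trained reconstruction
# network DR2 (here just a pseudo-inverse, computed once).
W_obs = rng.standard_normal((M, B * B))
W_rec = np.linalg.pinv(W_obs)

def dr2_reconstruct(y):
    return W_rec @ y

H, W = img.shape
rec = np.zeros_like(img)
for i in range(0, H, B):
    for j in range(0, W, B):
        block = img[i:i+B, j:j+B].reshape(-1)   # vectorise one fixed-size block
        y = W_obs @ block                       # one group of observed values
        rec[i:i+B, j:j+B] = dr2_reconstruct(y).reshape(B, B)
```

Each block is observed and reconstructed independently, then the blocks are stitched back in their original positions.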
Compared with the prior art, the invention has the following advantages:
1. the adaptability is strong:
the traditional random Gaussian observation is designed to satisfy the RIP condition with high probability, but it has great limitations because it is not designed for the training set; the observation adopted by the invention is obtained by training on the training set, can be adapted to different kinds of training sets, and therefore has strong adaptability;
2. the reconstruction effect is good:
the adaptive observation adopted by the invention captures more information from the original image, and the peak signal-to-noise ratio (PSNR) of the image reconstructed from these observed values is markedly improved;
3. the real-time performance is good:
the invention adopts a deep learning method, which reduces the time consumed in testing and gives good real-time performance in actual use.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a diagram of an adaptive observation network architecture in accordance with the present invention;
FIG. 3 is an original image used to test the adaptive observation network of the present invention;
FIG. 4 shows the reconstructed images of FIG. 3 and their peak signal-to-noise ratios (PSNR) at a 25% observation rate, obtained with the adaptive observation method of the present invention and with the conventional random Gaussian observation method, respectively.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Referring to fig. 1, the implementation steps of the invention are as follows:
Step 1, preparing the network structure files and related files of the reconstruction network DR2.
1a) Downloading the related files of the reconstruction network DR2 from the github website, including: the network structure files DR2_stage1.prototxt and DR2_stage2.prototxt required for training, the parameter setting files DR2_stage1_solver.prototxt and DR2_stage2_solver.prototxt, the code files generate_train.m and generate_test.m for producing the HDF5-format training-set and validation-set files, the test code file test_updating.m, the network structure files reconnet_0_10.prototxt and reconnet_0_25.prototxt required for testing, and the test set pictures;
1b) downloading a training set picture and a verification set picture from an SRCNN website;
1c) building the caffe platform under a Linux system and installing the MATLAB software.
Step 2, modifying the files required for training.
2a) Modifying the network structure file of the reconstruction network DR2 to obtain the adaptive observation network structure shown in FIG. 2, with the following steps:
2a1) the file DR2_stage2.prototxt holding the network structure is opened. The file comprises a fully connected layer fc1, a reshape layer and 4 identical residual learning blocks: the first residual learning block comprises a first convolutional layer conv1, a second convolutional layer conv2, a third convolutional layer conv3, a first BN layer, a first scale layer and a first relu layer; the second residual learning block comprises a fourth convolutional layer conv4, a fifth convolutional layer conv5, a sixth convolutional layer conv6, a second BN layer, a second scale layer and a second relu layer; the third residual learning block comprises a seventh convolutional layer conv7, an eighth convolutional layer conv8, a ninth convolutional layer conv9, a third BN layer, a third scale layer and a third relu layer; the fourth residual learning block comprises a tenth convolutional layer conv10, an eleventh convolutional layer conv11 and a twelfth convolutional layer conv12. A second fully connected layer, whose output dimension is lower than its input dimension, is added before the fully connected layer fc1, i.e., the text describing the second fully connected layer is inserted into the DR2_stage2.prototxt file in the prescribed format. Each layer in the network structure file contains the name of the layer and the name (bottom) of the layer preceding it;
2a2) changing the name of the second fully connected layer to fc1 and the name of the original fully connected layer to fc2; changing the bottom of the original fully connected layer to fc1; and, since the original fully connected layer is the layer preceding the reshape layer, changing the bottom of the reshape layer to fc2;
2a3) calculating the output dimension of the second full-connection layer according to the dimension and the observation rate of the input image block:
according to the requirement that the output dimension be smaller than the input dimension, and for practicality, the dimension of the input image block is fixed at 1089 and an observation rate below 50% is selected; multiplying the two gives the output dimension of the second fully connected layer. For example, at an observation rate of 25%, the output dimension is 1089 × 25% = 272;
2b) opening the file DR2_stage1.prototxt holding the network structure, which comprises a fully connected layer fc1 and a reshape layer, and adding a second fully connected layer in the same way as in 2a);
2c) using the MATLAB software to open the code files generate_train.m and generate_test.m that generate the training set and the validation set, and modifying them by deleting the code for the observation process from both files.
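The output-dimension calculation of step 2a3) can be expressed as a small helper. The function name observation_dim is a hypothetical label introduced here for illustration; the patent simply multiplies the fixed block dimension by the chosen observation rate:

```python
def observation_dim(block_dim: int, rate: float) -> int:
    """Output dimension of the second fully connected layer: int(dim * rate).

    observation_dim is a hypothetical helper name; the patent fixes the
    input block dimension at 1089 and selects an observation rate below 50%.
    """
    if not 0.0 < rate < 0.5:
        raise ValueError("an observation rate below 50% is selected")
    return int(block_dim * rate)

assert observation_dim(1089, 0.25) == 272   # 1089 * 25% = 272.25, truncated to 272
assert observation_dim(1089, 0.10) == 108
```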
Step 3, converting the data format and starting training.
3a) Putting the training set pictures of step 1b) under the path Training_Data/Train and the validation set pictures under the path Training_Data/Test, and running the modified code files generate_train.m and generate_test.m of step 2c) through the MATLAB software to generate the HDF5-format files of the training set and of the validation set;
3b) training the self-adaptive observation network on a caffe platform:
3b1) only the fully-connected layer of the reconstructed network DR2 and the newly added second fully-connected layer are trained:
opening a Linux system terminal under the path caffe_DR2-master/DR2/train and entering the command to start training: ./build/tools/caffe train --solver=DR2_stage1_solver.prototxt.
3b2) Training the reconstruction network DR2 and the newly added second fully connected layer on the basis of 3b1), i.e., training the adaptive observation network:
opening a Linux system terminal under the path caffe_DR2-master/DR2/train and entering the command ./build/tools/caffe train --solver=DR2_stage2_solver.prototxt --weights=model/DR2_stage1_iter_1000000.caffemodel to obtain the trained adaptive observation network model, where --weights=model/DR2_stage1_iter_1000000.caffemodel indicates that the adaptive observation network is trained starting from the stage-1 model in which the fully connected layer and the second fully connected layer were trained.
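The HDF5 conversion of step 3a) can be sketched in Python with h5py instead of the MATLAB code files. This is a toy stand-in, not the repository's code: the dataset names data and label follow the usual Caffe HDF5 layer convention, and the block values are random placeholders. Because step 2c) removed the observation code, each raw block serves as both network input and regression target:

```python
import os
import tempfile

import h5py
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the modified generate_train.m: each raw 1089-pixel block
# is stored as both the network input ("data") and the target ("label").
blocks = rng.random((100, 1089)).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "train.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("data", data=blocks)
    f.create_dataset("label", data=blocks)

# Reading the file back confirms the layout expected by the training step.
with h5py.File(path, "r") as f:
    assert f["data"].shape == (100, 1089)
    assert np.allclose(f["data"][:], f["label"][:])
```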
Step 4, observing and reconstructing the image with the trained model.
4a) Selecting a picture from the test set pictures of step 1a) as the original picture, as shown in fig. 3;
4b) modifying the files under the path test/default_prototxt_files/ in the same way as 2a), selecting the file according to the observation rate: at a 10% observation rate the reconnet_0_10.prototxt file is modified, and at a 25% observation rate the reconnet_0_25.prototxt file is modified;
opening the MATLAB software under the path caffe_DR2-master/DR2, opening the test_updating.m code file under the path test/ in MATLAB, modifying the code file, and running it after modification to complete the test;
4c) after the test is finished, outputting a reconstructed image of the original image and a peak signal-to-noise ratio (PSNR) of the reconstructed image, as shown in fig. 4 (a);
4d) entering the blobs('fc1').get_data() command in the MATLAB command window; fc1 in the command denotes the second fully connected layer, and the function of the command is to fetch the output of the second fully connected layer, i.e., the observed values of the original image.
The effect of the invention can be further illustrated by comparison with the existing method. The simulation experiment of the existing method proceeds as follows:
First, preparing the network structure files and related files of the reconstruction network DR2.
1.1) downloading the related files of the reconstruction network DR2 from the github website, including: the network structure files DR2_stage1.prototxt and DR2_stage2.prototxt required for training, the parameter setting files DR2_stage1_solver.prototxt and DR2_stage2_solver.prototxt, the code files generate_train.m and generate_test.m for producing the HDF5-format training-set and validation-set files, the test code file test_updating.m, the network structure files reconnet_0_10.prototxt and reconnet_0_25.prototxt required for testing, and the test set pictures;
1.2) downloading a training set picture and a verification set picture from an SRCNN website;
1.3) building the caffe platform under a Linux system and installing the MATLAB software.
Second, converting the data format and starting training.
2.1) putting the training set pictures of step 1.2) under the path Training_Data/Train and the validation set pictures under the path Training_Data/Test, and running the code files generate_train.m and generate_test.m of step 1.1) through the MATLAB software to generate the HDF5-format files of the training set and of the validation set;
2.2) training the reconstructed network DR2 on the caffe platform:
2.2.1) train only the fully-connected layer of the reconstructed network DR 2:
opening a Linux system terminal under the path caffe_DR2-master/DR2/train and entering the command to start training: ./build/tools/caffe train --solver=DR2_stage1_solver.prototxt.
2.2.2) training all layers of the reconstruction network DR2 on the basis of 2.2.1):
opening a Linux system terminal under the path caffe_DR2-master/DR2/train and entering the command ./build/tools/caffe train --solver=DR2_stage2_solver.prototxt --weights=model/DR2_stage1_iter_1000000.caffemodel to obtain the trained model of the reconstruction network DR2, where --weights=model/DR2_stage1_iter_1000000.caffemodel indicates that training starts from the stage-1 model in which the fully connected layer of DR2 was trained.
Third, observing and reconstructing images with the trained model.
3.1) selecting a picture from the test set pictures in the step 1.1) as an original picture, as shown in FIG. 3;
3.2) opening the MATLAB software under the path caffe_DR2-master/DR2 and opening the test_updating.m code file under the path test/ in MATLAB, the code file comprising the random Gaussian observation of the original image and its reconstruction through the reconstruction network DR2; running the code file completes the test and outputs the reconstructed image of the original image together with its peak signal-to-noise ratio (PSNR), as shown in fig. 4(b).
Comparing the reconstruction result of the existing method in fig. 4(b) with the reconstruction result of the present invention in fig. 4(a), it can be seen that:
the reconstruction result of the adaptive observation network proposed by the invention is clearly sharper than that of random Gaussian observation;
the method adapts to the characteristics of the data set, extracts the useful information in the data more effectively, raises the peak signal-to-noise ratio (PSNR) of the reconstructed image and thereby its quality, and retains good real-time performance while ensuring high reconstruction quality.
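The peak signal-to-noise ratio used throughout the comparison follows the standard definition PSNR = 10·log10(peak²/MSE), not anything specific to this patent; a minimal sketch:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a constant error of 5 grey levels on an 8-bit image.
clean = np.full((33, 33), 100.0)
noisy = clean + 5.0                      # MSE = 25
assert 34.0 < psnr(clean, noisy) < 34.3  # 10*log10(255^2/25) is about 34.15 dB
```

A higher PSNR means a smaller mean squared error between the reconstructed image and the original, which is why it serves as the quality metric in fig. 4.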
Claims (4)
1. The self-adaptive observation compressed sensing image reconstruction method based on deep learning comprises the following steps:
1) constructing an adaptive observation network:
1a) downloading the network structure files, parameter setting files, code for generating the training set, code for generating the validation set, test code and test set pictures of the reconstruction network DR2 from the github website, and downloading the training set pictures and validation set pictures from the SRCNN website;
1b) building the caffe platform under a Linux system and installing the MATLAB software;
1c) modifying the network structure file of the reconstruction network DR2, namely adding, before the fully connected layer of DR2, a second fully connected layer with a low-dimensional output, to obtain the adaptive observation network;
2) converting the data format to start training:
2a) placing the training set pictures and validation set pictures of step 1a) into the Training_Data/Train and Training_Data/Test folders respectively, then modifying and running the code to generate the HDF5-format files of the training set and of the validation set;
2b) using the network structure file and parameter setting file modified in 1c) together with the HDF5-format training-set and validation-set files generated in 2a);
2c) Training the self-adaptive observation network on a caffe platform:
2c1) training only the fully connected layer of the reconstruction network DR2 and the newly added second fully connected layer;
2c2) training, on the basis of 2c1), all layers of the reconstruction network DR2 together with the newly added second fully connected layer, i.e., training the adaptive observation network, to obtain a trained model comprising the second fully connected layer and the reconstruction network DR2;
3) observing and reconstructing images with the trained model:
3a) selecting an image from the test set pictures in 1a) as the original image, dividing it into several image blocks of fixed size, calling the caffe interface through the MATLAB software, and inputting the image blocks of the original image into the trained model, each image block yielding a group of observed values after passing through the second fully connected layer;
3b) passing each group of observed values through the reconstruction network DR2 to obtain reconstructed image blocks, and combining the reconstructed image blocks in the original image order to obtain the reconstructed image.
2. The method according to claim 1, wherein the step 1c) of adding a second fully connected layer to the DR2 network structure to obtain the adaptive observation network is performed as follows:
1c1) opening the network structure file, which comprises a fully connected layer fc1, a reshape layer and 4 identical residual learning blocks: the first residual learning block comprises a first convolutional layer conv1, a second convolutional layer conv2, a third convolutional layer conv3, a first BN layer, a first scale layer and a first relu layer; the second residual learning block comprises a fourth convolutional layer conv4, a fifth convolutional layer conv5, a sixth convolutional layer conv6, a second BN layer, a second scale layer and a second relu layer; the third residual learning block comprises a seventh convolutional layer conv7, an eighth convolutional layer conv8, a ninth convolutional layer conv9, a third BN layer, a third scale layer and a third relu layer; the fourth residual learning block comprises a tenth convolutional layer conv10, an eleventh convolutional layer conv11 and a twelfth convolutional layer conv12; a second fully connected layer whose output dimension is lower than its input dimension is added before the fully connected layer fc1; each layer in the network structure file contains the name of the layer and the name (bottom) of the layer preceding it;
1c2) the name of the second fully connected layer is changed to fc1 and the name of the original fully connected layer is changed to fc2; the bottom of the original fully connected layer is changed to fc1, and, since the original fully connected layer precedes the reshape layer, the bottom of the reshape layer is changed to fc2.
3. The method according to claim 2, wherein the output dimension of the second fully connected layer in step 1c1) is determined by the dimension of the input image block and the observation rate: the dimension of the input image block is fixed at 1089, an observation rate below 50% is selected, and multiplying the two gives the output dimension of the second fully connected layer.
4. The method according to claim 1, wherein the step 2b) of training the adaptive observation network on the caffe platform using the network structure file and parameter setting file modified in 1c) is performed as follows:
2b1) opening a linux system terminal;
2b2) switching the directory from the current position to the position of the network structure file and the parameter setting file in the terminal;
2b3) inputting an instruction to start training in the terminal to complete the training of the adaptive observation network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710923137.8A CN107610192B (en) | 2017-09-30 | 2017-09-30 | Self-adaptive observation compressed sensing image reconstruction method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107610192A CN107610192A (en) | 2018-01-19 |
CN107610192B true CN107610192B (en) | 2021-02-12 |
Family
ID=61067960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710923137.8A Active CN107610192B (en) | 2017-09-30 | 2017-09-30 | Self-adaptive observation compressed sensing image reconstruction method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107610192B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510464B (en) * | 2018-01-30 | 2021-11-30 | 西安电子科技大学 | Compressed sensing network based on block observation and full-image reconstruction method |
CN108537104A (en) * | 2018-01-30 | 2018-09-14 | 西安电子科技大学 | Compressed sensing network based on full figure observation and perception loss reconstructing method |
CN108810651B (en) * | 2018-05-09 | 2020-11-03 | 太原科技大学 | Wireless video multicast method based on deep compression sensing network |
CN109086819B (en) * | 2018-07-26 | 2023-12-05 | 北京京东尚科信息技术有限公司 | Method, system, equipment and medium for compressing caffemul model |
CN111192334B (en) * | 2020-01-02 | 2023-06-06 | 苏州大学 | Trainable compressed sensing module and image segmentation method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107123091A (en) * | 2017-04-26 | 2017-09-01 | 福建帝视信息科技有限公司 | A kind of near-infrared face image super-resolution reconstruction method based on deep learning |
CN107123089A (en) * | 2017-04-24 | 2017-09-01 | 中国科学院遥感与数字地球研究所 | Remote sensing images super-resolution reconstruction method and system based on depth convolutional network |
CN107154064A (en) * | 2017-05-04 | 2017-09-12 | 西安电子科技大学 | Natural image compressed sensing method for reconstructing based on depth sparse coding |
Non-Patent Citations (2)
Title |
---|
Adaptive Measurement Network for CS Image Reconstruction;Xuemei Xie等;《https://arxiv.org/pdf/1710.01244v1.pdf》;20170923;第1-8页 * |
DR2-Net:Deep Residual Reconstruction Network for Image Compressive Sensing;Hantao Yao等;《https://arxiv.org/pdf/1702.05743v3.pdf》;20170706;第1-4页 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||