CN114723604B - Video super-resolution method based on sample data set optimization - Google Patents
Video super-resolution method based on sample data set optimization
- Publication number
- CN114723604B (application CN202210158976.6A)
- Authority
- CN
- China
- Prior art keywords
- resolution
- super
- data set
- sample data
- video image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T3/00—Geometric image transformations in the plane of the image
        - G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
          - G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            - G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F18/00—Pattern recognition
        - G06F18/20—Analysing
          - G06F18/24—Classification techniques
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/04—Architecture, e.g. interconnection topology
            - G06N3/045—Combinations of networks
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
  - Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    - Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
      - Y02T10/00—Road transport of goods or passengers
        - Y02T10/10—Internal combustion engine [ICE] based vehicles
          - Y02T10/40—Engine management systems
Abstract
The invention relates to a video super-resolution method based on sample data set optimization, comprising the following steps: first, the content category of the target video is determined by a content classification module; second, sample data are selected in real time according to the content analysis result, and the selected sample data are input into a mapping relation model to obtain a real-time selected incremental data set; the selected sample data set is then used to train a super-resolution network, yielding a pre-trained network model, and the selected incremental data set is used to further train this model, yielding the trained super-resolution reconstruction model; finally, the video image to be reconstructed is input into the trained super-resolution reconstruction network model to obtain the super-resolution reconstructed video image, completing the video image super-resolution reconstruction task. The method constructs a lightweight video super-resolution pipeline that maximizes sample training efficiency in video super-resolution and achieves the best reconstruction effect with the fewest samples.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a video super-resolution method based on sample data set optimization.
Background
In the field of video surveillance, obtaining a high-resolution image of a target of interest is a hot and difficult problem. Compared with acquiring high-resolution images through hardware upgrades, reconstructing images with an image super-resolution algorithm is low-cost and easy to implement. Super-resolution (SR), the process of recovering a high-resolution (HR) image from a given low-resolution (LR) image, is an important branch of computer vision. Research on image super-resolution has advanced rapidly in recent years, driven especially by deep learning, but existing super-resolution algorithms such as VDSR, LapSRN and MSRN improve reconstruction quality by increasing network parameter counts and refining convolutional neural network architectures, and therefore cannot be applied in real scenarios with limited device resources. To pursue video image reconstruction quality and improve the generalization ability of the trained model, existing algorithms require large amounts of training data with content-independent samples. Commonly used super-resolution data sets such as DIV2K and Flickr2K each contain 1000 or more 2K-resolution images, and ImageNet contains as many as 3.5 million pictures; the image categories in these training sets include people, animals, scenery and so on. Although a weakly correlated training data set helps improve the generalization ability of the algorithm and avoid overfitting, it simultaneously reduces the reconstruction quality of the model.
In recent years, deep learning methods have made major breakthroughs in fields such as machine vision, speech recognition and natural language processing, and have received wide attention and application. Building deep-learning-based models is a common modern approach to solving complex problems, but it has the drawback of requiring large amounts of data to train a satisfactory model, and, for supervised deep learning, large labeled data sets. Labeled data are usually obtained by manual annotation, and data sets with large numbers of samples therefore often contain imbalanced data, meaning that the data are unevenly distributed across classes during the training of a deep-learning-based artificial neural network. Empirically, training a deep-learning-based model with a balanced data set gives better results than training with an imbalanced one, whereas in the real world the available data sets are often imbalanced. Paulina Hensman and David Masko showed that sample imbalance in a data set has a large influence on the training results of deep-learning-based classification models, so handling imbalanced data is a major challenge in deep learning. Similarly, a deep-learning-based video image super-resolution reconstruction model also requires a large amount of sample data to train its parameters so that the trained model performs well and generalizes strongly enough to predict as many samples outside the data set as possible. However, training the model on massive data sets makes the relationship between model generalization ability and sample data hard to model, so that invalid samples weakly correlated with the reconstructed images are added blindly; the sample data then grow sharply, the data set becomes imbalanced, the model learns inefficiently, and model training may even fail.
In summary, surveillance video frames all come from the real world and often contain considerable noise, so the correlation between samples in existing image super-resolution data sets and surveillance video frames is weak, and a huge sample data set is needed for super-resolution model training.
Hengyuan Zhao et al. proposed ClassSR, a general framework for accelerating most existing SR methods. ClassSR combines classification and SR in a unified framework: a classification module first divides sub-images into different classes according to their restoration difficulty, and an SR module then performs super-resolution reconstruction on each class with a different SR model. The classification module is a conventional classification network, and the SR module is a network container holding the SR network to be accelerated together with its simplified versions. On this basis, a new classification method based on class loss and average loss is proposed. After joint training, most sub-images pass through a smaller network, greatly reducing computational cost. However, the scheme still has the following shortcomings: 1. ClassSR does not reduce the size of the data set. 2. The ClassSR framework aims to shorten the training time and computational cost of existing video image super-resolution models, but does not consider improving the quality of video image super-resolution reconstruction.
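For illustration, the routing idea just described can be sketched as follows. This is a minimal sketch only, written in PyTorch; the difficulty classifier and the three SR branches are toy stand-ins rather than the actual ClassSR modules, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of the ClassSR routing idea: a classifier assigns each sub-image a
# difficulty class, and a lighter or heavier SR branch handles it accordingly.
# Assumptions: PyTorch; the classifier and the three branches are toy stand-ins.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Toy SR branch: two conv layers followed by pixel-shuffle x2 upscaling."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, x):
        return self.body(x)

class ClassRoutedSR(nn.Module):
    def __init__(self):
        super().__init__()
        # toy difficulty classifier: easy / medium / hard
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3))
        # branches of increasing capacity; cheaper branches handle easier patches
        self.branches = nn.ModuleList([TinySR(8), TinySR(16), TinySR(32)])

    def forward(self, patches):
        classes = self.classifier(patches).argmax(dim=1)   # difficulty class per patch
        return torch.stack([self.branches[int(c)](p.unsqueeze(0)).squeeze(0)
                            for p, c in zip(patches, classes)])

# usage: route a batch of 32x32 LR patches through branches of different cost
out = ClassRoutedSR()(torch.randn(4, 3, 32, 32))           # -> (4, 3, 64, 64)
```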
Disclosure of Invention
The invention aims to: provide a lightweight video super-resolution method. Instead of training a super-resolution model on a large number of data samples, the method models the relationship between data set samples and the video image super-resolution reconstruction effect, reduces invalid samples weakly correlated with the reconstructed images, and retains and augments sample data strongly correlated with them, thereby significantly improving the learning efficiency of the model, maximizing sample training efficiency in video super-resolution, and achieving the best reconstruction effect with the fewest samples.
The invention is realized by the following technical scheme. A video super-resolution method based on sample data set optimization comprises the following steps:
Step S1: establishing a video image content classifier that identifies and classifies the content of an input video image, while performing up-sampling preprocessing on the input original low-resolution video image to form a preprocessed video image;
Step S2: inputting the output of the video image content classifier, together with the sample data set carrying classification labels, into a content relation model, and outputting the selected sample data set according to the data analysis result of the content relation model;
Step S3: inputting the up-sampled, preprocessed video image, together with the sample data set selected in step S2, into a mapping relation model, and outputting the selected original video image data set according to the data analysis result of the mapping relation model;
Step S4: inputting the selected sample data set into an initial super-resolution network model and pre-training the super-resolution reconstruction network model to form a pre-trained network model;
Step S5: inputting the original video image data set selected in step S3 into the pre-trained super-resolution network model of step S4, and generating, through incremental learning and training, a super-resolution network model adjusted with the incremental data, thereby completing the training of the super-resolution reconstruction network model and forming a trained network model;
Step S6: inputting the original low-resolution video image into the trained network model and outputting a high-resolution video image, completing the super-resolution reconstruction.
Compared with the prior art, the invention has the following beneficial effects:
A lightweight video super-resolution method is constructed. Instead of training the super-resolution model with a large number of data samples, the relationship between data set samples and the video image super-resolution reconstruction effect is modelled; invalid samples weakly correlated with the reconstructed image are reduced while sample data strongly correlated with it are retained and augmented, significantly improving the learning efficiency of the model, maximizing sample training efficiency in video super-resolution, and achieving the best reconstruction effect with the fewest samples.
Drawings
FIG. 1 is a data flow diagram of the present invention.
Detailed Description
The invention is described in detail below with reference to the accompanying drawings.
As shown in FIG. 1, the video super-resolution method based on sample data set optimization comprises the following steps:
Step S1: establishing a video image content classifier that identifies and classifies the content of an input video image, while performing up-sampling preprocessing on the input original low-resolution video image to form a preprocessed video image;
Step S2: inputting the output of the video image content classifier, together with the sample data set carrying classification labels, into a content relation model, and outputting the selected sample data set according to the data analysis result of the content relation model;
Step S3: inputting the up-sampled, preprocessed video image, together with the sample data set selected in step S2, into a mapping relation model, and outputting the selected original video image data set according to the data analysis result of the mapping relation model;
Step S4: inputting the selected sample data set into an initial super-resolution network model and pre-training the super-resolution reconstruction network model to form a pre-trained network model;
Step S5: inputting the original video image data set selected in step S3 into the pre-trained super-resolution network model of step S4, and generating, through incremental learning and training, a super-resolution network model adjusted with the incremental data, thereby completing the training of the super-resolution reconstruction network model and forming a trained network model;
Step S6: inputting the original low-resolution video image into the trained network model and outputting a high-resolution video image, completing the super-resolution reconstruction.
In step S1, the video image content classifier is built on the classical ResNet-152 network architecture. ResNet-152 contains five parts: conv1, conv2_x, conv3_x, conv4_x and conv5_x, where conv1 is a 7x7, 64-channel convolution layer and conv2_x, conv3_x, conv4_x and conv5_x contain 3, 8, 36 and 3 building blocks respectively, 50 in total; each block has 3 layers, giving 50x3=150 layers, which together with conv1 and the final fully connected (fc) layer used for classification make 152 layers in total (note: the 152-layer count includes only convolution and fully connected layers, not activation or pooling layers). The ResNet-152 network is pre-trained on the existing ImageNet data set, and once the network model training is completed a classical video image content classifier is obtained.
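For illustration, a minimal sketch of such a content classifier, assuming PyTorch/torchvision (which the patent does not mandate); the number of content categories is a hypothetical value, and the pre-trained weights API depends on the torchvision version.

```python
# Minimal sketch of the ResNet-152 based content classifier, assuming PyTorch /
# torchvision. NUM_CLASSES is a hypothetical number of video-content categories;
# the patent does not fix it. The weights argument requires torchvision >= 0.13
# (older versions use pretrained=True instead).
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical

def build_content_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    # start from ResNet-152 pre-trained on ImageNet, as the description suggests
    net = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    # replace the final fully connected layer with one sized for the content classes
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```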
Step S2 specifically includes: according to the result output by the video image content classifier in step S1, the content relation model analyzes the correlation between that output and the sample data set; based on the analysis result, sample data strongly correlated with the target video are selected from the massive sample data set and sample data weakly correlated with the input video image are discarded, reducing the number of samples subsequently input to the super-resolution network model and keeping the sample data set balanced.
It should be noted that the content relation model must be built in advance. It is essentially a pre-trained neural network, and its pre-training consists of loading a sample data set manually annotated with classification labels into the neural network for content relation model training. The trained content relation model then performs content-correlation probability analysis on the subsequent classification-labelled sample data set and the video image output by the content classifier, and selects the sample data best suited for subsequent super-resolution network model training. Through the content relation model, the relationship between data samples and the super-resolution reconstruction result can be explored and exploited, maximizing sample training efficiency in video super-resolution and achieving the best reconstruction effect with the fewest samples.
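For illustration, a hedged sketch of this selection step. The content relation model is represented here by a generic trained classifier; the correlation threshold and the per-class cap used to keep the selected set balanced are illustrative assumptions, not values specified by the patent.

```python
# Hedged sketch of the sample-selection step. `relation_model` stands in for the
# trained content relation model and is assumed to return class logits for an image;
# `threshold` and `per_class_cap` are illustrative assumptions.
from collections import defaultdict

import torch
import torch.nn.functional as F

@torch.no_grad()
def select_samples(relation_model, target_class: int, labelled_pool,
                   threshold: float = 0.5, per_class_cap: int = 200):
    """labelled_pool: iterable of (image_tensor, class_label) pairs."""
    selected, per_class = [], defaultdict(int)
    for image, label in labelled_pool:
        probs = F.softmax(relation_model(image.unsqueeze(0)), dim=1)[0]
        correlation = probs[target_class].item()   # correlation with the target content
        # keep strongly correlated samples, drop weakly correlated ones, and cap
        # each class so that the selected data set stays balanced
        if correlation >= threshold and per_class[label] < per_class_cap:
            selected.append((image, label))
            per_class[label] += 1
    return selected
```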
Step S3 specifically includes: inputting the up-sampled, preprocessed original video image together with the sample data set selected in step S2 into the mapping relation model, and, according to the mapping relation model's analysis of the mapping relation between the high-resolution / low-resolution image pairs of the original video image and of the sample data set, selecting the original video images with a similar mapping relation as the incremental data set.
The mapping relation model likewise needs to be built in advance. It is a pre-trained neural network, and its pre-training consists of loading a sample data set annotated with mapping relation labels into the neural network for mapping relation model training. The trained mapping relation model forms an image pair from the original video image and the up-sampled, preprocessed video image, performs mapping-relation correlation analysis on the sample data set carrying mapping relation labels, and selects the incremental data set best suited for subsequent super-resolution model training.
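For illustration, a hedged sketch of selecting the incremental data set by mapping-relation similarity. The patent's mapping relation model is itself a pre-trained neural network; the simple residual-statistics descriptor below is only a stand-in for it, and the `top_k` value is an illustrative assumption.

```python
# Hedged sketch of selecting the incremental data set by mapping-relation similarity.
# The residual-statistics descriptor is a crude stand-in for the patent's pre-trained
# mapping relation model; top_k is an illustrative assumption.
import torch
import torch.nn.functional as F

def mapping_signature(lr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """Crude descriptor of an LR->HR mapping: statistics of the HR minus upsampled-LR residual."""
    up = F.interpolate(lr.unsqueeze(0), size=hr.shape[-2:],
                       mode="bicubic", align_corners=False).squeeze(0)
    residual = hr - up
    return torch.stack([residual.mean(), residual.std()])

def select_incremental_set(video_frames, sample_pairs, top_k: int = 100):
    """video_frames: (lr_frame, upsampled_frame) pairs from the target video;
    sample_pairs: the (lr, hr) sample pairs selected in step S2."""
    # average mapping signature of the selected sample set
    reference = torch.stack([mapping_signature(lr, hr) for lr, hr in sample_pairs]).mean(dim=0)
    # keep the video frames whose mapping is closest to that of the sample set
    ranked = sorted(video_frames,
                    key=lambda pair: torch.dist(reference, mapping_signature(*pair)).item())
    return ranked[:top_k]
```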
Step S5 specifically includes: the incremental data set fed into the pre-trained network model is used to continuously update and learn the mapping relation between high-resolution video images and the corresponding low-resolution video images, continuously improving the training quality of the super-resolution reconstruction model and thereby its performance.
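For illustration, a minimal sketch of the two training phases (pre-training in step S4 and incremental adjustment in step S5), assuming PyTorch; the choice of L1 loss, the epoch counts and the learning rates are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of pre-training (step S4) followed by incremental adjustment (step S5),
# assuming PyTorch. The SR architecture, L1 loss, epoch counts and learning rates are
# illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_phase(model: nn.Module, loader: DataLoader, epochs: int, lr: float) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                       # a common reconstruction loss for SR
    model.train()
    for _ in range(epochs):
        for lr_img, hr_img in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(lr_img), hr_img)
            loss.backward()
            optimizer.step()
    return model

# step S4: pre-training on the selected sample data set
# sr_model = train_phase(sr_model, selected_sample_loader, epochs=100, lr=1e-4)
# step S5: incremental adjustment on the selected video-frame pairs, smaller learning rate
# sr_model = train_phase(sr_model, incremental_loader, epochs=20, lr=1e-5)
```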
The working principle is as follows:
The method essentially constructs a new framework and workflow for super-resolution reconstruction of video images: first, a content classification module is added before the super-resolution model is trained, to determine the content category of the target video; second, sample data are selected in real time according to the analysis result of the content relation model, the selected sample data are input into the mapping relation model, and an incremental data set is selected in real time according to the analysis result of the mapping relation model; the selected sample data set is then input into the video image super-resolution model for training, yielding a pre-trained network model, and the selected incremental data set is input into the pre-trained network model for further training, yielding the trained super-resolution reconstruction model; finally, the video image to be reconstructed is input into the trained super-resolution reconstruction network model to obtain the super-resolution reconstructed video image, completing the video image super-resolution reconstruction task.
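For illustration, a minimal sketch of the final reconstruction step (step S6), assuming PyTorch; the checkpoint file name is hypothetical.

```python
# Minimal sketch of the final reconstruction step (step S6), assuming PyTorch.
# The checkpoint file name is hypothetical.
import torch

@torch.no_grad()
def reconstruct_frame(model: torch.nn.Module, lr_frame: torch.Tensor) -> torch.Tensor:
    """lr_frame: (3, H, W) tensor in [0, 1]; returns the reconstructed HR frame."""
    model.eval()
    return model(lr_frame.unsqueeze(0)).squeeze(0).clamp(0.0, 1.0)

# sr_model.load_state_dict(torch.load("sr_model_incremental.pth"))  # hypothetical checkpoint
# hr_frame = reconstruct_frame(sr_model, lr_frame)
```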
Compared with the ClassSR framework, the core of the present invention is to classify the content contained in the video image, whereas the classification module of the ClassSR framework classifies the smooth regions and complex-texture regions of the video image.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and alterations to the above embodiments within the scope of the present invention.
Claims (3)
1. A video super-resolution method based on sample data set optimization, characterized by comprising the following steps:
Step S1: establishing a video image content classifier that identifies and classifies the content of an input video image, while performing up-sampling preprocessing on the input original low-resolution video image to form a preprocessed video image;
Step S2: inputting the output of the video image content classifier, together with the sample data set carrying classification labels, into a content relation model, and outputting the selected sample data set according to the data analysis result of the content relation model;
Step S3: inputting the up-sampled, preprocessed video image, together with the sample data set selected in step S2, into a mapping relation model, and outputting the selected original video image data set according to the data analysis result of the mapping relation model;
Step S4: inputting the selected sample data set into an initial super-resolution network model and pre-training the super-resolution reconstruction network model to form a pre-trained network model;
Step S5: inputting the original video image data set selected in step S3 into the pre-trained super-resolution network model of step S4, and generating, through incremental learning and training, a super-resolution network model adjusted with the incremental data, thereby completing the training of the super-resolution reconstruction network model and forming a trained network model;
Step S6: inputting the original low-resolution video image into the trained network model and outputting a high-resolution video image, completing the super-resolution reconstruction;
wherein step S2 specifically comprises: according to the result output by the video image content classifier in step S1, the content relation model performs correlation analysis between that output and the sample data set; based on the analysis result, sample data strongly correlated with the target video are selected from the massive sample data set and sample data weakly correlated with the input video image are discarded, reducing the number of samples subsequently input to the super-resolution network model and keeping the sample data set balanced;
the content relation model is a neural network that needs to be trained, and its training comprises loading a sample data set manually annotated with classification labels into the neural network for content relation model training; the trained content relation model performs content-correlation probability analysis on the subsequent classification-labelled sample data set and the video image output by the content classifier, and selects the sample data best suited for subsequent super-resolution network model training;
step S3 specifically comprises: inputting the up-sampled, preprocessed video image together with the sample data set selected in step S2 into the mapping relation model, and selecting the original video images with a similar mapping relation as the incremental data set according to the mapping relation model's analysis of the mapping relation between the high-resolution and low-resolution image pairs;
the mapping relation model is a neural network that needs to be trained, and its training comprises loading a sample data set annotated with mapping relation labels into the neural network for mapping relation model training; the trained mapping relation model forms an image pair from the original video image and the up-sampled, preprocessed video image, performs mapping-relation correlation analysis on the sample data set carrying mapping relation labels, and selects the incremental data set best suited for subsequent super-resolution model training.
2. The video super-resolution method based on sample data set optimization according to claim 1, characterized in that: in step S1, the video image content classifier is built on the classical ResNet-152 network architecture; ResNet-152 contains five parts: conv1, conv2_x, conv3_x, conv4_x and conv5_x, where conv1 is a 7x7, 64-channel convolution layer and conv2_x, conv3_x, conv4_x and conv5_x contain 3, 8, 36 and 3 building blocks respectively, 50 in total; each block has 3 layers, giving 50x3=150 layers, which together with conv1 and the final fully connected (fc) layer used for classification make 152 layers in total; the ResNet-152 network is pre-trained with the existing sample data set, and a classical video image content classifier is obtained once the network model training is completed.
3. The video super-resolution method based on sample data set optimization according to claim 1, characterized in that: step S5 specifically comprises: the pre-trained super-resolution network model continuously updates and learns the mapping relation between high-resolution video images and the corresponding low-resolution video images according to the input incremental data set, continuously improving the training quality of the super-resolution reconstruction model and thereby its performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210158976.6A CN114723604B (en) | 2022-02-21 | 2022-02-21 | Video super-resolution method based on sample data set optimization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210158976.6A CN114723604B (en) | 2022-02-21 | 2022-02-21 | Video super-resolution method based on sample data set optimization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114723604A CN114723604A (en) | 2022-07-08 |
CN114723604B true CN114723604B (en) | 2023-02-10 |
Family
ID=82235146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210158976.6A Active CN114723604B (en) | 2022-02-21 | 2022-02-21 | Video super-resolution method based on sample data set optimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114723604B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9538126B2 (en) * | 2014-12-03 | 2017-01-03 | King Abdulaziz City For Science And Technology | Super-resolution of dynamic scenes using sampling rate diversity |
CN112785499A (en) * | 2020-12-31 | 2021-05-11 | 马培峰 | Super-resolution reconstruction model training method and computer equipment |
CN114022363A (en) * | 2021-11-19 | 2022-02-08 | 深圳市德斯戈智能科技有限公司 | Image super-resolution reconstruction method, device and computer-readable storage medium |
- 2022
  - 2022-02-21: CN application CN202210158976.6A, patent CN114723604B, status: active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934766A (en) * | 2017-03-15 | 2017-07-07 | 西安理工大学 | A kind of infrared image super resolution ratio reconstruction method based on rarefaction representation |
CN108764368A (en) * | 2018-06-07 | 2018-11-06 | 西安邮电大学 | A kind of image super-resolution rebuilding method based on matrix mapping |
CN110020986A (en) * | 2019-02-18 | 2019-07-16 | 西安电子科技大学 | The single-frame image super-resolution reconstruction method remapped based on Euclidean subspace group two |
CN110264407A (en) * | 2019-06-28 | 2019-09-20 | 北京金山云网络技术有限公司 | Image Super-resolution model training and method for reconstructing, device, equipment and storage medium |
CN112884648A (en) * | 2021-01-25 | 2021-06-01 | 汉斯夫(杭州)医学科技有限公司 | Method and system for multi-class blurred image super-resolution reconstruction |
Non-Patent Citations (2)
Title |
---|
A face image super-resolution restoration algorithm based on self-sample learning; Li Xiaoguang et al.; High Technology Letters; 2009-04-25 (Issue 04); full text *
Super-resolution restoration algorithm based on pre-classification learning; Cao Yang et al.; Journal of Data Acquisition and Processing; 2009-07-15 (Issue 04); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114723604A (en) | 2022-07-08 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 