CN110619631A - Super-resolution image detection method based on residual error network - Google Patents
Super-resolution image detection method based on residual error network
- Publication number
- CN110619631A (application number CN201910872452.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- resolution
- super
- network model
- residual error
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a super-resolution image detection method based on a residual network, comprising the following steps: selecting images to obtain an image dataset; performing dataset enhancement on the image dataset and generating low-resolution images; processing the low-resolution images to generate a corresponding super-resolution image set; constructing a residual network model, using the super-resolution image set as model input, and training the residual network model; and inputting the image to be detected into the trained residual network model to determine whether the image has undergone deep-learning super-resolution. In the proposed method, super-resolution detection is performed by the constructed residual network model: the input image is detected directly, no intermediate results need to be stored, and detection is efficient. Because the residual network model is a fully convolutional structure, it can detect images of any size and generalizes across a variety of super-resolution techniques. Detection time is short, detection can be performed in real time, and detection efficiency is greatly improved.
Description
Technical Field
The invention relates to the technical field of computer image processing, in particular to a super-resolution image detection method based on a residual network.
Background
Image resampling detection is a hot topic in passive image forensics. During image tampering, traces of scaling or rotation are inevitably introduced into the image, so detecting whether an image carries resampling traces can serve as evidence of whether it has been tampered with. With the rise of deep learning, image super-resolution techniques have developed rapidly. Unlike traditional image resampling, deep-learning-based super-resolution directly fits the mapping between low-resolution and high-resolution images with a deep model, which differs substantially from traditional resampling methods. However, existing network models detect this mapping poorly, and their detection is time-consuming and inefficient.
Disclosure of Invention
The invention provides a super-resolution image detection method based on a residual network, aiming to overcome the shortcomings of existing network models for super-resolution detection: poor detection performance, long detection time, and low efficiency.
To solve the above technical problems, the technical scheme of the invention is as follows:
A super-resolution image detection method based on a residual network comprises the following steps:
S1: selecting images from a public dataset to obtain an image dataset;
S2: performing dataset enhancement and expansion on the image dataset to generate real high-resolution images as positive samples;
S3: downsampling the real high-resolution images by bicubic interpolation to generate low-resolution images;
S4: processing the low-resolution images with classical deep-learning-based super-resolution methods to generate a corresponding super-resolution image set as the negative-sample dataset;
S5: constructing a residual network model comprising a feature extraction part and a feature classification and detection part, using the negative-sample dataset as model input, and training the residual network model;
S6: inputting the image to be detected into the trained residual network model and outputting the class with the highest probability, thereby completing detection of whether the image has undergone deep-learning super-resolution.
In step S2, the dataset enhancement and expansion proceeds as follows: each image is cropped into 240 × 240 patches with a stride of 120, thereby generating real high-resolution images as positive samples.
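The patch-extraction step described above can be sketched as a sliding-window crop (an illustrative sketch: the patch size 240 and stride 120 come from the text, while the helper name and array layout are our assumptions):

```python
import numpy as np

def extract_patches(img, size=240, stride=120):
    """Crop an image array of shape (H, W, C) into size x size patches
    with the given stride (overlapping crops expand the dataset)."""
    h, w = img.shape[:2]
    patches = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(img[top:top + size, left:left + size])
    return patches

# A 480x480 image yields a 3x3 grid of overlapping 240x240 patches.
demo = np.zeros((480, 480, 3), dtype=np.uint8)
patches = extract_patches(demo)
```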
In step S3, the original images are reduced to 1/2, 1/3, and 1/4 of their size using different downsampling factors, thereby generating low-resolution images.
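The downsampling step can be sketched as follows. The patent specifies bicubic interpolation; to keep this sketch dependency-free we substitute simple block averaging, which is only an approximation of the real resampling filter:

```python
import numpy as np

def downsample(img, factor):
    """Reduce a grayscale image (H, W) by an integer factor via block averaging.
    NOTE: the patent uses bicubic interpolation; block averaging is a stand-in
    chosen only to keep this sketch self-contained."""
    h, w = img.shape[:2]
    h2, w2 = h - h % factor, w - w % factor  # trim to a multiple of factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

hr = np.ones((240, 240)) * 100.0
lows = [downsample(hr, f) for f in (2, 3, 4)]  # 1/2, 1/3, 1/4 scales
```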
In step S5, the feature extraction part comprises residual blocks and convolutional layers, specifically expressed as:

F_Dense = δ(F_conv ⊕ F_residual block)

where F_conv denotes the output vector of a convolutional layer in the residual network model, F_residual block denotes the output vector of a residual block, F_Dense denotes the output vector of the skip connection, ⊕ denotes element-wise addition, and δ denotes the activation function. Feature maps are trained automatically by combining convolutional layers and residual blocks so that the features satisfy the classification requirements, and are then passed to the feature classification and detection part;
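The convolution-plus-skip-connection combination described above can be sketched in one dimension (a hedged illustration: the patent's exact layer shapes and the placement of the activation are not given, so the kernel, shapes, and function names here are assumptions):

```python
import numpy as np

def relu(x):
    """Activation delta: rectified linear unit."""
    return np.maximum(0.0, x)

def residual_unit(x, w):
    """Sketch of F_Dense = delta(F_conv (+) F_residual_block):
    a convolution-like transform of x is added element-wise to the
    skip-connection input, then passed through the activation.
    The 1-D shapes and weight w are illustrative assumptions."""
    f_conv = np.convolve(x, w, mode="same")  # stand-in convolutional layer
    f_dense = relu(f_conv + x)               # element-wise add + activation
    return f_dense

x = np.array([1.0, -2.0, 3.0, -4.0])
out = residual_unit(x, np.array([0.0, 1.0, 0.0]))  # identity kernel: conv(x) == x
```

With the identity kernel the unit computes relu(2x), so negatives are zeroed and positives doubled.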
the feature classification detection part maps a plurality of images obtained by the feature extraction part to a (0, 1) interval, gives the probability of each classification, and outputs a final class which is a class corresponding to the classification with the maximum probability, wherein the final class is specifically represented as:
wherein x isiDenotes the corresponding sample, prob (x)i) And the calculated probability of the sample to different categories is shown, y represents the label of the final classification and represents the corresponding category with the maximum probability.
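The probability mapping and final class selection correspond to a standard softmax followed by an argmax, which can be sketched as follows (the two-class label convention below is an assumption based on the I_HR/I_SR labels mentioned later):

```python
import numpy as np

def softmax(logits):
    """Map raw scores into (0, 1) so they sum to 1 (numerically stabilized)."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Assumed label convention: 0 -> I_HR (real high-resolution), 1 -> I_SR (super-resolved).
logits = np.array([0.3, 2.1])
prob = softmax(logits)
y = int(np.argmax(prob))  # final label = class with the highest probability
```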
The activation function is the rectified linear unit (ReLU), specifically:

f(x) = max(0, x)

The ReLU activation function supports gradient descent and backpropagation, avoids the problems of gradient explosion and vanishing gradients, and simplifies computation.
When training the residual network model in step S5, the cross-entropy loss is used as the loss function, specifically expressed as:

L = -Σ_i y_i log ŷ_i

where L denotes the computed loss, y denotes the true output vector, and ŷ denotes the expected (predicted) output vector. The loss function measures model quality: the cross-entropy loss is non-negative, and its value approaches 0 when the true output and the expected output are very close. Minimizing the cross-entropy loss therefore trains a high-performance network model.
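The stated behaviour of the cross-entropy loss (non-negative, approaching 0 as the prediction matches the label) can be checked numerically with a minimal sketch (the categorical form below is a standard choice; the patent's exact vector formula is not reproduced in the text):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy L = -sum_i y_i * log(p_i) for one sample.
    Probabilities are clipped to avoid log(0)."""
    p = np.clip(y_pred, eps, 1.0)
    return float(-np.sum(y_true * np.log(p)))

target = np.array([0.0, 1.0])                          # true label: second class
good = cross_entropy(target, np.array([0.01, 0.99]))   # near-correct prediction
bad = cross_entropy(target, np.array([0.99, 0.01]))    # confidently wrong
```

As expected, the loss is small for the near-correct prediction and large for the wrong one.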
The cross-entropy loss is minimized by mini-batch gradient descent, which is fast to compute, avoids repeatedly processing redundant samples and samples that contribute little to parameter updates, and reduces the generalization error.
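Mini-batch gradient descent can be sketched on a toy least-squares problem (illustrative only: the batch size, learning rate, and synthetic data are our choices, not the patent's):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                      # noiseless targets for the toy problem

w = np.zeros(3)
lr, batch = 0.1, 32
for epoch in range(200):
    idx = rng.permutation(len(X))   # reshuffle so batches vary each epoch
    for start in range(0, len(X), batch):
        b = idx[start:start + batch]
        # gradient of mean squared error on this mini-batch only
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad              # one parameter update per mini-batch

err = float(np.max(np.abs(w - w_true)))
```

Each pass over the data performs several cheap updates instead of one full-batch update, which is the speed advantage the text describes.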
The final classification labels comprise the high-resolution dataset I_HR and the super-resolved image dataset I_SR.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the super-resolution image detection method based on the residual error network, super-resolution detection is carried out on the image by constructing the residual error network model, the input image is directly detected, intermediate results do not need to be reserved, and the detection efficiency is high; the residual error network model is of a full convolution structure, is suitable for image detection with any size and various different super-resolution technologies, and has strong universality; the detection time is short, the detection can be implemented, and the detection efficiency is greatly improved.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
fig. 2 is a schematic structural diagram of a residual error network model.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a super-resolution image detection method based on a residual network comprises the following steps:
S1: selecting a number of images of diverse types from the public dataset DIV2K to obtain an image dataset;
S2: performing dataset enhancement and expansion on the image dataset to generate real high-resolution images as positive samples;
S3: downsampling the real high-resolution images by bicubic interpolation to generate low-resolution images;
S4: processing the low-resolution images with classical deep-learning-based super-resolution methods to generate a corresponding super-resolution image set as the negative-sample dataset;
S5: constructing a residual network model comprising a feature extraction part and a feature classification and detection part, using the negative-sample dataset as model input, and training the residual network model;
S6: inputting the image to be detected into the trained residual network model and outputting the class with the highest probability, thereby completing detection of whether the image has undergone deep-learning super-resolution.
In a specific implementation, the method performs super-resolution detection by constructing a residual network model: the input image is detected directly, no intermediate results need to be stored, and detection is efficient. The fully convolutional structure accommodates images of any size and a variety of super-resolution techniques, giving the method strong generality. Detection time is short, detection can be performed in real time, and detection efficiency is greatly improved.
Example 2
More specifically, on the basis of embodiment 1, in step S2 the dataset enhancement and expansion proceeds as follows: each image is cropped into 240 × 240 patches with a stride of 120, thereby generating real high-resolution images as positive samples.
More specifically, in step S3 the images are downsampled to 1/2, 1/3, and 1/4 of the original size using different downsampling factors, thereby generating low-resolution images.
More specifically, as shown in fig. 2, in step S5 the feature extraction part comprises residual blocks and convolutional layers, specifically expressed as:

F_Dense = δ(F_conv ⊕ F_residual block)

where F_conv denotes the output vector of a convolutional layer in the residual network model, F_residual block denotes the output vector of a residual block, F_Dense denotes the output vector of the skip connection, ⊕ denotes element-wise addition, and δ denotes the activation function. Feature maps are trained automatically by combining convolutional layers and residual blocks so that the features satisfy the classification requirements, and are then passed to the feature classification and detection part;
The feature classification and detection part maps the feature maps obtained by the feature extraction part into the (0, 1) interval through a softmax layer, gives the probability of each class, and outputs the class with the highest probability as the final class, specifically expressed as:

prob(x_i) = e^{x_i} / Σ_j e^{x_j}, y = argmax_i prob(x_i)

where x_i denotes the corresponding sample, prob(x_i) denotes the computed probability of the sample for each class, and y denotes the final classification label, i.e., the class with the highest probability.
In a specific implementation, the residual network model is fully convolutional, and the padding of each convolutional layer is set to 1, keeping the size of each feature map consistent with the output.
More specifically, the activation function is the rectified linear unit (ReLU), specifically:

f(x) = max(0, x)

The ReLU activation function supports gradient descent and backpropagation, avoids the problems of gradient explosion and vanishing gradients, and simplifies computation.
More specifically, when training the residual network model in step S5, the cross-entropy loss is used as the loss function, specifically expressed as:

L = -Σ_i y_i log ŷ_i

where L denotes the computed loss, y denotes the true output vector, and ŷ denotes the expected (predicted) output vector. The loss function measures model quality: the cross-entropy loss is non-negative, and its value approaches 0 when the true output and the expected output are very close. Minimizing the cross-entropy loss therefore trains a high-performance network model.
More specifically, the cross-entropy loss is minimized by mini-batch gradient descent, which is fast to compute, avoids repeatedly processing redundant samples and samples that contribute little to parameter updates, and reduces the generalization error.
More specifically, the final classification labels comprise the high-resolution dataset I_HR and the super-resolved image dataset I_SR.
In a specific implementation, once the parameters of the residual network model have been trained, images to be detected can be tested. Table 1 compares the detection accuracy of the method against different super-resolution methods, where EDSR, SRGAN, and ESRGAN are distinct super-resolution methods. The experimental results show that the model achieves high accuracy and good detection performance for each of them.
TABLE 1. Detection accuracy for different super-resolution factors and different super-resolution methods

Super-resolution factor | EDSR | SRGAN | ESRGAN
---|---|---|---
2x | 0.9421 | 0.9217 | 0.9225
3x | 0.9632 | 0.9303 | 0.9542
4x | 0.9847 | 0.9723 | 0.9632
It should be understood that the above embodiments are merely examples provided to clearly illustrate the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims.
Claims (8)
1. A super-resolution image detection method based on a residual network, characterized by comprising the following steps:
S1: selecting images from a public dataset to obtain an image dataset;
S2: performing dataset enhancement and expansion on the image dataset to generate real high-resolution images as positive samples;
S3: downsampling the real high-resolution images by bicubic interpolation to generate low-resolution images;
S4: processing the low-resolution images with classical deep-learning-based super-resolution methods to generate a corresponding super-resolution image set as the negative-sample dataset;
S5: constructing a residual network model comprising a feature extraction part and a feature classification and detection part, using the negative-sample dataset as model input, and training the residual network model;
S6: inputting the image to be detected into the trained residual network model and outputting the class with the highest probability, thereby completing detection of whether the image has undergone deep-learning super-resolution.
2. The super-resolution image detection method based on a residual network according to claim 1, wherein in step S2 the dataset enhancement and expansion specifically comprises: cropping each image into 240 × 240 patches with a stride of 120, thereby generating real high-resolution images as positive samples.
3. The method according to claim 2, wherein in step S3 the images are downsampled to 1/2, 1/3, and 1/4 of the original size using different downsampling factors, thereby generating low-resolution images.
4. The method according to claim 3, wherein in step S5 the feature extraction part comprises residual blocks and convolutional layers, specifically expressed as:
F_Dense = δ(F_conv ⊕ F_residual block)
where F_conv denotes the output vector of a convolutional layer in the residual network model, F_residual block denotes the output vector of a residual block, F_Dense denotes the output vector of the skip connection, ⊕ denotes element-wise addition, and δ denotes the activation function; feature maps are trained automatically by combining convolutional layers and residual blocks so that the features satisfy the classification requirements, and are then passed to the feature classification and detection part;
the feature classification and detection part maps the feature maps obtained by the feature extraction part into the (0, 1) interval, gives the probability of each class, and outputs the class with the highest probability as the final class, specifically expressed as:
prob(x_i) = e^{x_i} / Σ_j e^{x_j}, y = argmax_i prob(x_i)
where x_i denotes the corresponding sample, prob(x_i) denotes the computed probability of the sample for each class, and y denotes the final classification label, i.e., the class with the highest probability.
5. The method according to claim 4, wherein the activation function is the rectified linear unit (ReLU), specifically:
f(x) = max(0, x)
the ReLU activation function supports gradient descent and backpropagation, avoids the problems of gradient explosion and vanishing gradients, and simplifies computation.
6. The method according to claim 3, wherein in step S5 the cross-entropy loss is used as the loss function when training the residual network model, specifically expressed as:
L = -Σ_i y_i log ŷ_i
where L denotes the computed loss, y denotes the true output vector, and ŷ denotes the expected output vector; the loss function measures model quality, the cross-entropy loss is non-negative, and its value approaches 0 when the true output and the expected output are very close, so minimizing the cross-entropy loss trains a high-performance network model.
7. The method according to claim 6, wherein the cross-entropy loss is minimized by mini-batch gradient descent, which is fast to compute, avoids repeatedly processing redundant samples and samples that contribute little to parameter updates, and reduces the generalization error.
8. The method according to claim 4, wherein the final classification labels comprise the high-resolution dataset I_HR and the super-resolved image dataset I_SR.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910872452.1A CN110619631A (en) | 2019-09-16 | 2019-09-16 | Super-resolution image detection method based on residual error network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910872452.1A CN110619631A (en) | 2019-09-16 | 2019-09-16 | Super-resolution image detection method based on residual error network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110619631A true CN110619631A (en) | 2019-12-27 |
Family
ID=68923340
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910872452.1A Pending CN110619631A (en) | 2019-09-16 | 2019-09-16 | Super-resolution image detection method based on residual error network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110619631A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657259A (en) * | 2017-09-30 | 2018-02-02 | 平安科技(深圳)有限公司 | Distorted image detection method, electronic installation and readable storage medium storing program for executing |
CN110033410A (en) * | 2019-03-28 | 2019-07-19 | 华中科技大学 | Image reconstruction model training method, image super-resolution rebuilding method and device |
CN110197205A (en) * | 2019-05-09 | 2019-09-03 | 三峡大学 | A kind of image-recognizing method of multiple features source residual error network |
CN110210498A (en) * | 2019-05-31 | 2019-09-06 | 北京交通大学 | Digital image device evidence-obtaining system based on residual error study convolution converged network |
- 2019-09-16: CN CN201910872452.1A patent/CN110619631A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657259A (en) * | 2017-09-30 | 2018-02-02 | 平安科技(深圳)有限公司 | Distorted image detection method, electronic installation and readable storage medium storing program for executing |
CN110033410A (en) * | 2019-03-28 | 2019-07-19 | 华中科技大学 | Image reconstruction model training method, image super-resolution rebuilding method and device |
CN110197205A (en) * | 2019-05-09 | 2019-09-03 | 三峡大学 | A kind of image-recognizing method of multiple features source residual error network |
CN110210498A (en) * | 2019-05-31 | 2019-09-06 | 北京交通大学 | Digital image device evidence-obtaining system based on residual error study convolution converged network |
Non-Patent Citations (1)
Title |
---|
Huang Tao: "Image Reconstruction Technology Based on Object Modeling", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368849A (en) * | 2020-05-28 | 2020-07-03 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111783252A (en) * | 2020-07-20 | 2020-10-16 | 浙江浙能台州第二发电有限责任公司 | Control loop valve viscosity detection method based on residual error network |
CN111783252B (en) * | 2020-07-20 | 2024-01-02 | 浙江浙能台州第二发电有限责任公司 | Control loop valve viscosity detection method based on residual error network |
CN113838024A (en) * | 2021-09-22 | 2021-12-24 | 哈尔滨工业大学 | OLED panel defect prediction method |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20191227