CN112767427A - Low-resolution image recognition algorithm for compensating edge information - Google Patents


Info

Publication number
CN112767427A
CN112767427A (application CN202110070894.1A)
Authority
CN
China
Prior art keywords
resolution image
low
edge
module
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110070894.1A
Other languages
Chinese (zh)
Inventor
毕萍
刘玉霞
谭仕立
刘颖
Current Assignee
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications filed Critical Xian University of Posts and Telecommunications
Priority to CN202110070894.1A priority Critical patent/CN112767427A/en
Publication of CN112767427A publication Critical patent/CN112767427A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/13 — Image analysis; Segmentation; Edge detection
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/20192 — Edge enhancement; Edge preservation

Abstract

The invention discloses a low-resolution image recognition algorithm that compensates for edge information. It comprises an image recognition module for recognizing low-resolution images, and an edge information generation module for estimating, from the low-resolution image, edge information approximating that of the high-resolution image. The invention predicts the edge information of a low-resolution image from the image itself; this edge information carries much of the high-frequency detail lost in the low-resolution image, and using the predicted edges improves the recognition rate of low-resolution images. In addition, an attention mechanism module is added to the residual blocks of the edge generation model, so that the extracted features better support generating edges close to those of the high-resolution image, while also providing a degree of noise robustness.

Description

Low-resolution image recognition algorithm for compensating edge information
Technical Field
The invention belongs to the field of image processing, and particularly relates to a low-resolution image recognition algorithm for compensating edge information.
Background
In recent years, the spread of surveillance equipment has brought great convenience to criminal investigation. However, owing to factors such as illumination, shooting distance and shooting angle, the captured target images often have low resolution, spanning only a few dozen pixels, and some also contain speckle noise. How to identify the target object in such low-resolution images is therefore a pressing practical problem.
Image recognition is a classic problem in image processing and pattern recognition, and excellent algorithms already exist, for example the LeNet-5 model, the VGG16 model, the DeepFace algorithm and the Face++ algorithm, with recognition rates reaching 99%. These algorithms, however, are generally suited to ordinary natural images; for the recognition of low-resolution images, no widely accepted effective algorithm currently exists.
One approach to low-resolution image recognition first converts the low-resolution image into a high-resolution image and then applies a classical recognition algorithm. This combines an image super-resolution reconstruction algorithm with an image recognition algorithm, turning one problem into two classical ones. The quality of the super-resolution reconstruction directly affects the final recognition result, and the approach requires training two separate networks, splitting what is really a single integrated problem into stages with no direct coupling, which makes the process relatively complex.
Another approach designs discriminative features directly on the low-resolution images. Hand-crafted image features have largely been replaced by features learned by network models, so image recognition with deep neural networks is the main research direction for this problem. However, the convolution kernels of classical recognition networks are often large while low-resolution images are small, so convolution cannot effectively extract diverse features and the recognition performance is unsatisfactory. Moreover, the high-frequency information in a low-resolution image is severely degraded, so even the image features a neural network captures lack high-frequency detail, yet those high-frequency features are of great value for subsequent recognition of the image.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a low resolution image recognition algorithm for compensating edge information. The technical problem to be solved by the invention is realized by the following technical scheme:
the invention provides a low-resolution image recognition algorithm for compensating edge information, which comprises an image recognition module, a low-resolution image recognition module and a low-resolution image recognition module, wherein the image recognition module is used for recognizing a low-resolution image; the edge information generating module is used for estimating the edge information approximate to the high-resolution image according to the low-resolution image; the edge generation module comprises the following steps:
(1) input image preparation. The original high resolution image I in the training data setgtSampling under N times, and generating a low-resolution image with the same size as the high-resolution image after up-sampling by N times
Figure BDA0002905735440000021
Then respectively extracting edge information of the high-resolution image and the low-resolution image by using a Canny operator to respectively obtain edges C of the high-resolution imagegtAnd edges of low resolution images
Figure BDA0002905735440000022
(2) A predicted edge is generated. Image of low resolution
Figure BDA0002905735440000023
And its corresponding edge
Figure BDA0002905735440000024
Sending the signal into a generation network as an input signal;
(3) and judging the predicted edge. Predicting edge CpredEdge C of high resolution image via discriminant networkgtComparing;
(4) and (3) repeating the steps (2) and (3) until a preset maximum iteration number or a preset minimum loss value is reached.
In one embodiment of the invention, the image recognition module has 7 layers, including 2 convolutional layers, 2 pooling layers, and 3 fully-connected layers.
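As a rough check on this architecture, the spatial sizes of the classical LeNet-5 layers for a 28 × 28 input can be traced with simple arithmetic; the kernel sizes, padding and strides below are the classical model's and are assumed here, since the patent's Table 1 parameters are not reproduced in this text:

```python
def conv_out(n, k, stride=1, pad=0):
    """Spatial size after a convolution or pooling window (floor division)."""
    return (n + 2 * pad - k) // stride + 1

# Classical LeNet-5 layout: 5x5 convolutions and 2x2 pooling, with 2-pixel
# padding on the first convolution so a 28x28 input stays 28x28.
n = conv_out(28, 5, pad=2)    # C1: 28 -> 28
n = conv_out(n, 2, stride=2)  # S2: 28 -> 14
n = conv_out(n, 5)            # C3: 14 -> 10
n = conv_out(n, 2, stride=2)  # S4: 10 -> 5
print(n)  # 5x5 feature maps feed the three fully-connected layers
```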
In one embodiment of the invention, the edge generation module consists of two parts, a generation network and a discrimination network; the generation network comprises a down-sampling module, a residual module with an attention mechanism, and an up-sampling module.
In one embodiment of the present invention, the residual module with the attention mechanism proceeds as follows:
(1) Generate the channel attention feature. The output features of the down-sampling module are fed into a feature extraction module to obtain the feature F1; F1 is passed through a max-pooling layer and an average-pooling layer respectively, each result is fed into a multilayer perceptron with one hidden layer, and the two results are added to obtain the channel attention feature F'.
(2) Generate the first corrected feature F2. The channel attention feature F' is multiplied by the output feature of the feature extraction module to obtain F2.
(3) Generate the spatial attention feature. The output of step (2) is passed through a max-pooling layer and an average-pooling layer respectively and the results are concatenated into a two-channel feature, from which the spatial attention feature F'' is extracted.
(4) Generate the second corrected feature F3. The spatial attention feature F'' is multiplied by the output of step (2) to obtain F3.
(5) The output features of the down-sampling module are added to the output of step (4) to obtain the residual feature F.
In one embodiment of the invention, there are 8 residual blocks with attention mechanism.
Compared with the prior art, the invention has the beneficial effects that:
(1) The invention predicts the edge information of a low-resolution image from the image itself; this edge information carries much of the high-frequency detail lost in the low-resolution image, and using the predicted edges improves the recognition rate of low-resolution images.
(2) The attention mechanism mimics human selective visual attention, the ability to pick out, from a mass of information, what is most critical to the current task. By adding an attention mechanism module to the residual blocks of the edge generation model, the extracted features better support generating edges close to those of the high-resolution image, and the edge generation model gains a degree of noise robustness; the recognition algorithm of the invention is therefore more robust and can also recognize low-resolution images containing noise.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
FIG. 1 is the overall architecture of the low-resolution image recognition algorithm for compensating edge information according to an embodiment of the present invention;
fig. 2 is a sample of experimental data of the low-resolution image recognition part provided by an embodiment of the present invention.
Fig. 3 is a diagram illustrating the recognition result of the low-resolution image in table 4 according to the embodiment of the present invention.
Fig. 4 is a ROC curve of the MNIST dataset of table 4 provided by an embodiment of the present invention.
FIG. 5 is the ROC curve for the Fashion-MNIST dataset in Table 4, provided by an embodiment of the present invention.
Fig. 6 is a comparison of feature maps on various types of data sets after edge features are added according to an embodiment of the present invention.
Fig. 7 is a graph of MNIST data × 7 experimental results in table 5 according to an embodiment of the present invention.
Fig. 8 is a graph of MNIST data × 3 experimental results in table 5 according to an embodiment of the present invention.
FIG. 9 is a graph showing the Fashion-MNIST data × 7 experimental results in Table 5 according to an embodiment of the present invention.
FIG. 10 is a graph showing the Fashion-MNIST data × 3 experimental results in Table 5 according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention for achieving the predetermined objects, a low resolution image recognition algorithm for compensating edge information according to the present invention is described in detail below with reference to the accompanying drawings and the detailed description.
The foregoing and other technical matters, features and effects of the present invention will be apparent from the following detailed description of the embodiments, which is to be read in connection with the accompanying drawings. The technical means and effects of the present invention adopted to achieve the predetermined purpose can be more deeply and specifically understood through the description of the specific embodiments, however, the attached drawings are provided for reference and description only and are not used for limiting the technical scheme of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of additional identical elements in the article or device comprising the element.
Referring to fig. 1, fig. 1 is an overall algorithm architecture of a low resolution image recognition algorithm for compensating edge information, wherein (a) is an overall network structure diagram including an edge generation module and an identification module, and (b) is a residual module structure diagram including an attention mechanism in the edge generation module.
Example 1
The image recognition module is mainly used to recognize the low-resolution image. The module adopts the classical LeNet-5 model, an end-to-end network with 7 layers: 2 convolutional layers, 2 pooling layers and 3 fully-connected layers; the parameters of each layer are shown in Table 1. A conventional recognition network feeds the low-resolution image I_lr directly into the network for training. Unlike that strategy, the present invention fuses the low-resolution image I_lr with its estimated edge information image C_pred and feeds the fused image into the network for training. The edge information image reinforces the high-frequency detail missing from the low-resolution image, so the recognition network captures richer image features and the recognition rate of low-resolution images improves.
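The fusion of the low-resolution image with its predicted edge map can be sketched as a simple channel concatenation; the batch size, trailing channel axis and random stand-in data below are illustrative assumptions:

```python
import numpy as np

# Illustrative shapes: a batch of grayscale low-resolution images and their
# predicted binary edge maps, fused along the channel axis before being fed
# to the recognition network.
rng = np.random.default_rng(0)
img_lr = rng.random((4, 28, 28, 1))                                # low-resolution images
edge_pred = (rng.random((4, 28, 28, 1)) > 0.9).astype(np.float32)  # edge maps

fused = np.concatenate([img_lr, edge_pred], axis=-1)  # 2-channel network input
print(fused.shape)  # (4, 28, 28, 2)
```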
TABLE 1 LeNet-5 identifies parameters of each layer of the network
Example 2
The edge information generation module is mainly used to estimate, from the low-resolution image, edge information approximating that of the high-resolution image. The module adopts a generative adversarial network structure, divided into a generation network and a discrimination network; the specific network structure parameters are shown in Table 2.
Table 2 network parameters of each layer of edge information generation module
The specific generation steps are as follows:
step 1: input image preparation. The original high resolution image I in the training data setgtDown-sampling by N times, up-sampling by N times, and generating low-resolution image with size consistent with that of high-resolution image
Figure BDA0002905735440000072
Then respectively extracting edge information of the high-resolution image and the low-resolution image by using a Canny operator to respectively obtain an edge C of the high-resolution imagegtAnd edges of low resolution images
Figure BDA0002905735440000073
Obviously, the edge of the low resolution image is greatly different from the edge of the high resolution image,the module is used for generating and high-resolution image edge C by using networkgtApproximate low resolution image edge CpredAnd compensating the edge with low resolution so as to obtain more accurate high-frequency information.
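This preparation step can be sketched in numpy; nearest-neighbour resizing and a plain gradient-magnitude threshold stand in for the unspecified interpolation and the Canny operator (both are simplifying assumptions):

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize; a stand-in for the (unspecified) interpolation."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def edge_map(img, thresh=0.5):
    """Gradient-magnitude edges; a simple stand-in for the Canny operator."""
    gy, gx = np.gradient(img.astype(np.float64))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# Build one training pair: I_gt -> downsample by N -> upsample back -> edges.
N = 4
img_gt = np.random.default_rng(0).random((28, 28))
img_lr = resize_nn(resize_nn(img_gt, 28 // N, 28 // N), 28, 28)
edge_gt, edge_lr = edge_map(img_gt), edge_map(img_lr)
print(img_lr.shape, edge_gt.shape)  # (28, 28) (28, 28)
```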
step 2: a predicted edge is generated. Image of low resolution
Figure BDA0002905735440000081
And its corresponding edge
Figure BDA0002905735440000082
The signal is sent into a generating network as an input signal, the specific network structure and the parameters thereof are shown in table 2, wherein the generating network comprises 8 groups of residual blocks of the fusion attention mechanism module in total for extracting features. Can be obtained by generating a network
Figure BDA0002905735440000083
Predicted edge C ofpred
step 3: and judging the predicted edge. Predicting edge CpredEdge C of high resolution image via discriminant networkgtComparing, the concrete network structure and the parameters thereof are shown in the table 2, and calculating the resistance loss function L according to the formula (1) and the formula (2)advSum-feature matching penalty LFM
Figure BDA0002905735440000084
Figure BDA0002905735440000085
Wherein: m denotes the number of convolution layers in the arbiter, NiIndicates the number of elements in the ith active layer, D(i)Indicating the activation value of the i-th layer of the arbiter. Feature matching loss function LFMTraining the network by comparing the similarity of the predicted low-resolution image edge and the feature map of the real high-resolution image edge in each intermediate layer, and finally enabling the predicted low-resolution image edge to be low in scoreThe resolution image edge is similar to the true high resolution image edge.
step 4: and repeating step2 and step3 until reaching a preset maximum iteration number or a preset minimum loss value. And finishing the training of the edge generation network to obtain an edge generation model of the low-resolution image.
Example 3
The residual module with the attention mechanism is mainly used to select, from many features, those most critical to the edge prediction task, and to strengthen the noise robustness of the generation network. The specific network structure and its parameters are shown in Table 2.
step 1: a channel attention feature F1 is generated. And the output features of the down-sampling module are sent into a feature extraction module to obtain features F1, the features are sent into a multilayer perceptron with a hidden layer after passing through a maximum pooling layer and an average pooling layer respectively, and the results are added to obtain channel attention features F'.
step 2: a first correction feature F2 is generated. The channel attention feature F' is multiplied by the output feature of the feature extraction module to obtain a first correction feature F2.
step 3: spatial attention features are generated. And (4) splicing the output result of step2 after passing through the maximum pooling layer and the average pooling layer respectively to obtain a two-dimensional feature, and extracting the feature to obtain a spatial attention feature F'.
step 4: a second correction feature F3 is generated. The spatial attention feature F "is multiplied by the output of step2 to obtain a second modified feature F3.
step 5: the output characteristic of the down-sampling module is added with the output characteristic of step4 to obtain the residual characteristic F. The residual error modules in the invention are 8 in number.
Example 4
The edge generation network training part of the invention is carried out according to the following steps:
step 1: Preprocess the training set images.
(1) Each high-resolution image I_gt is down-sampled by a factor of N and up-sampled by a factor of N to generate a low-resolution image I_lr of the same size as the high-resolution image.
(2) Edge information is extracted from the high-resolution image I_gt and the low-resolution image I_lr with the Canny operator, giving the high-resolution edge C_gt and the low-resolution edge C_lr respectively.
(3) The quadruple (I_gt, C_gt, I_lr, C_lr) forms one training sample; for each training pass, m samples are drawn at random from the training sample set and fed into the generator network.
step 2: Set the network parameters, such as the number of training iterations, the minimum loss value, the number of randomly drawn samples and the initial values of the convolution kernels in each layer, then start iterative training.
step 3: In one training pass, m low-resolution images I_lr and their Canny edge images C_lr are drawn, concatenated and fed into the generator network to train it; the corresponding high-resolution edges C_gt are fed into the discriminator to train it.
step 4: In the generator network, the concatenated image is first fed into the down-sampling module.
(1) The concatenated image passes through a mirror-padding layer, a convolutional layer, a spectral normalization layer, an instance normalization layer and a rectified linear unit, producing a feature map;
(2) the feature map passes through a convolutional layer, a spectral normalization layer, an instance normalization layer and a rectified linear unit;
(3) the result passes through a further convolutional layer, spectral normalization layer, instance normalization layer and rectified linear unit, yielding the third output feature map.
step 5: The down-sampled output feature map is fed into the residual module with the attention mechanism.
(1) The down-sampled output feature map passes through a mirror-padding layer, a convolutional layer, an instance normalization layer, a rectified linear unit, a mirror-padding layer, a convolutional layer and an instance normalization layer, producing a feature map;
(2) this feature map is fed into a max-pooling layer and an average-pooling layer respectively, producing two feature maps;
(3) the two feature maps each pass through a parameter-sharing convolutional layer, rectified linear unit and convolutional layer, again producing two feature maps;
(4) the two feature maps are added and passed through a Sigmoid function to obtain the channel attention feature F';
(5) the channel attention feature F' is multiplied by the output feature of the feature extraction module to obtain the first corrected feature F2;
(6) F2 is again fed into a max-pooling layer and an average-pooling layer respectively, producing two feature maps;
(7) the two feature maps are concatenated into a two-channel feature and passed through a convolutional layer and a Sigmoid function to obtain the spatial attention feature F'';
(8) the spatial attention feature F'' is multiplied by the first corrected feature to obtain the second corrected feature F3;
(9) the down-sampled output feature map is added to the second corrected feature F3 to obtain the output feature map of the residual module;
(10) processes (1) to (9) are repeated 8 times, i.e. through 8 residual modules, yielding the final residual output feature map F.
step 6: The feature map F output by the residual modules is fed into the up-sampling module.
(1) F passes through a transposed convolutional layer, a spectral normalization layer, an instance normalization layer and a rectified linear unit, producing feature map F4;
(2) F4 passes through another transposed convolutional layer, spectral normalization layer, instance normalization layer and rectified linear unit, producing feature map F5;
(3) F5 passes through a mirror-padding layer and a convolutional layer, producing the predicted edge image C_pred.
step 7: The output of the up-sampling module, i.e. the predicted edge C_pred, is fed into the discriminator network for judgment.
(1) C_pred passes through a convolutional layer, a spectral normalization layer and a leaky rectified linear unit, producing a feature map;
(2) the feature map passes through a convolutional layer, a spectral normalization layer and a leaky rectified linear unit; this is repeated 3 times;
(3) the resulting feature map passes through a convolutional layer and a spectral normalization layer.
step 8: Once the judgment condition is satisfied, the edge generation model for low-resolution images is obtained.
(1) The high-resolution edge C_gt is fed into the discriminator network to obtain the feature maps of each level;
(2) the feature maps of the predicted edge C_pred and of the high-resolution edge C_gt at each level are substituted into formulas (1) and (2) to compute the edge generation loss;
(3) training of the edge generation model stops when the loss function reaches the preset value.
step 9: If the loss value has not reached the preset value, m samples are again drawn at random and fed into the generator network, and training continues from step 4 until the preset number of training iterations is reached, completing the training of the edge generation model.
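The dual stopping criterion of steps 8 and 9 (a preset minimum loss or a preset maximum number of iterations, whichever comes first) can be sketched as a plain loop; `step_fn` is a hypothetical stand-in for one draw-m-samples-and-update pass:

```python
def train_edge_generator(step_fn, max_iters=1000, min_loss=0.05):
    """Iterate update passes until either stopping criterion is met.

    `step_fn` is a hypothetical stand-in for one pass: drawing m samples,
    running the generator and discriminator, and returning the current loss.
    """
    it, loss = 0, float("inf")
    for it in range(1, max_iters + 1):
        loss = step_fn(it)
        if loss <= min_loss:
            break
    return it, loss

# A dummy decaying "loss" stands in for the adversarial + feature-matching loss.
iters, final = train_edge_generator(lambda it: 1.0 / it, max_iters=50, min_loss=0.05)
print(iters, final)  # stops at iteration 20, where 1/20 = 0.05
```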
Example 5
The edge generation network testing part of the invention is carried out according to the following steps:
step 1: Preprocess the test set images.
(1) Each high-resolution image I_gt is down-sampled by a factor of N and up-sampled by a factor of N to generate a low-resolution image I_lr of the same size as the high-resolution image.
(2) Edge information is extracted from the high-resolution image I_gt and the low-resolution image I_lr with the Canny operator, giving the high-resolution edge C_gt and the low-resolution edge C_lr respectively.
(3) The quadruple (I_gt, C_gt, I_lr, C_lr) forms one test sample.
step 2: The test sample is fed into the trained model to obtain the predicted edge C_pred.
(1) Load the trained edge generation model;
(2) feed the test sample through the down-sampling module, the residual module with the attention mechanism and the up-sampling module, and output the predicted edge C_pred.
Example 6
The low-resolution image identification part of the invention is carried out according to the following steps:
step 1: The low-resolution image I_lr and its predicted edge image C_pred are merged to obtain the input sample for the recognition network.
step 2: The LeNet-5 recognition network performs recognition.
(1) The input sample passes through a convolutional layer and a max-pooling layer, producing a feature map;
(2) the feature map passes through another convolutional layer and max-pooling layer;
(3) the feature map then passes through the three fully-connected layers to produce the recognition result.
step 3: The recognition results of all test samples are tallied to obtain the recognition rate.
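Step 3 above amounts to averaging per-sample correctness into a recognition rate and, as in Table 4, summarizing repeated runs by mean and standard deviation; all numbers below are invented for illustration:

```python
import numpy as np

correct = np.array([1, 0, 1, 1, 1, 0, 1, 1])  # per-sample results of one test run
rate = correct.mean()                          # single-run recognition rate: 6/8

# Hypothetical recognition rates over repeated runs, summarized as mean and
# standard deviation in the style of Table 4 (the values are invented).
rates = np.array([0.91, 0.93, 0.92, 0.90, 0.94])
print(rate, round(float(rates.mean()), 2), round(float(rates.std()), 4))
```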
Two data sets, MNIST and Fashion-MNIST, were used in the experiments. The images are 28 × 28 (28 pixels × 28 pixels); each training set contains 60000 images, each test set contains 10000 images, and each data set has 10 classes. The per-class counts are given in Table 3. The 28 × 28 original images I_gt were down-sampled to 7 × 7 and 3 × 3 low-resolution images respectively and then up-sampled back to 28 × 28, and these restored images are defined as the low-resolution (noisy) images I_lr. Some of the experimental data are shown in Fig. 2.
Example 7
Preferably, the recognition test is performed with the algorithm provided by the invention. The low-resolution image recognition results are shown in Fig. 3; the recognition rate is reported as mean and standard deviation in Table 4, and the ROC curves on the MNIST and Fashion-MNIST data sets are shown in Fig. 4 and Fig. 5. The experimental results show that the algorithm strategy effectively reinforces the high-frequency information lost from low-resolution images and improves their recognition rate.
TABLE 3 number statistics of data set categories
TABLE 4 Low resolution image recognition results
Example 8
Preferably, low-resolution images with salt-and-pepper noise of density 0.01 added are fed into the recognition algorithm of the invention; the mean and standard deviation of the recognition rate are shown in Table 5. The MNIST edge learning results are shown in Figs. 7 and 8, and the Fashion-MNIST results in Figs. 9 and 10. The experimental results show that, compared with traditional denoising algorithms, the algorithm achieves a certain denoising effect when the noise is light.
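The noise used in this experiment can be reproduced with a simple random mask; the density 0.01 matches the experiment, while the even salt/pepper split is an assumption:

```python
import numpy as np

def add_salt_pepper(img, density=0.01, rng=None):
    """Set a `density` fraction of pixels to 0 (pepper) or 1 (salt).

    The even salt/pepper split is an assumption; the experiment only states
    the overall noise density.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0                        # pepper
    noisy[(mask >= density / 2) & (mask < density)] = 1.0  # salt
    return noisy

img = np.full((28, 28), 0.5)
noisy = add_salt_pepper(img, density=0.01)
frac = float((noisy != 0.5).mean())  # fraction of corrupted pixels, about 1%
print(noisy.shape)  # (28, 28)
```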
TABLE 5 Low resolution image recognition results with different density noise added
The foregoing is a detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not limited to these descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions may be made without departing from the concept of the invention, and these should be regarded as falling within the scope of protection of the invention.

Claims (5)

1. A low-resolution image recognition algorithm that compensates for edge information, comprising:
an image recognition module for recognizing the low-resolution image; and
an edge information generation module for estimating, from the low-resolution image, edge information approximating that of the high-resolution image;
characterized in that the edge generation module performs the following steps:
(1) Input image preparation: the original high-resolution image I_gt in the training data set is down-sampled by a factor of N and then up-sampled by a factor of N to generate a low-resolution image I_lr of the same size as the high-resolution image; edge information is then extracted from the high-resolution and low-resolution images with a Canny operator, yielding the high-resolution edge C_gt and the low-resolution edge C_lr, respectively;
(2) Predicted-edge generation: the low-resolution image I_lr and its corresponding edge C_lr are fed into the generation network as the input signal;
(3) Predicted-edge discrimination: the predicted edge C_pred is compared with the edge C_gt of the high-resolution image by the discrimination network;
(4) Steps (2) and (3) are repeated until a preset maximum number of iterations or a preset minimum loss value is reached.
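The input preparation of step (1) can be sketched briefly. A minimal Python/NumPy illustration under stated assumptions: decimation and nearest-neighbour up-sampling stand in for the unspecified N-times resampling, and a simple gradient-magnitude threshold stands in for the Canny operator the claim specifies (a real implementation would use a proper Canny edge detector):

```python
import numpy as np

def degrade(img, n=4):
    """Down-sample by a factor n (decimation) then up-sample back
    (nearest-neighbour) so the low-resolution image I_lr has the
    same size as the high-resolution image I_gt, as in step (1)."""
    low = img[::n, ::n]
    return np.repeat(np.repeat(low, n, axis=0), n, axis=1)

def edge_map(img, thresh=30.0):
    """Crude gradient-magnitude edge map: a dependency-free
    stand-in for the Canny operator, for illustration only."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

# I_gt: a toy 28x28 "image" with a bright square on a dark field
i_gt = np.zeros((28, 28), dtype=np.uint8)
i_gt[8:20, 8:20] = 200
i_lr = degrade(i_gt, n=4)
c_gt, c_lr = edge_map(i_gt), edge_map(i_lr)
```

The pair (I_lr, C_lr) is what step (2) feeds into the generation network, while C_gt serves as the reference for the discrimination network in step (3).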
2. The low-resolution image recognition algorithm that compensates for edge information according to claim 1, wherein: the image recognition module has 7 layers, comprising 2 convolutional layers, 2 pooling layers and 3 fully-connected layers.
3. The low-resolution image recognition algorithm that compensates for edge information according to claim 1, wherein: the edge generation module comprises two parts, a generation network and a discrimination network; the generation network comprises a down-sampling module, a residual module with an attention mechanism, and an up-sampling module.
4. The low-resolution image recognition algorithm for compensating edge information according to claim 3, wherein: the residual module with the attention mechanism performs the following steps:
(1) Channel attention feature generation: the output feature of the down-sampling module is fed into a feature extraction module to obtain a feature F1; F1 is passed through a max-pooling layer and an average-pooling layer respectively, each result is sent through a multilayer perceptron with one hidden layer, and the two outputs are added to obtain the channel attention feature F';
(2) First corrected feature generation: the channel attention feature F' is multiplied by the output feature of the feature extraction module to obtain the first corrected feature F2;
(3) Spatial attention feature generation: the output of step (2) is passed through a max-pooling layer and an average-pooling layer respectively, the results are concatenated into a two-channel feature, and a feature is extracted from it to obtain the spatial attention feature F'';
(4) Second corrected feature generation: the spatial attention feature F'' is multiplied by the output of step (2) to obtain the second corrected feature F3;
(5) Residual feature generation: the output of step (4) is added to the output feature of the down-sampling module to obtain the residual feature F.
5. The low resolution image recognition algorithm for compensating edge information according to claim 3, wherein: there are 8 residual modules with attention mechanism.
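The residual module of claim 4 follows the channel-then-spatial attention pattern of CBAM, which the applicants cite in the non-patent literature. The steps can be sketched in Python/NumPy as follows; the hidden-layer width, the 3x3 spatial kernel, the sigmoid gating, and all weight values are illustrative assumptions, and the input x stands in for the extracted feature F1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_same(x, k):
    """'Same'-padded 2-D cross-correlation of a (2, H, W) map
    with a (2, kh, kw) kernel, producing an (H, W) map."""
    kh, kw = k.shape[1:]
    p = np.pad(x, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    H, W = x.shape[1:]
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (p[:, i:i + kh, j:j + kw] * k).sum()
    return out

def attention_residual(x, w1, w2, w_sp):
    """Residual block with channel-then-spatial attention,
    mirroring steps (1)-(5) of claim 4; x has shape (C, H, W)."""
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # shared one-hidden-layer MLP
    ca = sigmoid(mlp(x.mean(axis=(1, 2))) + mlp(x.max(axis=(1, 2))))  # F' (channel)
    f2 = x * ca[:, None, None]                         # first corrected feature F2
    stacked = np.stack([f2.mean(axis=0), f2.max(axis=0)])  # avg+max pooled over channels
    sa = sigmoid(conv2d_same(stacked, w_sp))           # F'' (spatial)
    f3 = f2 * sa                                       # second corrected feature F3
    return x + f3                                      # residual feature F

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = 0.1 * rng.standard_normal((2, 4))       # hidden layer of width 2 (illustrative)
w2 = 0.1 * rng.standard_normal((4, 2))
w_sp = 0.1 * rng.standard_normal((2, 3, 3))  # 3x3 spatial-attention kernel (illustrative)
f = attention_residual(x, w1, w2, w_sp)
```

Because both attention maps lie in (0, 1), the residual output differs from the input feature by at most |x| element-wise, which keeps the 8 stacked modules of claim 5 stable.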
CN202110070894.1A 2021-01-19 2021-01-19 Low-resolution image recognition algorithm for compensating edge information Pending CN112767427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110070894.1A CN112767427A (en) 2021-01-19 2021-01-19 Low-resolution image recognition algorithm for compensating edge information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110070894.1A CN112767427A (en) 2021-01-19 2021-01-19 Low-resolution image recognition algorithm for compensating edge information

Publications (1)

Publication Number Publication Date
CN112767427A true CN112767427A (en) 2021-05-07

Family

ID=75703270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110070894.1A Pending CN112767427A (en) 2021-01-19 2021-01-19 Low-resolution image recognition algorithm for compensating edge information

Country Status (1)

Country Link
CN (1) CN112767427A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070291170A1 (en) * 2006-06-16 2007-12-20 Samsung Electronics Co., Ltd. Image resolution conversion method and apparatus
CN102243711A (en) * 2011-06-24 2011-11-16 南京航空航天大学 Neighbor embedding-based image super-resolution reconstruction method
CN105488776A (en) * 2014-10-10 2016-04-13 北京大学 Super-resolution image reconstruction method and apparatus
CN109272447A (en) * 2018-08-03 2019-01-25 天津大学 A kind of depth map super-resolution method
CN109816593A (en) * 2019-01-18 2019-05-28 大连海事大学 A kind of super-resolution image reconstruction method of the generation confrontation network based on attention mechanism
CN110175953A (en) * 2019-05-24 2019-08-27 鹏城实验室 A kind of image super-resolution method and system
CN110298791A (en) * 2019-07-08 2019-10-01 西安邮电大学 A kind of super resolution ratio reconstruction method and device of license plate image
CN111062872A (en) * 2019-12-17 2020-04-24 暨南大学 Image super-resolution reconstruction method and system based on edge detection
CN111861886A (en) * 2020-07-15 2020-10-30 南京信息工程大学 Image super-resolution reconstruction method based on multi-scale feedback network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SANGHYUN WOO等: "CBAM: Convolutional Block Attention Module", 《PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION(ECCV)2018》, pages 1 - 17 *
LIU Ying et al.: "Review and prospect of image super-resolution techniques", Journal of Frontiers of Computer Science and Technology, pages 1 - 19 *
LIU Ying et al.: "Low-resolution image recognition algorithm based on edge learning", Journal of Computer Applications, pages 1 - 7 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240584A (en) * 2021-05-11 2021-08-10 上海大学 Multitask gesture picture super-resolution method based on picture edge information
CN113240584B (en) * 2021-05-11 2023-04-28 上海大学 Multitasking gesture picture super-resolution method based on picture edge information

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN106683048B (en) Image super-resolution method and device
CN109684922B (en) Multi-model finished dish identification method based on convolutional neural network
CN112633382B (en) Method and system for classifying few sample images based on mutual neighbor
Kadam et al. Detection and localization of multiple image splicing using MobileNet V1
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
WO2022083335A1 (en) Self-attention mechanism-based behavior recognition method
Zhang et al. License plate localization in unconstrained scenes using a two-stage CNN-RNN
CN115171165A (en) Pedestrian re-identification method and device with global features and step-type local features fused
CN110826411B (en) Vehicle target rapid identification method based on unmanned aerial vehicle image
Guo et al. Rethinking gradient operator for exposing AI-enabled face forgeries
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
Mallick et al. Copy move and splicing image forgery detection using cnn
CN115311508A (en) Single-frame image infrared dim target detection method based on depth U-type network
CN112767427A (en) Low-resolution image recognition algorithm for compensating edge information
Xu et al. Exposing fake images generated by text-to-image diffusion models
CN110503157B (en) Image steganalysis method of multitask convolution neural network based on fine-grained image
CN113378672A (en) Multi-target detection method for defects of power transmission line based on improved YOLOv3
CN112818840A (en) Unmanned aerial vehicle online detection system and method
CN112818774A (en) Living body detection method and device
CN116823852A (en) Strip-shaped skin scar image segmentation method and system based on convolutional neural network
CN116523858A (en) Attention mechanism-based oil leakage detection method for power equipment and storage medium
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN115273202A (en) Face comparison method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination