CN115272083B - Image super-resolution method, device, equipment and medium - Google Patents
- Publication number
- CN115272083B CN115272083B CN202211177890.4A CN202211177890A CN115272083B CN 115272083 B CN115272083 B CN 115272083B CN 202211177890 A CN202211177890 A CN 202211177890A CN 115272083 B CN115272083 B CN 115272083B
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- infrared
- feature map
- filter
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
Abstract
The application discloses an image super-resolution method, apparatus, device and medium, relating to the technical field of image recognition and comprising the following steps: acquiring an image to be processed, and performing a feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map; fusing the infrared feature map and the transfer feature map to obtain fused image features, and performing a feature enhancement operation on the fused image features to obtain enhanced image features; determining a guide image based on the enhanced image features, performing deformation processing on the enhanced image features to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image. Through the above technical scheme of the present application, infrared image quality can be effectively improved and the efficiency of infrared image super-resolution processing increased.
Description
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a method, an apparatus, a device, and a medium for super-resolution of an image.
Background
At present, super-resolution of a single infrared image either mines internal information of the image, such as texture, edges and structure, or uses deep learning over a large number of images of the same type to capture common information, and exploits that information to super-resolve the image. The main problems are that no new information specific to the single picture is gained, and most super-resolution techniques are heuristic and cannot achieve high reconstruction quality. Guided infrared image super-resolution constrains the underdetermined, ill-posed super-resolution problem with the additional information brought by a color image. Generally, the information of the color image is extracted heuristically, or fusion is performed after features are simply extracted by a neural network. Existing methods are therefore limited by the infrared sensor process level: infrared image resolution is usually low, which hinders back-end processing, and color image information cannot be used efficiently.
Therefore, how to improve the quality of the infrared image and increase the super-resolution processing efficiency of the infrared image is a problem to be solved in the field.
Disclosure of Invention
In view of the above, the present invention provides an image super-resolution method, apparatus, device and medium, which can effectively improve the quality of an infrared image and increase the super-resolution processing efficiency of the infrared image. The specific scheme is as follows:
in a first aspect, the present application discloses an image super-resolution method, comprising:
acquiring an image to be processed, and performing feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map;
fusing the infrared characteristic diagram and the transfer characteristic diagram to obtain fused image characteristics, and performing characteristic enhancement operation on the fused image characteristics to obtain enhanced image characteristics;
determining a guide image based on the enhanced image characteristics, performing deformation processing on the enhanced image characteristics to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image.
Optionally, the acquiring the image to be processed includes:
acquiring an image to be processed which comprises an infrared image, a visible light image determined based on the infrared image, an image obtained by fuzzifying the visible light image and an infrared image obtained by performing bicubic up-sampling processing on the infrared image.
Optionally, the performing a feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map includes:
performing feature extraction operation on the infrared image by utilizing a cascaded three-layer convolutional neural network to obtain an infrared feature map;
performing corresponding feature extraction operation on the visible light image, the blurred image and the infrared image subjected to the bicubic upsampling processing by using convolutional neural network edge detection to obtain a first feature map, a second feature map and a third feature map;
and determining the transfer characteristic diagram based on the first characteristic diagram, the second characteristic diagram and the third characteristic diagram.
Optionally, the determining the transfer feature map based on the first feature map, the second feature map, and the third feature map includes:
determining a maximum index graph and an attention graph according to the corresponding relation between the second feature graph and the third feature graph;
and extracting the first feature map by using the maximum index map to obtain a transfer feature map.
Optionally, the determining a guide image based on the enhanced image feature and performing deformation processing on the enhanced image feature to obtain a filter space includes:
determining the guide image based on the enhanced image features and the infrared image subjected to the bicubic up-sampling processing;
and carrying out deformation processing on the enhanced image characteristics to obtain a filter space formed by a bilateral grid space.
Optionally, the determining a filter from the filter space and determining a target image based on the filter and the guide image includes:
determining a filter from the filter space according to the pixel position and the amplitude in the guide image;
and inserting the image coordinates in the guide image into the filter, and performing filtering optimization processing on the infrared image subjected to the bicubic up-sampling processing by using the filter to obtain a target image.
Optionally, the fusing the infrared feature map and the transfer feature map to obtain fused image features, and performing a feature enhancement operation on the fused image features to obtain enhanced image features, includes:
performing series fusion on the infrared characteristic diagram and the transfer characteristic diagram to obtain fused image characteristics;
and performing a first feature enhancement operation on the fused image features by utilizing a convolution layer and the attention map to obtain a first feature-enhanced image, performing a second feature enhancement operation on the first feature-enhanced image by utilizing a residual connection to obtain a second feature-enhanced image, and performing an up-sampling operation and a third feature enhancement operation on the second feature-enhanced image by utilizing a dense residual layer to obtain enhanced image features.
In a second aspect, the present application discloses an image super-resolution device, comprising:
the device comprises a characteristic extraction module, a feature extraction module and a feature extraction module, wherein the characteristic extraction module is used for acquiring an image to be processed and performing characteristic extraction operation on the image to be processed to obtain an infrared characteristic diagram and a transfer characteristic diagram;
the feature enhancement module is used for fusing the infrared feature map and the transfer feature map to obtain fused image features and performing feature enhancement operation on the fused image features to obtain enhanced image features;
and the target image determining module is used for determining a guide image based on the enhanced image characteristics, performing deformation processing on the enhanced image characteristics to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the aforementioned image super-resolution method.
In a fourth aspect, the present application discloses a computer storage medium for storing a computer program; wherein the computer program realizes the steps of the image super-resolution method disclosed in the foregoing when being executed by a processor.
The image super-resolution method comprises the steps of obtaining an image to be processed, and performing a feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map; fusing the infrared feature map and the transfer feature map to obtain fused image features, and performing a feature enhancement operation on the fused image features to obtain enhanced image features; determining a guide image based on the enhanced image features, performing deformation processing on the enhanced image features to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image. According to the method, a Transformer framework is used; the influence of mismatched cross-modal information is reduced by utilizing edge features, cross-modal image information fusion is realized, the fused features are mapped to a bilateral grid space, a bilateral filtering kernel is dynamically generated by utilizing the guide image, and dynamic convolution is realized, which effectively improves infrared image quality and increases infrared image super-resolution processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of an image super-resolution method disclosed in the present application;
FIG. 2 is a flowchart of an image super-resolution method disclosed in the present application;
FIG. 3 is a detailed flowchart of an image super-resolution method disclosed in the present application;
FIG. 4 is a schematic diagram of an image super-resolution method disclosed in the present application;
FIG. 5 is a schematic diagram of a super-resolution image apparatus according to the present disclosure;
FIG. 6 is a block diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, super-resolution of a single infrared image either mines internal information of the image, such as texture, edges and structure, or uses deep learning over a large number of images of the same type to capture common information, and exploits that information to super-resolve the image. The main problems are that no new information specific to the single picture is gained, and most super-resolution techniques are heuristic and cannot achieve high reconstruction quality. Guided infrared image super-resolution constrains the underdetermined, ill-posed super-resolution problem with the additional information brought by a color image. Generally, the information of the color image is extracted heuristically, or fusion is performed after features are simply extracted by a neural network. Existing methods are therefore limited by the infrared sensor process level: infrared image resolution is usually low, which hinders back-end processing, and color image information cannot be used efficiently. Therefore, how to improve infrared image quality and increase infrared image super-resolution processing efficiency is a problem to be solved in the field.
Image Super-Resolution (ISR) is the process of recovering a High-Resolution (HR) image from a given Low-Resolution (LR) image, and is a classic application in the field of Computer Vision (CV). ISR reconstructs a corresponding high-resolution image from an observed low-resolution image by software or hardware methods, and has important application value in fields such as surveillance equipment, satellite remote sensing, digital high definition, microscopic imaging, video coding and communication, video restoration and medical imaging. Since the mapping from a low-information-content image to a high-information-content image is one-to-many, an infinite number of HR images can be degraded to the same LR image; the image super-resolution problem is therefore a highly underdetermined, ill-posed problem and presents a significant challenge. Guided image super-resolution reconstructs a high-resolution infrared image; the original inputs are a low-resolution infrared image and a high-resolution color image of the same scene. Due to the difference in the imaging mechanisms of the sensors, the information gap between the color image and the infrared image is large, and it is difficult to process images from different information domains in the same way. How to effectively extract the information of the color image to help infrared super-resolution is thus a very challenging problem.
Referring to fig. 1, an embodiment of the present invention discloses an image super-resolution method, which specifically includes:
step S11: and acquiring an image to be processed, and performing feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map.
In this embodiment, an image set to be processed is acquired, consisting of an infrared image, a visible light image determined based on the infrared image, an image obtained by blurring the visible light image, and an infrared image obtained by performing bicubic up-sampling on the infrared image. It can be understood that the visible light image is determined manually; the visible light image is then blurred and the infrared image is bicubic-upsampled to obtain the blurred image and the processed infrared image, and these four images are taken as the images to be processed.
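As an illustration only, the assembly of the four inputs can be sketched in NumPy. The patent does not specify the blur kernel or a concrete bicubic implementation, so a box blur and nearest-neighbour upsampling are used here as hypothetical stand-ins:

```python
import numpy as np

def box_blur(img, k=3):
    """Stand-in for the blurring applied to the visible image
    (the patent does not fix the blur kernel; a k x k box filter is used here)."""
    H, W = img.shape
    p = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(k):
        for j in range(k):
            out += p[i:i + H, j:j + W]
    return out / (k * k)

def upsample(img, scale=4):
    """Nearest-neighbour stand-in for the bicubic up-sampling of the
    low-resolution infrared image."""
    return np.kron(img, np.ones((scale, scale)))

def build_inputs(ir_lr, vis_hr, scale=4):
    """Assemble the four inputs described above: the infrared image, the visible
    image, its blurred copy, and the upsampled infrared image."""
    return ir_lr, vis_hr, box_blur(vis_hr), upsample(ir_lr, scale)
```

In a real pipeline the box filter and nearest-neighbour resampling would be replaced by the Gaussian blur and bicubic interpolation of an image library.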
In this embodiment, after the image to be processed is acquired, feature extraction is performed on the infrared image by using a cascaded three-layer convolutional neural network to obtain an infrared feature map; corresponding feature extraction is then performed on the visible light image, the blurred image and the bicubic-upsampled infrared image by using convolutional neural network edge detection to obtain a first feature map, a second feature map and a third feature map, and the transfer feature map is determined based on the first, second and third feature maps. Specifically, a maximum index map and an attention map are determined according to the correspondence between the second feature map and the third feature map, and the first feature map is then extracted using the maximum index map to obtain the transfer feature map.
The specific steps are as follows. Corresponding feature extraction is performed on the visible light image, the blurred image and the bicubic-upsampled infrared image by using convolutional neural network edge detection to obtain a first feature map K, a second feature map Q and a third feature map V, where K and Q have the resolution of the original low-resolution infrared image, and V is produced at three scales: 1, 2 and 4 times the low-resolution image. Feature blocks of Q and K are extracted with a sliding window, and the point-wise correspondence between them is learned. The specific formula is as follows:

$$r_{i,j} = \left\langle \frac{q_i}{\lVert q_i \rVert}, \frac{k_j}{\lVert k_j \rVert} \right\rangle$$

where $r_{i,j}$ represents the correspondence between the $i$-th point in Q and the $j$-th point in K, $q_i$ represents the $i$-th feature block extracted by the sliding window from the feature map Q, and $k_j$ represents the $j$-th feature block extracted by the sliding window from the feature map K. According to this bimodal correspondence, the attention map Att of the visible light features to be extracted and the maximum-value index map Ind can be obtained. The maximum index map is obtained by

$$\mathrm{Ind}_i = \operatorname*{arg\,max}_j \, r_{i,j},$$

and the attention map by

$$\mathrm{Att}_i = \max_j \, r_{i,j}.$$

Features of V are then extracted according to the index map Ind to obtain the transfer features of the visible light image.
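The correspondence, attention-map and index-map computation described above can be sketched as follows. The sliding-window patch extraction is assumed to have been done already, and the normalized inner product (cosine similarity) is used as the correspondence measure:

```python
import numpy as np

def relevance_attention(q, k):
    """Point-wise correspondence between two sets of edge-feature patches.

    q, k: arrays of shape (N, C) and (M, C) -- unfolded feature blocks.
    Returns (att, ind): per-position maximum relevance (the attention map Att)
    and the index of the best-matching K patch (the max-index map Ind).
    """
    # Normalize each patch so the inner product becomes cosine similarity
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    r = qn @ kn.T               # r[i, j]: relevance of Q patch i to K patch j
    ind = r.argmax(axis=1)      # Ind_i = argmax_j r_{i,j}
    att = r.max(axis=1)         # Att_i = max_j r_{i,j}
    return att, ind

def transfer_features(v, ind):
    """Gather the V features selected by the max-index map (feature transfer)."""
    return v[ind]
```

The transferred features would then be weighted by `att` during fusion, as the description of the fusion module below explains.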
Step S12: and fusing the infrared characteristic diagram and the transfer characteristic diagram to obtain fused image characteristics, and performing characteristic enhancement operation on the fused image characteristics to obtain enhanced image characteristics.
Step S13: determining a guide image based on the enhanced image characteristics, performing deformation processing on the enhanced image characteristics to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image.
In this embodiment, the guide image is determined based on the enhanced image features and the bicubic-upsampled infrared image; the enhanced image features are deformed to obtain a filter space formed by a bilateral grid space, a filter is determined from the filter space according to pixel position and amplitude in the guide image, the filter is then sampled at the image coordinates of the guide image by interpolation, and the bicubic-upsampled infrared image is filtered and optimized with the filter to obtain the target image.
Specifically, the enhanced image features are deformed into a bilateral grid space, as shown in the following formula:

$$B = \mathrm{reshape}(F_e),$$

that is, the 3-dimensional enhanced image features $F_e$ are deformed to obtain a 4-dimensional structure $B$. In the physical sense, this can be understood as storing, at each grid position with spatial-range coordinates $(x, y, z)$, a vector whose length is here set to 27; this vector is reshaped to give a filter kernel of size $3 \times 3 \times 3$. According to the coordinates $(x, y)$ of the guide image $G$ and its pixel value $G(x, y)$, trilinear interpolation sampling is performed in the four-dimensional structure $B$, as follows:

$$k_{x,y} = \mathrm{TriInterp}\big(B;\, x,\, y,\, G(x, y)\big).$$

The bilateral filtering kernel $k_{x,y}$ obtained by the above formula is then applied: taking pixel $(x, y)$ of the bicubic-upsampled infrared image as the center, an image block $P_{x,y}$ of the kernel's size is cropped and filtered with the kernel $k_{x,y}$, giving the pixel value at that location of the optimized target map $I_{out}$, as follows:

$$I_{out}(x, y) = \sum_{u, v} k_{x,y}(u, v)\, P_{x,y}(u, v).$$
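The bilateral-grid filtering step described above can be sketched as follows. Two simplifications that are not in the patent are made for brevity: nearest-neighbour lookup in the range axis replaces trilinear interpolation, and the $3\times3\times3$ kernel is collapsed to a $3\times3$ spatial kernel by indexing its range axis with the centre guide value:

```python
import numpy as np

def bilateral_grid_filter(grid, guide, image, ksize=3):
    """Apply per-pixel filters sampled from a bilateral grid.

    grid : (D, k*k*k, H, W) -- learned grid; D range bins, each cell storing a
           flattened k x k x k kernel (k = 3 -> 27 coefficients, as in the text).
    guide: (H, W) in [0, 1]  -- guide image (pixel "amplitude").
    image: (H, W)            -- bicubic-upsampled infrared image to refine.
    """
    D, _, H, W = grid.shape
    k = ksize
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for y in range(H):
        for x in range(W):
            d = min(int(guide[y, x] * D), D - 1)      # range bin of this pixel
            kern3 = grid[d, :, y, x].reshape(k, k, k)  # k x k x k kernel
            r = min(int(guide[y, x] * k), k - 1)       # collapse the range axis
            kern = kern3[r]
            kern = kern / (kern.sum() + 1e-8)          # normalise the weights
            patch = padded[y:y + k, x:x + k]           # crop block around (y, x)
            out[y, x] = (kern * patch).sum()           # dynamic convolution
    return out
```

Because every pixel looks up its own kernel through the guide value, this realizes the per-pixel "dynamic convolution" the text refers to; a trained network would supply `grid` and `guide`.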
In this embodiment, an image to be processed is obtained, and a feature extraction operation is performed on it to obtain an infrared feature map and a transfer feature map; the infrared feature map and the transfer feature map are fused to obtain fused image features, and a feature enhancement operation is performed on the fused image features to obtain enhanced image features; a guide image is determined based on the enhanced image features, the enhanced image features are deformed to obtain a filter space, a filter is determined from the filter space, and a target image is determined based on the filter and the guide image. By using the Transformer framework, the influence of mismatched cross-modal information is reduced by utilizing edge features, cross-modal image information fusion is realized, the fused features are mapped to a bilateral grid space, a bilateral filtering kernel is dynamically generated by utilizing the guide image, and dynamic convolution is realized, so that infrared image quality can be effectively improved and infrared image super-resolution processing efficiency increased.
Referring to fig. 2, an embodiment of the present invention discloses an image super-resolution method, which specifically includes:
step S21: and acquiring an image to be processed, and performing feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map.
Step S22: the infrared feature map and the transfer feature map are fused in series to obtain fused image features; a first feature enhancement operation is performed on the fused image features by using a convolution layer and the attention map to obtain a first feature-enhanced image; a second feature enhancement operation is performed on the first feature-enhanced image by using a residual connection to obtain a second feature-enhanced image; and an up-sampling operation and a third feature enhancement operation are performed on the second feature-enhanced image by using a dense residual layer to obtain the enhanced image features.
Step S23: determining a guide image based on the enhanced image features, performing deformation processing on the enhanced image features to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image.
The present application uses a Transformer to fuse cross-modal bilateral information for dynamic guided image super-resolution. The Transformer is a model that uses an attention mechanism to increase the training speed of the model. A Transformer encoder consists of a self-attention layer, a position-wise feed-forward network, layer normalization modules and residual connections. The whole network is mainly divided into two parts: a feature extraction and fusion module, and a bilateral filtering optimization module. The specific network structure is shown in FIG. 3; the feature extraction and fusion module fuses visible light and infrared image information using the Transformer structure. The feature fusion has two sources: one is the reconstruction feature F extracted from the infrared low-resolution image LR by the reconstruction feature extraction module, and the other is obtained by selective transfer from the visible light image. Following the common practice of single-image super-resolution, the reconstruction feature extraction module is formed by three cascaded convolutional layers, and the reconstruction feature F is obtained by applying it to the infrared low-resolution image. For the other source, the information modalities of the infrared image and the visible light image differ, so their information cannot be fused directly. To reduce the degree of information mismatch between the modalities, the present application uses an edge feature extraction module to extract edge features from the infrared and visible light images, which can effectively reduce the influence of factors such as the color and brightness of the visible light image on infrared image super-resolution.
The edge feature extraction module can learn its parameters during network training to adapt to the distribution of the bimodal features. The module is a simplified version of RCF (a CNN edge detector): it uses the first 7 layers of RCF and can output edge features at three different scales, giving the first feature map K, the second feature map Q and the third feature map V. The point correspondence between Q and K is learned from their feature blocks, finally yielding the attention map Att and the maximum index map Ind, after which the fusion process begins. As shown in FIG. 4, in the fusion module the transfer feature T and the reconstruction feature F are first fused in series, and feature enhancement is then performed by a convolution layer followed by multiplication with the attention map. Finally, the reconstruction features are supplemented to the attention-enhanced features through a residual connection, and the enhanced features are passed through a Residual Dense Block (RDB) and up-sampled before being output to the next-level fusion module, so that feature fusion is performed after the resolution is doubled. The attention map Att is bicubic-upsampled and also input into the next-level fusion module, finally yielding the enhanced image features. The bilateral filtering optimization module then takes over: the features output by the feature fusion module are reshaped and split into two paths for different processing. One path aims to learn the guide image of the bilateral filter, i.e. determining a guide image based on the enhanced image features; the other path learns the bilateral grid space from which the bilateral filter is extracted.
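One fusion stage described above (serial fusion, convolution, attention multiplication, residual supplement) can be sketched minimally for a single channel. This is a stand-in, not the patent's implementation: the learned multi-channel convolutions, RDB and up-sampling are omitted, and the channel reduction by averaging replaces the learned convolution over the concatenated channels:

```python
import numpy as np

def conv3x3(x, w):
    """Minimal 'same' 3x3 convolution for a single-channel map
    (stand-in for the learned convolution layer)."""
    H, W = x.shape
    p = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * p[i:i + H, j:j + W]
    return out

def fuse_and_enhance(f_rec, f_trans, att, w):
    """One fusion stage: concatenate the transfer feature T with the
    reconstruction feature F (here: stack), convolve, multiply by the
    attention map, then supplement via the residual connection."""
    fused = np.stack([f_rec, f_trans])        # serial (channel) fusion
    mixed = conv3x3(fused.mean(axis=0), w)    # channel mixing stand-in
    enhanced = mixed * att                    # attention-weighted enhancement
    return f_rec + enhanced                   # residual supplement
```

Regions where the visible-light texture matched poorly get a small `att`, so the output there falls back toward the reconstruction feature, which is the stated purpose of the attention weighting.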
In addition, during training all parameters of the network are updated through backpropagation. The neural network is trained with 3 loss functions: a reconstruction loss $L_{rec}$, a gradient loss $L_{grad}$ and an adversarial loss $L_{adv}$, combined as

$$L = L_{rec} + L_{grad} + L_{adv}.$$

The reconstruction loss computes the loss between the reconstructed image and the original high-resolution image. The gradient loss mainly accounts for the high-frequency information in the explicit guide map and the reconstructed image, and is computed with the Laplacian operator $\nabla^2$, which calculates the horizontal and vertical gradients:

$$L_{grad} = \big\lVert \nabla^2 I_{SR} - \nabla^2 I_{HR} \big\rVert.$$

The adversarial loss mainly concerns the good visual quality of the reconstructed image, so a GAN [2] loss is adopted, consisting of a discriminator loss $L_D$ and a generator loss $L_G$.
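Under the assumption that an L1 distance is used (the patent text does not specify the norm), the reconstruction and gradient losses can be sketched as:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian: the operator the gradient loss uses to compare
    high-frequency content (horizontal and vertical second differences)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def reconstruction_loss(sr, hr):
    # Mean absolute difference between the SR result and the HR reference
    return np.abs(sr - hr).mean()

def gradient_loss(sr, hr):
    # Compare only the high-frequency (Laplacian) content of the two images
    return np.abs(laplacian(sr) - laplacian(hr)).mean()
```

Because the Laplacian of a constant is zero, a global brightness offset is penalized only by the reconstruction term, while edge mismatches are penalized by both.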
In this embodiment, an image to be processed is obtained, and a feature extraction operation is performed on it to obtain an infrared feature map and a transfer feature map; the infrared feature map and the transfer feature map are fused to obtain fused image features, and a feature enhancement operation is performed on the fused image features to obtain enhanced image features; a guide image is determined based on the enhanced image features, the enhanced image features are deformed to obtain a filter space, a filter is determined from the filter space, and a target image is determined based on the filter and the guide image. According to the method, a Transformer framework is used; the influence of mismatched cross-modal information is reduced by utilizing edge features, cross-modal image information fusion is realized, the fused features are mapped to the bilateral grid space, the bilateral filtering kernel is dynamically generated by utilizing the guide image, and dynamic convolution is realized, so that infrared image quality can be effectively improved and infrared image super-resolution processing efficiency increased.
Referring to fig. 5, an embodiment of the present invention discloses an image super-resolution device, which may specifically include:
the characteristic extraction module 11 is configured to acquire an image to be processed, and perform characteristic extraction operation on the image to be processed to obtain an infrared characteristic diagram and a transfer characteristic diagram;
a feature enhancement module 12, configured to fuse the infrared feature map and the transfer feature map to obtain a fused image feature, and perform a feature enhancement operation on the fused image feature to obtain an enhanced image feature;
a target image determining module 13, configured to determine a guide image based on the enhanced image features, perform deformation processing on the enhanced image features to obtain a filter space, determine a filter from the filter space, and determine a target image based on the filter and the guide image.
In this embodiment, an image to be processed is obtained, and a feature extraction operation is performed on it to obtain an infrared feature map and a transfer feature map; the infrared feature map and the transfer feature map are fused to obtain fused image features, and a feature enhancement operation is performed on the fused image features to obtain enhanced image features; a guide image is determined based on the enhanced image features, the enhanced image features are deformed to obtain a filter space, a filter is determined from the filter space, and a target image is determined based on the filter and the guide image. According to the method, a Transformer framework is used; the influence of mismatched cross-modal information is reduced by utilizing edge features, cross-modal image information fusion is realized, the fused features are mapped to the bilateral grid space, the bilateral filtering kernel is dynamically generated by utilizing the guide image, and dynamic convolution is realized, so that infrared image quality can be effectively improved and infrared image super-resolution processing efficiency increased.
In some specific embodiments, the feature extraction module 11 may specifically include:
an image acquisition module, configured to acquire an image to be processed composed of an infrared image, a visible light image determined based on the infrared image, an image obtained by blurring the visible light image, and an infrared image obtained by performing bicubic up-sampling on the infrared image.
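The patent does not disclose code for assembling this four-part input. The sketch below is an illustrative assumption: the blurring step is unspecified in the patent, so a simple box blur stands in for it, and the bicubic up-sampling is implemented with the standard Keys cubic-convolution kernel. All function names and image shapes are invented for illustration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def cubic_kernel(x, a=-0.5):
    """Keys cubic-convolution kernel; a = -0.5 is the usual bicubic choice."""
    x = np.abs(x)
    return np.where(x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
                    np.where(x < 2, a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a, 0.0))

def _upsample_rows(img, scale):
    """Cubic-convolution resampling along axis 0."""
    n = img.shape[0]
    out_n = int(round(n * scale))
    coords = (np.arange(out_n) + 0.5) / scale - 0.5   # centre-aligned source coords
    base = np.floor(coords).astype(int)
    out = np.zeros((out_n,) + img.shape[1:])
    for k in range(-1, 3):                            # 4-tap cubic support
        idx = np.clip(base + k, 0, n - 1)
        out += img[idx] * cubic_kernel(coords - (base + k))[:, None]
    return out

def bicubic_upsample(img, scale):
    """Separable bicubic up-sampling of a 2-D image."""
    return _upsample_rows(_upsample_rows(img, scale).T, scale).T

def box_blur(img, k=3):
    """Simple blur stand-in for the patent's unspecified blurring step."""
    pad = k // 2
    win = sliding_window_view(np.pad(img, pad, mode='edge'), (k, k))
    return win.mean(axis=(-2, -1))

# Assemble the four-part input; the images here are random placeholders.
rng = np.random.default_rng(0)
ir = rng.random((16, 16))    # low-resolution infrared image
vis = rng.random((32, 32))   # registered visible-light image (assumed given)
inputs = {
    "infrared": ir,
    "visible": vis,
    "visible_blurred": box_blur(vis),
    "infrared_upsampled": bicubic_upsample(ir, 2.0),
}
```

Cubic-convolution weights sum to one at every fractional offset, so a constant image survives up-sampling unchanged, which is a quick sanity check on the kernel.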
In some specific embodiments, the feature extraction module 11 may specifically include:
an infrared feature map determining module, configured to perform a feature extraction operation on the infrared image using a cascade of three convolutional neural network layers to obtain an infrared feature map;
a feature extraction module, configured to perform corresponding feature extraction operations on the visible light image, the blurred image, and the bicubic up-sampled infrared image using convolutional-neural-network edge detection to obtain a first feature map, a second feature map, and a third feature map;
a transfer feature map determination module, configured to determine the transfer feature map based on the first feature map, the second feature map, and the third feature map.
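As a rough sketch of these two extraction branches (the learned weights are obviously not in the patent, so random 3×3 kernels stand in for the trained cascade, and a fixed Sobel operator stands in for the learned edge detection):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_same(x, kernel):
    """'Same'-size 2-D cross-correlation (the conv used in CNNs), edge-padded."""
    kh, kw = kernel.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    win = sliding_window_view(xp, (kh, kw))
    return np.einsum('ijkl,kl->ij', win, kernel)

def cascaded_cnn(x, kernels):
    """Cascade of conv + ReLU stages; the weights here are random stand-ins."""
    for k in kernels:
        x = np.maximum(conv2d_same(x, k), 0.0)
    return x

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def sobel_edges(x):
    """Gradient-magnitude edge map, standing in for learned edge detection."""
    return np.hypot(conv2d_same(x, SOBEL_X), conv2d_same(x, SOBEL_X.T))

rng = np.random.default_rng(1)
ir = rng.random((12, 12))
feat = cascaded_cnn(ir, [rng.normal(0.0, 0.1, (3, 3)) for _ in range(3)])  # infrared feature map
edges = sobel_edges(ir)                                                    # edge-feature branch
```

On a vertical step image the Sobel branch responds only at the step columns, which is the behaviour the edge-feature branch relies on to suppress unmatched cross-modal content.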
In some specific embodiments, the feature extraction module 11 may specifically include:
a maximum index map determining module, configured to determine a maximum index map and an attention map according to the correspondence between the second feature map and the third feature map;
a transfer feature map determining module, configured to extract from the first feature map using the maximum index map to obtain the transfer feature map.
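This correspondence step can be sketched as hard cross-attention: cosine similarity between every spatial position of the third feature map (queries) and the second feature map (keys) yields an argmax index map and a max-value attention map, and the index map then gathers features from the first feature map. This is an assumed reading of the patent's wording, not its exact implementation:

```python
import numpy as np

def transfer_features(f_value, f_key, f_query):
    """Hard-attention transfer.

    f_value: first feature map (sharp visible), channels may differ
    f_key  : second feature map (blurred visible)
    f_query: third feature map (bicubic up-sampled infrared)
    """
    c, h, w = f_key.shape
    q = f_query.reshape(f_query.shape[0], -1)
    k = f_key.reshape(c, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    k = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    sim = q.T @ k                                  # (HW, HW) cosine similarities
    index_map = sim.argmax(axis=1)                 # maximum index map
    attention = sim.max(axis=1).reshape(h, w)      # attention (confidence) map
    v = f_value.reshape(f_value.shape[0], -1)
    transferred = v[:, index_map].reshape(f_value.shape[0], h, w)
    return transferred, attention, index_map

rng = np.random.default_rng(4)
f1 = rng.normal(size=(6, 5, 5))   # first feature map (values)
f2 = rng.normal(size=(8, 5, 5))   # second feature map (keys)
f3 = f2.copy()                    # third feature map (queries); identical here for illustration
transferred, attention, index_map = transfer_features(f1, f2, f3)
```

When the query and key maps coincide, each position matches itself, the attention map saturates at 1, and the transferred features reproduce the value map exactly, which makes the behaviour easy to verify.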
In some specific embodiments, the target image determining module 13 may specifically include:
a guide image determining module, configured to determine the guide image based on the enhanced image feature and the bicubic up-sampling processed infrared image;
and a deformation processing module, configured to deform the enhanced image features to obtain a filter space formed by a bilateral grid.
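One simple way to read this deformation is as a reshape of the enhanced feature volume into a bilateral grid of per-pixel candidate filters. The factorisation of the channel dimension into intensity bins and filter taps below is an assumption for illustration; the patent does not specify it:

```python
import numpy as np

def features_to_filter_space(feat, depth, ksize):
    """Reinterpret an enhanced feature volume (C, H, W) as a bilateral grid:
    each spatial cell holds `depth` candidate ksize x ksize filter kernels,
    one per guide-intensity bin.  Requires C == depth * ksize**2."""
    c, h, w = feat.shape
    assert c == depth * ksize * ksize, "channels must factor into the grid"
    return feat.reshape(depth, ksize * ksize, h, w)

rng = np.random.default_rng(2)
enhanced = rng.random((8 * 3 * 3, 16, 16))      # C = 8 intensity bins x 3x3 taps
grid = features_to_filter_space(enhanced, depth=8, ksize=3)
```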
In some specific embodiments, the target image determining module 13 may specifically include:
a filter determination module for determining a filter from the filter space based on pixel positions and amplitudes in the guide image;
and a target image determining module, configured to insert the image coordinates of the guide image into the filter and perform filtering optimization on the bicubic up-sampled infrared image using the filter to obtain the target image.
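A minimal sketch of this slicing-and-filtering step, under the same assumed grid layout as above: the guide image's amplitude selects a depth bin, the pixel position selects a grid cell, and the resulting per-pixel kernel filters the up-sampled infrared image. Real bilateral-grid slicing interpolates between neighbouring cells; this sketch takes the nearest bin only.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def slice_and_filter(grid, guide, image):
    """Select one filter per pixel from the filter space and apply it.

    grid : (depth, k*k, H, W) bilateral filter space
    guide: (H, W) guide image in [0, 1]; its amplitude picks the depth bin
    image: (H, W) bicubic up-sampled infrared image to refine
    """
    depth, k2, h, w = grid.shape
    k = int(round(k2 ** 0.5))
    bins = np.clip((guide * depth).astype(int), 0, depth - 1)
    ii = np.arange(h)[:, None]
    jj = np.arange(w)[None, :]
    kernels = grid[bins, :, ii, jj]                        # (H, W, k*k), one kernel per pixel
    kernels = kernels / (kernels.sum(-1, keepdims=True) + 1e-8)  # normalise each kernel
    pad = k // 2
    win = sliding_window_view(np.pad(image, pad, mode='edge'), (k, k))
    return np.einsum('hwk,hwk->hw', win.reshape(h, w, k2), kernels)

rng = np.random.default_rng(5)
out = slice_and_filter(np.ones((4, 9, 6, 6)), rng.random((6, 6)),
                       np.full((6, 6), 2.0))
```

With an all-ones grid every sliced kernel normalises to a local average, so a constant image passes through unchanged — a cheap correctness check for the slicing logic.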
In some specific embodiments, the feature enhancing module 12 may specifically include:
a series fusion module, configured to fuse the infrared feature map and the transfer feature map in series to obtain fused image features;
a feature enhancement module, configured to perform a first feature enhancement operation on the fused image features using a convolution layer and the attention map to obtain a first feature-enhanced image, perform a second feature enhancement operation on the first feature-enhanced image using a residual connection to obtain a second feature-enhanced image, and perform an up-sampling operation and a third feature enhancement operation on the second feature-enhanced image using a dense residual layer to obtain the enhanced image features.
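The three enhancement stages can be sketched as follows. Everything learned is replaced by a stand-in (a random matrix for the 1×1 convolution, a plain residual step plus nearest-neighbour up-sampling for the dense residual layer), so this shows only the data flow, not the patented network:

```python
import numpy as np

def fuse_and_enhance(f_ir, f_tr, attn, w_mix):
    """Series (channel) fusion of the infrared and transfer feature maps,
    followed by three enhancement stages.

    f_ir, f_tr: (C, H, W) infrared and transfer feature maps
    attn      : (H, W) attention map weighting the transferred features
    w_mix     : (C, 2C) random stand-in for a learned 1x1 convolution
    """
    fused = np.concatenate([f_ir, f_tr * attn[None]], axis=0)  # attention-weighted series fusion
    c2, h, w = fused.shape
    mixed = (w_mix @ fused.reshape(c2, -1)).reshape(-1, h, w)  # "1x1 conv" mixing back to C channels
    first = f_ir + mixed                                       # first enhancement (residual)
    second = first + np.maximum(first, 0.0)                    # second enhancement (residual connection)
    return second.repeat(2, axis=1).repeat(2, axis=2)          # up-sampling stand-in for the dense residual layer

rng = np.random.default_rng(3)
C, H, W = 4, 8, 8
out = fuse_and_enhance(rng.random((C, H, W)), rng.random((C, H, W)),
                       rng.random((H, W)), rng.normal(0.0, 0.1, (C, 2 * C)))
```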
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 stores a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the image super-resolution method performed by the electronic device disclosed in any of the foregoing embodiments.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon include an operating system 221, a computer program 222, data 223, and the like, and the storage mode may be transient or permanent.
The operating system 221, which may be Windows, Unix, Linux, or the like, manages and controls the hardware devices and the computer program 222 on the electronic device 20, enabling the processor 21 to operate on and process the data 223 in the memory 22. In addition to the computer program that performs the image super-resolution method disclosed in any of the foregoing embodiments, the computer program 222 may further include computer programs for other specific tasks. The data 223 may include data received by the image super-resolution device from external devices, data collected through its own input/output interface 25, and the like.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
Further, an embodiment of the present application further discloses a computer-readable storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the image super-resolution method disclosed in any of the foregoing embodiments are implemented.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The image super-resolution method, apparatus, device, and storage medium provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (5)
1. An image super-resolution method, characterized by comprising: acquiring an image to be processed, and performing a feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map;
fusing the infrared characteristic diagram and the transfer characteristic diagram to obtain fused image characteristics, and performing characteristic enhancement operation on the fused image characteristics to obtain enhanced image characteristics;
determining a guide image based on the enhanced image features, performing deformation processing on the enhanced image features to obtain a filter space, determining a filter from the filter space, and determining a target image based on the filter and the guide image;
wherein acquiring the image to be processed comprises: acquiring an image to be processed composed of an infrared image, a visible light image determined based on the infrared image, an image obtained by blurring the visible light image, and an infrared image obtained by performing bicubic up-sampling on the infrared image;
the performing feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map comprises: performing feature extraction operation on the infrared image by utilizing a cascaded three-layer convolutional neural network to obtain an infrared feature map; performing corresponding feature extraction operation on the visible light image, the blurred image and the bicubic up-sampling processed infrared image by using convolutional neural network edge detection to obtain a first feature map, a second feature map and a third feature map; determining the transfer feature map based on the first feature map, the second feature map and the third feature map;
the determining the transfer feature map based on the first feature map, the second feature map and the third feature map includes: determining a maximum index graph and an attention graph according to the corresponding relation between the second feature graph and the third feature graph; extracting the first feature map by using the maximum index map to obtain a transfer feature map;
the determining a guide image based on the enhanced image features and performing deformation processing on the enhanced image features to obtain a filter space includes: determining the guide image based on the enhanced image features and the infrared image subjected to the bicubic up-sampling processing; carrying out deformation processing on the enhanced image characteristics to obtain a filter space formed by a bilateral grid space;
the determining a filter from the filter space and a target image based on the filter and the guide image comprises: determining a filter from the filter space according to the pixel position and the amplitude in the guide image; and inserting the image coordinates in the guide image into the filter, and performing filtering optimization processing on the infrared image subjected to the bicubic up-sampling processing by using the filter to obtain a target image.
2. The image super-resolution method according to claim 1, wherein the fusing the infrared feature map and the transfer feature map to obtain fused image features and performing a feature enhancement operation on the fused image features to obtain enhanced image features includes:
performing series fusion on the infrared characteristic diagram and the transfer characteristic diagram to obtain fused image characteristics;
and performing a first feature enhancement operation on the fused image features using a convolution layer and the attention map to obtain a first feature-enhanced image, performing a second feature enhancement operation on the first feature-enhanced image using a residual connection to obtain a second feature-enhanced image, and performing an up-sampling operation and a third feature enhancement operation on the second feature-enhanced image using a dense residual layer to obtain the enhanced image features.
3. An image super-resolution device, comprising:
a feature extraction module, configured to acquire an image to be processed and perform a feature extraction operation on the image to be processed to obtain an infrared feature map and a transfer feature map;
the feature enhancement module is used for fusing the infrared feature map and the transfer feature map to obtain fused image features and performing feature enhancement operation on the fused image features to obtain enhanced image features;
a target image determining module, configured to determine a guide image based on the enhanced image feature, perform deformation processing on the enhanced image feature to obtain a filter space, determine a filter from the filter space, and determine a target image based on the filter and the guide image;
wherein the feature extraction module is configured to: acquiring an image to be processed, which is composed of an infrared image, a visible light image determined based on the infrared image, an image obtained by blurring the visible light image, and an infrared image obtained by performing bicubic up-sampling processing on the infrared image;
the characteristic extraction module is used for performing characteristic extraction operation on the infrared image by utilizing a cascaded three-layer convolutional neural network to obtain an infrared characteristic diagram; performing corresponding feature extraction operation on the visible light image, the blurred image and the bicubic up-sampling processed infrared image by using convolutional neural network edge detection to obtain a first feature map, a second feature map and a third feature map; determining the transfer feature map based on the first feature map, the second feature map and the third feature map;
the feature extraction module is specifically configured to determine a maximum index map and an attention map according to a correspondence between the second feature map and the third feature map; extracting the first feature map by using the maximum index map to obtain a transfer feature map;
the target image determining module is specifically configured to determine the guide image based on the enhanced image features and the infrared image subjected to the bicubic upsampling processing; carrying out deformation processing on the enhanced image characteristics to obtain a filter space formed by a bilateral grid space;
the target image determining module is specifically configured to determine a filter from the filter space according to a pixel position and an amplitude in the guide image; and inserting the image coordinates in the guide image into the filter, and performing filtering optimization processing on the infrared image subjected to the bicubic up-sampling processing by using the filter to obtain a target image.
4. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the image super-resolution method of any one of claims 1 to 2.
5. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the image super-resolution method of any of claims 1 to 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211177890.4A CN115272083B (en) | 2022-09-27 | 2022-09-27 | Image super-resolution method, device, equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211177890.4A CN115272083B (en) | 2022-09-27 | 2022-09-27 | Image super-resolution method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115272083A CN115272083A (en) | 2022-11-01 |
CN115272083B (en) | 2022-12-02 |
Family
ID=83756443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211177890.4A Active CN115272083B (en) | 2022-09-27 | 2022-09-27 | Image super-resolution method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272083B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103971354A (en) * | 2014-05-19 | 2014-08-06 | 四川大学 | Method for reconstructing low-resolution infrared image into high-resolution infrared image |
CN105556964A (en) * | 2013-01-30 | 2016-05-04 | 英特尔公司 | Content adaptive bi-directional or functionally predictive multi-pass pictures for high efficiency next generation video coding |
CN109614996A (en) * | 2018-11-28 | 2019-04-12 | 桂林电子科技大学 | The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image |
CN110544205A (en) * | 2019-08-06 | 2019-12-06 | 西安电子科技大学 | Image super-resolution reconstruction method based on visible light and infrared cross input |
CN111915546A (en) * | 2020-08-04 | 2020-11-10 | 西安科技大学 | Infrared and visible light image fusion method and system, computer equipment and application |
WO2021048863A1 (en) * | 2019-09-11 | 2021-03-18 | The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organization (Aro) (Volcani Center) | Methods and systems for super resolution for infra-red imagery |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103188988A (en) * | 2010-08-27 | 2013-07-03 | 索尼公司 | Image processing apparatus and method |
WO2012041492A1 (en) * | 2010-09-28 | 2012-04-05 | MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. | Method and device for recovering a digital image from a sequence of observed digital images |
US9251565B2 (en) * | 2011-02-03 | 2016-02-02 | Massachusetts Institute Of Technology | Hyper-resolution imaging |
CN111340711B (en) * | 2020-05-21 | 2020-09-08 | 腾讯科技(深圳)有限公司 | Super-resolution reconstruction method, device, equipment and storage medium |
US20210390747A1 (en) * | 2020-06-12 | 2021-12-16 | Qualcomm Incorporated | Image fusion for image capture and processing systems |
- 2022-09-27: application CN202211177890.4A (CN) granted as CN115272083B; legal status: Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105556964A (en) * | 2013-01-30 | 2016-05-04 | 英特尔公司 | Content adaptive bi-directional or functionally predictive multi-pass pictures for high efficiency next generation video coding |
CN103971354A (en) * | 2014-05-19 | 2014-08-06 | 四川大学 | Method for reconstructing low-resolution infrared image into high-resolution infrared image |
CN109614996A (en) * | 2018-11-28 | 2019-04-12 | 桂林电子科技大学 | The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image |
CN110544205A (en) * | 2019-08-06 | 2019-12-06 | 西安电子科技大学 | Image super-resolution reconstruction method based on visible light and infrared cross input |
WO2021048863A1 (en) * | 2019-09-11 | 2021-03-18 | The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organization (Aro) (Volcani Center) | Methods and systems for super resolution for infra-red imagery |
CN111915546A (en) * | 2020-08-04 | 2020-11-10 | 西安科技大学 | Infrared and visible light image fusion method and system, computer equipment and application |
Non-Patent Citations (1)
Title |
---|
Jiezhang Cao; "Reference-based Image Super-Resolution with Deformable Attention Transformer"; Computer Vision and Pattern Recognition; 2022-08-04; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN115272083A (en) | 2022-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921799B (en) | Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network | |
CN110427968B (en) | Binocular stereo matching method based on detail enhancement | |
CN106780590B (en) | Method and system for acquiring depth map | |
CN107123089B (en) | Remote sensing image super-resolution reconstruction method and system based on depth convolution network | |
Yang et al. | Coupled dictionary training for image super-resolution | |
CN108833785B (en) | Fusion method and device of multi-view images, computer equipment and storage medium | |
Guo et al. | Single image dehazing based on fusion strategy | |
CN112529776B (en) | Training method of image processing model, image processing method and device | |
CN107067380A (en) | High-definition picture reconstructing method based on low-rank tensor sum stratification dictionary learning | |
CN113222819B (en) | Remote sensing image super-resolution reconstruction method based on deep convolution neural network | |
CN113658040A (en) | Face super-resolution method based on prior information and attention fusion mechanism | |
CN112767290A (en) | Image fusion method, image fusion device, storage medium and terminal device | |
CN111028279A (en) | Point cloud data processing method and device, electronic equipment and storage medium | |
CN111353955A (en) | Image processing method, device, equipment and storage medium | |
Liu et al. | Haze removal for a single inland waterway image using sky segmentation and dark channel prior | |
CN113240584B (en) | Multitasking gesture picture super-resolution method based on picture edge information | |
CN115272083B (en) | Image super-resolution method, device, equipment and medium | |
CN116342377A (en) | Self-adaptive generation method and system for camouflage target image in degraded scene | |
CN109615584A (en) | A kind of SAR image sequence MAP super resolution ratio reconstruction method based on homography constraint | |
CN113012071B (en) | Image out-of-focus deblurring method based on depth perception network | |
CN110895790A (en) | Scene image super-resolution method based on posterior degradation information estimation | |
CN112017113B (en) | Image processing method and device, model training method and device, equipment and medium | |
CN111861897A (en) | Image processing method and device | |
Xue et al. | An end-to-end multi-resolution feature fusion defogging network | |
Xu et al. | Depth map super-resolution via multiclass dictionary learning with geometrical directions |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||