WO2023046136A1 - Feature fusion method, image defogging method and device - Google Patents

Feature fusion method, image defogging method and device

Info

Publication number
WO2023046136A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature
fused
fusion
features
mth
Prior art date
Application number
PCT/CN2022/121209
Other languages
French (fr)
Chinese (zh)
Inventor
董航
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2023046136A1 publication Critical patent/WO2023046136A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Definitions

  • The present application relates to the technical field of image processing, and in particular to a feature fusion method, an image defogging method, and a device.
  • Image dehazing is a classic image processing problem.
  • The main purpose of image dehazing is to repair a hazy image so as to obtain a clear, haze-free image. Since various high-level computer vision tasks (image detection, image recognition, etc.) first require the image to be dehazed into a clear image, the image dehazing problem has received extensive attention in the computer vision community.
  • Although a multi-scale network improves the overall performance of image dehazing by extracting and utilizing features from different scales, the multi-scale network architecture loses the spatial information of image features during downsampling, and there is a lack of sufficient connection between features of different scales in non-adjacent network layers.
  • Fusing multi-scale features to improve the reuse of network features has been proven, in many deep learning architectures, to be an effective way to improve network performance.
  • A multi-scale feature fusion method that is widely used at present is the one based on re-projection technology.
  • Although the re-projection-based method can achieve multi-scale feature fusion, the re-projection technology restricts the content exchanged between features of different scales and limits the diversity of the features generated during multi-scale feature fusion, which in turn impairs the learning ability of image dehazing.
  • The present application provides a feature fusion method, an image defogging method, and a device, which are used to solve the problem that the multi-scale feature fusion methods in the related art limit the diversity of features in the network architecture.
  • An embodiment of the present application provides a feature fusion method, including: acquiring a target feature and at least one feature to be fused, where the target feature and the at least one feature to be fused are features of the same image at different spatial scales; dividing the target feature into a first feature and a second feature; processing the first feature based on a residual dense block (RDB) to obtain a third feature; fusing the second feature and the at least one feature to be fused to obtain a fourth feature; and merging the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
  • In some embodiments, fusing the second feature and the at least one feature to be fused to obtain the fourth feature includes: sorting the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, obtaining a sorting result; fusing the first feature to be fused in the sorting result with the second feature to generate a fusion result of the first feature to be fused; fusing the other features to be fused, one by one, with the fusion result of the previous feature to be fused, generating a fusion result of the last feature to be fused in the sorting result; and taking the fusion result of the last feature to be fused in the sorting result as the fourth feature.
  • In some embodiments, fusing the first feature to be fused in the sorting result with the second feature to generate the fusion result of the first feature to be fused includes: sampling the second feature into a feature with the same spatial scale as the first feature to be fused, generating a first sampled feature corresponding to the first feature to be fused; calculating the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, obtaining a feature difference corresponding to the first feature to be fused; sampling the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, obtaining a second sampled feature corresponding to the first feature to be fused; and adding and fusing the second feature and the second sampled feature corresponding to the first feature to be fused, generating the fusion result of the first feature to be fused.
  • In some embodiments, fusing the other features to be fused, one by one, with the fusion result of the previous feature to be fused includes: sampling the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating a first sampled feature corresponding to the m-th feature to be fused, where m is a positive integer greater than 1; calculating the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining a feature difference corresponding to the m-th feature to be fused; sampling the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining a second sampled feature corresponding to the m-th feature to be fused; and adding and fusing the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating a fusion result of the m-th feature to be fused.
  • In some embodiments, dividing the target feature into a first feature and a second feature includes: dividing the target feature into the first feature and the second feature based on a feature channel of the target feature.
  • An embodiment of the present application provides an image defogging method, including: processing a target image through an encoding module to obtain encoded features, wherein the encoding module includes L cascaded encoders with different spatial scales, the m-th encoder is used to fuse, through the feature fusion method described in any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, and L and m are both positive integers with m ≤ L;
  • processing the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features;
  • processing the restored features through a decoding module to obtain a dehazed image of the target image;
  • wherein the decoding module includes L cascaded decoders with different spatial scales, and the m-th decoder is used to fuse, through the feature fusion method described in any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate a fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
  • An embodiment of the present application provides a feature fusion device, including: an acquisition unit configured to acquire a target feature and at least one feature to be fused, where the target feature and the at least one feature to be fused are features of the same image at different spatial scales; a division unit configured to divide the target feature into a first feature and a second feature; a first processing unit configured to process the first feature based on a residual dense connection network to obtain a third feature; a second processing unit configured to fuse the second feature and the at least one feature to be fused to obtain a fourth feature; and a merging unit configured to merge the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
  • In some embodiments, the second processing unit is specifically configured to: sort the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, obtaining a sorting result; fuse the first feature to be fused in the sorting result with the second feature to generate a fusion result of the first feature to be fused; fuse the other features to be fused, one by one, with the fusion result of the previous feature to be fused, generating a fusion result of the last feature to be fused in the sorting result; and take the fusion result of the last feature to be fused in the sorting result as the fourth feature.
  • In some embodiments, the second processing unit is specifically configured to: sample the second feature into a feature with the same spatial scale as the first feature to be fused, generating a first sampled feature corresponding to the first feature to be fused; calculate the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, obtaining a feature difference corresponding to the first feature to be fused; sample the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, obtaining a second sampled feature corresponding to the first feature to be fused; and add and fuse the second feature and the second sampled feature corresponding to the first feature to be fused, generating the fusion result of the first feature to be fused.
  • In some embodiments, the second processing unit is specifically configured to: sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating a first sampled feature corresponding to the m-th feature to be fused, where m is a positive integer greater than 1; calculate the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining a feature difference corresponding to the m-th feature to be fused; sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining a second sampled feature corresponding to the m-th feature to be fused; and add and fuse the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating a fusion result of the m-th feature to be fused.
  • In some embodiments, the division unit is specifically configured to divide the target feature into the first feature and the second feature based on a feature channel of the target feature.
  • An embodiment of the present application provides an image defogging device, including: a feature extraction unit configured to process a target image through an encoding module to obtain encoded features, wherein the encoding module includes L cascaded encoders with different spatial scales, the m-th encoder is used to fuse, through the feature fusion method described in any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, and L and m are both positive integers with m ≤ L; a feature processing unit configured to process the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features; and an image generation unit configured to process the restored features through a decoding module to obtain a dehazed image of the target image.
  • An embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is used to store a computer program, and the processor is used to enable the electronic device, when calling the computer program, to implement the feature fusion method described in the first aspect or any embodiment of the first aspect.
  • An embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a computing device, causes the computing device to implement the feature fusion method described in the first aspect or any embodiment of the first aspect.
  • An embodiment of the present application provides a computer program product that, when run on a computer, enables the computer to implement the feature fusion method described in the first aspect or any embodiment of the first aspect.
  • In the feature fusion method, after the target feature and the at least one feature to be fused are obtained, the target feature is first divided into the first feature and the second feature; the first feature is then processed based on the RDB to obtain the third feature, and the second feature and the at least one feature to be fused are fused to obtain the fourth feature; finally, the third feature and the fourth feature are merged to generate the fusion result of the target feature and the at least one feature to be fused.
  • The feature fusion method provided by the embodiments of the present application divides the feature that needs enhanced fusion into two parts, a first feature and a second feature, processes one part (the first feature) based on the RDB, and fuses the other part (the second feature) with the features to be fused. Since the RDB can update features and generate redundant features, and fusing the second feature with the features to be fused introduces the effective information of features at other spatial scales, the method realizes multi-scale feature fusion while ensuring that new features are generated and that the diversity of features in the network architecture is preserved. Therefore, the feature fusion method provided by the embodiments of the present application can solve the problem that the multi-scale feature fusion methods in the related art limit the diversity of features in the network architecture.
  • Fig. 1 is a flow chart of the steps of the feature fusion method provided by an embodiment of the present application.
  • Fig. 2 is the first schematic diagram of the data flow of the feature fusion method provided by an embodiment of the present application.
  • Fig. 3 is the second schematic diagram of the data flow of the feature fusion method provided by an embodiment of the present application.
  • FIG. 4 is a flow chart of the steps of the image defogging method provided by the embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a network model for implementing an image defogging method provided by an embodiment of the present application
  • FIG. 6 is a schematic structural diagram of a feature fusion device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image defogging device provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design scheme described as "exemplary" or "for example" in the embodiments of the present application should not be interpreted as being preferred over, or more advantageous than, other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
  • In the embodiments of the present application, "plurality" means two or more.
  • the embodiment of the present application provides a feature fusion method, which can be used in the image processing process of any image processing scene.
  • For example, the feature fusion method provided by the embodiments of the present application can fuse the features extracted from an image in an image dehazing scene; for another example, it can also fuse the features extracted from an image during image restoration.
  • Moreover, the feature fusion method provided in the embodiments of the present application can also fuse the features extracted from an image during image super-resolution.
  • the embodiment of the present application does not limit the usage scenario of the feature fusion method, as long as the usage scenario includes multiple image features of different spatial scales that need to be fused. Referring to Figure 1, the feature fusion method includes the following steps:
  • the target feature and the at least one feature to be fused are features of different spatial scales of the same image.
  • the target feature in the embodiment of the present application refers to a feature that needs to be fused and enhanced
  • the feature to be fused refers to a feature used to perform fusion and enhancement on the target feature.
  • feature extraction may be performed on the image to be processed based on feature extraction functions or feature extraction networks of different spatial scales, so as to obtain the target feature and the at least one feature to be fused.
  • the implementation of dividing the target feature into the first feature and the second feature may include:
  • the target feature is divided into a first feature and a second feature based on a feature channel of the target feature.
  • A channel of a feature, in the embodiments of the present application, refers to a feature map contained in the feature: each channel is the feature map obtained by extracting the feature along a certain dimension, so a channel of a feature is a feature map in a specific sense. Dividing a feature based on its channels means putting the feature maps of some dimensions into one feature set and using the feature maps of the remaining dimensions as another feature set.
  • the ratio of the first feature to the second feature is not limited in the embodiment of the present application.
  • In practice, the amount of effective information in the features at each spatial scale and the amount of new features that need to be generated determine the ratio of the first feature to the second feature.
  • the ratio of the first feature to the second feature may be 1:1.
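  • As an illustration only, such a channel-wise split at a 1:1 ratio can be expressed in a few lines; PyTorch and the 64-channel shape below are assumptions, not values from this application:

```python
import torch

# Hypothetical target feature: (batch, channels, height, width).
target = torch.randn(1, 64, 32, 32)

# Divide along the channel dimension at a 1:1 ratio:
# `first` goes to the RDB branch, `second` to the multi-scale fusion branch.
first, second = target.chunk(2, dim=1)   # each half has 32 channels
```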
  • the residual dense block includes three main parts, which are: Contiguous Memory (CM), Local Feature Fusion (LFF) and Local Residual Learning (LRL).
  • The CM is mainly used to send the output of the previous RDB to each convolutional layer in the current RDB; the LFF is mainly used to fuse the output of the previous RDB with the outputs of all convolutional layers of the current RDB; and the LRL is mainly used to add the output of the previous RDB to the output of the LFF of the current RDB and use the sum as the output of the current RDB.
  • Because an RDB can perform feature updating and redundant-feature generation, processing the first feature based on the residual dense block can increase the diversity of features.
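  • A minimal RDB sketch along the lines of the CM/LFF/LRL description above; PyTorch is assumed, and the layer count and growth rate are illustrative choices rather than the application's values:

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, channels: int, growth: int = 32, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(nn.Conv2d(c, growth, 3, padding=1), nn.ReLU(inplace=True))
            )
            c += growth
        # Local feature fusion: 1x1 conv over the block input plus all layer outputs.
        self.lff = nn.Conv2d(c, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # Contiguous memory: every layer sees the block input and all earlier outputs.
            feats.append(layer(torch.cat(feats, dim=1)))
        # Local residual learning: add the block input to the fused output.
        return x + self.lff(torch.cat(feats, dim=1))
```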
  • step S14 (fusing the second feature and the at least one feature to be fused to obtain a fourth feature) includes the following steps a to d:
  • Step a: Sort the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, obtaining a sorting result.
  • The larger the difference between the spatial scale of a feature to be fused and the spatial scale of the second feature, the higher the position of that feature in the sorting result; the smaller the difference, the lower its position in the sorting result.
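  • A small sketch of this ordering; it assumes a feature's spatial scale can be read off its height and that the sort key is the absolute scale gap to the second feature:

```python
def sort_by_scale_gap(features, second):
    # Largest scale difference first, smallest last, as in step a.
    gap = lambda f: abs(f.shape[-2] - second.shape[-2])
    return sorted(features, key=gap, reverse=True)
```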
  • Step b: Fuse the first feature to be fused in the sorting result with the second feature, generating a fusion result of the first feature to be fused.
  • The following takes the case where the first feature to be fused in the sorting result is J_0 and the second feature is j_n2 as an example to illustrate the above step b.
  • the implementation of the above step b may include the following steps 1 to 4:
  • Step 1: Sample the second feature j_n2 into a feature with the same spatial scale as the first feature to be fused J_0, generating the first sampled feature corresponding to J_0.
  • The sampling in this step can be up-sampling or down-sampling, determined by the spatial scale of the first feature to be fused J_0 relative to the spatial scale of the second feature j_n2.
  • Step 2: Calculate the difference between the first sampled feature corresponding to the first feature to be fused J_0 and J_0 itself, obtaining the feature difference e_0 corresponding to J_0.
  • In notation reconstructed from the surrounding steps, the process of the above step 2 can be described as: e_0 = S(j_n2) − J_0, where S(·) denotes the scale-matching sampling of step 1.
  • Step 3: Sample the feature difference e_0 corresponding to the first feature to be fused J_0 into a feature with the same spatial scale as the second feature j_n2, obtaining the second sampled feature corresponding to J_0.
  • The sampling in this step can likewise be up-sampling or down-sampling, determined by the spatial scale of the feature difference corresponding to J_0 relative to the spatial scale of the second feature j_n2.
  • Step 4: Add and fuse the second feature j_n2 and the second sampled feature corresponding to the first feature to be fused J_0, generating the fusion result J_0^n of the first feature to be fused.
  • In the same notation, the process of the above step 4 can be described as: J_0^n = j_n2 + S′(e_0), where S′(·) denotes the sampling of step 3.
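  • Steps 1 to 4 can be sketched as follows; PyTorch is assumed, bilinear resizing stands in for the unspecified up/down-sampling, the sign of the difference follows the reconstruction above, and both features are assumed to have the same channel count:

```python
import torch.nn.functional as F

def fuse_pair(base, j_m):
    """One fusion step: `base` plays the role of j_n2 (or, later, of the
    previous fusion result), and `j_m` is the feature to be fused."""
    # Step 1: sample `base` to j_m's spatial scale -> first sampled feature.
    sampled = F.interpolate(base, size=j_m.shape[-2:], mode="bilinear", align_corners=False)
    # Step 2: feature difference (sign convention assumed): e = S(base) - j_m.
    e = sampled - j_m
    # Step 3: sample the difference back to `base`'s spatial scale -> second sampled feature.
    e_back = F.interpolate(e, size=base.shape[-2:], mode="bilinear", align_corners=False)
    # Step 4: additive fusion -> fusion result.
    return base + e_back
```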
  • Step c: Fuse the other features to be fused in the sorting result, one by one, with the fusion result of the previous feature to be fused, generating a fusion result of the last feature to be fused in the sorting result.
  • In some embodiments, the m-th feature to be fused in the sorting result (m is a positive integer greater than 1) is fused with the fusion result of the previous feature to be fused (the (m-1)-th feature to be fused); the implementation includes the following steps I to IV:
  • Step I: Sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating the first sampled feature corresponding to the m-th feature to be fused.
  • Step II: Calculate the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining the feature difference corresponding to the m-th feature to be fused.
  • Step III: Sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining the second sampled feature corresponding to the m-th feature to be fused.
  • Step IV: Add and fuse the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating the fusion result of the m-th feature to be fused.
  • The only difference between obtaining the fusion result of the m-th feature to be fused (steps I to IV) and obtaining the fusion result of the first feature to be fused (steps 1 to 4) lies in the inputs: when obtaining the fusion result of the first feature to be fused, the inputs are the second feature and the first feature to be fused, whereas when obtaining the fusion result of the m-th feature to be fused, the inputs are the fusion result of the (m-1)-th feature to be fused and the m-th feature to be fused. The calculation itself is the same.
  • The following takes sorting results that include, in order, the feature to be fused J_0, the feature to be fused J_1, the feature to be fused J_2, ..., and the feature to be fused J_t as an example to explain the above step c.
  • Between obtaining the fusion result J_0^n of the first feature to be fused and obtaining the fusion result J_t^n of the last feature to be fused J_t in the sorting result, the process further includes:
  • sampling the fusion result J_0^n of the first feature to be fused J_0 into a feature with the same spatial scale as the second feature to be fused J_1, generating the first sampled feature corresponding to J_1; calculating the difference between J_1 and that first sampled feature, obtaining the feature difference corresponding to J_1; sampling the feature difference corresponding to J_1 into a feature with the same spatial scale as the fusion result J_0^n, obtaining the second sampled feature corresponding to J_1; and adding and fusing J_0^n with the second sampled feature corresponding to J_1, generating the fusion result J_1^n of the second feature to be fused;
  • sampling the fusion result J_1^n of the second feature to be fused J_1 into a feature with the same spatial scale as the third feature to be fused J_2, generating the first sampled feature corresponding to J_2; calculating the difference between J_2 and that first sampled feature, obtaining the feature difference corresponding to J_2; sampling the feature difference corresponding to J_2 into a feature with the same spatial scale as the fusion result J_1^n, obtaining the second sampled feature corresponding to J_2; and adding and fusing J_1^n with the second sampled feature corresponding to J_2, generating the fusion result J_2^n of the third feature to be fused;
  • and so on, until the fusion results of the fourth feature to be fused J_3, the fifth feature to be fused J_4, ..., the t-th feature to be fused J_{t-1}, and the (t+1)-th feature to be fused J_t in the sorting result are obtained one by one, finally yielding the fusion result J_t^n of the (t+1)-th feature to be fused J_t.
  • Step d: Take the fusion result of the last feature to be fused in the sorting result as the fourth feature.
  • For example, if the sorting results include, in order, the feature to be fused J_0, the feature to be fused J_1, the feature to be fused J_2, ..., and the feature to be fused J_t, then the fusion result J_t^n of the last feature to be fused J_t is used as the fourth feature.
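  • Steps a to d then amount to folding the sorted features into the second feature one by one; the sketch below reuses the fuse_pair and sort_by_scale_gap helpers sketched earlier:

```python
def fuse_all(second, features_to_fuse):
    result = second                                   # start from the second feature
    for j_m in sort_by_scale_gap(features_to_fuse, second):
        # For m > 1 the previous fusion result takes the place of the second feature.
        result = fuse_pair(result, j_m)
    return result                                     # the fourth feature (step d)
```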
  • In some embodiments, merging the third feature and the fourth feature may include: concatenating the third feature and the fourth feature along the channel dimension.
  • In the feature fusion method, after the target feature and the at least one feature to be fused are obtained, the target feature is first divided into the first feature and the second feature; the first feature is then processed based on the RDB to obtain the third feature, and the second feature and the at least one feature to be fused are fused to obtain the fourth feature; finally, the third feature and the fourth feature are merged to generate the fusion result of the target feature and the at least one feature to be fused.
  • The feature fusion method provided by the embodiments of the present application divides the feature that needs enhanced fusion into two parts, a first feature and a second feature, processes one part (the first feature) based on the RDB, and fuses the other part (the second feature) with the features to be fused. Since the RDB can update features and generate redundant features, and fusing the second feature with the features to be fused introduces the effective information of features at other spatial scales, the method realizes multi-scale feature fusion while ensuring that new features are generated and that the diversity of features in the network architecture is preserved. Therefore, the feature fusion method provided by the embodiments of the present application can solve the problem that the multi-scale feature fusion methods in the related art limit the diversity of features in the network architecture.
  • In addition, since the above embodiment divides the target feature into the first feature and the second feature and only the second feature participates in the multi-spatial-scale feature fusion, the above embodiment also reduces the number of features that need to be fused (the number of channels of the second feature is smaller than that of the target feature), thereby reducing the computation of feature fusion and improving its efficiency.
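  • Putting the pieces together, one possible reading of the whole method is sketched below, reusing the RDB and fuse_all sketches above; the 1:1 channel split and the channel counts remain assumptions:

```python
import torch
import torch.nn as nn

class FeatureFusionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.rdb = RDB(channels // 2)                 # processes the first feature

    def forward(self, target, features_to_fuse):
        first, second = target.chunk(2, dim=1)        # divide by feature channel
        third = self.rdb(first)                       # third feature
        fourth = fuse_all(second, features_to_fuse)   # fourth feature
        return torch.cat([third, fourth], dim=1)      # fusion result (channel concat)
```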
  • the embodiments of the present application further provide an image defogging method.
  • the image defogging method provided by the embodiment of the present application includes the following steps S41 to S43:
  • S41: Process the target image through an encoding module to obtain encoded features. The encoding module includes L cascaded encoders with different spatial scales; the m-th encoder is used to fuse, through the feature fusion method described in any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output it to all encoders after the m-th encoder.
  • L and m are both positive integers, and m ≤ L.
  • S42: Process the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features.
  • S43: Process the restored features through a decoding module to obtain the dehazed image of the target image.
  • The decoding module includes L cascaded decoders with different spatial scales; the m-th decoder is used to fuse, through the feature fusion method described in any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate a fusion result of the m-th decoder, and output it to all decoders after the m-th decoder.
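  • At the pipeline level, S41 to S43 can be sketched as follows; this is a schematic only, in which each stage is assumed to be a callable that applies the fusion described above to the outputs of the earlier stages:

```python
def dehaze(image, encoders, restoration_rdbs, decoders):
    feats = []
    x = image
    for enc in encoders:             # S41: L cascaded encoders at different scales,
        x = enc(x, feats)            # each fusing the fusion results of earlier encoders
        feats.append(x)
    for rdb in restoration_rdbs:     # S42: feature restoration with at least one RDB
        x = rdb(x)
    outs = []
    for dec in decoders:             # S43: L cascaded decoders, mirroring the encoders
        x = dec(x, outs)
        outs.append(x)
    return x                         # the dehazed image
```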
  • the encoding module, feature restoration module, and decoding module used to execute the embodiment shown in FIG. 4 above form a U-Net.
  • the U-Net is a special convolutional neural network.
  • The U-Net mainly includes an encoding module (also known as a contraction path), a feature restoration module, and a decoding module (also known as an expansion path).
  • The encoding module is mainly used to capture the context information in the original image, and the corresponding decoding module is used to accurately localize the parts that need to be segmented in the original image, and then generate the processed image.
  • An improvement of the U-Net is that, in order to accurately locate the parts that need to be segmented in the original image, the features extracted by the encoding module are combined with the new feature maps during the upsampling process, so as to preserve the important information in the features to the greatest extent, thereby reducing the number of training samples and the demand for computing resources.
  • the network model used to implement the embodiment shown in FIG. 4 includes: an encoding module 51 forming a U-shaped network, a feature restoration module 52 and a decoding module 53 .
  • The encoding module 51 includes L cascaded encoders with different spatial scales, which are used to process the target image I and obtain the encoded feature i_L.
  • The m-th encoder uses the feature fusion method provided by the above embodiments to fuse the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder.
  • The feature restoration module 52 includes at least one RDB; it receives the encoded feature i_L output by the encoding module 51 and processes i_L through the at least one RDB to obtain the restored feature j_L.
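  • A minimal sketch of this restoration chain, reusing the RDB sketch above; the RDB count and channel width are assumptions:

```python
import torch.nn as nn

feature_restoration = nn.Sequential(RDB(64), RDB(64))
# j_L = feature_restoration(i_L)   # encoded feature in, restored feature out
```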
  • The decoding module 53 includes L cascaded decoders with different spatial scales. The m-th decoder is used to fuse, through the feature fusion method provided by the above embodiments, the image features of the decoding module at the m-th decoder with the fusion results output by all decoders before the m-th decoder, generate a fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder; the dehazed image J of the target image I is obtained according to the fusion result j_1 output by the last decoder.
  • In some embodiments, the operation by which the m-th encoder in the encoding module 51 fuses the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder (the first encoder to the (m-1)-th encoder) can be described as follows: the feature i_m of the encoding module at the m-th encoder is divided as i_m = i_m1 + i_m2, where i_m1 represents the first feature obtained by dividing i_m and is processed by f(·), the operation of processing a feature based on the RDB, and i_m2 represents the second feature obtained by dividing i_m, which is fused with the fusion results of the preceding encoders.
  • Similarly, the operation by which the m-th decoder in the decoding module 53 fuses the image features of the decoding module at the m-th decoder with the fusion results output by all decoders before the m-th decoder (the L-th decoder to the (m+1)-th decoder) can be described as follows: the feature j_m of the decoding module at the m-th decoder is divided as j_m = j_m1 + j_m2, where j_m1 represents the first feature obtained by dividing j_m and is processed by f(·), the RDB-based processing operation, and j_m2 represents the second feature obtained by dividing j_m, which is fused with the fusion results of the preceding decoders; j_m^n denotes the fusion result output by the m-th decoder of the decoding module, and L is the total number of encoders in the encoding module.
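  • One way to read the m-th encoder's operation as code, reusing the sketches above; the split ratio and the use of fuse_all over the earlier encoders' results are assumptions consistent with the description, and a decoder stage would be symmetric:

```python
import torch
import torch.nn as nn

class EncoderStage(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.rdb = RDB(channels // 2)                     # the f(.) above

    def forward(self, i_m, earlier_results):
        i_m1, i_m2 = i_m.chunk(2, dim=1)                  # i_m = i_m1 + i_m2
        fused = fuse_all(i_m2, earlier_results) if earlier_results else i_m2
        return torch.cat([self.rdb(i_m1), fused], dim=1)  # the m-th fusion result
```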
  • Since the image defogging method provided by the embodiments of the present application performs feature fusion through the feature fusion method provided by the above embodiments, it ensures the generation of new features and the diversity of features in the network architecture when realizing multi-scale feature fusion, and can therefore improve the performance of image defogging.
  • the embodiment of the present application also provides a feature fusion device.
  • the device embodiment corresponds to the aforementioned method embodiment.
  • For brevity, this device embodiment does not describe the details of the foregoing method embodiments one by one, but it should be clear that the feature fusion device in this embodiment can correspondingly implement all the content of the foregoing method embodiments.
  • FIG. 6 is a schematic structural diagram of the feature fusion device. As shown in FIG. 6, the feature fusion device 600 includes:
  • the obtaining unit 61 is configured to obtain a target feature and at least one feature to be fused, and the target feature and the at least one feature to be fused are respectively features of different spatial scales of the same image.
  • a division unit 62 configured to divide the target feature into a first feature and a second feature.
  • the first processing unit 63 is configured to process the first feature based on the residual densely connected network to obtain a third feature.
  • the second processing unit 64 is configured to fuse the second feature and the at least one feature to be fused to obtain a fourth feature.
  • the combining unit 65 is configured to combine the third feature and the fourth feature to generate a fusion result of the target feature and at least one feature to be fused.
  • In some embodiments, the second processing unit 64 is specifically configured to: sort the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, obtaining a sorting result; fuse the first feature to be fused in the sorting result with the second feature to generate a fusion result of the first feature to be fused; fuse the other features to be fused in the sorting result, one by one, with the fusion result of the previous feature to be fused, generating a fusion result of the last feature to be fused in the sorting result; and take the fusion result of the last feature to be fused in the sorting result as the fourth feature.
  • In some embodiments, the second processing unit 64 is specifically configured to: sample the second feature into a feature with the same spatial scale as the first feature to be fused, generating a first sampled feature corresponding to the first feature to be fused; calculate the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, obtaining a feature difference corresponding to the first feature to be fused; sample the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, obtaining a second sampled feature corresponding to the first feature to be fused; and add and fuse the second feature and the second sampled feature corresponding to the first feature to be fused, generating the fusion result of the first feature to be fused.
  • In some embodiments, the second processing unit 64 is specifically configured to: sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating a first sampled feature corresponding to the m-th feature to be fused, where m is a positive integer greater than 1; calculate the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining a feature difference corresponding to the m-th feature to be fused; sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining a second sampled feature corresponding to the m-th feature to be fused; and add and fuse the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating a fusion result of the m-th feature to be fused.
  • In some embodiments, the division unit 62 is specifically configured to divide the target feature into the first feature and the second feature based on a feature channel of the target feature.
  • the feature fusion device provided in this embodiment can execute the feature fusion method provided in the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of the image defogging device. As shown in FIG. 7 , the image defogging device 700 includes:
  • The feature extraction unit 71 is configured to process the target image through the encoding module to obtain encoded features, wherein the encoding module includes L cascaded encoders with different spatial scales, and the m-th encoder is used to fuse, through the feature fusion method described in any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder; L and m are both positive integers, and m ≤ L.
  • the feature processing unit 72 is configured to process the encoded features through a feature restoration module composed of at least one residual block RDB to obtain restored features.
  • The image generation unit 73 is configured to process the restored features through a decoding module to obtain the dehazed image of the target image, wherein the decoding module includes L cascaded decoders with different spatial scales, and the m-th decoder is used to fuse, through the feature fusion method described in any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate a fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
  • the image defogging device provided in this embodiment can implement the image defogging method provided in the above method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The electronic device provided by this embodiment includes a memory 81 and a processor 82; the memory 81 is used to store a computer program, and the processor 82 is configured to execute the feature fusion method or the image defogging method provided by the above embodiments when calling the computer program.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the computing device implements the feature fusion method or the image defogging method provided by the above embodiments.
  • An embodiment of the present application also provides a computer program product that, when run on a computer, enables the computing device to implement the feature fusion method or the image defogging method provided by the foregoing embodiments.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
  • The processor can be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • Memory may include non-permanent storage in computer readable media, in the form of random access memory (RAM) and/or nonvolatile memory such as read only memory (ROM) or flash RAM.
  • Computer-readable media includes both volatile and non-volatile, removable and non-removable storage media.
  • the storage medium may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, A magnetic tape cartridge, disk storage or other magnetic storage device or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer readable media excludes transitory computer readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present application provides a feature fusion method, an image defogging method, and a device. The method comprises: acquiring a target feature and at least one feature to be fused, where the target feature and the at least one feature to be fused are features on different spatial scales of the same image; dividing the target feature into a first feature and a second feature; processing the first feature on the basis of a residual dense block (RDB) to obtain a third feature; fusing the second feature and the at least one feature to be fused to obtain a fourth feature; and merging the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.

Description

A feature fusion method, image defogging method and device
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 202111138532.8, filed on September 27, 2021 and entitled "A Feature Fusion Method, Image Dehazing Method and Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to a feature fusion method, an image defogging method, and a device.
Background
Image dehazing is a classic image processing problem. The main purpose of image dehazing is to repair a hazy image so as to obtain a clear, haze-free image. Since various high-level computer vision tasks (image detection, image recognition, etc.) first require the image to be dehazed into a clear image, the image dehazing problem has received extensive attention in the computer vision community.
In the field of image dehazing, the input image generally contains a large amount of redundant information, and making full use of this redundancy can effectively improve the effect of image restoration. To exploit it, the redundant information must be extracted from different positions of the image, so the receptive field of a deep learning network becomes an important design criterion. To enlarge the receptive field, multi-scale networks have been widely used in image dehazing and have achieved good results. Although a multi-scale network improves the overall performance of image dehazing by extracting and utilizing features from different scales, the multi-scale architecture loses the spatial information of image features during downsampling, and there is a lack of sufficient connection between features of different scales in non-adjacent network layers. Fusing multi-scale features to improve the reuse of network features has been proven, in many deep learning architectures, to be an effective way to improve network performance. A widely used multi-scale feature fusion method at present is the one based on re-projection technology. However, although the re-projection-based method can achieve multi-scale feature fusion, re-projection restricts the content exchanged between features of different scales and limits the diversity of the features generated during fusion, which in turn impairs the learning ability of image dehazing.
Technical Solution
In view of this, the present application provides a feature fusion method, an image defogging method, and a device, which are used to solve the problem that the multi-scale feature fusion methods in the related art limit the diversity of features in the network architecture.
To achieve the above purpose, the embodiments of the present application provide the following technical solutions:
In a first aspect, an embodiment of the present application provides a feature fusion method, including: acquiring a target feature and at least one feature to be fused, where the target feature and the at least one feature to be fused are features of the same image at different spatial scales; dividing the target feature into a first feature and a second feature; processing the first feature based on a residual dense block (RDB) to obtain a third feature; fusing the second feature and the at least one feature to be fused to obtain a fourth feature; and merging the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
In some embodiments, fusing the second feature and the at least one feature to be fused to obtain the fourth feature includes: sorting the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, obtaining a sorting result; fusing the first feature to be fused in the sorting result with the second feature to generate a fusion result of the first feature to be fused; fusing the other features to be fused, one by one, with the fusion result of the previous feature to be fused, generating a fusion result of the last feature to be fused in the sorting result; and taking the fusion result of the last feature to be fused in the sorting result as the fourth feature.
In some embodiments, fusing the first feature to be fused in the sorting result with the second feature to generate the fusion result of the first feature to be fused includes: sampling the second feature into a feature with the same spatial scale as the first feature to be fused, generating a first sampled feature corresponding to the first feature to be fused; calculating the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, obtaining a feature difference corresponding to the first feature to be fused; sampling the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, obtaining a second sampled feature corresponding to the first feature to be fused; and adding and fusing the second feature and the second sampled feature corresponding to the first feature to be fused, generating the fusion result of the first feature to be fused.
In some embodiments, fusing the other features to be fused, one by one, with the fusion result of the previous feature to be fused includes: sampling the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating a first sampled feature corresponding to the m-th feature to be fused, where m is a positive integer greater than 1; calculating the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining a feature difference corresponding to the m-th feature to be fused; sampling the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining a second sampled feature corresponding to the m-th feature to be fused; and adding and fusing the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating a fusion result of the m-th feature to be fused.
In some embodiments, dividing the target feature into a first feature and a second feature includes: dividing the target feature into the first feature and the second feature based on a feature channel of the target feature.
In a second aspect, an embodiment of the present application provides an image defogging method, including: processing a target image through an encoding module to obtain encoded features, where the encoding module includes L cascaded encoders with different spatial scales, the m-th encoder is used to fuse, through the feature fusion method described in any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate a fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, and L and m are both positive integers with m ≤ L; processing the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features; and processing the restored features through a decoding module to obtain a dehazed image of the target image, where the decoding module includes L cascaded decoders with different spatial scales, and the m-th decoder is used to fuse, through the feature fusion method described in any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate a fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
In a third aspect, an embodiment of the present application provides a feature fusion apparatus, including: an acquisition unit configured to acquire a target feature and at least one feature to be fused, the target feature and the at least one feature to be fused being features of different spatial scales of the same image; a division unit configured to divide the target feature into a first feature and a second feature; a first processing unit configured to process the first feature based on a residual densely connected network to obtain a third feature; a second processing unit configured to fuse the second feature and the at least one feature to be fused to obtain a fourth feature; and a merging unit configured to merge the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
In some embodiments, the second processing unit is specifically configured to: sort the at least one feature to be fused in descending order according to the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, to obtain a sorting result; fuse the first feature to be fused in the sorting result with the second feature, to generate a fusion result of the first feature to be fused; fuse, one by one, the other features to be fused in the sorting result with the fusion result of the previous feature to be fused, to generate a fusion result of the last feature to be fused in the sorting result; and take the fusion result of the last feature to be fused in the sorting result as the fourth feature.
In some embodiments, the second processing unit is specifically configured to: sample the second feature into a feature with the same spatial scale as the first feature to be fused, to generate a first sampled feature corresponding to the first feature to be fused; compute the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, to obtain a feature difference corresponding to the first feature to be fused; sample the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, to obtain a second sampled feature corresponding to the first feature to be fused; and add the second feature and the second sampled feature corresponding to the first feature to be fused, to generate the fusion result of the first feature to be fused.
In some embodiments, the second processing unit is specifically configured to: sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, to generate a first sampled feature corresponding to the m-th feature to be fused, m being a positive integer greater than 1; compute the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, to obtain a feature difference corresponding to the m-th feature to be fused; sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, to obtain a second sampled feature corresponding to the m-th feature to be fused; and add the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, to generate the fusion result of the m-th feature to be fused.
In some embodiments, the division unit is specifically configured to divide the target feature into the first feature and the second feature based on feature channels of the target feature.
In a fourth aspect, an embodiment of the present application provides an image defogging apparatus, including: a feature extraction unit configured to process a target image through an encoding module to obtain encoded features, where the encoding module includes L cascaded encoders whose spatial scales are all different, the m-th encoder is configured to fuse, through the feature fusion method according to any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate the fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, and L and m are both positive integers with m ≤ L; a feature processing unit configured to process the encoded features through a feature restoration module composed of at least one residual dense block RDB to obtain restored features; and an image generation unit configured to process the restored features through a decoding module to obtain a defogged image of the target image, where the decoding module includes L cascaded decoders whose spatial scales are all different, and the m-th decoder is configured to fuse, through the feature fusion method according to any one of the first aspect, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate the fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store a computer program, and the processor is configured to, when invoking the computer program, cause the electronic device to implement the feature fusion method according to the first aspect or any embodiment of the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when executed by a computing device, causes the computing device to implement the feature fusion method according to the first aspect or any embodiment of the first aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product that, when run on a computer, causes the computer to implement the feature fusion method according to the first aspect or any embodiment of the first aspect.
In the feature fusion method provided by the embodiments of the present application, after the target feature and the at least one feature to be fused are acquired, the target feature is first divided into a first feature and a second feature; the first feature is then processed based on RDBs to obtain a third feature, and the second feature is fused with the at least one feature to be fused to obtain a fourth feature; finally, the third feature and the fourth feature are merged to generate the fusion result of the target feature and the at least one feature to be fused. That is, the feature fusion method provided by the embodiments of the present application divides the feature that needs fusion enhancement into two parts, a first feature and a second feature, processes one part (the first feature) based on RDBs, and fuses the other part (the second feature) with the features to be fused. Since processing features based on RDBs allows feature updating and the generation of redundant features, and fusing the second feature with the features to be fused introduces the effective information contained in features of other spatial scales, thereby achieving multi-scale feature fusion, the feature fusion method provided by the embodiments of the present application can guarantee the generation of new features while achieving multi-scale feature fusion and preserve the diversity of features in the network architecture. It can therefore solve the problem that the multi-scale feature fusion manners in the related art limit the diversity of features in the network architecture.
Description of drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the present application.
To describe the technical solutions in the embodiments of the present application or the related art more clearly, the accompanying drawings required in the description of the embodiments or the related art are briefly introduced below. Obviously, a person of ordinary skill in the art may further derive other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of the steps of the feature fusion method provided by an embodiment of the present application;

FIG. 2 is a first schematic diagram of the data flow of the feature fusion method provided by an embodiment of the present application;

FIG. 3 is a second schematic diagram of the data flow of the feature fusion method provided by an embodiment of the present application;

FIG. 4 is a flowchart of the steps of the image defogging method provided by an embodiment of the present application;

FIG. 5 is a schematic structural diagram of the network model used to implement the image defogging method provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of the feature fusion apparatus provided by an embodiment of the present application;

FIG. 7 is a schematic structural diagram of the image defogging apparatus provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of the hardware structure of the electronic device provided by an embodiment of the present application.
Detailed description
To make the above objectives, features, and advantages of the present application clearer and easier to understand, the solutions of the present application are further described below. It should be noted that, unless they conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present application, but the present application may also be implemented in other manners different from those described here. Obviously, the embodiments in this specification are only a part, rather than all, of the embodiments of the present application.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design scheme described as "exemplary" or "for example" in the embodiments of the present application should not be construed as more preferred or advantageous than other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present a related concept in a concrete manner. In addition, in the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
An embodiment of the present application provides a feature fusion method, and the feature fusion method can be used in the image processing process of any image processing scenario. For example, the feature fusion method provided by the embodiments of the present application can fuse the features extracted from an image in an image defogging scenario; as another example, it can fuse the features extracted from an image in an image restoration process; as yet another example, it can fuse the features extracted from an image in an image super-resolution process. The embodiments of the present application do not limit the usage scenario of the feature fusion method, provided that the usage scenario involves multiple image features of different spatial scales that need to be fused. Referring to FIG. 1, the feature fusion method includes the following steps:
S11. Acquire a target feature and at least one feature to be fused.
The target feature and the at least one feature to be fused are features of different spatial scales of the same image.
Specifically, the target feature in the embodiments of the present application refers to the feature that needs fusion enhancement, and a feature to be fused refers to a feature used to perform fusion enhancement on the target feature. Specifically, feature extraction may be performed on the image to be processed based on feature extraction functions or feature extraction networks of different spatial scales, so as to acquire the target feature and the at least one feature to be fused.
S12. Divide the target feature into a first feature and a second feature.
In some embodiments, the implementation of dividing the target feature into the first feature and the second feature may include:
dividing the target feature into the first feature and the second feature based on feature channels of the target feature.
Specifically, in the embodiments of the present application, a channel of a feature refers to a feature map contained in the feature: one channel of a feature is the feature map obtained by performing feature extraction in a certain dimension, so a channel of a feature is a feature map in a specific sense. Dividing a feature based on its feature channels means dividing the feature maps of some dimensions of the feature into one feature set and taking the feature maps of the remaining dimensions as another feature set.
The embodiments of the present application do not limit the ratio of the first feature to the second feature. The higher the proportion of the first feature, the more new features can be generated; the higher the proportion of the second feature, the more effective information from features of other spatial scales can be introduced. Therefore, in practical applications, the ratio of the first feature to the second feature can be determined according to the amount of effective information from features of other spatial scales that needs to be introduced and the amount of new features that need to be generated. Exemplarily, the ratio of the first feature to the second feature may be 1:1.
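For illustration only, the channel-based division described above might be sketched in PyTorch as follows; the tensor shape, the 1:1 split ratio, and all names are assumptions introduced here for illustration, not part of the embodiments:

    import torch

    # Hypothetical target feature of shape (N, C, H, W).
    target = torch.randn(2, 64, 32, 32)

    # Assumed 1:1 ratio: the first half of the channels becomes the first
    # feature, the remaining channels become the second feature.
    c1 = target.shape[1] // 2
    first_feature = target[:, :c1]    # feature maps of the first c1 channels
    second_feature = target[:, c1:]   # feature maps of the remaining channels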
S13. Process the first feature based on a Residual Dense Block (RDB) to obtain a third feature.
Specifically, a residual dense block mainly includes three parts: Contiguous Memory (CM), Local Feature Fusion (LFF), and Local Residual Learning (LRL). CM is mainly used to send the output of the previous RDB to every convolutional layer in the current RDB; LFF is mainly used to fuse the output of the previous RDB with the outputs of all convolutional layers of the current RDB; LRL is mainly used to add the output of the previous RDB to the output of the LFF of the current RDB and take the sum as the output of the current RDB.
Since an RDB can perform feature updating and the generation of redundant features, processing the first feature based on residual dense blocks can increase the diversity of features.
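For illustration only, a residual dense block with the three parts described above might be sketched in PyTorch as follows; the layer count, channel sizes, and class name are assumptions introduced here:

    import torch
    import torch.nn as nn

    class RDB(nn.Module):
        # Sketch of a residual dense block: dense concatenation (CM),
        # 1x1 local feature fusion (LFF), and a residual addition (LRL).
        def __init__(self, channels=32, growth=16, num_layers=3):
            super().__init__()
            self.layers = nn.ModuleList()
            in_ch = channels
            for _ in range(num_layers):
                self.layers.append(nn.Sequential(
                    nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True)))
                in_ch += growth  # each layer also sees all earlier outputs
            self.lff = nn.Conv2d(in_ch, channels, kernel_size=1)  # LFF

        def forward(self, x):
            feats = [x]
            for layer in self.layers:
                feats.append(layer(torch.cat(feats, dim=1)))  # CM
            return x + self.lff(torch.cat(feats, dim=1))      # LRL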
S14. Fuse the second feature and the at least one feature to be fused to obtain a fourth feature.
In some embodiments, the above step S14 (fusing the second feature and the at least one feature to be fused to obtain a fourth feature) includes the following steps a to d:
Step a. Sort the at least one feature to be fused in descending order according to the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, to obtain a sorting result.
That is, the larger the difference between the spatial scale of a feature to be fused and the spatial scale of the second feature, the earlier the position of that feature in the sorting result; the smaller the difference, the later its position in the sorting result.
Step b. Fuse the first feature to be fused in the sorting result with the second feature, to generate a fusion result of the first feature to be fused.
Referring to FIG. 2, step b is described with the first feature to be fused in the sorting result denoted J_0 and the second feature denoted j_n2. In the following, S_x(·) denotes sampling a feature into a feature with the same spatial scale as the feature x. The implementation of step b may include the following steps 1 to 4:

Step 1. Sample the second feature j_n2 into a feature with the same spatial scale as the first feature to be fused J_0, generating the first sampled feature S_J0(j_n2) corresponding to J_0.

The sampling in this step may be upsampling or downsampling, which is determined by the spatial scale of the first feature to be fused J_0 and the spatial scale of the second feature j_n2.

Step 2. Compute the difference between the first feature to be fused J_0 and its corresponding first sampled feature S_J0(j_n2), obtaining the feature difference e_0 corresponding to J_0.

The process of step 2 can be described as:

e_0 = J_0 - S_J0(j_n2)

Step 3. Sample the feature difference e_0 corresponding to J_0 into a feature with the same spatial scale as the second feature j_n2, obtaining the second sampled feature S_jn2(e_0) corresponding to J_0.

The sampling in this step may be upsampling or downsampling, which is determined by the spatial scale of the feature difference e_0 and the spatial scale of the second feature j_n2.

Step 4. Add the second feature j_n2 and the second sampled feature S_jn2(e_0) corresponding to J_0, generating the fusion result J_0^n of the first feature to be fused.

The process of step 4 can be described as:

J_0^n = j_n2 + S_jn2(e_0)
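For illustration only, steps 1 to 4 above might be sketched in PyTorch as follows; bilinear interpolation stands in for the up/down-sampling operator S_x(·), and the assumption that both inputs have the same channel count is introduced here. For the first feature to be fused, j_prev is the second feature j_n2 and feat is J_0:

    import torch
    import torch.nn.functional as F

    def fuse_pair(j_prev, feat):
        # Step 1: resample j_prev to the spatial scale of feat
        # (first sampled feature).
        sampled = F.interpolate(j_prev, size=feat.shape[-2:],
                                mode="bilinear", align_corners=False)
        # Step 2: feature difference e = feat - sampled.
        diff = feat - sampled
        # Step 3: resample the difference back to the spatial scale of
        # j_prev (second sampled feature).
        diff_back = F.interpolate(diff, size=j_prev.shape[-2:],
                                  mode="bilinear", align_corners=False)
        # Step 4: additive fusion.
        return j_prev + diff_back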
Step c. Fuse, one by one, the other features to be fused in the sorting result with the fusion result of the previous feature to be fused, to generate the fusion result of the last feature to be fused in the sorting result.
In some embodiments, in the above step c, fusing the m-th (m being a positive integer greater than 1) feature to be fused in the sorting result with the fusion result of the previous feature to be fused (the (m-1)-th feature to be fused) is implemented through the following steps I to IV:

Step I. Sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, generating the first sampled feature corresponding to the m-th feature to be fused.

Step II. Compute the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, obtaining the feature difference corresponding to the m-th feature to be fused.

Step III. Sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, obtaining the second sampled feature corresponding to the m-th feature to be fused.

Step IV. Add the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, generating the fusion result of the m-th feature to be fused.

Steps I to IV, which obtain the fusion result of the m-th feature to be fused in the sorting result, differ from steps 1 to 4, which obtain the fusion result of the 1st feature to be fused, only in their inputs: when obtaining the fusion result of the 1st feature to be fused, the inputs are the second feature and the 1st feature to be fused, whereas when obtaining the fusion result of the m-th feature to be fused, the inputs are the fusion result of the (m-1)-th feature to be fused and the m-th feature to be fused. The computation is the same.
Exemplarily, referring to FIG. 3, step c is described with a sorting result that includes, in order: the feature to be fused J_0, the feature to be fused J_1, the feature to be fused J_2, ..., and the feature to be fused J_t. On the basis of the embodiment shown in FIG. 2, after the fusion result J_0^n of the first feature to be fused is obtained, the process of obtaining the fusion result J_t^n of the last feature to be fused J_t further includes:

Sample the fusion result J_0^n of the 1st feature to be fused J_0 into a feature with the same spatial scale as the 2nd feature to be fused J_1, generating the first sampled feature S_J1(J_0^n) corresponding to the 2nd feature to be fused.

Compute the difference between the 2nd feature to be fused J_1 and the first sampled feature S_J1(J_0^n) corresponding to J_1, obtaining the feature difference e_1 = J_1 - S_J1(J_0^n) corresponding to the 2nd feature to be fused.

Sample the feature difference e_1 corresponding to J_1 into a feature with the same spatial scale as the fusion result J_0^n of the 1st feature to be fused, obtaining the second sampled feature S_J0n(e_1) corresponding to the 2nd feature to be fused.

Add the fusion result J_0^n of the 1st feature to be fused and the second sampled feature S_J0n(e_1) corresponding to the 2nd feature to be fused, generating the fusion result J_1^n = J_0^n + S_J0n(e_1) of the 2nd feature to be fused J_1.

Sample the fusion result J_1^n of the 2nd feature to be fused J_1 into a feature with the same spatial scale as the 3rd feature to be fused J_2, generating the first sampled feature S_J2(J_1^n) corresponding to the 3rd feature to be fused.

Compute the difference between the 3rd feature to be fused J_2 and the first sampled feature S_J2(J_1^n) corresponding to J_2, obtaining the feature difference e_2 = J_2 - S_J2(J_1^n) corresponding to the 3rd feature to be fused.

Sample the feature difference e_2 corresponding to J_2 into a feature with the same spatial scale as the fusion result J_1^n of the 2nd feature to be fused, obtaining the second sampled feature S_J1n(e_2) corresponding to the 3rd feature to be fused.

Add the fusion result J_1^n of the 2nd feature to be fused and the second sampled feature S_J1n(e_2) corresponding to the 3rd feature to be fused, generating the fusion result J_2^n = J_1^n + S_J1n(e_2) of the 3rd feature to be fused J_2.

In the above manner, the fusion results of the 4th feature to be fused J_3, the 5th feature to be fused J_4, ..., the t-th feature to be fused J_(t-1), and the (t+1)-th feature to be fused J_t in the sorting result are obtained one by one, finally obtaining the fusion result J_t^n of the (t+1)-th feature to be fused J_t.

Step d. Take the fusion result of the last feature to be fused in the sorting result as the fourth feature.

Following the embodiment shown in FIG. 3, the sorting result includes, in order, the feature to be fused J_0, the feature to be fused J_1, the feature to be fused J_2, ..., and the feature to be fused J_t, so the fusion result J_t^n of the last feature to be fused J_t is taken as the fourth feature.
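For illustration only, steps a to d might be sketched as the following loop, reusing the fuse_pair sketch above; scale_of is an assumed helper, introduced here, that returns a scalar spatial scale for a feature:

    def fuse_all(second_feature, features_to_fuse, scale_of):
        # Step a: descending order of spatial-scale difference from the
        # second feature.
        s2 = scale_of(second_feature)
        ordered = sorted(features_to_fuse,
                         key=lambda f: abs(scale_of(f) - s2), reverse=True)
        # Steps b and c: fold the features in one by one (J_0, J_1, ..., J_t).
        result = second_feature
        for feat in ordered:
            result = fuse_pair(result, feat)
        # Step d: the last fusion result is the fourth feature.
        return result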
S15. Merge the third feature and the fourth feature to generate the fusion result of the target feature and the at least one feature to be fused.
Specifically, merging the third feature and the fourth feature may include: concatenating the third feature and the fourth feature in the channel dimension.
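For illustration only, the overall flow of S12 to S15 might then be sketched as follows, combining the RDB, fuse_pair, and fuse_all sketches above; the 1:1 channel split is again an assumption:

    import torch

    def feature_fusion(target, features_to_fuse, rdb, scale_of):
        c = target.shape[1] // 2                   # S12: channel split
        first, second = target[:, :c], target[:, c:]
        third = rdb(first)                         # S13: RDB processing
        fourth = fuse_all(second, features_to_fuse, scale_of)  # S14
        return torch.cat([third, fourth], dim=1)   # S15: channel concat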
In the feature fusion method provided by the embodiments of the present application, after the target feature and the at least one feature to be fused are acquired, the target feature is first divided into a first feature and a second feature; the first feature is then processed based on RDBs to obtain a third feature, and the second feature is fused with the at least one feature to be fused to obtain a fourth feature; finally, the third feature and the fourth feature are merged to generate the fusion result of the target feature and the at least one feature to be fused. That is, the feature fusion method provided by the embodiments of the present application divides the feature that needs fusion enhancement into two parts, a first feature and a second feature, processes one part (the first feature) based on RDBs, and fuses the other part (the second feature) with the features to be fused. Since processing features based on RDBs allows feature updating and the generation of redundant features, and fusing the second feature with the features to be fused introduces the effective information contained in features of other spatial scales, thereby achieving multi-scale feature fusion, the feature fusion method provided by the embodiments of the present application can guarantee the generation of new features while achieving multi-scale feature fusion and preserve the diversity of features in the network architecture. It can therefore solve the problem that the multi-scale feature fusion manners in the related art limit the diversity of features in the network architecture.
It should also be noted that fusing features of multiple spatial scales generally requires upsampling/downsampling convolutions and deconvolutions, which consume substantial computing resources, so the performance overhead is relatively large. By dividing the target feature into the first feature and the second feature and letting only the second feature participate in the multi-spatial-scale feature fusion, the above embodiments can also reduce the number of features that need to be fused (the number of features in the second feature is smaller than the number of features in the target feature), thereby reducing the computation of feature fusion and improving its efficiency.
On the basis of the above embodiments, an embodiment of the present application further provides an image defogging method. Referring to FIG. 4, the image defogging method provided by the embodiment of the present application includes the following steps S41 to S43:
S41. Process a target image through an encoding module to obtain encoded features.
The encoding module includes L cascaded encoders whose spatial scales are all different. The m-th encoder is configured to fuse, through the feature fusion method according to any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate the fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, where L and m are both positive integers and m ≤ L.
S42. Process the encoded features through a feature restoration module composed of at least one RDB to obtain restored features.
S43. Process the restored features through a decoding module to obtain a defogged image of the target image.
The decoding module includes L cascaded decoders whose spatial scales are all different. The m-th decoder is configured to fuse, through the feature fusion method according to any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate the fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
That is, the encoding module, the feature restoration module, and the decoding module used to execute the embodiment shown in FIG. 4 form a U-shaped network (U-Net).
Specifically, a U-shaped network (U-Net) is a special convolutional neural network that mainly includes an encoding module (also called the contracting path), a feature restoration module, and a decoding module (also called the expansive path). The encoding module is mainly used to capture the context information in the original image, while the symmetric decoding module is used to precisely localize the parts of the original image that need to be segmented, and then generate the processed image. Compared with a fully convolutional network (FCN), the improvement of the U-Net is that, in order to precisely localize the parts of the original image to be segmented, the features extracted by the encoding module are combined with new feature maps during the upsampling process, so as to retain the important information in the features to the greatest extent, thereby reducing the demand for training samples and computing resources.
Referring to FIG. 5, the network model used to execute the embodiment shown in FIG. 4 includes: an encoding module 51, a feature restoration module 52, and a decoding module 53, which form a U-shaped network.
The encoding module 51 includes L cascaded encoders whose spatial scales are all different, and is used to process the target image I to obtain the encoded feature i_L. The m-th encoder is used to fuse, through the feature fusion method provided by the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate the fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder.
The feature restoration module 52 includes at least one RDB, and is used to receive the encoded feature i_L output by the encoding module 51 and to process the encoded feature i_L through the at least one RDB to obtain the restored feature j_L.
The decoding module 53 includes L cascaded decoders whose spatial scales are all different. The m-th decoder is used to fuse, through the feature fusion method provided by the above embodiments, the image features of the decoding module at the m-th decoder with the fusion results output by all decoders before the m-th decoder, generate the fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder; and the decoding module obtains the defogged image J of the target image I according to the fusion result j_1 output by the last decoder.
The operation in which the m-th encoder in the encoding module 51 fuses, through the feature fusion method provided by the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder (the 1st encoder to the (m-1)-th encoder) can be described as:

i_m = i_m1 + i_m2

i_m1' = f(i_m1)

i_m2' = g(i_m2, i_1^n, ..., i_(m-1)^n)

i_m^n = concat(i_m1', i_m2')

where i_m1 denotes the first feature obtained by dividing the feature i_m of the encoding module at the m-th encoder, f(...) denotes the operation of processing a feature based on RDBs, i_m1' denotes the third feature obtained by processing i_m1 based on RDBs, i_m2 denotes the second feature obtained by dividing the feature i_m of the encoding module at the m-th encoder, i_1^n, ..., i_(m-1)^n denote the fusion results output by the 1st encoder to the (m-1)-th encoder, g(...) denotes the operation of fusing i_m2 with i_1^n, ..., i_(m-1)^n, i_m2' denotes the fusion result obtained by that fusion, and i_m^n is the fusion result output by the m-th encoder of the encoding module.
The operation in which the m-th decoder in the decoding module 53 fuses, through the feature fusion method provided by the above embodiments, the image features of the decoding module at the m-th decoder with the fusion results output by all decoders before the m-th decoder (the L-th decoder to the (m+1)-th decoder) can be described as:

j_m = j_m1 + j_m2

j_m1' = f(j_m1)

j_m2' = g(j_m2, j_L^n, ..., j_(m+1)^n)

j_m^n = concat(j_m1', j_m2')

where j_m1 denotes the first feature obtained by dividing the feature j_m of the decoding module at the m-th decoder, f(...) denotes the operation of processing a feature based on RDBs, j_m1' denotes the third feature obtained by processing j_m1 based on RDBs, j_m2 denotes the second feature obtained by dividing the feature j_m of the decoding module at the m-th decoder, L is the total number of encoders in the encoding module, j_L^n, ..., j_(m+1)^n denote the fusion results output by the L-th decoder to the (m+1)-th decoder, g(...) denotes the operation of fusing j_m2 with j_L^n, ..., j_(m+1)^n, j_m2' denotes the fusion result obtained by that fusion, and j_m^n is the fusion result output by the m-th decoder of the decoding module.
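For illustration only, the encoder-side recurrence above might be sketched as follows, reusing the earlier sketches; image_features is an assumed list [i_1, ..., i_L] of per-encoder features at their respective spatial scales, all names are introduced here, and matching channel counts across scales are assumed (in practice they might be ensured by, e.g., 1x1 convolutions). The decoder-side recurrence would be analogous, iterating from the L-th decoder down to the 1st:

    import torch

    def encode(image_features, rdbs, scale_of):
        outputs = []                                  # i_1^n, ..., i_L^n
        for i_m, rdb in zip(image_features, rdbs):
            c = i_m.shape[1] // 2
            i_m1, i_m2 = i_m[:, :c], i_m[:, c:]       # split: i_m = i_m1 + i_m2
            # g(...): fuse i_m2 with all earlier encoders' fusion results.
            fused = fuse_all(i_m2, outputs, scale_of) if outputs else i_m2
            # concat(f(i_m1), g(...)) is the encoder's fusion result i_m^n.
            outputs.append(torch.cat([rdb(i_m1), fused], dim=1))
        return outputs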
Since the image defogging method provided by the embodiments of the present application can perform feature fusion through the feature fusion method provided by the above embodiments, it can guarantee the generation of new features while achieving multi-scale feature fusion and preserve the diversity of features in the network architecture. Therefore, the image defogging method provided by the embodiments of the present application can improve the performance of image defogging.
Based on the same application concept, as an implementation of the above methods, an embodiment of the present application further provides a feature fusion apparatus. This apparatus embodiment corresponds to the foregoing method embodiments. For ease of reading, this apparatus embodiment does not repeat the details of the foregoing method embodiments one by one, but it should be clear that the feature fusion apparatus in this embodiment can correspondingly implement all the content of the foregoing method embodiments.
An embodiment of the present application provides a feature fusion apparatus. FIG. 6 is a schematic structural diagram of the feature fusion apparatus. As shown in FIG. 6, the feature fusion apparatus 600 includes:
an acquisition unit 61, configured to acquire a target feature and at least one feature to be fused, where the target feature and the at least one feature to be fused are features of different spatial scales of the same image;

a division unit 62, configured to divide the target feature into a first feature and a second feature;

a first processing unit 63, configured to process the first feature based on a residual densely connected network to obtain a third feature;

a second processing unit 64, configured to fuse the second feature and the at least one feature to be fused to obtain a fourth feature; and

a merging unit 65, configured to merge the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
In some embodiments, the second processing unit 64 is specifically configured to: sort the at least one feature to be fused in descending order according to the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, to obtain a sorting result; fuse the first feature to be fused in the sorting result with the second feature, to generate a fusion result of the first feature to be fused; fuse, one by one, the other features to be fused in the sorting result with the fusion result of the previous feature to be fused, to generate a fusion result of the last feature to be fused in the sorting result; and take the fusion result of the last feature to be fused in the sorting result as the fourth feature.
In some embodiments, the second processing unit 64 is specifically configured to: sample the second feature into a feature with the same spatial scale as the first feature to be fused, to generate a first sampled feature corresponding to the first feature to be fused; compute the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, to obtain a feature difference corresponding to the first feature to be fused; sample the feature difference corresponding to the first feature to be fused into a feature with the same spatial scale as the second feature, to obtain a second sampled feature corresponding to the first feature to be fused; and add the second feature and the second sampled feature corresponding to the first feature to be fused, to generate the fusion result of the first feature to be fused.
In some embodiments, the second processing unit 64 is specifically configured to: sample the fusion result of the (m-1)-th feature to be fused in the sorting result into a feature with the same spatial scale as the m-th feature to be fused in the sorting result, to generate a first sampled feature corresponding to the m-th feature to be fused, m being a positive integer greater than 1; compute the difference between the m-th feature to be fused and the first sampled feature corresponding to the m-th feature to be fused, to obtain a feature difference corresponding to the m-th feature to be fused; sample the feature difference corresponding to the m-th feature to be fused into a feature with the same spatial scale as the fusion result of the (m-1)-th feature to be fused, to obtain a second sampled feature corresponding to the m-th feature to be fused; and add the fusion result of the (m-1)-th feature to be fused and the second sampled feature corresponding to the m-th feature to be fused, to generate the fusion result of the m-th feature to be fused.
In some embodiments, the division unit 62 is specifically configured to divide the target feature into the first feature and the second feature based on feature channels of the target feature.
The feature fusion apparatus provided by this embodiment can execute the feature fusion method provided by the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of the present application provides an image defogging apparatus. FIG. 7 is a schematic structural diagram of the image defogging apparatus. As shown in FIG. 7, the image defogging apparatus 700 includes:
a feature extraction unit 71, configured to process a target image through an encoding module to obtain encoded features, where the encoding module includes L cascaded encoders whose spatial scales are all different, the m-th encoder is configured to fuse, through the feature fusion method according to any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all encoders before the m-th encoder, generate the fusion result of the m-th encoder, and output the fusion result of the m-th encoder to all encoders after the m-th encoder, and L and m are both positive integers with m ≤ L;
a feature processing unit 72, configured to process the encoded features through a feature restoration module composed of at least one residual dense block RDB to obtain restored features; and
an image generation unit 73, configured to process the restored features through a decoding module to obtain a defogged image of the target image, where the decoding module includes L cascaded decoders whose spatial scales are all different, and the m-th decoder is configured to fuse, through the feature fusion method according to any of the above embodiments, the image features of the encoding module at the m-th encoder with the fusion results output by all decoders before the m-th decoder, generate the fusion result of the m-th decoder, and output the fusion result of the m-th decoder to all decoders after the m-th decoder.
The image defogging apparatus provided by this embodiment can execute the image defogging method provided by the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Based on the same application concept, an embodiment of the present application further provides an electronic device. FIG. 8 is a schematic structural diagram of the electronic device provided by an embodiment of the present application. As shown in FIG. 8, the electronic device provided by this embodiment includes a memory 81 and a processor 82, where the memory 81 is configured to store a computer program, and the processor 82 is configured to execute, when invoking the computer program, the feature fusion method or the image defogging method provided by the above embodiments.
Based on the same application concept, an embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the computing device to implement the feature fusion method or the image defogging method provided by the above embodiments.
Based on the same application concept, an embodiment of the present application further provides a computer program product that, when run on a computer, causes the computing device to implement the feature fusion method or the image defogging method provided by the above embodiments.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program code.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable storage media. A storage medium may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media exclude transitory media, such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, and some or all of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

  1. A feature fusion method, comprising:
    acquiring a target feature and at least one feature to be fused, the target feature and the at least one feature to be fused being features of the same image at different spatial scales;
    dividing the target feature into a first feature and a second feature;
    processing the first feature based on a residual dense block (RDB) to obtain a third feature;
    fusing the second feature and the at least one feature to be fused to obtain a fourth feature; and
    merging the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
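For a concrete picture of the claimed flow, a minimal Python/PyTorch sketch of claim 1 follows. The `RDB` internals, the tensor shapes (an even channel count is assumed), and the helper names `fuse_target` and `fuse_fn` are illustrative assumptions rather than part of the disclosure; `fuse_fn`, the fusion of claims 2-4, is sketched after claim 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RDB(nn.Module):
    # Simplified residual dense block: densely connected 3x3 convs, a 1x1
    # local feature fusion, and a local residual connection. The exact
    # internal layout is an assumption; the claims only name the RDB.
    def __init__(self, channels, growth=16, num_layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        c = channels
        for _ in range(num_layers):
            self.convs.append(nn.Conv2d(c, growth, 3, padding=1))
            c += growth
        self.local_fuse = nn.Conv2d(c, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(F.relu(conv(torch.cat(feats, dim=1))))
        return x + self.local_fuse(torch.cat(feats, dim=1))

def fuse_target(target, to_fuse, rdb, fuse_fn):
    # Claim 1 flow: split, RDB branch, fusion branch, merge.
    first, second = torch.chunk(target, 2, dim=1)  # claim 5: channel split
    third = rdb(first)                 # first feature -> RDB -> third feature
    fourth = fuse_fn(second, to_fuse)  # second feature + features to fuse
    return torch.cat([third, fourth], dim=1)  # merged fusion result
```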
  2. The method according to claim 1, wherein fusing the second feature and the at least one feature to be fused to obtain the fourth feature comprises:
    sorting the at least one feature to be fused in descending order of the difference between the spatial scale of each feature to be fused and the spatial scale of the second feature, to obtain a sorting result;
    fusing the first feature to be fused in the sorting result with the second feature to generate a fusion result of the first feature to be fused;
    fusing, one by one, each remaining feature to be fused in the sorting result with the fusion result of the preceding feature to be fused, to generate a fusion result of the last feature to be fused in the sorting result; and
    taking the fusion result of the last feature to be fused in the sorting result as the fourth feature.
  3. The method according to claim 2, wherein fusing the first feature to be fused in the sorting result with the second feature to generate the fusion result of the first feature to be fused comprises:
    sampling the second feature into a feature having the same spatial scale as the first feature to be fused, to generate a first sampled feature corresponding to the first feature to be fused;
    calculating the difference between the first sampled feature corresponding to the first feature to be fused and the first feature to be fused, to obtain a feature difference corresponding to the first feature to be fused;
    sampling the feature difference corresponding to the first feature to be fused into a feature having the same spatial scale as the second feature, to obtain a second sampled feature corresponding to the first feature to be fused; and
    additively fusing the second feature and the second sampled feature corresponding to the first feature to be fused, to generate the fusion result of the first feature to be fused.
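A hedged sketch of this single fusion step, continuing the Python sketch after claim 1. Bilinear resampling, equal channel counts, and the sign of the difference are assumptions (claims 3 and 4 state the operands of the difference in opposite order; the sketch fixes one convention).

```python
def fuse_pair(current, candidate):
    # One fusion step (claims 3 and 4): sample `current` to the candidate's
    # spatial scale, take the feature difference, sample the difference
    # back to `current`'s scale, and fuse by addition.
    sampled = F.interpolate(current, size=candidate.shape[-2:],
                            mode="bilinear", align_corners=False)
    diff = sampled - candidate          # feature difference (sign assumed)
    diff_back = F.interpolate(diff, size=current.shape[-2:],
                              mode="bilinear", align_corners=False)
    return current + diff_back          # additive fusion
```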
  4. The method according to claim 2, wherein fusing, one by one, each remaining feature to be fused in the sorting result with the fusion result of the preceding feature to be fused comprises:
    sampling the fusion result of the (m-1)th feature to be fused in the sorting result into a feature having the same spatial scale as the mth feature to be fused in the sorting result, to generate a first sampled feature corresponding to the mth feature to be fused, where m is a positive integer greater than 1;
    calculating the difference between the mth feature to be fused and the first sampled feature corresponding to the mth feature to be fused, to obtain a feature difference corresponding to the mth feature to be fused;
    sampling the feature difference corresponding to the mth feature to be fused into a feature having the same spatial scale as the fusion result of the (m-1)th feature to be fused, to obtain a second sampled feature corresponding to the mth feature to be fused; and
    additively fusing the fusion result of the (m-1)th feature to be fused and the second sampled feature corresponding to the mth feature to be fused, to generate a fusion result of the mth feature to be fused.
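Claims 2 and 4 together amount to a fold over the sorted candidates. A minimal sketch, reusing `fuse_pair` from the previous block and using spatial width as a stand-in for the scale difference (an assumption; the claims do not fix how the scale gap is measured):

```python
def fuse_fn(second, to_fuse):
    # Claim 2: sort candidates in descending order of the gap between their
    # spatial scale and that of `second`, then fold them in one at a time.
    def scale_gap(f):
        return abs(f.shape[-1] - second.shape[-1])  # width as scale proxy
    result = second
    for candidate in sorted(to_fuse, key=scale_gap, reverse=True):
        result = fuse_pair(result, candidate)  # claims 3/4, one step each
    return result
```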
  5. The method according to any one of claims 1-4, wherein dividing the target feature into the first feature and the second feature comprises:
    dividing the target feature into the first feature and the second feature based on feature channels of the target feature.
  6. An image defogging method, comprising:
    processing a target image through an encoding module to obtain encoded features, wherein the encoding module comprises L cascaded encoders whose spatial scales are all different, the mth encoder is configured to fuse, by the feature fusion method according to any one of claims 1-5, the image feature of the encoding module at the mth encoder with the fusion results output by all encoders preceding the mth encoder, to generate a fusion result of the mth encoder, and to output the fusion result of the mth encoder to all encoders following the mth encoder, and L and m are both positive integers with m ≤ L;
    processing the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features; and
    processing the restored features through a decoding module to obtain a defogged image of the target image, wherein the decoding module comprises L cascaded decoders whose spatial scales are all different, and the mth decoder is configured to fuse, by the feature fusion method according to any one of claims 1-5, the image feature of the encoding module at the mth encoder with the fusion results output by all decoders preceding the mth decoder, to generate a fusion result of the mth decoder, and to output the fusion result of the mth decoder to all decoders following the mth decoder.
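Putting claims 1-6 together, the pipeline can be sketched as below, continuing the same Python sketch. This is a schematic under stated assumptions: three levels halved in scale per level, equal channel widths everywhere, and, for brevity, each stage fuses full features with `fuse_fn` followed by an RDB instead of applying the full claim-1 channel split; none of these choices are fixed by the claims.

```python
class DefogSkeleton(nn.Module):
    # Schematic of claim 6: L cascaded encoders at distinct spatial scales
    # with dense forward connections, an RDB-based restoration stage, and
    # L decoders mirroring the encoders back to full scale.
    def __init__(self, channels=32, levels=3):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.enc = nn.ModuleList([RDB(channels) for _ in range(levels)])
        self.restore = nn.Sequential(RDB(channels), RDB(channels))
        self.dec = nn.ModuleList([RDB(channels) for _ in range(levels)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):  # x: (N, 3, H, W), H and W divisible by 4
        feat = self.head(x)
        enc_outs = []
        for i, rdb in enumerate(self.enc):
            cur = F.avg_pool2d(feat, 2 ** i) if i else feat  # scale 1/2^i
            cur = rdb(fuse_fn(cur, enc_outs))  # fuse with earlier encoders
            enc_outs.append(cur)
        out = self.restore(enc_outs[-1])       # feature restoration
        dec_outs = []
        for i, rdb in enumerate(self.dec):     # coarse to fine
            size = enc_outs[len(self.dec) - 1 - i].shape[-2:]
            out = F.interpolate(out, size=size, mode="bilinear",
                                align_corners=False)
            out = rdb(fuse_fn(out, dec_outs))  # fuse with earlier decoders
            dec_outs.append(out)
        return self.tail(out) + x              # global residual output
```

As a quick check of the shapes, `DefogSkeleton()(torch.rand(1, 3, 64, 64))` returns a `(1, 3, 64, 64)` tensor, i.e. a full-resolution output image.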
  7. A feature fusion apparatus, comprising:
    an acquisition unit, configured to acquire a target feature and at least one feature to be fused, the target feature and the at least one feature to be fused being features of the same image at different spatial scales;
    a division unit, configured to divide the target feature into a first feature and a second feature;
    a first processing unit, configured to process the first feature based on a residual densely connected network to obtain a third feature;
    a second processing unit, configured to fuse the second feature and the at least one feature to be fused to obtain a fourth feature; and
    a merging unit, configured to merge the third feature and the fourth feature to generate a fusion result of the target feature and the at least one feature to be fused.
  8. An image defogging apparatus, comprising:
    a feature extraction unit, configured to process a target image through an encoding module to obtain encoded features, wherein the encoding module comprises L cascaded encoders whose spatial scales are all different, the mth encoder is configured to fuse, by the feature fusion method according to any one of claims 1-5, the image feature of the encoding module at the mth encoder with the fusion results output by all encoders preceding the mth encoder, to generate a fusion result of the mth encoder, and to output the fusion result of the mth encoder to all encoders following the mth encoder, and L and m are both positive integers with m ≤ L;
    a feature processing unit, configured to process the encoded features through a feature restoration module composed of at least one residual dense block (RDB) to obtain restored features; and
    an image generation unit, configured to process the restored features through a decoding module to obtain a defogged image of the target image, wherein the decoding module comprises L cascaded decoders whose spatial scales are all different, and the mth decoder is configured to fuse, by the feature fusion method according to any one of claims 1-5, the image feature of the encoding module at the mth encoder with the fusion results output by all decoders preceding the mth decoder, to generate a fusion result of the mth decoder, and to output the fusion result of the mth decoder to all decoders following the mth decoder.
  9. An electronic device, comprising a memory and a processor, the memory being configured to store a computer program, and the processor being configured to, when invoking the computer program, cause the electronic device to implement the method according to any one of claims 1-6.
  10. A computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a computing device, causing the computing device to implement the method according to any one of claims 1-6.
  11. A computer program product which, when run on a computer, causes the computer to implement the method according to any one of claims 1-6.
PCT/CN2022/121209 2021-09-27 2022-09-26 Feature fusion method, image defogging method and device WO2023046136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111138532.8 2021-09-27
CN202111138532.8A CN115880192A (en) 2021-09-27 2021-09-27 Feature fusion method, image defogging method and device

Publications (1)

Publication Number Publication Date
WO2023046136A1 true WO2023046136A1 (en) 2023-03-30

Family

ID=85720113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/121209 WO2023046136A1 (en) 2021-09-27 2022-09-26 Feature fusion method, image defogging method and device

Country Status (2)

Country Link
CN (1) CN115880192A (en)
WO (1) WO2023046136A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090292468A1 (en) * 2008-03-25 2009-11-26 Shunguang Wu Collision avoidance method and system using stereo vision and radar sensor fusion
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 Image defogging method based on global and local feature fusion
CN111539886A (en) * 2020-04-21 2020-08-14 西安交通大学 Defogging method based on multi-scale feature fusion
CN111968064A (en) * 2020-10-22 2020-11-20 成都睿沿科技有限公司 Image processing method and device, electronic equipment and storage medium
CN112232132A (en) * 2020-09-18 2021-01-15 北京理工大学 Target identification and positioning method fusing navigation information
CN112801047A (en) * 2021-03-19 2021-05-14 腾讯科技(深圳)有限公司 Defect detection method and device, electronic equipment and readable storage medium
CN112884682A (en) * 2021-01-08 2021-06-01 福州大学 Stereo image color correction method and system based on matching and fusion

Also Published As

Publication number Publication date
CN115880192A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
US20200334819A1 (en) Image segmentation apparatus, method and relevant computing device
US11604272B2 (en) Methods and systems for object detection
CN110544214A (en) Image restoration method and device and electronic equipment
WO2021143207A1 (en) Image processing method and apparatus, computation processing device, and medium
CN109766918B (en) Salient object detection method based on multilevel context information fusion
CN112598673A (en) Panorama segmentation method, device, electronic equipment and computer readable medium
CN113962861A (en) Image reconstruction method and device, electronic equipment and computer readable medium
Wang et al. ARFP: A novel adaptive recursive feature pyramid for object detection in aerial images
CN113705575B (en) Image segmentation method, device, equipment and storage medium
WO2023046136A1 (en) Feature fusion method, image defogging method and device
CN113255675B (en) Image semantic segmentation network structure and method based on expanded convolution and residual path
Liu et al. Single‐image super‐resolution using lightweight transformer‐convolutional neural network hybrid model
Gao et al. Multi-branch aware module with channel shuffle pixel-wise attention for lightweight image super-resolution
CN114331982A (en) Target counting method and device
WO2023125522A1 (en) Image processing method and apparatus
CN115358962B (en) End-to-end visual odometer method and device
Shen et al. Itsrn++: Stronger and better implicit transformer network for continuous screen content image super-resolution
CN115578261A (en) Image processing method, deep learning model training method and device
WO2023072176A1 (en) Video super-resolution method and device
Yu et al. Dual-branch feature learning network for single image super-resolution
WO2023116814A1 (en) Blurry video repair method and apparatus
CN111524090A (en) Depth prediction image-based RGB-D significance detection method
WO2023174355A1 (en) Video super-resolution method and device
CN118509599A (en) Image compression method, device and medium based on gradient attention mechanism
WO2024140109A1 (en) Image super-resolution method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22872180

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11/07/2024)