WO2023201876A1 - High-dynamic reconstruction method and device for low-illumination remote sensing images - Google Patents

High-dynamic reconstruction method and device for low-illumination remote sensing images

Info

Publication number: WO2023201876A1 (PCT/CN2022/101094)
Authority: WIPO (PCT)
Prior art keywords: remote sensing, low, feature, long, term
Application number: PCT/CN2022/101094
Other languages: English (en), French (fr)
Inventors: 张磊, 魏巍, 张欣媛, 丁晨, 张艳宁
Original Assignee: 西北工业大学
Application filed by 西北工业大学
Publication of WO2023201876A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]

Definitions

  • the invention relates to the field of image processing, and in particular to a method and device for highly dynamic reconstruction of low-light remote sensing images.
  • remote sensing images have been widely used in geological exploration, urban planning, disaster monitoring and other fields.
  • only low-illumination remote sensing images can usually be captured due to limitations in imaging time or unexpected weather conditions, such as at night or cloudy days with insufficient illumination. This results in low-contrast images that are difficult for machines to understand.
  • How to effectively enhance the brightness and complete the task of high-dynamic reconstruction of remote sensing images when the illumination of the original remote sensing images is too low has gradually attracted widespread attention from professionals at home and abroad.
  • Embodiments of the present invention provide a method and device for high-dynamic reconstruction of low-illumination remote sensing images, so as to at least solve the problem of low accuracy in high-dynamic reconstruction of low-illumination remote sensing images in the prior art.
  • a method for high-dynamic reconstruction of low-light remote sensing images includes: acquiring a low-light remote sensing image; mapping the low-light remote sensing image data to a deep learning feature space to obtain a depth feature F_x; and determining a short-term feature y_s and a long-term feature y_l from the depth feature F_x.
  • the short-term feature y_s is a pixel-level dynamic feature determined at least by a convolution operation in the spatial domain.
  • the long-term feature y_l represents the inter-feature dependencies determined after the depth feature F_x is processed by a Transformer-based pre-training model; a brightness enhancement curve is determined from the short-term feature y_s and the long-term feature y_l, and the low-light remote sensing image is adjusted pixel by pixel according to the brightness enhancement curve.
  • determining the brightness enhancement curve from the short-term feature y_s and the long-term feature y_l includes: inputting the short-term feature y_s and the long-term feature y_l into a pre-trained brightness enhancement model to obtain the brightness enhancement curve.
  • the high-dynamic reconstruction method for low-illumination remote sensing images also includes: optimizing the brightness enhancement model through a backpropagation algorithm according to the solution result of the loss function.
  • determining the short-term feature y_s from the depth feature F_x includes: inputting the depth feature F_x into a weight generation network and into multiple base networks, each activated with a linear rectification layer, where the base networks correspondingly have convolution windows of different sizes; the short-term feature y_s is determined from the output of the weight generation network and the outputs of the multiple base networks.
  • determining the short-term feature y_s from the output of the weight generation network and the outputs of the multiple base networks includes: inputting these outputs into the linear fusion model y_s = Σ_{i=1}^{n} f_i(F_x, θ_f) · g_i(F_x, θ_i) to obtain the short-term feature y_s, where g_i(F_x, θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f(F_x, θ_f) is the output of the weight generation network parameterized by θ_f.
  • determining the long-term feature y_l from the depth feature F_x includes: flattening the depth feature F_x into a sequence of vectors F_t ∈ R^{L×C_t} and inputting the vectors F_t into the Transformer-based pre-training model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of mapped channels.
  • the Transformer-based pre-trained model is used to: add a learnable position encoding to each tokenized vector feature; apply a multi-head self-attention model to determine the inter-vector dependencies in the deep feature space; and process the output of the multi-head self-attention model with a feedforward neural network with skip connections to obtain the long-term feature y_l.
  • obtaining a low-light remote sensing image includes: simulating and generating low-light remote sensing image data corresponding to the initial remote sensing image according to the input initial remote sensing image.
  • a high-dynamic reconstruction device for low-illumination remote sensing images includes: an acquisition unit for acquiring a low-illumination remote sensing image; a mapping unit for mapping the low-illumination remote sensing image data to the deep learning feature space to obtain the depth feature F_x; a first determination unit for determining the short-term feature y_s and the long-term feature y_l from the depth feature F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies determined after F_x is processed by the Transformer-based pre-training model; a second determination unit for determining the brightness enhancement curve from y_s and y_l; and an adjustment unit for adjusting the low-light remote sensing image pixel by pixel according to the brightness enhancement curve.
  • the high-dynamic reconstruction method for low-illumination remote sensing images in the embodiment of the present invention includes: acquiring a low-illumination remote sensing image; mapping the low-illumination remote sensing image data to the deep learning feature space to obtain the depth feature F_x; determining the short-term feature y_s and the long-term feature y_l from F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies determined after F_x is processed by a Transformer-based pre-training model; determining the brightness enhancement curve from y_s and y_l; and adjusting the low-illumination remote sensing image pixel by pixel according to the brightness enhancement curve.
  • in the process of high-dynamic reconstruction of low-exposure remote sensing images, this method simultaneously exploits the long-term and short-term characteristics of the low-light remote sensing image, combining pixel-level dynamic features with inter-feature dependencies to determine the brightness enhancement curve, and then adjusts the low-light remote sensing image pixel by pixel according to that curve.
  • Figure 1 is a schematic flow chart of a high-dynamic reconstruction method for low-light remote sensing images provided by an embodiment of the present invention
  • Figure 2 is a schematic diagram of a high-dynamic reconstruction device for low-light remote sensing images provided by an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a high-dynamic reconstruction method for low-illumination remote sensing images according to an embodiment of the present invention. As shown in Figure 1, the method includes:
  • Step S102: obtain a low-light remote sensing image;
  • Step S104: map the low-light remote sensing image data to the deep learning feature space to obtain the depth feature F_x;
  • Step S106: determine the short-term feature y_s and the long-term feature y_l from the depth feature F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies of F_x determined after processing by the Transformer-based pre-training model;
  • Step S108: determine the brightness enhancement curve from the short-term feature y_s and the long-term feature y_l;
  • Step S110: adjust the low-illumination remote sensing image pixel by pixel according to the brightness enhancement curve.
  • the high-dynamic reconstruction method for low-light remote sensing images thus includes: acquiring a low-light remote sensing image; mapping the image data to the deep learning feature space to obtain the depth feature F_x; determining from F_x the short-term feature y_s, a pixel-level dynamic feature determined at least by a spatial-domain convolution operation, and the long-term feature y_l, which represents the inter-feature dependencies of F_x determined after processing by a Transformer-based pre-training model; determining the brightness enhancement curve from y_s and y_l; and adjusting the low-light remote sensing image pixel by pixel according to the curve.
  • because the method simultaneously exploits long-term and short-term characteristics, combining pixel-level dynamic features with inter-feature dependencies, each low-light remote sensing image receives adjustments specific to its own long-term and short-term characteristics. The basis for high-dynamic reconstruction is therefore more comprehensive and accurate, the model adaptively fits each specific image, the accuracy of high-dynamic reconstruction of low-illumination remote sensing images is effectively improved, and the low-accuracy problem of the prior art is solved.
  • the low-light remote sensing image mentioned in this application does not mean that the exposure of the remote sensing image must be below a certain value: depending on the actual brightness requirements, any remote sensing image whose brightness does not meet expectations can be regarded as a low-illumination remote sensing image. Equivalently, as long as at least part of the brightness of a remote sensing image is improved by high-dynamic reconstruction, the image before reconstruction is a low-illumination remote sensing image relative to the image after reconstruction.
  • the so-called image reconstruction generates a new image, which may be created independently of the original image or formed by directly modifying and overwriting the original image.
  • both the low-illumination remote sensing image data and its high-dynamic counterpart lie in R^{H×W×C}, where H denotes the image length, W the image height, and C the number of image channels.
  • determining the brightness enhancement curve based on the short-term feature y s and the long-term feature y l includes: inputting the short-term feature y s and the long-term feature y l into the pre-trained brightness enhancement model to obtain the brightness enhancement curve.
  • mapping the low-light remote sensing image data to the deep learning feature space to obtain the depth feature F_x includes: mapping the data to the depth feature space through a convolution layer, and obtaining the depth feature F_x through an adaptive global average pooling layer. This effectively reduces the amount of computation when processing the remote sensing image and helps increase processing speed.
  • the convolution window size of the convolution layer is 7 ⁇ 7
  • the stride is 4,
  • the output channel is 16.
  • the adaptive global average pooling layer shrinks the feature map to one-eighth of its original size.
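As a concrete illustration of this mapping stage, here is a minimal numpy sketch of a 7×7, stride-4, 16-channel convolution followed by adaptive global average pooling. The layer shapes follow the embodiment above; the random weights, the 64×64 toy input, and the reading of "one-eighth" as one-eighth of the input's side length are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w, stride):
    """Valid 2-D convolution of x (H, W, Cin) with kernel w (k, k, Cin, Cout)."""
    k, _, _, cout = w.shape
    H, W, _ = x.shape
    oh, ow = (H - k) // stride + 1, (W - k) // stride + 1
    out = np.zeros((oh, ow, cout))
    for i in range(oh):
        for j in range(ow):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def adaptive_avg_pool(x, out_h, out_w):
    """Average-pool x (H, W, C) down to (out_h, out_w, C), as in adaptive pooling."""
    H, W, C = x.shape
    out = np.zeros((out_h, out_w, C))
    for i in range(out_h):
        for j in range(out_w):
            hs, he = i * H // out_h, (i + 1) * H // out_h
            ws, we = j * W // out_w, (j + 1) * W // out_w
            out[i, j] = x[hs:he, ws:we].mean(axis=(0, 1))
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))           # toy low-light image, H = W = 64, C = 3
w = rng.standard_normal((7, 7, 3, 16))  # 7x7 window, 16 output channels
feat = conv2d(img, w, stride=4)         # stride-4 mapping to the feature space
Fx = adaptive_avg_pool(feat, 64 // 8, 64 // 8)  # shrink to 1/8 of the input size
```

The stride-4 convolution and the pooling together account for the computation savings the text mentions: the spatial extent of F_x is 64 times smaller than the input.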
  • the high-dynamic reconstruction method for low-illumination remote sensing images also includes: optimizing the brightness enhancement model through the backpropagation algorithm according to the solution result of the loss function. This helps improve the accuracy of the brightness enhancement model, ensuring that more accurate high-dynamic images can be produced later.
  • in one embodiment, the brightness enhancement model is trained with a preset loss/error function, a learning rate of 1e-3, and a total of 400 training epochs.
  • in one embodiment, the depth feature F_x is input into a weight generation network and into multiple base networks, each activated with a linear rectification layer, where the base networks correspondingly have convolution windows of different sizes; the short-term feature y_s is determined from the output of the weight generation network and the outputs of the multiple base networks.
  • specifically, the outputs of the weight generation network and of the multiple base networks are input into the linear fusion model y_s = Σ_{i=1}^{n} f_i(F_x, θ_f) · g_i(F_x, θ_i) to obtain the short-term feature y_s, where g_i(F_x, θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f(F_x, θ_f) is the output of the weight generation network parameterized by θ_f.
  • n is 3.
  • Fx is input into three parallel base networks and one weight generation network respectively.
  • the three base networks consist, respectively, of a convolution block with a convolution window of 3, 7 or 11, followed by two convolution blocks with a convolution window of 3, and are activated with a linear rectification layer (Rectified Linear Unit, ReLU).
  • the weight generation network consists of two stacked convolutional blocks with a convolution window of 3 and is activated with a linear rectification layer.
  • the linear fusion module is then used to integrate the results of all branches with the weights output by the weight generation network.
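The fusion of the three base branches with the generated weights can be sketched in numpy as follows. The matrices standing in for the convolution branches, and the softmax normalisation of the generated weights, are illustrative assumptions; the text only specifies that a linear fusion module combines the branch outputs with the weight network's output.

```python
import numpy as np

rng = np.random.default_rng(1)
Fx = rng.random((8, 8, 16))  # depth features from the mapping stage

# three base "networks": per-pixel linear maps standing in for the
# conv branches with windows 3 / 7 / 11, each followed by ReLU
thetas = [rng.standard_normal((16, 16)) * 0.1 for _ in range(3)]
relu = lambda z: np.maximum(z, 0.0)
g = [relu(Fx @ th) for th in thetas]             # g_i(Fx, theta_i)

# weight-generation network: one weight map per branch, softmax-normalised
theta_f = rng.standard_normal((16, 3)) * 0.1
logits = Fx @ theta_f                             # (8, 8, 3)
wts = np.exp(logits - logits.max(axis=-1, keepdims=True))
wts /= wts.sum(axis=-1, keepdims=True)            # weights sum to 1 per pixel

# linear fusion: y_s = sum_i w_i * g_i  (pixel-level dynamic short-term feature)
ys = sum(wts[..., i:i + 1] * g[i] for i in range(3))
```

Because the weights vary per pixel, each location mixes the small- and large-window branches differently, which is what makes y_s a dynamic rather than fixed feature.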
  • determining the long-term feature y_l from the depth feature F_x includes: flattening the depth feature F_x into a sequence of vectors F_t ∈ R^{L×C_t} and inputting F_t into the Transformer-based pre-training model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of mapped channels.
  • the Transformer-based pre-training model is used to: add a learnable position encoding to each tokenized vector feature; apply a multi-head self-attention model to determine the inter-vector dependencies in the deep feature space; and process the output of the multi-head self-attention model with a feedforward neural network with skip connections to obtain the long-term feature y_l.
  • the inter-vector dependencies in the deep feature space are determined as F'_t = MSA(LN(F_t + p)) + (F_t + p), and the output of the multi-head self-attention model is processed by the feedforward neural network with skip connections to obtain the long-term feature y_l = FFN(LN(F'_t)) + F'_t, where p is the learnable position encoding, MSA is the multi-head self-attention model, FFN is the feedforward neural network, and LN denotes layer normalization.
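A minimal numpy sketch of this long-term branch, assuming a standard pre-norm Transformer block (learnable position encoding, multi-head self-attention, then a skip-connected feedforward network). All weights are random stand-ins, and the exact normalisation placement in the patented model may differ.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    mu = x.mean(-1, keepdims=True)
    return (x - mu) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def msa(x, Wq, Wk, Wv, Wo, heads):
    """Multi-head self-attention over token matrix x of shape (L, C)."""
    L, C = x.shape
    d = C // heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    out = np.zeros_like(x)
    for h in range(heads):
        sl = slice(h * d, (h + 1) * d)
        att = softmax(q[:, sl] @ k[:, sl].T / np.sqrt(d))  # (L, L) dependencies
        out[:, sl] = att @ v[:, sl]
    return out @ Wo

rng = np.random.default_rng(2)
L, Ct, heads = 64, 16, 4
Ft = rng.random((L, Ct))                  # flattened depth features F_t
p = rng.standard_normal((L, Ct)) * 0.02   # learnable position encoding

W = [rng.standard_normal((Ct, Ct)) * 0.1 for _ in range(4)]
z = Ft + p
z = z + msa(layer_norm(z), *W, heads=heads)       # MSA with skip connection

# feedforward network with skip connection -> long-term feature y_l
W1 = rng.standard_normal((Ct, 4 * Ct)) * 0.1
W2 = rng.standard_normal((4 * Ct, Ct)) * 0.1
yl = z + np.maximum(layer_norm(z) @ W1, 0.0) @ W2
```

The (L, L) attention matrix is where the pairwise inter-vector dependencies live: every flattened token attends to every other, which the local convolution branch cannot do.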
  • the dynamic long-term and short-term feature extraction network thus includes two branches: a pixel-level dynamic feature extraction branch, comprising three parallel base network branches and one weight generation network branch, and a long-term feature extraction branch, making the model more integrated and easier to manage and use.
  • obtaining a low-light remote sensing image includes: simulating and generating low-light remote sensing image data corresponding to the initial remote sensing image according to the input initial remote sensing image.
  • simulating and generating the low-illumination remote sensing image from the input initial remote sensing image, and then applying the long-term and short-term features in subsequent steps to perform high-dynamic reconstruction on the simulated image, makes it possible to compare the initial remote sensing image and the reconstructed image more intuitively, to grasp the reconstruction effect more clearly, and to facilitate adjusting the method steps and model parameters, which in turn helps improve the quality of high-dynamic reconstruction of remote sensing images.
  • generating the low-light remote sensing image data corresponding to the initial remote sensing image can be achieved by a variety of technical means, as long as the result simulates a low-light remote sensing image as the camera would capture it on a cloudy day or at night.
  • in the simulation, Z_{i,j} = f(E_i · Δt_j), where E_i represents the pixel value of the original remote sensing image at pixel i, and Z_{i,j} represents the pixel value of pixel i at the continuous exposure-time index j, i.e., the obtained low-illuminance remote sensing image data.
  • the most representative curves can be selected from multiple preset response curves f.
  • E_i · Δt_j is then normalized so that the average pixel value at E_i · Δt_{T/2+1} is 0.5.
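The simulation step can be sketched as follows, with a gamma curve standing in for one of the preset camera response curves and the middle exposure normalised to an average of 0.5. The specific curve, exposure times, and scene irradiance values are illustrative assumptions.

```python
import numpy as np

def simulate_low_light(E, exposure_times, gamma=2.2):
    """Simulate low-illumination frames Z[i, j] = f(E_i * dt_j).

    A gamma curve stands in for the camera response function f;
    exposures are normalised so the middle frame E_i * dt_{T/2+1}
    has an average value of 0.5.
    """
    E = np.asarray(E, dtype=float).ravel()
    T = len(exposure_times)
    mid = exposure_times[T // 2]            # dt_{T/2+1} in 1-based indexing
    scale = 0.5 / (E.mean() * mid)          # mean of E_i * dt_mid -> 0.5
    Z = np.empty((E.size, T))
    for j, dt in enumerate(exposure_times):
        x = np.clip(E * dt * scale, 0.0, 1.0)
        Z[:, j] = x ** (1.0 / gamma)        # camera response f
    return Z

E = np.linspace(0.1, 1.0, 100)              # toy scene irradiance
times = [1 / 64, 1 / 16, 1 / 4, 1, 4]       # continuous exposure-time indices
Z = simulate_low_light(E, times)            # columns with small dt are "low light"
```

The short-exposure columns of Z play the role of the simulated low-illumination input, while the original E serves as the reference for judging the reconstruction.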
  • embodiments of the present invention also provide a high-dynamic reconstruction device for low-illumination remote sensing images, which includes: an acquisition unit for acquiring a low-illumination remote sensing image; a mapping unit for mapping the low-illumination remote sensing image data to the deep learning feature space to obtain the depth feature F_x; a first determination unit for determining the short-term feature y_s and the long-term feature y_l from the depth feature F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies determined after F_x is processed by the Transformer-based pre-training model; a second determination unit for determining the brightness enhancement curve from y_s and y_l; and an adjustment unit for adjusting the low-light remote sensing image pixel by pixel according to the brightness enhancement curve.
  • the device likewise exploits the long-term and short-term characteristics of low-light remote sensing images simultaneously, combining pixel-level dynamic features and inter-feature dependencies to determine the brightness enhancement curve and then adjusting the low-light remote sensing image pixel by pixel according to it. Since different low-light remote sensing images receive adjustments specific to their own long-term and short-term characteristics, the basis for high-dynamic reconstruction is more comprehensive and accurate, the device adaptively fits each specific image, the accuracy of high-dynamic reconstruction is effectively improved, and the low-accuracy problem of the prior art is solved.
  • as before, the low-light remote sensing image mentioned in this application does not mean that the exposure must be below a certain value: any remote sensing image whose brightness does not meet expectations can be regarded as low-illumination, and an image whose brightness is at least partly improved by high-dynamic reconstruction is low-illumination relative to its reconstructed counterpart.
  • the so-called image reconstruction generates a new image, either created independently of the original image or formed by directly modifying and overwriting it.
  • both the low-illumination remote sensing image data and its high-dynamic counterpart lie in R^{H×W×C}, where H denotes the image length, W the image height, and C the number of image channels.
  • the second determination unit is used to input the short-term feature y s and the long-term feature y l into the brightness enhancement model obtained by pre-training to obtain the brightness enhancement curve.
  • the mapping unit is used to: map low-light remote sensing image data to the depth feature space through the convolution layer; obtain the depth feature F x through the adaptive global average pooling layer.
  • the convolution window size of the convolution layer is 7 ⁇ 7
  • the stride is 4
  • the output channel is 16.
  • the adaptive global average pooling layer shrinks the feature map to one-eighth of its original size.
  • the high-dynamic reconstruction device for low-illumination remote sensing images also includes an optimization unit, which is used to optimize the brightness enhancement model through a backpropagation algorithm based on the solution result of the loss function. This will help improve the accuracy of the brightness enhancement model, thereby ensuring that more accurate high-dynamic images can be created later.
  • in one embodiment, the brightness enhancement model is trained with a preset loss/error function, a learning rate of 1e-3, and a total of 400 training epochs.
  • the first determination unit includes a first input module and a determination module: the first input module is used to input the depth feature F_x into the weight generation network and into multiple base networks, each activated with a linear rectification layer, where the base networks correspondingly have convolution windows of different sizes; the determination module is used to determine the short-term feature y_s from the output of the weight generation network and the outputs of the multiple base networks.
  • the determination module inputs the outputs of the weight generation network and of the multiple base networks into the linear fusion model y_s = Σ_{i=1}^{n} f_i(F_x, θ_f) · g_i(F_x, θ_i) to obtain the short-term feature y_s, where g_i(F_x, θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f(F_x, θ_f) is the output of the weight generation network parameterized by θ_f.
  • n is 3.
  • Fx is input into three parallel base networks and one weight generation network respectively.
  • the three base networks consist, respectively, of a convolution block with a convolution window of 3, 7 or 11, followed by two convolution blocks with a convolution window of 3, and are activated with a linear rectification layer (Rectified Linear Unit, ReLU).
  • the weight generation network consists of two stacked convolutional blocks with a convolution window of 3 and is activated with a linear rectification layer.
  • the linear fusion module is then used to integrate the results of all branches with the weights output by the weight generation network.
  • the first determination unit also includes a flattening module and a second input module: the flattening module flattens the depth feature F_x into a sequence of vectors F_t ∈ R^{L×C_t}, and the second input module inputs F_t into the Transformer-based pre-training model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of mapped channels.
  • the Transformer-based pre-training model is used to: add a learnable position encoding to each tokenized vector feature; apply a multi-head self-attention model to determine the inter-vector dependencies in the deep feature space; and process the output of the multi-head self-attention model with a feedforward neural network with skip connections to obtain the long-term feature y_l.
  • the inter-vector dependencies are determined as F'_t = MSA(LN(F_t + p)) + (F_t + p), and the long-term feature is obtained as y_l = FFN(LN(F'_t)) + F'_t, where p is the learnable position encoding, MSA is the multi-head self-attention model, FFN is the feedforward neural network, and LN denotes layer normalization.
  • the dynamic long-term and short-term feature extraction network thus includes two branches: a pixel-level dynamic feature extraction branch, comprising three parallel base network branches and one weight generation network branch, and a long-term feature extraction branch, making the model more integrated and easier to manage and use.
  • the acquisition unit includes a simulation module, which is used to simulate and generate low-light remote sensing image data corresponding to the initial remote sensing image according to the input initial remote sensing image.
  • simulating and generating the low-illumination remote sensing image from the input initial remote sensing image, and then applying the long-term and short-term features in subsequent steps to perform high-dynamic reconstruction on the simulated image, makes it possible to compare the initial remote sensing image and the reconstructed image more intuitively, to grasp the reconstruction effect more clearly, and to facilitate adjusting the method steps and model parameters, which in turn helps improve the quality of high-dynamic reconstruction of remote sensing images.
  • generating low-light remote sensing image data corresponding to the initial remote sensing image can be achieved by a variety of technical means, as long as the result simulates a low-light remote sensing image as the camera would capture it on a cloudy day or at night.
  • in the simulation, Z_{i,j} = f(E_i · Δt_j), where E_i represents the pixel value of the original remote sensing image at pixel i, and Z_{i,j} represents the pixel value of pixel i at the continuous exposure-time index j, i.e., the obtained low-illumination remote sensing image data.
  • the most representative curves can be selected from multiple preset response curves f.
  • E_i · Δt_j is then normalized so that the average pixel value at E_i · Δt_{T/2+1} is 0.5.
  • embodiments of the present invention also provide a non-volatile storage medium.
  • the non-volatile storage medium includes a stored program; when the program runs, the device where the non-volatile storage medium is located is controlled to perform the above-mentioned high-dynamic reconstruction method for low-illumination remote sensing images.
  • embodiments of the present invention also provide a processor, and the processor is configured to run a program, wherein when the program is running, the above-mentioned high-dynamic reconstruction method of low-light remote sensing images is executed.
  • embodiments of the present invention also provide a high-dynamic reconstruction device for low-illumination remote sensing images, including a display, a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the above-mentioned high-dynamic reconstruction method for low-illumination remote sensing images.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention disclose a method and device for high-dynamic reconstruction of low-illumination remote sensing images. The method includes: acquiring a low-illumination remote sensing image; mapping the low-illumination remote sensing image data to a deep learning feature space to obtain a depth feature F_x; determining, from F_x, a short-term feature y_s and a long-term feature y_l, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies of F_x determined after processing by a Transformer-based pre-training model; determining a brightness enhancement curve from y_s and y_l; and adjusting the low-illumination remote sensing image pixel by pixel according to the brightness enhancement curve. The invention solves the prior-art problem of low accuracy in high-dynamic reconstruction of low-illumination remote sensing images and improves the reconstruction accuracy.

Description

High-dynamic reconstruction method and device for low-illumination remote sensing images

Technical Field
The present invention relates to the field of image processing, and in particular to a method and device for high-dynamic reconstruction of low-illumination remote sensing images.
Background
At present, remote sensing images are widely used in geological exploration, urban planning, disaster monitoring and other fields. In practical applications, due to limitations of imaging time or unexpected weather conditions, such as at night or on cloudy days with insufficient illumination, usually only low-illumination remote sensing images can be captured. The resulting images have low contrast and are difficult for machines to understand. How to effectively enhance brightness and accomplish high-dynamic reconstruction of remote sensing images when the illumination of the original images is too low has gradually attracted wide attention from researchers at home and abroad.
In the prior art, adjusting low-illumination remote sensing images suffers from low accuracy, and the adjusted results vary greatly across different images, so the adjusted images display poorly.
No effective solution to the above problems has yet been proposed.
The information disclosed in this Background section is only intended to deepen the understanding of the background of the technology described herein; it may therefore contain information that does not constitute prior art already known to a person skilled in the art.
Summary of the Invention
Embodiments of the present invention provide a method and device for high-dynamic reconstruction of low-illumination remote sensing images, so as to at least solve the prior-art problem of low accuracy in such reconstruction.
According to a first aspect of the embodiments of the present invention, a method for high-dynamic reconstruction of low-illumination remote sensing images is provided, including: acquiring a low-illumination remote sensing image; mapping the low-illumination remote sensing image data to a deep learning feature space to obtain a depth feature F_x; determining a short-term feature y_s and a long-term feature y_l from the depth feature F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies of F_x determined after processing by a Transformer-based pre-training model; determining a brightness enhancement curve from y_s and y_l; and adjusting the low-illumination remote sensing image pixel by pixel according to the brightness enhancement curve.
Optionally, determining the brightness enhancement curve from y_s and y_l includes: inputting y_s and y_l into a pre-trained brightness enhancement model to obtain the brightness enhancement curve.
Optionally, the brightness enhancement model computes the brightness enhancement curve according to the function LE_i(I(x); α_i) = LE_{i-1}(x) + α_i · LE_{i-1}(x) · (1 − LE_{i-1}(x)), where α_i = τ(tanh(FC([y_s, y_l]))) is the pixel scale factor, FC([y_s, y_l]) denotes transforming the long-range features with a fully connected layer, LE_i denotes the brightness enhancement result, LE_0(x) = x, τ is an interpolation function, and i is the iteration index.
Optionally, the method further includes: optimizing the brightness enhancement model through a backpropagation algorithm according to the solution result of the loss function.
Optionally, determining y_s and y_l from the depth feature F_x includes: inputting F_x into a weight generation network and into multiple base networks, each activated with a linear rectification layer, where the base networks correspondingly have convolution windows of different sizes; and determining y_s from the output of the weight generation network and the outputs of the multiple base networks.
Optionally, determining y_s from the output of the weight generation network and the outputs of the multiple base networks includes: inputting these outputs into the linear fusion model y_s = Σ_{i=1}^{n} f_i(F_x, θ_f) · g_i(F_x, θ_i) to obtain the short-term feature y_s, where g_i(F_x, θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f(F_x, θ_f) is the output of the weight generation network parameterized by θ_f.
Optionally, determining y_s and y_l from the depth feature F_x includes: flattening F_x into a sequence of vectors F_t ∈ R^{L×C_t} and inputting F_t into the Transformer-based pre-training model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of mapped channels.
Optionally, the Transformer-based pre-training model is used to: add a learnable position encoding to each tokenized vector feature; apply a multi-head self-attention model to determine the inter-vector dependencies in the deep feature space; and process the output of the multi-head self-attention model with a feedforward neural network with skip connections to obtain the long-term feature y_l.
Optionally, acquiring the low-illumination remote sensing image includes: simulating and generating, from an input initial remote sensing image, the low-illumination remote sensing image data corresponding to the initial remote sensing image.
According to a second aspect of the embodiments of the present invention, a high-dynamic reconstruction device for low-illumination remote sensing images is also provided, including: an acquisition unit for acquiring a low-illumination remote sensing image; a mapping unit for mapping the low-illumination remote sensing image data to the deep learning feature space to obtain the depth feature F_x; a first determination unit for determining the short-term feature y_s and the long-term feature y_l from F_x, where y_s is a pixel-level dynamic feature determined at least by a spatial-domain convolution operation and y_l represents the inter-feature dependencies of F_x determined after processing by the Transformer-based pre-training model; a second determination unit for determining the brightness enhancement curve from y_s and y_l; and an adjustment unit for adjusting the low-illumination remote sensing image pixel by pixel according to the brightness enhancement curve.
The high-dynamic reconstruction method of the embodiments of the present invention thus simultaneously exploits the long-term and short-term characteristics of the low-illumination remote sensing image, combining pixel-level dynamic features and inter-feature dependencies to determine the brightness enhancement curve and then adjusting the image pixel by pixel. Because each low-illumination remote sensing image is adjusted according to its own specific long-term and short-term characteristics, the basis for high-dynamic reconstruction is more comprehensive and accurate, the method adaptively fits each specific image, the reconstruction accuracy is effectively improved, and the prior-art problem of low accuracy is solved.
Brief Description of the Drawings
The accompanying drawings described herein provide a further understanding of the invention and form a part of this application; the illustrative embodiments of the invention and their description explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a schematic flowchart of a method for high-dynamic reconstruction of low-illumination remote sensing images according to an embodiment of the invention;

Fig. 2 is a schematic diagram of an apparatus for high-dynamic reconstruction of low-illumination remote sensing images according to an embodiment of the invention.
Detailed Description

To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without creative effort shall fall within the scope of protection of the invention.

It should be noted that the terms "first", "second", etc. in the description, claims, and drawings of the invention are used to distinguish different objects, not to impose a particular order.
An embodiment of the invention provides a method for high-dynamic reconstruction of low-illumination remote sensing images. Fig. 1 is a schematic flowchart of the method; as shown in Fig. 1, the method comprises:

Step S102: acquire a low-illumination remote sensing image;

Step S104: map the low-illumination remote sensing image data into a deep-learning feature space to obtain a deep feature F_x;

Step S106: determine, from the deep feature F_x, a short-term feature y_s and a long-term feature y_l, the short-term feature y_s being a pixel-level dynamic feature determined by at least a spatial-domain convolution operation, and the long-term feature y_l characterizing the inter-feature dependencies determined after the deep feature F_x is processed by a Transformer-based pre-trained model;

Step S108: determine a brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l;

Step S110: adjust the low-illumination remote sensing image pixel by pixel according to the brightness-enhancement curve.
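Steps S102–S110 can be summarized as a short pipeline. The sketch below wires hypothetical placeholder callables together in the order of the flowchart; none of the names correspond to actual functions in this application, and the image is simplified to a flat list of normalized pixel values.

```python
def reconstruct(image, to_features, short_term, long_term, to_curve, apply_curve):
    """High-dynamic reconstruction pipeline (steps S102-S110), as a sketch.

    All five callables are placeholders for the networks described in the
    embodiments; `image` is a flat list of normalized pixel values.
    """
    f_x = to_features(image)      # S104: map into the deep feature space
    y_s = short_term(f_x)         # S106: pixel-level dynamic (short-term) features
    y_l = long_term(f_x)          # S106: long-range (long-term) dependencies
    alphas = to_curve(y_s, y_l)   # S108: brightness-enhancement curve parameters
    return [apply_curve(p, alphas) for p in image]  # S110: per-pixel adjustment
```

The per-pixel loop at the end mirrors the fact that the enhancement curve is applied independently at each pixel, while the curve parameters are computed once from global short- and long-term features.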
The method for high-dynamic reconstruction of low-illumination remote sensing images according to the embodiments of the invention comprises: acquiring a low-illumination remote sensing image; mapping the low-illumination remote sensing image data into a deep-learning feature space to obtain a deep feature F_x; determining, from the deep feature F_x, a short-term feature y_s and a long-term feature y_l, the short-term feature y_s being a pixel-level dynamic feature determined by at least a spatial-domain convolution operation, and the long-term feature y_l characterizing the inter-feature dependencies determined after the deep feature F_x is processed by a Transformer-based pre-trained model; determining a brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l; and adjusting the low-illumination remote sensing image pixel by pixel according to the brightness-enhancement curve. During high-dynamic reconstruction of an under-exposed remote sensing image, this implementation exploits both the long-term and the short-term features of the image, combining pixel-level dynamic features with inter-feature dependencies to determine the brightness-enhancement curve, and then adjusts the image pixel by pixel accordingly; each low-illumination image is furthermore adjusted specifically according to its own long- and short-term features. The basis for reconstruction is therefore more comprehensive and accurate, the method adaptively fits the specific image, the reconstruction accuracy is effectively improved, and the low accuracy of high-dynamic reconstruction of low-illumination remote sensing images in the prior art is overcome.
It should be pointed out that a "low-illumination" remote sensing image in this application does not mean that the exposure of the image falls below some particular value. According to the actual brightness requirement, any remote sensing image whose brightness does not meet expectations may be regarded as low-illumination; equivalently, if at least part of the brightness of a remote sensing image is increased by high-dynamic reconstruction, then the image before reconstruction counts as a low-illumination image relative to the reconstructed one. Image reconstruction here means that a new image is generated; the new image may be created independently of the original image, or formed by directly modifying and overwriting the original image.
For low-illumination remote sensing image data x ∈ R^{H×W×C}, the corresponding high-dynamic image data is y ∈ R^{H×W×C}, where H is the image height, W the image width, and C the number of image channels.
Specifically, determining the brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l comprises: feeding the short-term feature y_s and the long-term feature y_l into a pre-trained brightness-enhancement model to obtain the brightness-enhancement curve.
Mapping the low-illumination remote sensing image data into the deep-learning feature space to obtain the deep feature F_x comprises: mapping the data into the deep feature space through a convolutional layer, and obtaining the deep feature F_x through an adaptive global average pooling layer. This effectively reduces the amount of computation when processing the remote sensing image and helps increase processing speed. For example, in one embodiment, the convolutional layer has a 7×7 convolution window, a stride of 4, and 16 output channels, and the adaptive global average pooling layer shrinks the feature map to one eighth of its original size.
In one embodiment, the brightness-enhancement model is configured to compute the brightness-enhancement curve according to the function LE_i(I(x); α_i) = LE_{i-1}(x) + α_i·LE_{i-1}(x)·(1 − LE_{i-1}(x)), where α_i = τ(tanh(FC([y_s, y_l]))), α_i is a pixel-scale factor, FC([y_s, y_l]) denotes transforming the long-range features through a fully connected layer, LE_i denotes the result of the i-th brightness enhancement, LE_0(x) = x, τ is an interpolation function, and i is the iteration index. Computing the brightness-enhancement curve with this function yields a curve that better matches the brightness characteristics of low-illumination remote sensing images, improving the display quality of the reconstructed image.
Specifically, the method further comprises: optimizing the brightness-enhancement model via the back-propagation algorithm according to the value of the loss function. This helps improve the accuracy of the brightness-enhancement model, ensuring that more accurate high-dynamic images can subsequently be created.
In a preferred embodiment, the loss/error function is the one given in the corresponding formula image (Figure PCTCN2022101094-appb-000008); the learning rate of the brightness-enhancement model is 1e-3 and the total number of training epochs is 400.
Specifically, determining the short-term feature y_s and the long-term feature y_l from the deep feature F_x comprises: feeding the deep feature F_x separately into a weight-generation network and a plurality of base networks, each activated by a rectified-linear-unit layer, the base networks having convolution windows of different sizes; and determining the short-term feature y_s from the output of the weight-generation network and the outputs of the base networks.
Specifically, determining the short-term feature y_s from the output of the weight-generation network and the outputs of the base networks comprises: feeding these outputs into the linear fusion model

y_s = Σ_{i=1}^{n} f_i(F_x; θ̂)·g_i(F_x; θ_i)

to obtain the short-term feature y_s, where g_i(F_x; θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f_i(F_x; θ̂) is the output of the weight-generation network parameterized by θ̂.
In this embodiment, n is 3: F_x is fed separately into three parallel base networks and one weight-generation network. The three base networks each consist of a convolution block with window size 3, 7, or 11, respectively, followed by two convolution blocks with window size 3, and are activated by a rectified linear unit (ReLU) layer. The weight-generation network consists of two stacked convolution blocks with window size 3, also ReLU-activated. A linear fusion module then integrates the results of all branches with the weights output by the weight-generation network.
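The linear fusion step, in which each base network's output is weighted by the weight-generation network, amounts to a weighted sum over branches. A minimal sketch, with the branch outputs and weights given as plain lists (in the actual model both come from convolutional sub-networks and the weights vary spatially):

```python
def fuse_branches(branch_outputs, weights):
    """Linear fusion: y_s[k] = sum_i w_i * g_i[k] over the n branches.

    branch_outputs: list of n feature vectors (one per base network);
    weights: the n scalars standing in for the weight-generation output.
    """
    n = len(branch_outputs)
    length = len(branch_outputs[0])
    return [sum(weights[i] * branch_outputs[i][k] for i in range(n))
            for k in range(length)]
```

Because the weights are produced by a network that also sees F_x, the fusion can emphasize whichever receptive-field size (3, 7, or 11) best captures the local structure of the particular image.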
Determining the short-term feature y_s and the long-term feature y_l from the deep feature F_x comprises: flattening the deep feature F_x into a sequence of vectors F_t ∈ R^{L×C_t}; and feeding the vectors F_t into the Transformer-based pre-trained model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of channels after mapping.
Specifically, the Transformer-based pre-trained model is configured to: add a learnable positional encoding to each tokenized vector feature; determine the inter-vector dependencies in the deep feature space with a multi-head self-attention model; and process the output of the multi-head self-attention model with a feed-forward network with skip connections to obtain the long-term feature y_l.
In this embodiment, the Transformer-based pre-trained model is specifically configured to: add a learnable positional encoding to each tokenized vector feature, i.e. y_0 = F_t + p; determine the inter-vector dependencies in the deep feature space with a multi-head self-attention model, i.e. y'_l = MSA(LN(y_0)) + y_0; and process the output of the multi-head self-attention model with a feed-forward network with skip connections to obtain the long-term feature y_l, i.e. y_l = FFN(LN(y'_l)) + y'_l, where p is a learnable positional encoding, MSA is the multi-head self-attention model, FFN is the feed-forward network, and LN denotes layer normalization.
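The two residual updates above (pre-norm attention followed by a pre-norm feed-forward step) can be sketched independently of any deep-learning framework. In this toy version, MSA and FFN are passed in as callables operating on a single token vector and layer normalization is written out explicitly; only the skip-connection structure matches the equations, everything else is a stand-in.

```python
import math

def layer_norm(v, eps=1e-5):
    """LN: normalize a vector to zero mean and (approximately) unit variance."""
    mean = sum(v) / len(v)
    var = sum((x - mean) ** 2 for x in v) / len(v)
    return [(x - mean) / math.sqrt(var + eps) for x in v]

def transformer_block(y0, msa, ffn):
    """y'_l = MSA(LN(y_0)) + y_0;  y_l = FFN(LN(y'_l)) + y'_l."""
    y_mid = [a + b for a, b in zip(msa(layer_norm(y0)), y0)]
    return [a + b for a, b in zip(ffn(layer_norm(y_mid)), y_mid)]
```

The skip connections guarantee that if MSA and FFN output zero, the block is the identity, which is what makes deep stacks of such blocks trainable.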
In one embodiment, a new dual-branch network architecture is proposed: the dynamic long/short-term feature extraction network comprises two branches, one being the pixel-level dynamic feature extraction branch, which itself contains three parallel base-network branches and one weight-generation branch, and the other being the long-term feature extraction branch. This makes the model more integrated and easier to manage and use.
Specifically, acquiring the low-illumination remote sensing image comprises: simulating, from an input initial remote sensing image, the low-illumination remote sensing image data corresponding to the initial remote sensing image.

Simulating a low-illumination remote sensing image from an input initial image and then, in the subsequent steps, performing high-dynamic reconstruction of the simulated image using long- and short-term features makes it possible to compare the initial and reconstructed images more directly, so that the effect of remote sensing image reconstruction can be assessed more clearly and the method steps, model parameters, and so on can be adjusted conveniently, which in turn helps improve the quality of high-dynamic reconstruction.
Specifically, generating the low-illumination remote sensing image data corresponding to the input initial remote sensing image can be achieved by a variety of techniques, as long as a camera capturing a low-illumination remote sensing image on a cloudy day or at night can be simulated. For example, in one embodiment the formula Z_{i,j} = f(E_i·Δt_j) can be used, where f is the camera response function, Δt_j the exposure time, E_i the pixel value of the original remote sensing image at position i, and Z_{i,j} the value of pixel i at exposure-time index j, i.e. the resulting low-illumination remote sensing image data. To improve the simulation quality, in this embodiment the most representative curves can be selected from a number of preset response curves. For the choice of exposure times, the relation given in the corresponding formula image (Figure PCTCN2022101094-appb-000016) is used; E_i·Δt_j is then normalized so that the mean pixel value of E_i·Δt_{T/2+1} is 0.5. Substituting the exposure times into the low-illumination simulation formula above yields the corresponding low-illumination remote sensing image data. Preferably, T = 8, τ = √2, and j = 1, 2, ..., T+1.
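The simulation Z_{i,j} = f(E_i·Δt_j) can be illustrated with a simple stand-in camera response function. The gamma-style curve below is an assumption for illustration only, since the text selects representative curves from a preset set rather than prescribing a specific f:

```python
def simulate_low_light(pixels, dt, gamma=2.2):
    """Z_{i,j} = f(E_i * dt_j) with a toy gamma-style response function f.

    pixels: original scene values E_i in [0, 1]; dt: exposure time.
    Shorter exposures yield darker (low-illumination) images.
    """
    def crf(e):  # illustrative camera response function, not the one used here
        return min(1.0, max(0.0, e)) ** (1.0 / gamma)
    return [crf(e * dt) for e in pixels]
```

Varying dt over a geometric series (the preferred τ = √2 spacing) produces an exposure stack from which low-illumination samples can be drawn.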
Next, as shown in Fig. 2, an embodiment of the invention further provides an apparatus for high-dynamic reconstruction of low-illumination remote sensing images, comprising: an acquisition unit for acquiring a low-illumination remote sensing image; a mapping unit for mapping the low-illumination remote sensing image data into a deep-learning feature space to obtain a deep feature F_x; a first determination unit for determining, from the deep feature F_x, a short-term feature y_s and a long-term feature y_l, the short-term feature y_s being a pixel-level dynamic feature determined by at least a spatial-domain convolution operation, and the long-term feature y_l characterizing the inter-feature dependencies determined after the deep feature F_x is processed by a Transformer-based pre-trained model; a second determination unit for determining a brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l; and an adjustment unit for adjusting the low-illumination remote sensing image pixel by pixel according to the brightness-enhancement curve. In use, the apparatus of this implementation exploits both the long-term and the short-term features of the low-illumination image, combining pixel-level dynamic features with inter-feature dependencies to determine the brightness-enhancement curve, and then adjusts the image pixel by pixel accordingly; each low-illumination image is furthermore adjusted specifically according to its own long- and short-term features, so the basis for high-dynamic reconstruction is more comprehensive and accurate, the apparatus adaptively fits the specific image, the reconstruction accuracy is effectively improved, and the low reconstruction accuracy of the prior art is overcome.
Specifically, the second determination unit is configured to: feed the short-term feature y_s and the long-term feature y_l into a pre-trained brightness-enhancement model to obtain the brightness-enhancement curve.
The mapping unit is configured to: map the low-illumination remote sensing image data into the deep feature space through a convolutional layer, and obtain the deep feature F_x through an adaptive global average pooling layer. This effectively reduces the amount of computation when processing the remote sensing image and helps increase processing speed. For example, in one embodiment, the convolutional layer has a 7×7 convolution window, a stride of 4, and 16 output channels, and the adaptive global average pooling layer shrinks the feature map to one eighth of its original size.
Specifically, the apparatus further comprises an optimization unit configured to optimize the brightness-enhancement model via the back-propagation algorithm according to the value of the loss function. This helps improve the accuracy of the brightness-enhancement model, ensuring that more accurate high-dynamic images can subsequently be created.
Specifically, the first determination unit comprises a first input module and a determination module: the first input module is configured to feed the deep feature F_x separately into a weight-generation network and a plurality of base networks, each activated by a rectified-linear-unit layer, the base networks having convolution windows of different sizes; the determination module is configured to determine the short-term feature y_s from the output of the weight-generation network and the outputs of the base networks.
Specifically, the determination module is configured to: feed the output of the weight-generation network and the outputs of the base networks into the linear fusion model

y_s = Σ_{i=1}^{n} f_i(F_x; θ̂)·g_i(F_x; θ_i)

to obtain the short-term feature y_s, where g_i(F_x; θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f_i(F_x; θ̂) is the output of the weight-generation network parameterized by θ̂.
Specifically, the first determination unit further comprises a flattening module and a second input module: the flattening module is configured to flatten the deep feature F_x into a sequence of vectors F_t ∈ R^{L×C_t}; the second input module is configured to feed the vectors F_t into the Transformer-based pre-trained model to obtain the long-term feature y_l, where L is the vector length and C_t is the number of channels after mapping.
Specifically, the acquisition unit comprises a simulation module configured to simulate, from an input initial remote sensing image, the low-illumination remote sensing image data corresponding to the initial remote sensing image.
In addition, an embodiment of the invention further provides a non-volatile storage medium comprising a stored program, wherein, when the program runs, the device on which the non-volatile storage medium resides is controlled to execute the above method for high-dynamic reconstruction of low-illumination remote sensing images.

Further, an embodiment of the invention provides a processor configured to run a program, wherein the program, when running, executes the above method for high-dynamic reconstruction of low-illumination remote sensing images.

Finally, an embodiment of the invention provides a device for high-dynamic reconstruction of low-illumination remote sensing images, comprising a display, a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor executing the above method for high-dynamic reconstruction of low-illumination remote sensing images.

The above are only preferred embodiments of the invention and are not intended to limit its scope of protection.

Claims (10)

  1. A method for high-dynamic reconstruction of low-illumination remote sensing images, characterized by comprising:
    acquiring a low-illumination remote sensing image;
    mapping the low-illumination remote sensing image data into a deep-learning feature space to obtain a deep feature F_x;
    determining, from the deep feature F_x, a short-term feature y_s and a long-term feature y_l, the short-term feature y_s being a pixel-level dynamic feature determined by at least a spatial-domain convolution operation, and the long-term feature y_l characterizing the inter-feature dependencies determined after the deep feature F_x is processed by a Transformer-based pre-trained model;
    determining a brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l; and
    adjusting the low-illumination remote sensing image pixel by pixel according to the brightness-enhancement curve.
  2. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 1, characterized in that determining the brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l comprises:
    feeding the short-term feature y_s and the long-term feature y_l into a pre-trained brightness-enhancement model to obtain the brightness-enhancement curve.
  3. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 2, characterized in that the brightness-enhancement model is configured to:
    compute the brightness-enhancement curve according to the function LE_i(I(x); α_i) = LE_{i-1}(x) + α_i·LE_{i-1}(x)·(1 − LE_{i-1}(x));
    where α_i = τ(tanh(FC([y_s, y_l]))), α_i is a pixel-scale factor, FC([y_s, y_l]) denotes transforming the long-range features through a fully connected layer, LE_i denotes the result of the i-th brightness enhancement, LE_0(x) = x, τ is an interpolation function, and i is the iteration index.
  4. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 3, characterized by further comprising:
    optimizing the brightness-enhancement model via the back-propagation algorithm according to the value of a loss function.
  5. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 1, characterized in that determining the short-term feature y_s and the long-term feature y_l from the deep feature F_x comprises:
    feeding the deep feature F_x separately into a weight-generation network and a plurality of base networks, each activated by a rectified-linear-unit layer, the base networks having convolution windows of different sizes; and
    determining the short-term feature y_s from the output of the weight-generation network and the outputs of the base networks.
  6. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 5, characterized in that determining the short-term feature y_s from the output of the weight-generation network and the outputs of the base networks comprises:
    feeding the output of the weight-generation network and the outputs of the base networks into the linear fusion model
    y_s = Σ_{i=1}^{n} f_i(F_x; θ̂)·g_i(F_x; θ_i)
    to obtain the short-term feature y_s;
    where g_i(F_x; θ_i) is the output of the i-th base network parameterized by θ_i, n is the total number of base networks, and f_i(F_x; θ̂) is the output of the weight-generation network parameterized by θ̂.
  7. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 1, characterized in that determining the short-term feature y_s and the long-term feature y_l from the deep feature F_x comprises:
    flattening the deep feature F_x into a sequence of vectors F_t ∈ R^{L×C_t}; and
    feeding the vectors F_t into the Transformer-based pre-trained model to obtain the long-term feature y_l;
    where L is the vector length and C_t is the number of channels after mapping.
  8. The method for high-dynamic reconstruction of low-illumination remote sensing images according to claim 7, characterized in that the Transformer-based pre-trained model is configured to:
    add a learnable positional encoding to each tokenized vector feature;
    determine the inter-vector dependencies in the deep feature space with a multi-head self-attention model; and
    process the output of the multi-head self-attention model with a feed-forward network with skip connections to obtain the long-term feature y_l.
  9. The method for high-dynamic reconstruction of low-illumination remote sensing images according to any one of claims 1 to 8, characterized in that acquiring the low-illumination remote sensing image comprises:
    simulating, from an input initial remote sensing image, the low-illumination remote sensing image data corresponding to the initial remote sensing image.
  10. An apparatus for high-dynamic reconstruction of low-illumination remote sensing images, characterized by comprising:
    an acquisition unit for acquiring a low-illumination remote sensing image;
    a mapping unit for mapping the low-illumination remote sensing image data into a deep-learning feature space to obtain a deep feature F_x;
    a first determination unit for determining, from the deep feature F_x, a short-term feature y_s and a long-term feature y_l, the short-term feature y_s being a pixel-level dynamic feature determined by at least a spatial-domain convolution operation, and the long-term feature y_l characterizing the inter-feature dependencies determined after the deep feature F_x is processed by a Transformer-based pre-trained model;
    a second determination unit for determining a brightness-enhancement curve from the short-term feature y_s and the long-term feature y_l; and
    an adjustment unit for adjusting the low-illumination remote sensing image pixel by pixel according to the brightness-enhancement curve.
PCT/CN2022/101094 2022-04-19 2022-06-24 Method and apparatus for high-dynamic reconstruction of low-illumination remote sensing images WO2023201876A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210408527.2A CN114943652A (zh) 2022-04-19 2022-04-19 Method and apparatus for high-dynamic reconstruction of low-illumination remote sensing images
CN202210408527.2 2022-04-19

Publications (1)

Publication Number Publication Date
WO2023201876A1 (zh)


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118229589A (zh) * 2024-05-24 2024-06-21 中国海洋大学 (Ocean University of China) Remote sensing image restoration method and system, model, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097515A (zh) * 2019-04-22 2019-08-06 苏州千视通视觉科技股份有限公司 Low-illumination image processing algorithm and device based on deep learning and spatio-temporal filtering
US20190333200A1 (en) * 2017-01-17 2019-10-31 Peking University Shenzhen Graduate School Method for enhancing low-illumination image
CN111489321A (zh) * 2020-03-09 2020-08-04 Huaiyin Institute of Technology Deep-network image enhancement method and system based on derived graphs and Retinex
CN111915525A (zh) * 2020-08-05 2020-11-10 Hubei University of Technology Low-illumination image enhancement method based on an improved depthwise-separable generative adversarial network
US20220067950A1 (en) * 2020-08-31 2022-03-03 Samsung Electronics Co., Ltd. Method and apparatus to complement depth image


Also Published As

Publication number Publication date
CN114943652A (zh) 2022-08-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22938105

Country of ref document: EP

Kind code of ref document: A1