CN115456886A - Aviation remote sensing image shadow removing method based on deep learning and illumination model - Google Patents

Aviation remote sensing image shadow removing method based on deep learning and illumination model

Info

Publication number
CN115456886A
Authority
CN
China
Prior art keywords
shadow
light intensity
penumbra
pixel
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210956781.6A
Other languages
Chinese (zh)
Inventor
Zhou Tingting (周婷婷)
Meng Xiangzhou (孟祥周)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN202210956781.6A
Publication of CN115456886A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention belongs to the technical field of remote sensing image processing and relates to a method for removing shadows from aerial remote sensing images, together with applications of the method. First, an aerial remote sensing image shadow data set is acquired, and a shadow index computed in the YCbCr space is combined with the original image to enhance shadow features. Shadows are then extracted with a CGAN network. Based on the extraction result, an illumination-model-based method for shadow information recovery and edge optimization is improved, finally achieving effective shadow removal from aerial remote sensing images. The method improves shadow detection accuracy, effectively compensates the penumbra, improves the accuracy of umbra compensation, and can efficiently remove shadows from large-scale high-resolution remote sensing images.

Description

Aviation remote sensing image shadow removing method based on deep learning and illumination model
Technical Field
The invention belongs to the technical field of remote sensing image processing and relates to a method for removing shadows from aerial remote sensing images.
Background
When sunlight is blocked by ground objects (e.g., buildings, tall structures, vegetation), the sides of the objects and the ground they occlude form shadows. Shadows are common in remote sensing images and cause information loss, especially in urban areas with dense buildings. If shadows can be accurately extracted, they can be used for 3D building reconstruction and serve as an important information source for shadow recovery. In addition, accurate information recovery of shadow regions greatly promotes further applications of remote sensing images, particularly terrain classification and information inversion in urban remote sensing.
Existing shadow extraction methods can be broadly classified into model-based methods, feature-based methods, and machine learning methods:
(1) Model-based methods typically require a number of prerequisites, including sensor parameters and solar altitude and azimuth, to determine the position of shadows, with digital surface models being the most common. However, these prerequisites are often difficult to obtain, so such methods are unsuitable for most shadow detection scenarios.
(2) Feature-based methods rely on feature classification and segmentation, but they are not robust and their performance varies widely across different data.
(3) Machine learning methods have made great progress in the past few years and are widely applied in shadow extraction. However, most current data sets contain only single-shadow scenes and are unsuitable for large-scale remote sensing images with complex backgrounds. Furthermore, existing networks need improvement in finding more hidden features and extracting fine shadows.
For shadow information compensation, traditional methods mostly recover the information of a single shadow: shadow and non-shadow regions are first distinguished, and the information of the shadow region is then compensated from the non-shadow region through mathematical and physical models. Traditional shadow information compensation methods can be divided into gradient correction methods, histogram equalization methods, linear correlation correction methods, and machine learning methods:
(1) Gradient correction corrects the gradient information of the shaded region by calculating the gradient change from the non-shaded region to the shaded region.
(2) Histogram equalization compensates the shadow by matching the histogram of the shaded area to that of the non-shaded area.
(3) Linear correlation correction applies a linear correlation function to reconstruct the shadow region, compensating it by adjusting the intensity of shadow pixels according to the statistical characteristics of the corresponding non-shadow region.
However, most traditional shadow compensation methods are only suitable for simple single-shadow images and are difficult to apply to large-scale remote sensing images of complex scenes; moreover, shadow edges and shadow interiors are often treated equally during information compensation, which causes over-compensation at shadow edge transitions.
(4) Machine learning methods have strong generalization ability, and shadow information recovery can be performed in many scenes through transfer learning. However, since shadow/shadow-free image pairs cannot be obtained from remote sensing images as a data set, current machine-learning-based shadow compensation methods cannot be effectively applied to remote sensing imagery.
In summary, current methods for shadow extraction and information compensation in remote sensing images are still insufficient; effective compensation of shadows in large-scale remote sensing images, in particular, is still at an early research stage.
Disclosure of Invention
The technical problem to be solved by the invention is that aerial remote sensing images are seriously affected by shadows, that existing shadow extraction methods have limited effect on fine shadows and complex backgrounds, and that existing shadow information compensation methods are unsuitable for large-scale remote sensing images.
To solve this technical problem, the invention provides a remote sensing image shadow removal method based on deep learning and an illumination model, comprising the following steps:
(1) Data set preparation: acquire an aerial image shadow data set comprising original shadow-containing images and manually drawn shadow masks; convert the original images from RGB space to YCbCr space, compute the shadow index SI in that space to enhance shadow features, and combine SI with the original image to form the network input data.
(2) Based on the data set produced in step (1), train a CGAN network to generate extracted shadows.
(3) For the extracted shadows generated in step (2), count the pixel width occupied by the penumbra in the shadow images, and divide the generated extracted shadow into umbra, penumbra, and non-shadow by a morphological processing method.
(4) Separate the umbra region from the non-shadow region and, on the basis of the illumination model, compensate the information of the extracted shadow region with a modified direct-to-ambient light intensity ratio method.
(5) For the penumbra region obtained in step (3), with the single-pixel width as the step length, calculate the ratio of direct to ambient light intensity within each single-pixel ring of the penumbra, and compensate the penumbra region with this dynamic ratio method.
Preferably, acquiring the aerial image shadow data set includes collecting aerial image shadow data with aerial image acquisition equipment or obtaining aerial image electronic data.
Preferably, the preprocessing of the data set in step (1) comprises:
(1) Acquiring an aerial image shadow data set comprising shadow and shadow-mask image pairs, and converting the shadow images from RGB space to YCbCr space, where the YCbCr space is defined as follows:

Y = 0.299·R + 0.587·G + 0.114·B
Cb = −0.169·R − 0.331·G + 0.500·B + 128
Cr = 0.500·R − 0.419·G − 0.081·B + 128    (1)

where Y is the luminance component, and Cb and Cr are the blue-difference and red-difference chrominance components; they correspond approximately to intensity (I), saturation (S), and hue (H) in the HSI space (HSI being the acronym of Hue, Saturation, Intensity).
(2) The shadow in the YCbCr space is enhanced by a shadow index (SI), specifically defined as follows:

SI = (Cb − Y) / (Cb + Y)    (2)
(3) Combining the SI and the original shadow image into a four-band image as the preprocessed data set.
Preferably, step (2) trains a CGAN network to generate the extracted shadows.
Preferably, the CGAN network includes a generator G for generating an image close to the reference shadow mask and a discriminator D for distinguishing the generated extracted shadow from the reference shadow mask.
The objective function of the CGAN network is:

L_CGAN(G, D) = E_{c,x}[log D(c, x)] + E_{c,z}[log(1 − D(c, G(c, z)))]    (3)

where the original shadow image c_1 and the shadow index c_2 form the condition c = (c_1, c_2) that is input to the CGAN network, G(c, z) is the generated extracted shadow, and x is the reference shadow mask corresponding to the generated extracted shadow.
Preferably, the loss function of the CGAN network is L1 loss.
Preferably, step (3) divides the umbra, penumbra, and non-shadow using the generated extracted shadow.
A shadow can be divided into umbra and penumbra according to illumination conditions. The umbra is produced where the direct sunlight is completely blocked, while the penumbra forms where the direct sunlight is only partially blocked. The penumbra generally lies in the transition zone from shadow to non-shadow; from the umbra region to the non-shadow region, the direct sunlight intensity gradually increases and the shadow intensity gradually weakens. According to this theory, statistics show that the penumbra width is generally within 5 pixels; the obtained extracted shadow usually contains most of the penumbra, while a small portion of penumbra whose shadow intensity is too low cannot be detected.
For the extracted shadow R_mask obtained in step (2), erode it inward by a width of 3 pixels to obtain the eroded shadow R_umbra, which serves as the umbra region; dilate R_mask outward by a width of 2 pixels to obtain R_dilate. The penumbra R_pen is then defined as R_pen = R_dilate − R_umbra, and the non-shadow region R_unshadow is defined as R_unshadow = 1 − R_dilate.
Preferably, step (4) performs information compensation on the extracted shadow using a modified direct-to-ambient light intensity ratio method.
Preferably, according to imaging theory, the value I_i of any pixel i in the image can be expressed as the product of the illumination intensity L_i at that pixel and its reflectance R_i: I_i = L_i · R_i.
The illumination intensity, hereinafter referred to as light intensity, includes direct illumination from direct solar radiation and ambient illumination from sky scattering.
Preferably, shaded areas receive the ambient light intensity and partial or no direct illumination, while non-shaded areas receive both the direct and the ambient light intensity. The value of any pixel i can be expressed as:

I_i = (L_i^amb + α · L_i^dir) · R_i    (4)

where, for pixel i, L_i^amb is the ambient light intensity, L_i^dir is the direct light intensity, and α is the attenuation factor of the direct illumination: α = 1 indicates that the pixel lies in a non-shaded area; α = 0 indicates that the pixel lies in the umbra; α ∈ (0, 1) indicates that the pixel lies in the penumbra.
According to equation (4), the information compensation process of the umbra region is as follows:
(1) For each band b ∈ {R, G, B}, calculate the mean values of the umbra region and the non-shadow region:

Ī^b_umbra = (1/N_umbra) Σ_{i∈R_umbra} I^b_i,   Ī^b_unshadow = (1/N_unshadow) Σ_{i∈R_unshadow} I^b_i    (5)

where N_umbra and N_unshadow are the numbers of pixels in the two regions.
(2) Calculate the ratio r_b of the direct to the ambient light intensity for each band:

r_b = L^b_dir / L^b_amb = (Ī^b_unshadow − Ī^b_umbra) / Ī^b_umbra    (6)
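The ratio in equation (6) follows from the illumination model (4) under an assumption that is implicit above: the average reflectance of the umbra and non-shadow regions is comparable. A short sketch of the derivation:

```latex
% Taking means of eq. (4) over each region, with a shared mean reflectance \bar{R}^b:
%   umbra      (\alpha = 0):  \bar{I}^b_{umbra}    = L^b_{amb}\,\bar{R}^b
%   non-shadow (\alpha = 1):  \bar{I}^b_{unshadow} = (L^b_{amb} + L^b_{dir})\,\bar{R}^b
\[
  r_b = \frac{L^b_{dir}}{L^b_{amb}}
      = \frac{\bar{I}^b_{unshadow} - \bar{I}^b_{umbra}}{\bar{I}^b_{umbra}},
  \qquad
  \hat{I}^b_i = (1 + r_b)\, I^b_i = \left(L^b_{amb} + L^b_{dir}\right) R^b_i .
\]
```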
(3) For any pixel i in the extracted shadow R_mask, the information compensation for each band b in which the pixel lies is:

R̂^b_mask,i = (1 + r_b) · R^b_mask,i    (7)

where R̂^b_mask,i is the compensated value of pixel i of the extracted shadow region in band b, and R^b_mask,i is the original value of pixel i in band b.
Preferably, step (5) performs information compensation on the penumbra region using a dynamic ratio method.
As defined in step (3), the penumbra region is the transition zone from shadow to non-shadow, in which the direct light intensity gradually increases and the shadow intensity gradually weakens, and it occupies a width of 5 pixels.
Assume that from the umbra to the non-shadow the direct light intensity increases gradually with a step length of one pixel width, i.e., the attenuation ratio of the direct light intensity is considered constant within each single step of the penumbra region. Therefore, penumbra compensation is performed dynamically from the umbra to the non-shadow with the single-pixel width as the minimum processing unit. The specific process is as follows:
(1) Set the umbra R_umbra as R_dilate,0 and let step = 1. Dilate R_dilate,step−1 outward by one pixel width to obtain R_dilate,step, and take its difference with R_dilate,step−1 to obtain the single-pixel-width penumbra ring R_pen,step = R_dilate,step − R_dilate,step−1.
(2) For each band b ∈ {R, G, B}, calculate the mean values of the current penumbra ring and the non-shadow region:

Ī^b_pen,step = (1/N_pen,step) Σ_{i∈R_pen,step} I^b_i,   Ī^b_unshadow = (1/N_unshadow) Σ_{i∈R_unshadow} I^b_i    (8)
(3) Calculate the ratio r_b of the direct to the ambient light intensity for each band:

r_b = (Ī^b_unshadow − Ī^b_pen,step) / Ī^b_pen,step    (9)
(4) For any pixel i in R_pen,step, the information compensation for each band b in which the pixel lies is:

R̂^b_pen,step,i = (1 + r_b) · R^b_pen,step,i    (10)

where R̂^b_pen,step,i is the compensated value of pixel i within R_pen,step in band b, and R^b_pen,step,i is the original value of pixel i of this step's penumbra ring in band b.
(5) Let step = step + 1 and repeat steps (1) to (4) until step = 5.
The aerial remote sensing image shadow removal method can be applied to removing shadows from large-scale remote sensing images. Applications include, but are not limited to:
guiding the network to find hidden shadow features and improving shadow detection accuracy;
in the shadow information compensation part, using a shadow information compensation method based on the light intensity ratio method, in which the ratio of direct to ambient light intensity is calculated from the umbra and non-shadow region values, improving the accuracy of umbra compensation;
in the penumbra compensation part, recovering the information of the penumbra region with the dynamic ratio method, realizing penumbra compensation by calculating the ratio of direct to ambient light intensity within each single-pixel step of the penumbra region.
The invention has the advantages that:
(1) In the shadow detection part, the shadow image is converted to the YCbCr space, a shadow index SI is developed in that space, and SI is added to the input layer of the CGAN network, guiding the network to find more hidden shadow features and improving shadow detection accuracy.
(2) In the shadow information compensation part, a modified shadow information compensation method based on the light intensity ratio method is developed. Traditional methods compute the ratio of direct to ambient light intensity directly from the shadow detection result, which ignores the partial direct light contained in the penumbra region and therefore under-compensates the shadow. The invention delimits the ranges of the umbra, penumbra, and non-shadow, and uses the umbra and non-shadow region values to compute the direct-to-ambient ratio, improving the accuracy of umbra compensation.
(3) In the penumbra compensation part, a dynamic ratio method is developed to recover the information of the penumbra region. The method assumes that, from the umbra to the non-shadow region, the direct light intensity increases by a fixed degree within each pixel-width step; penumbra compensation is then realized by calculating the ratio of direct to ambient light intensity within each single-pixel step of the penumbra region. The embodiments verify that the method compensates the penumbra effectively.
Drawings
To illustrate the technical solutions of the present application more clearly, the drawings required by the embodiments are briefly described below. The drawings described below relate to some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of the remote sensing image shadow removal method based on deep learning and an illumination model.
Fig. 2 is a schematic diagram of the conversion from RGB space to YCbCr space and of the shadow-enhancement effect of the SI index.
Fig. 3 shows the detailed structure and parameters of the generator and discriminator of the CGAN network used in the invention.
Fig. 4 is a schematic diagram of the calculation process of penumbra compensation according to the invention.
Fig. 5 shows the shadow removal results of an embodiment of the invention, where (a) is the original image, (b) the shadow extraction result, (c) the information compensation result of the extracted shadow region, and (d) the penumbra compensation result.
Detailed Description
To address the problems that aerial remote sensing images are seriously affected by shadows, that existing shadow extraction methods perform poorly on fine shadows and complex backgrounds, and that existing shadow information compensation methods are unsuitable for large-scale remote sensing images, the invention provides a remote sensing image shadow removal method based on deep learning and an illumination model. First, an aerial remote sensing image shadow data set is acquired, and the shadow index of the YCbCr space is combined with the original image to enhance shadow features; then shadows are extracted with a CGAN network; finally, based on the shadow extraction result, an illumination-model-based shadow information recovery and edge optimization method is improved, achieving effective shadow removal from aerial remote sensing images. The method can efficiently remove shadows from large-scale high-resolution remote sensing images.
The technical solutions are described clearly and completely through the following embodiments of the present application; the described embodiments are obviously only a part of the preferred embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art from the embodiments given herein without inventive work fall within the protection scope of the present application.
Example 1: Data set preprocessing
(1) The aerial image shadow data set was obtained from https://github.com/RSrscoder/AISD. It contains image pairs of a shadow image (with the three bands R, G, B) and a shadow mask (a binary image in which shadow pixels have value 1 and non-shadow pixels value 0), comprising 412 training pairs, 51 validation pairs, and 51 test pairs. All shadow images in the data set are converted from RGB space to YCbCr space, where the YCbCr space is defined as follows:
Y = 0.299·R + 0.587·G + 0.114·B
Cb = −0.169·R − 0.331·G + 0.500·B + 128
Cr = 0.500·R − 0.419·G − 0.081·B + 128    (1)
where Y is the luminance component, and Cb and Cr are the blue-difference and red-difference chrominance components, corresponding approximately to intensity (I), saturation (S), and hue (H) in the HSI space. Shaded areas are blocked from the sun's electromagnetic radiation and therefore have a lower intensity I than unshaded areas. Meanwhile, owing to Rayleigh scattering in the atmosphere, shadow areas are lit mainly by short-wavelength (blue-violet) sky light, so their saturation S is higher. Consequently, shadow pixels have a higher Cb and a lower Y. As shown in Fig. 2, the shaded area turns green from (a) to (b) in the YCbCr space, indicating that the Cb component of the shaded area is enhanced and Y is significantly reduced.
(2) The shadow in the YCbCr space is enhanced by a shadow index (SI), specifically defined as follows:

SI = (Cb − Y) / (Cb + Y)    (2)
the diagram of the effect of SI on shadow enhancement in the equation is shown in fig. 2 (c).
(3) And combining the SI and the original shadow image into a data pair, and simultaneously using the data pair as the input of a shadow extraction network.
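As a concrete illustration, this preprocessing can be sketched in Python as below. Two points are assumptions rather than verbatim patent content: the RGB-to-YCbCr coefficients are the standard full-range transform, and the SI formula is the normalized Cb−Y difference reconstructed above; the function name and the small epsilon in the denominator are likewise illustrative.

```python
import numpy as np

def build_network_input(rgb: np.ndarray) -> np.ndarray:
    """Build the four-band CGAN input from an H x W x 3 RGB image:
    convert to YCbCr, compute the shadow index SI, and stack SI with RGB."""
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B            # luminance
    Cb = -0.169 * R - 0.331 * G + 0.500 * B + 128.0  # blue-difference chrominance
    # Assumed SI: shadows have a higher Cb and a lower Y, so a normalized
    # Cb - Y difference is large in shadow and small elsewhere.
    si = (Cb - Y) / (Cb + Y + 1e-6)
    return np.dstack([rgb, si])                      # H x W x 4 network input
```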
Example 2: Training the CGAN network to generate extracted shadows
From the data set preprocessed in step (1), 80% of the data is selected as the training set and the remaining 20% as the test set. The CGAN network consists of a generator G, which generates an extracted image close to the reference shadow mask, and a discriminator D, which distinguishes the generated extracted shadow from the reference shadow mask.
The objective function of the CGAN network is:

L_CGAN(G, D) = E_{c,x}[log D(c, x)] + E_{c,z}[log(1 − D(c, G(c, z)))]    (3)

where the original shadow image c_1 and the shadow index c_2 form the condition c = (c_1, c_2) that is input to the CGAN network, G(c, z) is the generated extracted shadow, and x is the reference shadow mask corresponding to the generated extracted shadow. The loss function of the CGAN network is the L1 loss.
The parameters of the generator and discriminator layers are shown in Fig. 3(a). The wide arrows at the two ends of each module in Fig. 3(a) represent the input and output channels of the convolutional layers, and ×3 indicates that the module is executed three times. Dashed arrows indicate the concatenation of deconvolution and convolution features. The wide arrows at the two ends of each module in Fig. 3(b) represent the input and output channels of the discriminator convolutions.
The network initializes all weights of the shadow detection network by sampling from a zero-mean normal distribution with a standard deviation of 0.2. The other parameters are set as follows: kernel size 4×4, stride 2, padding 1, Adam optimizer, learning rate 0.0002, batch size 40, and 1000 training epochs. The network was trained with the PyTorch framework on a server cluster containing four NVIDIA RTX 3090 GPUs.
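A minimal sketch of this training setup follows. The single-layer G and D are stand-ins (the real networks follow Fig. 3; D here takes the 4-band condition concatenated with the 1-band mask), and the pairing of an adversarial term with the L1 term is illustrative rather than the patent's exact training loop.

```python
import torch
from torch import nn

def init_weights(m: nn.Module) -> None:
    # Zero-mean normal initialization with std 0.2, as stated in the text.
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.2)

# Stand-in single-layer networks; kernel 4x4, stride 2, padding 1 as stated.
G = nn.Sequential(nn.Conv2d(4, 64, kernel_size=4, stride=2, padding=1),
                  nn.LeakyReLU(0.2))
D = nn.Sequential(nn.Conv2d(5, 64, kernel_size=4, stride=2, padding=1),
                  nn.LeakyReLU(0.2))
G.apply(init_weights)
D.apply(init_weights)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)  # learning rate 0.0002
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
adversarial_loss = nn.BCEWithLogitsLoss()          # CGAN objective term
l1_loss = nn.L1Loss()                              # L1 loss named in the text
```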
After training, the remaining 20% of the data is tested with the trained parameters to obtain the shadow detection results of the test set; this part of the data and the corresponding shadow detection results are also used in the subsequent information compensation experiments.
Embodiment 3: Partitioning the umbra, penumbra, and non-shadow using the generated extracted shadow
A shadow can be divided into umbra and penumbra according to illumination conditions. The umbra is produced where the direct sunlight is completely blocked, while the penumbra forms where the direct sunlight is only partially blocked. The penumbra generally lies in the transition zone from shadow to non-shadow; from the umbra region to the non-shadow region, the direct sunlight intensity gradually increases and the shadow intensity gradually weakens. According to this theory, statistics show that the penumbra width is generally within 5 pixels; the obtained extracted shadow usually contains most of the penumbra, while a small portion of penumbra whose shadow intensity is too low cannot be detected.
For the extracted shadow R_mask obtained in step (2), erode it inward by a width of 3 pixels to obtain the eroded shadow R_umbra, which serves as the umbra region; dilate R_mask outward by a width of 2 pixels to obtain R_dilate. The penumbra R_pen is then defined as R_pen = R_dilate − R_umbra, and the non-shadow region R_unshadow is defined as R_unshadow = 1 − R_dilate.
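A minimal sketch of this partition, assuming square structuring elements (the patent states only the erosion and dilation widths, not the element shape) and a binary uint8 mask:

```python
import cv2
import numpy as np

def split_regions(mask: np.ndarray):
    """Split a binary extracted-shadow mask (uint8, values 0/1) into umbra,
    penumbra, and non-shadow, following the widths given above."""
    se3 = np.ones((7, 7), np.uint8)  # (2*3+1)-wide element: erode by 3 px
    se2 = np.ones((5, 5), np.uint8)  # (2*2+1)-wide element: dilate by 2 px
    r_umbra = cv2.erode(mask, se3)      # R_umbra: mask eroded inward by 3 pixels
    r_dilate = cv2.dilate(mask, se2)    # R_dilate: mask dilated outward by 2 pixels
    r_pen = r_dilate & (1 - r_umbra)    # R_pen = R_dilate - R_umbra (5-px band)
    r_unshadow = 1 - r_dilate           # R_unshadow = 1 - R_dilate
    return r_umbra, r_pen, r_unshadow
```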
Example 4: Information compensation of the extracted shadow by a modified direct-to-ambient light intensity ratio method
According to imaging theory, the value I_i of any pixel i in the image can be expressed as the product of the illumination intensity L_i at that pixel and its reflectance R_i: I_i = L_i · R_i. The illumination intensity, hereinafter referred to as light intensity, includes direct illumination from direct solar radiation and ambient illumination from sky scattering. Shaded regions receive the ambient light intensity and partial or no direct illumination, while non-shaded regions receive both the direct and the ambient light intensity. The value of any pixel i can be expressed as:

I_i = (L_i^amb + α · L_i^dir) · R_i    (4)

where, for pixel i, L_i^amb is the ambient light intensity, L_i^dir is the direct light intensity, and α is the attenuation factor of the direct illumination: α = 1 indicates that the pixel lies in a non-shaded area; α = 0 indicates that the pixel lies in the umbra; α ∈ (0, 1) indicates that the pixel lies in the penumbra.
According to equation (4), the information compensation process of the umbra region is as follows:
(1) For each band b ∈ {R, G, B}, calculate the mean values of the umbra region and the non-shadow region:

Ī^b_umbra = (1/N_umbra) Σ_{i∈R_umbra} I^b_i,   Ī^b_unshadow = (1/N_unshadow) Σ_{i∈R_unshadow} I^b_i    (5)

where N_umbra and N_unshadow are the numbers of pixels in the two regions.
(2) Calculate the ratio r_b of the direct to the ambient light intensity for each band:

r_b = L^b_dir / L^b_amb = (Ī^b_unshadow − Ī^b_umbra) / Ī^b_umbra    (6)
(3) For any pixel i in the extracted shadow R_mask, the information compensation for each band b in which the pixel lies is:

R̂^b_mask,i = (1 + r_b) · R^b_mask,i    (7)

where R̂^b_mask,i is the compensated value of pixel i of the extracted shadow region in band b, and R^b_mask,i is the original value of pixel i in band b.
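A sketch of this umbra-based compensation, under equations (5) to (7) as reconstructed above; the clipping to [0, 255] assumes 8-bit imagery and the function name is illustrative:

```python
import numpy as np

def compensate_umbra(img: np.ndarray, r_mask: np.ndarray,
                     r_umbra: np.ndarray, r_unshadow: np.ndarray) -> np.ndarray:
    """Relight the extracted shadow region band by band using the
    umbra / non-shadow light-intensity ratio."""
    out = img.astype(np.float64).copy()
    for b in range(3):                                   # bands R, G, B
        band = out[..., b]
        mean_umbra = band[r_umbra == 1].mean()           # mean of umbra region
        mean_unshadow = band[r_unshadow == 1].mean()     # mean of non-shadow region
        r_b = (mean_unshadow - mean_umbra) / mean_umbra  # direct-to-ambient ratio
        band[r_mask == 1] *= 1.0 + r_b                   # compensate extracted shadow
    return np.clip(out, 0.0, 255.0)
```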
Example 5: Information compensation of the penumbra region by the dynamic ratio method
As defined in step (3), the penumbra region is the transition zone from shadow to non-shadow, in which the direct light intensity gradually increases and the shadow intensity gradually weakens, and it occupies a width of 5 pixels. Assume that from the umbra to the non-shadow the direct light intensity increases gradually with a step length of one pixel width, i.e., the attenuation ratio of the direct light intensity is considered constant within each single step of the penumbra region. Therefore, penumbra compensation is performed dynamically from the umbra to the non-shadow with the single-pixel width as the minimum processing unit. The specific process is as follows:
(1) Set the umbra R_umbra as R_dilate,0 and let step = 1. Dilate R_dilate,step−1 outward by one pixel width to obtain R_dilate,step, and take its difference with R_dilate,step−1 to obtain the single-pixel-width penumbra ring R_pen,step = R_dilate,step − R_dilate,step−1.
(2) For each band b ∈ {R, G, B}, calculate the mean values of the current penumbra ring and the non-shadow region:

Ī^b_pen,step = (1/N_pen,step) Σ_{i∈R_pen,step} I^b_i,   Ī^b_unshadow = (1/N_unshadow) Σ_{i∈R_unshadow} I^b_i    (8)
(3) Calculate the ratio r_b of the direct to the ambient light intensity for each band:

r_b = (Ī^b_unshadow − Ī^b_pen,step) / Ī^b_pen,step    (9)
(4) For any pixel i in R_pen,step, the information compensation for each band b in which the pixel lies is:

R̂^b_pen,step,i = (1 + r_b) · R^b_pen,step,i    (10)

where R̂^b_pen,step,i is the compensated value of pixel i within R_pen,step in band b, and R^b_pen,step,i is the original value of pixel i of this step's penumbra ring in band b.
(5) Let step = step + 1 and repeat steps (1) to (4) until step = 5.
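A matching sketch of the dynamic-ratio penumbra compensation, under the same reconstructed equations and the ring construction interpreted in step (1) above; the break on an empty ring is a practical guard, not part of the patent:

```python
import cv2
import numpy as np

def compensate_penumbra(img: np.ndarray, r_umbra: np.ndarray,
                        r_unshadow: np.ndarray, steps: int = 5) -> np.ndarray:
    """Relight one single-pixel penumbra ring per step, moving outward
    from the umbra boundary (5 steps for a 5-pixel-wide penumbra)."""
    out = img.astype(np.float64).copy()
    ring_se = np.ones((3, 3), np.uint8)       # one-pixel dilation per step
    prev = r_umbra.copy()                     # R_dilate,0
    for _ in range(steps):
        cur = cv2.dilate(prev, ring_se)       # R_dilate,step
        ring = cur & (1 - prev)               # single-pixel ring R_pen,step
        if ring.sum() == 0:                   # guard: nothing left to relight
            break
        for b in range(3):
            band = out[..., b]
            mean_ring = band[ring == 1].mean()
            mean_unshadow = band[r_unshadow == 1].mean()
            r_b = (mean_unshadow - mean_ring) / mean_ring
            band[ring == 1] *= 1.0 + r_b      # relight this ring
        prev = cur
    return np.clip(out, 0.0, 255.0)
```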
The embodiments described above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that those skilled in the art can conceive without inventive work within the technical scope disclosed herein shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the claims.

Claims (10)

1. A method for removing shadows from aerial remote sensing images, characterized in that it is based on deep learning and an illumination model and comprises the following steps:
step 1, data set preparation: acquiring an aerial image shadow data set including original shadow images and their shadow masks; converting the original image from RGB space to YCbCr space, calculating the shadow index SI in that space to enhance shadow features, and combining SI with the original image to form the input data of the network;
step 2, based on the data set produced in step 1, training a CGAN network to generate shadow extraction results;
step 3, for the shadows extracted in step 2, counting the pixel width occupied by the penumbra in the shadow images, and dividing the generated extracted shadow into umbra, penumbra, and non-shadow;
step 4, separating the umbra region from the non-shadow region, and performing information compensation on the extracted shadow region on the basis of the illumination model using the direct-to-ambient light intensity ratio method;
step 5, for the penumbra region obtained in step 3, with the single-pixel width as the step length, calculating the ratio of direct to ambient light intensity within each single-pixel ring of the penumbra, and performing information compensation on the penumbra region with the dynamic ratio method.
2. The aerial remote sensing image shadow removal method according to claim 1, wherein acquiring the aerial image shadow data set in step 1 comprises collecting aerial image shadow data with aerial image acquisition equipment or obtaining aerial image electronic data;
step 1 converting the shadow image from RGB space to YCbCr space, enhancing the shadow features with the shadow index SI, and forming the network input from SI and the original image; or comprising the following steps 1.1 to 1.3:
step 1.1, acquiring an aerial image shadow data set comprising shadow and shadow-mask image pairs, and converting the shadow images from RGB space to YCbCr space, where the YCbCr space is defined as follows:
Y = 0.299·R + 0.587·G + 0.114·B
Cb = −0.169·R − 0.331·G + 0.500·B + 128
Cr = 0.500·R − 0.419·G − 0.081·B + 128
where Y is the luminance component, and Cb and Cr are the blue-difference and red-difference chrominance components;
step 1.2, enhancing the shadow in the YCbCr space by the shadow index SI, specifically defined as follows:
SI = (Cb − Y) / (Cb + Y);
and step 1.3, combining the SI and the original shadow image into a four-band image as the preprocessed data set.
3. The aerial remote sensing image shadow removal method according to claim 1, wherein step 2 trains a CGAN network to generate the extracted shadow:
the CGAN network is composed of a generator G and a discriminator D, where G generates a shadow extraction result close to the reference shadow mask and D distinguishes the generated extracted shadow from the reference shadow mask;
the objective function of the CGAN network is:
L_CGAN(G, D) = E_{c,x}[log D(c, x)] + E_{c,z}[log(1 − D(c, G(c, z)))]
where the original shadow image c_1 and the shadow index c_2 form the condition c = (c_1, c_2) that is input to the CGAN network, G(c, z) is the generated extracted shadow, and x is the reference shadow mask corresponding to the generated extracted shadow;
the loss function used by the CGAN network is L1 loss.
4. The aerial remote sensing image shadow removal method according to claim 1, wherein the extracted shadow generated in step 3 is divided into umbra, penumbra, and non-shadow by applying a morphological processing method; or
The step 3 comprises the following steps:
step 3.1, for the extracted shadow R_mask obtained in step 2, eroding it inward by a width of 3 pixels with a morphological processing method to obtain the eroded shadow R_umbra as the umbra region;
step 3.2, dilating R_mask outward by a width of 2 pixels to obtain R_dilate, then subtracting the umbra from R_dilate to obtain the penumbra R_pen; preferably, R_pen = R_dilate − R_umbra;
step 3.3, obtaining the non-shadow region R_unshadow by removing the dilated region from the whole image; preferably, R_unshadow = 1 − R_dilate.
5. The aerial remote sensing image shadow removal method according to claim 4, wherein the shadow is divided into umbra and penumbra according to illumination conditions: the umbra is produced where the direct sunlight is completely blocked, and the penumbra forms where the direct sunlight is partially blocked; the penumbra generally lies in the transition zone from shadow to non-shadow, and from the umbra region to the non-shadow region the direct solar light intensity gradually increases while the shadow intensity gradually weakens; or
the penumbra lies within a width of 5 pixels; the obtained extracted shadow contains most of the penumbra, while penumbra whose shadow intensity is too low cannot be detected by the shadow detection method.
6. The aerial remote sensing image shadow removal method according to claim 1, wherein step 4 performs information compensation on the extracted shadow using a modified direct-to-ambient light intensity ratio method; or
Step 4 comprises the following steps:
the value I_i of any pixel i in the image is expressed as the product of the illumination intensity L_i at that pixel and its reflectance R_i: I_i = L_i · R_i;
the value of pixel i is expressed as:
I_i = (L_i^amb + α · L_i^dir) · R_i
where, for pixel i, L_i^amb is the ambient light intensity, L_i^dir is the direct light intensity, and α is the attenuation factor of the direct illumination;
α = 1 indicates that the pixel lies in a non-shaded area;
α = 0 indicates that the pixel lies in the umbra;
α ∈ (0, 1) indicates that the pixel lies in the penumbra;
the information compensation process of the extracted shadow is specifically as follows:
step 4.1, for each band b ∈ {R, G, B}, calculating the mean values Ī^b_umbra and Ī^b_unshadow of the umbra region and the non-shadow region respectively;
step 4.2, calculating the ratio r_b of the direct to the ambient light intensity for each band:
r_b = (Ī^b_unshadow − Ī^b_umbra) / Ī^b_umbra;
step 4.3, for any pixel i in the extracted shadow R_mask, performing information compensation on each band b as:
R̂^b_mask,i = (1 + r_b) · R^b_mask,i
where R̂^b_mask,i is the compensated value of pixel i of the extracted shadow region in band b, and R^b_mask,i is the original value of pixel i in band b.
7. The aerial remote sensing image shadow removal method according to claim 6, wherein the illumination intensity, or light intensity, comprises direct illumination and ambient illumination, the direct illumination coming from direct solar radiation and the ambient light intensity from sky scattering; shaded areas are formed by the ambient light intensity and partial or no direct light intensity, while non-shaded areas are formed by both the direct and the ambient light intensity.
8. The aerial remote sensing image shadow removal method according to claim 1, wherein step 5 performs information compensation on the penumbra region using a dynamic ratio method; or
The step 5 comprises the following steps:
for penumbra compensation, dynamically performing information compensation from the umbra to the non-shadow with the single-pixel width as the minimum processing unit, the specific process being as follows:
step 5.1, setting the umbra R_umbra as R_dilate,0 and letting step = 1; dilating R_dilate,step−1 outward by one pixel width to obtain R_dilate,step, and taking its difference with R_dilate,step−1 to obtain the single-pixel-width penumbra ring R_pen,step;
step 5.2, for each band b ∈ {R, G, B}, calculating the mean values Ī^b_pen,step and Ī^b_unshadow of the penumbra ring and the non-shadow region respectively;
step 5.3, calculating the ratio r_b of the direct to the ambient light intensity for each band:
r_b = (Ī^b_unshadow − Ī^b_pen,step) / Ī^b_pen,step;
step 5.4, for any pixel i in R_pen,step, performing information compensation on each band b as:
R̂^b_pen,step,i = (1 + r_b) · R^b_pen,step,i
where R̂^b_pen,step,i is the compensated value of pixel i within R_pen,step in band b, and R^b_pen,step,i is the original value of pixel i of this step's penumbra ring in band b;
step 5.5, letting step = step + 1 and repeating steps 5.1 to 5.4 until step = 5.
9. Application of the aerial remote sensing image shadow removal method according to claim 1, characterized in that the method is used for removing shadows from large-scale remote sensing images.
10. The application according to claim 9, wherein the application includes but is not limited to:
guiding the network to find hidden shadow features and improving shadow detection accuracy;
in the shadow information compensation part, using a shadow information compensation method based on the light intensity ratio method, in which the ratio of direct to ambient light intensity is calculated from the umbra and non-shadow region values, improving the accuracy of umbra compensation;
in the penumbra compensation part, recovering the information of the penumbra region with the dynamic ratio method, realizing penumbra compensation by calculating the ratio of direct to ambient light intensity within each single-pixel step of the penumbra region.
CN202210956781.6A 2022-08-10 2022-08-10 Aviation remote sensing image shadow removing method based on deep learning and illumination model Pending CN115456886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210956781.6A CN115456886A (en) 2022-08-10 2022-08-10 Aviation remote sensing image shadow removing method based on deep learning and illumination model

Publications (1)

Publication Number Publication Date
CN115456886A true CN115456886A (en) 2022-12-09

Family

ID=84298148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210956781.6A Pending CN115456886A (en) 2022-08-10 2022-08-10 Aviation remote sensing image shadow removing method based on deep learning and illumination model

Country Status (1)

Country Link
CN (1) CN115456886A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252789A (en) * 2023-11-10 2023-12-19 中国科学院空天信息创新研究院 Shadow reconstruction method and device for high-resolution remote sensing image and electronic equipment
CN117252789B (en) * 2023-11-10 2024-02-02 中国科学院空天信息创新研究院 Shadow reconstruction method and device for high-resolution remote sensing image and electronic equipment

Similar Documents

Publication Publication Date Title
Wang et al. Single image dehazing based on the physical model and MSRCR algorithm
Huang et al. Building extraction from multi-source remote sensing images via deep deconvolution neural networks
CN107358585B (en) Foggy day image enhancement method based on fractional order differential and dark channel prior
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
CN111626951B (en) Image shadow elimination method based on content perception information
CN112419196B (en) Unmanned aerial vehicle remote sensing image shadow removing method based on deep learning
Kang et al. Fog model-based hyperspectral image defogging
CN112215085A (en) Power transmission corridor foreign matter detection method and system based on twin network
CN109598202A (en) A kind of object-based satellite image multi objective built-up areas extraction method
CN110852207A (en) Blue roof building extraction method based on object-oriented image classification technology
CN115205713A (en) Method for recovering details of scenery color and texture in shadow area of remote sensing image of unmanned aerial vehicle
CN114266957A (en) Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN106875407A (en) A kind of unmanned plane image crown canopy dividing method of combining form and marking of control
CN111310566A (en) Static and dynamic multi-feature fusion mountain fire detection method and system
CN115456886A (en) Aviation remote sensing image shadow removing method based on deep learning and illumination model
Attarzadeh et al. Object-based building extraction from high resolution satellite imagery
CN108711160A (en) A kind of Target Segmentation method based on HSI enhancement models
CN109696406B (en) Moon table hyperspectral image shadow region unmixing method based on composite end member
CN112883823A (en) Land cover category sub-pixel positioning method based on multi-source remote sensing data fusion
CN116883303A (en) Infrared and visible light image fusion method based on characteristic difference compensation and fusion
CN115526795A (en) Unmanned aerial vehicle image shadow compensation method based on region matching and color migration
CN110163874B (en) Bilateral filtering algorithm based on homogeneous region segmentation
CN115082533A (en) Near space remote sensing image registration method based on self-supervision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination