CN113284153A - Satellite cloud layer image processing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN113284153A
CN113284153A
Authority
CN
China
Prior art keywords
layer
overlapping
image
cloud
cloud layer
Prior art date
Legal status
Pending
Application number
CN202110529241.5A
Other languages
Chinese (zh)
Inventor
岳安志
席智浩
Current Assignee
Institute Of Space Information Technology Institute Of Remote Sensing And Digital Earth Chinese Academy Of Sciences Huizhou
Original Assignee
Institute Of Space Information Technology Institute Of Remote Sensing And Digital Earth Chinese Academy Of Sciences Huizhou
Priority date
Filing date
Publication date
Application filed by Institute Of Space Information Technology Institute Of Remote Sensing And Digital Earth Chinese Academy Of Sciences Huizhou filed Critical Institute Of Space Information Technology Institute Of Remote Sensing And Digital Earth Chinese Academy Of Sciences Huizhou
Priority to CN202110529241.5A priority Critical patent/CN113284153A/en
Publication of CN113284153A publication Critical patent/CN113284153A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning


Abstract

The application provides a satellite cloud layer image processing method and device, computer equipment and a storage medium. The method comprises: performing overlapping segmentation on a satellite cloud layer image to obtain a plurality of overlapping cloud layer images, each having at least one overlapping region; performing a convolution classification operation on the same feature layers of two overlapping cloud layer images to obtain the feature probability of each pixel of each feature layer in the overlapping region; and adjusting the gray value of the corresponding pixel on the output layer according to the feature probability. Convolution classification of the segmented overlapping cloud layer images yields the probability of each pixel of each segmented image in the overlapping region, and the gray value of the corresponding pixel on the output layer is then adjusted according to the probability distribution of the same pixel on the same feature layer. This makes the different background classes in the overlapping region easy to distinguish and reduces the blocking effect there.

Description

Satellite cloud layer image processing method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of cloud layer image processing, in particular to a satellite cloud layer image processing method and device, computer equipment and a storage medium.
Background
With the development of image processing technology, the processing of satellite images has become an important branch of the field. Satellite images are usually processed with a U-net segmentation network model; the traditional U-net is a segmentation network designed for medical tissue and cell images and is good at distinguishing the semantic information of an image.
However, each feature point in the central part of the traditional U-net has a small receptive field and cannot capture wide-range semantic information, so it is prone to erroneous segmentation of objects with a large spatial span and is not well suited to detecting cloud and cloud shadow. In addition, the traditional U-net must use block-wise prediction to process wide-swath satellite images and then splice the blocks into a complete image. When the block images are fed into the network, their boundary semantics may be incomplete, causing a high misjudgment rate for boundary pixels; the spliced image then exhibits a blocking effect, which reduces both the integrity and the clarity of the final detected image.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a satellite cloud layer image processing method and device, computer equipment and a storage medium that reduce the probability of the blocking effect.
The purpose of the invention is realized by the following technical scheme:
a method of satellite cloud image processing, the method comprising:
performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images, wherein each overlapping cloud layer image has at least one overlapping region;
performing convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and adjusting the gray value of the corresponding pixel point on the output layer according to the characteristic probability.
In one embodiment, the performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images includes: performing row-direction overlapping segmentation on the satellite cloud layer image to obtain a plurality of row-direction overlapping cloud layer images.
In one embodiment, the performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images includes: performing column-direction overlapping segmentation on the satellite cloud layer image to obtain a plurality of column-direction overlapping cloud layer images.
In one embodiment, the performing a convolution classification operation on the same feature layer of the two overlapping cloud layer images includes: and performing residual error operation on the characteristic image layer of each overlapped cloud layer image.
In one embodiment, the performing the residual error operation on the feature layer of each of the overlapping cloud layer images includes: and respectively carrying out down-sampling residual error processing and up-sampling residual error processing on the characteristic image layers to form a plurality of compression path image layers and a plurality of expansion path image layers.
In one embodiment, the performing down-sampling residual processing and up-sampling residual processing on the feature layer respectively further includes: and performing feature fusion operation on the compressed path layer and the corresponding expanded path layer.
In one embodiment, the performing a convolution classification operation on the same feature layer of the two overlapping cloud layer images further includes: and performing bidirectional asymmetric hole convolution operation on the minimum-size compression path layer on the compression path to output the initial expansion path layer on the expansion path.
A satellite cloud image processing apparatus, the apparatus comprising:
the satellite cloud layer image processing device comprises a first processing module, a second processing module and a third processing module, wherein the first processing module is used for performing overlapping segmentation processing on a satellite cloud layer image to obtain a plurality of overlapping cloud layer images, and each overlapping cloud layer image is provided with at least one overlapping area;
the second processing module is used for carrying out convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and the gray level adjusting module is used for adjusting the gray level value of the corresponding pixel point on the output layer according to the characteristic probability.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images, wherein each overlapping cloud layer image has at least one overlapping region;
performing convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and adjusting the gray value of the corresponding pixel point on the output layer according to the characteristic probability.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images, wherein each overlapping cloud layer image has at least one overlapping region;
performing convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and adjusting the gray value of the corresponding pixel point on the output layer according to the characteristic probability.
Compared with the prior art, the invention has at least the following advantages:
the probability of each pixel point of each segmented image of the satellite cloud layer image in the overlapping region is obtained through convolution classification operation of the segmented overlapping cloud layer image and is used for displaying probability distribution of the pixel points in the overlapping region, and finally the gray value of the corresponding pixel point on the output image layer is adjusted through the probability distribution condition of the same pixel points of the same characteristic image layer, so that various background images in the overlapping region can be conveniently distinguished, the blocking effect in the overlapping region is reduced, and the definition of the detection output image of the satellite cloud layer image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a flowchart illustrating a satellite cloud image processing method according to an embodiment;
FIG. 2 is a block diagram of a pyramid pooling module based on convolution kernels in one embodiment;
FIG. 3 is a schematic diagram of a convolution distribution of the bi-directional asymmetric hole convolution of the pyramid pooling module based on convolution kernels shown in FIG. 2;
FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The invention relates to a satellite cloud layer image processing method. In one embodiment, the method comprises: performing overlapping segmentation on a satellite cloud layer image to obtain a plurality of overlapping cloud layer images, each having at least one overlapping region; performing a convolution classification operation on the same feature layers of two overlapping cloud layer images to obtain the feature probability of each pixel of each feature layer in the overlapping region; and adjusting the gray value of the corresponding pixel on the output layer according to the feature probability. Convolution classification of the segmented overlapping cloud layer images yields the probability of each pixel of each segmented image in the overlapping region, which expresses the probability distribution of the pixels there; the gray value of the corresponding pixel on the output layer is then adjusted according to the probability distribution of the same pixel on the same feature layer. This makes the different background classes in the overlapping region easy to distinguish, reduces the blocking effect there, and improves the clarity of the detection output of the satellite cloud layer image.
Please refer to fig. 1, which is a flowchart illustrating a satellite cloud image processing method according to an embodiment of the present invention. The satellite cloud layer image processing method comprises part or all of the following steps.
S100: and performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images, wherein each overlapping cloud layer image has at least one overlapping region.
In this embodiment, the satellite cloud layer image is acquired from a GF-1 WFV satellite image; equivalently, the GF-1 WFV satellite image is taken as the satellite cloud layer image. The satellite cloud layer image contains three classes of content: cloud, cloud shadow, and background. It is a satellite image of the earth captured by the Gaofen-1 satellite, acquired mainly to locate cloud layers in real time and to distinguish clouds from mountains, water bodies, ice and snow on the ground. Because the shooting times and angles of the satellite differ, cloud and cloud shadow are also somewhat misaligned. To distinguish the three classes, the wide-swath satellite image must be segmented, i.e. subjected to overlapping segmentation, so that the large satellite cloud layer image is divided into a number of sub-images that are convenient to process. To reduce the truncation of the boundary semantics of each sub-image, i.e. to reduce the probability of the blocking effect, the overlapping segmentation produces overlapping cloud layer images that share an overlapping region.
Moreover, the overlapping regions are located at the edges of the overlapping cloud layer images: two adjacent overlapping cloud layer images coincide at their edge positions and complement each other semantically, so the semantics at the edges remain complete. This facilitates the subsequent probability processing of pixels in the overlapping region, makes it easier to distinguish the three classes of images there, and reduces the probability of the blocking effect in the boundary region.
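As an illustration of the overlapping segmentation above, the sketch below splits a 2-D array into fixed-width tiles whose neighbours share a configurable number of columns. The function name, tile width and overlap are illustrative choices, not values taken from the patent:

```python
import numpy as np

def split_with_overlap(image, tile_w, overlap):
    """Row-direction split: slide a window of width tile_w across the
    columns so that consecutive tiles share `overlap` columns."""
    h, w = image.shape
    step = tile_w - overlap
    starts = list(range(0, w - tile_w + 1, step))
    if starts[-1] + tile_w < w:          # final tile flush with the right edge
        starts.append(w - tile_w)
    return [image[:, s:s + tile_w] for s in starts]

img = np.arange(6 * 10).reshape(6, 10)   # a toy 6x10 "satellite image"
tiles = split_with_overlap(img, tile_w=4, overlap=2)
```

With tile_w=4 and overlap=2 on a width-10 image this yields four 6×4 tiles; the last two columns of each tile coincide with the first two columns of the next, which is the shared overlapping region described above.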
S200: and carrying out convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region.
In this embodiment, the feature layer corresponds to the overlapping region and is formed by feature classification of the overlapping cloud layer images, i.e. by a feature classification operation on the overlapping region of two overlapping cloud layer images. Performing a convolution classification operation on the same feature layers of the two overlapping cloud layer images amounts to per-pixel classification of the feature layers in the overlapping region. This exposes the probability of each pixel of each feature layer in the overlapping region, so the feature probability of every pixel can be output, the probability distribution of the pixels in the overlapping region is reflected, and the output gray level of each pixel in the overlapping region can subsequently be decided.
S300: and adjusting the gray value of the corresponding pixel point on the output layer according to the characteristic probability.
In this embodiment, the feature probability is the occurrence probability of each pixel in the overlapping region. The same pixel appears in the overlapping region on the same feature layer of both overlapping cloud layer images, each time with its own feature probability; fusing the feature probabilities of the same pixel on the same feature layer makes it easy to decide which class the pixel belongs to. For example, if a pixel's feature probability is largest on the cloud-shadow feature layer, the gray value output at the same pixel of the output layer is 128, i.e. gray; if it is largest on the cloud feature layer, the output gray value is 255, i.e. white; and if it is largest on the feature layer of the mountain/ice-and-snow background, the output gray value is 0, i.e. black.
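A minimal sketch of this fusion-and-rendering step, assuming the gray mapping quoted above (background 0, cloud shadow 128, cloud 255) and averaging as the fusion rule; the averaging rule, class order and all names are illustrative assumptions rather than the patent's exact procedure:

```python
import numpy as np

# Assumed class order: 0 = background, 1 = cloud shadow, 2 = cloud.
GRAY = np.array([0, 128, 255], dtype=np.uint8)

def fuse_and_render(prob_a, prob_b):
    """prob_a, prob_b: (H, W, 3) per-class probabilities predicted for the
    same overlap region by the two neighbouring tiles. Fuse by averaging,
    pick the most probable class per pixel, and emit its gray value."""
    fused = (prob_a + prob_b) / 2.0
    labels = fused.argmax(axis=-1)
    return GRAY[labels]

a = np.array([[[0.1, 0.7, 0.2]]])   # tile A: cloud shadow most likely
b = np.array([[[0.2, 0.5, 0.3]]])   # tile B agrees
out = fuse_and_render(a, b)         # cloud shadow wins -> gray value 128
```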
In each of the above embodiments, the probability of each pixel point of each segmented image of the satellite cloud image in the overlapping region is obtained through convolution classification operation on the segmented overlapping cloud image, and is used for displaying probability distribution of the pixel points in the overlapping region.
In one embodiment, the performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images includes: performing row-direction overlapping segmentation on the satellite cloud layer image to obtain a plurality of row-direction overlapping cloud layer images. In this embodiment, the satellite cloud layer image is a wide-swath satellite image, i.e. its size is large. To improve processing efficiency, it is divided by image segmentation into a plurality of overlapping cloud layer images that are processed separately, which reduces the amount of data handled at one time and improves the processing efficiency of the satellite cloud layer image. Row-direction overlapping segmentation means the separated overlapping cloud layer images are arranged along the row direction, and the overlapping region between two adjacent overlapping cloud layer images lies along the column direction of the images, i.e. on the two sides of each overlapping cloud layer image. The semantics of the side images therefore remain complete, reducing the probability of the blocking effect after the overlapping cloud layer images are subsequently spliced.
In one embodiment, the performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images includes: performing column-direction overlapping segmentation on the satellite cloud layer image to obtain a plurality of column-direction overlapping cloud layer images. In this embodiment, the satellite cloud layer image is a wide-swath satellite image, i.e. its size is large. To improve processing efficiency, it is divided by image segmentation into a plurality of overlapping cloud layer images that are processed separately, which reduces the amount of data handled at one time and improves the processing efficiency of the satellite cloud layer image. Column-direction overlapping segmentation means the separated overlapping cloud layer images are arranged along the column direction, and the overlapping region between two adjacent overlapping cloud layer images lies along the row direction of the images, i.e. at the two ends of each overlapping cloud layer image, so the semantics of the end images remain complete and the probability of the blocking effect after splicing is reduced. In another embodiment, the overlapping segmentation includes both row-direction and column-direction overlapping segmentation, so that every side of each overlapping cloud layer image has an overlapping region, further reducing the probability of the blocking effect in the edge overlapping regions.
In one embodiment, the performing a convolution classification operation on the same feature layers of the two overlapping cloud layer images includes: performing a residual operation on the feature layer of each overlapping cloud layer image. In this embodiment, the feature layers of the overlapping cloud layer images are classified through residual modules: several residual modules perform multi-layer convolution classification on the overlapping cloud layer images, and the number of channels of each residual module is adjusted according to actual needs, so that semantic information can be conveniently propagated across different feature layers. A residual module adds its input semantic information to its output semantic information, so shallow-network information is effectively passed on to the deep network; the semantic information is thus kept complete during transmission and the loss of spatial detail in the semantics is reduced.
Further, the performing residual operations on the feature layer of each of the overlapping cloud layer images includes: performing down-sampling residual processing and up-sampling residual processing on the feature layers, respectively, to form a plurality of compression-path layers and a plurality of expansion-path layers. In this embodiment, down-sampling residual processing down-samples and compresses the feature layer of the overlapping cloud layer image, while up-sampling residual processing up-samples and expands the output layer of the down-sampling residual processing. Under the action of the residual modules, besides the dimensionality reduction and expansion performed during down-sampling and up-sampling, the integrity of the semantic information is preserved and its loss in the overlapping region is reduced.
Further, the performing down-sampling residual processing and up-sampling residual processing on the feature layers further includes: performing a feature fusion operation on each compression-path layer and the corresponding expansion-path layer. In this embodiment, the feature fusion operation merges corresponding feature layers: during down-sampling each down-sampling residual module outputs a compression-path layer, and during up-sampling each up-sampling residual module outputs an expansion-path layer; each compression-path layer is fused with its corresponding expansion-path layer. For example, with equal numbers of down-sampling and up-sampling residual modules, the compression-path layer output by the first down-sampling residual module is fused with the expansion-path layer output by the last up-sampling residual module, the compression-path layer output by the second down-sampling residual module is fused with the expansion-path layer output by the second-to-last up-sampling residual module, and so on, until every compression-path layer has been feature-fused with one expansion-path layer, which helps keep the semantic information of the overlapping region complete.
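The mirrored pairing described above (first compression layer with last expansion layer, and so on) can be sketched as channel concatenation, a common choice for such feature fusion; concatenation is an assumption here, since the text does not fix the fusion operator:

```python
import numpy as np

def fuse_skip(compression_layers, expansion_layers):
    """Pair the i-th compression-path layer with the mirrored
    expansion-path layer and fuse by channel concatenation."""
    assert len(compression_layers) == len(expansion_layers)
    return [np.concatenate([c, e], axis=-1)
            for c, e in zip(compression_layers, reversed(expansion_layers))]

comp = [np.zeros((8, 8, 16)), np.zeros((4, 4, 32))]   # toy compression path
expn = [np.ones((4, 4, 32)), np.ones((8, 8, 16))]     # toy expansion path
fused = fuse_skip(comp, expn)
```

The spatial sizes line up precisely because the pairing is mirrored: the first (largest) compression layer meets the last (largest) expansion layer.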
Still further, the performing convolution classification operation on the same feature layers of the two overlapping cloud layer images further includes: performing a bidirectional asymmetric hole convolution operation (hole convolution is also known as dilated or atrous convolution) on the smallest compression-path layer of the compression path to output the initial expansion-path layer of the expansion path. In this embodiment, the smallest compression-path layer on the compression path is the feature layer finally output by the down-sampling residual modules, i.e. the layer in which the overlapping cloud image has been reduced to its smallest size. The bidirectional asymmetric hole convolution operation is a convolution sublayer of a convolution-kernel-based pyramid pooling module, which is an improved version of the atrous spatial pyramid module. The main improvement lies in the convolution model: a bidirectional asymmetric hole convolution (DADC) model is adopted, comprising a bidirectional asymmetric hole convolution module, a global average pooling module and a 1×1 convolution module combined in series and in parallel; the structure of the convolution-kernel-based pyramid pooling module is shown in FIG. 2. While keeping the spatial resolution of the features unchanged, this greatly enlarges the receptive field of the central part of the network and fuses features of different depths and extents, so the final feature map has both a sufficiently large receptive field and rich multidimensional semantic information; the feature scale is unchanged, so no relative spatial information is lost.
The bi-directional asymmetric hole convolution formula is defined as follows:
$$Y(i,j) \;=\; \sum_{k_1=0}^{K_1-1} \sum_{k_2=0}^{K_2-1} X\bigl(i + r_1 k_1,\; j + r_2 k_2\bigr)\, w(k_1, k_2)$$
where the convolution kernel size is K_1 × K_2; i and j are the row and column indices of the output Y; X is the input feature at each pixel of the feature layer; r_1 and r_2 are the dilation rates in the two dimensions of the feature space; w is the weight; k_1 ranges over 0, …, K_1 − 1 and k_2 over 0, …, K_2 − 1. Furthermore, K_1 = K_2, and r_1 = 1 or r_2 = 1. The bidirectional asymmetric hole convolution module contains two complementary convolution kernels: when one kernel has dilation rates r_1 = 1, r_2 = r, the other has r_1 = r, r_2 = 1. The output features of the two complementary kernels are fused by addition to extract richer spatial relationship information; the specific convolution distribution is shown in FIG. 3.
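Under the definitions above, a minimal numpy sketch of the two complementary kernels can be written; the bottom/right zero padding (so both branches keep the input size and can be added element-wise) and all names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def dilated_conv2d(x, w, r1, r2):
    """Y(i,j) = sum_{k1,k2} X(i + r1*k1, j + r2*k2) * w(k1,k2),
    with zero padding so the output keeps the input's size."""
    K1, K2 = w.shape
    xp = np.pad(x, ((0, r1 * (K1 - 1)), (0, r2 * (K2 - 1))))
    H, W = x.shape
    y = np.zeros((H, W))
    for k1 in range(K1):
        for k2 in range(K2):
            y += w[k1, k2] * xp[r1 * k1:r1 * k1 + H, r2 * k2:r2 * k2 + W]
    return y

def dadc(x, w_a, w_b, r):
    """Two complementary kernels, (r1=1, r2=r) and (r1=r, r2=1),
    fused by element-wise addition as described above."""
    return dilated_conv2d(x, w_a, 1, r) + dilated_conv2d(x, w_b, r, 1)

x = np.ones((7, 7))
w = np.ones((3, 3))
y = dadc(x, w, w, r=2)   # at (0,0) both branches see 9 ones -> 9 + 9 = 18
```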
The serial-parallel bidirectional asymmetric dilated convolution realizes multi-scale feature fusion by adding together dilated convolutions of different dimensions, captures richer global semantic information by multiplying the network's receptive field without reducing the resolution of the feature map, and stacks feature information of different depths and widths through the serial-parallel structure. The decoder part upsamples with transposed convolutions to restore a semantic label map at the same scale as the original image, while the skip-connection part, differing from the U-net network, concatenates and fuses the encoder feature maps with the decoder feature maps. Moreover, cross-entropy loss, a target optimization function commonly used in semantic segmentation tasks, serves as the loss criterion for the semantic information of the overlapping segmentation processing, that is, it quantifies the semantic loss in the overlapping region. Its expression is as follows:
$$E=-\sum_{x\in\Omega} w(x)\,\log\big(p_{l(x)}(x)\big)$$
where $l$ is the true label of each pixel, $l: \Omega \to \{1, 2, \ldots, K\}$, and $w$ denotes a weight value measuring the importance of different pixels. The method assigns a larger penalty weight to the cloud-shadow class to compensate for the scarcity of cloud-shadow samples in the training data. $P$ is the result of softmax normalization of the network's last feature layer, with the specific expression:
$$p_k(x)=\frac{\exp\big(a_k(x)\big)}{\sum_{k'=1}^{K}\exp\big(a_{k'}(x)\big)}$$
where $a_{k'}(x)$ is the response value of pixel $x$ in the $k'$-th channel of the feature map, and $K$ is the number of classes.
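A minimal sketch of the softmax normalization and the weighted cross-entropy above (pure Python; the per-pixel weight map $w(x)$ is supplied by the caller, e.g. with larger values on cloud-shadow pixels, as the text describes):

```python
import math

def softmax(a):
    """p_k(x) = exp(a_k(x)) / sum_{k'} exp(a_{k'}(x)) over the K channels."""
    m = max(a)                                  # subtract max for stability
    e = [math.exp(v - m) for v in a]
    z = sum(e)
    return [v / z for v in e]

def weighted_cross_entropy(logits, labels, weights):
    """E = -sum_x w(x) * log(p_{l(x)}(x)).

    logits : per-pixel list of K raw responses a_k(x)
    labels : per-pixel true class index l(x)
    weights: per-pixel importance w(x)
    """
    loss = 0.0
    for a, l, w in zip(logits, labels, weights):
        loss -= w * math.log(softmax(a)[l])
    return loss
```

For uniform logits the predicted probability of every class is $1/K$, so a pixel with weight 2 contributes $2\log K$ to the loss — doubling the weight of a class doubles the penalty for misclassifying it.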
The number of samples is generally expanded by data enhancement before the model is trained, which also improves the model's generalization ability. Because a satellite cloud layer image is isotropic, with no inherent up, down, left or right, rotating and flipping the images augments the data effectively. Meanwhile, since satellite cloud layer images are captured looking straight down, most objects remain semantically unchanged even when stretched or extended. Moreover, because the images show brightness variation due to different acquisition times, and the land cover differs greatly from place to place, color augmentation can also be used to enhance the input samples.
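Because the imagery is isotropic, the eight dihedral variants (four rotations, each optionally mirrored) are all valid training samples. A sketch on row-major pixel grids:

```python
def rot90_ccw(img):
    """Rotate a row-major image 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

def flip_h(img):
    """Mirror the image left-right."""
    return [row[::-1] for row in img]

def dihedral_augment(img):
    """Return the 8 rotation/flip variants of an isotropic image."""
    variants = []
    cur = img
    for _ in range(4):
        variants.append(cur)
        variants.append(flip_h(cur))
        cur = rot90_ccw(cur)
    return variants
```

Color augmentation (brightness or channel jitter) would be applied on top of these geometric variants; it is omitted here since the patent does not fix its parameters.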
In actual processing, the satellite cloud layer image has a large width; for example, a GF-1 panoramic image is about 15000 × 15000 pixels. The specific processing method is as follows:
1. dividing the satellite cloud layer image into a plurality of 512 x 512 overlapped cloud layer images with overlapped areas;
2. passing the overlapped cloud layer image through a plurality of down-sampling residual error modules, and fusing an output feature layer of each down-sampling residual error module with an output feature layer of a corresponding up-sampling residual error module;
3. inputting the feature layer output by the last downsampling residual module into the bidirectional asymmetric dilated convolution model to obtain three feature layers for each overlapping cloud layer image, namely a cloud feature layer, a cloud shadow feature layer and a background feature layer, where the background feature layer comprises an ice-and-snow layer, a water body layer and a terrain shadow layer;
4. acquiring the feature probability of each pixel point of each feature layer after softmax normalization, and adding the feature probabilities of the same pixel points of different overlapped cloud layer images on the same feature layer to obtain the probability of each pixel point on different feature layers;
5. selecting the class with the maximum probability across the feature layers as the decided class of each pixel in the overlapping region. For example, if a pixel's probability is highest on the cloud shadow feature layer, the gray value output at that pixel on the output layer is 128, i.e. gray; if highest on the cloud feature layer, the output gray value is 255, i.e. white; and if highest on the mountain ice-and-snow background feature layer, the output gray value is 0, i.e. black.
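Steps 1, 4 and 5 above can be sketched as follows (pure Python; the tile sizes in the test are scaled down from 512/15000 for illustration, and the gray mapping follows the text — the real per-tile probabilities would come from the network's softmax output):

```python
GRAY = {0: 0, 1: 255, 2: 128}  # background -> black, cloud -> white, shadow -> gray

def tile_starts(length, tile, stride):
    """Step 1: start offsets of overlapping tiles covering `length` pixels
    (stride < tile gives an overlap of tile - stride pixels)."""
    starts = list(range(0, max(length - tile, 0) + 1, stride))
    if starts[-1] != length - tile:
        starts.append(length - tile)  # last tile sits flush with the border
    return starts

def merge_probabilities(prob_tiles, H, W, K):
    """Step 4: sum the per-class softmax probabilities of every tile that
    covers a pixel, so overlap regions pool evidence from all tiles."""
    acc = [[[0.0] * K for _ in range(W)] for _ in range(H)]
    for (y0, x0), probs in prob_tiles:          # probs: tile_h x tile_w x K
        for dy, row in enumerate(probs):
            for dx, p in enumerate(row):
                for k in range(K):
                    acc[y0 + dy][x0 + dx][k] += p[k]
    return acc

def to_gray(acc):
    """Step 5: per pixel, pick the class of maximum summed probability."""
    return [[GRAY[max(range(len(p)), key=lambda k: p[k])] for p in row]
            for row in acc]
```

Summing probabilities before the argmax is what removes the hard seam a per-tile argmax would leave: a pixel in an overlap region is decided by the evidence of every tile that saw it.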
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 1 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different times, and need not be performed sequentially but may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
The application also provides a satellite cloud layer image processing device which is realized by adopting the satellite cloud layer image processing method in any embodiment. In one embodiment, the satellite cloud layer image processing device is provided with functional modules corresponding to the steps of the satellite cloud layer image processing method. The satellite cloud layer image processing device comprises a first processing module, a second processing module and a gray level adjusting module, wherein:
the first processing module is configured to perform overlapping segmentation processing on a satellite cloud layer image to obtain a plurality of overlapping cloud layer images, each overlapping cloud layer image having at least one overlapping region;
the second processing module is used for carrying out convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and the gray level adjusting module is used for adjusting the gray level value of the corresponding pixel point on the output layer according to the characteristic probability.
In this embodiment, the second processing module performs the convolution classification operation on the segmented overlapping cloud layer images and obtains the probability of each pixel of each segmented image within the overlapping region, which represents the probability distribution of the pixels in that region. Finally, the gray level adjusting module adjusts the gray value of the corresponding pixel on the output layer according to the probability distribution of the same pixel on the same feature layer. This makes it easy to distinguish the various background images in the overlapping region, reduces the blocking effect there, and improves the clarity of the detection output image of the satellite cloud layer image.
In one embodiment, the first processing module is further configured to perform row-wise overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of row-wise overlapping cloud layer images. In this embodiment, the satellite cloud layer image is a satellite image of large width, i.e. of large size; to improve processing efficiency, the first processing module divides it into a plurality of overlapping cloud layer images to be processed separately, which reduces the amount of image data processed at one time. Row-wise overlapping segmentation means the separated overlapping cloud layer images are arranged along the row direction, and the overlapping region between two adjacent overlapping cloud layer images lies along the column direction of the images, i.e. on the two sides of the overlapping cloud layer images. This keeps the semantics of the side portions of the overlapping cloud layer images complete and reduces the probability of blocking artifacts after the overlapping cloud layer images are subsequently stitched.
In one embodiment, the first processing module is further configured to perform column-wise overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of column-wise overlapping cloud layer images. In this embodiment, the satellite cloud layer image is a satellite image of large width, i.e. of large size; to improve processing efficiency, the first processing module divides it into a plurality of overlapping cloud layer images to be processed separately, which reduces the amount of image data processed at one time. Column-wise overlapping segmentation means the separated overlapping cloud layer images are arranged along the column direction, and the overlapping region between two adjacent overlapping cloud layer images lies along the row direction of the images, i.e. at the two ends of the overlapping cloud layer images. This keeps the semantics of the end portions of the overlapping cloud layer images complete and reduces the probability of blocking artifacts after the overlapping cloud layer images are subsequently stitched. In another embodiment, the first processing module includes both a row-wise and a column-wise overlapping segmentation processing module, so that every side of each overlapping cloud layer image has an overlapping region, which further reduces the probability of blocking artifacts in the edge overlapping regions of each overlapping cloud layer image.
In one embodiment, the second processing module is further configured to perform a residual operation on the feature layer of each of the overlapping cloud layer images. In this embodiment, the second processing module classifies the feature layers of the overlapping cloud layer images through residual modules: multiple residual modules perform multi-layer convolution classification on the overlapping cloud layer images, and the channel number of each residual module is adjusted according to actual needs so as to propagate semantic information across different feature layers. A residual module adds its input semantic information to its output semantic information, so shallow network information is passed effectively to the deep layers, the semantic information is preserved and transmitted completely, and the loss of semantic spatial detail is reduced.
Further, the second processing module is further configured to perform downsampling residual processing and upsampling residual processing on the feature layer, forming a plurality of compression path layers and a plurality of expansion path layers. In this embodiment, the downsampling residual processing downsamples the feature layer of the overlapping cloud layer image, compressing it, while the upsampling residual processing upsamples the output layer of the downsampling residual processing, expanding it. Under the action of the residual modules, therefore, besides the dimension reduction and restoration performed during downsampling and upsampling, the integrity of the semantic information is ensured and the loss of semantic information in the overlapping region is reduced.
Furthermore, the second processing module is further configured to perform a feature fusion operation on each compression path layer and its corresponding expansion path layer. In this embodiment, the feature fusion operation fuses matching feature layers: each downsampling residual module outputs a compression path layer, each upsampling residual module outputs an expansion path layer, and each compression path layer is feature-fused with its corresponding expansion path layer. For example, when the numbers of downsampling and upsampling residual modules are equal, the compression path layer output by the first downsampling residual module is fused with the expansion path layer output by the last upsampling residual module, the compression path layer output by the second downsampling residual module is fused with the expansion path layer output by the second-to-last upsampling residual module, and so on, so that each compression path layer is feature-fused with one expansion path layer. This helps preserve the integrity of the semantic information of the overlapping region.
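The pairing described above — first compression layer with last expansion layer, second with second-to-last, and so on — can be sketched as follows (element-wise addition is one fusion choice for illustration; the embodiment does not fix the fusion operator, and a U-net-style design would concatenate channels instead):

```python
def fuse_paths(compression_layers, expansion_layers):
    """Pair compression path layer i with expansion path layer n-1-i and
    fuse each pair by element-wise addition (layers as 2-D lists)."""
    assert len(compression_layers) == len(expansion_layers)
    fused = []
    for comp, exp in zip(compression_layers, reversed(expansion_layers)):
        fused.append([[c + e for c, e in zip(crow, erow)]
                      for crow, erow in zip(comp, exp)])
    return fused
```

The reversal is the key point: the first (shallowest, largest) compression layer matches the last (largest) expansion layer, so each fusion happens between layers of the same spatial size.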
Still further, the second processing module is further configured to perform a bidirectional asymmetric dilated (hole) convolution operation on the smallest-size compression path layer on the compression path to output the initial expansion path layer on the expansion path. In this embodiment, the smallest-size compression path layer on the compression path of the second processing module is the feature layer finally output by the downsampling residual modules on the compression path, i.e. the layer in which the overlapping cloud layer image has been reduced to its smallest dimensions. The bidirectional asymmetric dilated convolution operation adopted by the second processing module is a convolution sublayer of a convolution-kernel-based pyramid pooling module, which is an improved version of the atrous spatial pyramid module; the main improvement lies in the convolution model, namely the adoption of the bidirectional asymmetric dilated convolution (DADC) model. The model combines, in series and in parallel, a bidirectional asymmetric dilated convolution module, a global average pooling module, and a 1 × 1 convolution module; the structure of the convolution-kernel-based pyramid pooling module is shown in fig. 2. While keeping the spatial resolution of the features unchanged, this design greatly enlarges the receptive field of the central part of the network and fuses features of different depths and extents, so that the final feature map has both a sufficiently large receptive field and rich multidimensional semantic information, with the feature scale unchanged and no loss of relative spatial information.
The bidirectional asymmetric dilated convolution is defined as follows:
$$Y(i,j)=\sum_{k_1=0}^{K_1}\sum_{k_2=0}^{K_2} w(k_1,k_2)\,X\big(i+r_1 k_1,\; j+r_2 k_2\big)$$
where the convolution kernel size is $K_1 \times K_2$; $i$ and $j$ are the row and column indices of the output matrix $Y$; $X$ is the input feature at each pixel of the feature layer; $r_1$ and $r_2$ denote the dilation rates along the two dimensions of the feature space; $w$ denotes a weight value; $k_1$ takes any value from 0 to $K_1$, and $k_2$ any value from 0 to $K_2$. Furthermore, $K_1 = K_2$, and either $r_1 = 1$ or $r_2 = 1$: the bidirectional asymmetric dilated convolution module contains two complementary convolution kernels, and when one kernel has dilation rates $r_1 = 1$, $r_2 = r$, the other has $r_1 = r$, $r_2 = 1$. The output features of the two complementary kernels are fused by addition to extract richer spatial relationship information; the specific convolution layout is shown in fig. 3.
The serial-parallel bidirectional asymmetric dilated convolution realizes multi-scale feature fusion by adding together dilated convolutions of different dimensions, captures richer global semantic information by multiplying the network's receptive field without reducing the resolution of the feature map, and stacks feature information of different depths and widths through the serial-parallel structure. The decoder part upsamples with transposed convolutions to restore a semantic label map at the same scale as the original image, while the skip-connection part, differing from the U-net network, concatenates and fuses the encoder feature maps with the decoder feature maps. Moreover, cross-entropy loss, a target optimization function commonly used in semantic segmentation tasks, serves as the loss criterion for the semantic information of the overlapping segmentation processing, that is, it quantifies the semantic loss in the overlapping region. Its expression is as follows:
$$E=-\sum_{x\in\Omega} w(x)\,\log\big(p_{l(x)}(x)\big)$$
where $l$ is the true label of each pixel, $l: \Omega \to \{1, 2, \ldots, K\}$, and $w$ denotes a weight value measuring the importance of different pixels. The method assigns a larger penalty weight to the cloud-shadow class to compensate for the scarcity of cloud-shadow samples in the training data. $P$ is the result of softmax normalization of the network's last feature layer, with the specific expression:
$$p_k(x)=\frac{\exp\big(a_k(x)\big)}{\sum_{k'=1}^{K}\exp\big(a_{k'}(x)\big)}$$
where $a_{k'}(x)$ is the response value of pixel $x$ in the $k'$-th channel of the feature map, and $K$ is the number of classes.
The number of samples is generally expanded by data enhancement before the model is trained, which also improves the model's generalization ability. Because a satellite cloud layer image is isotropic, with no inherent up, down, left or right, rotating and flipping the images augments the data effectively. Meanwhile, since satellite cloud layer images are captured looking straight down, most objects remain semantically unchanged even when stretched or extended. Moreover, because the images show brightness variation due to different acquisition times, and the land cover differs greatly from place to place, color augmentation can also be used to enhance the input samples.
In actual processing, the satellite cloud layer image has a large width; for example, a GF-1 panoramic image is about 15000 × 15000 pixels. The specific processing method is as follows:
1. dividing the satellite cloud layer image into a plurality of 512 x 512 overlapped cloud layer images with overlapped areas;
2. passing the overlapped cloud layer image through a plurality of down-sampling residual error modules, and fusing an output feature layer of each down-sampling residual error module with an output feature layer of a corresponding up-sampling residual error module;
3. inputting the feature layer output by the last downsampling residual module into the bidirectional asymmetric dilated convolution model to obtain three feature layers for each overlapping cloud layer image, namely a cloud feature layer, a cloud shadow feature layer and a background feature layer, where the background feature layer comprises an ice-and-snow layer, a water body layer and a terrain shadow layer;
4. acquiring the feature probability of each pixel point of each feature layer after softmax normalization, and adding the feature probabilities of the same pixel points of different overlapped cloud layer images on the same feature layer to obtain the probability of each pixel point on different feature layers;
5. selecting the class with the maximum probability across the feature layers as the decided class of each pixel in the overlapping region. For example, if a pixel's probability is highest on the cloud shadow feature layer, the gray value output at that pixel on the output layer is 128, i.e. gray; if highest on the cloud feature layer, the output gray value is 255, i.e. white; and if highest on the mountain ice-and-snow background feature layer, the output gray value is 0, i.e. black.
For specific limitations of the satellite cloud image processing apparatus, reference may be made to the above limitations of the satellite cloud image processing method, and details are not repeated here. The modules in the satellite cloud layer image processing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
The application also provides a computer device, which can be a terminal, and the internal structure diagram of the computer device can be shown in fig. 4. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a satellite cloud image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the above method embodiments when executing the computer program.
In one embodiment, the present application further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps in the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The above-mentioned embodiments express only several implementations of the present invention, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A satellite cloud layer image processing method is characterized by comprising the following steps:
performing overlapping segmentation processing on the satellite cloud layer image to obtain a plurality of overlapping cloud layer images, wherein each overlapping cloud layer image has at least one overlapping region;
performing convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and adjusting the gray value of the corresponding pixel point on the output layer according to the characteristic probability.
2. The satellite cloud image processing method according to claim 1, wherein the performing overlapping segmentation processing on the satellite cloud image to obtain a plurality of overlapping cloud images includes:
and performing line-direction overlapping and separating processing on the satellite cloud layer image to obtain a plurality of line-direction overlapping cloud layer images.
3. The satellite cloud image processing method according to claim 1, wherein the performing overlapping segmentation processing on the satellite cloud image to obtain a plurality of overlapping cloud images includes:
and performing row-direction overlapping and separating processing on the satellite cloud layer images to obtain a plurality of row-direction overlapping cloud layer images.
4. The satellite cloud image processing method according to claim 1, wherein said performing a convolution classification operation on the same feature map layer of two overlapping cloud images comprises:
and performing residual error operation on the characteristic image layer of each overlapped cloud layer image.
5. The satellite cloud image processing method according to claim 4, wherein performing a residual operation on the feature layer of each of the overlapping cloud images includes:
and respectively carrying out down-sampling residual error processing and up-sampling residual error processing on the characteristic image layers to form a plurality of compression path image layers and a plurality of expansion path image layers.
6. The method according to claim 5, wherein the performing down-sampling residual processing and up-sampling residual processing on the feature layer respectively further comprises:
and performing feature fusion operation on the compressed path layer and the corresponding expanded path layer.
7. The satellite cloud image processing method according to claim 5, wherein said performing a convolution classification operation on the same feature map layer of two of said overlapping cloud images further comprises:
and performing bidirectional asymmetric hole convolution operation on the minimum-size compression path layer on the compression path to output the initial expansion path layer on the expansion path.
8. A satellite cloud image processing apparatus, the apparatus comprising:
the satellite cloud layer image processing device comprises a first processing module, a second processing module and a third processing module, wherein the first processing module is used for performing overlapping segmentation processing on a satellite cloud layer image to obtain a plurality of overlapping cloud layer images, and each overlapping cloud layer image is provided with at least one overlapping area;
the second processing module is used for carrying out convolution classification operation on the same feature layers of the two overlapped cloud layer images to obtain the feature probability of each pixel point of each feature layer in the overlapped region;
and the gray level adjusting module is used for adjusting the gray level value of the corresponding pixel point on the output layer according to the characteristic probability.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202110529241.5A 2021-05-14 2021-05-14 Satellite cloud layer image processing method and device, computer equipment and storage medium Pending CN113284153A (en)

Published as CN113284153A on 2021-08-20.

Family

ID=77279213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110529241.5A Pending CN113284153A (en) 2021-05-14 2021-05-14 Satellite cloud layer image processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113284153A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643312A (en) * 2021-10-12 2021-11-12 江苏维沛通信科技发展有限公司 Cloud layer segmentation method based on true color satellite cloud picture and image processing
CN116664449A (en) * 2023-07-26 2023-08-29 中色蓝图科技股份有限公司 Satellite image processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101844866B1 (en) * 2017-11-07 2018-04-03 이승찬 System for processing and providing satellite image, and method thereof
CN108564587A (en) * 2018-03-07 2018-09-21 浙江大学 A kind of a wide range of remote sensing image semantic segmentation method based on full convolutional neural networks
CN110363780A * 2019-07-23 2019-10-22 Tencent Technology (Shenzhen) Co., Ltd. Image segmentation method, device, computer-readable storage medium and computer equipment
US20200272825A1 (en) * 2019-05-27 2020-08-27 Beijing Dajia Internet Information Technology Co., Ltd. Scene segmentation method and device, and storage medium
CN112132145A * 2020-08-03 2020-12-25 Shenzhen University Image classification method and system based on model extended convolutional neural network

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643312A (en) * 2021-10-12 2021-11-12 江苏维沛通信科技发展有限公司 Cloud layer segmentation method based on true color satellite cloud picture and image processing
CN116664449A (en) * 2023-07-26 2023-08-29 中色蓝图科技股份有限公司 Satellite image processing method
CN116664449B (en) * 2023-07-26 2023-10-13 中色蓝图科技股份有限公司 Satellite image processing method

Similar Documents

Publication Publication Date Title
CN115331087B (en) Remote sensing image change detection method and system fusing regional semantics and pixel characteristics
CN114187450B (en) Remote sensing image semantic segmentation method based on deep learning
CN113609889B (en) High-resolution remote sensing image vegetation extraction method based on sensitive characteristic focusing perception
CN114863236B (en) Image target detection method based on dual-attention mechanism
CN111291826B (en) Pixel-by-pixel classification method of multisource remote sensing image based on correlation fusion network
CN114092833B (en) Remote sensing image classification method and device, computer equipment and storage medium
CN113901900A (en) Unsupervised change detection method and system for homologous or heterologous remote sensing image
CN113284153A (en) Satellite cloud layer image processing method and device, computer equipment and storage medium
CN114494821B (en) Remote sensing image cloud detection method based on feature multi-scale perception and self-adaptive aggregation
CN113159300A (en) Image detection neural network model, training method thereof and image detection method
CN113971764B Remote sensing image small target detection method based on improved YOLOv3
CN111523439B (en) Method, system, device and medium for target detection based on deep learning
CN116645592B (en) Crack detection method based on image processing and storage medium
CN116258976A (en) Hierarchical transducer high-resolution remote sensing image semantic segmentation method and system
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN111462050A Improved YOLOv3 small-target remote sensing image detection method, device and storage medium
CN112348116A (en) Target detection method and device using spatial context and computer equipment
CN111325134B (en) Remote sensing image change detection method based on cross-layer connection convolutional neural network
CN113743346A (en) Image recognition method and device, electronic equipment and storage medium
CN113705538A (en) High-resolution remote sensing image road change detection device and method based on deep learning
CN117671452A (en) Construction method and system of broken gate detection model of lightweight up-sampling YOLOX
CN115861922B (en) Sparse smoke detection method and device, computer equipment and storage medium
CN113902744B (en) Image detection method, system, equipment and storage medium based on lightweight network
CN116310899A (en) YOLOv 5-based improved target detection method and device and training method
CN114359232B (en) Image change detection method and device based on context covariance matrix

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516000 Tonghu ecological wisdom Zone Innovation Park, 137 Zhongkai 6 road, Chenjiang street, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: Huizhou Institute of Spatial Information Technology

Address before: 516000 Tonghu ecological wisdom Zone Innovation Park, 137 Zhongkai 6 road, Chenjiang street, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant before: Huizhou Institute of Space Information Technology, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences
