CN114519819B - Remote sensing image target detection method based on global context awareness - Google Patents

Remote sensing image target detection method based on global context awareness

Info

Publication number
CN114519819B
CN114519819B
Authority
CN
China
Prior art keywords
feature
target
feature map
candidate region
features
Prior art date
Legal status
Active
Application number
CN202210126106.0A
Other languages
Chinese (zh)
Other versions
CN114519819A (en)
Inventor
张科
吴虞霖
王靖宇
苏雨
张烨
李浩宇
谭明虎
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210126106.0A priority Critical patent/CN114519819B/en
Publication of CN114519819A publication Critical patent/CN114519819A/en
Application granted granted Critical
Publication of CN114519819B publication Critical patent/CN114519819B/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image target detection method based on global context awareness. A deep residual network (ResNet101) extracts features from the image, and a feature pyramid network (FPN) further extracts features and generates candidate regions; after the candidate regions are generated, feature pooling is used to align their features; a global context extraction module is added at the highest layer of the feature extraction network, and the extracted features are fused with the original features by addition to obtain new features; finally, fully connected layers classify the new features to produce target categories and bounding boxes. By exploiting the rich semantic information of high-level features, the invention fully extracts the scene information of the image, strengthens the feature representation, increases the recognition accuracy of dense targets, and also improves the recognition accuracy of other targets to a certain extent, thereby improving the overall target detection performance in remote sensing images.

Description

Remote sensing image target detection method based on global context awareness
Technical Field
The invention belongs to the technical field of pattern recognition, and particularly relates to a remote sensing image target detection method.
Background
Remote sensing image analysis has long been a research hotspot in computer vision, with wide application in urban planning, land use management, environmental monitoring and other fields. Target detection is a basic task in computer vision that supports subsequent tasks such as event detection, target tracking, human-computer interaction and scene segmentation. Remote sensing images are usually captured from high altitude, and the shooting angle and height vary with the airborne or spaceborne sensor. Compared with natural images, remote sensing images contain richer scene information, more target categories and denser target arrangements, so remote sensing image target detection faces great challenges. Although some algorithms have been proposed for it, there is still room for improvement in performance, and remote sensing image target detection remains one of the hot problems of current research.
Shi Wenxu et al. (Feature-enhanced SSD algorithm and its application in remote sensing target detection, Acta Photonica Sinica, 2020, 49(01): 154-163) proposed a feature-enhanced target detection algorithm based on multi-scale single-shot detection to improve the detection accuracy of multi-scale remote sensing targets in complex scenes. The method designs a shallow feature enhancement module to improve the network's ability to extract small-target features, and a deep feature enhancement module to replace the deep network in the SSD pyramid feature layers. However, the method does not fully utilize the rich scene information in remote sensing images, and the improvement is limited.
The original FPN detection algorithm performs poorly on dense targets in remote sensing images because the feature pyramid network lacks sufficient scene information. Detecting dense targets requires scene information; for example, cars appear only in parking lots or on roads, and their surroundings are typically other cars. Without awareness of this context information, i.e. the global context, the network has difficulty identifying dense targets.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a remote sensing image target detection method based on global context awareness. A deep residual network (ResNet101) extracts features from the image, and a feature pyramid network (FPN) further extracts features and generates candidate regions; after the candidate regions are generated, feature pooling is used to align their features; a global context extraction module is added at the highest layer of the feature extraction network, and the extracted features are fused with the original features by addition to obtain new features; finally, fully connected layers classify the new features to produce target categories and bounding boxes. By exploiting the rich semantic information of high-level features, the invention fully extracts the scene information of the image, strengthens the feature representation, increases the recognition accuracy of dense targets, and also improves the recognition accuracy of other targets to a certain extent, thereby improving the overall target detection performance in remote sensing images.
To solve the above technical problem, the invention adopts a technical solution comprising the following steps:
Step 1: preprocess and partition the data set;
uniformly crop the annotated images in a standard data set into a plurality of 1024 × 1024 images, keeping a 10% pixel overlap in each direction during cropping, and randomly divide them into a training set, a validation set and a test set, where the three sets have no intersection;
Step 2: construct a target detection deep neural network and train it with gradient descent and the back-propagation algorithm; the network first extracts features with a ResNet101 residual network, then generates candidate regions with a feature pyramid network (FPN), then performs global context awareness on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers. The specific steps are as follows:
step 2-1: initializing Res101 model parameters by using a pre-training model;
Step 2-2: input the 1024 × 1024 images into the ResNet101 residual network to extract features, generating six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively;
Step 2-3: perform global max pooling on feature map C6 to obtain scene features containing scene information; pass the scene features through a 1 × 1 convolution to obtain the global features;
Step 2-4: take feature map C5 as feature map P5 of the feature pyramid;
upsample feature map C5 and add it to feature map C4 after a 1 × 1 convolution to generate feature map P4 of the feature pyramid;
upsample feature map C4 and add it to feature map C3 after a 1 × 1 convolution to generate feature map P3 of the feature pyramid;
upsample feature map C3 and add it to feature map C2 after a 1 × 1 convolution to generate feature map P2 of the feature pyramid;
Step 2-5: the feature maps P2, P3, P4 and P5 of the feature pyramid have sizes 256², 128², 64² and 32² respectively; generate anchor points for each feature map in the feature pyramid with a region proposal network, each anchor point having the three aspect ratios 1:2, 1:1 and 2:1; the feature pyramid thus generates 15 different anchors;
Target candidate regions are generated from the anchors with the following formula:

x1 = xc - w/2,  y1 = yc - h/2,  x2 = xc + w/2,  y2 = yc + h/2    (1)

where (xc, yc) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x1, y1) and (x2, y2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
Calculate the intersection over union (IoU) between each target candidate region and the ground-truth label: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, it is set as a negative sample; the obtained positive and negative samples serve as the labels for training the target candidate regions;
Step 2-6: perform feature pooling on the target candidate regions, computing the feature layer corresponding to each target candidate region after feature pooling with formula (2):

k = ⌊k0 + log2(√(wh) / 1024)⌋    (2)

where 1024 is the input image size and k0 is a reference value;
Because the target candidate regions are generated by anchors from the four different feature maps P2, P3, P4 and P5, they likewise correspond to 4 different feature layers after feature pooling;
the assignment of target candidate regions to these 4 feature layers follows formula (2);
After feature pooling of the target candidate regions in feature maps P2, P3, P4 and P5, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
Step 2-7: add the 49 features obtained in step 2-6 to the global features obtained in step 2-3, feed them sequentially through two fully connected layers, and output the results of the two fully connected layers as the target category and the target bounding box;
Step 3: input the remote sensing image to be detected into the trained target detection deep neural network, and output the category and bounding box of the target.
Preferably, k0 = 4.
The beneficial effects of the invention are as follows:
the invention fully extracts the scene information of the image by utilizing the characteristic of abundant semantic information of the high-level features, further strengthens the feature representation, increases the recognition accuracy of dense targets, and also improves the recognition accuracy of other targets to a certain extent, thereby improving the target detection performance in the remote sensing image as a whole.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a block diagram of a deep neural network for object detection in accordance with the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
Most existing remote sensing image target detection algorithms apply generic target detection methods to remote sensing images. Aiming at the rich scene information, numerous target categories and dense target arrangement in remote sensing images, the invention designs a remote sensing image target detection method based on global context awareness.
As shown in fig. 1, a remote sensing image target detection method based on global context awareness includes the following steps:
step 1: the DOTA dataset is processed. Because the original data image of the DOTA data set is not fixed in size, and the labeling data of the test set are not disclosed, 1869 images with labels are uniformly cut into 1024 x 1024 images for facilitating the training of the neural network, and the width and the height respectively keep the coincidence rate of 10% pixels for preventing the target from being lost due to image cutting during cutting. 19219 images and labeling information thereof are obtained after processing, and are randomly divided into 11531 images of a training set, 3844 images of a verification set and 3844 images of a test set, so that no intersection of the training set, the verification set and the test set on an image sample space is ensured.
Step 2: as shown in fig. 2, construct a target detection deep neural network and train it with gradient descent and the back-propagation algorithm; the network first extracts features with a ResNet101 residual network, then generates candidate regions with a feature pyramid network FPN, then performs global context awareness on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers. The specific steps are as follows:
Step 2-1: because the neural network has numerous parameters and is difficult to train from scratch, the ResNet101 model parameters are initialized with a pre-trained model before training;
Step 2-2: train the neural network on the training data set. Inputting the 1024 × 1024 images into the ResNet101 residual network to extract features generates six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively. C2, C3, C4 and C5 are selected to build the pyramid; C1 is not used because it would occupy excessive memory;
Step 2-3: perform global max pooling on feature map C6 to obtain scene features containing scene information; pass the scene features through a 1 × 1 convolution to obtain the global features;
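Step 2-3 can be sketched as follows. The channel count and random weights are illustrative assumptions, and the 1 × 1 convolution reduces to a matrix-vector product only because global pooling leaves a 1 × 1 spatial map:

```python
import numpy as np

# Global context extraction: per-channel global max pooling of C6, followed by
# a 1x1 convolution (here a channel-mixing matrix) to form the global feature.
rng = np.random.default_rng(0)
c6 = rng.standard_normal((256, 16, 16))      # feature map C6 (channels, H, W)
scene = c6.max(axis=(1, 2))                  # global max pool -> (256,)
w = rng.standard_normal((256, 256)) * 0.01   # 1x1 conv weights (out, in)
b = np.zeros(256)                            # 1x1 conv bias
global_feat = w @ scene + b                  # global context feature, (256,)
```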
Step 2-4: take feature map C5 as feature map P5 of the feature pyramid;
upsample feature map C5 and add it to feature map C4 after a 1 × 1 convolution to generate feature map P4 of the feature pyramid;
upsample feature map C4 and add it to feature map C3 after a 1 × 1 convolution to generate feature map P3 of the feature pyramid;
upsample feature map C3 and add it to feature map C2 after a 1 × 1 convolution to generate feature map P2 of the feature pyramid;
The 1 × 1 convolutions ensure that the feature maps being added have the same number of channels;
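A minimal sketch of one top-down merge in step 2-4, assuming channel-first arrays, nearest-neighbour 2× upsampling, and an einsum-based 1 × 1 lateral convolution; shapes and weights are illustrative, not the patent's actual parameters:

```python
import numpy as np

def upsample2x(x: np.ndarray) -> np.ndarray:
    """Double the spatial size by nearest-neighbour repetition."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def lateral_1x1(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """1x1 convolution as per-pixel channel mixing; w: (out, in), x: (in, H, W)."""
    return np.einsum('oi,ihw->ohw', w, x)

rng = np.random.default_rng(0)
c5 = rng.standard_normal((256, 32, 32))      # coarser level
c4 = rng.standard_normal((512, 64, 64))      # finer level, more channels
w4 = rng.standard_normal((256, 512)) * 0.01  # lateral conv matches channels
p4 = upsample2x(c5) + lateral_1x1(c4, w4)    # feature map P4: (256, 64, 64)
```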
Step 2-5: the feature maps P2, P3, P4 and P5 of the feature pyramid have sizes 256², 128², 64² and 32² respectively; generate anchor points for each feature map in the feature pyramid with a region proposal network, each anchor point having the three aspect ratios 1:2, 1:1 and 2:1; the feature pyramid thus generates 15 different anchors;
Target candidate regions are generated from the anchors with the following formula:

x1 = xc - w/2,  y1 = yc - h/2,  x2 = xc + w/2,  y2 = yc + h/2    (1)

where (xc, yc) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x1, y1) and (x2, y2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
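The anchor-to-region conversion of step 2-5 can be sketched directly from the symbol definitions above; the equal-area construction of the three aspect-ratio boxes is an illustrative assumption of this sketch:

```python
import math

def anchor_to_box(xc, yc, w, h):
    """Corner coordinates of the candidate region centred at anchor (xc, yc)."""
    return (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

def ratio_anchors(xc, yc, area, ratios=(0.5, 1.0, 2.0)):
    """Boxes of equal area whose width:height equals each aspect ratio (w/h)."""
    boxes = []
    for r in ratios:
        w = math.sqrt(area * r)   # w/h = r and w*h = area
        h = math.sqrt(area / r)
        boxes.append(anchor_to_box(xc, yc, w, h))
    return boxes
```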
Calculate the intersection over union IoU (Intersection over Union) between each target candidate region and the ground-truth label: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, it is set as a negative sample; the obtained positive and negative samples serve as the labels (one per anchor) for training the target candidate regions;
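A minimal sketch of the IoU computation and the 0.7/0.3 label assignment; treating candidates between the two thresholds as ignored is an assumption, since the text does not say how they are handled:

```python
# Boxes are (x1, y1, x2, y2). Thresholds 0.7 / 0.3 follow the text.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def assign_label(candidate, gt_boxes):
    """1 = positive, 0 = negative, None = ignored during training."""
    best = max((iou(candidate, g) for g in gt_boxes), default=0.0)
    if best >= 0.7:
        return 1
    if best < 0.3:
        return 0
    return None
```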
Step 2-6: perform feature pooling on the target candidate regions, computing the feature layer corresponding to each target candidate region after feature pooling with formula (2):

k = ⌊k0 + log2(√(wh) / 1024)⌋    (2)

where 1024 is the input image size and the reference value k0 is taken as 4;
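Assuming formula (2) follows the standard FPN level-assignment rule with the 1024-pixel input size and k0 = 4, the mapping can be sketched as follows; the clamp to levels P2..P5 mirrors the four feature layers named in the text:

```python
import math

def roi_level(w: float, h: float, k0: int = 4, img: int = 1024) -> int:
    """Pyramid level P_k from which a w x h candidate region is pooled."""
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / img))
    return min(5, max(2, k))  # clamp to the available levels P2..P5
```

For example, a region covering the whole 1024 × 1024 image maps to P4, while small regions fall through to P2.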
Because the target candidate regions are generated by anchors from the four different feature maps P2, P3, P4 and P5, they likewise correspond to 4 different feature layers after feature pooling;
the assignment of target candidate regions to these 4 feature layers follows formula (2);
After feature pooling of the target candidate regions in feature maps P2, P3, P4 and P5, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
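The 7 × 7 feature pooling can be sketched as a gridded max pool over a single-channel region; real implementations interpolate (RoIAlign-style), so this integer-binned version is only illustrative:

```python
import numpy as np

def roi_pool(feat: np.ndarray, x1: int, y1: int, x2: int, y2: int, out: int = 7):
    """Max-pool a region of a single-channel feature map onto an out x out grid."""
    region = feat[y1:y2, x1:x2]
    ys = np.linspace(0, region.shape[0], out + 1).astype(int)
    xs = np.linspace(0, region.shape[1], out + 1).astype(int)
    pooled = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            pooled[i, j] = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return pooled  # 7 x 7 grid -> 49 features
```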
Step 2-7: add the 49 features obtained in step 2-6 to the global features obtained in step 2-3, feed them sequentially through two fully connected layers, and output the results of the two fully connected layers as the target category and the target bounding box;
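Step 2-7 can be sketched as follows; the element-wise fusion of equal-length vectors, the hidden width, the 20-class output and the random weights are all illustrative assumptions, not fixed by the patent text:

```python
import numpy as np

# Fuse RoI features with the global context feature by addition, then pass the
# result through two fully connected layers whose output is split into class
# scores and bounding-box coordinates.
rng = np.random.default_rng(0)
pooled = rng.standard_normal(49)          # flattened 7x7 RoI features
global_feat = rng.standard_normal(49)     # global context, matched in size
fused = pooled + global_feat              # addition-based fusion

w1 = rng.standard_normal((64, 49)) * 0.1  # first fully connected layer
w2 = rng.standard_normal((24, 64)) * 0.1  # second fully connected layer
hidden = np.maximum(0.0, w1 @ fused)      # ReLU activation
out = w2 @ hidden
scores, box = out[:20], out[20:]          # 20 class scores + 4 box coords
pred_class = int(np.argmax(scores))
```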
step 3: and inputting the remote sensing image to be detected into a trained target detection depth neural network, and outputting to obtain a class boundary box of the target.

Claims (2)

1. A remote sensing image target detection method based on global context awareness, characterized by comprising the following steps:
step 1: preprocessing and partitioning a data set;
uniformly cropping the annotated images in a standard data set into a plurality of 1024 × 1024 images, keeping a 10% pixel overlap in each direction during cropping, and randomly dividing them into a training set, a validation set and a test set, wherein the training set, the validation set and the test set have no intersection;
step 2: constructing a target detection deep neural network and training it with gradient descent and the back-propagation algorithm; the target detection deep neural network first extracts features with a ResNet101 residual network, then generates candidate regions with a feature pyramid network FPN, then performs global context awareness on the candidate regions, and finally obtains target categories and bounding boxes through feature pooling and fully connected layers, with the following specific steps:
step 2-1: initializing the ResNet101 model parameters with a pre-trained model;
step 2-2: inputting the 1024 × 1024 images into the ResNet101 residual network to extract features, generating six feature maps of different sizes, denoted C1-C6, with scales 512 × 512, 256 × 256, 128 × 128, 64 × 64, 32 × 32 and 16 × 16 respectively;
step 2-3: performing global max pooling on feature map C6 to obtain scene features containing scene information; passing the scene features through a 1 × 1 convolution to obtain global features;
step 2-4: taking feature map C5 as feature map P5 of the feature pyramid;
upsampling feature map C5 and adding it to feature map C4 after a 1 × 1 convolution to generate feature map P4 of the feature pyramid;
upsampling feature map C4 and adding it to feature map C3 after a 1 × 1 convolution to generate feature map P3 of the feature pyramid;
upsampling feature map C3 and adding it to feature map C2 after a 1 × 1 convolution to generate feature map P2 of the feature pyramid;
step 2-5: the feature maps P2, P3, P4 and P5 of the feature pyramid have sizes 256², 128², 64² and 32² respectively; generating anchor points for each feature map in the feature pyramid with a region proposal network, each anchor point having the three aspect ratios 1:2, 1:1 and 2:1, so that the feature pyramid generates 15 different anchors;
generating target candidate regions from the anchors with the following formula:

x1 = xc - w/2,  y1 = yc - h/2,  x2 = xc + w/2,  y2 = yc + h/2    (1)

wherein (xc, yc) are the anchor point coordinates, (w, h) are the width and height of the target candidate region, and (x1, y1) and (x2, y2) are the coordinates of the upper-left and lower-right corners of the target candidate region;
calculating the intersection over union IoU between each target candidate region and the ground-truth label: if IoU ≥ 0.7, the target candidate region is set as a positive sample; if IoU < 0.3, the target candidate region is set as a negative sample; taking the obtained positive and negative samples as labels for training the target candidate regions;
step 2-6: performing feature pooling on the target candidate regions, and computing the feature layer corresponding to each target candidate region after feature pooling with formula (2):

k = ⌊k0 + log2(√(wh) / 1024)⌋    (2)

wherein 1024 is the input image size and k0 is a reference value;
because the target candidate regions are generated by anchors from the four different feature maps P2, P3, P4 and P5, they likewise correspond to 4 different feature layers after feature pooling;
the assignment of target candidate regions to these 4 feature layers follows formula (2);
after feature pooling of the target candidate regions in feature maps P2, P3, P4 and P5, each target candidate region outputs a 7 × 7 result, i.e., 49 features are extracted;
step 2-7: adding the 49 features obtained in step 2-6 to the global features obtained in step 2-3, feeding them sequentially through two fully connected layers, and outputting the results of the two fully connected layers as the target category and the target bounding box;
step 3: inputting the remote sensing image to be detected into the trained target detection deep neural network, and outputting the category and bounding box of the target.
2. The remote sensing image target detection method based on global context awareness according to claim 1, wherein k0 = 4.
CN202210126106.0A 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness Active CN114519819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210126106.0A CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210126106.0A CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Publications (2)

Publication Number Publication Date
CN114519819A (en) 2022-05-20
CN114519819B (en) 2024-04-02

Family

ID=81596492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210126106.0A Active CN114519819B (en) 2022-02-10 2022-02-10 Remote sensing image target detection method based on global context awareness

Country Status (1)

Country Link
CN (1) CN114519819B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937672A (en) * 2022-11-22 2023-04-07 南京林业大学 Remote sensing rotating target detection method based on deep neural network
CN116486077B (en) * 2023-04-04 2024-04-30 中国科学院地理科学与资源研究所 Remote sensing image semantic segmentation model sample set generation method and device

Citations (6)

Publication number Priority date Publication date Assignee Title
WO2018214195A1 (en) * 2017-05-25 2018-11-29 中国矿业大学 Remote sensing imaging bridge detection method based on convolutional neural network
CN111368775A (en) * 2020-03-13 2020-07-03 西北工业大学 Complex scene dense target detection method based on local context sensing
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN112070729A (en) * 2020-08-26 2020-12-11 西安交通大学 Anchor-free remote sensing image target detection method and system based on scene enhancement
CN112766409A (en) * 2021-02-01 2021-05-07 西北工业大学 Feature fusion method for remote sensing image target detection
CN113111740A (en) * 2021-03-27 2021-07-13 西北工业大学 Characteristic weaving method for remote sensing image target detection


Non-Patent Citations (1)

Title
Automatic building identification from high-resolution imagery combining dilated convolution residual networks and pyramid pooling representation; Qiao Wenfan; Shen Li; Dai Yanshuai; Cao Yungang; Geography and Geo-Information Science; 2018-08-27 (05); full text *

Also Published As

Publication number Publication date
CN114519819A (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN110188705B (en) Remote traffic sign detection and identification method suitable for vehicle-mounted system
CN111145174B (en) 3D target detection method for point cloud screening based on image semantic features
CN112288008B (en) Mosaic multispectral image disguised target detection method based on deep learning
CN110659664B (en) SSD-based high-precision small object identification method
CN111832655A (en) Multi-scale three-dimensional target detection method based on characteristic pyramid network
CN114519819B (en) Remote sensing image target detection method based on global context awareness
CN113076871A (en) Fish shoal automatic detection method based on target shielding compensation
CN111461039B (en) Landmark identification method based on multi-scale feature fusion
CN113609896A (en) Object-level remote sensing change detection method and system based on dual-correlation attention
CN113191204B (en) Multi-scale blocking pedestrian detection method and system
CN112560675A (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN111768415A (en) Image instance segmentation method without quantization pooling
Zang et al. Traffic lane detection using fully convolutional neural network
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
Cai et al. Vehicle Detection Based on Deep Dual‐Vehicle Deformable Part Models
CN117152414A (en) Target detection method and system based on scale attention auxiliary learning method
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
CN111368775A (en) Complex scene dense target detection method based on local context sensing
CN112668662B (en) Outdoor mountain forest environment target detection method based on improved YOLOv3 network
CN113111740A (en) Characteristic weaving method for remote sensing image target detection
CN114882490B (en) Unlimited scene license plate detection and classification method based on point-guided positioning
CN114494893B (en) Remote sensing image feature extraction method based on semantic reuse context feature pyramid
CN114550016A (en) Unmanned aerial vehicle positioning method and system based on context information perception
Yan et al. Building Extraction at Amodal-Instance-Segmentation Level: Datasets and Framework
Li et al. Cloud detection from remote sensing images by cascaded U-shape attention networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant