CN115908894A - Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation - Google Patents

Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation

Info

Publication number
CN115908894A
CN115908894A (Application CN202211328346.5A)
Authority
CN
China
Prior art keywords
segmentation
panoramic
image
initial
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211328346.5A
Other languages
Chinese (zh)
Inventor
汪承义
郭艳君
陈建胜
杜云艳
王雷
汪祖家
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Information Research Institute of CAS
Original Assignee
Aerospace Information Research Institute of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute of CAS filed Critical Aerospace Information Research Institute of CAS
Priority to CN202211328346.5A priority Critical patent/CN115908894A/en
Publication of CN115908894A publication Critical patent/CN115908894A/en
Priority to PCT/CN2023/092747 priority patent/WO2024087574A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, which comprises the following steps: acquiring an image to be segmented, the image to be segmented being an optical remote sensing image of a marine culture area; inputting the image to be segmented into a pre-trained panoramic segmentation model and predicting a multi-classification segmentation result; performing semantic segmentation on the image to be segmented with a semantic segmentation branch network to obtain an initial semantic segmentation result; performing instance segmentation on the image to be segmented with an instance segmentation branch network to obtain an initial instance segmentation result; and fusing the initial semantic segmentation result and the initial instance segmentation result with a panoramic fusion module to obtain the multi-classification segmentation result. The invention can effectively exploit the rich and varied information in remote sensing images, realize high-precision multi-classification recognition of ocean raft culture areas, and improve the segmentation precision of the panoramic segmentation model.

Description

Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation
Technical Field
The invention relates to the technical field of ocean remote sensing and image processing, in particular to a method for classifying optical remote sensing image ocean raft type culture zones based on panoramic segmentation.
Background
Ocean raft culture is an important component of marine aquaculture. Compared with pond and tidal-flat culture near the coast, raft culture covers a wide range and its areas are dispersed; field measurement with traditional navigation and positioning systems is not only time-consuming and labour-intensive but also makes it difficult to obtain accurate results over large areas. The development of remote sensing technology largely overcomes the shortcomings of small coverage and insufficient data acquisition of traditional ground measurement. Meanwhile, quickly and accurately obtaining the distribution and condition of marine culture areas with deep learning algorithms is a reliable and advanced technical means for dynamic monitoring of ocean raft culture. To support the development and utilization of ocean resources and macro-level regulation, the classification of marine culture areas needs to be more detailed without reducing segmentation precision, for example into fish (net cages), algae (longlines) and shellfish (floating rafts).
Synthetic Aperture Radar (SAR) works day and night and in all weather and is widely used in remote sensing, but SAR images suffer from low resolution, susceptibility to noise, severe geometric distortion and few usable features. With the development of optical remote sensing, the resolution of optical remote sensing images has improved greatly and the feature information they contain has become increasingly rich; however, some optical images are affected by cloud, fog and illumination, and these interference factors restrict the extraction of feature information from optical remote sensing images and increase the difficulty of target recognition and segmentation.
Existing convolutional-neural-network extraction of mariculture areas mainly falls into semantic segmentation and instance segmentation, for example improved SOLO, D-ResUnet and HCHNet segmentation algorithms. Both semantic segmentation and instance segmentation are pixel-level classification. During training of a semantic segmentation model, the prediction of each pixel is mapped to a probability in [0,1] by a Softmax function, the error between the prediction and the ground-truth label is measured by a cross-entropy loss function, and the model is trained by gradient descent until this error reaches a minimum. The more target classes there are in semantic segmentation, i.e. the more labels in the data set, the more interference each target receives when it is recognized and segmented. Reflected in the mathematical model, the predicted probability distribution of each sample becomes more dispersed and its variance larger, which makes it harder for the predicted distribution to concentrate on a single label value, slows the convergence of the loss function and lowers segmentation and recognition accuracy. It follows that multi-class recognition and high-precision segmentation constrain each other in the semantic segmentation task, so the refined classification of ocean raft culture areas cannot be accomplished by simple target detection, recognition and segmentation.
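For reference, the Softmax mapping and the cross-entropy loss referred to above take the standard form (the notation below is ours, not the patent's):

```latex
p_{i,c} = \frac{e^{z_{i,c}}}{\sum_{k=1}^{C} e^{z_{i,k}}},
\qquad
\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log p_{i,c},
```

where z_{i,c} is the logit of pixel i for class c, y_{i,c} the one-hot label, C the number of classes and N the number of pixels; as C grows, the probability mass p_{i,·} spreads over more classes, which is the dispersion effect described above.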
The data labels of existing ocean raft culture area convolutional neural network models are only suitable for training single-task semantic or instance segmentation models; data labels for multi-classification panoramic segmentation of marine culture areas are lacking.
In view of this, a multi-task, multi-classification panoramic segmentation method for marine culture areas is needed.
Disclosure of Invention
The invention provides an optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, which is used to solve the above problems.
The invention provides a panoramic segmentation-based optical remote sensing image ocean raft culture area classification method, which comprises the following steps:
acquiring an image to be segmented, wherein the image to be segmented is an optical remote sensing image of a marine culture area;
inputting the image to be segmented into a pre-trained panoramic segmentation model, and predicting to obtain a multi-classification segmentation result, wherein the multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories;
the pre-trained panorama segmentation model comprises a semantic segmentation branch network, an instance segmentation branch network and a panorama fusion module;
performing semantic segmentation on the image to be segmented by using the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area;
carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network so as to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories;
and fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the semantic segmentation branch network is an improved U²-Net network; the improved U²-Net network comprises at least 6 secondary encoders of U-shaped structure and 5 secondary decoders of U-shaped structure, the 6 secondary encoders being, in turn, 4 first secondary encoders and 2 second secondary encoders, and the 5 secondary decoders being, in turn, 4 first secondary decoders and 1 second secondary decoder; each first secondary encoder and each first secondary decoder is composed, in sequence, of a first convolution block, an LSFE module, a plurality of down-sampling modules, a DPC module, a second convolution block, a first convolution block and a plurality of up-sampling modules;
the LSFE module is used for extracting culture-area features over a large field of view and comprises separable convolutions and output filters;
the DPC module is used for capturing long-range context information and comprises separable convolutions and output channels.
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the instance segmentation branch network comprises an improved SOTR network, and the improved SOTR network comprises at least a Transformer module; the Transformer module comprises separable convolutions and an iABN synchronized layer; the Transformer module is used for predicting each instance class.
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the instance segmentation branch network further comprises a feature extraction module, and the feature extraction module comprises a mobile inverted bottleneck unit and a bidirectional feature pyramid network;
performing feature extraction on the image to be segmented by using the feature extraction module to obtain multi-scale features;
performing instance segmentation on the image to be segmented through the improved SOTR network based on the multi-scale features to obtain an initial instance segmentation result.
According to the optical remote sensing image ocean raft culture area classification method based on panoramic segmentation provided by the invention, the pre-trained panoramic segmentation model is obtained by training in the following way:
acquiring a training data set and a label corresponding to the training data set, and constructing a panoramic segmentation model; wherein the labels comprise semantic labels of a raft culture area and a non-raft culture area and example labels of various culture area categories;
inputting the training data set into the semantic segmentation branch network, predicting to obtain a training semantic segmentation result, and calculating the loss between the training semantic segmentation result and the semantic label to obtain a first loss;
inputting the training data set into the instance segmentation branch network, predicting to obtain a training instance segmentation result, and calculating the loss between the training instance segmentation result and the instance label to obtain a second loss;
carrying out self-adaptive fusion on the training semantic segmentation result and the training instance segmentation result by using the panoramic fusion module to obtain a training multi-classification result;
and acquiring total loss according to the first loss and the second loss, and training the panoramic segmentation model based on the training multi-classification result and the total loss until the panoramic segmentation model is converged to acquire the trained panoramic segmentation model.
According to the optical remote sensing image ocean raft type culture zone classification method based on panoramic segmentation provided by the invention, after the training data set and the corresponding label are obtained, the method further comprises the following steps:
respectively constructing normalized vegetation index features and normalized water body index features of the training data set;
fusing the normalized vegetation index features and normalized water index features with the training dataset to obtain a shared synthetic dataset;
accordingly, the inputting the training data set into the semantic segmentation branch network comprises:
inputting the shared synthetic dataset into the semantic segmentation branch network;
the inputting the training data set into the instance segmentation branch network comprises:
inputting the shared synthetic dataset into the instance splitting branching network.
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the training data set comprises a label data set and an adversarial sample set; the label data set is a labeled data set provided with corresponding labels, and the adversarial sample set is obtained by carrying out adversarial training on the training instance segmentation results.
According to the optical remote sensing image ocean raft type culture zone classification method based on panoramic segmentation, the label data set is obtained through the following method:
the method comprises the steps of obtaining an optical remote sensing image of a training marine culture area, and performing at least storage format unification, cloud and fog removal processing, normalization processing and cutting processing on the optical remote sensing image.
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the multiple culture area categories at least comprise fishes, algae and shellfishes.
The invention also provides a panoramic-segmentation-based optical remote sensing image ocean raft type culture area classification device, which comprises:
the image acquisition module is used for acquiring an image to be segmented, wherein the image to be segmented is an optical remote sensing image of a marine culture area;
the image segmentation module is used for inputting the image to be segmented into a pre-trained panoramic segmentation model and predicting to obtain a multi-classification segmentation result, wherein the multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories;
the pre-trained panorama segmentation model comprises a semantic segmentation branch network, an instance segmentation branch network and a panorama fusion module;
performing semantic segmentation on the image to be segmented by using the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area;
carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network so as to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories;
and fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
In the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation provided by the invention, the image to be segmented is segmented in parallel by the semantic segmentation branch network and the instance segmentation branch network, the outputs of the two branch networks are fused by the parameter-free panoramic fusion module, and a multi-classification segmentation result is finally obtained, thereby realizing multi-task classification. In addition, the adaptive fusion performed by the panoramic fusion module makes fuller use of the logit outputs of the semantic segmentation head and the instance segmentation head, which improves the accuracy of the multi-classification task.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is one of the flow diagrams of the optical remote sensing image ocean raft culture zone classification method based on panoramic segmentation according to the embodiment of the present invention;
fig. 2 is a second schematic flow chart of the optical remote sensing image ocean raft culture zone classification method based on panoramic segmentation according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of a conventional U²-Net network;
FIG. 4 is a comparison between the structure of the En_1 secondary structure in a conventional U²-Net network and the structure of the improved En_1 secondary structure of the present invention;
fig. 5 is a schematic diagram of a training process of a panorama segmentation model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an optical remote sensing image ocean raft culture zone classification device based on panoramic segmentation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As noted above, existing convolutional neural network models can only accomplish the single task of semantic or instance segmentation. Facing complex and changeable marine environments, the rich and varied information in optical remote sensing images needs to be used more fully and effectively. The invention therefore unifies the semantic segmentation and instance segmentation prediction sub-networks and fuses their outputs to form an integral panoramic segmentation network model, and constructs a panoramic segmentation label data set for the multi-classification task of ocean raft culture areas, so as to realize the multi-classification task, make the classification of ocean raft culture areas more refined, and improve the segmentation precision of the model. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation provided by the invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is one of the flow diagrams of the optical remote sensing image ocean raft culture zone classification method based on panoramic segmentation according to the embodiment of the present invention; fig. 2 is a second schematic flow chart of the optical remote sensing image ocean raft culture zone classification method based on panoramic segmentation according to the embodiment of the invention.
As shown in fig. 1 and 2, the optical remote sensing image ocean raft culture area classification method based on panoramic segmentation comprises the following steps:
s101, obtaining an image to be segmented, wherein the image to be segmented is an optical remote sensing image of a marine culture area.
And S102, inputting the image to be segmented into a pre-trained panoramic segmentation model, and predicting to obtain a multi-classification segmentation result.
The multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories, wherein the culture area categories comprise fish, algae, shellfish and the like.
The pre-trained panorama segmentation model (HPPS) includes a semantic segmentation branch network, an instance segmentation branch network, and a panorama fusion module.
And performing semantic segmentation on the image to be segmented by utilizing the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area.
And carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network so as to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories.
And fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
The panoramic fusion module is a parameter-free fusion module which, based on the per-pixel confidence of the head predictions, selectively attenuates or amplifies the fused logit scores so as to adaptively fuse the initial semantic segmentation result and the initial instance segmentation result. The whole panoramic segmentation network is jointly optimized in an end-to-end manner, so that the final multi-classification, high-precision panoramic segmentation output of the ocean raft culture area is obtained, realizing the multi-classification task for ocean raft culture areas in optical remote sensing images.
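As an illustration only, the following sketch shows one published parameter-free fusion rule of this kind (the form used in EfficientPS); the patent does not spell out its exact formula, so the function name, tensor shapes and the specific expression are assumptions.

```python
import torch

def adaptive_fuse(sem_logits, inst_logits):
    """Parameter-free fusion sketch: per-pixel sigmoid confidences from the
    semantic and instance heads scale their summed logits, so pixels where
    both heads are confident are amplified and ambiguous pixels attenuated."""
    # sem_logits, inst_logits: (num_masks, H, W) logits for the same candidate masks
    return (torch.sigmoid(sem_logits) + torch.sigmoid(inst_logits)) * (sem_logits + inst_logits)

# Toy usage: two candidate masks on a 4 x 4 tile
fused = adaptive_fuse(torch.randn(2, 4, 4), torch.randn(2, 4, 4))
print(fused.shape)  # torch.Size([2, 4, 4])
```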
In addition, before the image to be segmented is input into the pre-trained panoramic segmentation model, it is standardized and then cropped into 2048 × 2048 tiles with a sliding window. The cropped tiles are input in turn into the trained HPPS model (i.e. the panoramic segmentation model), whose output is the multi-classification result of the marine culture area; all tiles are then stitched together to obtain the overall panoramic segmentation result of the image to be segmented.
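A minimal sketch of the tile-and-stitch inference described above, assuming a `model` callable that maps an (H, W, C) patch to an (H, W) class map (the interface is illustrative):

```python
import numpy as np

def tile_and_predict(image, model, tile=2048):
    """Crop the standardized image into tile x tile patches with a sliding
    window, run the panoramic segmentation model on each patch, and stitch
    the per-tile class maps back into a full-size result."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + patch.shape[0], x:x + patch.shape[1]] = model(patch)
    return out
```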
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, the image to be segmented is segmented in parallel by the semantic segmentation branch network and the instance segmentation branch network, the outputs of the two branch networks are fused by the parameter-free panoramic fusion module, and a multi-classification segmentation result is finally obtained, thereby realizing multi-task classification. In addition, the adaptive fusion performed by the panoramic fusion module makes fuller use of the logit outputs of the semantic segmentation head and the instance segmentation head, which improves the accuracy of the multi-classification task.
Further, the semantic segmentation branch network is an improved U²-Net network. The improved U²-Net network comprises at least 6 secondary encoders of U-shaped structure and 5 secondary decoders of U-shaped structure; the 6 secondary encoders are, in turn, 4 first secondary encoders and 2 second secondary encoders, and the 5 secondary decoders are, in turn, 4 first secondary decoders and 1 second secondary decoder. Each first secondary encoder and each first secondary decoder is composed, in sequence, of a first convolution block, an LSFE module, a plurality of down-sampling modules, a DPC module, a second convolution block, a first convolution block and a plurality of up-sampling modules.
The LSFE module is used to extract culture-area features over a large field of view and comprises separable convolutions and output filters, specifically two 3 × 3 separable convolutions with 128 output filters.
The DPC module is used to capture long-range context information and comprises separable convolutions and output channels. Specifically, it contains a 3 × 3 separable convolution with 256 output channels, extended to five parallel branches; the outputs of all parallel branches are then concatenated to generate a tensor with 1280 channels, which is finally fed into a 1 × 1 convolution with 256 output channels, whose output is the output of the DPC module.
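A minimal PyTorch sketch of the two modules as described above; the dilation rates of the five DPC branches are assumptions, since the text only fixes the kernel sizes and channel counts:

```python
import torch
import torch.nn as nn

def sep_conv(cin, cout, dilation=1):
    # 3x3 depthwise-separable convolution followed by BN + ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cin, 3, padding=dilation, dilation=dilation, groups=cin, bias=False),
        nn.Conv2d(cin, cout, 1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class LSFE(nn.Module):
    """Large Scale Feature Extractor: two 3x3 separable convolutions, 128 filters."""
    def __init__(self, cin):
        super().__init__()
        self.block = nn.Sequential(sep_conv(cin, 128), sep_conv(128, 128))

    def forward(self, x):
        return self.block(x)

class DPC(nn.Module):
    """Dense Prediction Cell sketch: five parallel 3x3 separable branches with
    256 channels each, concatenated into a 1280-channel tensor and projected
    by a 1x1 convolution with 256 output channels."""
    def __init__(self, cin, rates=(1, 2, 4, 8, 12)):
        super().__init__()
        self.branches = nn.ModuleList(sep_conv(cin, 256, d) for d in rates)
        self.project = nn.Conv2d(5 * 256, 256, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Toy usage on a 32x32 feature map with 128 channels
print(DPC(128)(torch.randn(1, 128, 32, 32)).shape)  # torch.Size([1, 256, 32, 32])
```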
In the prior art, the U²-Net network is a saliency detection model whose network structure is shown in fig. 3. It is a two-level nested U-shaped structure: the overall U-shaped structure is called the top-level structure, and each small U-shaped structure contained in it is called a secondary structure. The invention makes no specific improvement to the top-level structure; the improvements concern the secondary structures.
The specific idea behind the improvement is as follows: remote sensing images observe the sea coast over a wide area from a satellite platform, so the viewing angle is wide, the data volume is huge, and the distribution of marine culture areas is reflected macroscopically and comprehensively. The improved U²-Net network model proposed by the invention therefore not only needs to realize macroscopic detection of the mariculture area, but also needs to identify, extract and classify each small culture raft. To this end, the invention improves the secondary U-shaped structure, specifically by using a Large Scale Feature Extractor (LSFE) module and a Dense Prediction Cell (DPC) module in the secondary structure.
Starting from the overall network structure, the improved U²-Net network is described in detail below.
First, the top-level structure of the improved U²-Net network provided by the invention is also as shown in fig. 3; it specifically includes 6 secondary encoders with a U-shaped structure (i.e. En_1 to En_6) and 5 secondary decoders with a U-shaped structure (i.e. De_1 to De_5), where En_1 corresponds to De_1, En_2 to De_2, En_3 to De_3, En_4 to De_4, and En_5 to De_5.
The invention improves the first four secondary encoders and their corresponding decoders of the U²-Net network, that is, the structures of En_1, De_1, En_2, De_2, En_3, De_3, En_4 and De_4, while the structures of En_5, En_6 and De_5 are not modified and remain as in the prior art; they are not described in detail here.
Taking En_1 as an example, the existing En_1 network structure sequentially includes 2 first convolution blocks (i.e. (1) Conv + BN + ReLU in FIG. 4), 5 downsampling modules (i.e. (3) Downsample × 1/2 Conv + BN + ReLU in FIG. 4), 1 second convolution block (i.e. (5) Conv + BN + ReLU, dilation = 4, in FIG. 4), 1 first convolution block, and 5 upsampling modules (i.e. (8) Upsample Conv + BN + ReLU in FIG. 4).
The improved En_1 network structure sequentially includes 1 first convolution block, 1 LSFE module (i.e. (2) in fig. 4), 4 down-sampling modules, 1 DPC module (i.e. (4) in fig. 4), 1 second convolution block, 1 first convolution block and 5 up-sampling modules. That is, relative to the prior art, the first convolution block in the second position is replaced by the LSFE module and the last downsampling module is replaced by the DPC module, while the rest remain unchanged. Compared with En_1 described above, the structural differences of En_2, En_3 and En_4 in the prior art lie only in the number of down-sampling and up-sampling modules. Therefore, similarly, the improved En_2 network structure sequentially includes 1 first convolution block, 1 LSFE module, 3 down-sampling modules, 1 DPC module, 1 second convolution block, 1 first convolution block and 4 up-sampling modules; the improved En_3 network structure sequentially includes 1 first convolution block, 1 LSFE module, 2 down-sampling modules, 1 DPC module, 1 second convolution block, 1 first convolution block and 3 up-sampling modules; and the improved En_4 network structure sequentially includes 1 first convolution block, 1 LSFE module, 1 down-sampling module, 1 DPC module, 1 second convolution block, 1 first convolution block and 2 up-sampling modules.
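The improved En_1 described above can be sketched as follows, reusing the LSFE and DPC classes from the earlier sketch; channel widths are assumptions, and the skip connections and side outputs of the full U-shaped sub-block are omitted:

```python
import torch.nn as nn

def conv_block(cin, cout, dilation=1):
    # "Conv + BN + ReLU" block used throughout the secondary structures
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=dilation, dilation=dilation, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class ImprovedEn1(nn.Module):
    """Module order exactly as listed above: first conv block, LSFE, 4 downsampling
    modules, DPC, second (dilated) conv block, first conv block, 5 upsampling modules.
    Without the omitted skip connections the output resolution is not meaningful; the
    sketch only illustrates the composition of the stages."""
    def __init__(self, cin=3, mid=128):
        super().__init__()
        self.pre = conv_block(cin, mid)
        self.lsfe = LSFE(mid)          # replaces the second conv block of the original En_1
        self.down = nn.ModuleList(
            nn.Sequential(nn.MaxPool2d(2), conv_block(mid, mid)) for _ in range(4))
        self.dpc = DPC(mid)            # replaces the last downsampling module
        self.dilated = conv_block(256, mid, dilation=4)
        self.post = conv_block(mid, mid)
        self.up = nn.ModuleList(
            nn.Sequential(nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                          conv_block(mid, mid)) for _ in range(5))

    def forward(self, x):
        x = self.lsfe(self.pre(x))
        for stage in self.down:
            x = stage(x)
        x = self.post(self.dilated(self.dpc(x)))
        for stage in self.up:
            x = stage(x)
        return x
```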
In addition, the improved De_1, De_2, De_3 and De_4 correspond one-to-one to the improved En_1, En_2, En_3 and En_4, and are not described in detail here.
The invention adopts binary semantic segmentation, and the corresponding data set labels include only label 0 (non-raft culture area) and label 1 (raft culture area). During training, the loss function used in the semantic segmentation branch network is the binary cross-entropy loss function.
The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation provided by the embodiment of the invention extracts culture-area features over a large field of view with the LSFE module, so as to realize macroscopic detection of the marine culture area, and captures long-range context information with the DPC module, so that each small culture raft is identified, extracted and classified, thereby improving the segmentation precision of the semantic segmentation branch network.
Further, the instance segmentation branch network comprises an improved SOTR network comprising at least a Transformer module; the Transformer module comprises separable convolutions and an iABN synchronized layer, and is used for predicting each instance class.
In the prior art, SOTR uses a Transformer to simplify the segmentation process with two parallel subtasks: 1) predicting each instance category with the Transformer; 2) dynamically generating a segmentation mask with a multi-level upsampling module. The encoder-decoder Transformer model unifies the instance segmentation task through a series of learnable mask embeddings. The invention extends the Transformer with separable convolutions and iABN (inplace activated batch normalization) synchronized layers, which improves segmentation precision and training convergence to a certain extent.
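A hedged sketch of what such an extension might look like: a transformer-style block whose feed-forward part is a 3 × 3 depthwise-separable convolution; plain BatchNorm stands in for the iABN synchronized layer, which in practice comes from an external inplace-activated batch normalization package. The block layout is an illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class SepConvTransformerBlock(nn.Module):
    """Self-attention over the flattened feature map followed by a
    depthwise-separable convolutional feed-forward; BatchNorm is used here
    as a stand-in for the iABN synchronized layer."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.BatchNorm2d(dim)
        self.dw = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)   # depthwise 3x3
        self.pw = nn.Conv2d(dim, dim, 1)                          # pointwise 1x1
        self.norm2 = nn.BatchNorm2d(dim)

    def forward(self, x):                      # x: (B, C, H, W) feature map
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)     # (B, H*W, C)
        seq = seq + self.attn(seq, seq, seq)[0]
        x = self.norm1(seq.transpose(1, 2).reshape(b, c, h, w))
        x = x + self.pw(self.dw(x))            # separable-conv feed-forward
        return self.norm2(x)

# Toy usage on an 8x8 feature map
print(SepConvTransformerBlock()(torch.randn(1, 256, 8, 8)).shape)  # torch.Size([1, 256, 8, 8])
```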
The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation thus improves the segmentation precision and training convergence of the model by extending the Transformer with separable convolutions and the iABN synchronized layer.
Further, the instance segmentation branch network also includes a feature extraction module, which comprises a mobile inverted bottleneck unit and a bidirectional Feature Pyramid Network (FPN).
And performing feature extraction on the image to be segmented by using the feature extraction module to obtain multi-scale features.
Performing instance segmentation on the image to be segmented through the improved SOTR network based on the multi-scale features to obtain an initial instance segmentation result.
The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation provided by the embodiment of the invention extracts multi-scale features with the mobile inverted bottleneck unit and the bidirectional feature pyramid network, obtaining more large-scale and small-scale features as well as shallow and deep information, which improves the accuracy of instance segmentation.
Fig. 5 is a schematic diagram of a training process of a panorama segmentation model according to an embodiment of the present invention; as shown in fig. 5, the pre-trained panorama segmentation model is obtained by training as follows:
and acquiring a training data set and a label corresponding to the training data set, and constructing a panoramic segmentation model.
Wherein the labels comprise semantic labels of a raft culture area and a non-raft culture area and example labels of various culture area categories.
It should be noted that after the training data set is obtained it needs to be labeled; the labels are divided into a "stuff" class and a "thing" class. The "stuff" class is labeled with a semantic mask, where 0 represents the non-raft culture area and 1 represents the raft culture area; the "thing" class is labeled with an instance mask and contains 4 instance categories: fish, algae, shellfish and other. According to these labeling categories, semantic labels for the background (i.e. the non-raft culture area) and the foreground (i.e. the raft culture area) are created for the training set and the test set, and instance labels for instance 1 (fish), instance 2 (algae), instance 3 (shellfish) and instance 4 (other) are created on the basis of the semantic labels.
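A minimal sketch of this two-level labeling, assuming numpy arrays for the masks; the dictionary layout and category ids are illustrative, roughly in the spirit of COCO-panoptic annotations:

```python
import numpy as np

THING_CLASSES = {1: "fish", 2: "algae", 3: "shellfish", 4: "other"}

def make_panoptic_label(semantic_mask, instance_masks, instance_classes):
    """semantic_mask: (H, W) int array with 0 = non-raft, 1 = raft ("stuff" level).
    instance_masks: list of (H, W) boolean arrays, one per culture instance.
    instance_classes: list of ids from THING_CLASSES ("thing" level).
    Returns both label levels plus a per-instance category table."""
    segments = []
    instance_id_map = np.zeros_like(semantic_mask, dtype=np.int32)
    for idx, (mask, cls) in enumerate(zip(instance_masks, instance_classes), start=1):
        instance_id_map[mask] = idx
        segments.append({"id": idx, "category": THING_CLASSES[cls]})
    return {"semantic": semantic_mask, "instances": instance_id_map, "segments": segments}
```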
And inputting the training data set into the semantic segmentation branch network, predicting to obtain a training semantic segmentation result, and calculating the loss between the training semantic segmentation result and the semantic label to obtain a first loss.
And inputting the training data set into the instance segmentation branch network, predicting to obtain a training instance segmentation result, and calculating the loss between the training instance segmentation result and the instance labels to obtain a second loss.
And performing self-adaptive fusion on the training semantic segmentation result and the training instance segmentation result by using the panoramic fusion module to obtain a training multi-classification result.
And acquiring total loss according to the first loss and the second loss, and training the panoramic segmentation model based on the training multi-classification result and the total loss until the panoramic segmentation model is converged to obtain the trained panoramic segmentation model.
The total loss is obtained by adaptively weighting and summing the first loss and the second loss according to the attenuated or amplified fused logit scores.
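One optimization step might look like the following sketch; the two-loss interface of `model` and the fixed weights are assumptions standing in for whatever adaptive weighting the fusion module actually applies:

```python
def training_step(model, batch, optimizer, sem_weight=1.0, inst_weight=1.0):
    """Run both branches, combine the first (semantic) and second (instance)
    losses into a total loss, and update the panoramic segmentation model."""
    optimizer.zero_grad()
    first_loss, second_loss = model(batch["image"], batch["semantic"], batch["instances"])
    total_loss = sem_weight * first_loss + inst_weight * second_loss
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```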
It should be noted that the invention trains the panoramic segmentation model on the shared synthetic data set; after the panoramic segmentation model reaches the convergence accuracy, it is tested on the test set to obtain a qualitative evaluation result.
If the test does not meet the precision requirement, the model is retrained by adjusting hyper-parameters, supplementing training samples and the like until the qualitative evaluation requirement is met; the model is then evaluated on the detection set, its panoramic segmentation precision (PQ) is assessed, and the trained panoramic segmentation model is finally obtained.
According to the optical remote sensing image ocean raft culture area classification method based on panoramic segmentation provided by the embodiment of the invention, the data set labels mentioned above not only classify and label the raft culture area and the background area as a whole, but also further finely classify and label the various culture categories within the raft culture area, so as to realize the multi-classification task.
Further, after the acquiring the training data set and the corresponding label thereof, the method further includes:
and respectively constructing Normalized Difference Vegetation Index (NDVI) and Normalized Water body Index (NDWI) characteristics of the training data set.
And fusing the normalized vegetation index features and normalized water index features with the training data set to obtain a shared synthetic data set.
Accordingly, the inputting the training data set into the semantic segmentation branch network comprises:
inputting the shared synthetic dataset into the semantic segmentation branch network.
The inputting the training data set into the instance segmentation branch network comprises:
inputting the shared synthetic dataset into the instance split branch network.
It should be noted that the normalized vegetation index feature and the normalized water index feature may also be fused with the image to be segmented after the image to be segmented is acquired, so that the image obtained by fusion is segmented by using the panoramic segmentation model.
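A minimal sketch of the index-feature construction and fusion described above, assuming surface-reflectance bands stacked along the last axis; the band positions (e.g. red = B4, green = B3, NIR = B8 for Sentinel-2) are assumptions:

```python
import numpy as np

def add_index_features(image, red, green, nir, eps=1e-6):
    """Compute NDVI and NDWI from the given band positions and append them to
    the image as two extra channels, producing the shared synthetic input."""
    r, g, n = image[..., red], image[..., green], image[..., nir]
    ndvi = (n - r) / (n + r + eps)   # normalized difference vegetation index
    ndwi = (g - n) / (g + n + eps)   # normalized difference water index
    return np.concatenate([image, ndvi[..., None], ndwi[..., None]], axis=-1)
```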
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation, fusing the NDVI and NDWI with the training data set to obtain the shared synthetic data set allows the rich and varied information in the optical remote sensing images to be used more fully and effectively, improving the segmentation precision of the panoramic segmentation model.
Further, the training data set includes a label data set and an adversarial sample set; the label data set is a labeled data set provided with corresponding labels, and the adversarial sample set is obtained by performing adversarial training on the training instance segmentation results.
Specifically, the multi-classification data label set of the marine culture area is used to perform adversarial training on the instance segmentation branch, so as to improve its resistance to interference under multiple targets and multiple classes. The adversarial samples generated during adversarial training are added to the training data set and, together with the label data set, form the training data set.
It should be noted that the adversarial training can be performed by adding a discriminator or by generating new samples from gradient feedback; adversarial training is a conventional method and is not limited by the present invention.
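For the gradient-feedback variant, a minimal FGSM-style sketch (the single-tensor model and label interface is an assumption made for brevity):

```python
import torch

def gradient_adversarial_sample(model, loss_fn, image, label, eps=0.01):
    """Perturb the input along the sign of the loss gradient to generate an
    adversarial sample that can be added back into the training data set."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()
```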
According to the optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation provided by the embodiment of the invention, the adversarial sample set is obtained through adversarial training, and the panoramic segmentation model is trained on the adversarial sample set together with the label data set, which improves the resistance of the panoramic segmentation model to interference.
Further, the tag data set is obtained by:
the method comprises the steps of obtaining an optical remote sensing image of a training marine culture area, and performing at least storage format unification, cloud and fog removal processing, normalization processing and cutting processing on the optical remote sensing image.
Specifically, medium-resolution optical remote sensing images from Sentinel-2, GF-1 (PMS/WFV) and Landsat covering the area within 30 km of the coastline of China are first obtained. The optical remote sensing images are then subjected to storage-format unification, cloud and fog removal and normalization, and are cropped with a sliding window into standard images of size 2048 × 2048, forming the standard label data set.
The optical remote sensing image ocean raft type culture zone classification device based on panoramic segmentation provided by the invention is described below, and the optical remote sensing image ocean raft type culture zone classification device based on panoramic segmentation described below and the optical remote sensing image ocean raft type culture zone classification method based on panoramic segmentation described above can be referred to correspondingly.
Fig. 6 is a schematic structural diagram of an optical remote sensing image ocean raft type culture zone classification device based on panoramic segmentation according to an embodiment of the present invention; as shown in fig. 6, the optical remote sensing image ocean raft type culture zone classification device based on panoramic segmentation includes an image acquisition module 601 and an image segmentation module 602.
The image acquisition module 601 is configured to acquire an image to be segmented, where the image to be segmented is an optical remote sensing image of a marine culture area.
And the image segmentation module 602 is configured to input the image to be segmented into a pre-trained panorama segmentation model, and predict to obtain a multi-class segmentation result.
The multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories, and the plurality of culture area categories comprise fish, algae, shellfish and the like.
The pre-trained panorama segmentation model (HPPS) includes a semantic segmentation branch network, an instance segmentation branch network, and a panorama fusion module.
And performing semantic segmentation on the image to be segmented by utilizing the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area.
And carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network so as to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories.
And fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
The panoramic fusion module is a parameter-free fusion module which, based on the per-pixel confidence of the head predictions, selectively attenuates or amplifies the fused logit scores so as to adaptively fuse the initial semantic segmentation result and the initial instance segmentation result. The whole panoramic segmentation network is jointly optimized in an end-to-end manner, so that the final multi-classification, high-precision panoramic segmentation output of the ocean raft culture area is obtained, realizing the multi-classification task for ocean raft culture areas in optical remote sensing images.
In addition, before the image to be segmented is input into the pre-trained panoramic segmentation model, it is standardized and then cropped into 2048 × 2048 tiles with a sliding window. The cropped tiles are input in turn into the trained HPPS model (i.e. the panoramic segmentation model), whose output is the multi-classification result of the marine culture area; all tiles are then stitched together to obtain the overall panoramic segmentation result of the image to be segmented.
In the optical remote sensing image ocean raft type culture area classification device based on panoramic segmentation provided by the embodiment of the invention, the image to be segmented is segmented in parallel by the semantic segmentation branch network and the instance segmentation branch network, the outputs of the two branch networks are fused by the parameter-free panoramic fusion module, and a multi-classification segmentation result is finally obtained, thereby realizing multi-task classification. In addition, the adaptive fusion performed by the panoramic fusion module makes fuller use of the logit outputs of the semantic segmentation head and the instance segmentation head, which improves the accuracy of the multi-classification task.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An optical remote sensing image ocean raft culture area classification method based on panoramic segmentation, characterized by comprising:
acquiring an image to be segmented, wherein the image to be segmented is an optical remote sensing image of a marine culture area;
inputting the image to be segmented into a pre-trained panoramic segmentation model, and predicting to obtain a multi-classification segmentation result, wherein the multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories;
the pre-trained panorama segmentation model comprises a semantic segmentation branch network, an instance segmentation branch network and a panorama fusion module;
performing semantic segmentation on the image to be segmented by using the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area;
carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network so as to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories;
and fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
2. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation according to claim 1, wherein the semantic segmentation branch network is an improved U²-Net network; the improved U²-Net network comprises at least 6 secondary encoders of U-shaped structure, in turn 4 first secondary encoders and 2 second secondary encoders, and 5 secondary decoders of U-shaped structure, in turn 4 first secondary decoders and 1 second secondary decoder; each first secondary encoder and each first secondary decoder is composed, in sequence, of a first convolution block, an LSFE module, a plurality of down-sampling modules, a DPC module, a second convolution block, a first convolution block and a plurality of up-sampling modules;
the LSFE module is used for extracting culture-area features over a large field of view and comprises separable convolutions and output filters;
the DPC module is used for capturing long-range context information and comprises separable convolutions and output channels.
3. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation according to claim 1, wherein the instance segmentation branch network comprises an improved SOTR network, and the improved SOTR network comprises at least a Transformer module; the Transformer module comprises separable convolutions and an iABN synchronized layer; the Transformer module is used for predicting each instance class.
4. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation according to claim 3, wherein the instance segmentation branch network further comprises a feature extraction module, and the feature extraction module comprises a mobile inverted bottleneck unit and a bidirectional feature pyramid network;
performing feature extraction on the image to be segmented by using the feature extraction module to obtain multi-scale features;
performing instance segmentation on the image to be segmented through the improved SOTR network based on the multi-scale features to obtain an initial instance segmentation result.
5. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation according to any one of claims 1 to 4, wherein the pre-trained panoramic segmentation model is trained in the following way:
acquiring a training data set and a label corresponding to the training data set, and constructing a panoramic segmentation model; wherein the labels comprise semantic labels of a raft culture area and a non-raft culture area and example labels of various culture area categories;
inputting the training data set into the semantic segmentation branch network, predicting to obtain a training semantic segmentation result, and calculating the loss between the training semantic segmentation result and the semantic label to obtain a first loss;
inputting the training data set into the instance segmentation branch network, predicting to obtain a training instance segmentation result, and calculating the loss between the training instance segmentation result and the instance label to obtain a second loss;
utilizing the panoramic fusion module to perform self-adaptive fusion on the training semantic segmentation result and the training instance segmentation result to obtain a training multi-classification result;
and acquiring total loss according to the first loss and the second loss, and training the panoramic segmentation model based on the training multi-classification result and the total loss until the panoramic segmentation model is converged to obtain the trained panoramic segmentation model.
6. The optical remote sensing image ocean raft culture area classification method based on panoramic segmentation according to claim 5, wherein after the training data set and the corresponding labels are obtained, the method further comprises the following steps:
respectively constructing normalized difference vegetation index (NDVI) features and normalized difference water index (NDWI) features of the training data set;
fusing the NDVI features and the NDWI features with the training data set to obtain a shared synthetic dataset;
accordingly, the inputting the training data set into the semantic segmentation branch network comprises:
inputting the shared synthetic dataset into the semantic segmentation branch network;
the inputting the training data set into the instance segmentation branch network comprises:
inputting the shared synthetic dataset into the instance segmentation branch network.
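The index construction of claim 6 can be illustrated with the standard formulas NDVI = (NIR − Red)/(NIR + Red) and NDWI = (Green − NIR)/(Green + NIR); the sketch below stacks them with the original bands to form a shared synthetic input, assuming a blue-green-red-NIR band order, which depends on the sensor.

```python
# Hypothetical sketch: compute NDVI / NDWI and stack them with the original bands.
import numpy as np


def build_shared_synthetic_patch(patch, eps=1e-6):
    """patch: float array of shape (4, H, W) ordered as blue, green, red, nir (assumed)."""
    _, green, red, nir = patch
    ndvi = (nir - red) / (nir + red + eps)      # normalized difference vegetation index
    ndwi = (green - nir) / (green + nir + eps)  # normalized difference water index
    return np.concatenate([patch, ndvi[None], ndwi[None]], axis=0)  # (6, H, W)
```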
7. The panoramic-segmentation-based optical remote sensing image ocean raft culture area classification method according to claim 5, wherein the training data set comprises a labeled data set and an adversarial sample set; the labeled data set is a data set with corresponding labels obtained through annotation, and the adversarial sample set is obtained by performing adversarial training on the training instance segmentation results.
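One plausible, but not claim-mandated, way to derive adversarial samples from the instance branch is an FGSM-style perturbation along the sign of the instance-loss gradient; the `instance_branch` and `instance_loss` handles and the epsilon value below are hypothetical.

```python
# Hypothetical FGSM-style generation of adversarial samples from the instance loss.
import torch


def make_adversarial_samples(model, images, instance_labels, epsilon=0.01):
    images = images.clone().detach().requires_grad_(True)
    inst_pred = model.instance_branch(images)                 # assumed sub-module
    loss = model.instance_loss(inst_pred, instance_labels)    # assumed loss handle
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()       # perturb along the gradient sign
    return adversarial.detach().clamp(0.0, 1.0)               # assumes inputs normalized to [0, 1]
```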
8. The panoramic-segmentation-based optical remote sensing image ocean raft culture area classification method according to claim 7, wherein the labeled data set is obtained by:
acquiring optical remote sensing images of a training marine culture area, and performing at least storage format unification, cloud and haze removal, normalization and cropping on the optical remote sensing images.
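A hedged sketch of the normalization and cropping steps of claim 8: per-band percentile stretching to [0, 1] followed by tiling into fixed-size patches. The 2%/98% clip and the 512-pixel tile size are assumptions, and the format-unification and cloud/haze-removal steps are not shown.

```python
# Hypothetical preprocessing sketch: percentile normalization and tiling.
import numpy as np


def normalize(image):
    """image: (bands, H, W) array; stretch each band to [0, 1] between its 2nd and 98th percentiles."""
    out = np.empty_like(image, dtype=np.float32)
    for b, band in enumerate(image):
        lo, hi = np.percentile(band, (2, 98))
        out[b] = np.clip((band - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out


def crop_tiles(image, tile=512):
    """Yield non-overlapping tile x tile patches (edge remainders are simply dropped here)."""
    _, h, w = image.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield image[:, y:y + tile, x:x + tile]
```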
9. The panoramic-segmentation-based optical remote sensing image ocean raft culture area classification method according to claim 7, wherein the plurality of culture area categories comprise at least fish, algae and shellfish.
10. An optical remote sensing image ocean raft culture area classification apparatus based on panoramic segmentation, characterized by comprising:
an image acquisition module, configured to acquire an image to be segmented, wherein the image to be segmented is an optical remote sensing image of a marine culture area;
an image segmentation module, configured to input the image to be segmented into a pre-trained panoramic segmentation model and predict a multi-classification segmentation result, wherein the multi-classification segmentation result comprises a raft culture area, a non-raft culture area and a plurality of culture area categories;
wherein the pre-trained panoramic segmentation model comprises a semantic segmentation branch network, an instance segmentation branch network and a panoramic fusion module;
performing semantic segmentation on the image to be segmented by using the semantic segmentation branch network to obtain an initial semantic segmentation result, wherein the initial semantic segmentation result comprises an initial raft culture area and an initial non-raft culture area;
carrying out instance segmentation on the image to be segmented by utilizing the instance segmentation branch network to obtain an initial instance segmentation result, wherein the initial instance segmentation result comprises a plurality of initial culture area categories;
and fusing the initial semantic segmentation result and the initial instance segmentation result by using the panoramic fusion module to obtain a multi-classification segmentation result.
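As a simplified stand-in for the panoramic fusion module of claim 10, the sketch below applies a rule-based fusion: pixels inside the raft-culture semantic mask take the category of the instance mask covering them, and all remaining pixels stay non-raft. The label encoding is an assumption, and the claimed adaptive fusion is not reproduced here.

```python
# Hypothetical rule-based fusion of semantic and instance segmentation results.
import numpy as np


def fuse(semantic_mask, instance_masks, instance_classes):
    """semantic_mask: (H, W) bool raft/non-raft; instance_masks: (N, H, W) bool;
    instance_classes: length-N category ids (e.g. 1 = fish, 2 = algae, 3 = shellfish)."""
    fused = np.zeros(semantic_mask.shape, dtype=np.int64)      # 0 = non-raft culture area
    for mask, cls in zip(instance_masks, instance_classes):
        fused[np.logical_and(mask, semantic_mask)] = cls       # keep instances only inside raft areas
    return fused
```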
CN202211328346.5A 2022-10-27 2022-10-27 Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation Pending CN115908894A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211328346.5A CN115908894A (en) 2022-10-27 2022-10-27 Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation
PCT/CN2023/092747 WO2024087574A1 (en) 2022-10-27 2023-05-08 Panoptic segmentation-based optical remote-sensing image raft mariculture area classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211328346.5A CN115908894A (en) 2022-10-27 2022-10-27 Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation

Publications (1)

Publication Number Publication Date
CN115908894A true CN115908894A (en) 2023-04-04

Family

ID=86475256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211328346.5A Pending CN115908894A (en) 2022-10-27 2022-10-27 Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation

Country Status (2)

Country Link
CN (1) CN115908894A (en)
WO (1) WO2024087574A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452901A (en) * 2023-06-19 2023-07-18 中国科学院海洋研究所 Automatic extraction method for ocean culture area of remote sensing image based on deep learning
CN117036982A (en) * 2023-10-07 2023-11-10 山东省国土空间数据和遥感技术研究院(山东省海域动态监视监测中心) Method and device for processing optical satellite image of mariculture area, equipment and medium
WO2024087574A1 (en) * 2022-10-27 2024-05-02 中国科学院空天信息创新研究院 Panoptic segmentation-based optical remote-sensing image raft mariculture area classification method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460411B (en) * 2018-02-09 2021-05-04 北京市商汤科技开发有限公司 Instance division method and apparatus, electronic device, program, and medium
CN111292331B (en) * 2020-02-23 2023-09-12 华为云计算技术有限公司 Image processing method and device
CN112949388B (en) * 2021-01-27 2024-04-16 上海商汤智能科技有限公司 Image processing method, device, electronic equipment and storage medium
CN114842215A (en) * 2022-04-20 2022-08-02 大连海洋大学 Fish visual identification method based on multi-task fusion
CN115100652A (en) * 2022-08-02 2022-09-23 北京卫星信息工程研究所 Electronic map automatic generation method based on high-resolution remote sensing image
CN115908894A (en) * 2022-10-27 2023-04-04 中国科学院空天信息创新研究院 Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024087574A1 (en) * 2022-10-27 2024-05-02 中国科学院空天信息创新研究院 Panoptic segmentation-based optical remote-sensing image raft mariculture area classification method
CN116452901A (en) * 2023-06-19 2023-07-18 中国科学院海洋研究所 Automatic extraction method for ocean culture area of remote sensing image based on deep learning
CN116452901B (en) * 2023-06-19 2023-09-15 中国科学院海洋研究所 Automatic extraction method for ocean culture area of remote sensing image based on deep learning
CN117036982A (en) * 2023-10-07 2023-11-10 山东省国土空间数据和遥感技术研究院(山东省海域动态监视监测中心) Method and device for processing optical satellite image of mariculture area, equipment and medium
CN117036982B (en) * 2023-10-07 2024-01-09 山东省国土空间数据和遥感技术研究院(山东省海域动态监视监测中心) Method and device for processing optical satellite image of mariculture area, equipment and medium

Also Published As

Publication number Publication date
WO2024087574A1 (en) 2024-05-02

Similar Documents

Publication Publication Date Title
CN115908894A (en) Optical remote sensing image ocean raft type culture area classification method based on panoramic segmentation
CN109086818B (en) Ocean frontal surface identification method and device
CN109766830A (en) A kind of ship seakeeping system and method based on artificial intelligence image procossing
CN112149547B (en) Remote sensing image water body identification method based on image pyramid guidance and pixel pair matching
Lu et al. P_SegNet and NP_SegNet: New neural network architectures for cloud recognition of remote sensing images
Sun et al. Global Mask R-CNN for marine ship instance segmentation
Zhang et al. Self-attention guidance and multi-scale feature fusion based uav image object detection
CN113569672A (en) Lightweight target detection and fault identification method, device and system
Zhang et al. Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image
CN115115863A (en) Water surface multi-scale target detection method, device and system and storage medium
Chen et al. A novel lightweight bilateral segmentation network for detecting oil spills on the sea surface
Xu et al. Intelligent ship recongnition from synthetic aperture radar images
Mehran et al. An effective deep learning model for ship detection from satellite images
He et al. A novel image recognition algorithm of target identification for unmanned surface vehicles based on deep learning
CN113177956A (en) Semantic segmentation method for unmanned aerial vehicle remote sensing image
Huang et al. A deep learning approach to detecting ships from high-resolution aerial remote sensing images
CN117349784A (en) Remote sensing data processing method, device and equipment
Zhai et al. Ship detection based on faster R-CNN network in optical remote sensing images
CN116503602A (en) Unstructured environment three-dimensional point cloud semantic segmentation method based on multi-level edge enhancement
CN116503750A (en) Large-range remote sensing image rural block type residential area extraction method and system integrating target detection and visual attention mechanisms
CN116109942A (en) Ship target detection method for visible light remote sensing image
CN116434074A (en) Target identification method based on adjacent branch complementation significance and multiple priori sparse representation
CN115661932A (en) Fishing behavior detection method
Xu et al. Accurate and rapid localization of tea bud leaf picking point based on YOLOv8
Damodaran et al. Extraction of Overhead Transmission Towers from UAV Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination