CN116229254A - Remote sensing extraction method for offshore buoyant raft and deep water culture area - Google Patents


Publication number
CN116229254A
Authority
CN
China
Prior art keywords: remote sensing, training, spatial, forest, expression
Legal status (assumed; not a legal conclusion): Pending
Application number
CN202211560583.4A
Other languages
Chinese (zh)
Inventor
刘培
于吉涛
张霖
凡仁福
叶茂松
Current Assignee
HAINAN ACADEMY OF OCEAN AND FISHERIES SCIENCES
Yazhouwan Innovation Research Institute Of Hainan Institute Of Tropical Oceanography
Original Assignee
HAINAN ACADEMY OF OCEAN AND FISHERIES SCIENCES
Yazhouwan Innovation Research Institute Of Hainan Institute Of Tropical Oceanography
Application filed by HAINAN ACADEMY OF OCEAN AND FISHERIES SCIENCES, Yazhouwan Innovation Research Institute Of Hainan Institute Of Tropical Oceanography filed Critical HAINAN ACADEMY OF OCEAN AND FISHERIES SCIENCES
Priority to CN202211560583.4A
Publication of CN116229254A

Classifications

    • G06V 20/10 — Image or video recognition or understanding; scenes; scene-specific elements; terrestrial scenes
    • G06V 10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/7715 — Processing image or video features in feature spaces; feature extraction, e.g. by transforming the feature space
    • Y02A 40/81 — Adaptation technologies in fisheries management; aquaculture, e.g. of fish

Abstract

The invention provides a remote sensing extraction method for offshore floating rafts and deep-water culture areas, comprising the following steps: S1, constructing and optimizing a deep random rotation forest model; S2, extracting multiple features of remote sensing data for identification of the culture area. S1 mainly covers two aspects: first, on the basis of a feature-combination random forest, the splitting of mixed nodes is improved and the forest scale is enlarged through a multi-input adaptive model; second, multi-granularity scanning is combined with deep cascading: starting from the features obtained by multi-granularity scanning, several rotation forests scan in parallel, feature optimization is performed on the contextual information of spatial structure and texture attributes, and the optimized results of multi-granularity scanning are concatenated in multiple stages to generate a deep random rotation forest with strong generalization capability. The invention maximizes the ability of the deep rotation forest to recognize typical targets in high-resolution remote sensing images, and can improve the efficiency and accuracy of automatic identification of mariculture areas.

Description

Remote sensing extraction method for offshore buoyant raft and deep water culture area
Technical Field
The invention relates to a remote sensing image extraction method, in particular to a remote sensing extraction method for offshore floating rafts and deep water culture areas.
Background
Aquaculture is a typical human activity in the coastal zone. Over the past decade, the world aquaculture industry has been expanding continuously. According to the 2020 National Fishery Economic Statistics Bulletin issued by the Ministry of Agriculture and Rural Affairs of the People's Republic of China, national aquatic product output reached 65.4902 million tons, a year-on-year increase of 2.86%. Rapidly and accurately extracting the distribution and area of offshore floating-raft culture areas therefore provides decision information and a scientific basis for fishery management departments to rationally plan culture waters, control culture density, curb deterioration of the culture environment, and prevent and control culture diseases.
Current research shows that, given the complex nearshore water color environment, accurately extracting offshore floating-raft aquaculture areas against such a background places higher requirements on the spatial resolution of remote sensing images, and a new remote sensing information extraction method needs to be designed to meet these requirements.
Disclosure of Invention
The invention aims to overcome the technical shortcomings of the prior art by providing a remote sensing extraction method for offshore floating rafts and deep-water culture areas.
In order to solve the technical problems, the invention adopts the following technical scheme: the remote sensing extraction method for the offshore buoyant raft and the deepwater culture area is characterized by comprising the following operation steps of:
s1, inputting a high-resolution remote sensing image and extracting texture features;
s2, extracting local autocorrelation spatial structural features and establishing a spatial context information model;
s3, optimizing spectral features through principal component transformation and independent component transformation of the multispectral information;
s4, carrying out cross sampling and sparse expression reconstruction on texture features, spatial structure features and optimized spectrum features in the S1-S3, and generating new features through repeated sampling, expression and reconstruction;
s5, constructing a depth cascade rotating forest model, and carrying out depth expression on the spectral features, the spatial structure features and the texture features through multi-granularity scanning and forest cascade parallel connection processing to obtain deep information in the image;
and S6, gradually and serially connecting the rotating forest models in series on the basis of parallel connection, updating the rotating forest models by using a deep self-adaptive continuous iteration optimization mode, and completing high-precision extraction of remote sensing information.
Preferably, the specific manner of S1 is:
the wavelet transform is improved into an adaptive multi-scale transform, and morphological sequence reconstruction profiles at different scales are extracted on the basis of the mathematical morphological opening and closing operations, so that the texture information of the remote sensing image is fully exploited; the specific formulas are:

A ⊖ B = { z | B_z ⊆ A }    (1)
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }    (2)
D^(n)(S) = ( D^(n-1)(S) ⊕ B ) ∩ T    (3)
E^(n)(S) = ( E^(n-1)(S) ⊖ B ) ∪ T    (4)

Expressions (1) and (2) are the set-theoretic definitions of the erosion and dilation of A by the structuring element B, and they are the basis of the morphological filtering and filling operations. The mathematical models of geodesic erosion and geodesic dilation are given by expressions (3) and (4), where S is the marker image and T is the template (mask) image; when n = 0, D^(0)(S) = S and E^(0)(S) = S, and iterating expressions (3) and (4) until stability realizes morphological reconstruction by geodesic dilation and erosion.
Preferably, the specific manner of S2 is:
on the basis of global spatial autocorrelation, the neighborhood analysis is strengthened, and the local spatial autocorrelation indices are computed by the following formulas:

I_i = ((x_i − x̄) / S²) Σ_j w_ij (x_j − x̄)    (5)
C_i = (1 / S²) Σ_j w_ij (x_i − x_j)²    (6)
G_i = Σ_j w_ij x_j / Σ_j x_j    (7)

where, in expression (5), x_i is the attribute value of spatial unit i, x̄ is the mean attribute value, S² is the variance, and w_ij is the spatial weight matrix representing the degree of influence between spatial units i and j; I_i is the local Moran's I index with value range [−1, 1]: a positive value indicates that the spatial unit is similar to the attribute values of its neighboring units (positive spatial autocorrelation); a negative value indicates that the spatial unit is dissimilar to its neighboring units (negative spatial autocorrelation); 0 indicates no spatial correlation. Expressions (6) and (7) are the corresponding local Geary's C and Getis–Ord G indices.
Preferably, the specific operation of S4-S6 is:
S01, the sample set F is randomly sampled without replacement, and the whole data set is evenly divided in this way to obtain K training sets. Training sets divided by sampling without replacement effectively increase the diversity of the base classifiers. Each of the K training sets has M = n/K features, where n is the total number of features and K ≤ n;
S02, a PCA feature transformation is performed on partial subsets of the K training sets to prevent the base classifiers from being trained on the same training set, thereby increasing the difference between the base classifiers. The partial subsets for the PCA transformation are selected as follows: each of the K training sets is resampled at a rate of 75% to obtain training subsets X_ij, where i is the index of the base classifier and j is the index of the training subset; PCA is performed on each training subset X_ij, producing the principal component coefficients a_ij^(1), a_ij^(2), …, a_ij^(M_j), each of length M; features whose eigenvalue is 0 are removed at the same time;
S03, rotation matrix processing: step S02 is repeated to obtain the K training subsets, and the principal component coefficients so generated are entered into a sparse block-diagonal rotation matrix R_i:

R_i = diag( [a_i1^(1), …, a_i1^(M_1)], [a_i2^(1), …, a_i2^(M_2)], …, [a_iK^(1), …, a_iK^(M_K)] )

The rotation matrix R_i has size n × Σ_j M_j. After processing through the rotation matrix R_i, the training data set X R_i^a for base classifier D_i is generated. The specific operation is to rearrange the columns of R_i to correspond to the order of the original feature set, giving the rearranged rotation matrix R_i^a of size N × n. For the final prediction, a test sample x is multiplied by R_i^a and input into base classifier D_i; d_i,j(x R_i^a) denotes the probability with which base classifier D_i predicts that x belongs to class w_j, and the confidence μ_j(x) that x belongs to each class is calculated as:

μ_j(x) = (1/L) Σ_{i=1}^{L} d_i,j(x R_i^a),  j = 1, …, c

S04, the confidences of the classes are compared, and the class with the maximum confidence is the final class of x.
Preferably, the specific manner of S04 is: each completely-random-tree forest in the cascade structure is assumed to contain 1000 completely random trees (a hyperparameter); each tree randomly selects one feature of the data as the splitting condition and generates child nodes accordingly, until every leaf node in the forest contains only instances of a single class or contains no more than 10 instances. Multi-granularity scanning is used to pre-process the features of the input image: a sliding window is scanned over the samples to obtain the original features.
Compared with the prior art, the invention has the following advantages:
1. through resampling and reconstruction, multi-granularity scanning and parallel processing, the invention performs deep adaptive continuous iterative optimization on spectral, spatial-structure, texture and context information and updates the model to achieve high-precision extraction of offshore floating-raft culture areas. Because the water environment surrounding offshore aquaculture areas is complex and the phenomena of "same object, different spectra" and "different objects, same spectrum" are severe, the invention uses multi-scale wavelet transforms, morphological opening, closing and reconstruction operations, and spatial-structure feature extraction from the field of pattern recognition to establish a corresponding texture-feature decomposition-scale model and a context information model. The ability of the deep rotation forest to recognize typical targets in high-resolution remote sensing images is maximized, and the efficiency and accuracy of automatic identification of mariculture areas can be improved.
The invention is described in further detail below with reference to the drawings and examples.
Drawings
Fig. 1 is a schematic overall flow diagram of the present invention.
FIG. 2 is a schematic diagram of a rotation forest model in the present invention.
FIG. 3 is a schematic diagram of class vector generation in the present invention.
Figure 4 is a schematic diagram of a depth rotation forest cascade structure in the present invention.
Fig. 5 is a schematic diagram of the offshore buoyant raft culture extraction results obtained by the various algorithms in this example.
FIG. 6 is a schematic diagram of the results of offshore deep water aquaculture extraction obtained by various algorithms in this example.
Detailed Description
As shown in fig. 1 and 2, the present invention includes the following operation steps:
s1, a Jilin-1 high-resolution remote sensing image is input and texture features are extracted; the Moran's I, Geary's C and Getis–Ord spatial association indices of the study area are calculated using the rook's-case adjacency rule and converted into G = 256 gray levels.
S2, local autocorrelation spatial-structure features are extracted and a spatial context information model is established; the spatial indices are segmented and analyzed morphologically, and regions of high positive autocorrelation and regions of negative autocorrelation with the target type are calculated and extracted respectively. Isolated small-area objects are removed, and meaningful objects are grown using mathematical morphological filling. An intersection operation is performed on the optimized object maps, and the positively and negatively autocorrelated regions are identified and extracted.
S3, spectral features are optimized by principal component transformation and independent component transformation of the multispectral information. For an N-dimensional random variable, the traditional principal components are obtained by searching for directions W_1, …, W_n that maximize the variance. The principal component and independent component transformations are performed first, and the components obtained by the principal component transformation are then labelled to obtain the result. Principal component transformation removes noise and data of no interest, leaving the principal signal components.
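As an illustration only (not part of the claimed method), the principal component transformation of step S3 can be sketched with numpy; the toy 4-band data and the function name are assumptions for the sketch:

```python
import numpy as np

def principal_components(pixels):
    """Principal component transform of an (n_pixels, n_bands) matrix.

    Returns the transformed data and the fraction of total variance carried
    by each component, ordered from largest to smallest.
    """
    centered = pixels - pixels.mean(axis=0)      # remove band means
    cov = np.cov(centered, rowvar=False)         # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues ascending
    order = np.argsort(eigvals)[::-1]            # reorder descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    transformed = centered @ eigvecs             # project onto components
    explained = eigvals / eigvals.sum()
    return transformed, explained

# Toy 4-band image flattened to pixels: band 2 is a near-copy of band 1,
# so most of the variance concentrates in the first component.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
pixels = np.hstack([base,
                    base + 0.01 * rng.normal(size=(500, 1)),
                    0.1 * rng.normal(size=(500, 2))])
pcs, explained = principal_components(pixels)
print(explained[0] > 0.9)
```

The first component absorbs the correlated signal shared by bands 1 and 2, while the low-variance noise bands fall into the trailing components that can be discarded.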
S4, cross sampling and sparse expression reconstruction are performed on the texture features, spatial-structure features and optimized spectral features from S1-S3, and new features favourable to extraction of the culture area are generated from the Jilin-1 imagery through repeated sampling, expression and reconstruction;
s5, the training set obtained in S4 is further processed by multi-granularity scanning to obtain a new feature set F′, and a deep rotation forest suited to the Jilin-1 high-resolution imagery is constructed through multi-layer cascading and an iteratively deepened network of rotation forests;
and S6, gradually and serially connecting the rotating forest models in series on the basis of parallel connection, updating the rotating forest models by using a deep self-adaptive continuous iteration optimization mode, and completing high-precision extraction of remote sensing information.
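The parallel-then-serial cascading of forest models in S5 and S6 can be illustrated with a minimal sketch. All names here are assumptions, and a nearest-centroid classifier stands in for a real rotation forest; as in a deep-forest cascade, each level's class-probability vectors are appended to the original features for the next level:

```python
import numpy as np

class CentroidForest:
    """Stand-in for one rotation forest: a nearest-centroid classifier that
    outputs per-class probability-like scores (softmax of negative distance)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        e = np.exp(-d)
        return e / e.sum(axis=1, keepdims=True)

def cascade_fit_predict(X_train, y_train, X_test, levels=3, forests_per_level=2):
    """Each level trains several forests in parallel, then concatenates their
    class-probability vectors to the original features for the next level."""
    aug_train, aug_test = X_train, X_test
    for _ in range(levels):
        probs_tr, probs_te = [], []
        for _ in range(forests_per_level):
            f = CentroidForest().fit(aug_train, y_train)
            probs_tr.append(f.predict_proba(aug_train))
            probs_te.append(f.predict_proba(aug_test))
        aug_train = np.hstack([X_train] + probs_tr)
        aug_test = np.hstack([X_test] + probs_te)
    final = np.mean(probs_te, axis=0)   # average the last level's forests
    return final.argmax(axis=1)

# Two well-separated clusters: the cascade should classify them correctly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, size=(50, 4)),
               rng.normal(2, 0.3, size=(50, 4))])
y = np.repeat([0, 1], 50)
pred = cascade_fit_predict(X, y, X)
print((pred == y).mean() > 0.95)
```

In the patent's scheme the per-level learners would be rotation forests rather than centroid classifiers, and the serial iteration would continue until accuracy stops improving.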
In this embodiment, the specific manner of S1 is:
the wavelet transform is improved into an adaptive multi-scale transform, and morphological sequence reconstruction profiles at different scales are extracted on the basis of the mathematical morphological opening and closing operations, so that the texture information of the remote sensing image is fully exploited; the specific formulas are:

A ⊖ B = { z | B_z ⊆ A }    (1)
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }    (2)
D^(n)(S) = ( D^(n-1)(S) ⊕ B ) ∩ T    (3)
E^(n)(S) = ( E^(n-1)(S) ⊖ B ) ∪ T    (4)

Expressions (1) and (2) are the set-theoretic definitions of the erosion and dilation of A by the structuring element B, and they are the basis of the morphological filtering and filling operations. The mathematical models of geodesic erosion and geodesic dilation are given by expressions (3) and (4), where S is the marker image and T is the template (mask) image; when n = 0, D^(0)(S) = S and E^(0)(S) = S, and iterating expressions (3) and (4) until stability realizes morphological reconstruction by geodesic dilation and erosion.
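As an illustration only (not the claimed implementation), the geodesic dilation iteration of expressions (3) and (4) can be sketched for a binary image; the 3×3 cross structuring element and all function names are assumptions of the sketch:

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 cross structuring element (zero-padded)."""
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:]).astype(img.dtype)

def reconstruct_by_dilation(marker, mask):
    """Iterate the geodesic dilation D(S) = dilate(S) & T until stability,
    with S the marker image and T the template (mask) image."""
    prev = marker.copy()
    while True:
        nxt = dilate(prev) & mask
        if np.array_equal(nxt, prev):
            return nxt
        prev = nxt

# Mask with two components; the marker seeds only the first, so
# reconstruction recovers that component and suppresses the other.
mask = np.zeros((5, 7), dtype=np.uint8)
mask[1:4, 1:3] = 1          # component A (6 pixels)
mask[1:4, 5:7] = 1          # component B
marker = np.zeros_like(mask)
marker[2, 1] = 1            # seed inside component A
out = reconstruct_by_dilation(marker, mask)
print(int(out.sum()))       # → 6
```

Reconstruction by erosion is the dual: erode the marker and take the union with the template at each step.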
In this embodiment, the specific manner of S2 is:
on the basis of global spatial autocorrelation, the neighborhood analysis is strengthened, and the local spatial autocorrelation indices are computed by the following formulas:

I_i = ((x_i − x̄) / S²) Σ_j w_ij (x_j − x̄)    (5)
C_i = (1 / S²) Σ_j w_ij (x_i − x_j)²    (6)
G_i = Σ_j w_ij x_j / Σ_j x_j    (7)

where, in expression (5), x_i is the attribute value of spatial unit i, x̄ is the mean attribute value, S² is the variance, and w_ij is the spatial weight matrix representing the degree of influence between spatial units i and j; I_i is the local Moran's I index with value range [−1, 1]: a positive value indicates that the spatial unit is similar to the attribute values of its neighboring units (positive spatial autocorrelation); a negative value indicates that the spatial unit is dissimilar to its neighboring units (negative spatial autocorrelation); 0 indicates no spatial correlation. Expressions (6) and (7) are the corresponding local Geary's C and Getis–Ord G indices.
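The local Moran's I of expression (5) can be sketched as follows (illustrative only; the toy one-dimensional lattice and row-standardised rook weights are assumptions of the sketch):

```python
import numpy as np

def local_morans_i(x, W):
    """Local Moran's I for attribute vector x under spatial weights W:
    I_i = (x_i - mean) / m2 * sum_j w_ij (x_j - mean),
    where m2 is the variance of the deviations."""
    z = x - x.mean()
    m2 = (z ** 2).mean()
    return z / m2 * (W @ z)

# Four cells on a line with rook-adjacency weights, row-standardised.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

x_hi = np.array([1.0, 1.0, 0.0, 0.0])   # clustered values
x_alt = np.array([1.0, 0.0, 1.0, 0.0])  # alternating values
I_hi = local_morans_i(x_hi, W)
I_alt = local_morans_i(x_alt, W)
print(I_hi[0] > 0, I_alt[0] < 0)        # clustered: positive; alternating: negative
```

Clustered cells yield positive I_i (positive autocorrelation), alternating cells yield negative I_i, matching the sign interpretation given above.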
In this embodiment, the specific operations of S4-S6 are as follows:
S01, the sample set F is randomly sampled without replacement, and the whole data set is evenly divided in this way to obtain K training sets. Training sets divided by sampling without replacement effectively increase the diversity of the base classifiers. Each of the K training sets has M = n/K features, where n is the total number of features and K ≤ n;
S02, a PCA feature transformation is performed on partial subsets of the K training sets to prevent the base classifiers from being trained on the same training set, thereby increasing the difference between the base classifiers. The partial subsets for the PCA transformation are selected as follows: each of the K training sets is resampled at a rate of 75% to obtain training subsets X_ij, where i is the index of the base classifier and j is the index of the training subset; PCA is performed on each training subset X_ij, producing the principal component coefficients a_ij^(1), a_ij^(2), …, a_ij^(M_j), each of length M; features whose eigenvalue is 0 are removed at the same time;
S03, rotation matrix processing: step S02 is repeated to obtain the K training subsets, and the principal component coefficients so generated are entered into a sparse block-diagonal rotation matrix R_i:

R_i = diag( [a_i1^(1), …, a_i1^(M_1)], [a_i2^(1), …, a_i2^(M_2)], …, [a_iK^(1), …, a_iK^(M_K)] )

The rotation matrix R_i has size n × Σ_j M_j. After processing through the rotation matrix R_i, the training data set X R_i^a for base classifier D_i is generated. The specific operation is to rearrange the columns of R_i to correspond to the order of the original feature set, giving the rearranged rotation matrix R_i^a of size N × n. For the final prediction, a test sample x is multiplied by R_i^a and input into base classifier D_i; d_i,j(x R_i^a) denotes the probability with which base classifier D_i predicts that x belongs to class w_j, and the confidence μ_j(x) that x belongs to each class is calculated as:

μ_j(x) = (1/L) Σ_{i=1}^{L} d_i,j(x R_i^a),  j = 1, …, c

S04, the confidences of the classes are compared, and the class with the maximum confidence is the final class of x.
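The construction of the block-diagonal rotation matrix R_i in S02–S03 can be sketched as follows (illustrative only; the subset partition, the 75% bootstrap rate and all names are assumptions of the sketch, and the base classifiers and confidence averaging are omitted):

```python
import numpy as np

def subset_pca(X):
    """Principal-axis matrix of one feature subset; columns are the
    coefficient vectors a_ij (orthonormal by construction)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt.T                                   # shape (M_j, M_j)

def rotation_matrix(X, subsets, rng):
    """Block-diagonal rotation matrix R_i: PCA is run on a 75% bootstrap
    of each feature subset and the coefficient blocks fill the diagonal."""
    n = X.shape[1]
    R = np.zeros((n, n))
    for cols in subsets:
        rows = rng.choice(len(X), size=int(0.75 * len(X)), replace=False)
        R[np.ix_(cols, cols)] = subset_pca(X[np.ix_(rows, cols)])
    return R

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
subsets = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # K = 2 feature subsets
R = rotation_matrix(X, subsets, rng)
Xr = X @ R                # rotated training data X R_i for one base classifier
# Each diagonal block is orthogonal, so rotation preserves within-block norms.
print(np.allclose(R[:3, :3].T @ R[:3, :3], np.eye(3)))
```

In the full method, one such R_i is built per base classifier D_i, each classifier is trained on X R_i^a, and the per-class confidences μ_j(x) are obtained by averaging d_i,j(x R_i^a) over all L classifiers.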
In this embodiment, the specific manner of S04 is as follows: each completely-random-tree forest in the cascade structure is assumed to contain 1000 completely random trees (a hyperparameter); each tree randomly selects one feature of the data as the splitting condition and generates child nodes accordingly, until every leaf node in the forest contains only instances of a single class or contains no more than 10 instances. The flow of generating the class distribution vector in each layer of the rotation forest is shown in fig. 3.
Multi-granularity scanning is used to pre-process the features of the input image: a sliding window is scanned over the samples to obtain the original features, as shown in fig. 4. For serial image data of 400 dimensions, a sampling window of size 100 is slid over the input to perform feature processing, generating 301 feature vectors of size 100 (400 − 100 + 1 = 301). The collected feature vectors (feature maps) are then input sequentially into rotation forest A and rotation forest B, respectively.
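The sliding-window scan described above can be sketched in a few lines (illustrative; the function name is an assumption):

```python
import numpy as np

def sliding_windows(vec, win):
    """All contiguous windows of length `win` over a 1-D feature vector."""
    n = len(vec) - win + 1
    return np.stack([vec[i:i + win] for i in range(n)])

vec = np.arange(400)              # 400-dim serial feature vector
patches = sliding_windows(vec, 100)
print(patches.shape)              # → (301, 100): 400 - 100 + 1 windows
```

Each of the 301 windows is then fed to the parallel rotation forests, whose class vectors form the transformed representation for the cascade.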
Table 1, Table 2, fig. 5 and fig. 6 illustrate performance comparisons between the algorithm proposed by the invention and existing algorithms.
Table 1. Comparison of the proposed algorithm with existing algorithms (floating-raft culture extraction results). [Table reproduced as an image in the original; values not recoverable.]
Table 2. Comparison of the proposed algorithm with existing algorithms (deep-water culture extraction results). [Table reproduced as an image in the original; values not recoverable.]
As is apparent from the above, the algorithm provided by the invention has higher accuracy and a higher Kappa coefficient, and is better suited to remote sensing extraction of offshore floating rafts and deep-water culture areas.
The above description is only of the preferred embodiments of the present invention, and is not intended to limit the present invention. Any simple modification, variation and equivalent variation of the above embodiments according to the technical substance of the invention still fall within the scope of the technical solution of the invention.

Claims (4)

1. The remote sensing extraction method for the offshore buoyant raft and the deepwater culture area is characterized by comprising the following operation steps of:
s1, inputting a high-resolution remote sensing image and extracting texture features;
s2, extracting local autocorrelation spatial structural features and establishing a spatial context information model;
s3, optimizing spectral features through principal component transformation and independent component transformation of the multispectral information;
s4, carrying out cross sampling and sparse expression reconstruction on texture features, spatial structure features and optimized spectrum features in the S1-S3, and generating new features through repeated sampling, expression and reconstruction;
s5, constructing a depth cascade rotating forest model, and carrying out depth expression on the spectral features, the spatial structure features and the texture features through multi-granularity scanning and forest cascade parallel processing to obtain deep information in the image;
and S6, gradually and serially connecting the rotating forest models in series on the basis of parallel connection, updating the rotating forest models by using a deep self-adaptive continuous iteration optimization mode, and completing high-precision extraction of remote sensing information.
2. The remote sensing extraction method of offshore buoyant rafts and deepwater farming areas according to claim 1, wherein the specific mode of S1 is:
the wavelet transform is improved into an adaptive multi-scale transform, and morphological sequence reconstruction profiles at different scales are extracted on the basis of the mathematical morphological opening and closing operations, so that the texture information of the remote sensing image is fully exploited; the specific formulas are:

A ⊖ B = { z | B_z ⊆ A }    (1)
A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }    (2)
D^(n)(S) = ( D^(n-1)(S) ⊕ B ) ∩ T    (3)
E^(n)(S) = ( E^(n-1)(S) ⊖ B ) ∪ T    (4)

Expressions (1) and (2) are the set-theoretic definitions of the erosion and dilation of A by the structuring element B, and they are the basis of the morphological filtering and filling operations. The mathematical models of geodesic erosion and geodesic dilation are given by expressions (3) and (4), where S is the marker image and T is the template (mask) image; when n = 0, D^(0)(S) = S and E^(0)(S) = S, and iterating expressions (3) and (4) until stability realizes morphological reconstruction by geodesic dilation and erosion.
3. The remote sensing extraction method of offshore buoyant rafts and deepwater farming areas according to claim 1, wherein the specific mode of S2 is:
on the basis of global spatial autocorrelation, the neighborhood analysis is strengthened, and the local spatial autocorrelation indices are computed by the following formulas:

I_i = ((x_i − x̄) / S²) Σ_j w_ij (x_j − x̄)    (5)
C_i = (1 / S²) Σ_j w_ij (x_i − x_j)²    (6)
G_i = Σ_j w_ij x_j / Σ_j x_j    (7)

where, in expression (5), x_i is the attribute value of spatial unit i, x̄ is the mean attribute value, S² is the variance, and w_ij is the spatial weight matrix representing the degree of influence between spatial units i and j; I_i is the local Moran's I index with value range [−1, 1]: a positive value indicates that the spatial unit is similar to the attribute values of its neighboring units (positive spatial autocorrelation); a negative value indicates that the spatial unit is dissimilar to its neighboring units (negative spatial autocorrelation); 0 indicates no spatial correlation. Expressions (6) and (7) are the corresponding local Geary's C and Getis–Ord G indices.
4. The remote sensing extraction method of offshore buoyant rafts and deepwater farming areas according to claim 1, wherein the specific operations of S4-S6 are:
S01, the sample set F is randomly sampled without replacement, and the whole data set is evenly divided in this way to obtain K training sets; training sets divided by sampling without replacement effectively increase the diversity of the base classifiers; each of the K training sets has M = n/K features, where n is the total number of features and K ≤ n;
S02, a PCA feature transformation is performed on partial subsets of the K training sets to prevent the base classifiers from being trained on the same training set, thereby increasing the difference between the base classifiers, wherein the partial subsets for the PCA transformation are selected as follows: each of the K training sets is resampled at a rate of 75% to obtain training subsets X_ij, where i is the index of the base classifier and j is the index of the training subset; PCA is performed on each training subset X_ij, producing the principal component coefficients a_ij^(1), a_ij^(2), …, a_ij^(M_j), each of length M; features whose eigenvalue is 0 are removed at the same time;
S03, rotation matrix processing: step S02 is repeated to obtain the K training subsets, and the principal component coefficients so generated are entered into a sparse block-diagonal rotation matrix R_i:

R_i = diag( [a_i1^(1), …, a_i1^(M_1)], [a_i2^(1), …, a_i2^(M_2)], …, [a_iK^(1), …, a_iK^(M_K)] )

the rotation matrix R_i has size n × Σ_j M_j; after processing through the rotation matrix R_i, the training data set X R_i^a for base classifier D_i is generated, the specific operation being to rearrange the columns of R_i to correspond to the order of the original feature set, giving the rearranged rotation matrix R_i^a of size N × n; for the final prediction, a test sample x is multiplied by R_i^a and input into base classifier D_i, d_i,j(x R_i^a) denoting the probability with which base classifier D_i predicts that x belongs to class w_j, and the confidence μ_j(x) that x belongs to each class is calculated as:

μ_j(x) = (1/L) Σ_{i=1}^{L} d_i,j(x R_i^a),  j = 1, …, c

S04, the confidences of the classes are compared, and the class with the maximum confidence is the class to which x finally belongs.
CN202211560583.4A 2022-12-07 2022-12-07 Remote sensing extraction method for offshore buoyant raft and deep water culture area Pending CN116229254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211560583.4A CN116229254A (en) 2022-12-07 2022-12-07 Remote sensing extraction method for offshore buoyant raft and deep water culture area

Publications (1)

Publication Number Publication Date
CN116229254A true CN116229254A (en) 2023-06-06

Family

ID=86575641

Country Status (1)

Country Link
CN (1) CN116229254A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452901A (en) * 2023-06-19 2023-07-18 中国科学院海洋研究所 Automatic extraction method for ocean culture area of remote sensing image based on deep learning
CN116452901B (en) * 2023-06-19 2023-09-15 中国科学院海洋研究所 Automatic extraction method for ocean culture area of remote sensing image based on deep learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination