CN112668403A - Fine-grained ship image target identification method for multi-feature area


Info

Publication number
CN112668403A
CN112668403A
Authority
CN
China
Prior art keywords
network
characteristic
layer
ship
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011448337.0A
Other languages
Chinese (zh)
Inventor
孙久武
徐志京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University
Priority to CN202011448337.0A
Publication of CN112668403A
Legal status: Withdrawn

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a fine-grained ship image target identification method for multiple feature areas. The method preprocesses ship image data to obtain the image size required by a convolutional neural network, then inputs the processed image into an improved RA-CNN network for detection and identification: the image is sent to the classification network VGG-SDP of the first scale layer for feature extraction and classification, while the output of the fifth pooling layer of the VGG-SDP is input to the positioning network JCMR-APN and, after cropping, sent to the second scale layer; the four cropped feature-area images are sent to a VGG-SDP network for feature extraction and classification, the fifth pooling layer of that VGG-SDP is input to a positioning network APN, and the result is cropped and sent to the third scale layer; the four smaller feature-area images are sent to a VGG-SDP network for feature extraction and classification, and the APN network outputs the feature areas. Finally, the results of the three scale layers are fused to obtain the positions of the ship's feature regions and its category.

Description

Fine-grained ship image target identification method for multi-feature area
Technical Field
The invention relates to detection and identification of ship targets, in particular to a fine-grained ship image target identification method for a multi-feature area.
Background
Building a ship target identification system has become an important task, and strengthening the identification of marine ship targets is a research direction of both military and economic value. Traditional radar target-tracking equipment and the ship Automatic Identification System (AIS) are mainly oriented toward positioning and have many shortcomings when it comes to identifying the specific type of a ship. With the development of deep learning, ship target identification based on Convolutional Neural Networks (CNN) has become a research focus in the field of marine navigation, and network models such as VGG-16 and VGG-19 are widely applied to ship target identification; among region-proposal-based CNN frameworks, Faster R-CNN performs well in this area. At the present stage, ships are of many kinds, different models are derived from the same ship class, and the differences between models are very small, so identifying the ship type is difficult in terms of both time and accuracy; the CNN networks mentioned above are generally suitable only for scenes with large differences between ships. To address this, fine-grained image classification networks such as the recurrent attention CNN (RA-CNN) have been widely applied, but research in the field of ship target identification is scarce and still at an early stage. In 2019, Huo Lihao et al. proposed a fine-grained ship identification method with a single feature area based on the RA-CNN network and verified it on an electro-optical ship data set. However, because the RA-CNN network has 3 scale layers, as the scale layers become deeper the effect of a single feature region worsens: global information cannot be fully utilized, so robustness degrades and the identification accuracy also drops.
Disclosure of Invention
The invention aims to provide a fine-grained ship image target identification method for multiple feature areas, addressing the practical problem that existing ships are too similar to each other. Identification performance is improved by adding Scale Dependent Pooling (SDP) to the VGG-19 classification network; multiple feature regions are generated by adding a joint clustering algorithm to the attention proposal network (APN); an optimization algorithm is designed to reduce the overlap rate among the multiple feature regions; and a new loss function is defined to cross-train the VGG-19 and the APN. Together these effectively improve the accuracy and robustness of the whole model.
The invention has the following advantages:
1. by introducing the SDP algorithm into the VGG-19 network, the problem of excessive pooling of small targets is solved, and the classification performance of the network is improved;
2. the multi-feature area positioning network can better utilize global information, so that the target identification has higher robustness;
3. the feature-region optimization algorithm reduces the overlap rate of the multiple feature regions, so that the localized regions remain independent of one another;
4. a channel loss function is introduced to jointly optimize the network, which improves the robustness of the image to noise and raises the ship target identification rate.
Specifically, the present invention achieves the above object by the following scheme:
the ship target identification method combining the multi-feature area and the fine-grained image is characterized by being more detailed
Identifying the target, comprising the steps of:
s1, preprocessing a ship data set;
s2, inputting the preprocessed ship image into the trained improved RA-CNN network;
S3, framing the ship feature regions by fusing the results of the 3 scale layers of the improved RA-CNN network and displaying the category at the upper left corner;
the step S1 further includes the steps of:
S1.1, uniformly converting the input ship image to a resolution of 224 × 224;
S1.2, uniformly mapping each pixel value of the image from [0, 255] to [-1, 1] according to the following formula:
x'_{i,j} = x_{i,j} / 127.5 - 1
where x_{i,j} and x'_{i,j} respectively represent the pixel values before and after preprocessing, i ∈ [0, 223], j ∈ [0, 223];
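As a concrete illustration, steps S1.1 and S1.2 can be sketched in a few lines of Python. The use of OpenCV for reading and resizing is an assumption; only the 224 × 224 size and the [0, 255] to [-1, 1] mapping come from the method itself:

```python
import cv2
import numpy as np

def preprocess_ship_image(path: str) -> np.ndarray:
    """S1.1 + S1.2: resize to 224 x 224 and map pixels from [0, 255] to [-1, 1]."""
    img = cv2.imread(path)                        # assumed input: 8-bit BGR image
    img = cv2.resize(img, (224, 224))             # S1.1: unify the resolution
    return img.astype(np.float32) / 127.5 - 1.0   # S1.2: x' = x / 127.5 - 1
```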
The step S2 further includes the steps of:
S2.1, first, in the first scale layer, the input image passes through the VGG-SDP network v1 for feature extraction, and the output P5 of the 5th pooling layer serves as the input of the JCMR-APN network;
S2.2, the JCMR-APN network m1 clusters the channels generated from P5 and selects a suitable subset to generate several mutually independent feature regions;
S2.3, the v1 network takes the pixel count N of the smallest of these feature regions as the input of the adaptive pooling criterion and selects a suitable pooling layer to generate the classification confidence vector Y^(1) of the first scale;
S2.4, evaluating the overlap rate among the feature regions and adjusting the regions;
S2.5, cropping and enlarging the adjusted feature regions as the input of the next scale layer;
S2.6, inputting the result of S2.5 into the VGG-SDP network v2 of the second scale layer to extract features, and repeating S2.2 and S2.3 to generate the classification confidence vector Y^(2) of the second scale; unlike the first layer, the positioning network m2 of the second layer adopts an APN consistent with the original RA-CNN, which crops and enlarges each input feature region and passes it to the third scale layer;
S2.7, repeating S2.6 in the third scale layer to generate the classification confidence vector Y^(3) of the third scale; the APN network m3 in this layer only generates the corresponding feature regions without further cropping, and the final localization is determined from the feature regions generated by the first scale layer;
Step S2.1 comprises the following steps:
S2.1-1, when the VGG-SDP receives an input image I, features are first extracted through 5 convolution blocks, and the output of the fifth pooling layer P5 is sent to the JCMR-APN network;
S2.1-2, according to the statistical area N of the feature regions returned by the JCMR-APN network, the VGG-SDP network selects a suitable feature map from the last three pooling layers and inputs it to the fully connected layer for classification; the adaptive pooling selection criterion is:
M(I) = P3(I), N < t1;  P4(I), t1 ≤ N < t2;  P5(I), N ≥ t2
where P3(I) and P4(I) respectively denote the outputs of the third and fourth pooling layers, t1 < t2 are preset area thresholds, and the M(I) function selects the appropriate pooling-layer output according to the size of N. When the feature-region area N is large, the network selects the fifth pooling layer P5 to represent the features of the target; when N is small, the network selects the third pooling layer P3, which undergoes less convolution and pooling, so as to retain more information;
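A minimal sketch of this selection rule follows; the concrete area thresholds t1 and t2 are illustrative placeholders, since the extracted text only fixes the qualitative behavior (small N selects P3, large N selects P5):

```python
def select_pooling_output(p3, p4, p5, n_min: int, t1: int = 64 * 64, t2: int = 128 * 128):
    """Scale Dependent Pooling: choose which pooling-layer output feeds the
    fully connected classifier, based on the pixel count n_min of the smallest
    feature region. The threshold values t1 < t2 are assumptions."""
    if n_min < t1:
        return p3    # small target: a shallower layer keeps more spatial detail
    if n_min < t2:
        return p4
    return p5        # large target: the deepest pooling layer suffices
```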
Step S2.2 comprises the following steps:
S2.2-1, let the feature-map channel sample set be S = {s1, s2, …, sk}; construct the adjacency matrix W (the matrix of weights between samples) and the degree matrix D (for each sample, the sum of its weights to all connected samples) from the sample set S;
S2.2-2, obtain the Laplacian matrix L = D - W from W and D;
S2.2-3, normalize L:
L_norm = D^(-1/2) L D^(-1/2)
S2.2-4, arrange the eigenvalues of L_norm from large to small, take the first K eigenvalues, and compute their eigenvectors;
S2.2-5, normalize each eigenvector and assemble the eigenvector matrix L_f;
S2.2-6, take each row vector of L_f to form a new sample set S', and perform K-means clustering on S' to generate K clusters corresponding to K feature regions of the ship image;
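Steps S2.2-1 to S2.2-6 amount to normalized spectral clustering of the P5 channels. The sketch below assumes an RBF affinity when building W (the text only says W holds the inter-sample weights) and uses scikit-learn's K-means for the final step:

```python
import numpy as np
from sklearn.cluster import KMeans

def joint_cluster_channels(S: np.ndarray, K: int, gamma: float = 1.0) -> np.ndarray:
    """Cluster feature-map channels into K groups (steps S2.2-1 .. S2.2-6).
    S: (k, d) array, one flattened P5 channel per row.
    The RBF affinity is an assumed choice; the patent only specifies W and D."""
    # S2.2-1: adjacency matrix W and degree matrix D
    sq = np.sum(S ** 2, axis=1)
    W = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * S @ S.T))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    # S2.2-2: Laplacian L = D - W
    L = D - W
    # S2.2-3: symmetric normalization L_norm = D^(-1/2) L D^(-1/2)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
    L_norm = d_inv_sqrt @ L @ d_inv_sqrt
    # S2.2-4: eigenvalues in descending order; keep eigenvectors of the first K
    vals, vecs = np.linalg.eigh(L_norm)
    Lf = vecs[:, np.argsort(vals)[::-1][:K]]
    # S2.2-5: row-normalize the eigenvector matrix
    Lf = Lf / np.linalg.norm(Lf, axis=1, keepdims=True)
    # S2.2-6: K-means on the row vectors -> K channel clusters / feature regions
    return KMeans(n_clusters=K, n_init=10).fit_predict(Lf)
```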
Step S2.4 comprises the following steps:
S2.4-1, calculate the pixel area of each feature region from its coordinate parameters, take the largest feature region as the reference region, and record it into the fixed-region sequence;
S2.4-2, calculate the overlap area N_ol between the second feature region and the reference region; with (x_ul, y_ul) and (x_br, y_br) denoting the upper-left and bottom-right corners of the two regions, the intersection corners are
t_x(ul) = max(x1_ul, x2_ul), t_y(ul) = max(y1_ul, y2_ul), t_x(br) = min(x1_br, x2_br), t_y(br) = min(y1_br, y2_br)   (1)
N_ol = max(t_x(br) - t_x(ul), 0) × max(t_y(br) - t_y(ul), 0)   (2)
S2.4-3, when the ratio of the overlap area to the feature-region area is larger than a set threshold, adjust the feature region until the ratio falls below the threshold;
S2.4-4, after adjustment, record the second feature region into the fixed-region sequence; for subsequent feature regions, select comparison regions from the fixed-region sequence in descending order of size and repeat the operations of S2.4-2 and S2.4-3.
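A short sketch of Eqs. (1) and (2) together with the threshold test of S2.4-3; the (x_ul, y_ul, x_br, y_br) box layout and the 0.3 threshold are assumptions, as the patent leaves the threshold value unspecified:

```python
def overlap_area(box_a, box_b) -> float:
    """N_ol of two feature regions per Eqs. (1) and (2).
    Boxes are (x_ul, y_ul, x_br, y_br); this layout is an assumption."""
    tx_ul = max(box_a[0], box_b[0])    # Eq. (1): intersection corners
    ty_ul = max(box_a[1], box_b[1])
    tx_br = min(box_a[2], box_b[2])
    ty_br = min(box_a[3], box_b[3])
    return max(tx_br - tx_ul, 0) * max(ty_br - ty_ul, 0)    # Eq. (2)

def needs_adjustment(box, ref, threshold: float = 0.3) -> bool:
    """S2.4-3: true when the overlap ratio exceeds the (assumed) threshold."""
    area = (box[2] - box[0]) * (box[3] - box[1])
    return overlap_area(box, ref) / area > threshold
```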
Drawings
FIG. 1 is an overall block diagram of a multi-feature area ship target identification system of the present invention;
FIG. 2 is a diagram of an improved RA-CNN network vessel target identification architecture of the present invention;
FIG. 3 is a block diagram of a VGG-19 network formed in conjunction with an SDP algorithm in accordance with the present invention;
FIG. 4 is a flow chart of the joint clustering algorithm of the present invention;
FIG. 5 is a schematic diagram of the region-overlap optimization of the JCMR-APN network of the present invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 to 5, the present invention provides a fine-grained ship image target identification method for a multi-feature area, the method includes the following steps, and the overall block diagram of the system is shown in fig. 1:
s1, preprocessing a ship data set;
S2, inputting the new image formed in step S1 into the improved RA-CNN network;
S3, cross-training the two sub-networks of the improved RA-CNN network, finally framing the regions of the ship target and displaying the category in the upper left corner;
In step S2, the conventional RA-CNN is an identification method based on a single feature region; as the scale layers become deeper, extracting a single feature prevents the original network from making good use of global information, which lowers the identification rate. To address this problem, this work designs a novel network that fuses the RA-CNN with multiple feature regions for ship target identification. The novel RA-CNN network still keeps the original 3 scale layers, but the classification network in each scale layer is changed from VGG-19 to VGG-SDP, the positioning network APN in the first scale layer is changed to the JCMR-APN network, and the other two scale layers keep the APN structure. The improved RA-CNN ship target identification is shown in fig. 2, and the specific steps (summarized in the sketch after this list) are:
S2.1, first, in the first scale layer, the input image passes through the VGG-SDP network v1 for feature extraction, and the output P5 of the 5th pooling layer serves as the input of the JCMR-APN network;
S2.2, the JCMR-APN network m1 clusters the channels generated from P5 and selects a suitable subset to generate several mutually independent feature regions;
S2.3, the v1 network takes the pixel count N of the smallest of these feature regions as the input of the adaptive pooling criterion and selects a suitable pooling layer to generate the classification confidence vector Y^(1) of the first scale;
S2.4, evaluating the overlap rate among the feature regions and adjusting the regions;
S2.5, cropping and enlarging the adjusted feature regions as the input of the next scale layer;
S2.6, inputting the result of S2.5 into the VGG-SDP network v2 of the second scale layer to extract features, and repeating S2.2 and S2.3 to generate the classification confidence vector Y^(2) of the second scale; unlike the first layer, the positioning network m2 of the second layer adopts an APN consistent with the original RA-CNN, which crops and enlarges each input feature region and passes it to the third scale layer;
S2.7, repeating S2.6 in the third scale layer to generate the classification confidence vector Y^(3) of the third scale; the APN network m3 in this layer only generates the corresponding feature regions without further cropping, and the final localization is determined from the feature regions generated by the first scale layer;
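The flow through the three scale layers can be summarized in the sketch below. The callables vgg_sdp, jcmr_apn and apn stand in for the trained sub-networks, and fusing the scale outputs by averaging is an assumption; the text states only that the three results are fused:

```python
from typing import Callable, List, Tuple
import cv2
import numpy as np

Box = Tuple[int, int, int, int]   # (x_ul, y_ul, x_br, y_br)

def crop_and_zoom(image: np.ndarray, box: Box, size: int = 224) -> np.ndarray:
    """S2.5: crop a feature region and enlarge it for the next scale layer."""
    x0, y0, x1, y1 = box
    return cv2.resize(image[y0:y1, x0:x1], (size, size))

def three_scale_forward(image: np.ndarray,
                        vgg_sdp: Callable,    # img -> (class scores, pool-5 map)
                        jcmr_apn: Callable,   # pool-5 map -> list of Boxes
                        apn: Callable):       # img -> one refined Box
    """Forward pass over the three scale layers (S2.1 - S2.7)."""
    y1, p5 = vgg_sdp(image)                           # scale 1: classify + pool-5
    regions: List[Box] = jcmr_apn(p5)                 # K independent feature regions
    crops = [crop_and_zoom(image, r) for r in regions]
    y2 = [vgg_sdp(c)[0] for c in crops]               # scale 2: classify each region
    crops3 = [crop_and_zoom(c, apn(c)) for c in crops]    # refine with a plain APN
    y3 = [vgg_sdp(c)[0] for c in crops3]              # scale 3: classify refined crops
    fused = np.mean([y1] + y2 + y3, axis=0)           # assumed fusion: score average
    return fused, regions                             # category scores + localization
```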
In fig. 2, p_t^(1) denotes the prediction probability of the true class at the first scale layer, and Y^(1) the predicted class label. Let Y_truth denote the true class label, and Y^(2) and Y^(3) the predicted class labels of the second and third scale layers; the classification loss L_inner(Y^(i), Y_truth) is the cross-entropy between the true label Y_truth and the predicted label Y^(i). In addition, p_t^(2) and p_t^(3) denote the true-class prediction probabilities of the second and third scales; the loss between the first and second scales is the ranking loss L_rank(p_t^(1), p_t^(2)), and that between the second and third scales is L_rank(p_t^(2), p_t^(3)), where, as in the original RA-CNN, L_rank(p_t^(s), p_t^(s+1)) = max{0, p_t^(s) - p_t^(s+1) + margin}.
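Under these definitions, the joint objective can be sketched in PyTorch as follows; the margin value and the equal weighting of the terms are assumptions:

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, labels, p_true, margin: float = 0.05) -> torch.Tensor:
    """Cross-entropy at every scale (the L_inner terms) plus RA-CNN-style
    ranking losses between neighboring scales.
    logits: list of three (B, C) score tensors; p_true: list of three (B,)
    true-class probabilities p_t^(1..3). Margin and weighting are assumptions."""
    l_inner = sum(F.cross_entropy(lg, labels) for lg in logits)
    # L_rank pushes each finer scale to be more confident on the true class
    l_rank = (torch.clamp(p_true[0] - p_true[1] + margin, min=0).mean()
              + torch.clamp(p_true[1] - p_true[2] + margin, min=0).mean())
    return l_inner + l_rank
```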
In step S2.1, to address the problem of excessive pooling of small targets, the SDP algorithm is added to the VGG-19 network so that the network intelligently selects a suitable convolution-block output for classification, improving the identification rate for small ships. The VGG-SDP network structure formed by combining VGG-19 with the SDP algorithm is shown in fig. 3. The specific steps are:
S2.1-1, when the VGG-SDP receives an input image I, features are first extracted through 5 convolution blocks, and the output of the fifth pooling layer P5 is sent to the JCMR-APN network;
S2.1-2, the VGG-SDP network takes the pixel count N of the smallest of the multiple independent feature regions returned by the JCMR-APN network, selects a suitable feature map from the last three pooling layers according to the size of N, and inputs it to the fully connected layer for classification; the adaptive pooling selection criterion is:
M(I) = P3(I), N < t1;  P4(I), t1 ≤ N < t2;  P5(I), N ≥ t2
where P3(I) and P4(I) respectively denote the outputs of the third and fourth pooling layers, t1 < t2 are preset area thresholds, and the M(I) function selects the appropriate pooling-layer output according to the size of N. When the feature-region area N is large, the network selects the fifth pooling layer P5 to represent the features of the target; when N is small, the network selects the third pooling layer P3, which undergoes less convolution and pooling, so as to retain more information;
In step S2.2, the flow of the joint clustering algorithm is shown in fig. 4. The specific steps are:
S2.2-1, let the feature-map channel sample set be S = {s1, s2, …, sk}; construct the adjacency matrix W (the matrix of weights between samples) and the degree matrix D (for each sample, the sum of its weights to all connected samples) from the sample set S;
S2.2-2, obtain the Laplacian matrix L = D - W from W and D;
S2.2-3, normalize L:
L_norm = D^(-1/2) L D^(-1/2)
S2.2-4, arrange the eigenvalues of L_norm from large to small, take the first K eigenvalues, and compute their eigenvectors;
S2.2-5, normalize each eigenvector and assemble the eigenvector matrix L_f;
S2.2-6, take each row vector of L_f to form a new sample set S', and perform K-means clustering on S' to generate K clusters corresponding to K feature regions of the ship image;
In step S2.4, a schematic diagram of the region-overlap optimization of the JCMR-APN network is shown in fig. 5. The specific steps are:
S2.4-1, calculate the pixel area of each feature region from its coordinate parameters, take the largest feature region as the reference region, and record it into the fixed-region sequence;
S2.4-2, calculate the overlap area N_ol between the second feature region and the reference region; with (x_ul, y_ul) and (x_br, y_br) denoting the upper-left and bottom-right corners of the two regions, the intersection corners are
t_x(ul) = max(x1_ul, x2_ul), t_y(ul) = max(y1_ul, y2_ul), t_x(br) = min(x1_br, x2_br), t_y(br) = min(y1_br, y2_br)   (1)
N_ol = max(t_x(br) - t_x(ul), 0) × max(t_y(br) - t_y(ul), 0)   (2)
S2.4-3, when the ratio of the overlap area to the feature-region area is larger than a set threshold, adjust the feature region until the ratio falls below the threshold;
S2.4-4, after adjustment, record the second feature region into the fixed-region sequence; for subsequent feature regions, select comparison regions from the fixed-region sequence in descending order of size and repeat the operations of S2.4-2 and S2.4-3.

Claims (1)

1. The fine-grained ship image target identification method for the multi-feature area is characterized by comprising the following steps of:
s1, preprocessing a ship data set;
s2, inputting the preprocessed ship image into the trained improved RA-CNN network;
S3, marking the ship feature regions by fusing the results of the 3 scale layers of the improved RA-CNN network and displaying the category at the upper left corner;
the step S1 includes the steps of:
S1.1, uniformly converting the input ship image to a resolution of 224 × 224;
S1.2, uniformly mapping each pixel value of the image from [0, 255] to [-1, 1] according to the following formula:
x'_{i,j} = x_{i,j} / 127.5 - 1
where x_{i,j} and x'_{i,j} respectively represent the pixel values before and after preprocessing, i ∈ [0, 223], j ∈ [0, 223];
The step S2 includes the steps of:
S2.1, first, in the first scale layer, the input image passes through the VGG-SDP network v1 for feature extraction, and the output P5 of the 5th pooling layer serves as the input of the JCMR-APN network;
S2.2, the JCMR-APN network m1 clusters the channels generated from P5 and selects a suitable subset to generate several mutually independent feature regions;
S2.3, the v1 network takes the pixel count N of the smallest of these feature regions as the input of the adaptive pooling criterion and selects a suitable pooling layer to generate the classification confidence vector Y^(1) of the first scale;
S2.4, evaluating the overlap rate among the feature regions and adjusting the regions;
S2.5, cropping and enlarging the adjusted feature regions as the input of the next scale layer;
S2.6, inputting the result of S2.5 into the VGG-SDP network v2 of the second scale layer to extract features, and repeating S2.2 and S2.3 to generate the classification confidence vector Y^(2) of the second scale; unlike the first layer, the positioning network m2 of the second layer adopts an APN consistent with the original RA-CNN, which crops and enlarges each input feature region and passes it to the third scale layer;
S2.7, repeating S2.6 in the third scale layer to generate the classification confidence vector Y^(3) of the third scale; the APN network m3 in this layer only generates the corresponding feature regions without further cropping, and the final localization is determined from the feature regions generated by the first scale layer;
step S2.1 comprises the following steps:
S2.1-1, when the VGG-SDP receives an input image I, features are first extracted through 5 convolution blocks, and the output of the fifth pooling layer P5 is sent to the JCMR-APN network;
S2.1-2, according to the statistical area N of the feature regions returned by the JCMR-APN network, the VGG-SDP network selects a suitable feature map from the last three pooling layers and inputs it to the fully connected layer for classification; the adaptive pooling selection criterion is:
M(I) = P3(I), N < t1;  P4(I), t1 ≤ N < t2;  P5(I), N ≥ t2
where P3(I) and P4(I) respectively denote the outputs of the third and fourth pooling layers, t1 < t2 are preset area thresholds, and the M(I) function selects the appropriate pooling-layer output according to the size of N; when the feature-region area N is large, the network selects the fifth pooling layer P5 to represent the features of the target; when N is small, the network selects the third pooling layer P3, which undergoes less convolution and pooling, so as to retain more information;
step S2.2 comprises the following steps:
S2.2-1, let the feature-map channel sample set be S = {s1, s2, …, sk}; construct the adjacency matrix W and the degree matrix D from the sample set S;
S2.2-2, obtain the Laplacian matrix L = D - W from W and D;
S2.2-3, normalize L:
L_norm = D^(-1/2) L D^(-1/2)
S2.2-4, arrange the eigenvalues of L_norm from large to small, take the first K eigenvalues, and compute their eigenvectors;
S2.2-5, normalize each eigenvector and assemble the eigenvector matrix L_f;
S2.2-6, take each row vector of L_f to form a new sample set S', and perform K-means clustering on S' to generate K clusters corresponding to K feature regions of the ship image;
step S2.4 comprises the following steps:
S2.4-1, calculate the pixel area of each feature region from its coordinate parameters, take the largest feature region as the reference region, and record it into the fixed-region sequence;
S2.4-2, calculate the overlap area N_ol between the second feature region and the reference region; with (x_ul, y_ul) and (x_br, y_br) denoting the upper-left and bottom-right corners of the two regions, the intersection corners are
t_x(ul) = max(x1_ul, x2_ul), t_y(ul) = max(y1_ul, y2_ul), t_x(br) = min(x1_br, x2_br), t_y(br) = min(y1_br, y2_br)   (1)
N_ol = max(t_x(br) - t_x(ul), 0) × max(t_y(br) - t_y(ul), 0)   (2)
S2.4-3, when the ratio of the overlap area to the feature-region area is larger than a set threshold, adjust the feature region until the ratio falls below the threshold;
S2.4-4, after adjustment, record the second feature region into the fixed-region sequence; for subsequent feature regions, select comparison regions from the fixed-region sequence in descending order of size and repeat the operations of steps S2.4-2 and S2.4-3.
CN202011448337.0A 2020-12-09 2020-12-09 Fine-grained ship image target identification method for multi-feature area Withdrawn CN112668403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011448337.0A CN112668403A (en) 2020-12-09 2020-12-09 Fine-grained ship image target identification method for multi-feature area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011448337.0A CN112668403A (en) 2020-12-09 2020-12-09 Fine-grained ship image target identification method for multi-feature area

Publications (1)

Publication Number Publication Date
CN112668403A 2021-04-16

Family

ID=75402258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011448337.0A Withdrawn CN112668403A (en) 2020-12-09 2020-12-09 Fine-grained ship image target identification method for multi-feature area

Country Status (1)

Country Link
CN (1) CN112668403A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492634A (en) * 2022-01-25 2022-05-13 中国人民解放军国防科技大学 Fine-grained equipment image classification and identification method and system
CN114492634B (en) * 2022-01-25 2024-01-19 中国人民解放军国防科技大学 Fine granularity equipment picture classification and identification method and system

Similar Documents

Publication Publication Date Title
CN109949317B (en) Semi-supervised image example segmentation method based on gradual confrontation learning
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN109241913B (en) Ship detection method and system combining significance detection and deep learning
CN112818903B (en) Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN114565860B (en) Multi-dimensional reinforcement learning synthetic aperture radar image target detection method
CN114332649B (en) Cross-scene remote sensing image depth countermeasure migration method based on double-channel attention
CN112287941B (en) License plate recognition method based on automatic character region perception
CN111523553A (en) Central point network multi-target detection method based on similarity matrix
CN113420643B (en) Lightweight underwater target detection method based on depth separable cavity convolution
CN112489054A (en) Remote sensing image semantic segmentation method based on deep learning
CN111723693A (en) Crowd counting method based on small sample learning
CN111738113A (en) Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN115512103A (en) Multi-scale fusion remote sensing image semantic segmentation method and system
CN112883850A (en) Multi-view aerospace remote sensing image matching method based on convolutional neural network
CN110738132A (en) target detection quality blind evaluation method with discriminant perception capability
CN116580322A (en) Unmanned aerial vehicle infrared small target detection method under ground background
CN116912796A (en) Novel dynamic cascade YOLOv 8-based automatic driving target identification method and device
CN111242028A (en) Remote sensing image ground object segmentation method based on U-Net
CN114549909A (en) Pseudo label remote sensing image scene classification method based on self-adaptive threshold
CN114170526A (en) Remote sensing image multi-scale target detection and identification method based on lightweight network
CN112668403A (en) Fine-grained ship image target identification method for multi-feature area
CN111666953B (en) Tidal zone surveying and mapping method and device based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210416)