CN114821358A - Optical remote sensing image marine ship target extraction and identification method - Google Patents
- Publication number
- CN114821358A (application CN202210463979.0A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- saliency map
- target
- ship
- sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/13 — Satellite images
- G06F18/24323 — Tree-organised classifiers
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06V10/40 — Extraction of image or video features
- G06V10/761 — Proximity, similarity or dissimilarity measures
- G06V10/764 — Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V2201/07 — Target detection
Abstract
The invention relates to a method for extracting and recognizing marine ship targets in optical remote sensing images, comprising: inputting a visible-light remote sensing image; globally detecting the sea surface in the image with a visual saliency method based on joint covariance features to generate a single-scale saliency map; downsampling the single-scale saliency map to build a multi-scale saliency map and obtain the final saliency map; computing a detection threshold for surface ship targets and binarizing the saliency map to achieve coarse segmentation; building a training set and a test set so as to obtain feature vectors usable for training and testing; training a model on the positive and negative samples of the training set under the established framework and classifying the feature vectors; and testing candidate regions with the positive and negative samples of the test set under the established framework, completing the extraction and recognition of marine ship targets in optical remote sensing images. The invention searches sea-surface target regions efficiently, greatly reduces the false alarm rate, and improves detection accuracy.
Description
Technical Field
The invention relates to a method for extracting and identifying marine ship targets in optical remote sensing images.
Background
In recent years, marine remote sensing has been one of the more challenging research topics in computer vision, and ship detection is among its most promising directions. With the rapid development of remote sensing information science, ship detection based on remote sensing is applied not only in military fields such as maritime reconnaissance and maritime strike analysis and assessment, but also widely in civil fields such as island resource surveys, ocean exploration, and maritime rescue, and is therefore of great value.

With the growing capability of airborne and spaceborne platforms to acquire remote sensing data and the rapid development of high-resolution satellites, ever more remote sensing data are available for research. From the perspective of data acquisition, current ship detection falls roughly into three categories: synthetic aperture radar (SAR) image ship detection, infrared (IR) image ship detection, and visible-light remote sensing (VRS) image ship detection. Because SAR offers day-and-night imaging unaffected by complex weather conditions such as illumination changes, clouds, and fog, together with a certain penetration capability, it is mostly used to monitor sea-surface oil spills and ocean surface currents. Countries worldwide continue to develop new SAR payloads with ever higher spatial resolution and have achieved impressive performance. However, the coherent imaging mechanism of SAR produces heavy speckle noise that severely corrupts edge and texture features, and SAR cannot exploit color information, so it is ill-suited to ship target recognition. Infrared images enhance visibility in low-light conditions but suffer from low signal-to-noise ratio and insufficient structural information. Visible-light images, by contrast, offer many exploitable features, such as color, texture, edges, orientation, and frequency-domain characteristics, and can therefore capture more detail and more complex structure.

Current methods for extracting and detecting ship target regions at sea include the following. Traditional optical remote sensing ship detection segments the image by grayscale information; such methods work only on calm sea surfaces without cloud or fog interference and are not robust. Template-matching methods remove offshore island regions according to ship target shape, but a suitable template is hard to choose across different scenes and ship types. Traditional machine learning methods, aimed mainly at separating target from background, depend heavily on ideal training samples. Deep learning methods classify target and background effectively but demand substantial hardware, involve complex training, and offer poor interpretability. Sparse-representation methods are not yet systematic and have been applied only partially to ship detection in infrared images. Finally, segmentation based on visual saliency produces far fewer suspected target regions than the sliding-window and grayscale-segmentation approaches used in machine learning.
In summary, current ship target detection in optical remote sensing images suffers from at least the following shortcomings:
1. Islands, dense clouds, waves, and various uncertain sea conditions lead to a high false alarm rate.
2. Visible-light sensor parameter limits, sea clutter interference, and ship wake interference reduce the homogeneity of ship targets.
3. Ship color, texture, size, type, and other intrinsic factors lower the grayscale correlation of targets.
4. Given the speed required for processing large-scale remote sensing data, reducing the computational burden becomes a key issue.
5. Varied target orientations and indistinct features cause low detection efficiency during the detection process.
Therefore, fast, stable, and robust extraction and detection under complex conditions such as harsh sea states, low target homogeneity and grayscale correlation, target geometric distortion, and target rotation has become an urgent problem to solve.
Summary of the Invention
In view of this, it is necessary to provide a method for extracting and identifying marine ship targets in optical remote sensing images.
The invention provides a method for extracting and identifying marine ship targets in optical remote sensing images, comprising the following steps: a. input a visible-light remote sensing image; b. introduce covariance statistics and a homologous similarity measure, and use a visual saliency method based on joint covariance features to globally detect the sea surface in the remote sensing image, generating a single-scale saliency map; c. downsample the generated single-scale saliency map to build a multi-scale saliency map, and obtain the final saliency map through a product fusion mechanism with normalization; d. from the grayscale statistics of the final saliency map, compute the detection threshold for surface ship targets, binarize the saliency map to achieve coarse segmentation, map the result back onto the original remote sensing image, locate each target region, and separate suspected targets from the sea background; e. build a training set and a test set, each containing positive and negative samples; f. design CF-Fourier features and embed aggregated channel features (ACF) and pyramid features (FGPM) to establish a framework that yields feature vectors usable for training and testing; g. train the model on the positive and negative samples of the training set under the established framework, using a boosted decision tree to classify the feature vectors of step f; h. test the candidate regions with the positive and negative samples of the test set under the established framework, completing the extraction and identification of marine ship targets in optical remote sensing images.
Preferably, step a includes:
Input an optical remote sensing image f(x, y) with spatial resolution H×W. The image contains ships, sea fog, thick clouds, islands, and the like; the ships differ in size and color polarity and are randomly distributed over the sea surface.
Preferably, step b includes:
Step S21: From the input remote sensing image, compute the brightness of each pixel m and extract the horizontal and vertical gradient features, the second-order brightness derivative features, and the lightness L and opponent color dimensions a and b of the perceptually motivated Lab color space; together with the position coordinates (x, y), these form a nine-dimensional feature vector f_m.
Step S22: Divide the remote sensing image into square regions R of equal size, compute the feature mean, and build from f_m a symmetric 9×9 covariance feature matrix as the region descriptor.
Step S23: Apply Cholesky decomposition to the region descriptor to obtain each row vector L_i of the upper triangular matrix; the region descriptor is then equivalent to a set of points S in Euclidean space.
Merging the feature mean μ with the point set S yields a feature vector ψ_μ(C_R) that encodes C_R with Euclidean-space computability:
ψ_μ(C_R) = (μ, s_1, s_2, ..., s_k, s_{k+1}, s_{k+2}, ..., s_{2k});
Step S24: Using a contextual similarity measure, find the T most similar measures to represent the saliency of the region.
Step S25: On the basis of this saliency, a homologous similarity weight function w_j is designed to enhance contrast and obtain a sparse map of the salient regions.
The weight function is measured by an inverse feature-distance function and is defined as a Gaussian function.
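Steps S21-S23 can be sketched as follows. This is a minimal illustration, not the patented implementation: the exact composition of the nine-dimensional feature vector (here assumed to be position, intensity stand-ins for the Lab channels, and first- and second-order brightness derivatives) and the region size are assumptions, and the sign-symmetric point set used to unfold the Cholesky factor is one common convention for covariance region descriptors.

```python
import numpy as np

def pixel_features(img):
    """Assemble a 9-D feature vector per pixel from a grayscale image.
    The Lab color channels are approximated by the intensity channel
    alone here (assumption, since the demo image has no color)."""
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))   # first derivatives
    gyy, _ = np.gradient(gy)                  # second derivative along y
    _, gxx = np.gradient(gx)                  # second derivative along x
    ys, xs = np.mgrid[0:h, 0:w]
    chans = [xs, ys, img, img, img,
             np.abs(gx), np.abs(gy), np.abs(gxx), np.abs(gyy)]
    return np.stack([c.ravel() for c in chans], axis=1)

def region_descriptor(features):
    """Covariance-based region descriptor: 9x9 covariance of the per-pixel
    features of one square region R, unfolded by Cholesky factorization
    into a Euclidean point set and prepended with the feature mean."""
    mu = features.mean(axis=0)                # feature mean of the region
    cov = np.cov(features, rowvar=False)      # 9x9 covariance matrix
    cov += 1e-6 * np.eye(cov.shape[0])        # regularize for Cholesky
    L = np.linalg.cholesky(cov)               # triangular factor, rows L_i
    S = np.concatenate([L, -L], axis=0)       # 2k points s_1 .. s_2k
    return np.concatenate([[mu], S], axis=0)  # psi_mu(C_R) = (mu, s_1..s_2k)

img = np.random.rand(16, 16)                  # one 16x16 region R
psi = region_descriptor(pixel_features(img))
print(psi.shape)                              # the mean plus 2*9 points
```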
Preferably, step d includes:
Use the OTSU method to obtain an adaptive segmentation threshold T and build connected regions from which the targets are extracted.
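A minimal Otsu thresholding sketch over a saliency map (pure NumPy; the 256-bin histogram and the synthetic saliency map are illustrative assumptions):

```python
import numpy as np

def otsu_threshold(saliency):
    """Adaptive threshold T by Otsu's method: maximize between-class
    variance over the saliency map's gray-level histogram (step d, sketch)."""
    levels = (saliency * 255).astype(np.uint8)
    hist = np.bincount(levels.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability per level
    mu = np.cumsum(p * np.arange(256))         # class-0 mean mass per level
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b)) / 255.0

rng = np.random.default_rng(0)
sal = np.clip(rng.normal(0.2, 0.05, (64, 64)), 0, 1)   # dark sea background
sal[20:30, 20:40] = 0.9                                # bright ship-like blob
T = otsu_threshold(sal)
binary = sal > T                                       # coarse segmentation
print(0.2 < T < 0.9, binary[25, 30], binary[5, 5])
```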
Preferably, step f specifically includes:
Step S61: The gradient of the planar image I(x, y) at pixel (x, y) is written (D(x, y), θ(D(x, y))), and the continuous gradient-orientation pulse curve is computed as:
h(ζ) = ||D(x, y)|| · δ(ζ − θ(D(x, y)));
Step S62: Apply Fourier analysis to the gradient-orientation pulse curve to obtain its coefficients.
Step S63: Rotate the image in the vector field, and find the rotation-invariance condition and the designed self-steering kernel function P_j(r).
Step S64: Model by convolution with the above kernel functions; under the rotation-invariance condition, the Fourier HOG rotation-invariant descriptor follows.
Step S65: Introduce the circular frequency filter (CF). Exploiting the brightness difference between a ship and its surrounding background, design the gray-value variation pattern of ship targets and compute the discrete Fourier transform (DFT) of the gray values at pixel (i, j).
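The circular frequency filter of step S65 can be sketched as below. The sampling of gray values on a circle around each pixel and the choice of radius, sample count, and harmonic are assumptions for illustration; only the general idea (a DFT over gray values sampled around the pixel, responding to the characteristic bright/dark variation pattern of a ship against the sea) follows the text.

```python
import numpy as np

def cf_response(img, i, j, radius=4, n_samples=16, harmonic=2):
    """Circular-frequency response at pixel (i, j): magnitude of one DFT
    harmonic of the gray values sampled on a ring around the pixel
    (sketch of the CF filter in step S65; parameters are assumptions)."""
    angles = 2 * np.pi * np.arange(n_samples) / n_samples
    ys = np.clip(np.round(i + radius * np.sin(angles)).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.round(j + radius * np.cos(angles)).astype(int),
                 0, img.shape[1] - 1)
    ring = img[ys, xs].astype(float)       # gray values around the pixel
    spectrum = np.fft.fft(ring)            # DFT of the ring's gray values
    return np.abs(spectrum[harmonic]) / n_samples

# A bright elongated "ship" on a dark sea makes the ring anisotropic
# (bright along the hull, dark across it), so a low-order harmonic responds
# strongly at the ship and not over open water.
sea = np.zeros((32, 32))
sea[14:18, 8:24] = 1.0                     # synthetic hull
on_ship = cf_response(sea, 16, 16)
on_sea = cf_response(sea, 4, 4)
print(on_ship > on_sea)
```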
Step S66: The extracted rotation-invariant gradient features and circular frequency features are fed to a classifier to distinguish real ships from false alarms. Because collecting image pyramid features directly is slow, fast pyramid feature estimation at a different scale d_1 is achieved from the base scale d_0 by estimating a scale factor λ:
F_{d1} = F_{d0} · (d_0 / d_1)^{−λ}.
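The power-law extrapolation above can be sketched as follows (the aggregated-feature statistic and the value of λ are assumptions; in the fast-pyramid literature λ is fitted empirically per channel type):

```python
def estimate_feature_at_scale(F_d0, d0, d1, lam):
    """Extrapolate an aggregated channel-feature statistic from base scale
    d0 to scale d1 via the power law F_d1 = F_d0 * (d0 / d1) ** (-lam),
    avoiding an explicit recomputation of the channel at scale d1."""
    return F_d0 * (d0 / d1) ** (-lam)

# e.g. a mean gradient-magnitude statistic measured once at the base scale...
F_base = 0.8
# ...predicts the statistic at a half-resolution pyramid level directly.
F_half = estimate_feature_at_scale(F_base, d0=1.0, d1=0.5, lam=0.1)
print(round(F_half, 4))
```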
Preferably, step g specifically includes the following steps:
Feed positive and negative samples in a 1:3 ratio to the model input, train the model, and classify to produce confidence scores for the candidate regions; use the intersection-over-union (IoU) as the criterion for deciding whether a region is a true target.
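A minimal IoU check for deciding whether a candidate box matches an annotated ship box (the 0.5 acceptance threshold is a common convention, not stated in the text):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

candidate = (10, 10, 30, 20)            # detected candidate region
truth = (12, 10, 32, 20)                # annotated ship box
print(iou(candidate, truth) >= 0.5)     # accepted as a true target
```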
Preferably, step h specifically includes:
Test the suspected ship target region fragments extracted in step d to decide whether each is a real target or a false alarm; real targets are retained, false alarms are removed, and the results are finally marked in the input image.
The present application involves no elaborate parameter tuning and does not depend on prior knowledge of the sea background or the target distribution. Addressing the characteristics of ship targets against a sea background, it proposes a visual saliency detection method based on joint covariance statistical features: the weighted fusion of regional covariance estimates with homologous similarity corrects the weaknesses of each and reinforces their combined strengths, thereby suppressing sea background interference. Multi-scale product fusion enhances the overall continuity of detected targets and the distinguishability between targets, searching sea-surface target regions efficiently. For false alarms such as thick clouds and islands that may appear in the image, the aggregated channel feature and feature pyramid acceleration framework (ACF-FPGM) embedded with joint CF-Fourier spatial/frequency-domain features further screens the detected targets and judges whether each is a ship, greatly reducing the false alarm rate and improving detection accuracy.

In addition, both detection and discrimination run at the second level, giving good real-time performance and a marked improvement in automation. The method can rapidly find, locate, and count ship targets over wide sea areas under multiple background interferences, with robust detection. It lays a foundation for further combining UAV platform or satellite attitude data to compute the position, heading, and other intelligence for each ship, and for the classification and identification of ship targets.
Brief Description of the Drawings
Fig. 1 is a flow chart of the method of the present invention for extracting and identifying marine ship targets in optical remote sensing images;
Fig. 2 is a block flow diagram of the method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the visual-saliency target extraction process provided by an embodiment of the present invention;
Fig. 4 illustrates the multi-scale fusion effect in an embodiment of the present invention: Fig. 4(a) original image; Fig. 4(b) σ = 2⁻⁴; Fig. 4(c) σ = 2⁻⁵; Fig. 4(d) σ = 2⁻⁶; Fig. 4(e) fused saliency map;
Fig. 5(a) is the original image I provided by an embodiment of the present invention;
Fig. 5(b) is the directional gradient map Dx/Dy provided by an embodiment of the present invention;
Fig. 5(c) is a schematic diagram of the Fourier analysis coefficients of the gradient provided by an embodiment of the present invention;
Fig. 6 shows the CF-feature gray-value variation pattern and feature schematics in an embodiment of the present invention: Fig. 6(a) ships; Fig. 6(b) gray-level statistics; Fig. 6(c) pseudo-color map;
Fig. 7 is a schematic diagram of the fine discrimination process provided by an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Figs. 1 and 2 show the working flow of a preferred embodiment of the method of the present invention for extracting and identifying marine ship targets in optical remote sensing images.
Step S1: input a visible-light remote sensing image f(x, y) with spatial resolution H×W. Specifically:
The input optical remote sensing image f(x, y) of spatial resolution H×W contains ships, sea fog, thick clouds, islands, and the like; the ships differ in size and color polarity, and their positions on the sea surface are randomly distributed.
Step S2: introduce covariance statistics and a homologous similarity measure, and use the visual saliency method based on joint covariance features to globally detect the sea surface in the remote sensing image, generating a single-scale saliency map. Specifically:
Step S21: From the input remote sensing image, compute the brightness of each pixel m and extract the horizontal and vertical gradient features, the second-order brightness derivative features, and the lightness L and opponent color dimensions a and b of the perceptually motivated Lab color space; these three feature groups correspond to the three rows of images in the second column of Fig. 3, and together with the position coordinates (x, y) they form a nine-dimensional feature vector f_m.
Step S22: Divide the remote sensing image into square regions R of equal size, compute the feature mean, and build from f_m a symmetric 9×9 covariance feature matrix as the region descriptor.
Step S23: Apply Cholesky decomposition to the region descriptor to obtain each row vector L_i of the upper triangular matrix; the region descriptor is then equivalent to a set of points S in Euclidean space.
Merging the feature mean μ with the point set S yields a feature vector ψ_μ(C_R) that encodes C_R with Euclidean-space computability:
ψ_μ(C_R) = (μ, s_1, s_2, ..., s_k, s_{k+1}, s_{k+2}, ..., s_{2k})
Step S24: Measure contextual similarity using Euclidean distance, and take the T most similar measures to represent the saliency of the region. The context here is a neighborhood of radius three times the partition-unit length, i.e., the neighbors R_i are numbered 1-9 with position 5 (R itself) excluded; this embodiment sets T = 5.
Step S25: On the basis of this saliency, a homologous similarity weight function w_j is designed to enhance contrast and obtain a sparse map of the salient regions.
The weight function is measured by an inverse feature-distance function and is defined as a Gaussian function.
Step S3: downsample the generated single-scale saliency map to build a multi-scale saliency map, and obtain the final saliency map through product fusion with normalization. Specifically:
The single-scale saliency map generation above is extended to multiple scales to balance the tension between regional representation capability and the spatial resolution of the saliency map, using a multi-scale product fusion strategy with normalization. Referring to Fig. 4, this embodiment uses three scales, Γ = {σ | 2^k} (k = −4, −5, −6), as shown in Figs. 4(b)-4(d). The final saliency map (Fig. 4(e)) removes cloud and fog well at the fine scales, while the coarse scale helps highlight ship targets.
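The product fusion of step S3 can be sketched as follows. This is a minimal version: the block-average downsampling, nearest-neighbor upsampling, and normalization to [0, 1] are assumptions, and the small factors (2, 4, 8) stand in for the embodiment's σ = 2⁻⁴ to 2⁻⁶, which presume much larger images.

```python
import numpy as np

def downsample(img, f):
    """Block-average downsampling by integer factor f."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def fuse_multiscale(saliency, factors=(2, 4, 8)):
    """Product fusion across scales (sketch of step S3): saliency maps at
    several downsampled scales are brought back to full size, multiplied
    element-wise, and normalized, so only responses consistent across
    scales survive."""
    fused = np.ones_like(saliency)
    for f in factors:
        coarse = downsample(saliency, f)
        up = np.repeat(np.repeat(coarse, f, axis=0), f, axis=1)  # nearest upsample
        fused *= up
    return fused / fused.max()

rng = np.random.default_rng(1)
sal = rng.random((64, 64)) * 0.1            # low-level sea-surface noise
sal[24:32, 24:40] = 1.0                     # ship-like blob, salient at all scales
fused = fuse_multiscale(sal)
print(fused[28, 30] > fused[4, 4])          # blob stays strong, noise is suppressed
```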
Step S4: from the grayscale statistics of the final saliency map, compute the detection threshold for surface ship targets, binarize the saliency map to achieve coarse segmentation, map the result back onto the original remote sensing image, locate each target region, and separate suspected targets from the sea background. Specifically:
This embodiment uses the OTSU method to obtain an adaptive segmentation threshold T and build connected regions from which the targets are extracted.
Step S5 (see also FIG. 7): build a training set and a test set, each containing positive and negative samples.
In this embodiment, the training and test sets comprise 630 images in total, each 56×56 pixels. The positive samples contain various ships against different backgrounds, with ship sizes ranging from 6 to 20 pixels; the negative samples come from background interference that may occur at sea, such as waves, wake waves, thin haze, dense cloud patches, and islands.
Step S6: design the CF-Fourier feature, embed it into the aggregated channel features (ACF) and fast feature pyramid (FGPM) framework, and obtain feature vectors usable for training and testing. Specifically:
Step S61: as shown in Figures 5(a) and 5(b), the gradient of the image I(x,y) at pixel (x,y) is expressed as (D(x,y), θ(D(x,y))); the continuous gradient-orientation impulse curve is then computed as:
h(ζ) = ||D(x,y)|| · δ(ζ − θ(D(x,y)))
Step S62: apply Fourier analysis to the gradient-orientation impulse curve:
The Fourier-domain coefficient images corresponding to the coefficients are shown in Figure 5(c);
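The Fourier analysis of the orientation impulse curve h(ζ) reduces to computing complex coefficients c_m = Σ ||D|| e^{−imθ} over the gradients; the magnitudes |c_m| are invariant to a global rotation of all gradient directions, which is the property Fourier HOG exploits. A minimal sketch (function name and flat gradient-list input are illustrative assumptions):

```python
import cmath

def fourier_orientation_coeffs(gradients, num_coeffs=4):
    # Fourier-series coefficients of the orientation impulse curve
    # h(ζ) = Σ ||D|| δ(ζ - θ): c_m = Σ ||D|| exp(-i m θ).
    # `gradients` is a list of (magnitude, angle) pairs.
    coeffs = []
    for m in range(num_coeffs):
        c = sum(mag * cmath.exp(-1j * m * theta) for mag, theta in gradients)
        coeffs.append(c)
    return coeffs
```

Rotating every gradient angle by φ multiplies c_m by e^{−imφ}, leaving |c_m| unchanged, which is why magnitude-based descriptors built from these coefficients are rotation invariant.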
Step S63: rotate the image within the vector field, derive the condition for rotation invariance, and design the self-steering kernel functions P_j(r):
Step S64: using convolution with the above kernel functions and the rotation-invariance condition, the Fourier HOG rotation-invariant descriptor is expressed as follows:
Step S65: introduce a circular frequency (CF) filter, exploiting the brightness difference between a ship and its surrounding background to model the gray-value variation pattern of ship targets. Referring to Figure 6, the discrete Fourier transform (DFT) of the gray values at pixel (i,j) is computed:
Step S66: the extracted rotation-invariant gradient features and circular-frequency features are fed to a classifier to distinguish real ships from false alarms. Aggregated channel features (ACF) are used for structural feature refinement, and the resulting ACF are fed into a boosting decision tree, where a fast detection rate and low computational cost are crucial. Because directly collecting image-pyramid features is inefficient, fast pyramid feature estimation at a scale d1 is obtained from the base scale d0 by estimating a scale factor λ:
F_d1 = F_d0 · (d_0/d_1)^(−λ)
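The power-law relation above lets features at a new scale be approximated directly from the base scale instead of recomputing the full pyramid. A one-function sketch under that assumption (names are illustrative; real ACF channels are 2-D, here a flat feature vector stands in for them):

```python
def estimate_pyramid_feature(f_d0, d0, d1, lam):
    # Fast feature-pyramid estimation: features at scale d1 are
    # approximated from the base scale d0 via
    #   F_d1 = F_d0 * (d0 / d1) ** (-lam).
    factor = (d0 / d1) ** (-lam)
    return [v * factor for v in f_d0]
```

For example, halving the scale (d1 = 2·d0) with λ = 1 doubles each feature value under this model.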
Step S7: using the established framework and the positive and negative samples of the training set, train the model, and classify the feature vectors of step S6 with a boosting decision tree.
The positive and negative samples, at a ratio of 1:3, are fed into the model for training; the classifier produces confidence scores for candidate regions, and intersection over union (IoU) is used as the criterion for deciding whether a detection is a true target.
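The IoU criterion compares a candidate box against a ground-truth box. A standard self-contained implementation (the box format `(x1, y1, x2, y2)` is an assumption; the patent does not specify one):

```python
def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

A candidate is typically counted as a true detection when its IoU with a ground-truth ship box exceeds a fixed cutoff such as 0.5.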
Step S8: using the established framework and the positive and negative samples of the test set, test the candidate regions, completing ship target extraction and recognition in the optical remote sensing image.
In this embodiment, the suspected ship-target region fragments extracted in step S4 are set to 56×56 and fed to the model for testing, which decides whether each is a real target or a false alarm; real targets are retained, false alarms are discarded, and the final results are marked in the input image.
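Extracting a fixed 56×56 fragment around each suspected region requires padding when the region touches the image border. A minimal sketch of such a padded crop (function name and center-point interface are illustrative assumptions):

```python
def crop_fragment(image, cx, cy, size=56, fill=0):
    # Crop a size×size fragment centered at (cx, cy) from a 2-D image
    # (list of lists), padding out-of-bounds pixels with `fill`.
    h, w = len(image), len(image[0])
    half = size // 2
    frag = []
    for i in range(cy - half, cy - half + size):
        row = []
        for j in range(cx - half, cx - half + size):
            row.append(image[i][j] if 0 <= i < h and 0 <= j < w else fill)
        frag.append(row)
    return frag
```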
This application comprises visual-saliency segmentation/extraction and fine discrimination by a supervised system. In the visual-saliency stage, covariance features of surface ship targets are constructed and second-order statistic weights are designed using homologous similarity; the single saliency map obtained after the optimal similarity measure weakens the background well and highlights the target. A multi-scale fusion strategy and an adaptive threshold-segmentation module then enable efficient unsupervised extraction of surface-ship regions under multiple conditions such as varying target scales, cluttered cloud and haze backgrounds, sea clutter, and wake-wave interference. In the supervised fine-discrimination stage, the rotation-invariant CF-Fourier HOG feature in the spatial-frequency domain is designed; within the framework of aggregated channel features and fast feature-pyramid acceleration, ship targets are identified and false alarms are rejected.
Although the present invention has been described with reference to the current preferred embodiments, those skilled in the art should understand that the above preferred embodiments are only used to illustrate the present invention, not to limit its protection scope. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210463979.0A CN114821358A (en) | 2022-04-29 | 2022-04-29 | Optical remote sensing image marine ship target extraction and identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114821358A true CN114821358A (en) | 2022-07-29 |
Family
ID=82509200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210463979.0A Pending CN114821358A (en) | 2022-04-29 | 2022-04-29 | Optical remote sensing image marine ship target extraction and identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114821358A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102044072A (en) * | 2010-11-29 | 2011-05-04 | 北京航空航天大学 | SAR Image Fusion Processing Method Based on Statistical Model |
CN102096824A (en) * | 2011-02-18 | 2011-06-15 | 复旦大学 | Multi-spectral image ship detection method based on selective visual attention mechanism |
WO2016101279A1 (en) * | 2014-12-26 | 2016-06-30 | 中国海洋大学 | Quick detecting method for synthetic aperture radar image of ship target |
CN109427055A (en) * | 2017-09-04 | 2019-03-05 | 长春长光精密仪器集团有限公司 | The remote sensing images surface vessel detection method of view-based access control model attention mechanism and comentropy |
Non-Patent Citations (2)
Title |
---|
YU Donghang; ZHANG Baoming; GUO Haitao; ZHAO Chuan; XU Junfeng: "Ship detection in remote sensing images combining saliency features and convolutional neural networks", Journal of Image and Graphics, no. 12, 31 December 2018 (2018-12-31) *
XU Fang; LIU Jinghong; ZENG Dongdong; WANG Xuan: "Unsupervised sea-surface ship detection and recognition based on visual saliency", Optics and Precision Engineering, no. 05, 15 May 2017 (2017-05-15) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116109936A (en) * | 2022-10-21 | 2023-05-12 | 中国科学院长春光学精密机械与物理研究所 | Target detection and identification method based on optical remote sensing |
CN116109936B (en) * | 2022-10-21 | 2023-08-29 | 中国科学院长春光学精密机械与物理研究所 | Target detection and identification method based on optical remote sensing |
CN115424249A (en) * | 2022-11-03 | 2022-12-02 | 中国工程物理研究院电子工程研究所 | Self-adaptive detection method for small and weak targets in air under complex background |
CN115424249B (en) * | 2022-11-03 | 2023-01-31 | 中国工程物理研究院电子工程研究所 | Self-adaptive detection method for small and weak targets in air under complex background |
CN117611998A (en) * | 2023-11-22 | 2024-02-27 | 盐城工学院 | An optical remote sensing image target detection method based on improved YOLOv7 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805904B (en) | A moving ship detection and tracking method based on satellite image sequence | |
CN111160120A (en) | Fast R-CNN article detection method based on transfer learning | |
CN114821358A (en) | Optical remote sensing image marine ship target extraction and identification method | |
CN106384344A (en) | Sea-surface ship object detecting and extracting method of optical remote sensing image | |
CN103400156A (en) | CFAR (Constant False Alarm Rate) and sparse representation-based high-resolution SAR (Synthetic Aperture Radar) image ship detection method | |
CN102214298A (en) | Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism | |
Xue et al. | Rethinking automatic ship wake detection: state-of-the-art CNN-based wake detection via optical images | |
CN110647802A (en) | Deep learning-based ship target detection method in remote sensing images | |
CN105389799B (en) | SAR image object detection method based on sketch map and low-rank decomposition | |
CN114596500A (en) | Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus | |
CN105512622B (en) | A kind of visible remote sensing image sea land dividing method based on figure segmentation and supervised learning | |
CN116703895B (en) | Small sample 3D visual detection method and system based on generation countermeasure network | |
CN108681691A (en) | A kind of marine ships and light boats rapid detection method based on unmanned water surface ship | |
He et al. | Ship detection without sea-land segmentation for large-scale high-resolution optical satellite images | |
CN108573280B (en) | Method for unmanned ship to autonomously pass through bridge | |
Zhang et al. | Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image | |
CN109063669B (en) | Bridge area ship navigation situation analysis method and device based on image recognition | |
CN116188944A (en) | A Method of Infrared Weak Small Target Detection Based on Swin-Transformer and Multi-scale Feature Fusion | |
CN117079097A (en) | Sea surface target identification method based on visual saliency | |
CN117789030A (en) | A method and system for detecting small ship targets in remote sensing images | |
CN110298855A (en) | A kind of sea horizon detection method based on gauss hybrid models and texture analysis | |
CN116109936B (en) | Target detection and identification method based on optical remote sensing | |
CN112949380A (en) | Intelligent underwater target identification system based on laser radar point cloud data | |
CN108765439A (en) | A kind of sea horizon detection method based on unmanned water surface ship | |
Wang et al. | Scattering information fusion network for oriented ship detection in SAR images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||