CN102867196B - Ship detection method for complex sea-surface remote sensing images based on Gist feature learning - Google Patents
- Publication number
- CN102867196B CN102867196B CN201210339791.1A CN201210339791A CN102867196B CN 102867196 B CN102867196 B CN 102867196B CN 201210339791 A CN201210339791 A CN 201210339791A CN 102867196 B CN102867196 B CN 102867196B
- Authority
- CN
- China
- Prior art keywords
- feature
- image slice
- remote sensing
- brightness
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Analysis (AREA)
Abstract
A ship detection method for complex sea-surface remote sensing images based on Gist feature learning, comprising the following steps: step 1, collecting complex sea-surface remote sensing image data of different time phases, different sensors and different scales; step 2, performing block preprocessing on the complex sea-surface remote sensing images to obtain sample image slices and detection image slices; step 3, extracting the salient features and Gist features of the sample image slices and the detection image slices; step 4, training on the sample image slices according to the salient features and Gist features obtained in step 3 to obtain a training model; step 5, using an SVM classifier with the training model obtained in step 4 to judge whether a detection image slice contains a ship; step 6, locating individual ships in the detection image slice based on an improved Itti visual attention model. The present invention minimizes the false alarm rate while ensuring that no ship is missed, can effectively handle complex sea-surface remote sensing images containing interference such as sea clutter, clouds and fog, and has low computational complexity and strong specificity.
Description
Technical Field
The present invention relates to the technical field of remote sensing image processing, and in particular to a ship detection method for complex sea-surface remote sensing images based on Gist biological visual feature learning.
Background Art
Ship target detection is a traditional task for coastal nations worldwide, with wide applications in ship search and rescue, fishing vessel monitoring, illegal immigration control, territorial defense, anti-drug operations, and the monitoring and management of illegal oil dumping by ships. With the development of remote sensing imaging technology, ship target detection from remote sensing images has become feasible; the research objects include the detection of both the ships themselves and their wakes. Under complex marine conditions, satellite remote sensing images exhibit disordered "fish-scale" sun glint, large specular reflection areas, and richly textured, irregularly moving waves, so small and medium-sized ship targets may be hidden in complex background clutter, degrading ship target recognition. Sea state (wind and waves), weather (clouds and fog) and water color make the sea-land characteristics in remote sensing images unstable, and interference such as surface waves, floating objects and ship wakes is abundant, which makes ship detection difficult.
Existing ship target detection mainly uses the following algorithms: global threshold segmentation, local threshold segmentation, the optimal-entropy automatic thresholding method, distribution-model-based algorithms, fractal-model-based algorithms, feature-domain target detection algorithms, and so on. They generally suffer from excessively high false alarm rates and poor generality. The global threshold algorithm cannot adapt the threshold to local variations within the image, so its results are susceptible to local changes and introduce many false alarms and missed detections; for wide-swath images, strong grey-level variation of the sea background caused by the imaging mechanism also leads to false alarms and missed detections. This class of method uses only the grey-level statistics of the target, ignores the spatial structure of the target, and the relation between histogram shape and image content is uncertain. The local threshold algorithm easily produces large numbers of false alarms when speckle noise is heavy or the sea is rough; moreover, because the statistical parameters of the background region inside the window must be computed repeatedly, the computation is heavy and the processing is too slow to meet real-time or near-real-time requirements in practice. The optimal-entropy automatic thresholding method selects the threshold using only grey-level statistics, without considering the spatial structure of the target, and the relation between the histogram distribution and the image content is likewise uncertain. In the traditional KSW algorithm, the criterion function is simply defined as the sum of the target grey-level entropy and the background grey-level entropy, with the two weighted equally; this ignores the different proportions of background and target in the image as well as their different grey-level ranges, so good detection results are rarely obtained when the image contains strong sea clutter. Distribution-model-based algorithms first require an assumption about the background clutter distribution, which demands prior knowledge, yet in practice the background clutter does not strictly follow any particular distribution; in addition, statistics must be computed for every pixel in the image, so the computational cost is large and grows with the sliding-window size. Fractal-model-based algorithms assume that natural scenes and ship targets differ in fractal dimension and detect ships from this difference, but in real images, affected by background complexity, random noise and imaging quality, a single scale or a constant fractal dimension can hardly separate natural scenes from man-made targets. Feature-domain target detection algorithms are strongly affected by noise when the grey-level distribution of the background is complex, which amplifies the influence of noise on the feature map and causes mis-segmentation; furthermore, the feature transformation affects the contour of the target itself, so shape information is lost when the target is segmented with this method.
It can be seen that these algorithms are still constrained by many conditions, such as interference from the image background and the influence of weather and illumination changes on the target. In particular, for medium- and low-resolution remote sensing data, ships appear as small targets in the image, and the probability of missed and false alarms during detection is high. Moreover, visible-light images are usually disturbed by clouds, oil slicks, sea waves and so on, so building a background model is difficult; the contrast between target and background is inconsistent, the grey levels of ships are unevenly distributed and differ little from the sea background, so ships are hard to segment, especially dark (black-polarity) ships.
Summary of the Invention
In view of the above problems, the present invention proposes a ship detection method for complex sea-surface remote sensing images based on Gist feature learning.
The technical solution of the present invention is a ship detection method for complex sea-surface remote sensing images based on Gist feature learning, comprising the following steps:
Step 1: collect complex sea-surface remote sensing image data.
Step 2: perform block preprocessing on the complex sea-surface remote sensing images to obtain sample image slices and detection image slices.
Step 3: extract the salient features and Gist features of the sample image slices and detection image slices.
Step 4: train on the sample image slices according to the salient features and Gist features obtained in step 3 to obtain a training model.
Step 5: according to the training model obtained in step 4, use an SVM classifier to judge whether a detection image slice contains a ship; if so, go to step 6, otherwise end the detection of that detection image slice.
Step 6: locate individual ships in the detection image slice based on the Itti visual attention model to obtain the ship targets.
Moreover, in step 3 the salient feature extraction for any sample image slice or detection image slice is implemented as follows.
The image slice is decomposed into three feature channels: color, brightness and orientation. The color and brightness channels are each represented by Gaussian pyramids, and the feature maps of the color channel and the brightness channel are generated by the center-surround difference operation; the orientation channel is obtained by Gabor filtering, giving orientation pyramids at 0°, 45°, 90° and 135°, and the feature map for each orientation is generated by the center-surround difference operation.
The feature sub-maps of each feature map are then fused by local nonlinear fusion into a color saliency map, a brightness saliency map and an orientation saliency map. Finally, the mean, the standard deviation, the local maxima and the distances between local maxima of each saliency map are used to represent the salient features of the image slice.
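As an illustrative sketch only, and not part of the claims, the Gaussian pyramid, Gabor filtering and center-surround difference named above could be realized along the following lines in Python with OpenCV. The kernel size, the Gabor wavelength and the six (center, surround) scale pairs are assumptions of this sketch, not values fixed by the patent.

```python
import cv2
import numpy as np

def gaussian_pyramid(channel: np.ndarray, levels: int = 9):
    """Dyadic Gaussian pyramid of an intensity or color-opponency channel."""
    pyr = [channel.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def orientation_pyramid(channel: np.ndarray, theta_deg: float, levels: int = 9):
    """Gabor-filtered pyramid for one orientation (0, 45, 90 or 135 degrees).
    Kernel size and wavelength are illustrative values (assumptions)."""
    g = cv2.getGaborKernel((9, 9), sigma=2.0, theta=np.deg2rad(theta_deg),
                           lambd=4.0, gamma=0.5)
    return [cv2.filter2D(level, -1, g) for level in gaussian_pyramid(channel, levels)]

def center_surround(pyr, pairs=((2, 5), (2, 6), (3, 6), (3, 7), (4, 7), (4, 8))):
    """Center-surround difference: |center level - surround level upsampled to
    the center resolution|.  The six (center, surround) pairs are the usual
    Itti-style choice and yield the 6 sub-maps per channel mentioned later."""
    maps = []
    for c, s in pairs:
        surround = cv2.resize(pyr[s], (pyr[c].shape[1], pyr[c].shape[0]))
        maps.append(np.abs(pyr[c] - surround))
    return maps
```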
Moreover, in step 3 the Gist feature extraction for any sample image slice or detection image slice is implemented as follows.
Each image slice is decomposed into three feature channels: color, brightness and orientation. The color and brightness channels are each represented by 9-level Gaussian pyramids, and the feature maps of the color channel and the brightness channel are generated by the center-surround difference operation; the orientation channel is obtained by Gabor filtering at 0°, 45°, 90° and 135°, giving the feature map of each orientation channel.
Each feature sub-map of each feature map is divided into a 4×4 grid of sub-blocks. The means of the 16 sub-blocks are computed, together with the means of the four merged 2×2 quadrants (upper-left, upper-right, lower-left and lower-right) and the mean of the whole feature sub-map, and the resulting values represent the Gist feature vector.
Moreover, in step 6 the search for individual ships in a detection image slice based on the improved Itti visual attention model comprises the following steps.
Step 6.1: extract primary visual features from the detection image slice, including brightness, color, orientation and texture features.
Step 6.2: saliency map generation. The brightness, color, orientation and texture features obtained in step 6.1 are passed through the center-surround difference operation to generate the feature sub-maps, which are then fused by local nonlinear fusion into a brightness saliency map, a color saliency map, an orientation saliency map and a texture saliency map; these are finally fused by local nonlinear fusion into an overall saliency map. Before the sub-maps are fused into the brightness, color, orientation and texture saliency maps, every pixel value of the corresponding feature sub-maps is squared.
Step 6.3: ship target detection. Individual ships in the detection image slice are detected from the saliency map obtained in step 6.2 by inhibition of return, yielding the ship targets.
Moreover, when collecting complex sea-surface remote sensing image data, remote sensing images from different time phases, different sensors and different scales are selected; after block preprocessing of the complex sea-surface remote sensing images, positive sample images containing ships and negative sample images containing no ships are labeled as the sample image slices.
The method proposed by the present invention first divides the image into blocks and extracts the salient features and Gist features of every sub-image block, trains on the sample data according to the extracted feature vectors, then uses an SVM classifier to judge whether a sub-image block contains a ship, and finally locates individual ships within the sub-image block based on the improved Itti visual attention model. The present invention minimizes the false alarm rate while ensuring that no ship is missed; it can effectively handle complex sea-surface remote sensing images containing interference such as sea clutter, clouds and fog, with low computational complexity and strong specificity; it is applicable to complex sea-surface remote sensing image data of different time phases and different resolutions and therefore has good generality; and it can rapidly process massive wide-swath remote sensing image data to detect ships.
Brief Description of the Drawings
Fig. 1 is a flowchart of the present invention.
Detailed Description of the Embodiments
The technical solution of the present invention is described in detail below with reference to the drawings and embodiments.
The ship detection method for complex sea-surface remote sensing images based on Gist biological visual feature learning of the present invention uses the salient features and Gist features of ships to train an SVM classifier (support vector machine classifier) on the sub-image blocks of a very large image, obtaining a prediction model in which the support vectors represent the typical characteristics of ships; individual ships are then detected in the sub-image blocks suspected to contain ships according to the improved Itti visual attention model. For the Gist feature see the existing literature "Driving me Around the Bend: Learning to Drive from Visual Gist"; the Gist feature can be computed from biological visual features. The embodiment flow can be run automatically using computer software technology, as shown in Fig. 1, and specifically comprises the following steps.
Step 1: collect complex sea-surface remote sensing image data.
In specific implementation, the training data should cover as many cases as possible to cope with the subsequent detection steps. The embodiment collects complex sea-surface remote sensing image data of different time phases, different sensors and different scales.
Step 2: perform block preprocessing on the complex sea-surface remote sensing images, produce the sample data, and obtain the sample image slices and detection image slices.
In the embodiment, blocking the remote sensing image means dividing the original very large image into image slices of equal size N×N, i.e. each slice contains N×N pixels.
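For illustration only, a minimal Python sketch of this tiling step might look as follows; the function name, the value of the tile size N passed in, and the decision to discard incomplete border tiles are assumptions not specified in the patent.

```python
import numpy as np

def tile_image(image: np.ndarray, n: int):
    """Split a large H x W (x C) image into non-overlapping n x n slices.

    Border regions smaller than n x n are discarded here; padding them
    instead would be an equally valid choice (the patent does not specify).
    """
    h, w = image.shape[:2]
    slices = []
    for top in range(0, h - n + 1, n):
        for left in range(0, w - n + 1, n):
            slices.append(((top, left), image[top:top + n, left:left + n]))
    return slices  # list of ((row, col) offset, n x n slice)

# usage: slices = tile_image(scene, n=256)
```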
After the collected complex sea-surface remote sensing images of different time phases, sensors and scales are divided into blocks, training samples can be selected from the resulting image slices: S positive sample image slices (containing ships) and S negative sample image slices (containing no ships) are labeled manually and together form the training set of sample image slices. Alternatively, pre-labeled sample image slices can be used directly. All other image slices can serve as detection image slices to be classified. Usually the whole complex sea-surface remote sensing image is taken as the image to be detected; after it is divided into blocks, all resulting sub-image blocks are used as the detection image slices to be classified.
The specific values of N and S can be set by those skilled in the art according to the situation.
Step 3: extract the salient features and Gist features of the sample image slices and detection image slices.
In the embodiment, the salient feature extraction for any sample image slice or detection image slice is as follows.
The image slice is decomposed into three feature channels: color, brightness and orientation. The color and brightness channels are each represented by 9-level Gaussian pyramids, and the feature maps of the color channel and the brightness channel are generated by the center-surround difference operation; the orientation channel is obtained by Gabor filtering, giving 9-level orientation pyramids at 0°, 45°, 90° and 135°, and the feature map of each orientation is generated by the center-surround difference operation. The feature map of the color channel consists of 12 color feature sub-maps (two groups of 6, one for red-green opponency and one for yellow-blue opponency), the feature map of the brightness channel consists of 6 brightness feature sub-maps (the 9-level pyramid yields 6 maps after the center-surround difference operation), and the feature map of each orientation consists of 6 orientation feature sub-maps (likewise 6 per orientation after the center-surround difference operation), i.e. 24 sub-maps over the 4 orientations; in total there are 42 feature sub-maps (42 = 12 + 6 + 4×6). The feature maps are then fused by local nonlinear fusion into color, brightness and orientation saliency maps: the 12 color sub-maps are fused into 1 color saliency map, the 6 brightness sub-maps into 1 brightness saliency map, and the 24 orientation sub-maps into 1 orientation saliency map, giving 3 saliency maps. Finally, the mean, standard deviation, local maxima and distances between local maxima of each saliency map are taken, and the 12-dimensional vector formed by these 4 values of each of the 3 saliency maps represents the salient features of the image slice. The computation of the mean, standard deviation, local maxima and distances between local maxima of an image, the center-surround difference operation and local nonlinear fusion are prior art; see "L. Itti, C. Koch, E. Niebur, A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, pp. 1254-1259, Nov 1998" and "L. Itti, C. Koch, Comparison of Feature Combination Strategies for Saliency-Based Visual Attention Systems, In: Proc. SPIE Human Vision and Electronic Imaging IV (HVEI'99), San Jose, CA, Vol. 3644, pp. 473-482, Bellingham, WA: SPIE Press, Jan 1999."
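As a hedged sketch of how the 12-dimensional salient feature could be assembled from the three saliency maps, the following Python code computes the four statistics per map. The definition of a local maximum (a 3×3 maximum filter above the map mean) and the use of the mean pairwise distance between maxima are assumptions, since the text only names the kinds of statistics.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def saliency_statistics(saliency_map: np.ndarray, size: int = 3):
    """Return [mean, std, mean local-maximum value, mean distance between
    local maxima] for one saliency map (neighborhood size is an assumption)."""
    local_max = (saliency_map == maximum_filter(saliency_map, size=size)) & \
                (saliency_map > saliency_map.mean())
    ys, xs = np.nonzero(local_max)
    if len(xs) > 1:
        pts = np.stack([ys, xs], axis=1).astype(float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        mean_dist = d[np.triu_indices(len(pts), k=1)].mean()
        max_val = float(saliency_map[ys, xs].mean())
    else:
        mean_dist, max_val = 0.0, float(saliency_map.max())
    return [float(saliency_map.mean()), float(saliency_map.std()), max_val, mean_dist]

def salient_feature_vector(color_sal, brightness_sal, orientation_sal):
    """Concatenate the 4 statistics of the 3 saliency maps -> 12-dim feature."""
    feats = []
    for s in (color_sal, brightness_sal, orientation_sal):
        feats.extend(saliency_statistics(s))
    return np.asarray(feats)  # shape (12,)
```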
In the embodiment, the Gist feature extraction for any sample image slice or detection image slice is as follows.
The image slice is decomposed into three feature channels: color, brightness and orientation. The color and brightness channels are each represented by 9-level Gaussian pyramids, and the feature maps of the color channel and the brightness channel are generated by the center-surround difference operation; the orientation channel is obtained by Gabor filtering, giving 4-level orientation pyramids at 0°, 45°, 90° and 135°, and the images at the 4 scales of each orientation pyramid (i.e. the 4 levels obtained by downsampling the original image by factors of 2^0, 2^1, 2^2 and 2^3) are taken as the orientation feature sub-maps of that orientation, giving the feature maps of the 4 orientations. The feature map of the color channel consists of 12 color feature sub-maps, the feature map of the brightness channel consists of 6 brightness feature sub-maps, and the feature map of each orientation consists of 4 orientation feature sub-maps; in total there are 34 feature sub-maps (34 = 6 + 12 + 4×4). Each feature sub-map is divided into a 4×4 grid of sub-blocks and the following means are computed: the means of the 16 sub-blocks, the means of the four merged 2×2 quadrants (upper-left, upper-right, lower-left and lower-right, each group of 4 sub-blocks treated as a whole when computing its mean), and the mean of the entire feature sub-map. This yields a 21-dimensional vector per sub-map and finally a 34×21 = 714-dimensional Gist feature vector.
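A possible Python sketch of the 21 values computed per feature sub-map is given below; the handling of sub-maps whose side length is not divisible by 4 is an assumption of this sketch.

```python
import numpy as np

def gist_grid_statistics(sub_map: np.ndarray):
    """Compute the 21 values described above for one feature sub-map:
    16 cell means of a 4x4 grid, 4 quadrant means (each quadrant is a
    merged 2x2 block of cells), and the global mean."""
    h, w = sub_map.shape
    ch, cw = h // 4, w // 4
    cells = np.array([[sub_map[i*ch:(i+1)*ch, j*cw:(j+1)*cw].mean()
                       for j in range(4)] for i in range(4)])
    quadrants = [sub_map[:2*ch, :2*cw].mean(),        # upper-left
                 sub_map[:2*ch, 2*cw:4*cw].mean(),    # upper-right
                 sub_map[2*ch:4*ch, :2*cw].mean(),    # lower-left
                 sub_map[2*ch:4*ch, 2*cw:4*cw].mean()]  # lower-right
    return np.concatenate([cells.ravel(), quadrants, [sub_map.mean()]])

# 34 sub-maps x 21 values = 714-dimensional Gist vector:
# gist = np.concatenate([gist_grid_statistics(m) for m in sub_maps])
```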
In specific implementation, the number of Gaussian pyramid levels can be set by those skilled in the art according to the situation; the more levels there are, the higher the dimension of the extracted features.
Step 4: train on the sample image slices according to the salient features and Gist features obtained in step 3 to obtain the training model.
Training is carried out according to the features (salient features and Gist features) of each sample image slice and according to whether it is a positive or a negative sample image slice. The training can be implemented with an existing SVM trainer; the training model is obtained after training on all sample image slices.
Step 5: according to the training model obtained in step 4, use the SVM classifier to judge whether a detection image slice contains a ship. For any detection image slice, if the result is that it contains a ship, go to step 6; if not, end the processing of that detection image slice.
Both the training model and the prediction results are obtained with existing SVM methods, for example using the existing RBF kernel (radial basis function) as the SVM kernel, which is not described in detail in the present invention.
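Purely as an illustration of this step, a scikit-learn based sketch of RBF-kernel SVM training and per-slice prediction is shown below; the library choice, the feature scaling and the hyperparameters C and gamma are assumptions, not values given in the patent.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X_train: per-slice features = 12-dim salient features + 714-dim Gist features
# y_train: 1 for positive (ship) slices, 0 for negative (no-ship) slices
def train_ship_classifier(X_train: np.ndarray, y_train: np.ndarray):
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", C=1.0, gamma="scale"))
    model.fit(X_train, y_train)
    return model

def slice_contains_ship(model, feature_vector: np.ndarray) -> bool:
    return bool(model.predict(feature_vector.reshape(1, -1))[0] == 1)
```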
Step 6: locate individual ships in the detection image slice based on the Itti visual attention model and obtain the ship targets.
The Itti visual attention model itself is prior art. To improve detection accuracy, the embodiment further improves the Itti visual attention model and locates individual ships in the detection image slice based on the improved model. The improved Itti visual attention model adds texture features to the original Itti model, extending its feature range, and applies a square-stretch operation before the feature sub-maps of different scales are fused, enlarging the difference between salient and non-salient regions and highlighting the ship targets.
The steps of the embodiment are as follows.
Step 6.1: extract primary visual features from the detection image slice, including brightness, color, orientation and texture features. The original Itti visual attention model extracts brightness, color and orientation as primary visual features; the embodiment additionally extracts texture features, which can be obtained with the existing discrete moment transform technique, see "V. Di Gesù, C. Valenti, L. Strinati. Local operators to detect regions of interest [J]. Pattern Recognition Letters, 1997, 18(11-13): 1077-1081."
The prior-art discrete moment transform (DMT) has the form
DMT_{p,q}(i, j) = Σ_{r=-k..k} Σ_{s=-k..k} r^p · s^q · g(i-r, j-s)
where i, j are the row and column indices in the detection image slice; g(i-r, j-s) is the original pixel value at row i-r, column j-s of the detection image slice; k is a preset window-size parameter (e.g. k = 1 gives a 3×3 window); r, s are the loop variables within the window; and p, q are exponents. In specific implementation, those skilled in the art can set the values of k, r and s as needed. The embodiment takes (p=0, q=1), (p=1, q=0) and (p=1, q=1), i.e. the three DMT texture features DMT_{0,1}, DMT_{1,0} and DMT_{1,1}.
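A small Python sketch of the DMT texture maps, written as a convolution with the kernel r^p·s^q, is given below; the reflective border handling is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import convolve

def dmt_map(image: np.ndarray, p: int, q: int, k: int = 1) -> np.ndarray:
    """Discrete moment transform DMT_{p,q} over a (2k+1) x (2k+1) window,
    implemented as a convolution with the kernel r^p * s^q."""
    offsets = np.arange(-k, k + 1)
    kernel = np.outer(offsets ** p, offsets ** q).astype(float)
    # convolve() flips the kernel, which matches the g(i - r, j - s) indexing
    return convolve(image.astype(float), kernel, mode="reflect")

# the three texture feature maps used in the embodiment
def dmt_texture_features(image: np.ndarray, k: int = 1):
    return [dmt_map(image, 0, 1, k), dmt_map(image, 1, 0, k), dmt_map(image, 1, 1, k)]
```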
Step 6.2: saliency map generation. The brightness, color, orientation and texture features obtained in step 6.1 are passed through the center-surround difference operation to generate the feature sub-maps, which are then fused by local nonlinear fusion into a brightness saliency map, a color saliency map, an orientation saliency map and a texture saliency map; these are finally fused by local nonlinear fusion into an overall saliency map. Before the sub-maps are fused into the brightness, color, orientation and texture saliency maps, every pixel value of the corresponding feature sub-maps is squared. The squaring stretches the range of pixel values in the saliency maps, making the layering of salient regions more pronounced and highlighting the ship targets.
In the embodiment, the color and brightness channels are each represented by 9-level Gaussian pyramids, and the feature maps of the color channel and the brightness channel are generated by the center-surround difference operation; the orientation channel is obtained by Gabor filtering, giving 9-level orientation pyramids at 0°, 45°, 90° and 135°, and the feature map of each orientation is generated by the center-surround difference operation. The feature map of the color channel consists of 12 color feature sub-maps, the feature map of the brightness channel consists of 6 brightness feature sub-maps, and the feature map of each orientation consists of 6 orientation feature sub-maps, i.e. 24 sub-maps over the 4 orientations, giving 42 feature sub-maps in total (42 = 12 + 6 + 4×6). All feature sub-maps of different scales are then taken as input and fused by local nonlinear fusion into color, brightness and orientation saliency maps: the 12 color sub-maps are fused into 1 color saliency map, the 6 brightness sub-maps into 1 brightness saliency map, and the 24 orientation sub-maps into 1 orientation saliency map, giving 3 saliency maps.
The feature map of the texture channel obtained with the discrete moment transform consists of 3 texture feature sub-maps, which are fused into 1 texture saliency map; together with the previous 3 feature saliency maps this gives 4 saliency maps, which are then fused into 1 overall saliency map combining all features.
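The square-stretch step that precedes this fusion could be sketched as follows in Python with OpenCV; the [0, 1] normalization and the plain summation stand in for the local nonlinear fusion of the Itti model and are assumptions of this sketch.

```python
import numpy as np
import cv2

def square_stretch_and_fuse(sub_maps, out_shape):
    """Fuse feature sub-maps of different scales into one channel saliency
    map: resize to a common size, normalize to [0, 1], square each map to
    stretch the contrast between salient and non-salient regions, then sum."""
    fused = np.zeros(out_shape, dtype=np.float32)
    for m in sub_maps:
        r = cv2.resize(m.astype(np.float32), (out_shape[1], out_shape[0]))
        rng = r.max() - r.min()
        if rng > 0:
            r = (r - r.min()) / rng
        fused += r ** 2          # square stretch before fusion
    return fused / max(len(sub_maps), 1)
```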
Step 6.3: ship target detection. Individual ship targets in the detection image slice are detected from the saliency map obtained in step 6.2 by inhibition of return. For the existing inhibition-of-return technique see "L. Itti, C. Koch, E. Niebur, A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, pp. 1254-1259, Nov 1998", which is not described in detail in the present invention.
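As a hedged sketch of the inhibition-of-return selection on the overall saliency map, the following Python code repeatedly takes the strongest peak and suppresses a disk around it; the suppression radius, the maximum number of fixations and the stopping criterion are assumptions of this sketch.

```python
import numpy as np

def detect_targets(saliency: np.ndarray, n_targets: int = 5,
                   radius: int = 15, min_ratio: float = 0.5):
    """Pick ship candidates by repeatedly taking the most salient location
    and suppressing a disk around it (inhibition of return)."""
    s = saliency.astype(float).copy()
    first_peak = s.max()
    targets = []
    if first_peak <= 0:
        return targets
    yy, xx = np.mgrid[:s.shape[0], :s.shape[1]]
    for _ in range(n_targets):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        if s[y, x] < min_ratio * first_peak:   # remaining peaks too weak
            break
        targets.append((y, x))
        s[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 0  # inhibit region
    return targets
```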
The specific embodiments described herein are merely illustrative of the spirit of the present invention. Those skilled in the art to which the present invention pertains may make various modifications or additions to the described embodiments or replace them in similar ways, without departing from the spirit of the present invention or going beyond the scope defined by the appended claims.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210339791.1A CN102867196B (en) | 2012-09-13 | 2012-09-13 | Based on the complicated sea remote sensing image Ship Detection of Gist feature learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210339791.1A CN102867196B (en) | 2012-09-13 | 2012-09-13 | Based on the complicated sea remote sensing image Ship Detection of Gist feature learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102867196A CN102867196A (en) | 2013-01-09 |
CN102867196B true CN102867196B (en) | 2015-10-21 |
Family
ID=47446060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210339791.1A Active CN102867196B (en) | 2012-09-13 | 2012-09-13 | Based on the complicated sea remote sensing image Ship Detection of Gist feature learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102867196B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103177608A (en) * | 2013-03-01 | 2013-06-26 | 上海海事大学 | Offshore suspicious ship and ship oil contamination discovering system |
CN103530628B (en) * | 2013-10-29 | 2016-08-17 | 上海市城市建设设计研究总院 | High-resolution remote sensing image ortho-rectification method based on floating control point |
CN105335765A (en) * | 2015-10-20 | 2016-02-17 | 北京航天自动控制研究所 | Method for detecting characteristic region matched with SAR |
CN105243154B (en) * | 2015-10-27 | 2018-08-21 | 武汉大学 | Remote sensing image retrieval method based on notable point feature and sparse own coding and system |
CN106651937B (en) * | 2016-10-19 | 2019-10-18 | 成都电科智达科技有限公司 | A kind of small drone object detection method based on super-pixel and scene prediction |
CN106778495A (en) * | 2016-11-21 | 2017-05-31 | 北京航天宏图信息技术股份有限公司 | Ship Detection in remote sensing image under complicated sea background |
CN107704865A (en) * | 2017-05-09 | 2018-02-16 | 北京航空航天大学 | Fleet Targets Detection based on the extraction of structure forest edge candidate region |
CN107563303B (en) * | 2017-08-09 | 2020-06-09 | 中国科学院大学 | Robust ship target detection method based on deep learning |
CN109427055B (en) * | 2017-09-04 | 2022-12-20 | 长春长光精密仪器集团有限公司 | Remote sensing image sea surface ship detection method based on visual attention mechanism and information entropy |
CN109558771B (en) * | 2017-09-26 | 2023-06-09 | 中电科海洋信息技术研究院有限公司 | Behavior state identification method, device and equipment of marine ship and storage medium |
CN107977621A (en) * | 2017-11-29 | 2018-05-01 | 淮海工学院 | Shipwreck identification model construction method, device, electronic equipment and storage medium |
CN108052585B (en) * | 2017-12-11 | 2021-11-23 | 江苏丰华联合科技有限公司 | Method for judging dynamic target in complex environment |
CN108133184B (en) * | 2017-12-20 | 2020-04-28 | 中国水产科学研究院渔业机械仪器研究所 | Fish identification and vaccine injection method based on fractal theory and BP algorithm |
CN109800716A (en) * | 2019-01-22 | 2019-05-24 | 华中科技大学 | One kind being based on the pyramidal Oceanic remote sensing image ship detecting method of feature |
CN110008833B (en) * | 2019-02-27 | 2021-03-26 | 中国科学院半导体研究所 | Target ship detection method based on optical remote sensing image |
CN109871823B (en) * | 2019-03-11 | 2021-08-31 | 中国电子科技集团公司第五十四研究所 | Satellite image ship detection method combining rotating frame and context information |
CN110084181B (en) * | 2019-04-24 | 2021-04-20 | 哈尔滨工业大学 | A method for ship target detection in remote sensing images based on sparse MobileNetV2 network |
CN111738236B (en) * | 2020-08-14 | 2020-11-20 | 之江实验室 | An adaptive level image segmentation and recognition method, device and system |
CN112766083B (en) * | 2020-12-30 | 2023-10-27 | 中南民族大学 | Remote sensing scene classification method and system based on multi-scale feature fusion |
CN113256720B (en) * | 2021-06-03 | 2021-09-24 | 浙江大学 | Method for simultaneously detecting SAR image ship and trail thereof |
CN116310734B (en) * | 2023-04-25 | 2023-12-15 | 慧铁科技股份有限公司 | Fault detection method and system for railway wagon running part based on deep learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102214298A (en) * | 2011-06-20 | 2011-10-12 | 复旦大学 | Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism |
CN102663348A (en) * | 2012-03-21 | 2012-09-12 | 中国人民解放军国防科学技术大学 | Marine ship detection method in optical remote sensing image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8243991B2 (en) * | 2008-06-17 | 2012-08-14 | Sri International | Method and apparatus for detecting targets through temporal scene changes |
- 2012-09-13: CN CN201210339791.1A patent/CN102867196B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102214298A (en) * | 2011-06-20 | 2011-10-12 | 复旦大学 | Method for detecting and identifying airport target by using remote sensing image based on selective visual attention mechanism |
CN102663348A (en) * | 2012-03-21 | 2012-09-12 | 中国人民解放军国防科学技术大学 | Marine ship detection method in optical remote sensing image |
Also Published As
Publication number | Publication date |
---|---|
CN102867196A (en) | 2013-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102867196B (en) | Ship detection method for complex sea-surface remote sensing images based on Gist feature learning | |
Cheng et al. | FusionNet: Edge aware deep convolutional networks for semantic segmentation of remote sensing harbor images | |
Chen et al. | A deep neural network based on an attention mechanism for SAR ship detection in multiscale and complex scenarios | |
CN109427055B (en) | Remote sensing image sea surface ship detection method based on visual attention mechanism and information entropy | |
CN106384344B (en) | A method for detection and extraction of ships on sea surface from optical remote sensing images | |
CN104217215B (en) | A kind of classifying identification method of water surface foggy image and picture rich in detail | |
CN104660994B (en) | Maritime affairs dedicated video camera and maritime affairs intelligent control method | |
CN105654091B (en) | Sea-surface target detection method and device | |
CN109815807B (en) | A detection method for berthing ships based on edge line analysis and aggregated channel features | |
JP6797860B2 (en) | Water intrusion detection system and its method | |
CN107563303A (en) | A kind of robustness Ship Target Detection method based on deep learning | |
CN108647648A (en) | A kind of Ship Recognition system and method under visible light conditions based on convolutional neural networks | |
CN110516605A (en) | Any direction Ship Target Detection method based on cascade neural network | |
CN103400156A (en) | CFAR (Constant False Alarm Rate) and sparse representation-based high-resolution SAR (Synthetic Aperture Radar) image ship detection method | |
CN109117802A (en) | Ship Detection towards large scene high score remote sensing image | |
CN105354541A (en) | SAR (Synthetic Aperture Radar) image target detection method based on visual attention model and constant false alarm rate | |
CN104732215A (en) | Remote-sensing image coastline extracting method based on information vector machine | |
CN110414509B (en) | Port docking ship detection method based on sea-land segmentation and characteristic pyramid network | |
CN104217196A (en) | A method for detecting automatically a circular oil tank with a remote sensing image | |
CN109766823A (en) | A high-resolution remote sensing ship detection method based on deep convolutional neural network | |
CN109636758A (en) | A kind of floating on water object detecting method based on space-time dynamic operator | |
Hou et al. | SAR image ship detection based on visual attention model | |
CN102288166A (en) | Video-based multi-model combined surface ship detection method | |
CN105260715A (en) | Remote-area-oriented small-animal target detecting method | |
CN104077609A (en) | Saliency detection method based on conditional random field |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |