WO2021098163A1 - Corner-based aerial target detection method - Google Patents

Corner-based aerial target detection method Download PDF

Info

Publication number
WO2021098163A1
WO2021098163A1 (PCT/CN2020/089937)
Authority
WO
WIPO (PCT)
Prior art keywords
image
corner
edge
target
feature
Prior art date
Application number
PCT/CN2020/089937
Other languages
French (fr)
Chinese (zh)
Inventor
苗锋
白俊奇
杨沛
朱伟
杜瀚宇
马浩
刘�文
邱文嘉
翟尚礼
Original Assignee
南京莱斯电子设备有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京莱斯电子设备有限公司 filed Critical 南京莱斯电子设备有限公司
Publication of WO2021098163A1 publication Critical patent/WO2021098163A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • the invention relates to image detection technology, in particular to a corner point-based aerial target detection method.
  • photoelectric imaging equipment converts optical signals into electrical signals through imaging methods and transmits them to the processing terminal.
  • the processing terminal detects the target in the video by image detection means and extracts the target position at the same time.
  • Image target detection usually exploits the fact that objects in an image differ from the background in color, gray level, and texture, and distinguishes the target by detecting its edges.
  • the disadvantage of this method is that, under cloudy weather conditions, clouds show varied shapes, colors, and infrared radiation characteristics, forming a complex and irregular image background with cluttered edges, which degrades the system's detection and tracking performance for early-warning targets such as aircraft and drones.
  • the objective of the present invention is to provide a corner point-based aerial target detection method that can overcome the defects of the prior art.
  • the invention discloses a corner point-based aerial target detection method, which includes the following steps:
  • Step S1: Receive the video image, unpack the video and place it in the video buffer, and obtain the video information.
  • the video information includes the image resolution R×C, the frame rate N_f, and the image color space type, where R is the number of horizontal pixels and C the number of vertical pixels;
  • Step S2: According to the video image color space type, extract the gray image, denoted Im_gray;
  • Step S3: Image edge detection: for the gray image Im_gray, apply an edge detection algorithm to obtain the edge image Im_edge, and extract all closed edges (i.e. connected domains) in Im_edge;
  • Step S5: Perform corner grouping and divide the feature corner set P into m subsets;
  • Step S6: Target extraction: extract the outer contours of the corner points in the m subsets, find the minimum bounding rectangle of each group of corner points, and at the same time compute the centroid of the group as the target position; compute the maximum width and height from the centroid to the sides of the bounding rectangle, and construct a rectangle centered on the centroid using these maximum width and height as the final target frame;
  • Step S7: Target information output: output the target position and the target frame.
  • the video image is a visible light or infrared video image.
  • step S2 includes:
  • An RGB image is separated into its R (red), G (green), and B (blue) channel images.
  • For a YUV image, the Y channel is the gray image. YUV denotes a family of true-color color spaces; the terms Y'UV, YUV, YCbCr, YPbPr and so on overlap and are all commonly referred to as YUV. "Y" represents luminance (luma), i.e. the gray value, while "U" and "V" represent chrominance (chroma), which describe the color and saturation of the image and specify the color of a pixel.
  • For an HSV image (Hue, Saturation, Value), the V (value) channel is the gray image.
  • step S3 includes the following steps:
  • the Gaussian template is a rectangular structure of size l_1×l_2, with column-direction standard deviation σ_x and row-direction standard deviation σ_y, used to obtain the smoothed image Im_gauss; l_1 and l_2 are the length and width of the rectangular structure;
  • S3.2: Use the Canny edge detection algorithm to extract edges from the image Im_gauss and obtain the binarized edge image Im_edge; the two threshold parameters are T_h and T_l. Common edge detection algorithms include Sobel and Canny (reference: Canny J. A Computational Approach to Edge Detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, PAMI-8(6): 679-698);
  • S3.3: Apply the closing operation from mathematical morphology to the image Im_edge to form a new binarized edge image Im_morp; the structuring element used is a rectangle of size l_3×l_4, where l_3 and l_4 are its length and width;
  • step S5 includes: corner points within the same closed edge are assigned to the same group, and two or more closed edges divide the feature corner set P into m subsets G_1, G_2, …, G_m, where G_m denotes the m-th subset; the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint.
  • Since feature corners usually lie near object edges, the edge detection in step S2 ensures that all feature corners are distributed within the connected domain of an edge, so the subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint.
  • step S6 includes the following steps:
  • In the centroid formulas, i = 1, 2, …, m, n_i is the number of feature corners in the i-th feature corner subset G_i, and p_k is the k-th feature corner in G_i.
  • Y_r = max(abs(Y_i - y_up), abs(Y_i - y_down)).
  • the present invention has the following beneficial effects:
  • By combining corner points and edges, the corner points can be grouped and multiple targets can be distinguished;
  • Figure 1 is a flow chart of the present invention.
  • the present invention discloses an air target detection method based on corner detection, which includes the following steps:
  • S1: Receive a visible-light or infrared video image: unpack the video and place it in the video buffer, and obtain the video information: image resolution R×C, frame rate N_f, and image color space type;
  • Corner point grouping: corner points within the same closed edge C_j are assigned to the same group. Multiple closed edges divide the point set P of S4 into subsets G_1, G_2, …, G_m. Since feature corners usually lie near object edges, the edge detection in S2 ensures that all feature corners are distributed within the connected domain of an edge, so the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint;
  • Target extraction: extract the outer contour of the corners of each subset, find the minimum bounding rectangle of each group of corner points, and at the same time compute the centroid of the group as the target position; compute the maximum width and height from the centroid to the sides of the rectangle, and construct a rectangle centered on the centroid using these maximum width and height as the final target frame; this step is specifically divided into four sub-steps;
  • Target information output: output the target position and the target frame.
  • the present invention provides a method for detecting aerial targets based on corner points.
  • There are many specific methods and ways to implement this technical solution, and the above is only a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not made explicit in this embodiment can be implemented using existing technology.

Abstract

A corner-based aerial target detection method, comprising: firstly, receiving an infrared or visible-light image and placing the image data in a buffer area; performing color space conversion on the image to obtain the corresponding gray-scale image; performing edge detection on the gray-scale image to obtain closed edge regions; obtaining all corners in the image that conform to a computation rule by using a corner detection algorithm, and screening the corners according to a screening rule; grouping the corners by using the closed regions, taking the corners in the same contour as one group; calculating, for each group of corners, the outer contour and its central position as the location information of the detection target; and outputting the location and size information of each target to complete target detection. The method can quickly determine the location of a target and effectively suppress interference from a cloudy sky background, avoiding the target loss that occurs when an aerial target passes through a cloud background and solving the problem of aerial target detection against a cloudy background.

Description

Corner-based aerial target detection method

Technical field

The invention relates to image detection technology, and in particular to a corner-based aerial target detection method.

Background art

In photoelectric early-warning detection, photoelectric imaging equipment converts optical signals into electrical signals and transmits them to a processing terminal; the processing terminal detects targets in the video by image detection and extracts the target positions. Image target detection usually exploits the fact that objects in an image differ from the background in color, gray level, and texture, and distinguishes the target by detecting its edges. The disadvantage of this method is that, under cloudy weather conditions, clouds show varied shapes, colors, and infrared radiation characteristics, forming a complex and irregular image background with cluttered edges, which degrades the system's detection and tracking performance for early-warning targets such as aircraft and drones.

During photoelectric early-warning detection in cloudy weather, when targets such as aircraft pass through a cloud background, the cluttered edge characteristics affect target detection and tracking; for important military or security targets, the detection probability and tracking stability need to be improved. Research on target detection methods suited to different severe weather conditions is therefore one of the important topics in the image detection field. The key problems to be solved in aerial target detection are:

1. Selecting an image feature that can effectively distinguish the target from a cloudy background;

2. Designing a method that accurately extracts the target position and size.
Summary of the invention

Objective of the invention: the objective of the present invention is to provide a corner-based aerial target detection method that overcomes the defects of the prior art.

Technical solution:

The invention discloses a corner-based aerial target detection method, which includes the following steps:

Step S1: Receive the video image, unpack the video and place it in the video buffer, and obtain the video information; the video information includes the image resolution R×C, the frame rate N_f, and the image color space type, where R is the number of horizontal pixels and C the number of vertical pixels;

Step S2: According to the video image color space type, extract the gray image, denoted Im_gray;

Step S3: Image edge detection: for the gray image Im_gray, apply an edge detection algorithm to obtain the edge image Im_edge, and extract all closed edges (i.e. connected domains) in Im_edge;

Step S4: Image corner detection: for the gray image Im_gray, apply a corner detection algorithm to obtain all feature corners in the image, denoted as the feature corner set P = {p_1, p_2, …, p_n}, and record the coordinates of each corner;

Step S5: Perform corner grouping and divide the feature corner set P into m subsets;

Step S6: Target extraction: for each of the m subsets, extract the outer contour of its corners and find the minimum bounding rectangle of the group; at the same time compute the centroid of the group as the target position, compute the maximum width and height from the centroid to the sides of the bounding rectangle, and construct a rectangle centered on the centroid using these maximum width and height as the final target frame;

Step S7: Target information output: output the target position and the target frame.
In the present invention, in step S1 the video image is a visible-light or infrared video image.

In the present invention, step S2 includes the following cases (a code sketch follows):

For an RGB image, the R (red), G (green), and B (blue) channel images are separated, and the gray image Gray is obtained as:

Gray = R*0.299 + G*0.587 + B*0.114,

For a YUV image, the Y channel is the gray image. YUV denotes a family of true-color color spaces; the terms Y'UV, YUV, YCbCr, YPbPr and so on overlap and are all commonly referred to as YUV. "Y" represents luminance (luma), i.e. the gray value, while "U" and "V" represent chrominance (chroma), which describe the color and saturation of the image and specify the color of a pixel.

For an HSV image, the V channel is the gray image. HSV (Hue, Saturation, Value) is a color space created by A. R. Smith in 1978 based on the intuitive characteristics of color, also called the hexcone model. Its parameters are hue (H), saturation (S), and value (V).
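A minimal sketch of this color-space handling, assuming a NumPy array whose last axis holds the channels in the named order (the function name, the color-space labels, and the channel layout are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def extract_gray(frame: np.ndarray, color_space: str) -> np.ndarray:
    """Return the gray image Im_gray for a decoded frame (sketch, not the patent's code)."""
    if color_space == "GRAY":          # single-channel image: already gray
        return frame
    if color_space == "RGB":           # weighted sum of the R, G, B channels
        r, g, b = (frame[..., i].astype(np.float32) for i in range(3))
        return np.clip(r * 0.299 + g * 0.587 + b * 0.114, 0, 255).astype(np.uint8)
    if color_space == "YUV":           # Y (luma) channel is the gray image
        return frame[..., 0]
    if color_space == "HSV":           # V (value) channel is the gray image
        return frame[..., 2]
    raise ValueError(f"unsupported color space: {color_space}")
```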
In the present invention, step S3 includes the following steps (sketched in code after S3.4):

S3.1: Apply Gaussian convolution to each pixel of the gray image Im_gray; the Gaussian template is a rectangular structure of size l_1×l_2 with column-direction standard deviation σ_x and row-direction standard deviation σ_y, yielding the smoothed image Im_gauss, where l_1 and l_2 are the length and width of the rectangular structure;

S3.2: Use the Canny edge detection algorithm to extract edges from the image Im_gauss and obtain the binarized edge image Im_edge; the two threshold parameters are T_h and T_l. Common edge detection algorithms include Sobel and Canny (reference: Canny J. A Computational Approach to Edge Detection [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, PAMI-8(6): 679-698);

S3.3: Apply the morphological closing operation to the image Im_edge to form a new binarized edge image Im_morp; the structuring element used is a rectangle of size l_3×l_4, where l_3 and l_4 are its length and width;

S3.4: Perform connected-region detection on the image Im_morp, marking adjacent edge points as the same connected region to obtain the connected domains; the j-th connected domain is denoted C_j, j = 1, 2, …, m. The adjacency structuring element used is a rectangle of size l_5×l_6, where l_5 and l_6 are its length and width.
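A hedged sketch of S3.1-S3.4 using OpenCV, under the assumption that 8-connectivity stands in for the patent's l_5×l_6 adjacency element; the parameter defaults follow the typical values given in the embodiment below:

```python
import cv2
import numpy as np

def detect_closed_edges(im_gray: np.ndarray,
                        l1=7, l2=7, sigma_x=1.5, sigma_y=1.5,
                        t_h=40, t_l=30, l3=5, l4=5):
    """Sketch of S3.1-S3.4: smoothing, Canny edges, morphological closing, connected domains."""
    # S3.1: Gaussian smoothing with an l1 x l2 template
    im_gauss = cv2.GaussianBlur(im_gray, (l1, l2), sigmaX=sigma_x, sigmaY=sigma_y)
    # S3.2: Canny edge detection with the double thresholds T_l, T_h
    im_edge = cv2.Canny(im_gauss, t_l, t_h)
    # S3.3: closing with an l3 x l4 rectangular structuring element
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (l3, l4))
    im_morp = cv2.morphologyEx(im_edge, cv2.MORPH_CLOSE, kernel)
    # S3.4: label the connected domains C_j (label 0 is the background)
    num_labels, labels = cv2.connectedComponents(im_morp, connectivity=8)
    return im_morp, labels, num_labels - 1
```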
In the present invention, in step S4 the corner detection algorithm uses ORB feature point detection (reference: Rublee E, Rabaud V, Konolige K, et al. ORB: an efficient alternative to SIFT or SURF [C] // IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, November 6-13, 2011. IEEE, 2011.). Oriented FAST feature point detection is applied, the feature points are then described with BRIEF, and the best at most N feature corners are selected, giving the feature corner set P = {p_1, p_2, …, p_n}, where n ≤ N, p_n denotes the n-th feature corner, and the k-th feature corner has coordinates (x_{p_k}, y_{p_k}).
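A sketch of this S4 corner stage with OpenCV's ORB detector; keeping the strongest responses as the "best" corners is an assumption, since the patent does not specify the selection criterion:

```python
import cv2

def detect_feature_corners(im_gray, n_max=50):
    """Sketch of S4: ORB (oriented FAST + BRIEF) detection, keeping at most N feature corners."""
    orb = cv2.ORB_create(nfeatures=n_max)
    keypoints = orb.detect(im_gray, None)
    # keep the strongest responses first, at most N corners
    keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:n_max]
    # the k-th feature corner p_k has coordinates (x_pk, y_pk)
    return [(float(kp.pt[0]), float(kp.pt[1])) for kp in keypoints]
```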
In the present invention, step S5 includes: corner points within the same closed edge are assigned to the same group, and two or more closed edges divide the feature corner set P into m subsets G_1, G_2, …, G_m, where G_m denotes the m-th subset and the subsets satisfy:

G_1 ∪ G_2 ∪ … ∪ G_m = P,

where the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint.

During corner grouping, since feature corners usually lie near object edges, the edge detection in step S2 ensures that all feature corners are distributed within the connected domain of an edge, so the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint, with

G_i ∩ G_j = ∅ for i ≠ j.
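A sketch of this grouping rule, assuming the connected-domain label image from S3.4 and the corner list from S4 (the names and data layout are illustrative):

```python
from collections import defaultdict
import numpy as np

def group_corners(corners, labels: np.ndarray):
    """Assign each feature corner to the connected domain C_j that contains it (sketch of S5)."""
    groups = defaultdict(list)
    h, w = labels.shape
    for x, y in corners:
        r, c = int(round(y)), int(round(x))
        # a corner inside the connected domain of a closed edge joins that group
        if 0 <= r < h and 0 <= c < w and labels[r, c] > 0:
            groups[labels[r, c]].append((x, y))
    # non-empty, pairwise disjoint subsets G_1 ... G_m, keyed by domain label
    return dict(groups)
```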
In the present invention, step S6 includes the following steps (a combined code sketch follows the S6.4 formulas):

S6.1: Compute the centroid (X_i, Y_i) of the i-th feature corner subset G_i;

S6.2: Find the bounding rectangle of all feature corners in the i-th feature corner subset G_i;

S6.3: Compute the x-direction distance X_r from the centroid to the sides of the bounding rectangle and the y-direction distance Y_r from the centroid to the sides of the bounding rectangle;

S6.4: Compute the boundary of the target frame.

In S6.1, the centroid (X_i, Y_i) of the i-th feature corner subset G_i is computed as:

X_i = (1/n_i) Σ_{k=1}^{n_i} x_{p_k},

Y_i = (1/n_i) Σ_{k=1}^{n_i} y_{p_k},

where i = 1, 2, …, m, n_i is the number of feature corners in the i-th feature corner subset G_i, and p_k is the k-th feature corner in G_i, with coordinates (x_{p_k}, y_{p_k}).
In S6.2, the upper, lower, left, and right boundary coordinates x_left, x_right, y_up, y_down of the bounding rectangle of all feature corners in the i-th feature corner subset G_i are computed as:

x_left = min_{k=1,…,n_i} x_{p_k},

x_right = max_{k=1,…,n_i} x_{p_k},

y_up = min_{k=1,…,n_i} y_{p_k},

y_down = max_{k=1,…,n_i} y_{p_k},

where x_{p_k} and y_{p_k} are the x and y coordinates of the k-th feature corner in G_i, k = 1, 2, …, n_i.
In the present invention, in S6.3 the distances from the centroid to the bounding rectangle are computed as:

X_r = max(abs(X_i - x_left), abs(X_i - x_right)),

Y_r = max(abs(Y_i - y_up), abs(Y_i - y_down)).

In the present invention, in S6.4 the upper, lower, left, and right boundary coordinates x′_left, x′_right, y′_up, y′_down of the final target frame are computed as:

x′_left = X_i - X_r,

x′_right = X_i + X_r,

y′_up = Y_i - Y_r,

y′_down = Y_i + Y_r.
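A sketch of S6.1-S6.4 for a single corner subset G_i, assuming the group is given as an (n_i, 2) array of (x, y) coordinates (the layout and names are illustrative):

```python
import numpy as np

def target_box(group_xy: np.ndarray):
    """Return the centroid (target position) and the final target frame for one subset G_i."""
    # S6.1: centroid of the group
    cx, cy = group_xy.mean(axis=0)
    # S6.2: bounding rectangle of the corners
    x_left, y_up = group_xy.min(axis=0)
    x_right, y_down = group_xy.max(axis=0)
    # S6.3: maximum distance from the centroid to the rectangle sides
    x_r = max(abs(cx - x_left), abs(cx - x_right))
    y_r = max(abs(cy - y_up), abs(cy - y_down))
    # S6.4: final target frame centered on the centroid
    frame = (cx - x_r, cy - y_r, cx + x_r, cy + y_r)   # (x'_left, y'_up, x'_right, y'_down)
    return (float(cx), float(cy)), frame
```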
Beneficial effects:

Compared with the prior art, the present invention has the following beneficial effects:

(1) Corner detection extracts features peculiar to man-made targets such as airplanes, helicopters, and drones, distinguishing them effectively from common interference such as background clouds, which increases the detection probability while reducing the false alarm rate;

(2) By combining corners with edges, the corners can be grouped and multiple targets can be distinguished;

(3) Using the distribution of the feature points, the target center and boundary are determined accurately, improving the target detection display effect and the tracking accuracy.
Description of the drawings

The present invention is further described below in combination with the drawings and specific embodiments, and the above and other advantages of the present invention will become clearer.

Figure 1 is a flow chart of the present invention.

Detailed description of the embodiments

The present invention is further described below in conjunction with the drawings and an embodiment.
As shown in Figure 1, the present invention discloses an aerial target detection method based on corner detection, which includes the following steps:

S1: Receive a visible-light or infrared video image: unpack the video and place it in the video buffer, and obtain the video information: image resolution R×C, frame rate N_f, and image color space type;

S2: Extract the corresponding gray image according to the image color space type: for different color spaces such as RGB, YUV, HSV, and single-channel images, extract the gray image Im_gray;

S3: Image edge detection: for the gray image Im_gray, apply an edge detection algorithm to obtain the edge image Im_edge and extract all closed edges C_j in Im_edge; this is divided into the following four sub-steps:

S3.1: Apply Gaussian convolution to each point of the gray image Im_gray; the Gaussian template is a rectangular structure of size l_1×l_2 with column-direction standard deviation σ_x and row-direction standard deviation σ_y, yielding the smoothed image Im_gauss; typical parameter values are l_1 = 7, l_2 = 7, σ_x = 1.5, σ_y = 1.5;

S3.2: Use the Canny edge detection algorithm to extract edges from Im_gauss and obtain the binarized edge image Im_edge; the two threshold parameters are T_h and T_l, with typical values T_h = 40 and T_l = 30.

S3.3: Apply the morphological closing operation to Im_edge to form a new binarized edge image Im_morp; the structuring element is a rectangle of size l_3×l_4, with typical values l_3 = 5 and l_4 = 5;

S3.4: Perform connected-region detection on Im_morp, marking adjacent edge points as the same connected region to obtain the connected domains C_j, j = 1, 2, …, m; the adjacency structuring element is a rectangle of size l_5×l_6, with typical values l_5 = 3 and l_6 = 3.

S4: Image corner detection: for the gray image Im_gray, apply a corner detection algorithm to obtain all feature corners in the image, recorded as the point set P = {p_1, p_2, …, p_n}, with the coordinates of each corner (x_{p_k}, y_{p_k}), k = 1, 2, …, n. In this step, ORB feature point detection is used: oriented FAST feature point detection followed by BRIEF feature description, selecting the best at most N feature points to obtain the feature corner set P = {p_1, p_2, …, p_n}, where n ≤ N, with typical value N = 50.
S5: Corner grouping: corner points within the same closed edge C_j are assigned to the same group. Multiple closed edges divide the point set P of S4 into subsets G_1, G_2, …, G_m. Since feature corners usually lie near object edges, the edge detection in S2 ensures that all feature corners are distributed within the connected domain of an edge, so the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint, namely:

G_1 ∪ G_2 ∪ … ∪ G_m = P, with G_i ∩ G_j = ∅ for i ≠ j.
S6: Target extraction: for each subset, extract the outer contour of its corners and find the minimum bounding rectangle of the group; at the same time compute the centroid of the group as the target position, compute the maximum width and height from the centroid to the sides of the rectangle, and construct a rectangle centered on the centroid using these maximum width and height as the final target frame. This step is specifically divided into the following four sub-steps:

S6.1: Compute the centroid (X_i, Y_i) of the feature corner subset G_i, as follows:

X_i = (1/n_i) Σ_{k=1}^{n_i} x_{p_k},

Y_i = (1/n_i) Σ_{k=1}^{n_i} y_{p_k},

where i = 1, 2, …, m and n_i is the number of feature corners in the point set G_i;

S6.2: Find the bounding rectangle of all feature corners in the point set G_i, as follows:

x_left = min_{k=1,…,n_i} x_{p_k},

x_right = max_{k=1,…,n_i} x_{p_k},

y_up = min_{k=1,…,n_i} y_{p_k},

y_down = max_{k=1,…,n_i} y_{p_k};
S6.3: Compute the x-direction distance X_r from the centroid to the sides of the bounding rectangle and the y-direction distance Y_r from the centroid to the sides of the bounding rectangle, as follows:

X_r = max(abs(X_i - x_left), abs(X_i - x_right)),

Y_r = max(abs(Y_i - y_up), abs(Y_i - y_down));

S6.4: Compute the boundary of the target frame, as follows:

x′_left = X_i - X_r,

x′_right = X_i + X_r,

y′_up = Y_i - Y_r,

y′_down = Y_i + Y_r.
S7: Target information output: output the target position and the target frame.
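Putting the embodiment together, a hedged end-to-end sketch with the typical parameter values above (OpenCV/NumPy assumed; using 8-connectivity for S3.4 and ORB response strength for the "best" corners are assumptions, not requirements stated in the patent):

```python
import cv2
import numpy as np
from collections import defaultdict

def detect_aerial_targets(im_gray: np.ndarray, n_max=50):
    """S2-S7 pipeline sketch with the embodiment's typical parameters."""
    # S3: smoothing (7x7, sigma 1.5), Canny (T_l=30, T_h=40), closing (5x5), connected domains
    im_gauss = cv2.GaussianBlur(im_gray, (7, 7), sigmaX=1.5, sigmaY=1.5)
    im_edge = cv2.Canny(im_gauss, 30, 40)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    im_morp = cv2.morphologyEx(im_edge, cv2.MORPH_CLOSE, kernel)
    _, labels = cv2.connectedComponents(im_morp, connectivity=8)
    h, w = labels.shape
    # S4: ORB corners, at most N = 50
    orb = cv2.ORB_create(nfeatures=n_max)
    kps = sorted(orb.detect(im_gray, None), key=lambda k: k.response, reverse=True)[:n_max]
    # S5: group corners by the connected domain they fall in
    groups = defaultdict(list)
    for kp in kps:
        c, r = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if 0 <= r < h and 0 <= c < w and labels[r, c] > 0:
            groups[labels[r, c]].append(kp.pt)
    # S6-S7: centroid as target position, symmetric frame as target box
    targets = []
    for pts in groups.values():
        pts = np.asarray(pts)
        cx, cy = pts.mean(axis=0)
        x_r = max(abs(cx - pts[:, 0].min()), abs(cx - pts[:, 0].max()))
        y_r = max(abs(cy - pts[:, 1].min()), abs(cy - pts[:, 1].max()))
        targets.append(((cx, cy), (cx - x_r, cy - y_r, cx + x_r, cy + y_r)))
    return targets
```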
The present invention provides a corner-based aerial target detection method. There are many specific methods and ways to implement this technical solution, and the above is only a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not made explicit in this embodiment can be implemented using existing technology.

Claims (10)

  1. A corner-based aerial target detection method, characterized in that it comprises the following steps:
    Step S1: Receive the video image, unpack the video and place it in the video buffer, and obtain the video information; the video information includes the image resolution R×C, the frame rate N_f, and the image color space type, where R is the number of horizontal pixels and C the number of vertical pixels;
    Step S2: According to the video image color space type, extract the gray image, denoted Im_gray;
    Step S3: Image edge detection: for the gray image Im_gray, apply an edge detection algorithm to obtain the edge image Im_edge, and extract all closed edges (i.e. connected domains) in Im_edge;
    Step S4: Image corner detection: for the gray image Im_gray, apply a corner detection algorithm to obtain all feature corners in the image, denoted as the feature corner set P = {p_1, p_2, …, p_n}, and record the coordinates of each corner;
    Step S5: Perform corner grouping and divide the feature corner set P into m subsets;
    Step S6: Target extraction: for each of the m subsets, extract the outer contour of its corners and find the minimum bounding rectangle of the group; at the same time compute the centroid of the group as the target position, compute the maximum width and height from the centroid to the sides of the bounding rectangle, and construct a rectangle centered on the centroid using these maximum width and height as the final target frame;
    Step S7: Target information output: output the target position and the target frame.
  2. The method according to claim 1, characterized in that, in step S1, the video image is a visible-light or infrared video image.
  3. The method according to claim 2, characterized in that step S2 comprises:
    For an RGB image, separating the R, G, and B channel images; the gray image Gray is obtained as:
    Gray = R*0.299 + G*0.587 + B*0.114;
    For a YUV image, the Y channel is the gray image;
    For an HSV image, the V channel is the gray image.
  4. The method according to claim 3, characterized in that step S3 comprises the following steps:
    S3.1: Applying Gaussian convolution to each pixel of the gray image Im_gray, the Gaussian template being a rectangular structure of size l_1×l_2 with column-direction standard deviation σ_x and row-direction standard deviation σ_y, to obtain the smoothed image Im_gauss, where l_1 and l_2 are the length and width of the rectangular structure;
    S3.2: Using an edge detection algorithm to extract edges from the image Im_gauss and obtain the binarized edge image Im_edge;
    S3.3: Applying the morphological closing operation to the image Im_edge to form a new binarized edge image Im_morp, the structuring element being a rectangle of size l_3×l_4, where l_3 and l_4 are its length and width;
    S3.4: Performing connected-region detection on the image Im_morp, marking adjacent edge points as the same connected region to obtain the connected domains, the j-th connected domain being denoted C_j, j = 1, 2, …, m; the adjacency structuring element is a rectangle of size l_5×l_6, where l_5 and l_6 are its length and width.
  5. The method according to claim 4, characterized in that, in step S4, the corner detection algorithm uses ORB feature point detection: oriented FAST feature point detection is applied, the feature points are then described with BRIEF, and the best at most N feature corners are selected, giving the feature corner set P = {p_1, p_2, …, p_n}, where n ≤ N, p_n denotes the n-th feature corner, and the k-th feature corner has coordinates (x_{p_k}, y_{p_k}), k = 1, 2, …, n.
  6. The method according to claim 5, characterized in that step S5 comprises: corner points within the same closed edge are assigned to the same group, and two or more closed edges divide the feature corner set P into m subsets G_1, G_2, …, G_m, where G_m denotes the m-th subset and the subsets satisfy:
    G_1 ∪ G_2 ∪ … ∪ G_m = P,
    where the feature corner subsets G_1, G_2, …, G_m are non-empty and pairwise disjoint.
  7. The method according to claim 6, characterized in that step S6 comprises the following steps:
    S6.1: Computing the centroid (X_i, Y_i) of the i-th feature corner subset G_i;
    S6.2: Finding the bounding rectangle of all feature corners in the i-th feature corner subset G_i;
    S6.3: Computing the x-direction distance X_r from the centroid to the sides of the bounding rectangle and the y-direction distance Y_r from the centroid to the sides of the bounding rectangle;
    S6.4: Computing the boundary of the target frame.
  8. The method according to claim 7, characterized in that, in S6.1, the centroid (X_i, Y_i) of the i-th feature corner subset G_i is computed as:
    X_i = (1/n_i) Σ_{k=1}^{n_i} x_{p_k},
    Y_i = (1/n_i) Σ_{k=1}^{n_i} y_{p_k},
    where i = 1, 2, …, m, n_i is the number of feature corners in the i-th feature corner subset G_i, and p_k is the k-th feature corner in G_i.
  9. The method according to claim 8, characterized in that, in S6.2, the upper, lower, left, and right boundary coordinates x_left, x_right, y_up, y_down of the bounding rectangle of all feature corners in the i-th feature corner subset G_i are computed as:
    x_left = min_{k=1,…,n_i} x_{p_k},
    x_right = max_{k=1,…,n_i} x_{p_k},
    y_up = min_{k=1,…,n_i} y_{p_k},
    y_down = max_{k=1,…,n_i} y_{p_k},
    where x_{p_k} and y_{p_k} are the x and y coordinates of the k-th feature corner in G_i.
  10. The method according to claim 9, characterized in that, in S6.3, the distances from the centroid to the bounding rectangle are computed as:
    X_r = max(abs(X_i - x_left), abs(X_i - x_right)),
    Y_r = max(abs(Y_i - y_up), abs(Y_i - y_down)),
    where abs() denotes the absolute value;
    In S6.4, the upper, lower, left, and right boundary coordinates x′_left, x′_right, y′_up, y′_down of the new target frame are computed as:
    x′_left = X_i - X_r,
    x′_right = X_i + X_r,
    y′_up = Y_i - Y_r,
    y′_down = Y_i + Y_r.
PCT/CN2020/089937 2019-11-18 2020-05-13 Corner-based aerial target detection method WO2021098163A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911124953.8 2019-11-18
CN201911124953.8A CN110852323B (en) 2019-11-18 2019-11-18 Angular point-based aerial target detection method

Publications (1)

Publication Number Publication Date
WO2021098163A1 true WO2021098163A1 (en) 2021-05-27

Family

ID=69601766

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089937 WO2021098163A1 (en) 2019-11-18 2020-05-13 Corner-based aerial target detection method

Country Status (2)

Country Link
CN (1) CN110852323B (en)
WO (1) WO2021098163A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310390A (en) * 2023-05-17 2023-06-23 上海仙工智能科技有限公司 Visual detection method and system for hollow target and warehouse management system
CN116740332A (en) * 2023-06-01 2023-09-12 南京航空航天大学 Method for positioning center and measuring angle of space target component on satellite based on region detection

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852323B (en) * 2019-11-18 2022-03-15 南京莱斯电子设备有限公司 Angular point-based aerial target detection method
CN112542007A (en) * 2020-11-30 2021-03-23 福州外语外贸学院 Dangerous target detection method and system between financial withdrawals
CN113240789B (en) * 2021-04-13 2023-05-23 青岛小鸟看看科技有限公司 Virtual object construction method and device
CN114648575A (en) * 2022-04-01 2022-06-21 合肥学院 Track slope displacement binocular vision detection method and system based on ORB algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298698A (en) * 2011-05-30 2011-12-28 河海大学 Remote sensing image airplane detection method based on fusion of angle points and edge information
US20120093434A1 (en) * 2009-06-05 2012-04-19 Serene Banerjee Edge detection
CN108629343A (en) * 2018-04-28 2018-10-09 湖北民族学院 A kind of license plate locating method and system based on edge detection and improvement Harris Corner Detections
CN110264459A (en) * 2019-06-24 2019-09-20 江苏开放大学(江苏城市职业学院) A kind of interstices of soil characteristics information extraction method
CN110852323A (en) * 2019-11-18 2020-02-28 南京莱斯电子设备有限公司 Angular point-based aerial target detection method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567994B (en) * 2011-12-31 2014-08-20 南京理工大学 Infrared small target detection method based on angular point gaussian characteristic analysis
CN105260749B (en) * 2015-11-02 2018-11-13 中国电子科技集团公司第二十八研究所 Real-time target detection method based on direction gradient binary pattern and soft cascade SVM
CN109215018A (en) * 2018-08-23 2019-01-15 上海海事大学 Based on Canny operator and the morphologic ship detecting method of Gauss
CN109614864B (en) * 2018-11-06 2021-08-27 南京莱斯电子设备有限公司 Method for detecting retractable state of undercarriage of multi-model aircraft at ground-based view angle
CN109858455B (en) * 2019-02-18 2023-06-20 南京航空航天大学 Block detection scale self-adaptive tracking method for round target

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120093434A1 (en) * 2009-06-05 2012-04-19 Serene Banerjee Edge detection
CN102298698A (en) * 2011-05-30 2011-12-28 河海大学 Remote sensing image airplane detection method based on fusion of angle points and edge information
CN108629343A (en) * 2018-04-28 2018-10-09 湖北民族学院 A kind of license plate locating method and system based on edge detection and improvement Harris Corner Detections
CN110264459A (en) * 2019-06-24 2019-09-20 江苏开放大学(江苏城市职业学院) A kind of interstices of soil characteristics information extraction method
CN110852323A (en) * 2019-11-18 2020-02-28 南京莱斯电子设备有限公司 Angular point-based aerial target detection method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIU JIANBIN: "A New Approach to Detect Aircrafts in Remote Sensing Images Based on Corner and Edge Information Fusion", MICROELECTRONICS & COMPUTER, vol. 28, no. 9, 30 September 2011 (2011-09-30), pages 214 - 216, XP055813810, DOI: 10.19304/j.cnki.issn1000-7180.2011.09.056 *
SAEED, MIRGHASEMI: "A Parallel Approach to Combine SVM, Edge and Corner Detection Methods for Target Detection", JOURNAL OF MULTIMEDIA PROCESSING AND TECHNOLOGIES, vol. 4, no. 2, 30 June 2013 (2013-06-30), pages 47 - 54, XP055813812 *
YU, RUIXING ET AL.: "Target Extraction and Image Matching Algorithm Based on Combination of Edge and Corner", JOURNAL OF NORTHWESTERN POLYTECHNICAL UNIVERSITY, vol. 35, no. 4, 31 August 2017 (2017-08-31), pages 586 - 590, XP055813814 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310390A (en) * 2023-05-17 2023-06-23 上海仙工智能科技有限公司 Visual detection method and system for hollow target and warehouse management system
CN116310390B (en) * 2023-05-17 2023-08-18 上海仙工智能科技有限公司 Visual detection method and system for hollow target and warehouse management system
CN116740332A (en) * 2023-06-01 2023-09-12 南京航空航天大学 Method for positioning center and measuring angle of space target component on satellite based on region detection
CN116740332B (en) * 2023-06-01 2024-04-02 南京航空航天大学 Method for positioning center and measuring angle of space target component on satellite based on region detection

Also Published As

Publication number Publication date
CN110852323B (en) 2022-03-15
CN110852323A (en) 2020-02-28

Similar Documents

Publication Publication Date Title
WO2021098163A1 (en) Corner-based aerial target detection method
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN108596849A (en) A kind of single image to the fog method based on sky areas segmentation
WO2018086233A1 (en) Character segmentation method and device, and element detection method and device
CN108537239A (en) A kind of method of saliency target detection
CN104598907B (en) Lteral data extracting method in a kind of image based on stroke width figure
CN106127817B (en) A kind of image binaryzation method based on channel
CN111461036B (en) Real-time pedestrian detection method using background modeling to enhance data
CN110268442B (en) Computer-implemented method of detecting a foreign object on a background object in an image, device for detecting a foreign object on a background object in an image, and computer program product
WO2020223963A1 (en) Computer-implemented method of detecting foreign object on background object in image, apparatus for detecting foreign object on background object in image, and computer-program product
CN110175556B (en) Remote sensing image cloud detection method based on Sobel operator
CN105069816B (en) A kind of method and system of inlet and outlet people flow rate statistical
CN110008968A (en) A kind of robot clearing automatic trigger method based on image vision
CN109409356B (en) Multi-direction Chinese print font character detection method based on SWT
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN108038458B (en) Method for automatically acquiring outdoor scene text in video based on characteristic abstract diagram
CN108711160B (en) Target segmentation method based on HSI (high speed input/output) enhanced model
Liu et al. A novel multi-oriented chinese text extraction approach from videos
CN112861654A (en) Famous tea picking point position information acquisition method based on machine vision
Yao et al. Recognition and location of solar panels based on machine vision
CN110619336B (en) Goods identification algorithm based on image processing
CN113744326B (en) Fire detection method based on seed region growth rule in YCRCB color space
CN109165659B (en) Vehicle color identification method based on superpixel segmentation
CN105957067B (en) A kind of color image edge detection method based on color difference
CN108830834B (en) Automatic extraction method for video defect information of cable climbing robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20890224

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20890224

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20890224

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20890224

Country of ref document: EP

Kind code of ref document: A1