CN107992899A - A kind of airdrome scene moving object detection recognition methods - Google Patents

A kind of airdrome scene moving object detection recognition methods

Info

Publication number
CN107992899A
CN107992899A (application CN201711345550.7A)
Authority
CN
China
Prior art keywords
moving object
airport
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711345550.7A
Other languages
Chinese (zh)
Inventor
韩松臣
詹昭焕
李炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201711345550.7A priority Critical patent/CN107992899A/en
Publication of CN107992899A publication Critical patent/CN107992899A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques based on distances to training or reference patterns
    • G06F18/24147: Distances to closest patterns, e.g. nearest neighbour classification
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting and recognizing moving targets on the airport surface. The specific process is as follows: first, the tendency flow (streak flow) method is used to obtain region proposals for moving targets; then a convolutional neural network is used to identify the moving targets. The method overcomes the shortcomings of traditional target extraction methods, which extract moving targets of varying sizes incompletely and with considerable noise, and compensates for the weak ability of deep-learning detection networks to detect small targets on the airport surface. The recognition network designed in the invention has few convolutional layers, small feature dimensions and a low computational cost while achieving high accuracy, meeting the requirements of airport surface surveillance.

Description

A method for detecting and recognizing moving targets in airport scenes

Technical Field

The invention relates to the technical field of digital image processing, and in particular to a method for detecting and recognizing moving targets at airports.

Background Art

With the rapid development of China's civil aviation transport industry, the number of aircraft, vehicles and personnel at airports has increased rapidly, and the operating environment of the airport surface has become increasingly complex. It is therefore necessary to introduce airport surface surveillance systems.

Traditional airport surface surveillance relies mainly on surface movement radar, and large domestic airports such as Beijing Capital Airport and Shanghai Pudong Airport are equipped with it. However, because of the high installation and maintenance costs of surface movement radar, the vast majority of small and medium-sized airports in China are not so equipped; instead they rely on controllers' visual observation and manual operations to monitor the surface, which greatly increases the risk of airport surface operations.

With the development of computer vision, airport surface surveillance based on video technology has emerged in recent years. It is low in cost and, compared with surveillance radar, covers a wider area and is therefore more flexible. Current research on dynamic targets on the airport surface focuses mainly on tracking targets of known types; research on target detection is comparatively scarce. Target detection methods fall into two broad categories: methods based on hand-crafted features and methods based on deep learning networks. Common hand-crafted methods include the optical flow method, the frame-difference method and ViBe; their accuracy is low and their time cost high, making them unsuitable for airport surface surveillance. Deep-learning detectors such as R-CNN, Faster R-CNN and SSD (Single Shot MultiBox Detector) offer high detection accuracy and efficiency, but because of their design they perform poorly on small targets, and detection accuracy for small far-field targets at airports is particularly low. Given that most objects on the airport surface are fixed, for the purpose of safety surveillance we are far more interested in the dynamic targets on the airport surface than in the static ones.

Summary of the Invention

In view of this, the present invention uses the tendency flow method to detect the motion information of moving targets and designs a deep recognition network to identify them, realizing the detection and recognition of moving targets on the airport surface, including small targets. The invention divides airport moving targets into three categories: aircraft, vehicles and pedestrians.

The technical solution of the present invention is implemented as follows:

(I) Use the tendency flow method to obtain region proposals for moving targets, in the form of rectangular bounding-box coordinates. The tendency flow of a moving target is computed as follows:

(a) First, compute the optical flow vector of the moving target. For a pixel point (x, y) in the image, let its brightness at time t be E(x, y, t); after a time interval Δt the brightness is E(x+Δx, y+Δy, t+Δt). As Δt approaches zero, the brightness of the point is assumed to remain unchanged, which yields the optical flow constraint equation:

E_x u + E_y v + E_t = 0

where E_x, E_y and E_t respectively denote the gradients of the pixel point along the x, y and t directions, and u = dx/dt, v = dy/dt respectively denote the velocity components of the optical flow in the x and y directions, i.e. the optical flow vector. Selecting a neighborhood window of size n×n (n > 1) and writing the constraint equation for each of its pixels i = 1, 2, ..., n² gives the system of equations:

E_{x,i} u + E_{y,i} v = -E_{t,i},   i = 1, 2, ..., n²

The above can be written in matrix form as:

A d = -b

where the rows of A are (E_{x,i}, E_{y,i}), b stacks the temporal gradients E_{t,i}, and d = (u, v)^T. Its least-squares solution, i.e. the optical flow vector, is:

d = (u, v)^T = (A^T A)^{-1} A^T (-b)
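The least-squares step above can be sketched in a few lines; since A^T A is 2×2, it can be inverted directly. The toy frames, the 3×3 window and the central-difference gradients below are illustrative assumptions, not details from the patent:

```python
def lk_flow(frame1, frame2, cx, cy, half=1):
    """Least-squares solution of Ad = -b for one (2*half+1)^2 window.

    A stacks the spatial gradients (Ex, Ey) of every window pixel and
    b stacks the temporal gradients Et, so d = (u, v) is the flow vector.
    """
    sxx = sxy = syy = bx = by = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            x, y = cx + dx, cy + dy
            ex = (frame1[y][x + 1] - frame1[y][x - 1]) / 2.0  # gradient along x
            ey = (frame1[y + 1][x] - frame1[y - 1][x]) / 2.0  # gradient along y
            et = frame2[y][x] - frame1[y][x]                  # gradient along t
            sxx += ex * ex; sxy += ex * ey; syy += ey * ey
            bx -= ex * et;  by -= ey * et
    det = sxx * syy - sxy * sxy  # A^T A is 2x2, invert it in closed form
    u = (syy * bx - sxy * by) / det
    v = (sxx * by - sxy * bx) / det
    return u, v

# Toy frames: frame2 is frame1 shifted one pixel to the right.
f = lambda x, y: x * x + x * y + y * y
frame1 = [[f(x, y) for x in range(12)] for y in range(12)]
frame2 = [[f(x - 1, y) for x in range(12)] for y in range(12)]
u, v = lk_flow(frame1, frame2, 5, 5)
```

On these frames the recovered flow is close to (1, 0); the small residual comes from the finite-difference approximation of the gradients.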

(b) Compute the streakline Q_i of the moving pixels from the optical flow vectors. Based on the optical flow obtained in the first step, let the point (x_i(t), y_i(t)) denote the position, at time t in the i-th frame, of a particle of the moving target whose initial position is the point q. The streakline is the trace of all particles released from q. Each particle on the streakline is translated by the flow:

x_i(t+1) = x_i(t) + u_i,   y_i(t+1) = y_i(t) + v_i

The streakline is the collection of these particles, obtained as:

Q_i = {x_i(t), y_i(t), u_i, v_i}

where u_i = u(x_i(t), y_i(t)) and v_i = v(x_i(t), y_i(t)) are the flow components at the particle position.
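The streakline bookkeeping of step (b) can be sketched as follows: a new particle is released at the seed point q every frame, and every live particle is advected by the local flow. The constant flow field below is an assumption made only to keep the example self-contained:

```python
def flow(x, y):
    # stand-in for the optical flow of step (a): uniform motion
    return 1.0, 0.5

def streakline(q, frames):
    particles = []  # each entry plays the role of {x_i(t), y_i(t), u_i, v_i}
    for t in range(frames):
        # release a fresh particle at the seed point q
        particles.append({"x": q[0], "y": q[1], "u": 0.0, "v": 0.0})
        for p in particles:  # advect every live particle by the local flow
            p["u"], p["v"] = flow(p["x"], p["y"])
            p["x"] += p["u"]
            p["y"] += p["v"]
    return particles

Q = streakline((0.0, 0.0), frames=4)
```

The oldest particle has been advected four times, the newest only once, which is exactly the "trace" structure a streakline adds on top of a plain optical flow field.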

(c) Compute the tendency flow (streak flow) Ω_s of the moving target. The tendency flow is defined as Ω_s = (u_s, v_s)^T, where u_s and v_s respectively denote the velocity components of the tendency flow in the two directions. Let the set U = [u_i] collect the particle velocities; each u_i is regarded as a linear interpolation over three neighboring pixels:

u_i = b_1 u_s(k_1) + b_2 u_s(k_2) + b_3 u_s(k_3)

where k_j denotes the index of the j-th neighboring pixel and b_j denotes the known triangular basis function of that pixel in the domain. Writing this equation for all points in U forms the system:

B u_s = U

where B is composed of the elements b_j; u_s is solved by the least-squares method. v_s is obtained in the same way, yielding the tendency flow:

Ω_s = (u_s, v_s)^T
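Solving B u_s = U by least squares is a small normal-equations problem. The two-node grid and the interpolation weights below are illustrative assumptions, not values from the patent:

```python
def lstsq_2(B, U):
    """Least-squares solution of B @ x = U for two unknowns via normal equations."""
    s00 = sum(r[0] * r[0] for r in B)
    s01 = sum(r[0] * r[1] for r in B)
    s11 = sum(r[1] * r[1] for r in B)
    t0 = sum(r[0] * u for r, u in zip(B, U))
    t1 = sum(r[1] * u for r, u in zip(B, U))
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

# Rows: basis-function weights b_j of each particle; U: observed particle speeds.
B = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.25, 0.75]]
true_us = (2.0, 4.0)                      # streak-flow values at the two nodes
U = [b0 * true_us[0] + b1 * true_us[1] for b0, b1 in B]
us = lstsq_2(B, U)                        # recovers (2.0, 4.0)
```

Because the synthetic observations are exactly consistent, the least-squares solution recovers the nodal streak-flow values exactly.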

(d) Take the maximum bounding rectangle of each target extracted by the tendency flow, i.e. for each target take the rectangular frame enclosed by [x_min, y_min] and [x_max, y_max], to form the moving-target region proposals.
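Step (d) can be sketched as a connected-component pass over a binary motion mask produced by the tendency flow; the mask below is an illustrative assumption:

```python
def region_proposals(mask):
    """Turn moving pixels into per-target boxes (x_min, y_min, x_max, y_max)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                stack, seen[sy][sx] = [(sx, sy)], True
                xmin = xmax = sx
                ymin = ymax = sy
                while stack:  # flood-fill one moving target (4-connected)
                    x, y = stack.pop()
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                        if 0 <= nx < w and 0 <= ny < h and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                boxes.append((xmin, ymin, xmax, ymax))
    return boxes

mask = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
boxes = region_proposals(mask)  # one box per connected blob of moving pixels
```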

(II) Based on the moving targets captured by the tendency flow method, identify the moving targets with a convolutional neural network.

The convolutional neural network of the present invention has 14 layers in total, including 12 convolutional layers and 2 fully connected layers. To cope with the large size differences among airport moving targets, the network uses 5 Inception structures, which extract multi-scale features of the targets with 1×1, 3×3 and 5×5 convolution kernels respectively. To reduce overfitting and shorten training and testing time, the number of feature maps and the convolution kernel size of each layer have been carefully designed.
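The multi-branch idea behind the Inception structure can be sketched at shape level: three parallel same-padding convolutions (1×1, 3×3, 5×5) over one input, whose outputs can be concatenated along the channel axis because they keep the spatial size. The averaging kernels and the 8×8 input are illustrative assumptions; a real implementation would use a deep-learning framework:

```python
def conv_same(img, k):
    """Naive 2D convolution with zero 'same' padding; k is an odd-sized kernel."""
    h, w, r = len(img), len(img[0]), len(k) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    yy, xx = y + i, x + j
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx] * k[i + r][j + r]
            out[y][x] = acc
    return out

def box_kernel(n):
    # stand-in for learned weights: an n x n averaging filter
    return [[1.0 / (n * n)] * n for _ in range(n)]

def inception_branches(img):
    # each branch keeps the spatial size, so channel-wise concat is valid
    return [conv_same(img, box_kernel(n)) for n in (1, 3, 5)]

img = [[float(x + y) for x in range(8)] for y in range(8)]
maps = inception_branches(img)
```

The point of the design choice is that the three branches see the same spatial grid at three receptive-field sizes, which is what lets one layer handle both small and large airport targets.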

The recognition network identifies moving targets through the following steps:

(a) The recognition network extracts the features of the moving target. The convolution operation performed on the image is:

F_l(m, n) = Σ_{k=0}^{K-1} Σ_{i=0}^{I-1} Σ_{j=0}^{J-1} M_k(m+i, n+j) · X_{k,l}(i, j) + b_{kl}

Here the convolutional layer takes K feature maps as input and outputs L feature maps; LK convolution kernels of size I×J are required. F_l denotes the l-th output feature map, M_k the k-th input feature map, X_{k,l} the two-dimensional convolution kernel connecting input map k to output map l, and b_{kl} the bias value.
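The convolution formula above can be transcribed directly. The sizes and values below are illustrative, and the bias is applied once per output map (reading the patent's b_{kl} notation as a per-output-map bias, which is the standard convention):

```python
def conv_layer(M, X, b, I, J):
    """F_l(m,n) = sum_k sum_i sum_j M[k][m+i][n+j] * X[k][l][i][j] + b[l]."""
    K, L = len(M), len(X[0])
    H, W = len(M[0]), len(M[0][0])
    oh, ow = H - I + 1, W - J + 1  # 'valid' convolution output size
    F = []
    for l in range(L):
        fmap = [[b[l] for _ in range(ow)] for _ in range(oh)]  # start from bias
        for m in range(oh):
            for n in range(ow):
                for k in range(K):        # sum over input feature maps
                    for i in range(I):    # sum over kernel rows
                        for j in range(J):  # sum over kernel columns
                            fmap[m][n] += M[k][m + i][n + j] * X[k][l][i][j]
        F.append(fmap)
    return F

M = [[[1.0] * 3 for _ in range(3)]]      # K = 1 input map, 3x3, all ones
X = [[[[1.0, 1.0], [1.0, 1.0]]]]         # X[k][l] is a 2x2 kernel of ones
F = conv_layer(M, X, [0.5], I=2, J=2)    # L = 1 output map, bias 0.5
```

Each output cell sums a 2×2 window of ones (4.0) plus the bias, so every entry of the single output map is 4.5.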

The extracted features are combined in a weighted sum and processed with the ReLU (Rectified Linear Units) non-saturating activation function:

σ(x) = max{0, x}

After the activation function, the network applies a max-pooling operation to the feature map, forming the new feature map:

Y'_l(m', n') = max_{r∈R} Y_l(m + r, n + r)

where Y'_l denotes the pooled feature map.
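The max-pooling step can be sketched as follows, assuming a 2×2 window with stride 2 (the patent does not fix these values):

```python
def max_pool(Y, R=2):
    """Non-overlapping R x R max pooling over a single feature map."""
    H, W = len(Y), len(Y[0])
    return [[max(Y[m + r][n + c] for r in range(R) for c in range(R))
             for n in range(0, W - R + 1, R)]
            for m in range(0, H - R + 1, R)]

Y = [[1, 3, 2, 0],
     [4, 2, 1, 1],
     [0, 1, 5, 6],
     [2, 2, 7, 8]]
pooled = max_pool(Y)  # each 2x2 block collapses to its maximum
```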

Following the layer arrangement of the recognition network of the present invention, repeatedly performing the above convolution and pooling operations yields semantic abstractions of the two-dimensional data from low level to high level.

(b) The recognition network classifies the moving target: after the above features pass through the fully connected layers, the resulting feature vector is fed into a softmax function for classification. The invention divides airport moving targets into three categories: aircraft, vehicles and pedestrians.
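The final classification step can be sketched with a numerically stable softmax over the three classes named above; the logit values are illustrative assumptions:

```python
import math

def softmax(z):
    m = max(z)                       # subtract the max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

CLASSES = ["aircraft", "vehicle", "pedestrian"]
logits = [2.0, 0.5, -1.0]            # output of the last fully connected layer
probs = softmax(logits)              # class probabilities, summing to 1
pred = CLASSES[probs.index(max(probs))]
```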

Beneficial effects: the invention first uses the tendency flow method to extract the regions of moving targets in the image, and then constructs a convolutional neural network to identify the moving targets within the regions proposed by the tendency flow method, forming an efficient and accurate framework for airport moving-target detection and recognition.

Brief Description of the Drawings

Fig. 1 is a flow chart of region proposal by the tendency flow method in an embodiment of the present invention.

Fig. 2 is a structural diagram of the Inception module in an embodiment of the present invention.

Fig. 3 shows the moving-target detection results in an embodiment of the present invention.

Fig. 4 is a structural diagram of the recognition network in an embodiment of the present invention.

Claims (5)

1. A detection and identification method for an airport scene moving target is characterized by comprising the following specific processes:
(I) acquiring a region suggestion of a moving target by using a tendency flow method;
(II) identifying the moving target by using a convolutional neural network.
2. The method for detecting and identifying the moving object on the airport surface according to claim 1, wherein the specific process of step (I) is as follows:
step one, obtaining the optical flow vector (u, v)^T of a moving object; the value is determined by:

[u, v]^T = (A^T A)^{-1} A^T (-b)

the above equation is the least-squares solution of Ad = -b; wherein E represents the brightness of a pixel point; E_x, E_y and E_t respectively represent the gradients of the pixel point along the three directions x, y and t; u and v respectively represent the velocity components of the optical flow in the x and y directions;
step two, obtaining the streakline Q_i of the moving pixels from the optical flow vector; the value is determined by:

Q_i = {x_i(t), y_i(t), u_i, v_i}

wherein the point (x_i(t), y_i(t)) represents the position, in the i-th frame at time t, of a certain particle of the moving object whose initial position is the point q;
step three, calculating the tendency flow Ω_s of the moving target; the value is determined by:

Ω_s = (u_s, v_s)^T

wherein u_s and v_s respectively represent the velocity vectors of the tendency flow in the two directions; taking u_s as an example, its value is found as the least-squares solution of the following system:

u_i = b_1 u_s(k_1) + b_2 u_s(k_2) + b_3 u_s(k_3)

wherein k_j represents the index number of an adjacent pixel and b_j represents the known triangular basis function of the j-th adjacent pixel in the domain;
step four, taking the maximum bounding rectangle of each target extracted by the tendency flow, namely taking, for each target, the rectangular frame enclosed by [x_min, y_min] and [x_max, y_max], to form the moving target area suggestion.
3. The method for detecting and identifying the moving object on the airport surface according to claim 1, wherein the specific process of step (II) is as follows:
step one, the recognition network extracts the features of the moving target, performing on the picture the convolution operation shown in the following formula:

F_l(m, n) = Σ_{k=0}^{K-1} Σ_{i=0}^{I-1} Σ_{j=0}^{J-1} M_k(m+i, n+j) · X_{k,l}(i, j) + b_{kl}

wherein the convolutional layer inputs K feature maps and outputs L feature maps; the required number of convolution kernels is LK and the size of each kernel is I×J; F_l represents the l-th output feature map, M_k the k-th input feature map, X_{k,l} the two-dimensional convolution kernel in the k-th row and l-th column, and b_{kl} the bias value;
the extracted features are weighted and summed and processed using the ReLU (Rectified Linear Units) non-saturating activation function:

σ(x) = max{0, x}

after the activation function processing, the network performs a maximum pooling operation on the feature map to form a new feature map:

Y'_l(m', n') = max_{r∈R} Y_l(m + r, n + r)

wherein Y'_l represents the pooled feature map;
according to the hierarchical arrangement of the recognition network, the convolution and pooling are repeatedly executed to obtain semantic abstraction of two-dimensional format data from low level to high level;
step two, classifying the moving objects by the recognition network: after the above features are connected through the fully connected layer, the feature vector is finally input into a softmax function for classification.
4. The method for detecting and identifying the moving object on the airport surface according to claim 3, characterized in that the airport moving objects are divided into three categories of airplane, automobile and pedestrian; the convolutional neural network has 14 layers in total, comprising 12 convolutional layers and 2 fully connected layers; aiming at the characteristic of the large size differences of airport moving targets, the convolutional neural network uses 5 Inception structures, extracting multi-scale features of the targets with three convolution kernels of 1×1, 3×3 and 5×5 respectively; in order to reduce network overfitting and shorten network training and testing time, the number of feature maps and the size of the convolution kernels of each layer are carefully designed.
5. The method for detecting and identifying the moving object on the airport surface according to claim 1, wherein the method comprises the steps of firstly extracting the area of the moving object in the image by using a tendency flow method, and then constructing a convolutional neural network to identify the moving object in the area proposed by the tendency flow method, so that an efficient and accurate airport moving object detection and identification framework is formed.
CN201711345550.7A 2017-12-15 2017-12-15 A kind of airdrome scene moving object detection recognition methods Pending CN107992899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711345550.7A CN107992899A (en) 2017-12-15 2017-12-15 A kind of airdrome scene moving object detection recognition methods


Publications (1)

Publication Number Publication Date
CN107992899A true CN107992899A (en) 2018-05-04

Family

ID=62038691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711345550.7A Pending CN107992899A (en) 2017-12-15 2017-12-15 A kind of airdrome scene moving object detection recognition methods

Country Status (1)

Country Link
CN (1) CN107992899A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063549A (en) * 2018-06-19 2018-12-21 中国科学院自动化研究所 High-resolution aerial video moving object detection method based on deep neural network
CN109063549B (en) * 2018-06-19 2020-10-16 中国科学院自动化研究所 High-resolution aerial video moving object detection method based on deep neural network
CN109409214A (en) * 2018-09-14 2019-03-01 浙江大华技术股份有限公司 Method and apparatus for classifying a moving target object
CN109598983A (en) * 2018-12-12 2019-04-09 中国民用航空飞行学院 Airport surface photoelectric monitoring and warning system and method
CN109871786A (en) * 2019-01-30 2019-06-11 浙江大学 A standard process detection system for flight ground support operations
CN110008853A (en) * 2019-03-15 2019-07-12 华南理工大学 Pedestrian detection network and model training method, detection method, medium, equipment
CN110458090A (en) * 2019-08-08 2019-11-15 成都睿云物联科技有限公司 Excavator working-state detection method, device, equipment and storage medium
CN114170571A (en) * 2021-12-14 2022-03-11 杭州电子科技大学 A crowd anomaly detection method based on pedestrian self-organization theory

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006098119A (en) * 2004-09-28 2006-04-13 Ntt Data Corp Object detection apparatus, object detection method, and object detection program
CN102722697A (en) * 2012-05-16 2012-10-10 北京理工大学 Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN104504362A (en) * 2014-11-19 2015-04-08 南京艾柯勒斯网络科技有限公司 Face detection method based on convolutional neural network
CN104751492A (en) * 2015-04-17 2015-07-01 中国科学院自动化研究所 Target area tracking method based on dynamic coupling condition random fields
CN105512640A (en) * 2015-12-30 2016-04-20 重庆邮电大学 Method for acquiring people flow on the basis of video sequence
CN106407889A (en) * 2016-08-26 2017-02-15 上海交通大学 Video human body interaction motion identification method based on optical flow graph depth learning model
CN107016690A (en) * 2017-03-06 2017-08-04 浙江大学 The unmanned plane intrusion detection of view-based access control model and identifying system and method
CN107305635A (en) * 2016-04-15 2017-10-31 株式会社理光 Object identifying method, object recognition equipment and classifier training method
CN107316015A (en) * 2017-06-19 2017-11-03 南京邮电大学 A kind of facial expression recognition method of high accuracy based on depth space-time characteristic


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
RAMIN MEHRAN et al.: "A Streakline Representation of Flow in Crowded Scenes", ECCV 2010 *
XIAOFEI WANG: "A classification method based on streak flow for abnormal crowd behaviors", Optik *
WU Qingtian et al.: "A real-time running detection system based on a patrol robot", Journal of Integration Technology (《集成技术》) *
LI Pingqi et al.: "Detection and tracking of moving targets against complex backgrounds", Infrared and Laser Engineering (《红外与激光工程》) *
LIN Sheng et al.: "Airport image target recognition based on convolutional neural networks", Proceedings of the 21st Annual Conference on Computer Engineering and Technology and the 7th Microprocessor Technology Forum *


Similar Documents

Publication Publication Date Title
CN107992899A (en) A kind of airdrome scene moving object detection recognition methods
Wang et al. A comparative study of state-of-the-art deep learning algorithms for vehicle detection
CN110059558B (en) A Real-time Detection Method of Orchard Obstacles Based on Improved SSD Network
CN103530619B (en) Gesture identification method based on a small amount of training sample that RGB-D data are constituted
Yong et al. Human object detection in forest with deep learning based on drone’s vision
CN107633220A (en) A kind of vehicle front target identification method based on convolutional neural networks
CN108510467A (en) SAR image target recognition method based on variable depth shape convolutional neural networks
CN106650690A (en) Night vision image scene identification method based on deep convolution-deconvolution neural network
Kim et al. Multi-task convolutional neural network system for license plate recognition
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN108182388A (en) A kind of motion target tracking method based on image
CN112699967B (en) Remote airport target detection method based on improved deep neural network
CN104657717B (en) A kind of pedestrian detection method based on layering nuclear sparse expression
CN106919902B (en) Vehicle identification and track tracking method based on CNN
CN107784291A (en) target detection tracking method and device based on infrared video
CN102722712A (en) Multiple-scale high-resolution image object detection method based on continuity
CN106408030A (en) SAR image classification method based on middle lamella semantic attribute and convolution neural network
Mansour et al. Automated vehicle detection in satellite images using deep learning
de Carvalho et al. Bounding box-free instance segmentation using semi-supervised iterative learning for vehicle detection
Haider et al. Human detection in aerial thermal imaging using a fully convolutional regression network
CN104778699B (en) A kind of tracking of self adaptation characteristics of objects
CN110533068B (en) Image object identification method based on classification convolutional neural network
Majidizadeh et al. Semantic segmentation of UAV images based on U-NET in urban area
Shustanov et al. A Method for Traffic Sign Recognition with CNN using GPU.
CN107679467B (en) Pedestrian re-identification algorithm implementation method based on HSV and SDALF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180504