CN107992899A - Airport surface moving object detection and recognition method - Google Patents
Airport surface moving object detection and recognition method
- Publication number
- CN107992899A CN107992899A CN201711345550.7A CN201711345550A CN107992899A CN 107992899 A CN107992899 A CN 107992899A CN 201711345550 A CN201711345550 A CN 201711345550A CN 107992899 A CN107992899 A CN 107992899A
- Authority
- CN
- China
- Prior art keywords
- moving object
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a method for detecting and recognizing moving objects on an airport surface. The process is as follows: region proposals for moving targets are first obtained by the streak flow (tendency flow) method, and the targets are then recognized with a convolutional neural network. The method overcomes the incomplete extraction and heavy noise that traditional target extraction methods suffer on targets of widely varying sizes, and compensates for the weakness of deep-learning detection networks on small targets in airport scenes. The recognition network designed by the invention has few convolutional levels, small feature dimensions and a small computational load, yet high accuracy, meeting the demands of airport surface surveillance.
Description
Technical Field
The invention relates to the technical field of digital image processing, and in particular to a method for detecting and recognizing moving objects on an airport surface.
Background
With the rapid development of China's civil aviation transportation industry, the numbers of airplanes, vehicles and personnel at airports have increased rapidly, and the operating environment of the airport surface has become increasingly complex, making airport surface surveillance systems necessary.
Traditional airport surface monitoring relies mainly on surface movement radar, with which large domestic airports such as Beijing Capital and Shanghai Pudong are equipped. However, because surface surveillance radar is expensive to install and maintain, most small and medium-sized airports in China lack it and instead rely on controllers' direct observation and manual operations, which greatly increases the operational risk on the airport surface.
With the development of computer vision, video-based airport surface monitoring has emerged in recent years; it is low in cost, covers a wider area and is therefore more flexible than surveillance radar. Current research on dynamic airport-surface targets focuses mainly on tracking targets of known type, while their detection has received little study. Target detection methods fall into two main categories: methods based on hand-crafted features and methods based on deep networks. Common hand-crafted approaches include the optical flow method, the frame-difference method and ViBe; their accuracy is low and their time cost high, so they are ill-suited to airport surface monitoring. Deep-network detectors such as R-CNN, Faster R-CNN and SSD (Single Shot MultiBox Detector) are accurate and efficient, but by design they perform poorly on small targets, and in particular achieve low accuracy on small far-field targets at airports. Since most objects in an airport scene are fixed, for surveillance purposes dynamic targets are of far greater interest than static ones.
Disclosure of Invention
In view of this, the invention detects the motion information of moving targets with the streak flow (tendency flow) method and designs a deep recognition network to identify them, thereby detecting and recognizing moving targets on the airport surface, including small ones. The invention divides airport moving objects into three categories: airplanes, automobiles and pedestrians.
The technical scheme of the invention is realized as follows:
(I) Acquire region proposals for moving targets with the streak flow method; the proposals take the form of rectangular-box coordinates. The streak flow of a moving target is computed as follows:
(a) First obtain the optical flow vector of the moving target. For a pixel (x, y) in the image, let its brightness at time t be E(x, y, t) and its brightness after a time interval Δt be E(x + Δx, y + Δy, t + Δt). As Δt approaches zero, the brightness of the point is assumed unchanged, which gives the optical flow constraint equation:

$$E_x u + E_y v + E_t = 0$$

where $E_x$, $E_y$ and $E_t$ are the gradients of the pixel brightness along the x, y and t directions, and $u$, $v$ are the velocity components of the optical flow along x and y, i.e. the optical flow vector. Selecting a neighborhood window of size n × n (n > 1) and writing the constraint for each of its pixels 1, 2, …, n² gives a system of equations, which can be written in matrix form:

$$A\,d = -b$$

where each row of $A$ holds one pixel's spatial gradients $(E_x, E_y)$, $d = (u, v)^T$, and $b$ stacks the temporal gradients $E_t$. The least-squares solution, i.e. the optical flow vector, is:

$$d = (A^T A)^{-1} A^T (-b)$$
(b) Determine the streakline $Q_i$ of the moving pixels from the optical flow field. Using the optical flow obtained in step (a), let the point $(x_i(t), y_i(t))$ denote the position at time t of the particle of the moving target emitted in frame i, initially located at point q. A new particle is initialized at q in every frame, and each particle on the streakline is advected by the flow:

$$x_i(t+1) = x_i(t) + u\big(x_i(t), y_i(t)\big), \qquad y_i(t+1) = y_i(t) + v\big(x_i(t), y_i(t)\big)$$

The streakline is the set of these particles:

$$Q_i = \{x_i(t),\ y_i(t),\ u_i,\ v_i\}$$

where $u_i$ and $v_i$ are the velocity components of the i-th particle.
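A hypothetical sketch of this seeding-and-advection procedure is given below; the per-frame flow representation and helper name are assumptions.

```python
import numpy as np

def advect_streakline(q, flows):
    """Trace the streakline Q_i seeded at point q: in every frame a new
    particle is initialized at q and all existing particles are advected
    by that frame's optical flow field.

    q: (x, y) seed point; flows: list of (u, v) pairs, each an (H, W)
    velocity field for one frame. Returns the list of particle positions.
    """
    H, W = flows[0][0].shape
    particles = []
    for u, v in flows:
        particles.append(np.array(q, dtype=float))       # new particle at q
        for p in particles:
            x = int(np.clip(np.rint(p[0]), 0, W - 1))
            y = int(np.clip(np.rint(p[1]), 0, H - 1))
            p[0] += u[y, x]   # x_i(t+1) = x_i(t) + u(x_i(t), y_i(t))
            p[1] += v[y, x]   # y_i(t+1) = y_i(t) + v(x_i(t), y_i(t))
    return particles
```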
(c) Compute the streak flow $\Omega_s$ of the moving target, defined as $\Omega_s = (u_s, v_s)^T$, where $u_s$ and $v_s$ are the velocity components of the streak flow in the two directions. Let the set $U = [u_i]$ collect the particle velocities, and regard each $u_i$ as a linear interpolation of the three neighboring pixels:

$$u_i = b_1 u_s(k_1) + b_2 u_s(k_2) + b_3 u_s(k_3)$$

where $k_j$ is the index of the j-th neighboring pixel and $b_j$ the triangular basis function of that pixel. Applying this to all points in U yields one linear system:

$$B u_s = U$$

where $B$ is composed of the elements $b_j$; $u_s$ is solved by least squares. $v_s$ is obtained in the same way, giving the streak flow $\Omega_s = (u_s, v_s)^T$.
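A sketch of this least-squares recovery for one component; the layout of the weight matrix B is an illustrative assumption.

```python
import numpy as np

def streak_flow_component(B, U):
    """Recover one streak flow component: each particle velocity u_i is a
    weighted sum of the flow at its three neighboring pixels,
    u_i = b1*u_s(k1) + b2*u_s(k2) + b3*u_s(k3), so stacking all
    particles gives B @ u_s = U, solved in the least-squares sense.

    B: (num_particles, num_pixels) weight matrix with three non-zero
    triangular-basis weights per row; U: (num_particles,) velocities.
    """
    u_s, *_ = np.linalg.lstsq(B, U, rcond=None)
    return u_s  # repeat with the v velocities for v_s; Omega_s = (u_s, v_s)
```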
(d) For each target extracted from the streak flow, take its bounding rectangle, i.e. the rectangle spanned by $[x_{min}, y_{min}]$ and $[x_{max}, y_{max}]$; these rectangles form the moving-target region proposals.
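A possible sketch of this boxing step, assuming the streak flow has first been thresholded into a binary motion mask (the mask and helper names are assumptions):

```python
import numpy as np
from scipy import ndimage

def region_proposals(motion_mask):
    """Box each connected moving region of the streak-flow motion mask
    with [x_min, y_min] x [x_max, y_max], giving the region proposals.

    motion_mask: (H, W) boolean array, True where the streak flow
    indicates motion. Returns a list of (x_min, y_min, x_max, y_max).
    """
    labels, num_targets = ndimage.label(motion_mask)
    boxes = []
    for t in range(1, num_targets + 1):
        ys, xs = np.where(labels == t)
        boxes.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return boxes
```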
(II) Recognize the moving targets captured by the streak flow method with a convolutional neural network.
The convolutional neural network of the invention has 14 layers in total: 12 convolutional layers and 2 fully connected layers. To handle the large size differences among airport moving targets, the network uses 5 Inception structures, extracting multi-scale features with 1 × 1, 3 × 3 and 5 × 5 convolution kernels. To reduce overfitting and shorten training and testing time, the number of feature maps and the kernel size of each layer are carefully designed.
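As an illustration only, a minimal PyTorch sketch of such a multi-scale block is given below; the patent does not disclose channel counts, so every size here is an assumption.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3 and 5x5 convolutions whose outputs are
    concatenated along the channel axis, as described above.
    Channel counts are illustrative assumptions."""
    def __init__(self, in_ch, ch1, ch3, ch5):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, ch1, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, ch3, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, ch5, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # padding keeps the spatial size equal on all branches, so the
        # multi-scale feature maps can be concatenated on channels
        return torch.cat([self.relu(self.b1(x)),
                          self.relu(self.b3(x)),
                          self.relu(self.b5(x))], dim=1)
```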
the identification steps of the identification network to the moving target are as follows:
(a) The recognition network extracts the features of the moving target. The convolution applied to the picture is:

$$F_l(m,n) = \sum_{k=0}^{K-1} M_k(m,n) * X_{k,l}(m,n) + b_l = \sum_{k=0}^{K-1}\sum_{i=0}^{I-1}\sum_{j=0}^{J-1} M_k(m+i,\, n+j)\, X_{k,l}(i,j) + b_l$$

The convolutional layer takes K feature maps as input and outputs L feature maps, requiring L × K convolution kernels of size I × J; $F_l$ denotes the l-th output feature map, $M_k$ the k-th input feature map, $X_{k,l}$ the two-dimensional convolution kernel in row k, column l of the kernel array, and $b_l$ the bias of the l-th output map;
the extracted features are weighted and summed and processed using a ReLU (Rectified Linear Units) unsaturated activation function:
σ(x)=max{0,x}
After the activation function, the network applies max pooling to the feature map, producing a new feature map:

$$Y'_l(m', n') = \max_{r \in R} Y_l(m + r,\ n + r)$$

where $Y'_l$ denotes the pooled feature map;
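A compact NumPy sketch of the pooling step, assuming non-overlapping R × R windows (i.e. stride equal to the window size):

```python
import numpy as np

def max_pool(Y, R):
    """Non-overlapping R x R max pooling of one feature map Y,
    matching Y'(m', n') = max over the R x R window."""
    H, W = Y.shape
    H2, W2 = H // R, W // R
    # cut into R x R tiles and take the maximum inside each tile
    return Y[:H2 * R, :W2 * R].reshape(H2, R, W2, R).max(axis=(1, 3))
```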
Following the layer arrangement of the recognition network of the present invention, the convolution and pooling operations above are executed repeatedly, yielding semantic abstractions of the two-dimensional data from low level to high level.
(b) The recognition network classifies the moving target: the features are joined through the fully connected layers, and the resulting feature vector is finally fed into a softmax function, which classifies airport moving objects into the three categories of airplanes, automobiles and pedestrians.
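The final classification amounts to a softmax over the three class scores; a small NumPy sketch follows (the logits are made-up values):

```python
import numpy as np

def softmax(z):
    """Map the final feature vector of class scores to probabilities
    over the three classes (airplane, automobile, pedestrian)."""
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(np.array([2.1, 0.3, -1.0]))  # e.g. ~[0.83, 0.14, 0.04]
```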
Advantageous effects: the method first extracts the regions of moving targets in the image with the streak flow method, then constructs a convolutional neural network to recognize the moving targets within the regions proposed by the streak flow method, thereby forming an efficient and accurate framework for airport moving-target detection and recognition.
Drawings
Fig. 1 is a flow chart of region proposal generation by the streak flow method in the embodiment of the present invention.
Fig. 2 is an Inception structure diagram in an embodiment of the present invention.
Fig. 3 is a diagram illustrating the detection effect of a moving object according to an embodiment of the present invention.
Fig. 4 is a diagram of an identification network structure in an embodiment of the present invention.
Claims (5)
1. A method for detecting and recognizing moving targets on an airport surface, characterized in that the specific process is as follows:
(I) acquiring region proposals for the moving targets with the streak flow method;
(II) recognizing the moving targets with a convolutional neural network.
2. The method for detecting and recognizing moving targets on an airport surface according to claim 1, wherein the specific process of step (I) is as follows:
Step 1: obtain the optical flow vector $(u, v)$ of the moving target, given by:

$$\begin{bmatrix} u \\ v \end{bmatrix} = (A^T A)^{-1} A^T (-b)$$

which is the least-squares solution of $E_x u + E_y v + E_t = 0$; $E$ denotes the brightness of a pixel; $E_x$, $E_y$ and $E_t$ are its gradients along the x, y and t directions; $u$ and $v$ are the velocity components of the optical flow along x and y;
Step 2: obtain the streakline $Q_i$ of the moving pixels from the optical flow vectors, given by:

$$Q_i = \{x_i(t),\ y_i(t),\ u_i,\ v_i\}$$

where the point $(x_i(t), y_i(t))$ is the position at time t of the particle of the moving target emitted in frame i, with initial position q;
Step 3: compute the streak flow $\Omega_s$ of the moving target, given by:

$$\Omega_s = (u_s, v_s)^T$$

where $u_s$ and $v_s$ are the velocity components of the streak flow in the two directions; taking $u_s$ as an example, it is found as the least-squares solution of the system formed by:

$$u_i = b_1 u_s(k_1) + b_2 u_s(k_2) + b_3 u_s(k_3)$$

where $k_j$ is the index of the j-th neighboring pixel and $b_j$ the triangular basis function of that pixel;
Step 4: for each target extracted from the streak flow, take its bounding rectangle, i.e. the rectangle spanned by $[x_{min}, y_{min}]$ and $[x_{max}, y_{max}]$; these rectangles form the moving-target region proposals.
3. The method for detecting and recognizing moving targets on an airport surface according to claim 1, wherein the specific process of step (II) is as follows:
Step 1: the recognition network extracts the features of the moving target; the convolution applied to the picture is:

$$F_l(m,n) = \sum_{k=0}^{K-1} M_k(m,n) * X_{k,l}(m,n) + b_l = \sum_{k=0}^{K-1}\sum_{i=0}^{I-1}\sum_{j=0}^{J-1} M_k(m+i,\, n+j)\, X_{k,l}(i,j) + b_l$$

the convolutional layer takes K feature maps as input and outputs L feature maps, requiring L × K convolution kernels of size I × J; $F_l$ denotes the l-th output feature map, $M_k$ the k-th input feature map, $X_{k,l}$ the two-dimensional convolution kernel in row k, column l, and $b_l$ the bias of the l-th output map;
the extracted features are weighted and summed and processed using a ReLU (Rectified Linear Units) unsaturated activation function:
σ(x)=max{0,x}
after the activation function, the network applies max pooling to the feature map, producing a new feature map:

$$Y'_l(m', n') = \max_{r \in R} Y_l(m + r,\ n + r)$$

where $Y'_l$ denotes the pooled feature map;
according to the hierarchical arrangement of the recognition network, the above convolution and pooling are executed repeatedly to obtain semantic abstractions of the two-dimensional data from low level to high level;
Step 2: the recognition network classifies the moving target: the features are joined through the fully connected layers, and the resulting feature vector is finally fed into a softmax function for classification.
4. The method for detecting and recognizing moving targets on an airport surface according to claim 3, characterized in that the airport moving targets are divided into three categories: airplanes, automobiles and pedestrians; the convolutional neural network has 14 layers in total, comprising 12 convolutional layers and 2 fully connected layers; to handle the large size differences among airport moving targets, the network uses 5 Inception structures, extracting multi-scale features with 1 × 1, 3 × 3 and 5 × 5 convolution kernels; to reduce overfitting and shorten training and testing time, the number of feature maps and the kernel size of each layer are carefully designed.
5. The method for detecting and recognizing moving targets on an airport surface according to claim 1, characterized in that the regions of moving targets in the image are first extracted with the streak flow method, and a convolutional neural network is then constructed to recognize the moving targets within the proposed regions, forming an efficient and accurate framework for airport moving-target detection and recognition.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711345550.7A CN107992899A (en) | 2017-12-15 | 2017-12-15 | Airport surface moving object detection and recognition method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711345550.7A CN107992899A (en) | 2017-12-15 | 2017-12-15 | Airport surface moving object detection and recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107992899A (en) | 2018-05-04 |
Family
ID=62038691
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711345550.7A Pending CN107992899A (en) | Airport surface moving object detection and recognition method
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107992899A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006098119A (en) * | 2004-09-28 | 2006-04-13 | Ntt Data Corp | Object detector, object detection method, and object detection program |
CN102722697A (en) * | 2012-05-16 | 2012-10-10 | 北京理工大学 | Unmanned aerial vehicle autonomous navigation landing visual target tracking method |
CN104504362A (en) * | 2014-11-19 | 2015-04-08 | 南京艾柯勒斯网络科技有限公司 | Face detection method based on convolutional neural network |
CN104751492A (en) * | 2015-04-17 | 2015-07-01 | 中国科学院自动化研究所 | Target area tracking method based on dynamic coupling condition random fields |
CN105512640A (en) * | 2015-12-30 | 2016-04-20 | 重庆邮电大学 | Method for acquiring people flow on the basis of video sequence |
CN107305635A (en) * | 2016-04-15 | 2017-10-31 | 株式会社理光 | Object identifying method, object recognition equipment and classifier training method |
CN106407889A (en) * | 2016-08-26 | 2017-02-15 | 上海交通大学 | Video human body interaction motion identification method based on optical flow graph depth learning model |
CN107016690A (en) * | 2017-03-06 | 2017-08-04 | 浙江大学 | The unmanned plane intrusion detection of view-based access control model and identifying system and method |
CN107316015A (en) * | 2017-06-19 | 2017-11-03 | 南京邮电大学 | A kind of facial expression recognition method of high accuracy based on depth space-time characteristic |
Non-Patent Citations (5)
Title |
---|
RAMIN MEHRAN et al.: "A Streakline Representation of Flow in Crowded Scenes", ECCV 2010 *
XIAOFEI WANG: "A classification method based on streak flow for abnormal crowd behaviors", OPTIK *
WU Qingtian et al.: "Real-time running detection system based on a patrol robot", Journal of Integration Technology *
LI Pingqi et al.: "Detection and tracking of moving targets against complex backgrounds", Infrared and Laser Engineering *
LIN Sheng et al.: "Airport image target recognition based on convolutional neural networks", Proceedings of the 21st Annual Conference on Computer Engineering and Technology and the 7th Microprocessor Technology Forum *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063549A (en) * | 2018-06-19 | 2018-12-21 | 中国科学院自动化研究所 | High-resolution based on deep neural network is taken photo by plane video moving object detection method |
CN109063549B (en) * | 2018-06-19 | 2020-10-16 | 中国科学院自动化研究所 | High-resolution aerial video moving target detection method based on deep neural network |
CN109409214A (en) * | 2018-09-14 | 2019-03-01 | 浙江大华技术股份有限公司 | The method and apparatus that the target object of a kind of pair of movement is classified |
CN109598983A (en) * | 2018-12-12 | 2019-04-09 | 中国民用航空飞行学院 | A kind of airdrome scene optoelectronic monitoring warning system and method |
CN109871786A (en) * | 2019-01-30 | 2019-06-11 | 浙江大学 | A kind of flight ground safeguard job specification process detection system |
CN110008853A (en) * | 2019-03-15 | 2019-07-12 | 华南理工大学 | Pedestrian detection network and model training method, detection method, medium, equipment |
CN110458090A (en) * | 2019-08-08 | 2019-11-15 | 成都睿云物联科技有限公司 | Working state of excavator detection method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108510467B (en) | SAR image target identification method based on depth deformable convolution neural network | |
CN108171112B (en) | Vehicle identification and tracking method based on convolutional neural network | |
CN111598030B (en) | Method and system for detecting and segmenting vehicle in aerial image | |
CN107992899A (en) | Airport surface moving object detection and recognition method | |
CN110728200B (en) | Real-time pedestrian detection method and system based on deep learning | |
Jiao et al. | A configurable method for multi-style license plate recognition | |
CN111695514B (en) | Vehicle detection method in foggy days based on deep learning | |
US20180307911A1 (en) | Method for the semantic segmentation of an image | |
WO2020020472A1 (en) | A computer-implemented method and system for detecting small objects on an image using convolutional neural networks | |
CN106683119B (en) | Moving vehicle detection method based on aerial video image | |
CN111915583B (en) | Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene | |
Mansour et al. | Automated vehicle detection in satellite images using deep learning | |
Shi et al. | Weather recognition based on edge deterioration and convolutional neural networks | |
Zang et al. | Traffic lane detection using fully convolutional neural network | |
Jaiswal et al. | Comparative analysis of CCTV video image processing techniques and application: a survey | |
CN108345835B (en) | Target identification method based on compound eye imitation perception | |
Liu et al. | Vehicle detection from aerial color imagery and airborne LiDAR data | |
Harianto et al. | Data augmentation and faster rcnn improve vehicle detection and recognition | |
CN112347967B (en) | Pedestrian detection method fusing motion information in complex scene | |
Ding et al. | Two-stage Framework for Specialty Vehicles Detection and Classification: Toward Intelligent Visual Surveillance of Airport Surface | |
CN112036246B (en) | Construction method of remote sensing image classification model, remote sensing image classification method and system | |
Majidizadeh et al. | Semantic segmentation of UAV images based on U-NET in urban area | |
Priyadharshini et al. | Vehicle data aggregation from highway video of madurai city using convolution neural network | |
Tayo et al. | Vehicle license plate recognition using edge detection and neural network | |
CN114066855A (en) | Crack segmentation and tracking method based on lightweight model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180504 |