CN109359545B - Cooperative monitoring method and device under complex low-altitude environment - Google Patents

Cooperative monitoring method and device under complex low-altitude environment

Info

Publication number
CN109359545B
CN109359545B (application CN201811094761.2A)
Authority
CN
China
Prior art keywords
monitoring
image
moving target
videos
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811094761.2A
Other languages
Chinese (zh)
Other versions
CN109359545A (en)
Inventor
曹先彬 (Cao Xianbin)
甄先通 (Zhen Xiantong)
李岩 (Li Yan)
张安然 (Zhang Anran)
胡宇韬 (Hu Yutao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN201811094761.2A
Publication of CN109359545A
Application granted
Publication of CN109359545B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F18/24147 - Distances to closest patterns, e.g. nearest neighbour classification

Abstract

The invention discloses a cooperative monitoring method and device for a complex low-altitude environment, and belongs to the field of aviation surveillance. The device comprises unmanned aerial vehicle (UAV) unattended front-end equipment, a remote control center, a data processing module, an algorithm module and a processing center. First, the monitoring videos from the M horizontal monitoring areas and the N vertical monitoring areas are divided into a training set and a test set. Each training video is then selected in turn as the current video, 6 frames are extracted as its original basic images Io1~Io6, and these are processed to obtain frame difference images. The basic images and the frame difference images are fused to obtain the basic image features and motion features, which are spliced and classified to obtain the result label of the moving target. This process is repeated until result labels of the moving target are obtained for all training-set videos. The test-set videos are passed through the trained fusion model to output result labels of the moving target, and the result labels of all test-set videos are voted on to determine whether the moving target is present. The invention reduces video-frame redundancy and improves classification accuracy and acquisition efficiency.

Description

Cooperative monitoring method and device under complex low-altitude environment
Technical Field
The invention belongs to the field of aviation monitoring, and particularly relates to a cooperative monitoring method and device under a complex low-altitude environment.
Background
A complex low-altitude environment is one with low flight altitudes, diverse aircraft and complicated flight areas. Monitoring such an environment is easily affected by terrain, meteorological factors and obstacles, and existing air traffic control surveillance technology can hardly guarantee monitoring capability for low-altitude flight. In addition, traditional low-altitude monitoring systems are expensive to build and maintain, which makes them difficult to deploy widely in practice.
With the country's recent continued opening of low-altitude airspace, drones are increasingly used to assist in low-altitude surveillance tasks. A drone collects image data of the monitored scene through its onboard camera, analyzes and processes the data intelligently using computer vision technology, and judges the conditions in the monitored scene, thereby realizing autonomous inspection.
In a complex low-altitude environment the flight area is complicated and obstacles are numerous, so the data collected by a single aircraft during flight are insufficient. With the development of airspace surveillance technology, the monitoring mode has evolved from independent operation to cooperative operation, and the cooperation of multiple aircraft offers a feasible scheme for monitoring in a complex low-altitude environment. Moreover, in such an environment the aircraft fly at low altitude, and image data collected by aircraft at different positions have different scales, so the features of moving targets (such as people, vehicles and birds) in the videos shot by the drones are difficult to extract and classify. These problems pose a severe challenge for monitoring moving targets in a complex low-altitude environment.
Disclosure of Invention
Aiming at the difficulty of coordinating multiple aircraft and of extracting moving-target features in existing complex low-altitude environments, the invention provides a cooperative monitoring method and device for a complex low-altitude environment, which improve classification accuracy.
The cooperative monitoring method comprises the following specific steps:
step one, aiming at a complex low-altitude environment, M horizontal monitoring areas are divided according to terrain, and N vertical monitoring areas are divided according to an acquisition range.
The vertical monitoring areas are divided according to flight altitude; each horizontal area has N vertically distributed unmanned aerial vehicles, so there are M × N unmanned aerial vehicles in the whole monitoring range. For each moving target, M monitoring videos are acquired across the horizontal monitoring areas, and there are N monitoring videos within each horizontal monitoring area;
step two, aiming at a certain moving target, dividing M monitoring videos or N monitoring videos in each horizontal monitoring area into a training set and a testing set respectively;
for the M monitoring videos, or the N monitoring videos in a given horizontal monitoring area, 80% are taken as the training set and 20% as the test set;
Step three, sequentially selecting each video in the training set as the current video, dividing the current video frame by frame into 6 segments, randomly extracting 1 frame from each segment after excluding the segment's first frame, and taking the extracted 6 frames as the original basic images Io1~Io6 of the current video.
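The sampling scheme of step three can be sketched in a few lines of Python; the sketch below assumes OpenCV for video decoding, and the function and variable names are illustrative rather than taken from the invention. It splits a video into 6 equal segments and draws one random frame per segment, never the segment's first frame, keeping the preceding frame for the frame-difference step that follows.

import random
import cv2  # OpenCV, assumed available for video decoding

def sample_basic_frames(video_path, num_segments=6):
    # decode all frames of the video
    cap = cv2.VideoCapture(video_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    seg_len = len(frames) // num_segments          # roughly L/6 frames per segment
    basic, previous = [], []
    for s in range(num_segments):
        start = s * seg_len
        # skip the first frame of the segment so a predecessor always exists
        idx = random.randint(start + 1, start + seg_len - 1)
        basic.append(frames[idx])                  # basic image Io(s+1)
        previous.append(frames[idx - 1])           # predecessor used later for Id(s+1)
    return basic, previous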
Step four, processing the original basic images of the current video to obtain frame difference images.

Each basic image Ioi, i = 1, 2, 3, 4, 5, 6, is differenced with the immediately preceding frame in its own segment to obtain the frame difference images Id1~Id6:

Idi = Ioi - I'oi

where I'oi is the frame immediately preceding the basic image Ioi in the same segment.
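A minimal sketch of this frame-difference computation, continuing the assumptions above; the patent only specifies differencing each basic image with its preceding frame, so using the absolute pixel-wise difference is an assumption made here for illustration.

import numpy as np

def frame_difference(basic, previous):
    # absolute difference suppresses the static background and keeps the
    # moving-object contours; int16 avoids uint8 wrap-around before clipping back
    return [np.abs(b.astype(np.int16) - p.astype(np.int16)).astype(np.uint8)
            for b, p in zip(basic, previous)]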
Step five, obtaining basic image features and motion features from the basic images and the frame difference images through a VGG network fusion model.

The VGG network comprises sixteen convolutional layers, sixteen pooling layers, three fully-connected layers and one softmax layer, with each convolutional layer followed by one pooling layer.
The basic images Io1~Io6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers in turn, yielding features fo1~fo6, each of size 1 × 1000. The fused basic image feature fo is obtained by summation:

fo=fo1+fo2+...+fo6

At the same time, the frame difference images Id1~Id6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers, yielding features fd1~fd6, each of size 1 × 1000. The fused motion feature fd is obtained by summation:

fd=fd1+fd2+...+fd6
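The summation fusion of step five can be illustrated with a stock torchvision VGG-16 standing in for the VGG network described above (whose exact layer counts differ); PyTorch, the 224 × 224 input size and the function name are assumptions for illustration, not part of the invention. The 1 × 1000 output of the final fully-connected stage serves as the per-frame feature, the same weights are shared by all six frames, and the six features are fused by summation; the same routine yields fo when fed the six basic images and fd when fed the six frame difference images (shown in inference mode for brevity).

import torch
import torchvision.models as models

vgg = models.vgg16()   # one network, weights shared across all six frames
vgg.eval()

def fused_feature(frames_tensor):
    """frames_tensor: tensor of shape (6, 3, 224, 224) holding the six frames."""
    with torch.no_grad():
        per_frame = vgg(frames_tensor)   # (6, 1000): one 1 x 1000 feature per frame
    return per_frame.sum(dim=0)          # summation fusion -> fo or fd, shape (1000,)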
Step six, splicing the fused basic image feature fo and the motion feature fd, and performing binary classification through the softmax layer of the fusion model to obtain the result label of the moving target.

The two features fo and fd, each of size 1 × 1000, are fused by directly concatenating them into a 1 × 2000 feature F:

F=Concatenate(fo,fd)

The spliced feature F is passed through the softmax layer to obtain the binary classification probability of the moving target, and the result label of the moving target is finally obtained from this probability.
Step seven, returning to step three and selecting the next video in the training set for binary classification training through the fusion model, until result labels of the moving target are obtained for all training-set videos.

For a given moving target, the binary result labels of the M horizontal monitoring videos are "present" or "absent", as are the binary result labels of the N monitoring videos within each horizontal monitoring area.

Step eight, passing each video in the test set through the trained fusion model and outputting the result label of the moving target.

Step nine, voting on the result labels of all the test-set videos to finally determine whether the moving target is present.
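The voting of step nine amounts to a majority vote over the per-video result labels; a minimal sketch, assuming labels are encoded as 1 for "present" and 0 for "absent":

from collections import Counter

def vote(result_labels):
    """result_labels: list of 0/1 labels, one per test-set video."""
    return Counter(result_labels).most_common(1)[0][0]   # label with the most votes wins

# e.g. vote([1, 1, 0, 1]) -> 1: the moving target is judged to be present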
The cooperative monitoring device comprises unmanned aerial vehicle unattended front-end equipment, a remote control center, a data processing module, an algorithm module and a processing center.
The remote control center sets the UAV's flight period, flight altitude and flight time, and determines the type of video data to be collected. The UAV unattended front-end equipment collects videos of moving targets in the complex low-altitude environment through its onboard camera according to commands issued by the ground remote control center, and sends the collected videos to the data processing module for classification. The algorithm module computes the original basic images and frame difference images and trains the VGG network fusion model to obtain the binary classification probability and the resulting label of the moving target, and the processing center feeds the classification result back to monitoring personnel for follow-up work.
The invention has the advantages that:
1) the cooperative monitoring method under the complex low-altitude environment solves the problem that moving target features (such as moving targets like people, vehicles, birds and the like) in the video are not easy to extract, reduces the problem of video frame redundancy, and improves the classification precision.
2) The cooperative monitoring method under the complex low-altitude environment solves the problem that the aircraft is difficult to acquire pictures in the complex low-altitude environment by dividing the monitoring area, and improves the acquisition efficiency.
3) The cooperative monitoring device under the complex low-altitude environment effectively monitors the environment through the cooperation of the UAVs and the back-end processing modules, improving monitoring efficiency.
Drawings
FIG. 1 is a flow chart of a cooperative monitoring method under a complex low-altitude environment according to the present invention;
FIG. 2 is a diagram of a cooperative monitoring apparatus under a complex low-altitude environment according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention provides a cooperative monitoring method and a cooperative monitoring device under a complex low-altitude environment, which are used for classifying moving targets in videos shot by an unmanned aerial vehicle and have great significance for subsequent tasks such as target detection and tracking. The specific process is as follows: first, a horizontal monitoring area and a vertical monitoring area are divided. And then, acquiring a video to be classified, extracting a basic image, and processing the basic image to obtain a frame difference image. And further extracting a set of image basic features and motion features from the basic image and the frame difference image through a convolutional neural network. And then splicing and classifying the fused image basic features and motion features. And finally voting the classification results of the horizontal monitoring area and the vertical monitoring area.
As shown in fig. 1, the specific steps are as follows:
step one, aiming at a complex low-altitude environment, M horizontal monitoring areas are divided according to terrain, and N vertical monitoring areas are divided according to an acquisition range.
In the complex low-altitude environment obstacles are numerous, and panoramic data cannot be acquired by a single UAV. The horizontal monitoring range is divided into M areas according to the terrain, and M UAVs monitor the M areas respectively. The M channels of image data obtained in the horizontal monitoring areas cover different areas within the monitoring range.
The vertical monitoring areas are divided according to flight altitude, since UAVs of the same model collect data with different pixel coverage at different altitudes. Each horizontal area has N vertically distributed UAVs, so there are M × N UAVs in the whole monitoring range.
For each moving target, M monitoring videos are acquired across the horizontal monitoring areas, and there are N monitoring videos within each horizontal monitoring area;
step two, aiming at a certain moving target, dividing M monitoring videos or N monitoring videos in each horizontal monitoring area into a training set and a testing set respectively;
and acquiring videos to be classified for each unmanned aerial vehicle, and classifying and sorting data according to the condition of the moving target. Each surveillance zone is only classified into two categories, i.e., occupied and unoccupied, and occupied and unoccupied. If there are i moving targets (people, cars, birds, etc.) in the surveillance area, training the videos of the unmanned aerial vehicle collection place i times respectively to obtain i classification results of the two classification places.
For each training, 80% of classified video data are taken as a training set, and 20% are taken as a testing set.
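A minimal sketch of this data organisation, under the assumption that each video carries a 0/1 "present" flag per moving-target type; the 80/20 split is performed independently for each of the i binary tasks, and the names and layout are illustrative only.

import random

def split_videos(videos, train_ratio=0.8):
    """videos: list of (video_path, label) pairs for one target type and one area."""
    shuffled = videos[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]   # 80% training set, 20% test set

# one split per moving-target type: i target types -> i independent binary tasks
# train_set, test_set = split_videos(area_videos["person"])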
Step three, sequentially selecting each video in the training set as the current video, dividing the current video frame by frame into 6 segments, randomly extracting 1 frame from each segment after excluding the segment's first frame, and taking the extracted 6 frames as the original basic images Io1~Io6 of the current video.
The classified video is divided frame by frame into 6 segments; if the original video has L frames, each segment contains L/6 frames. One frame is then randomly extracted from each segment (never the first frame of the segment, in preparation for computing the frame difference images later), and the 6 extracted frames serve as the original input basic images Io1~Io6 of each video.
Step four, processing the original basic images of the current video to obtain frame difference images.

When extracting the motion features, each of the 6 basic images Ioi, i = 1, 2, 3, 4, 5, 6, is differenced with the immediately preceding frame in its own segment to obtain the frame difference images Id1~Id6:

Idi = Ioi - I'oi

where I'oi is the frame immediately preceding the basic image Ioi in the same segment.
The frame difference image is obtained by making a difference between two frames, so that the influence of a static background is removed, and the contour characteristic of the moving object is obtained.
Step five, obtaining basic image features and motion features from the basic images and the frame difference images through a VGG network fusion model.
For each training sample, i.e. each video, the 6 frames Io1~Io6 are simultaneously sent to a convolutional neural network to extract basic image features, with the network parameters shared across the 6 streams. The basic convolutional neural network used for each frame in this embodiment is a VGG network. The VGG network comprises sixteen convolutional layers, sixteen pooling layers, three fully-connected layers and a softmax layer, with each convolutional layer followed by a pooling layer; the softmax layer outputs the classification result. The VGG network takes 3-channel RGB images as input and outputs the probabilities of the 2 classes.
The basic images Io1~Io6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers in turn, yielding 6 basic image features fo1~fo6, each of size 1 × 1000. The fused basic image feature fo is obtained by summation:

fo=fo1+fo2+...+fo6

At the same time, the frame difference images Id1~Id6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers, yielding motion features fd1~fd6, each of size 1 × 1000. The fused motion feature fd is obtained by summation:

fd=fd1+fd2+...+fd6
The fused features fo and fd take into account the influence of images at different points in the time series on the classification.
Step six, splicing the fused basic image feature fo and the motion feature fd, and performing binary classification through the softmax layer of the fusion model to obtain the result label of the moving target.

The basic image feature fo and the motion feature fd, each of size 1 × 1000, are fused by directly concatenating them into a 1 × 2000 feature F:

F=Concatenate(fo,fd)

The spliced feature F is passed through the softmax layer to obtain the binary classification probability of the moving target, and the result label of the moving target is finally obtained from this probability.
Step seven, returning to step three and selecting the next video in the training set for binary classification training through the fusion model, until result labels of the moving target are obtained for all training-set videos.
For a given moving target, the binary result labels of the M horizontal monitoring videos are "present" or "absent", as are the binary result labels of the N monitoring videos within each horizontal monitoring area.
Step eight, passing each video in the test set through the trained fusion model and outputting the result label of the moving target.
Step nine, voting on the result labels of all the test-set videos to finally determine whether the moving target is present.
In the horizontal direction there are M areas, each with i binary classification results (for people, vehicles, birds, etc.), and the N corresponding UAVs in the vertical direction likewise each have i binary classification results. For a single moving-target type (person, vehicle, bird, etc.), voting results are obtained in the horizontal and the vertical direction respectively. To make the results more robust, the number of training runs is increased: training is performed 10 times in each direction, with one vote cast per area after each run, giving 20 votes across the horizontal and vertical directions. The votes are counted, and the majority result determines whether the moving target is present in the area.
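As an illustrative sketch of this voting rule (the 0/1 vote encoding and the list layout are assumptions, not specified by the patent), each training run contributes one vote per direction, and the 10 horizontal plus 10 vertical votes are pooled before taking the majority:

def aggregate_votes(horizontal_votes, vertical_votes):
    """Each argument: list of 0/1 labels, one per training run in that direction."""
    all_votes = list(horizontal_votes) + list(vertical_votes)   # e.g. 10 + 10 = 20 votes
    return 1 if sum(all_votes) > len(all_votes) / 2 else 0      # majority decides presence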
A cooperative monitoring device under a complex low-altitude environment is shown in fig. 2 and comprises unmanned aerial vehicle unattended front-end equipment, a ground remote control center, a data processing module, an algorithm module and a processing center.
The UAV unattended front-end equipment collects videos of moving targets in the complex low-altitude environment through its onboard camera.
The remote control center sets the UAV's flight period, flight altitude and flight time, and determines the type of video data to be collected.
The data processing module classifies the collected videos.
The algorithm module, which implements the cooperative monitoring method, computes the original basic images and frame difference images and trains the VGG network fusion model to obtain the binary classification probability and the resulting label of the moving target.
The processing center feeds the classification result back to monitoring personnel for follow-up work.

Claims (4)

1. The cooperative monitoring method under the complex low-altitude environment is characterized by comprising the following specific steps:
the method comprises the following steps that firstly, M horizontal monitoring areas are divided according to terrain according to a complex low-altitude environment, and N vertical monitoring areas are divided according to an acquisition range;
the vertical monitoring area is divided according to the flying height, each horizontal area is provided with N unmanned aerial vehicles flying vertically, and M × N unmanned aerial vehicles are arranged in the whole monitoring range; each moving object can acquire M monitoring videos in a horizontal monitoring area, and N monitoring videos are arranged in each horizontal monitoring area;
step two, aiming at a certain moving target, dividing M monitoring videos or N monitoring videos in each horizontal monitoring area into a training set and a testing set respectively;
step three, sequentially selecting each video in the training set as a current video, dividing the current video into 6 sections frame by frame, randomly extracting 1 frame after removing a first frame from each section of video, and taking the extracted 6 frames of images as an original basic image I of the current videoo1~Io6
step four, processing the original basic images of the current video to obtain frame difference images;

each basic image Ioi, i = 1, 2, 3, 4, 5, 6, is differenced with the immediately preceding frame in its own segment to obtain the frame difference images Id1~Id6:

Idi = Ioi - I'oi

where I'oi is the frame immediately preceding the basic image Ioi in the same segment;
step five, obtaining image basic characteristics and motion characteristics of the basic image and the frame difference image through a VGG network fusion model;
the basic images Io1~Io6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers in turn, yielding features fo1~fo6, each of size 1 × 1000; the fused basic image feature fo is obtained by summation:

fo=fo1+fo2+...+fo6

at the same time, the frame difference images Id1~Id6 are input into the VGG network and pass through the convolutional layers, pooling layers and fully-connected layers, yielding features fd1~fd6, each of size 1 × 1000; the fused motion feature fd is obtained by summation:

fd=fd1+fd2+...+fd6
step six, splicing the fused basic image feature fo and the motion feature fd, and performing binary classification through the softmax layer of the fusion model to obtain the result label of the moving target;

the two features fo and fd, each of size 1 × 1000, are fused by directly concatenating them into a 1 × 2000 feature F:

F=Concatenate(fo,fd)

the spliced feature F is passed through the softmax layer to obtain the binary classification probability of the moving target, and the result label of the moving target is finally obtained from this probability;
step seven, returning to the step three, selecting the next video in the training set to perform two-class training through a fusion model until the result labels of the moving target in all the training sets are obtained;
for the moving target, the two classification result labels of the M horizontal monitoring videos are present or absent, and the two classification result labels of the N monitoring videos in each horizontal monitoring area are present or absent;
step eight, inputting each frame of video in the test set into the trained fusion model respectively, and outputting a result label of the moving target;
and step nine, voting the result tags of all the test sets, and determining whether the moving target exists or not finally.
2. The cooperative monitoring method under the complex low-altitude environment as claimed in claim 1, wherein in the second step, for the M monitoring videos or the N monitoring videos in each horizontal monitoring area, 80% is taken as the training set and 20% is taken as the testing set.
3. The cooperative monitoring method under the complex low-altitude environment as recited in claim 1, wherein the VGG network comprises sixteen convolutional layers, sixteen pooling layers, three fully-connected layers and a softmax layer, and each convolutional layer is followed by one pooling layer.
4. The cooperative monitoring device applied to the cooperative monitoring method under the complex low-altitude environment of claim 1 is characterized by comprising unmanned aerial vehicle unattended front-end equipment, a remote control center, a data processing module, an algorithm module and a processing center;
the remote control center sets the flight period, the flight height and the flight time of the unmanned aerial vehicle and determines the type of the collected video data; the unmanned aerial vehicle unattended front-end equipment collects videos of moving targets in a complex low-altitude environment according to commands sent by a ground remote control center by carrying a camera, and sends the videos to a data processing module to classify the collected videos; the algorithm module calculates an original basic image and a frame difference image, trains a VGG network fusion model to obtain the two classification probabilities of the moving target and a determined result label, and the processing center feeds the classification result back to monitoring personnel for subsequent work.
CN201811094761.2A 2018-09-19 2018-09-19 Cooperative monitoring method and device under complex low-altitude environment Active CN109359545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811094761.2A CN109359545B (en) 2018-09-19 2018-09-19 Cooperative monitoring method and device under complex low-altitude environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811094761.2A CN109359545B (en) 2018-09-19 2018-09-19 Cooperative monitoring method and device under complex low-altitude environment

Publications (2)

Publication Number Publication Date
CN109359545A CN109359545A (en) 2019-02-19
CN109359545B (en) 2020-07-21

Family

ID=65351354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811094761.2A Active CN109359545B (en) 2018-09-19 2018-09-19 Cooperative monitoring method and device under complex low-altitude environment

Country Status (1)

Country Link
CN (1) CN109359545B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503081B (en) * 2019-08-30 2022-08-26 山东师范大学 Violent behavior detection method, system, equipment and medium based on interframe difference
CN111582069B (en) * 2020-04-22 2021-05-28 北京航空航天大学 Track obstacle zero sample classification method and device for air-based monitoring platform
CN114494981B (en) * 2022-04-07 2022-08-05 之江实验室 Action video classification method and system based on multi-level motion modeling

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862705A (en) * 2017-11-21 2018-03-30 重庆邮电大学 A kind of unmanned plane small target detecting method based on motion feature and deep learning feature
CN108319905A (en) * 2018-01-25 2018-07-24 南京邮电大学 A kind of Activity recognition method based on long time-histories depth time-space network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Linear SVM classification using boosting HOG features for vehicle detection in low-altitude airborne videos; Xianbin Cao et al.; 2011 18th IEEE International Conference on Image Processing; 2011; pp. 2421-2424 *
Exploration and research on a low-altitude safety monitoring and management system; Wang Shuizhang et al.; Electronic Measurement Technology; May 2018; pp. 146-150 *

Also Published As

Publication number Publication date
CN109359545A (en) 2019-02-19

Similar Documents

Publication Publication Date Title
CN111145545B (en) Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN110321923B (en) Target detection method, system and medium for fusion of different-scale receptive field characteristic layers
CN108037770B (en) Unmanned aerial vehicle power transmission line inspection system and method based on artificial intelligence
CN106356757B (en) A kind of power circuit unmanned plane method for inspecting based on human-eye visual characteristic
CN109359545B (en) Cooperative monitoring method and device under complex low-altitude environment
CN105184271A (en) Automatic vehicle detection method based on deep learning
CN109948553A (en) A kind of multiple dimensioned dense population method of counting
CN109255286A (en) A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN110532937B (en) Method for accurately identifying forward targets of train based on identification model and classification model
CN108681718A (en) A kind of accurate detection recognition method of unmanned plane low target
CN112818905B (en) Finite pixel vehicle target detection method based on attention and spatio-temporal information
CN111831010A (en) Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN112863186A (en) Vehicle-mounted unmanned aerial vehicle-based escaping vehicle rapid identification and tracking method
CN115116137A (en) Pedestrian detection method based on lightweight YOLO v5 network model and space-time memory mechanism
Liao et al. Lr-cnn: Local-aware region cnn for vehicle detection in aerial imagery
CN113450573A (en) Traffic monitoring method and traffic monitoring system based on unmanned aerial vehicle image recognition
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
CN112464933B (en) Intelligent identification method for weak and small target through foundation staring infrared imaging
CN113486866A (en) Visual analysis method and system for airport bird identification
CN104615987B (en) A kind of the wreckage of an plane intelligent identification Method and system based on error-duration model neutral net
CN114592411B (en) Carrier parasitic type intelligent inspection method for highway damage
Eriş et al. Implementation of target tracking methods on images taken from unmanned aerial vehicles
CN115457313A (en) Method and system for analyzing photovoltaic equipment fault based on thermal infrared image
CN108470154A (en) A kind of large-scale crowd salient region detection method
CN113949826A (en) Unmanned aerial vehicle cluster cooperative reconnaissance method and system under limited communication bandwidth condition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant