CN117809217A - Method and system for scouting and striking based on real-time single-stage target recognition - Google Patents

Method and system for scouting and striking based on real-time single-stage target recognition

Info

Publication number
CN117809217A
CN202311814756.5A CN117809217A
Authority
CN
China
Prior art keywords
target
real
targets
time
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311814756.5A
Other languages
Chinese (zh)
Inventor
齐冬莲
聂雪松
汪显博
闫云凤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainan Research Institute Of Zhejiang University
Zhejiang University ZJU
Original Assignee
Hainan Research Institute Of Zhejiang University
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainan Research Institute Of Zhejiang University, Zhejiang University ZJU filed Critical Hainan Research Institute Of Zhejiang University
Priority to CN202311814756.5A priority Critical patent/CN117809217A/en
Publication of CN117809217A publication Critical patent/CN117809217A/en
Pending legal-status Critical Current

Landscapes

  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

The invention discloses a scouting and striking method and system based on real-time single-stage target recognition, and relates to the technical field of target recognition. The method comprises the following steps: a camera captures a video stream, and image preprocessing is performed on the captured stream to obtain processed images; a real-time target detection module performs real-time target detection on the processed images using the real-time single-stage target recognition network RTDNet to obtain the category and coordinate information of all targets in the image; a direction control module automatically adjusts the direction and field of view of the pan-tilt according to the target coordinate information, ensuring that the camera is accurately aimed at every target; a striking system module evaluates the category information of each target and, if a target's category is a threat target, triggers the laser weapon system to strike it. The invention introduces the RTDNet real-time target detector to achieve high-performance real-time target detection while accurately striking identified threat targets to safeguard public security.

Description

Method and system for scouting and striking based on real-time single-stage target recognition
Technical Field
The invention belongs to the technical field of target recognition, and particularly relates to a method and system for scouting and striking based on real-time single-stage target recognition.
Background
In recent years, with the continuous rise of public safety awareness, real-time target detection technology has been widely adopted across many fields. Video surveillance systems are critical security tools for monitoring and protecting various sites. To meet diversified monitoring requirements, a pan-tilt is generally used to control the direction and field of view of the camera, enlarging the monitoring range and improving monitoring efficiency.
In some cases, however, the sudden and concealed behavior of approaching wild animals, small low-speed targets, or potential intruders must be monitored and identified in real time. Such behaviors are diverse and may pose a wide range of potential threats, and conventional monitoring systems struggle to meet the requirements for efficient identification and response, so an innovative system is needed to meet these challenges.
Therefore, those skilled in the art urgently need a method and system for scouting and striking based on real-time single-stage target recognition to solve the problems of the prior art.
Disclosure of Invention
The invention provides a method and system for scouting and striking based on real-time single-stage target recognition, which introduce the RTDNet real-time target detector to achieve high-performance real-time target detection and accurately strike identified threat targets to safeguard public security.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a scout method based on real-time single-stage target recognition comprises the following specific steps:
s1: capturing a video stream by using a camera, and performing image preprocessing on the captured video stream to obtain a processed image;
s2: the real-time target detection module carries out real-time target detection on the processed image by using a real-time single-stage target recognition network RTDNet to obtain category information and coordinate information of all targets in the image;
S3: outputting the coordinate information of all targets to the direction control module, which automatically adjusts the direction and field of view of the pan-tilt according to the target coordinate information so that the camera aims at all targets;
S4: the striking system module evaluates the category information of all targets, and if a target's category is a threat target, the laser weapon system is triggered to strike it.
In the above method, optionally, image preprocessing includes but is not limited to denoising, resizing, and image enhancement.
In the above method, optionally, the overall structure of the RTDNet model in S2 consists of CSPNeXt + CSPNeXtPAFPN + a SepBNHead that shares convolution weights but computes batch normalization (BN) separately.
In the above method, optionally, the specific steps of S3 are as follows:
s301: calculating the center point of the target bounding box, the center point coordinates (x c ,y c ) Calculated by the following formula:
where (x, y) is the upper left corner coordinates of the bounding box and (width, height) is the width and height of the bounding box;
s302: azimuth adjustment, namely adjusting a cradle head according to the deviation between a target center and an image center, wherein the angle of the cradle head to be moved is estimated by the following modes:
horizontal direction angle adjustment: (Δθ=k x ·(x c -x center ))
Vertical direction angle adjustment: (Δφ=k y ·(y c -y center ))
Wherein, (x) center ,y center ) Is the coordinates of the center of the image, (kx, ky) is a conversion factor for converting the pixel deviation into the actual angular adjustment;
s303: based on the calculation result, a control signal is generated, and the cradle head is adjusted to enable the camera to be aimed at the target.
In the above method, optionally, the specific content of S4 is: the category information of all targets is compared against the strike target library in the striking system module; targets whose category matches an entry in the library are listed as threat targets, the laser weapon system is triggered, and the threat targets are struck.
A scouting and striking system based on real-time single-stage target recognition, applying the above method and comprising a real-time target recognition module, a direction control module, and a striking system module connected in sequence;
a real-time target recognition module: identifying suspicious targets captured by a camera in real time, analyzing a video stream, determining the targets in the video stream, and providing coordinates and category information of the targets;
the direction control module: automatically controls the direction and field of view of the pan-tilt according to the target coordinate information provided by the real-time target detection module, so that the camera is aimed at the target;
a striking system module: including laser weapons and control systems associated therewith for implementing a striking action upon detection of a threat target.
In the above system, optionally, the system is also used for remote laser charging, delivering energy so that electronic equipment can operate normally.
Compared with the prior art, the method and system for scouting and striking based on real-time single-stage target recognition disclosed herein have the following beneficial effects:
the present invention exhibits significant high efficiency and high accuracy advantages over the prior art. These advantages result from the unique technical combination and innovative implementation of the system. One of the cores is an RTDNet real-time target detector, which adopts a deep convolution building block and combines efficient data processing and control functions, so that the processing speed exceeding 300 frames per second and the average precision of 99.1% are realized, and the target detection efficiency is greatly improved. In addition, the high precision of the system benefits from the soft label and dynamic label distribution technology introduced by the system, particularly Dynamic Soft Label Assigner of RTDNet, and the accuracy of target identification is ensured by fine parameter tuning through methods such as position priori information loss, sample regression loss, sample classification loss and the like. In the whole, the system integrates the efficient matching of the real-time target recognition module, the direction control module and the striking system module, not only has excellent application potential in the fields of safety monitoring, unmanned aerial vehicle protection and the like, but also has great optimization and innovation of the traditional system in the technical aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the method for scouting and striking based on real-time single-stage target recognition;
FIG. 2 is a flow chart of image preprocessing according to the present disclosure;
FIG. 3 is an overall block diagram of the RTDNet of the present disclosure;
FIG. 4 is a block diagram of the CSPLayer of the present disclosure;
FIG. 5 is a flow chart of the azimuth control of the present disclosure;
fig. 6 is a block diagram of a striking system according to the present disclosure.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In this application, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. The terms "comprise," "include," or any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises it.
Referring to FIG. 1, a method for scouting and striking based on real-time single-stage target recognition comprises the following specific steps:
s1: capturing a video stream by using a camera, and performing image preprocessing on the captured video stream to obtain a processed image;
s2: the real-time target detection module carries out real-time target detection on the processed image by using a real-time single-stage target recognition network RTDNet to obtain category information and coordinate information of all targets in the image;
S3: outputting the coordinate information of all targets to the direction control module, which automatically adjusts the direction and field of view of the pan-tilt according to the target coordinate information so that the camera aims at all targets;
S4: the striking system module evaluates the category information of all targets, and if a target's category is a threat target, the laser weapon system is triggered to strike it (the loop sketched in code below shows how the four steps chain together).
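For orientation, the following minimal Python sketch shows how steps S1-S4 might chain into a single processing loop. It is an illustration only: the RTDNet, PanTilt, and StrikeSystem classes and their methods are hypothetical placeholders, since the patent does not disclose concrete programming interfaces; only the OpenCV calls are real library functions.

```python
import cv2  # OpenCV: video capture and image preprocessing

# Hypothetical wrappers standing in for the patent's modules (interfaces assumed,
# not disclosed by the patent; they must be supplied by the implementer):
detector = RTDNet(weights="rtdnet.pth")              # S2: real-time detection module
pan_tilt = PanTilt(port="COM3")                      # S3: direction control module
strike = StrikeSystem(threat_library={"drone"})      # S4: striking system module

cap = cv2.VideoCapture(0)                            # S1: capture the video stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 640))            # S1: resizing
    frame = cv2.fastNlMeansDenoisingColored(frame)   # S1: denoising
    detections = detector(frame)                     # S2: categories + coordinates of all targets
    for det in detections:
        pan_tilt.aim_at(det.bbox)                    # S3: point the camera at the target
        if det.category in strike.threat_library:    # S4: threat check
            strike.fire(det)                         # S4: trigger the laser weapon system
cap.release()
```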
Further, image preprocessing includes, but is not limited to, denoising, resizing, and image enhancement.
Specifically, as shown in FIG. 2, image enhancement includes single-image data enhancement (random scale transformation, random cropping, color-space enhancement, and random horizontal flipping) and mixed-image data enhancement (Mosaic and image mixing). To make the data enhancement more versatile, RTDNet uses Mosaic + MixUp without rotation for the first 280 epochs, raising the augmentation strength and the number of positive samples by blending 8 images together. The last 20 epochs fine-tune with a smaller learning rate under weaker augmentation, while EMA slowly folds the parameters into the model, which yields a substantial improvement.
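The staged schedule described above can be expressed as a small training-loop fragment. A minimal sketch, assuming hypothetical strong_pipeline/weak_pipeline transform objects and a step_fn training helper; none of these names, nor the 0.1 learning-rate factor, come from the patent:

```python
def train_with_staged_augmentation(model, loader, optimizer, ema, step_fn,
                                   strong_pipeline, weak_pipeline, base_lr,
                                   total_epochs=300, strong_epochs=280):
    """Two-stage schedule: strong Mosaic+MixUp first, weak fine-tuning last."""
    for epoch in range(total_epochs):
        if epoch < strong_epochs:
            loader.dataset.transforms = strong_pipeline   # Mosaic + MixUp, 8 blended images
        else:
            loader.dataset.transforms = weak_pipeline     # weaker augmentation, last 20 epochs
            for group in optimizer.param_groups:
                group["lr"] = base_lr * 0.1               # smaller fine-tuning LR (factor assumed)
        step_fn(model, loader, optimizer)                 # one epoch of training
        ema.update(model)                                 # EMA folds weights into the eval model
```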
Further, the overall structure of the RTDNet model in S2 consists of CSPNeXt + CSPNeXtPAFPN + a SepBNHead that shares convolution weights but computes BN separately.
Specifically, as shown in FIG. 3: the core internal module of the overall RTDNet structure is still the CSPLayer, but its Basic Block is improved into the proposed CSPNeXt Block. CSPNeXt is based on CSPDarknet and has a 5-layer structure comprising 1 Stem Layer and 4 Stage Layers:
the Stem Layer consists of 3 ConvModules with 3x3 kernels, unlike the earlier Focus module or a single ConvModule with a 6x6 kernel;
the overall structure of the Stage Layers is similar to existing models: the first 3 Stage Layers each consist of 1 ConvModule and 1 CSPLayer, while the 4th Stage Layer adds an SPPF module between the ConvModule and the CSPLayer;
as shown in FIG. 4, the CSPLayer consists of 3 ConvModules + n CSPNeXt Blocks (with residual connections) + 1 ChannelAttention module. A ConvModule is one layer of 3x3 Conv2d + BatchNorm + SiLU activation. The ChannelAttention module is one layer of AdaptiveAvgPool2d + one layer of 1x1 Conv2d + a Hardsigmoid activation.
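The textual description of FIG. 4 translates directly into PyTorch. The sketch below follows the stated composition (3 ConvModules + n CSPNeXt Blocks + 1 ChannelAttention); the 1x1 kernels on the split/merge convolutions, the half-width hidden channels, and the kernel sizes inside the CSPNeXt Block are assumptions, since the patent only specifies the 3x3 ConvModule and the attention module:

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """Conv2d + BatchNorm + SiLU activation (3x3 by default, as described)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ChannelAttention(nn.Module):
    """AdaptiveAvgPool2d + 1x1 Conv2d + Hardsigmoid channel gate."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, 1)
        self.gate = nn.Hardsigmoid()

    def forward(self, x):
        return x * self.gate(self.fc(self.pool(x)))

class CSPNeXtBlock(nn.Module):
    """Improved Basic Block with a residual connection (kernel sizes assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = ConvModule(channels, channels)
        self.conv2 = ConvModule(channels, channels)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))   # residual connection

class CSPLayer(nn.Module):
    """3 ConvModules + n CSPNeXtBlocks + 1 ChannelAttention, per FIG. 4."""
    def __init__(self, c_in, c_out, n=1):
        super().__init__()
        mid = c_out // 2                               # split width: an assumption
        self.main = ConvModule(c_in, mid, k=1)         # ConvModule 1: main branch
        self.short = ConvModule(c_in, mid, k=1)        # ConvModule 2: shortcut branch
        self.blocks = nn.Sequential(*[CSPNeXtBlock(mid) for _ in range(n)])
        self.attn = ChannelAttention(2 * mid)
        self.final = ConvModule(2 * mid, c_out, k=1)   # ConvModule 3: merge

    def forward(self, x):
        y = torch.cat([self.blocks(self.main(x)), self.short(x)], dim=1)
        return self.final(self.attn(y))
```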
Furthermore, RTDNet proposes a Dynamic Soft Label Assigner to realize a dynamic label-matching strategy. It mainly uses a position-prior cost, a sample regression cost, and a sample classification cost, softens these three costs, and tunes the parameters to achieve the optimal dynamic matching effect.
The final cost matrix is
$C = \lambda_1 C_{cls} + \lambda_2 C_{reg} + \lambda_3 C_{center}$
where $\lambda_1 = 1$, $\lambda_2 = 3$, and $\lambda_3 = 1$ are hyperparameters that balance the three costs, and:
Region prior cost: $C_{center} = \alpha^{\lvert x_{pred} - x_{gt} \rvert - \beta}$, where $x_{pred}$ is the center point of the predicted box, $x_{gt}$ is the center point of the ground truth, and $\alpha$ and $\beta$ are hyperparameters that soften the center region, with default values 10 and 3.
Regression cost: $C_{reg} = -\log(\mathrm{IoU})$, where IoU is the IoU between the predicted box and the ground-truth box.
Classification cost: $C_{cls} = \mathrm{CE}(P, Y_{soft}) \cdot (Y_{soft} - P)^2$, where $P$ is the prediction score and $Y_{soft}$ is the IoU between the predicted box and the ground-truth box.
After the three costs are summed into the final cost matrix C, SimOTA is used to determine how many samples match each label and which samples they are; a code sketch follows the steps below. The specific operation is:
(1) The number of samples to select for each label is computed adaptively: take the 13 largest IoUs between that label and all prediction boxes and sum them; the sum, denoted dynamic_k, is the number of samples for the label, with a minimum of 1;
(2) For each label, take the dynamic_k positions with the smallest cost in matrix C as its positive samples;
(3) If a prediction box is matched to multiple labels, keep only the label whose cost in matrix C is smallest.
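Putting the cost matrix and the three SimOTA steps together gives the following minimal PyTorch sketch. It assumes precomputed pairwise IoU, classification-cost, and center-distance tensors; the tensor shapes and function signature are assumptions, not the patent's interface:

```python
import torch

def dynamic_soft_label_assign(iou, cls_cost, center_dist,
                              alpha=10.0, beta=3.0, lambdas=(1.0, 3.0, 1.0), k=13):
    """iou, cls_cost, center_dist: (num_gt, num_pred) pairwise tensors."""
    c_center = alpha ** (center_dist - beta)             # position-prior (soft center) cost
    c_reg = -torch.log(iou.clamp(min=1e-7))              # regression cost: -log(IoU)
    cost = lambdas[0] * cls_cost + lambdas[1] * c_reg + lambdas[2] * c_center

    # (1) dynamic_k per label: sum of its top-13 IoUs, at least 1
    topk_ious, _ = iou.topk(min(k, iou.size(1)), dim=1)
    dynamic_ks = topk_ious.sum(dim=1).int().clamp(min=1)

    assign = torch.zeros_like(cost, dtype=torch.bool)
    for g in range(cost.size(0)):
        # (2) the dynamic_k lowest-cost predictions become positives for label g
        _, idx = cost[g].topk(int(dynamic_ks[g]), largest=False)
        assign[g, idx] = True

    # (3) a prediction matched to several labels keeps only its cheapest label
    multi = assign.sum(dim=0) > 1
    if multi.any():
        cols = multi.nonzero(as_tuple=True)[0]
        best_gt = cost[:, cols].argmin(dim=0)
        assign[:, cols] = False
        assign[best_gt, cols] = True
    return assign
```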
in the training process, the Loss calculation finally participating in the whole network consists of classification Loss and boundary box Loss, and QualityFocalLoss is used as the classification Loss, wherein the formula is as follows:
$\mathrm{QFL}(\sigma) = -\,\lvert y - \sigma \rvert^{\beta}\,\big((1-y)\log(1-\sigma) + y\log(\sigma)\big)$
where $\sigma$ is the model's predicted probability for a class; $y$ is the true label, typically 0 or 1; $\beta$ is an adjustment factor that controls how strongly the gap between the predicted probability and the true label is weighted, helping to balance easy samples (high confidence) against hard samples (low confidence); $\lvert y - \sigma \rvert$ is the absolute difference between the predicted probability and the true label and reflects the "quality" of the prediction: a prediction close to the true label is considered high quality, and one far from it low quality; and $(1-y)\log(1-\sigma) + y\log(\sigma)$ is the cross-entropy term quantifying the difference between model predictions and actual labels.
The model therefore focuses more during learning on samples with lower prediction quality (a larger gap between the predicted value and the true label) and gives less weight to samples with higher prediction quality (a smaller gap). This encourages the model to concentrate on improving the accuracy and reliability of its predictions during training.
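As a sketch, QFL can be written directly from the formula above; leaving $\beta$ as a parameter (the common default of 2 used here is an assumption, not stated in the patent):

```python
import torch

def quality_focal_loss(sigma, y, beta=2.0):
    """QFL(sigma) = -|y - sigma|^beta * ((1 - y) log(1 - sigma) + y log(sigma)).

    sigma: predicted probability; y: soft label in [0, 1]
    (the IoU between the predicted box and the ground-truth box)."""
    sigma = sigma.clamp(1e-7, 1 - 1e-7)                         # avoid log(0)
    ce = (1 - y) * torch.log(1 - sigma) + y * torch.log(sigma)  # cross-entropy term
    return -(y - sigma).abs().pow(beta) * ce                    # modulated by prediction quality
```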
Such a classification loss generalizes the focal loss from discrete labels to continuous labels, using the IoU between predicted bboxes and ground truth as the classification score, so that the classification score characterizes regression quality. GIoU loss is used as the bounding-box loss:
$\mathrm{GIoU} = \mathrm{IoU} - \frac{\lvert C \setminus (A \cup B) \rvert}{\lvert C \rvert}$
where IoU is the traditional intersection-over-union, the ratio of the overlap area of two bounding boxes A and B to their union area; C is the smallest enclosing region containing bounding boxes A and B; $\lvert C \setminus (A \cup B) \rvert$ is the area of C not covered by the union of A and B; and $\lvert C \rvert$ is the total area of C. GIoU subtracts from IoU the term $\lvert C \setminus (A \cup B) \rvert / \lvert C \rvert$, which measures the spatial inconsistency between bounding boxes A and B and provides information about their spatial relationship even when they do not overlap.
In general, GIoU is a more comprehensive bounding-box similarity measure that considers not only the overlap region but also the spatial layout between bounding boxes. This lets GIoU provide more accurate performance than the conventional IoU in target detection and computer vision tasks, particularly for bounding-box localization.
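For reference, a self-contained sketch of GIoU and the corresponding loss for axis-aligned boxes in (x1, y1, x2, y2) format:

```python
import torch

def giou(a, b):
    """GIoU = IoU - |C \\ (A U B)| / |C| for boxes (x1, y1, x2, y2)."""
    # intersection of A and B
    x1 = torch.max(a[..., 0], b[..., 0]); y1 = torch.max(a[..., 1], b[..., 1])
    x2 = torch.min(a[..., 2], b[..., 2]); y2 = torch.min(a[..., 3], b[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    # union of A and B
    area_a = (a[..., 2] - a[..., 0]) * (a[..., 3] - a[..., 1])
    area_b = (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    union = area_a + area_b - inter
    iou = inter / union.clamp(min=1e-7)
    # smallest enclosing box C
    cw = torch.max(a[..., 2], b[..., 2]) - torch.min(a[..., 0], b[..., 0])
    ch = torch.max(a[..., 3], b[..., 3]) - torch.min(a[..., 1], b[..., 1])
    c_area = (cw * ch).clamp(min=1e-7)
    return iou - (c_area - union) / c_area   # subtract the spatial-inconsistency term

def giou_loss(a, b):
    return 1.0 - giou(a, b)                  # bounding-box loss used in training
```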
Further, the specific steps of S3 are as follows:
S301: calculate the center point of the target bounding box; the center-point coordinates $(x_c, y_c)$ are computed as
$x_c = x + \mathrm{width}/2, \quad y_c = y + \mathrm{height}/2$
where $(x, y)$ are the upper-left corner coordinates of the bounding box and (width, height) are its width and height;
S302: azimuth adjustment: the pan-tilt is adjusted according to the deviation between the target center and the image center, and the angles through which the pan-tilt must move are estimated as
horizontal angle adjustment: $\Delta\theta = k_x \cdot (x_c - x_{\mathrm{center}})$
vertical angle adjustment: $\Delta\phi = k_y \cdot (y_c - y_{\mathrm{center}})$
where $(x_{\mathrm{center}}, y_{\mathrm{center}})$ are the coordinates of the image center and $(k_x, k_y)$ are conversion factors that convert pixel deviation into actual angular adjustment;
S303: based on the calculation results, control signals are generated and the pan-tilt is adjusted so that the camera aims at the target (see the code sketch below).
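A minimal sketch of the S301-S302 computation; the values of k_x and k_y must come from calibration of the optics, so the defaults used here are placeholders:

```python
def pan_tilt_deltas(bbox, image_w, image_h, k_x=0.05, k_y=0.05):
    """bbox = (x, y, width, height), (x, y) the top-left corner; returns (dtheta, dphi)."""
    x, y, width, height = bbox
    x_c = x + width / 2.0                    # S301: bounding-box center
    y_c = y + height / 2.0
    dtheta = k_x * (x_c - image_w / 2.0)     # S302: horizontal angle adjustment
    dphi = k_y * (y_c - image_h / 2.0)       # S302: vertical angle adjustment
    return dtheta, dphi
```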
Specifically, as shown in FIG. 5, azimuth control of the camera pan-tilt is driven by the calculation results above. The pan-tilt is connected to the host computer through an interface, supports rotation in multiple directions (up, down, left, right, upper-left, upper-right, lower-left, lower-right, etc.), and allows the camera's focal length and aperture to be adjusted;
the computer uses serial port to communicate with the cradle head and the camera, and sends unidirectional control signals to the decoder. The basic function of a serial port is to convert byte data into a serial bit stream, communicate from a computer to a serial device, and receive data from the serial device and restore it to byte data. In a Windows environment, the serial port is part of the system resource. In order to communicate using a serial port, an application needs to configure a serial port address, baud rate, parity, data bits, and stop bits, and release resources (shut down the serial port) after the communication is completed. The operating system needs to approve the resource request in advance to open the serial port.
Further, the specific content of S4 is: the category information of all targets is compared against the strike target library in the striking system module; targets whose category matches an entry in the library are listed as threat targets, the laser weapon system is triggered, and the threat targets are struck.
Specifically, as shown in FIG. 6, the invention uses infrared imaging equipment as its experimental platform: an infrared camera tracker is responsible for receiving and transmitting images, and a laser simulation system generates the light spot used for imaging.
A scouting and striking system based on real-time single-stage target recognition, applying the above method and comprising a real-time target recognition module, a direction control module, and a striking system module connected in sequence;
a real-time target recognition module: identifying suspicious targets captured by a camera in real time, analyzing a video stream, determining the targets in the video stream, and providing coordinates and category information of the targets;
the direction control module: automatically controls the direction and field of view of the pan-tilt according to the target coordinate information provided by the real-time target detection module, so that the camera is aimed at the target;
a striking system module: including laser weapons and control systems associated therewith for implementing a striking action upon detection of a threat target.
Further, the system is also used for remote laser charging, delivering energy so that electronic equipment can operate normally.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for a system or system embodiment, since it is substantially similar to a method embodiment, the description is relatively simple, with reference to the description of the method embodiment being made in part.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A scouting and striking method based on real-time single-stage target recognition, characterized by comprising the following specific steps:
s1: capturing a video stream by using a camera, and performing image preprocessing on the captured video stream to obtain a processed image;
s2: the real-time target detection module carries out real-time target detection on the processed image by using a real-time single-stage target recognition network RTDNet to obtain category information and coordinate information of all targets in the image;
S3: outputting the coordinate information of all targets to the direction control module, which automatically adjusts the direction and field of view of the pan-tilt according to the target coordinate information so that the camera aims at all targets;
S4: the striking system module evaluates the category information of all targets, and if a target's category is a threat target, the laser weapon system is triggered to strike it.
2. The scouting and striking method based on real-time single-stage target recognition according to claim 1, wherein image preprocessing includes but is not limited to denoising, resizing, and image enhancement.
3. The scouting and striking method based on real-time single-stage target recognition according to claim 1, wherein the overall structure of the RTDNet model in S2 consists of CSPNeXt + CSPNeXtPAFPN + a SepBNHead that shares convolution weights but computes BN separately.
4. The scouting and striking method based on real-time single-stage target recognition according to claim 1, wherein the specific steps of S3 are as follows:
S301: calculate the center point of the target bounding box; the center-point coordinates $(x_c, y_c)$ are computed as
$x_c = x + \mathrm{width}/2, \quad y_c = y + \mathrm{height}/2$
where $(x, y)$ are the upper-left corner coordinates of the bounding box and (width, height) are its width and height;
S302: azimuth adjustment: the pan-tilt is adjusted according to the deviation between the target center and the image center, and the angles through which the pan-tilt must move are estimated as
horizontal angle adjustment: $\Delta\theta = k_x \cdot (x_c - x_{\mathrm{center}})$
vertical angle adjustment: $\Delta\phi = k_y \cdot (y_c - y_{\mathrm{center}})$
where $(x_{\mathrm{center}}, y_{\mathrm{center}})$ are the coordinates of the image center and $(k_x, k_y)$ are conversion factors that convert pixel deviation into actual angular adjustment;
S303: based on the calculation results, control signals are generated and the pan-tilt is adjusted so that the camera aims at the target.
5. The scouting and striking method based on real-time single-stage target recognition according to claim 1, wherein the specific content of S4 is: the category information of all targets is compared against the strike target library in the striking system module; targets whose category matches an entry in the library are listed as threat targets, the laser weapon system is triggered, and the threat targets are struck.
6. A scouting and striking system based on real-time single-stage target recognition, characterized in that it applies the method of any one of claims 1-5 and comprises a real-time target recognition module, a direction control module, and a striking system module connected in sequence;
a real-time target recognition module: identifying suspicious targets captured by a camera in real time, analyzing a video stream, determining the targets in the video stream, and providing coordinates and category information of the targets;
the direction control module: automatically controls the direction and field of view of the pan-tilt according to the target coordinate information provided by the real-time target detection module, so that the camera can aim at the target;
a striking system module: including laser weapons and control systems associated therewith for implementing a striking action upon detection of a threat target.
7. The scouting and striking system based on real-time single-stage target recognition according to claim 6, wherein the system is also used for remote laser charging, delivering energy so that electronic equipment can operate normally.
CN202311814756.5A 2023-12-26 2023-12-26 Method and system for scouting and striking based on real-time single-stage target recognition Pending CN117809217A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311814756.5A 2023-12-26 2023-12-26 Method and system for scouting and striking based on real-time single-stage target recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311814756.5A 2023-12-26 2023-12-26 Method and system for scouting and striking based on real-time single-stage target recognition

Publications (1)

Publication Number Publication Date
CN117809217A 2024-04-02

Family

ID=90430958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311814756.5A Pending CN117809217A (en) 2023-12-26 2023-12-26 Method and system for scouting and striking based on real-time single-stage target recognition

Country Status (1)

Country Link
CN (1) CN117809217A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160323517A1 (en) * 2015-04-29 2016-11-03 Protruly Vision Technology Group CO.,LTD Method and system for tracking moving trajectory based on human features
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN116977738A (en) * 2023-08-03 2023-10-31 重庆邮电大学 Traffic scene target detection method and system based on knowledge enhancement type deep learning
CN116817929A (en) * 2023-08-28 2023-09-29 中国兵器装备集团兵器装备研究所 Method and system for simultaneously positioning multiple targets on ground plane by unmanned aerial vehicle
CN117077428A (en) * 2023-08-31 2023-11-17 中国北方车辆研究所 Construction method of firepower planning objective function aiming at battlefield multidimensional requirements

Similar Documents

Publication Publication Date Title
CN107016690B (en) Unmanned aerial vehicle intrusion detection and identification system and method based on vision
CN109255286B (en) Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
US11948355B2 (en) Synthetic infrared data for image classification systems and methods
CN112068111A (en) Unmanned aerial vehicle target detection method based on multi-sensor information fusion
CN111179318B (en) Double-flow method-based complex background motion small target detection method
CN108805008A (en) A kind of community's vehicle security system based on deep learning
Song et al. Analysis on the impact of data augmentation on target recognition for UAV-based transmission line inspection
CN106600613B (en) Improvement LBP infrared target detection method based on embedded gpu
Xie et al. Adaptive switching spatial-temporal fusion detection for remote flying drones
Tomar et al. Dynamic Kernel CNN-LR model for people counting
CN113253289A (en) Unmanned aerial vehicle detection tracking system implementation method based on combination of laser radar and vision
Cheng et al. SLBAF-Net: Super-Lightweight bimodal adaptive fusion network for UAV detection in low recognition environment
Sommer et al. Deep learning-based drone detection in infrared imagery with limited training data
Shen et al. An improved UAV target detection algorithm based on ASFF-YOLOv5s
CN111274988A (en) Multispectral-based vehicle weight identification method and device
Zhang et al. AGVS: A new change detection dataset for airport ground video surveillance
CN114067251A (en) Unsupervised monitoring video prediction frame abnormity detection method
Zhang et al. Boosting transferability of physical attack against detectors by redistributing separable attention
Peng et al. Point-based multilevel domain adaptation for point cloud segmentation
CN117809217A (en) 2023-12-26 2024-04-02 Method and system for scouting and striking based on real-time single-stage target recognition
CN112598032A (en) Multi-task defense model construction method for anti-attack of infrared image
Ji et al. STAE‐YOLO: Intelligent detection algorithm for risk management of construction machinery intrusion on transmission lines based on visual perception
Zhang et al. Learning nonlocal quadrature contrast for detection and recognition of infrared rotary-wing UAV targets in complex background
CN110796008A (en) Early fire detection method based on video image
Zennayi et al. Unauthorized access detection system to the equipments in a room based on the persons identification by face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination