CN116012406A - Event detection method and device, electronic equipment and storage medium

Event detection method and device, electronic equipment and storage medium

Info

Publication number
CN116012406A
Authority
CN
China
Prior art keywords: target, detected, tracking, track, event
Prior art date
Legal status
Pending
Application number
CN202211622667.6A
Other languages
Chinese (zh)
Inventor
张宇豪
郝行猛
舒梅
朱梦超
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211622667.6A
Publication of CN116012406A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of the present application provide an event detection method and device, electronic equipment, and a storage medium. Based on a video data stream collected from an environment to be detected, a target detection model obtains the target category of each target to be detected; for each target category, a correspondingly configured category tracking frame is then used to track the target track of each such target, where each category tracking frame contains a plurality of tracking units and each tracking unit can track the target track of one target to be detected. In this way, each target to be detected in the airport environment is tracked separately according to its target category, which avoids track-tracking errors caused by identity exchange between targets of different categories, thereby improving the accuracy of the tracked target tracks and of the correspondingly detected guarantee events.

Description

Event detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of target detection technologies, and in particular, to an event detection method, an event detection device, an electronic device, and a storage medium.
Background
In an intelligent airport, target detection and target tracking are applied to each target to be detected and, in combination with tripwire-intrusion and preset judgment logic, various guarantee events occurring in the airport environment (such as bridge leaning events and bridge removing events for a corridor bridge) can be detected and alarmed, so that airport personnel can keep track of the key stages of flight guarantee.
In the prior art, the target track of each detected target to be detected is tracked by associating its position coordinate information across consecutive video frames collected from the airport environment. This approach easily causes identity (information) exchange between adjacent targets to be detected, so the tracked target tracks are not accurate enough, which affects accurate alarming of guarantee events.
Disclosure of Invention
The embodiments of the present application provide an event detection method, an event detection device, electronic equipment, and a storage medium, which are used for improving the accuracy of guarantee-event alarms.
In a first aspect, an embodiment of the present application provides an event detection method, including:
performing target detection on a video data stream acquired aiming at a to-be-detected environment by adopting a target detection model to obtain respective target categories of targets to be detected in the to-be-detected environment;
For each target category, respectively adopting a set corresponding category tracking frame to track the respective target track of each target to be detected, wherein each category tracking frame comprises a plurality of tracking units, and each tracking unit is used for tracking the target track of one target to be detected;
and detecting whether each target to be detected has a guarantee event or not based on the position relation between each target track and a set early warning area, wherein the guarantee event is an event associated with each target to be detected in the environment to be detected.
In an alternative embodiment, the object detection model is trained by:
obtaining a training sample set, wherein one training sample comprises the following steps: input information and a real class label corresponding to one training target, wherein the training target is associated with the environment to be detected;
performing multiple rounds of iterative training on a preset detection model by adopting training samples in the training sample set, and outputting a target detection model when convergence conditions are met; wherein, in a round of iterative training process, the following operations are executed:
and acquiring a prediction category based on input information in a training sample by adopting the detection model, and adjusting parameters of the detection model based on a self-adaptive loss value between the prediction category and a corresponding real category label.
In an alternative embodiment, the adaptive loss value is calculated by:
calculating, based on the predicted class of one training sample and the real class label corresponding to the one training sample, an original loss value between the predicted class and the real class label;
and weighting the original loss value by using a preset adaptive weight corresponding to the real class label to obtain the adaptive loss value between the predicted class and the real class label, wherein the adaptive weight is associated with the proportion, in the training sample set, of the training samples corresponding to the real class label.
In an optional implementation manner, the tracking, for each target category, by using a set corresponding category tracking frame, the respective target track of each target to be detected includes:
for each target category, the following operations are respectively executed:
based on the respective confidence of at least one target to be detected corresponding to one target category, obtaining, from the at least one target to be detected by using a set confidence threshold, each target to be detected whose confidence is greater than the confidence threshold, as the respective targets to be matched;
Tracking respective target tracks of the targets to be matched by adopting a plurality of tracking units in corresponding class tracking frames, which are arranged for one target class;
and in response to the tracking of the respective target tracks of the targets to be matched ending, tracking, by using the remaining tracking units, the respective target tracks of the remaining targets to be detected in the at least one target to be detected.
In an optional embodiment, after tracking the respective target track of each remaining target to be detected in the at least one target to be detected, the method further includes:
judging whether the tracking of the respective target track of each remaining target to be detected is finished; wherein:
if yes, keeping track of each target to be detected;
otherwise, adopting a newly added tracking unit to continuously track respective target tracks of the remaining targets to be detected.
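The two-stage, per-category matching described above can be sketched as follows. This is a minimal illustration rather than the application's implementation: the names `TrackingUnit` and `track_one_class` are invented for the sketch, and the greedy first-come association stands in for a real position-based (e.g. IoU) matcher; only the confidence split and the "newly added tracking unit" fallback follow the text.

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count()  # global id source for newly added tracking units

@dataclass
class TrackingUnit:
    track_id: int = field(default_factory=lambda: next(_ids))
    trajectory: list = field(default_factory=list)  # box centers over time

    def update(self, box):
        x1, y1, x2, y2 = box
        self.trajectory.append(((x1 + x2) / 2, (y1 + y2) / 2))

def track_one_class(detections, units, conf_thresh=0.5):
    """Two-stage matching for ONE target category.
    detections: list of (box, confidence); units: this category's tracking units."""
    high = [d for d in detections if d[1] > conf_thresh]   # targets to be matched first
    low = [d for d in detections if d[1] <= conf_thresh]   # handled by leftover units
    free = list(units)
    for box, _ in high + low:
        # Placeholder association: a real tracker matches by IoU/position.
        unit = free.pop(0) if free else TrackingUnit()
        if unit not in units:
            units.append(unit)  # "newly added tracking unit" when none remain
        unit.update(box)
    return units
```

One tracker state (`units`) is kept per target category, so detections of one category can never be matched to a unit of another.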
In an optional implementation manner, the detecting whether a guarantee event occurs to each target to be detected based on the positional relationship between each target track and the set early warning area includes:
for each object to be detected, executing any one of the following operations respectively:
Detecting a current working state of one target to be detected in response to a target track of the one target to be detected not crossing a set early warning area, acquiring first images related to the one target to be detected based on the current working state, and detecting whether a guarantee event occurs to the one target to be detected based on the number of the first images, wherein the first images are images acquired after the one target to be detected enters the current working state and before the one target to be detected exits the current working state, and the current working state is a state related to the guarantee event of the one target to be detected;
responding to the target track of one target to be detected to enter a set early warning area, acquiring second images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the second images, wherein the second images are images acquired after the target track of the one target to be detected enters the early warning area and before the target track of the one target to be detected exits the early warning area;
and in response to the target track of one target to be detected exiting a set early warning area, acquiring third images related to the one target to be detected, and detecting, based on the number of the third images, whether a guarantee event occurs to the one target to be detected, wherein the third images are images acquired after the target track of the one target to be detected exits the early warning area and before it next enters the early warning area.
In an alternative embodiment, the detecting the current working state of the one target to be detected includes any one of the following operations:
Acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the aspect ratio of the target detection frame;
and acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the position relationship between the target detection frame and a set rule frame.
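Both working-state checks above admit a short sketch. The threshold value, the "working"/"idle" state names, and the function names are assumptions for illustration; the application does not specify them:

```python
def state_from_aspect_ratio(box, thresh=1.0):
    """Working state from the detection box's aspect ratio (width / height).
    The 1.0 threshold is an assumed example value."""
    x1, y1, x2, y2 = box
    ratio = (x2 - x1) / max(y2 - y1, 1e-6)
    return "working" if ratio > thresh else "idle"

def state_from_rule_frame(box, rule_frame):
    """Working state from the position relation between the detection box
    and a configured rule frame: here, center-inside-frame (assumed rule)."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    rx1, ry1, rx2, ry2 = rule_frame
    inside = rx1 <= cx <= rx2 and ry1 <= cy <= ry2
    return "working" if inside else "idle"
```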
In an optional embodiment, after detecting whether a guarantee event occurs to each target to be detected based on the position relationship between each target track and the set early warning area, the method further includes:
in response to detecting a target to be alarmed, alarming the guarantee event corresponding to the target to be alarmed, wherein the target to be alarmed is a target to be detected in which the guarantee event occurs.
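One generic way to realize the position relation between a target track and the early warning area is a standard ray-casting point-in-polygon test, from which the enter/exit transitions the method reacts to can be derived. This is an illustrative sketch, not the application's code; the early warning area is assumed to be given as a list of polygon vertices:

```python
def point_in_region(pt, region):
    """Ray-casting point-in-polygon test; region is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(region)
    for i in range(n):
        x1, y1 = region[i]
        x2, y2 = region[(i + 1) % n]
        # Toggle when a rightward ray from pt crosses this edge.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def region_events(trajectory, region):
    """List the 'enter'/'exit' transitions of a target track w.r.t. the area."""
    events, prev = [], None
    for pt in trajectory:
        cur = point_in_region(pt, region)
        if prev is not None and cur != prev:
            events.append("enter" if cur else "exit")
        prev = cur
    return events
```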
In a second aspect, an embodiment of the present application provides an event detection apparatus, including:
the target detection module is used for carrying out target detection on the video data stream acquired aiming at the environment to be detected by adopting a target detection model to obtain respective target categories of all targets to be detected in the environment to be detected;
the track tracking module is used for tracking the respective target track of each target to be detected by adopting a set corresponding type tracking frame for each target type, wherein each type tracking frame comprises a plurality of tracking units, and each tracking unit is used for tracking the target track of one target to be detected;
The event detection module is used for detecting whether each target to be detected has a guarantee event or not based on the position relation between each target track and a set early warning area, wherein the guarantee event is an event associated with each target to be detected in the environment to be detected.
In an alternative embodiment, the object detection model is trained by:
obtaining a training sample set, wherein one training sample comprises the following steps: input information and a real class label corresponding to one training target, wherein the training target is associated with the environment to be detected;
performing multiple rounds of iterative training on a preset detection model by adopting training samples in the training sample set, and outputting a target detection model when convergence conditions are met; wherein, in a round of iterative training process, the following operations are executed:
and acquiring a prediction category based on input information in a training sample by adopting the detection model, and adjusting parameters of the detection model based on a self-adaptive loss value between the prediction category and a corresponding real category label.
In an alternative embodiment, the adaptive loss value is calculated by:
calculating, based on the predicted class of one training sample and the real class label corresponding to the one training sample, an original loss value between the predicted class and the real class label;
and weighting the original loss value by using a preset adaptive weight corresponding to the real class label to obtain the adaptive loss value between the predicted class and the real class label, wherein the adaptive weight is associated with the proportion, in the training sample set, of the training samples corresponding to the real class label.
In an optional implementation manner, for each target category, a set corresponding category tracking frame is used to track a respective target track of each target to be detected, and the track tracking module is used to:
for each target category, the following operations are respectively executed:
based on the respective confidence of at least one target to be detected corresponding to one target category, obtaining, from the at least one target to be detected by using a set confidence threshold, each target to be detected whose confidence is greater than the confidence threshold, as the respective targets to be matched;
Tracking respective target tracks of the targets to be matched by adopting a plurality of tracking units in corresponding class tracking frames, which are arranged for one target class;
and in response to the tracking of the respective target tracks of the targets to be matched ending, tracking, by using the remaining tracking units, the respective target tracks of the remaining targets to be detected in the at least one target to be detected.
In an optional embodiment, after the tracking of the respective target track of each remaining target to be inspected in the at least one target to be inspected, the track tracking module is further configured to:
judging whether the tracking of the respective target track of each remaining target to be detected is finished; wherein:
if yes, keeping track of each target to be detected;
otherwise, adopting a newly added tracking unit to continuously track respective target tracks of the remaining targets to be detected.
In an optional implementation manner, in the detecting whether each target to be detected has a guarantee event based on the positional relationship between each target track and a set early warning area, the event detection module is configured to:
for each object to be detected, executing any one of the following operations respectively:
Detecting a current working state of one target to be detected in response to a target track of the one target to be detected not crossing a set early warning area, acquiring first images related to the one target to be detected based on the current working state, and detecting whether a guarantee event occurs to the one target to be detected based on the number of the first images, wherein the first images are images acquired after the one target to be detected enters the current working state and before the one target to be detected exits the current working state, and the current working state is a state related to the guarantee event of the one target to be detected;
responding to the target track of one target to be detected to enter a set early warning area, acquiring second images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the second images, wherein the second images are images acquired after the target track of the one target to be detected enters the early warning area and before the target track of the one target to be detected exits the early warning area;
and responding to the target track of one target to be detected to exit a set early warning area, acquiring all third images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the third images, wherein the third images are acquired after the target track of the one target to be detected exits the early warning area and before the target track of the one target to be detected enters the early warning area.
In an alternative embodiment, the detecting the current working state of the object to be detected, the event detecting module is configured to perform any one of the following operations:
acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the aspect ratio of the target detection frame;
and acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the position relationship between the target detection frame and a set rule frame.
In an optional embodiment, after detecting whether a guarantee event occurs to each target to be detected based on the position relationship between each target track and a set early warning area, the event detection module is further configured to:
in response to detecting a target to be alarmed, alarm the guarantee event corresponding to the target to be alarmed, wherein the target to be alarmed is a target to be detected in which the guarantee event occurs.
In a third aspect, an electronic device is provided, comprising a processor and a memory, wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of the event detection method described in the first aspect.
In a fourth aspect, a computer readable storage medium is proposed, comprising program code for causing an electronic device to perform the steps of the event detection method as described in the first aspect above, when said program code is run on said electronic device.
The technical effects of the embodiment of the application are as follows:
Embodiments of the present application provide an event detection method and device, electronic equipment, and a storage medium. Based on a video data stream collected from an environment to be detected, a target detection model obtains the target category of each target to be detected; for each target category, a correspondingly configured category tracking frame is then used to track the target track of each such target, where each category tracking frame contains a plurality of tracking units and each tracking unit can track the target track of one target to be detected. In this way, each target to be detected in the airport environment is tracked separately according to its target category, which avoids track-tracking errors caused by identity exchange between targets of different categories, thereby improving the accuracy of the tracked target tracks and of the correspondingly detected guarantee events.
Drawings
Fig. 1 is a flowchart of an event detection method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of an early warning area provided in an embodiment of the present application;
Fig. 3 is a schematic diagram illustrating detection of a guarantee event according to an embodiment of the present application;
fig. 4a is a schematic diagram of corridor bridge leaning provided in an embodiment of the present application;
fig. 4b is a schematic diagram of corridor bridge removal provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an event detection device according to an embodiment of the present application;
fig. 6 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present invention based on the embodiments herein.
It should be noted that in the description of the present application, "a plurality of" is understood as "at least two". "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B both exist, or that B exists alone. "A is connected with B" may represent two cases: A is directly connected with B, or A is connected with B through C. In addition, in the description of the present application, words such as "first" and "second" are used merely to distinguish the descriptions and are not to be construed as indicating or implying relative importance or order.
In addition, in the technical scheme, the data are collected, transmitted, used and the like, and all meet the requirements of national related laws and regulations.
The design thought of the application is as follows:
In the prior art, the target track of each detected target to be detected is tracked by associating its position coordinate information across consecutive video frames collected from the airport environment. This approach easily causes identity exchange between adjacent targets to be detected, so the tracked target tracks are not accurate enough, which affects the accuracy of detected guarantee events.
To improve the accuracy of event alarming, embodiments of the present application provide an event detection method and device, electronic equipment, and a storage medium. Based on a video data stream collected from an environment to be detected, a target detection model detects the target category of each target to be detected, and for each target category a corresponding category tracking frame is used to track the target track of each such target, where each category tracking frame contains a plurality of tracking units and each tracking unit can track the target track of one target to be detected. On the one hand, each target to be detected in the airport environment is tracked separately according to its target category, which avoids identity exchange between targets of different categories and improves the accuracy of the tracked target tracks and the correspondingly detected guarantee events; on the other hand, the category tracking frames of the respective target categories track in parallel, further improving the efficiency of target-track tracking.
An event detection method provided in the embodiments of the present application will be described and illustrated in detail below with reference to the accompanying drawings.
Firstly, the event detection method provided by the application is particularly applied to the technical field of intelligent airports, and can be integrated in electronic equipment, wherein the electronic equipment can be a terminal, a server and the like.
The terminal includes, but is not limited to, a wireless terminal device, a vehicle-mounted camera, a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, or a personal computer, and may be an Android device, an iOS device, a mobile internet device (Mobile Internet Device, MID), or a smart-city terminal.
The server may be a single server or a server cluster composed of a plurality of servers, where the servers include, but are not limited to, servers for providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
In addition, the related algorithm of the event detection method provided by the application can be deployed on a network Camera (IP Camera, IPC) or a network video recorder (Network Video Recorder, NVR), or can be deployed on a decision analysis system of an intelligent airport, for example, an airport collaborative decision system (Airport Collaborative Decision Making, ACDM).
It may be understood that the event detection method in the embodiment of the present application may be performed on a terminal, or may be performed on a server, or may be performed by the terminal and the server together. The above examples should not be construed as limiting the present application.
Further, based on the above description, the environment to be detected to which the event detection method provided in the present application applies may be any real environment that requires target detection and/or target tracking, for example an airport environment or a road traffic environment; for ease of understanding, the airport environment is taken as an example below. A guarantee event is an event associated with the environment to be detected, for example a critical event of flight guarantee, and may specifically include, but is not limited to, one or more of the following events:
an in-position event for an aircraft, an out-of-position event for an aircraft, a bridge leaning event for a corridor bridge, a bridge withdrawing event for a corridor bridge, an arrival event for a guarantee vehicle, an out-of-position event for a guarantee vehicle, wherein the guarantee vehicle can include one or more of an oil vehicle, an aerial vehicle, a luggage vehicle, a guide vehicle, a tractor and the like.
Taking the above-mentioned each security event as an example, in the embodiment of the present application, an airplane, a landing gear, a corridor bridge, an oil vehicle, an aerial vehicle, a luggage vehicle, a guiding vehicle, a tractor, and the like in an airport environment are taken as each object to be detected in the airport environment, and a plurality of acquired images or acquired video frames containing each object to be detected are taken as a video data stream acquired for the airport environment.
Referring to fig. 1, the event detection method provided in the present application specifically includes:
s101: and carrying out target detection on the video data stream acquired aiming at the to-be-detected environment by adopting a target detection model to obtain respective target types of all to-be-detected targets in the to-be-detected environment.
Optionally, cameras and the like deployed in an airport environment are adopted to collect video data streams related to all objects to be detected, and the object type of each object to be detected is detected through an object detection model.
Specifically, the target class of the target to be detected may be one or more of the above-mentioned airplane, landing gear, corridor bridge, aerial tanker, luggage van, guiding vehicle, tractor, etc.
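As a sketch of this step, the per-frame detections can be grouped by target category before being handed to the per-category tracking frames of the next step. The detector interface (a callable returning `(box, confidence, class_name)` tuples) and the class names used here are assumptions for illustration only:

```python
def detect_frame(frame, model,
                 classes=("aircraft", "corridor_bridge", "guide_vehicle")):
    """Run the detector on one frame of the video data stream and group its
    output by target category, as the per-category trackers require."""
    by_class = {c: [] for c in classes}
    for box, conf, cls in model(frame):
        if cls in by_class:
            by_class[cls].append((box, conf))
    return by_class
```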
In an alternative embodiment, to improve accuracy of the prediction category, the following steps are used to train the target detection model, including:
step 1: obtaining a training sample set, wherein one training sample comprises the following steps: input information and a real class label corresponding to one training target, wherein the training target is associated with a to-be-detected environment.
Step 2: performing multiple rounds of iterative training on a preset detection model by adopting training samples in a training sample set, and outputting a target detection model when convergence conditions are met; wherein, in a round of iterative training process, the following operations are executed: and obtaining a prediction category based on input information in the training sample by adopting the detection model, and adjusting parameters of the detection model based on the self-adaptive loss value between the prediction category and the corresponding real category label.
Specifically, an image or video frame including each training target is used as a training sample of the detection model, the training targets are associated with an airport environment, for example, an image or video frame including an airplane, a landing gear, a corridor bridge, an oil vehicle, an aerial carrier, a luggage carrier, a guide vehicle, a tractor and the like is used as the training sample, and the prediction type of each training target is obtained through the detection model.
Optionally, to avoid class-prediction bias caused by the long-tail distribution of the sample data (a distribution in which a few classes account for most of the samples while many classes have very few), the present application may employ adaptive loss values to iteratively adjust the parameters of the detection model during model training.
Specifically, the method comprises the following steps of calculating the self-adaptive loss value in the training process of one round, wherein the self-adaptive loss value comprises the following steps:
step 1: based on the predicted class of one training sample, the true class label corresponding to one training sample, an original loss value between the predicted class and the true class label is calculated.
Step 2: and weighting the original loss value by adopting a preset weight set corresponding to the self-adaptive weight of the real class label to obtain the self-adaptive loss value between the predicted class and the real class label, wherein the self-adaptive weight is associated with the quantity ratio of each training sample in the training sample set corresponding to the real class label.
Based on the mode, the method and the device perform self-adaptive weighting aiming at different prediction categories, so that the model is driven to learn individuals with less training quantity, and the detection accuracy of the target detection model on training targets such as guide vehicles, conveyor vehicles and the like which are difficult to collect corresponding sample images in an airport environment is improved; corresponding to the calculated adaptive loss value, the adaptive loss function adopted in the present application is represented by the following formula (1):
$$L = -\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c}\alpha_{j}\,y_{ij}\log p_{ij} \tag{1}$$
where n is the total number of training samples, c is the number of prediction categories, y_{ij} indicates whether the true class label of the i-th training target is category j, p_{ij} is the predicted probability of category j for the i-th training sample, and α_j is the adaptive weight of category j:
$$\alpha_{j} = 1 - \frac{n_{j}}{\sum_{k=1}^{c} n_{k}}$$
where n_j is the number of training samples in the training sample set whose true class label is category j.
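The adaptive weighting above can be sketched as follows; the function names and the exact weight form (a weighted cross-entropy with α_j = 1 − n_j/Σ_k n_k) are illustrative assumptions, not the patent's exact implementation:

```python
import math

def adaptive_weights(class_counts):
    # alpha_j = 1 - n_j / sum_k n_k: rare (tail) classes receive larger weights
    total = float(sum(class_counts))
    return [1.0 - n_j / total for n_j in class_counts]

def adaptive_loss(probs, labels, class_counts):
    # Weighted cross-entropy averaged over the n training samples.
    # probs: list of per-sample class-probability lists; labels: true class indices
    alpha = adaptive_weights(class_counts)
    terms = [alpha[y] * math.log(p[y]) for p, y in zip(probs, labels)]
    return -sum(terms) / len(terms)
```

With, say, 90 "aircraft" samples and 10 "guide vehicle" samples, the guide-vehicle class receives weight 0.9 versus 0.1 for aircraft, so misclassifying the rare class is penalized more heavily.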
Based on the above manner, the present application can detect each target to be detected in the airport environment through the trained target detection model, taking the prediction category output by the model as the target category of the target to be detected.
S102: for each target category, tracking the target track of each target to be detected by adopting the set corresponding class tracking frame.
Specifically, based on each target category output by the target detection model, different set tracking frames are adopted to track different categories of targets to be detected, wherein each tracking frame can comprise a plurality of tracking units, and each tracking unit is used for tracking a target track of one target to be detected.
Illustratively, assume in the embodiment of the present application that each of the targets 1 to n to be detected corresponds to at least one of the target categories OD_Res_1 to OD_Res_K; the corresponding class tracking frames can be as shown in Table 1 below:
TABLE 1
Target to be detected | Target class | Class tracking frame
Target 1 to be detected | aircraft / OD_Res_1 | TRK_Res_1
Target 2 to be detected | aircraft / OD_Res_1 | TRK_Res_1
Target 3 to be detected | corridor bridge / OD_Res_2 | TRK_Res_2
Target 4 to be detected | guide vehicle / OD_Res_3 | TRK_Res_3
... | ... | ...
Target n to be detected | aircraft / OD_Res_1 | TRK_Res_1
Based on the mode, different types of targets to be detected are tracked by adopting different types of tracking frames, so that the problem of information exchange among different types of targets to be detected is avoided; optionally, for each target class, the following operations are performed:
Step 1: based on the respective confidence of at least one target to be detected corresponding to one target category, adopt a set confidence threshold to obtain, from the at least one target to be detected, each target whose confidence is greater than the threshold; these are the targets to be matched.
Optionally, the trained target detection model is adopted to obtain a detection result, and the target category and confidence are obtained from the detection result; the confidence threshold can be adjusted according to actual conditions.
Illustratively, taking the above targets 1, 2, and n to be detected as an example, assume a confidence threshold of 0.5; the confidences are shown in Table 2 below:
TABLE 2
Target to be detected | Target class | Confidence
Target 1 to be detected | aircraft / OD_Res_1 | 0.6
Target 2 to be detected | aircraft / OD_Res_1 | 0.3
Target n to be detected | aircraft / OD_Res_1 | 0.7
Based on the above mode, the object 1 to be detected and the object n to be detected are respectively used as the objects to be matched.
Step 2: tracking the respective target track of each target to be matched by adopting a plurality of tracking units in the corresponding class tracking frame set for the one target category.
Optionally, the class tracking frame may carry a plurality of tracking units, and each tracking unit may adopt a target tracking algorithm to track the corresponding target to be detected, for example, a Kalman filtering algorithm combined with a Hungarian matching algorithm.
Illustratively, a plurality of tracking units in the class tracking frame TRK_Res_1 are adopted to track target 1 and target n to be detected.
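A minimal, self-contained sketch of how one tracking step might associate a class's detections with its tracking units via intersection-over-union; an exhaustive assignment is used here as a stand-in for the Hungarian algorithm, and all names are illustrative:

```python
import itertools

def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(track_boxes, det_boxes, iou_min=0.3):
    # Optimal one-to-one assignment maximizing total IoU (brute force over
    # permutations; the Hungarian algorithm yields the same result efficiently).
    if len(track_boxes) > len(det_boxes):
        return [(t, d) for d, t in match_detections(det_boxes, track_boxes, iou_min)]
    best_pairs, best_score = [], -1.0
    for perm in itertools.permutations(range(len(det_boxes)), len(track_boxes)):
        pairs = list(enumerate(perm))
        score = sum(iou(track_boxes[t], det_boxes[d]) for t, d in pairs)
        if score > best_score:
            best_score, best_pairs = score, pairs
    # Discard matches whose overlap is too small to be credible
    return [(t, d) for t, d in best_pairs if iou(track_boxes[t], det_boxes[d]) >= iou_min]
```

In a full tracking unit, each matched detection would then update a Kalman filter state for that target, while unmatched tracks coast on their predictions.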
Step 3: in response to the end of tracking the respective target track of each target to be matched, adopting the remaining tracking units to track the respective target track of each remaining target to be detected among the at least one target to be detected.
Specifically, after the tracking of the targets to be matched with higher confidence has ended, the remaining tracking units in the class tracking frame are adopted to continue tracking the remaining targets to be detected with lower confidence.
Illustratively, target 2 to be detected is continuously tracked by a remaining tracking unit.
Optionally, after tracking with the remaining tracking units, it is judged whether the target track of each target to be detected under the current target category has been completely tracked; if so, the target track is retained; if a complete target track has not been obtained for a target to be detected, a newly added tracking unit under the class tracking frame is adopted to continue tracking.
Based on the above manner, the present application tracks the targets to be detected of the same target category according to the confidence in the detection result, preferentially guaranteeing the target tracks of targets with higher confidence while continuing to track targets with lower confidence (such as occluded targets), thereby ensuring the robustness of the target tracking algorithm.
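The staged, per-class scheduling of steps 1 to 3 can be sketched as follows; the dictionary keys and the 0.5 threshold are illustrative assumptions mirroring Table 2:

```python
def group_by_class(detections):
    # One class tracking frame per target class (e.g. OD_Res_1 -> TRK_Res_1)
    frames = {}
    for det in detections:
        frames.setdefault(det["cls"], []).append(det)
    return frames

def stage_targets(class_detections, conf_thresh=0.5):
    # First stage: high-confidence targets to be matched; second stage:
    # remaining (e.g. occluded) targets handled by the remaining tracking units.
    first = [d for d in class_detections if d["conf"] > conf_thresh]
    second = [d for d in class_detections if d["conf"] <= conf_thresh]
    return first, second
```

Tracking units of one class tracking frame only ever see that class's detections, which is what prevents identity exchange between targets of different categories.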
S103: detecting whether a guarantee event occurs for each target to be detected based on the positional relationship between each target track and the set early warning area.
Optionally, an early warning area is set according to an actual working area of the airport environment, and one or more early warning areas can be used for carrying out event detection on different targets to be detected.
For example, referring to fig. 2, the solid line on the airport apron is used to set an early warning area <A, B, C, D, E, F, G, H, I> for security events; the early warning area further includes a sub early warning area a for warning of the parking condition of the landing gear.
Optionally, to avoid false event reports caused by invalid targets from adjacent stands, logic operations and time constraints are adopted to detect security events for each target to be detected. Specifically, referring to fig. 3, for one target to be detected, any one of the following operations is executed:
1. Detecting, in response to the target track of one target to be detected not crossing the set early warning area, the current working state of the target; acquiring each first image associated with the target based on the current working state; and detecting whether a guarantee event occurs for the target based on the number of first images.
Specifically, the current working state refers to a state associated with a guarantee event of a target to be detected, and the first image is an image acquired after the target to be detected enters the current working state and before the target to be detected exits the current working state.
For example, assuming that one object to be inspected is a corridor bridge, the current working state of the object to be inspected may be: a bridge leaning state associated with a bridge leaning event, wherein the first image is an image acquired by the corridor bridge after entering the bridge leaning state and before exiting the bridge leaning state; for another example, the current working state of the object to be detected may be a bridge-removing state associated with a "bridge-removing event", and the first image is an image acquired by the corridor bridge after entering the bridge-removing state and before exiting the bridge-removing state.
In an alternative embodiment, the current working state of the above-mentioned one object to be detected may be obtained by detecting any one of the following operations:
(1) And acquiring a target detection frame aiming at a target to be detected by adopting a target detection model, and detecting the current working state of the target to be detected based on the aspect ratio of the target detection frame.
Specifically, the target detection frame may be a detection frame generated for the one target to be detected when the target detection model is used for detecting the target of the video data stream, or may be a corresponding detection frame generated when the target detection model is used for detecting the target of any image or video frame containing the one target to be detected, and the current working state of the one target to be detected may be determined based on the aspect ratio of the target detection frame and the set aspect ratio threshold.
For example, referring to fig. 4a, assuming that one target to be detected is a corridor bridge and the aspect ratio threshold set for it is 1.5, the target detection model is adopted to perform target detection on an image containing the target; since the aspect ratio of the generated target detection frame is greater than the set threshold, the current working state of the target is determined to be the bridge leaning state.
For another example, referring to fig. 4b, under the same assumptions, since the aspect ratio of the generated target detection frame is less than or equal to the set threshold, the current working state of the target is determined to be the bridge-removing state.
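A minimal sketch of the aspect-ratio check above; treating the ratio as width/height and reusing the 1.5 threshold from the example are assumptions for illustration:

```python
def bridge_state(det_box, ratio_thresh=1.5):
    # det_box as (x1, y1, x2, y2); ratio = width / height of the detection frame
    x1, y1, x2, y2 = det_box
    ratio = (x2 - x1) / (y2 - y1)
    # An extended (docked) bridge appears wider than tall in the example figures
    return "bridge leaning" if ratio > ratio_thresh else "bridge removing"
```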
(2) And acquiring a target detection frame aiming at a target to be detected by adopting a target detection model, and detecting the current working state of the target to be detected based on the position relation between the target detection frame and the set rule frame.
Specifically, a corresponding rule frame is set by adopting a real working area of a target to be detected in a detection environment, and one or more rule frames can be adopted.
For example, assuming that one target to be detected is an airplane, and the set rule frame is the sub-early warning area a shown in fig. 2 for the one target to be detected, a target detection model is adopted to detect an image containing the target to be detected, wherein if an intersection exists between the generated target detection frame and the set rule frame, the current working state of the target to be detected is determined to be in-place, and if no intersection exists between the generated target detection frame and the set rule frame, the current working state of the target to be detected is determined to be out-of-place.
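The intersection test between the target detection frame and the rule frame can be sketched as a standard axis-aligned box overlap check (all names are illustrative):

```python
def boxes_intersect(a, b):
    # Axis-aligned boxes as (x1, y1, x2, y2); True if they share any area
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def aircraft_state(det_box, rule_box):
    # In place if the detection frame overlaps the rule frame (sub-area a in fig. 2)
    return "in place" if boxes_intersect(det_box, rule_box) else "out of place"
```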
Based on the above manner, the acquired images associated with the corresponding current working state are taken as first images of the target to be detected, and a set first threshold is adopted to judge, based on the number of first images, whether a security event occurs: if the number of first images is greater than the first threshold, it is determined that a security event occurs for the target; otherwise, it is determined that no security event occurs.
For example, assuming that the object to be inspected is a corridor bridge and the number of corresponding first images acquired in a bridge leaning state for the corridor bridge is 30, it is determined that a corresponding "bridge leaning event" does not occur for the corridor bridge based on the set first threshold being 50.
2. Acquiring, in response to the target track of one target to be detected entering the set early warning area, each second image associated with the target, and detecting whether a guarantee event occurs for the target based on the number of second images.
Specifically, the second image is an image acquired after a target to be detected enters the early warning area and before the target to be detected exits the early warning area.
For example, assuming that one object to be detected is an airplane, after the object to be detected enters the early warning area and before the object to be detected exits the early warning area, an image located in the early warning area is used as a second image.
And judging whether a security event occurs or not by adopting a set second threshold value based on the number of the second images in the mode, wherein if the number of the second images is larger than the second threshold value, determining that the security event occurs to the target to be detected, otherwise, determining that the security event does not occur.
For example, assuming that the object to be detected is an aircraft, and the number of corresponding second images acquired after the aircraft enters the early warning area and before the aircraft does not exit the early warning area is 60, determining that a corresponding "in-place event" occurs in the aircraft based on the set second threshold value being 40.
3. Acquiring, in response to the target track of one target to be detected exiting the set early warning area, each third image associated with the target, and detecting whether a guarantee event occurs for the target based on the number of third images.
Specifically, the third image is an image acquired after the target to be detected exits the early warning area and before it enters the early warning area again.
For example, assuming that one target to be detected is an airplane, images located outside the early warning area, acquired after the target exits the early warning area and before it enters the early warning area again, are used as third images.
And judging whether a security event occurs or not by adopting a set third threshold value based on the number of the third images in the mode, wherein if the number of the third images is larger than the third threshold value, determining that the security event occurs to the object to be detected, otherwise, determining that the security event does not occur.
For example, assuming that the object to be detected is an aircraft, and the number of corresponding third images acquired after the aircraft exits the early warning area and before the aircraft enters the early warning area again is 50, determining that the aircraft has a corresponding "dislocation event" based on the set third threshold value being 30.
In an alternative embodiment, for a target track entering and/or exiting the set early warning area, whether the corresponding security event occurs can be rapidly detected through a logic operation state equation, shown in the following formula:
$$\mathrm{Flag}_{event} = \mathrm{Flag}_{trip}\;\&\;\mathrm{Num}_{Frame}$$
where (x, y) ∈ Area indicates that the target to be detected is inside the set early warning area and (x, y) ∉ Area indicates that it is outside; Flag_trip characterizes whether an entry or exit relation occurs between the target track of the target to be detected and the early warning area; and Num_Frame characterizes the size relation between the number of corresponding images and the preset threshold.
Specifically, when the target to be detected is inside the set early warning area: if the target track has entered the early warning area, the value of Flag_trip is 1; if the target track has not entered the early warning area, the value of Flag_trip is 0. When the target to be detected is outside the set early warning area: if the target track has exited the early warning area, the value of Flag_trip is 1; if the target track has not exited the early warning area, the value of Flag_trip is 0.
When the target track of the target to be detected enters the set early warning area: if the number of acquired second images is greater than the preset second threshold, the value of Num_Frame is 1; otherwise, the value of Num_Frame is 0. When the target track exits the set early warning area: if the number of acquired third images is greater than the preset third threshold, the value of Num_Frame is 1; otherwise, the value of Num_Frame is 0.
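The logic AND of the crossing flag and the frame-count constraint can be sketched as follows; the variable names mirror Flag_trip and Num_Frame, and the thresholds are the illustrative ones from the examples above:

```python
def detect_crossing_event(crossed, image_count, count_threshold):
    # Flag_trip: 1 if the track has entered/exited the early warning area
    flag_trip = 1 if crossed else 0
    # Num_Frame: 1 if enough images were acquired to satisfy the time constraint
    num_frame = 1 if image_count > count_threshold else 0
    # A security event is reported only when both conditions hold
    return bool(flag_trip & num_frame)
```

With the "in-place event" example above (60 second images against a threshold of 40), the event is reported; a brief incursion by an adjacent-stand target that yields too few images is suppressed.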
Based on the above manner, time constraints are imposed, through the number of acquired corresponding images, on targets to be detected entering or exiting the early warning area, thereby avoiding security-event detection errors caused by targets from adjacent stands mistakenly entering the area.
In an alternative embodiment, after detecting whether each target to be detected has a security event based on the position relationship between each target track and the set pre-warning area, the method further includes:
and responding to the detection of the target to be alarmed, and alarming a guarantee event corresponding to the target to be alarmed.
The target to be alerted is a target to be detected, wherein a guarantee event occurs to the target to be alerted.
For example, assuming that the targets to be detected in the environment to be detected are target 1, target 2, ..., and target 10, each target for which a guarantee event is detected is used as a target to be alerted, so that the event detection situation in the environment to be detected is obtained in real time.
Specific examples will be provided below to describe in detail the event detection method provided in the present application.
1) Detection of an aircraft in event and out of position event.
In a specific embodiment, assuming that a target to be detected is an airplane, driving into a set early warning area aiming at a target track of the target to be detected, acquiring corresponding second images, and determining that an airplane positioning event occurs and triggering an airplane positioning alarm when the number of the second images is greater than a preset second threshold value; similarly, aiming at the target to be detected, a set early warning area is driven out of the target track, corresponding third images are obtained, and when the number of the third images is larger than a preset third threshold value, the aircraft is determined to have an off-position event and an aircraft off-position warning is triggered.
2) And detecting a bridge leaning event and a bridge removing event of the corridor bridge.
In a specific embodiment, assuming that a target to be detected is detected as a corridor bridge, for the case where its current working state is the bridge leaning state, corresponding first images are acquired, and when the number of first images is greater than the preset first threshold, it is determined that a bridge leaning event occurs for the corridor bridge and a corridor bridge leaning alarm is triggered; similarly, for the case where its current working state is the bridge-removing state, corresponding first images are acquired, and when the number of first images is greater than the preset first threshold, it is determined that a bridge-removing event occurs for the corridor bridge and a corridor bridge removal alarm is triggered.
3) The method aims at detecting the arrival event and the departure event of the guarantee vehicle.
In a specific embodiment, assuming that a target to be detected is detected as a guarantee vehicle, wherein the guarantee vehicle can be any one of an oil vehicle, an aerial vehicle, a luggage vehicle, a guide vehicle, a tractor and the like, driving into a set early warning area aiming at a target track of the target to be detected, acquiring corresponding second images, and determining that the guarantee vehicle has an arrival event and triggering a guarantee vehicle arrival alarm when the number of the second images is larger than a preset second threshold; similarly, aiming at the target to be detected, a set early warning area is driven out of the target track, corresponding third images are obtained, and when the number of the third images is larger than a preset third threshold value, the exit event of the guarantee vehicle is determined and the exit warning of the guarantee vehicle is triggered.
Based on the same technical concept, the embodiment of the application also provides an event detection device, which is used for realizing the above-mentioned method flow of the embodiment of the application. Referring to fig. 5, the apparatus includes: a target detection module 501, a trajectory tracking module 502, and an event detection module 503, wherein:
the target detection module 501 is configured to perform target detection on a video data stream collected for a to-be-detected environment by using a target detection model, so as to obtain respective target categories of targets to be detected in the to-be-detected environment;
the track tracking module 502 is configured to track, for each target class, a respective target track of each target to be detected by using a set corresponding class tracking frame, where each class tracking frame includes a plurality of tracking units, and each tracking unit is configured to track a target track of one target to be detected;
the event detection module 503 is configured to detect whether each target to be detected has a security event based on a positional relationship between each target track and a set early warning area, where the security event is an event associated with each target to be detected in the environment to be detected.
In an alternative embodiment, the object detection model is trained by:
Obtaining a training sample set, wherein one training sample comprises the following steps: input information and a real class label corresponding to one training target, wherein the training target is associated with the environment to be detected;
performing multiple rounds of iterative training on a preset detection model by adopting training samples in the training sample set, and outputting a target detection model when convergence conditions are met; wherein, in a round of iterative training process, the following operations are executed:
and acquiring a prediction category based on input information in a training sample by adopting the detection model, and adjusting parameters of the detection model based on a self-adaptive loss value between the prediction category and a corresponding real category label.
In an alternative embodiment, the adaptive loss value is calculated by:
calculating an original loss value between a predicted class and a true class label corresponding to one training sample based on the predicted class of the one training sample;
and weighting the original loss value with the preset adaptive weight corresponding to the true class label to obtain the adaptive loss value between the predicted class and the true class label, wherein the adaptive weight is associated with the quantity ratio, in the training sample set, of the training samples corresponding to the true class label.
In an optional implementation manner, for each target category, a set corresponding category tracking frame is used to track a respective target track of each target to be detected, and the track tracking module 502 is configured to:
for each target category, the following operations are respectively executed:
based on the respective confidence coefficient of at least one target to be detected corresponding to one target category, acquiring each target to be detected, of which the corresponding confidence coefficient is greater than the confidence coefficient threshold, from the at least one target to be detected by adopting a set confidence coefficient threshold, wherein the targets to be detected are targets to be matched respectively;
tracking respective target tracks of the targets to be matched by adopting a plurality of tracking units in corresponding class tracking frames, which are arranged for one target class;
and in response to the end of tracking the respective target track of each target to be matched, tracking the respective target track of each target to be detected in the at least one target to be detected by adopting the residual tracking units.
In an alternative embodiment, after the tracking the respective target track of each remaining target to be inspected in the at least one target to be inspected, the track tracking module 502 is further configured to:
Judging whether the tracking of the respective target track of each remaining target to be detected is finished; wherein:
if yes, keeping track of each target to be detected;
otherwise, adopting a newly added tracking unit to continuously track respective target tracks of the remaining targets to be detected.
In an optional implementation manner, in detecting whether a security event occurs for each target to be detected based on the positional relationship between each target track and the set early warning area, the event detection module 503 is configured to:
for each object to be detected, executing any one of the following operations respectively:
detecting a current working state of one target to be detected in response to a target track of the one target to be detected not crossing a set early warning area, acquiring first images related to the one target to be detected based on the current working state, and detecting whether a guarantee event occurs to the one target to be detected based on the number of the first images, wherein the first images are images acquired after the one target to be detected enters the current working state and before the one target to be detected exits the current working state, and the current working state is a state related to the guarantee event of the one target to be detected;
Responding to the target track of one target to be detected to enter a set early warning area, acquiring second images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the second images, wherein the second images are images acquired after the target track of the one target to be detected enters the early warning area and before the target track of the one target to be detected exits the early warning area;
and responding to the target track of one target to be detected to exit a set early warning area, acquiring all third images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the third images, wherein the third images are acquired after the target track of the one target to be detected exits the early warning area and before the target track of the one target to be detected enters the early warning area.
In an alternative embodiment, the detecting the current working state of the object to be detected, the event detecting module 503 is configured to perform any one of the following operations:
acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the aspect ratio of the target detection frame;
And acquiring a target detection frame aiming at the target to be detected by adopting the target detection model, and detecting the current working state of the target to be detected based on the position relationship between the target detection frame and a set rule frame.
In an optional embodiment, after detecting whether the security event occurs to each of the objects to be detected based on the position relationship between each of the object tracks and the set pre-warning area, the event detection module 503 is further configured to:
and responding to the detection of a target to be alerted, and alerting a security event corresponding to the target to be alerted, wherein the target to be alerted is a target to be detected in which the security event occurs.
Based on the same inventive concept as the above-mentioned application embodiments, an electronic device is also provided in the application embodiments, and the electronic device may be used for event detection. In one embodiment, the electronic device may be a server, a terminal device, or other electronic device. In this embodiment, the electronic device may be configured as shown in fig. 6, including a memory 601, a communication interface 603, and one or more processors 602.
A memory 601 for storing a computer program for execution by the processor 602. The memory 601 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, programs required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 601 may be a volatile memory such as a random-access memory (RAM); the memory 601 may also be a non-volatile memory such as a read-only memory, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or it may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 601 may also be a combination of the above memories.
The processor 602 may include one or more central processing units (Central Processing Unit, CPU) or digital processing units, etc. A processor 602 for implementing the above-described event detection method when calling the computer program stored in the memory 601.
The communication interface 603 is used for communication with terminal devices and other servers.
The specific connection medium between the memory 601, the communication interface 603, and the processor 602 is not limited in the embodiments of the present application. In the embodiment of the present application, the memory 601 and the processor 602 are connected through the bus 604 in fig. 6. The bus 604 is shown with a thick line in fig. 6; the connection manner between other components is only schematically illustrated and is not limited thereto. The bus 604 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean there is only one bus or one type of bus.
Based on the same inventive concept, the embodiments of the present application also provide a storage medium storing computer instructions that, when executed on a computer, cause the computer to perform an event detection method as previously discussed.
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or suggest that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
The embodiments of the present application provide an event detection method and apparatus, an electronic device, and a storage medium. Based on a video data stream collected for an environment to be detected, a target detection model obtains the respective target category of each target to be detected. For each target category, a corresponding category tracking frame set for that category is used to track the target tracks of the targets to be detected, where each category tracking frame comprises a plurality of tracking units and each tracking unit can track the target track of one target to be detected. In this manner, the targets to be detected in an airport environment are tracked separately according to their target categories, which avoids track-tracking errors caused by identity exchanges between targets of different categories, and thereby improves the accuracy of the tracked target tracks and of the correspondingly detected guarantee events.
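Read as an algorithm, the per-category, staged tracking described above can be sketched as follows. All class and function names (`TrackingUnit`, `CategoryTracker`, `track_by_category`) and the data layout are illustrative assumptions for exposition, not the applicant's implementation.

```python
from collections import defaultdict


class TrackingUnit:
    """Tracks the target track of a single target to be detected (sketch)."""

    def __init__(self, target_id):
        self.target_id = target_id
        self.trajectory = []  # list of (x, y) positions, one per frame

    def update(self, position):
        self.trajectory.append(position)


class CategoryTracker:
    """One category tracking frame, holding multiple tracking units."""

    def __init__(self):
        self.units = {}  # target_id -> TrackingUnit

    def update(self, detections):
        # detections: list of (target_id, position) for this category only
        for target_id, position in detections:
            unit = self.units.setdefault(target_id, TrackingUnit(target_id))
            unit.update(position)


def track_by_category(frame_detections):
    """frame_detections: iterable of (category, target_id, position).

    Each detection is routed to its own category's tracker, so target
    identities are never exchanged between different categories.
    """
    trackers = defaultdict(CategoryTracker)
    for category, target_id, position in frame_detections:
        trackers[category].update([(target_id, position)])
    return trackers
```

Because every category owns a separate tracker, a "vehicle" track and a "person" track can never be confused even if their detections overlap in the image.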
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected over the Internet using an Internet service provider).
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (11)

1. An event detection method, comprising:
performing target detection, using a target detection model, on a video data stream acquired for an environment to be detected, to obtain the respective target category of each target to be detected in the environment to be detected;
for each target category, tracking the respective target tracks of the targets to be detected using a corresponding category tracking frame set for that target category, wherein each category tracking frame comprises a plurality of tracking units, and each tracking unit is used for tracking the target track of one target to be detected;
and detecting whether a guarantee event occurs for each target to be detected based on the positional relationship between each target track and a set early warning area, wherein the guarantee event is an event associated with each target to be detected in the environment to be detected.
2. The method of claim 1, wherein the object detection model is trained by:
obtaining a training sample set, wherein one training sample comprises: input information and a real class label corresponding to one training target, the training target being associated with the environment to be detected;
performing multiple rounds of iterative training on a preset detection model by adopting training samples in the training sample set, and outputting a target detection model when convergence conditions are met; wherein, in a round of iterative training process, the following operations are executed:
and acquiring a prediction category based on input information in a training sample by adopting the detection model, and adjusting parameters of the detection model based on a self-adaptive loss value between the prediction category and a corresponding real category label.
3. The method of claim 2, wherein the adaptive loss value is calculated by:
calculating, for one training sample, an original loss value between the predicted class of the training sample and its corresponding real class label;
and weighting the original loss value with the self-adaptive weight preset for the real class label to obtain the self-adaptive loss value between the predicted class and the real class label, wherein the self-adaptive weight is associated with the proportion of training samples in the training sample set that correspond to the real class label.
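One common way to realize the adaptive weighting of claim 3 is to scale a standard loss (here cross-entropy) by a factor inversely proportional to each class's share of the training set, so rare classes contribute more. The patent does not fix the exact formula, so this sketch, including the names `adaptive_weights` and `adaptive_loss`, is an assumed form for illustration only.

```python
import math
from collections import Counter


def adaptive_weights(labels):
    """One weight per class, inversely proportional to the class's share
    of the training set (an assumed form of the 'self-adaptive weight')."""
    counts = Counter(labels)
    total = len(labels)
    # A class holding 1/k of the data in a balanced k-class set gets weight 1.0;
    # rarer classes get weights > 1, frequent classes < 1.
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}


def adaptive_loss(pred_probs, true_label, weights):
    """Original loss (cross-entropy on the true class's predicted
    probability) weighted by the true label's adaptive weight."""
    original = -math.log(pred_probs[true_label])
    return weights[true_label] * original
```

With three "car" samples and one "person" sample, "person" receives a weight of 2.0 and "car" about 0.67, so misclassifying the rare class is penalized more heavily.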
4. A method according to any one of claims 1 to 3, wherein tracking the respective target tracks of the targets to be detected using the corresponding category tracking frames set for each target category comprises:
for each target category, the following operations are respectively executed:
selecting, based on the respective confidence of at least one target to be detected corresponding to one target category and using a set confidence threshold, each target to be detected whose confidence is greater than the threshold from the at least one target to be detected, the selected targets being the targets to be matched;
tracking the respective target tracks of the targets to be matched using the plurality of tracking units in the category tracking frame set for the one target category;
and in response to the end of tracking the respective target tracks of the targets to be matched, tracking the respective target tracks of the remaining targets to be detected among the at least one target to be detected using the remaining tracking units.
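The first stage of claim 4 — splitting one category's detections at a confidence threshold into targets matched first and targets left for the remaining tracking units — can be sketched as follows. The dictionary layout of a detection and the name `split_by_confidence` are assumptions, not taken from the patent.

```python
def split_by_confidence(detections, threshold=0.5):
    """Split one category's detections into high-confidence targets
    (matched and tracked first) and the remaining targets (tracked
    afterwards with the remaining or newly added tracking units).

    detections: list of dicts with at least a 'conf' key.
    """
    to_match = [d for d in detections if d["conf"] > threshold]
    remaining = [d for d in detections if d["conf"] <= threshold]
    return to_match, remaining
```

This mirrors the two-stage association used by trackers such as ByteTrack, where low-confidence detections are matched only after the high-confidence ones have claimed their tracking units.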
5. The method of claim 4, wherein tracking the respective target tracks of the remaining targets to be detected among the at least one target to be detected further comprises:
judging whether the tracking of the respective target track of each remaining target to be detected is finished; wherein:
if yes, keeping track of each target to be detected;
otherwise, adopting a newly added tracking unit to continuously track respective target tracks of the remaining targets to be detected.
6. A method according to any one of claims 1 to 3, wherein detecting whether a guarantee event occurs for each target to be detected based on the positional relationship between each target track and the set early warning area comprises:
for each object to be detected, executing any one of the following operations respectively:
detecting a current working state of one target to be detected in response to a target track of the one target to be detected not crossing a set early warning area, acquiring first images related to the one target to be detected based on the current working state, and detecting whether a guarantee event occurs to the one target to be detected based on the number of the first images, wherein the first images are images acquired after the one target to be detected enters the current working state and before the one target to be detected exits the current working state, and the current working state is a state related to the guarantee event of the one target to be detected;
responding to the target track of one target to be detected to enter a set early warning area, acquiring second images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the second images, wherein the second images are images acquired after the target track of the one target to be detected enters the early warning area and before the target track of the one target to be detected exits the early warning area;
and responding to the target track of one target to be detected to exit a set early warning area, acquiring all third images related to the one target to be detected, and detecting whether a guarantee event occurs to the one target to be detected or not based on the number of the third images, wherein the third images are acquired after the target track of the one target to be detected exits the early warning area and before the target track of the one target to be detected enters the early warning area.
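The three cases of claim 6 hinge on how a target track relates to the early warning area. A minimal sketch, assuming a rectangular area and a track given as a list of points; the patent fixes neither representation, and the function and state names are illustrative.

```python
def area_events(trajectory, area):
    """Classify a target track against a rectangular early warning area.

    trajectory: list of (x, y) points in chronological order.
    area: rectangle as (x1, y1, x2, y2).
    Returns 'outside' (never crossed the area), 'entered' (currently
    inside), or 'exited' (was inside but has left).
    """
    rx1, ry1, rx2, ry2 = area
    inside = [rx1 <= x <= rx2 and ry1 <= y <= ry2 for x, y in trajectory]
    if not any(inside):
        return "outside"   # first case: track does not cross the area
    if inside[-1]:
        return "entered"   # second case: track has entered the area
    return "exited"        # third case: track has left the area
```

Each outcome corresponds to one branch of claim 6, which then gathers the first, second, or third images respectively and counts them to decide whether a guarantee event has occurred.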
7. The method of claim 6, wherein detecting the current operating state of the one object to be inspected comprises any one of:
using the target detection model to acquire a target detection frame for the one target to be detected, and detecting the current working state of the one target to be detected based on the aspect ratio of the target detection frame;
and using the target detection model to acquire a target detection frame for the one target to be detected, and detecting the current working state of the one target to be detected based on the positional relationship between the target detection frame and a set rule frame.
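The two alternatives of claim 7 can be sketched as follows, assuming axis-aligned boxes given as `(x1, y1, x2, y2)`. The state names, the aspect-ratio threshold, and the use of the box centre for the rule-frame test are all assumptions for illustration, not taken from the patent.

```python
def state_from_aspect_ratio(box, ratio_threshold=1.5):
    """Infer the working state from the detection frame's width/height
    ratio (threshold is illustrative)."""
    x1, y1, x2, y2 = box
    aspect = (x2 - x1) / (y2 - y1)
    return "working" if aspect > ratio_threshold else "idle"


def state_from_rule_box(box, rule_box):
    """Infer the working state from whether the detection frame's centre
    lies inside a preset rule frame."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    rx1, ry1, rx2, ry2 = rule_box
    return "working" if rx1 <= cx <= rx2 and ry1 <= cy <= ry2 else "idle"
```

An aspect-ratio test suits targets whose shape changes with state (e.g. a vehicle deploying equipment), while the rule-frame test suits states defined by position (e.g. a vehicle docked at a stand).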
8. The method of claim 7, wherein detecting whether a guarantee event occurs for each target to be detected based on the positional relationship between each target track and the set early warning area further comprises:
and in response to detecting a target to be alerted, raising an alert for the guarantee event corresponding to the target to be alerted, wherein the target to be alerted is a target to be detected for which a guarantee event has occurred.
9. An event detection apparatus, comprising:
the target detection module is used for performing target detection, using a target detection model, on a video data stream acquired for an environment to be detected, to obtain the respective target category of each target to be detected in the environment to be detected;
the track tracking module is used for tracking, for each target category, the respective target tracks of the targets to be detected using a corresponding category tracking frame set for that target category, wherein each category tracking frame comprises a plurality of tracking units, and each tracking unit is used for tracking the target track of one target to be detected;
the event detection module is used for detecting whether each target to be detected has a guarantee event or not based on the position relation between each target track and a set early warning area, wherein the guarantee event is an event associated with each target to be detected in the environment to be detected.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-8 when executing the computer program.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-8.
CN202211622667.6A 2022-12-16 2022-12-16 Event detection method and device, electronic equipment and storage medium Pending CN116012406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211622667.6A CN116012406A (en) 2022-12-16 2022-12-16 Event detection method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116012406A true CN116012406A (en) 2023-04-25

Family

ID=86037053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211622667.6A Pending CN116012406A (en) 2022-12-16 2022-12-16 Event detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116012406A (en)

Similar Documents

Publication Publication Date Title
US9583000B2 (en) Vehicle-based abnormal travel event detecting and reporting
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
JP2020537262A (en) Methods and equipment for automated monitoring systems
US20130093895A1 (en) System for collision prediction and traffic violation detection
DE112013002233B4 (en) System, method and program product for providing population-sensitive weather forecasts
Xie et al. Spatial analysis of highway incident durations in the context of Hurricane Sandy
DE102017129076A1 (en) AUTONOMOUS SCHOOLBUS
CN109360362A (en) A kind of railway video monitoring recognition methods, system and computer-readable medium
CN112330915B (en) Unmanned aerial vehicle forest fire prevention early warning method and system, electronic equipment and storage medium
US11741726B2 (en) Lane line detection method, electronic device, and computer storage medium
Chen et al. Radar: Road obstacle identification for disaster response leveraging cross-domain urban data
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN111523362A (en) Data analysis method and device based on electronic purse net and electronic equipment
CN113515968A (en) Method, device, equipment and medium for detecting street abnormal event
CN109493606A (en) The recognition methods and system of parking are disobeyed on a kind of highway
CN113673311A (en) Traffic abnormal event detection method, equipment and computer storage medium
CN115880928A (en) Real-time updating method, device and equipment for automatic driving high-precision map and storage medium
CN112528711B (en) Method and device for processing information
CN117611795A (en) Target detection method and model training method based on multi-task AI large model
CN116012406A (en) Event detection method and device, electronic equipment and storage medium
CN114492544B (en) Model training method and device and traffic incident occurrence probability evaluation method and device
CN116630888A (en) Unmanned aerial vehicle monitoring method, unmanned aerial vehicle monitoring device, electronic equipment and storage medium
CN111064924A (en) Video monitoring method and system based on artificial intelligence
CN112434901B (en) Intelligent re-decision method and system for traffic patrol scheme of unmanned aerial vehicle
CN115019242A (en) Abnormal event detection method and device for traffic scene and processing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination