CN114022517A - Method for detecting high-altitude parabolic object based on motion trajectory analysis monitoring video - Google Patents
- Publication number
- CN114022517A CN114022517A CN202111427633.7A CN202111427633A CN114022517A CN 114022517 A CN114022517 A CN 114022517A CN 202111427633 A CN202111427633 A CN 202111427633A CN 114022517 A CN114022517 A CN 114022517A
- Authority
- CN
- China
- Prior art keywords
- target
- filtering
- track
- frame
- trajectory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of video analysis and processing, and in particular to a method for detecting high-altitude thrown objects ("high-altitude parabolic objects") from surveillance video based on motion trajectory analysis. The method resolves trajectory bifurcation and convergence and incorporates deep learning. It comprises the following steps: first, moving target detection; second, algorithm sleep judgment; third, moving target tracking; fourth, fall judgment; fifth, false alarm filtering. The invention adopts an implicit trajectory representation: each target in the current frame may match one or more targets in the previous frame, and each target in the previous frame may likewise match one or more targets in the current frame, so the motion trajectories of all targets form a directed graph rather than a set of independent chains. This representation solves the problem of trajectory divergence and convergence.
Description
Technical Field
The invention relates to the technical field of video analysis and processing, and in particular to a method for detecting high-altitude thrown objects from surveillance video based on motion trajectory analysis.
Background
With the rapid advance of domestic infrastructure, building heights keep setting new records and urban building density keeps rising, which brings the safety hazard of objects thrown from height. When such an incident occurs, even if someone is injured, the cause and origin of the event are hard to trace, posing a serious threat to public safety. Even when no personal injury results, garbage thrown from height causes environmental pollution. In the prior art, a camera is usually installed to monitor the area below a building; video surveillance is performed through the camera, and when a problem occurs the recorded video is searched to trace the origin of the event. This approach, however, only supports after-the-fact tracing: the problem must first be discovered, and only then can the relevant personnel query the footage to learn what happened. It provides no real-time monitoring or detection of high-altitude throwing.
The prior art cannot handle target splitting, target overlapping, and similar problems during trajectory tracking, which causes missed or false detections of high-altitude throwing events. Against adverse factors such as rain and snow or violent camera shake in strong wind, the invention adopts a simple rule: when such factors are detected, the algorithm sleeps to avoid false detections. The prior art usually judges whether an object is falling by a simple rule, such as fitting a parabola, but the trajectory of a real falling object in the surveillance picture may not follow a parabola (because of projection deformation, wind, and so on). Against common false alarm targets such as birds, dust, and flying insects, the invention introduces a deep-learning-based filter to further reduce false alarms.
Therefore, the invention judges and filters trajectories with a set of statistics that capture the regularities of throwing trajectories and of common false alarm trajectories, reducing both missed and false alarms, and proposes a directed-graph trajectory representation that handles trajectory splitting and merging well.
Disclosure of Invention
In view of the above, the present invention provides a method for detecting high-altitude thrown objects from surveillance video based on motion trajectory analysis, which solves the problem of trajectory bifurcation and convergence and incorporates deep learning.
In order to achieve this purpose, the invention provides the following technical scheme. A method for detecting high-altitude thrown objects from surveillance video based on motion trajectory analysis comprises the following steps:
first, moving target detection: detect moving targets with a three-frame difference method;
second, algorithm sleep judgment: when a large number of foreground targets appear (for example in rain or snow, or when the camera shakes or has been moved), the algorithm sleeps for a period of time, avoiding both false detections caused by the many foreground targets and excessive load on the hardware;
third, moving target tracking: a multi-target tracking algorithm based on SDE (Separate Detection and Embedding), which first detects targets over multiple frames and then associates the same target across frames by similarity to generate its motion trajectory;
fourth, fall judgment: 4.1 judge whether a target is falling based on its most recent short trajectory (fall distance D within T frames), converting image coordinates to world coordinates for the calculation to obtain a more accurate result; 4.2 filter the trajectory based on statistics of the short trajectory to eliminate false alarms;
fifth, false alarm filtering: 5.1 full-trajectory alarm filtering: filter false alarms based on the complete trajectory after the target leaves the field of view; 5.2 false alarm target filtering: filter false alarms based on target screenshots along the trajectory.
Compared with the prior art, the above technical scheme adopts an implicit trajectory representation: each target in the current frame may match one or more targets in the previous frame, and each target in the previous frame may likewise match one or more targets in the current frame; the motion trajectories of all targets are represented as a directed graph rather than a set of independent chains, which solves the problem of trajectory divergence and convergence. In the fall judgment, every trajectory path starting from the current target is examined, so a target is not lost when a single trajectory breaks. The fall direction is compared, via scene calibration information, with the real-world ground vertical rather than with the vertical axis of the image, so the method adapts to image projection deformation, non-vertical camera installation, and similar conditions. The alarm filtering strategy combining short trajectories and full trajectories effectively filters false alarms while handling complex trajectories that simple parabola fitting cannot, such as light objects (cardboard boxes and the like) blown by wind, or objects bouncing back and forth off a wall. Finally, target screenshots are judged with a deep-learning image classification model, effectively filtering false alarms such as birds and winged insects.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses a method for detecting high-altitude thrown objects from surveillance video based on motion trajectory analysis, comprising the following steps:
first, moving target detection: detect moving targets with a three-frame difference method;
second, algorithm sleep judgment: when a large number of foreground targets appear (for example in rain or snow, or when the camera shakes or has been moved), the algorithm sleeps for a period of time, avoiding both false detections caused by the many foreground targets and excessive load on the hardware;
third, moving target tracking: a multi-target tracking algorithm based on SDE (Separate Detection and Embedding), which first detects targets over multiple frames and then associates the same target across frames by similarity to generate its motion trajectory;
fourth, fall judgment: 4.1 judge whether a target is falling based on its most recent short trajectory (fall distance D within T frames), converting image coordinates to world coordinates for the calculation to obtain a more accurate result; 4.2 filter the trajectory based on statistics of the short trajectory to eliminate false alarms;
fifth, false alarm filtering: 5.1 full-trajectory alarm filtering: filter false alarms based on the complete trajectory after the target leaves the field of view; 5.2 false alarm target filtering: filter false alarms based on target screenshots along the trajectory.
Further, moving target detection uses a three-frame difference method: from three consecutive frames I(t−1), I(t), and I(t+1), two difference images D0 = |I(t) − I(t−1)| and D1 = |I(t+1) − I(t)| are computed; D0 and D1 are each binarized and then combined by a union (OR) operation to obtain the contour of the target at time t. Connected-component analysis then finds and labels each foreground target and yields its bounding box. This process requires a binarization threshold for the frame-difference images, used to extract target contours, and a target size threshold to exclude interfering targets such as fallen leaves, birds, and trash moving close to the camera.
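The three-frame difference step can be sketched in plain Python. This is a toy implementation on nested lists of grayscale values; a real system would use OpenCV array operations, and the threshold value here is illustrative:

```python
def three_frame_diff(prev, cur, nxt, thresh=15):
    """Binarize |cur - prev| and |nxt - cur|, then take their union (OR)
    to obtain the moving-object mask at the middle frame."""
    h, w = len(cur), len(cur[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d0 = abs(cur[y][x] - prev[y][x]) > thresh   # binarized D0
            d1 = abs(nxt[y][x] - cur[y][x]) > thresh    # binarized D1
            mask[y][x] = 1 if (d0 or d1) else 0         # union operation
    return mask
```

Connected-component analysis and the size threshold would then be applied to this mask to obtain bounding boxes.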
Further, the algorithm sleep judgment suspends analysis when the number of targets or their area ratio is too high, avoiding the flood of false alarms that a large number of foreground moving targets (caused by camera shake, rain and snow weather, and the like) would otherwise produce.
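A minimal sketch of this sleep judgment; the thresholds are hypothetical, since the patent does not give concrete values:

```python
def should_sleep(boxes, frame_w, frame_h, max_targets=50, max_area_ratio=0.3):
    """Suspend analysis when there are too many foreground targets or they
    cover too much of the frame (rain/snow, camera shake, a moved camera).
    boxes: list of (x, y, w, h) foreground bounding boxes."""
    fg_area = sum(w * h for _, _, w, h in boxes)
    return len(boxes) > max_targets or fg_area / (frame_w * frame_h) > max_area_ratio
```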
Further, on the premise that the principle of free-fall motion and the camera observation position are known, moving target tracking uses Kalman filtering to predict, at the current time t, the position of each target of the previous frame F0 in the current frame F1; targets in adjacent frames are then associated through a matching computation. The matching proceeds as follows: all target boxes in the two adjacent frames' box sets BBoxesF0 (obtained from the Kalman filter prediction) and BBoxesF1 are first converted into expanded boxes; IoU (Intersection over Union, i.e., the overlap area divided by the union area) is then computed between every pair of expanded boxes from the two sets as the matching score. Expanded boxes are used instead of the original target boxes because in a high-altitude throwing scene the target boxes are small and the displacement between adjacent frames is usually much larger than the box itself, so the original boxes of consecutive frames usually do not overlap at all.
In addition to the expanded-box IoU, the invention imposes the following matching constraints:
the displacement between target frames cannot exceed the maximum distance track _ dist.
The displacement between target frames cannot exceed track _ dist _ to _ speed times the target speed (calculated based on the most recent trajectory).
The area change of the target frame (large target frame area divided by small target frame area) cannot exceed track _ size _ change.
With these constraints, many erroneous trajectories are avoided when interfering targets are numerous. After the constraints are applied, for each target of the current frame F1 the invention keeps the single best match, or the top N (typically 2) matches, in the previous frame F0, while each target of the previous frame F0 may match any number of targets in the current frame F1. If a target of the previous frame has no match in the current frame, its predicted box is retained in the current frame; if no match is found for P consecutive frames, prediction stops.
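The expanded-box IoU can be illustrated as follows; the expansion ratio is an assumed value, and the track_dist / track_size_change constraints above would be applied on top of this score:

```python
def expand(box, ratio=3.0):
    """Grow (x, y, w, h) around its center. Needed because small falling
    objects move farther per frame than their own size, so unexpanded
    boxes from consecutive frames rarely overlap."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    return (cx - w * ratio / 2, cy - h * ratio / 2, w * ratio, h * ratio)

def iou(a, b):
    """Intersection over Union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

Two boxes that do not touch at all can still produce a nonzero matching score once expanded, which is exactly the point of the expansion.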
Based on the matching between each pair of adjacent frames, a directed graph can be built over the targets of multiple frames, and from any target of the current frame its complete trajectory can be traced back through the graph. Because the invention represents the motion trajectories of all targets as a directed graph rather than the usual single-chain structure, trajectory bifurcation and aggregation are represented naturally. In a high-altitude throwing scene, trajectory bifurcation occurs when several objects are thrown together: they start out clustered as they begin to fall, acquire different speeds because of the different air resistance experienced during the fall, and split into several independently moving targets. Trajectory aggregation occurs when part of a target resembles the background in color: that region is judged as background, so the target contour breaks into several foreground regions; as the target or background changes appearance during the fall, the complete contour is detected again and the trajectories of the broken fragments merge into one.
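The directed-graph trajectory can be sketched as a parents mapping from each target to the target(s) it matched in the previous frame; backtracking then recovers every complete path, preserving forks and merges in a way a single-chain track cannot:

```python
def all_paths(parents, node):
    """parents maps a target id to the list of targets it matched in the
    previous frame. Returns every root-to-node path through the directed
    graph, so a merged target still carries all of its source trajectories."""
    preds = parents.get(node, [])
    if not preds:
        return [[node]]          # a root: trajectory starts here
    paths = []
    for p in preds:
        for path in all_paths(parents, p):
            paths.append(path + [node])
    return paths
```

In the fall judgment, every such path starting from the current target can be examined, so a break or merge in one branch does not lose the target.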
In the above tracking algorithm, the invention matches only on target box positions and does not match appearance feature vectors (color histograms and the like) as some schemes do, mainly for the following reasons:
in a real scene, a color histogram sometimes cannot be used as a feature that can be stably matched due to factors such as tumbling of falling objects, change in illumination, and the like.
Matching on position alone reduces the computation load, making it easier to run on low-cost devices (for example, on ARMv7-architecture CPUs).
Further, in the fall judgment based on segment trajectories, the invention first judges whether a target is in a falling state based on the most recent short segment of each target trajectory. The conditions for the falling state are as follows:
The target trajectory must last at least T1 frames, which reduces false alarms from targets sweeping quickly past close to the camera.
The previous-frame target matched by the current target must match fewer than N targets in the current frame. A previous-frame target matching many current-frame targets usually indicates many interfering targets around the falling object, so the fall judgment is skipped in that case to reduce false alarms.
Consider the most recent T = min(L, T2) frames, where L is the current trajectory length and T2 > T1.
Within the latest T frames, the distance between the highest point of the target trajectory and the current position must be ≥ D.
Within the latest T frames, the angle between the line connecting the highest point of the trajectory with the current position and the vertical direction must be ≤ A.
Within the latest T frames, the ratio of frames in which the trajectory matched an actually detected foreground target (rather than a tracking prediction) must be ≥ R. This reduces false detections caused by trajectories consisting largely of undetected, predicted-only positions.
In the above fall judgment logic, the time thresholds T1 and T2 and the distance threshold D require that the target fall neither too fast nor too slow: a target falling too fast may be dust or a flying insect passing close to the camera, while a target falling too slowly is generally light and generally causes no harm.
In the above fall judgment logic there are two parameters, the distance threshold D and the angle threshold A. The fall distance is usually measured in image units, for example D = 20% of the picture height, and the fall angle as the angle to the vertical axis of the image. In practice, however, the invention found that because of image projection transformation, cameras installed at an inclination, and similar factors, the direction perpendicular to the ground usually does not coincide with the vertical axis of the image; moreover, the direction of the ground vertical differs at different positions in the image, varying by as much as nearly 90 degrees. If the fall judgment were still made in image coordinates, a small angle threshold A would cause missed detections at the two sides of the picture, while a large one would cause false alarms in the middle region. The invention therefore introduces a scene calibration algorithm that establishes the mapping between the floor's world coordinate system and the image coordinate system. In the calibration algorithm, a rectangle with two sides parallel to the ground is drawn on the building surface in the image and its actual width and height (in meters) are specified; the camera calibration algorithm of OpenCV then computes the mapping between the image coordinate system and the floor's world coordinate system, from which the position, direction, speed, and so on of a moving target in world coordinates can be calculated. In this way the fall distance and fall angle of a trajectory are computed in the world coordinate system.
At each position of the picture, the fall angle is compared with the true direction of the ground vertical at that position rather than with the vertical axis of the image, so a smaller angle threshold A can be used, reducing both false and missed detections.
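A simplified sketch of the fall-state test follows. It checks the angle against the image's vertical axis rather than the calibrated ground vertical, and the thresholds T1, D, A are illustrative, not the patent's values:

```python
import math

def is_falling(track, T1=10, D=50.0, A=30.0):
    """track: list of (x, y) target centers, newest last, with y growing
    downward in image coordinates. A deployed system would measure the
    angle against the calibrated ground vertical at this image position."""
    if len(track) < T1:
        return False                         # too short: likely a nearby sweep
    hx, hy = min(track, key=lambda p: p[1])  # highest point = smallest y
    cx, cy = track[-1]
    dx, dy = cx - hx, cy - hy
    if dy <= 0 or math.hypot(dx, dy) < D:
        return False                         # has not dropped far enough
    angle = math.degrees(math.atan2(abs(dx), dy))  # deviation from vertical
    return angle <= A
```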
Further, false alarm filtering based on the short trajectory works on the most recent trajectory segment: in addition to the fall conditions, the invention introduces conditions based on motion smoothness. It finds the most recent frame whose fall distance from the current position is ≥ D and computes the following statistics over that segment:
area stability: the ratio of the area standard deviation of the target frame on the track to the area mean value is required to be smaller than a threshold value.
Stability of the direction of motion: the average value of the motion direction change angles of the adjacent frames on the track is required to be smaller than the threshold value.
Motion trajectory predictability: the average of the ratio of the displacement between the target frame position of each frame on the trajectory and the position predicted based on the historical trajectory to the target velocity magnitude must be less than a threshold.
Based on these rules, many trajectories that do not move smoothly can be filtered out, for example false alarms caused by large areas of shaking leaves.
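The first two statistics, area stability and motion direction stability, might be computed as in this sketch; the comparison thresholds are left to the caller:

```python
import math

def smoothness_stats(boxes):
    """boxes: one (x, y, w, h) target box per frame along the segment.
    Returns (area_cv, mean_turn_deg): std/mean of the box areas, and the
    mean heading change in degrees between consecutive steps."""
    areas = [w * h for _, _, w, h in boxes]
    mean = sum(areas) / len(areas)
    std = math.sqrt(sum((a - mean) ** 2 for a in areas) / len(areas))
    area_cv = std / mean                     # area stability statistic

    centers = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
    turns = []
    for a, b in zip(headings, headings[1:]):
        d = (b - a + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        turns.append(abs(math.degrees(d)))
    mean_turn = sum(turns) / len(turns) if turns else 0.0
    return area_cv, mean_turn
```

A smoothly falling object yields values near zero for both statistics, while shaking leaves or insects produce large ones.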
Preferably, after a target trajectory is tracked to its end point, full-trajectory alarm filtering is applied to it based on the following rules:
After the target is judged to fall, the length of trajectory continuously moving in a non-falling direction must not exceed a threshold D1.
After the target is judged to fall, the length of trajectory continuously moving upward must not exceed a threshold D2.
The average speed of the target trajectory must be below a threshold V.
Based on these three rules, fall-then-level-flight and fall-then-rise trajectories produced by targets such as birds, flying insects, and sheets of paper can be filtered out, as can trajectories moving too fast, for example objects sweeping quickly past near the camera rather than falling from the monitored building.
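The three full-trajectory rules can be sketched as one filter; the thresholds D1, D2, V and the frame rate are assumed values:

```python
import math

def full_track_ok(track, fps=25.0, D1=80.0, D2=40.0, V=400.0):
    """track: (x, y) per frame in image coordinates, y growing downward.
    Returns True if the trajectory is consistent with a genuine fall."""
    flat_run = up_run = max_flat = max_up = path = 0.0
    for (x1, y1), (x2, y2) in zip(track, track[1:]):
        step = math.hypot(x2 - x1, y2 - y1)
        path += step
        if y2 <= y1:                          # not moving downward
            flat_run += step
            max_flat = max(max_flat, flat_run)
        else:
            flat_run = 0.0
        if y2 < y1:                           # moving upward
            up_run += y1 - y2
            max_up = max(max_up, up_run)
        else:
            up_run = 0.0
    avg_speed = path / (len(track) - 1) * fps  # pixels per second
    return max_flat <= D1 and max_up <= D2 and avg_speed <= V
```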
Preferably, for non-critical target filtering, in order to further reduce false alarms, the invention introduces a deep-learning-based image classification algorithm to detect common false alarm targets such as birds and insects. These targets have distinctive appearance features that a deep neural network model can learn to distinguish from ordinary objects. The invention crops the targets belonging to the same trajectory in multiple frames and feeds the crops to the model for classification; if the target images are judged to be a non-critical category such as bird or insect in multiple frames, the trajectory is considered a false alarm caused by a non-critical target.
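The track-level voting over per-crop classifications might look like this sketch, with the deep-learning classifier abstracted as a callable; the label set and vote count are assumptions:

```python
def track_is_false_alarm(crops, classify, ignore=("bird", "insect"), min_hits=3):
    """classify: any callable mapping an image crop to a class label (in
    the patent, a deep-learning image classification model). The track is
    treated as a false alarm when enough of its crops, across multiple
    frames, are recognized as non-critical targets."""
    hits = sum(1 for c in crops if classify(c) in ignore)
    return hits >= min_hits
```

Voting across several frames makes the decision robust to a single misclassified crop.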
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (1)
1. A method for detecting high-altitude thrown objects from surveillance video based on motion trajectory analysis, characterized by comprising the following steps:
first, moving target detection: detect moving targets with a three-frame difference method;
second, algorithm sleep judgment: when a large number of foreground targets appear (for example in rain or snow, or when the camera shakes or has been moved), the algorithm sleeps for a period of time, avoiding both false detections caused by the many foreground targets and excessive load on the hardware;
third, moving target tracking: a multi-target tracking algorithm based on SDE (Separate Detection and Embedding), which first detects targets over multiple frames and then associates the same target across frames by similarity to generate its motion trajectory;
fourth, fall judgment: 4.1 judge whether a target is falling based on its most recent short trajectory (fall distance D within T frames), converting image coordinates to world coordinates for the calculation to obtain a more accurate result; 4.2 filter the trajectory based on statistics of the short trajectory to eliminate false alarms;
fifth, false alarm filtering: 5.1 full-trajectory alarm filtering: filter false alarms based on the complete trajectory after the target leaves the field of view; 5.2 false alarm target filtering: filter false alarms based on target screenshots along the trajectory.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111427633.7A CN114022517A (en) | 2021-11-29 | 2021-11-29 | Method for detecting high-altitude parabolic object based on motion trajectory analysis monitoring video |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114022517A true CN114022517A (en) | 2022-02-08 |
Family
ID=80066977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111427633.7A Pending CN114022517A (en) | 2021-11-29 | 2021-11-29 | Method for detecting high-altitude parabolic object based on motion trajectory analysis monitoring video |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114022517A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114638861A (en) * | 2022-03-24 | 2022-06-17 | 济南博观智能科技有限公司 | High-altitude parabolic detection method, system and device |
CN114693556A (en) * | 2022-03-25 | 2022-07-01 | 英特灵达信息技术(深圳)有限公司 | Method for detecting and removing smear of moving target by high-altitude parabolic frame difference method |
CN117975373A (en) * | 2024-03-29 | 2024-05-03 | 济南大学 | Method and system for detecting and tracking high-altitude parabolic target on electric power construction site |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111768431A (en) * | 2020-06-28 | 2020-10-13 | 熵康(深圳)科技有限公司 | High-altitude parabolic moving target detection method, detection equipment and detection system |
CN112257557A (en) * | 2020-10-20 | 2021-01-22 | 中国电子科技集团公司第五十八研究所 | High-altitude parabolic detection and identification method and system based on machine vision |
CN112668389A (en) * | 2020-11-13 | 2021-04-16 | 深圳市唯特视科技有限公司 | High-altitude parabolic target detection method, device, system and storage medium |
CN112800953A (en) * | 2021-01-27 | 2021-05-14 | 南京航空航天大学 | High-altitude parabolic detection, tracking and alarming system and method based on computer vision |
CN113362374A (en) * | 2021-06-07 | 2021-09-07 | 浙江工业大学 | High-altitude parabolic detection method and system based on target tracking network |
CN113379801A (en) * | 2021-06-15 | 2021-09-10 | 江苏科技大学 | High-altitude parabolic monitoring and positioning method based on machine vision |
CN113409360A (en) * | 2021-06-29 | 2021-09-17 | 深圳市商汤科技有限公司 | High altitude parabolic detection method and device, equipment and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111368706B (en) | Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision | |
CN108021848B (en) | Passenger flow volume statistical method and device | |
Javed et al. | Tracking and object classification for automated surveillance | |
CN114022517A (en) | Method for detecting high-altitude parabolic object based on motion trajectory analysis monitoring video | |
CN105744232B (en) | A kind of method of the transmission line of electricity video external force damage prevention of Behavior-based control analytical technology | |
CN103246896B (en) | A kind of real-time detection and tracking method of robustness vehicle | |
CN104200466B (en) | A kind of method for early warning and video camera | |
CN111401311A (en) | High-altitude parabolic recognition method based on image detection | |
CN110555397A (en) | crowd situation analysis method | |
CN104200490A (en) | Rapid retrograde detecting and tracking monitoring method under complex environment | |
CN113362374A (en) | High-altitude parabolic detection method and system based on target tracking network | |
CN108537829A (en) | A kind of monitor video personnel state recognition methods | |
CN109948474A (en) | AI thermal imaging all-weather intelligent monitoring method | |
CN111696135A (en) | Intersection ratio-based forbidden parking detection method | |
CN105574468A (en) | Video flame detection method, device and system | |
Srinivas et al. | Image processing edge detection technique used for traffic control problem | |
CN108830161A (en) | Smog recognition methods based on video stream data | |
CN113114938B (en) | Target accurate monitoring system based on electronic information | |
CN113657250A (en) | Flame detection method and system based on monitoring video | |
Veeraraghavan et al. | Switching kalman filter-based approach for tracking and event detection at traffic intersections | |
CN115223106A (en) | Sprinkler detection method fusing differential video sequence and convolutional neural network | |
Gao et al. | Moving object detection for video surveillance based on improved ViBe | |
Dong et al. | An automatic object detection and tracking method based on video surveillance | |
CN115683089A (en) | Radar and visual track prediction and correction method | |
Li et al. | Moving vehicle detection based on an improved interframe difference and a Gaussian model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||