CN115909199A - Double-background modeling legacy detection method based on multiple backtracking verification - Google Patents

Double-background modeling legacy detection method based on multiple backtracking verification Download PDF

Info

Publication number
CN115909199A
Authority
CN
China
Prior art keywords
module
background modeling
video
image
verification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211395197.4A
Other languages
Chinese (zh)
Inventor
吕阿斌
李明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yi Tai Fei Liu Information Technology LLC
Original Assignee
Yi Tai Fei Liu Information Technology LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yi Tai Fei Liu Information Technology LLC filed Critical Yi Tai Fei Liu Information Technology LLC
Priority to CN202211395197.4A priority Critical patent/CN115909199A/en
Publication of CN115909199A publication Critical patent/CN115909199A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of remnant detection, in particular to a double-background modeling remnant detection method based on multiple backtracking verification, which comprises the following steps: a video stream is read in from the video stream reading module and decoded; the decoded pictures undergo image preprocessing; and the preprocessed image is divided into regions of interest according to actual conditions, where proper region-of-interest division can effectively eliminate some unreasonable false alarms. The invention provides a remnant detection algorithm combining double-background modeling with the temporal information of the video sequence, which improves the precision of preliminary remnant screening, and further provides a multiple remnant-video backtracking verification technique that can significantly improve the robustness and accuracy of the double-background modeling algorithm.

Description

Double-background modeling remnant detection method based on multiple backtracking verification
Technical Field
The invention relates to the technical field of legacy detection, in particular to a dual-background modeling legacy detection method based on multiple backtracking verification.
Background
With the rapid development of the social economy, public safety problems receive more and more attention, and traditional manual video monitoring systems can no longer keep up with the ever-growing volume of data. Intelligent video monitoring systems are widely applied as a technology with lower cost, higher timeliness and a high detection rate. Remnant detection is an important part of an intelligent monitoring system: in places with dense pedestrian and traffic flows, such as expressways, tunnels, subway stations, high-speed rail stations and gymnasiums, it helps detect unknown remnants or find lost objects, thereby addressing potential safety hazards.
Existing remnant detection algorithms can be broadly divided into two categories. The first is based on the deep-learning object-detection technology YOLO, which performs object detection on each frame of the video stream to identify suspected remnants in the image. The second is foreground/background image detection based on background modeling, which is used to further screen out the remnants.
Implementation details of the scheme based on deep-learning object detection: first, a large number of images of remnants in different scenes must be collected and labeled to train an object-detection model; then the images in the video stream are read; finally, remnant object detection is performed on these images. The main defects are as follows: because the types of remnants are various and the related scene environments are complex, the object-detection scheme requires a large number of labeled training pictures; meanwhile, remnants in surveillance video are generally small, so the technical requirements for small-object detection are high.
Implementation details of the foreground/background image detection based on background modeling: first, pictures are read from the video stream and two Gaussian mixture models are initialized to model the background of the target scene; then different update speeds are set for the two Gaussian models; finally, the position of a remnant in the picture is located through the difference between the two Gaussian background models. This double-background modeling detection technology does not exploit the temporal information of the video data, has a high computational cost, and cannot detect remnants in real time.
In all-weather complex scenes with dense pedestrian and vehicle flows, traditional remnant detection methods are highly susceptible to occlusion, illumination, shadow changes, movement of background objects and other factors, so the missed-detection and false-detection rates of remnant detection based on background modeling are high. To solve this problem, a double-background modeling remnant detection method based on multiple backtracking verification is provided.
Disclosure of Invention
The invention aims to provide a double-background modeling remnant detection method based on multiple backtracking verification, which improves the precision of preliminary remnant screening, significantly improves the robustness of remnant detection under complex lighting and complex background conditions, and ensures a high detection rate, thereby solving the problem that traditional remnant detection methods are highly susceptible to occlusion, illumination, shadow changes, movement of background objects and other factors, which leads to high missed-detection and false-detection rates in remnant detection based on background modeling.
In order to achieve the purpose, the invention provides the following technical scheme: a double-background modeling remnant detection method based on multiple backtracking verification comprises the following steps:
(1) Reading a video stream from a video stream reading module, performing decoding operation, and performing image preprocessing operation on a decoded picture;
(2) Dividing the preprocessed image into interested areas according to actual conditions, wherein the proper dividing of the interested areas can effectively eliminate part of unreasonable false alarms;
(3) Performing double-background Gaussian modeling on each pixel point in the image interesting area, and generating position information of suspected remnants according to a double-background modeling result;
(4) According to the position information obtained in the step (3), backtracking the video, and comprehensively analyzing the backtracked video to detect real remnants;
(5) And comprehensively analyzing the position information of the objects left in the step (4), displaying the result in the original image, and performing alarm operation by an event early warning module.
Preferably, in the step (1), the image data obtained by the video stream reading module is decoded into pictures, and frame extraction is then performed to obtain the target pictures for remnant detection.
Preferably, in step (1), the image preprocessing operation includes a scale change process and a gaussian filtering process.
Preferably, in the step (1), consistent key points can be detected in the preprocessed image at any scale; each feature point corresponds to a scale factor, and the ratio between the scale factors of different feature points is equal to the ratio of the image scales.
Preferably, in the step (2), feature point extraction is performed on the preprocessed image, and the feature region is divided into regions of interest.
Preferably, in the step (3), when performing the dual-background gaussian modeling, two background models with different update speeds are established only for the RGB values of the image in the region of interest.
Preferably, in the step (4), after receiving the position information of the suspected relic preliminarily screened by the gaussian mixture model, multiple relic video backtracking verification is performed.
Preferably, in the step (4), target detection is performed on each input picture by using a Yolo V3 detection model, the detection targets being non-remnant objects such as people and vehicles.
Preferably, in the step (5), after the alarm module receives the position information of the left-over object, the position information of the left-over object is drawn in the original image and alarm information is output on an interface.
Preferably, the method further comprises a system structure for detecting remnants by combining deep learning with traditional image processing, which mainly comprises: a video image acquisition module, a region-of-interest setting module, a Gaussian background modeling module, a video backtracking verification module and an event alarm module.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a remnant detection algorithm combining double background modeling and video sequence time sequence information, which improves the precision of primary screening of a remnant, and simultaneously provides a multiple remnant video backtracking verification technology, which can obviously improve the robustness and accuracy of the double background modeling algorithm.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
FIG. 2 is a background modeling flow diagram of the hybrid Gaussian model of the present invention;
FIG. 3 is a flow chart of the video backtracking verification module for legacy of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A double-background modeling legacy detection method based on multiple backtracking verification comprises the following steps:
(1) Reading a video stream from a video stream reading module, performing decoding operation, and performing image preprocessing operation on a decoded picture;
(2) Dividing the preprocessed image into regions of interest according to actual conditions, wherein the proper region of interest division can effectively eliminate partial unreasonable false alarms;
(3) Performing double-background Gaussian modeling on each pixel point in the image interesting area, and generating position information of suspected remnants according to a double-background modeling result;
(4) According to the position information obtained in the step (3), backtracking the video, and comprehensively analyzing the backtracked video to detect real remnants;
(5) And comprehensively analyzing the position information of the objects left in the step (4), displaying the result in the original image, and performing alarm operation by an event early warning module.
As shown in fig. 1, the system structure of the double-background modeling remnant detection method based on multiple backtracking verification includes: a video image acquisition module 101, a region-of-interest setting module 102, a Gaussian background modeling module 103, a video backtracking verification module 104 and an event alarm module 105.
The module 101: the video image acquisition module acquires video streams from an actual monitoring camera (including an analog camera, a digital camera and the like), decodes the video streams into pictures and completes image preprocessing.
The module 102: the region-of-interest setting module; in order to reduce the complexity of Gaussian background modeling and reduce unreasonable false alarms, the region to be detected needs to be marked out on the image.
The module 103: the Gaussian background modeling module performs Gaussian background modeling on the image data preprocessed by module 101, establishing two background models with different update speeds only for the RGB values of the image within the region divided by module 102, and judges the position information of suspected remnants from the difference between the background mask images generated by the two background models.
The module 104: the video backtracking verification module verifies by video backtracking whether the position information output by module 103 conforms to the characteristic attributes of a remnant, and outputs the final remnant position information.
The module 105: after receiving the remnant position information output by module 104, the alarm module is responsible for drawing the remnant position in the original image and outputting alarm information on the interface.
As shown in fig. 2, a background modeling flow chart of the gaussian mixture model is specifically implemented as follows:
the module 201: the preprocessed image data is input and the ROI is set.
A module 202: receives the input image of module 201 and establishes two adaptive Gaussian mixture models F (each with no more than 4 Gaussian components) for the RGB value of every pixel of the image. For the next picture I, the update formula of the Gaussian mixture model is F ← λF + (1 − λ)I, where λ is the update rate, i.e., the learning rate. The two Gaussian mixture models in module 202 are a long background model F_L with a slow background update speed and a short background model F_S with a fast background update speed.
Each background model outputs a binary mask image representing the foreground information of that model (0 represents background, 1 represents foreground), so each pixel point in an image has four states in total, S = F_L F_S ∈ {00, 01, 10, 11}. The description of each state is shown in the following table:
TABLE 1 States in module 202
S = F_L F_S — description: 00 — background in both models (scene background); 01 — foreground only in the short model (uncovered background or noise); 10 — foreground in the long model but background in the short model (a stationary newly-appeared object, i.e., a possible remnant); 11 — foreground in both models (a moving object).
The 10 state represents a possible carry-over state among all states of the module 202.
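The module-202 update rule and the state S above can be illustrated with the following minimal NumPy sketch, which replaces the full adaptive Gaussian mixture with a single running-average background per model; all class and variable names, thresholds and the 8×8 synthetic scene are illustrative assumptions, not from the patent:

```python
import numpy as np

class RunningBackground:
    """Minimal stand-in for one background model of module 202.

    The patent uses adaptive Gaussian mixture models; here a single
    running-average background with the same update rule
    F <- lambda*F + (1 - lambda)*I is used for illustration.
    """
    def __init__(self, first_frame, lam, thresh=25.0):
        self.F = first_frame.astype(np.float64)
        self.lam = lam          # update (learning) rate: large lam = slow update
        self.thresh = thresh    # foreground threshold on |I - F|

    def apply(self, frame):
        mask = (np.abs(frame.astype(np.float64) - self.F) > self.thresh).astype(np.uint8)
        self.F = self.lam * self.F + (1.0 - self.lam) * frame
        return mask             # 1 = foreground, 0 = background

def dual_background_state(mask_long, mask_short):
    """Combine the two masks into the state S = F_L F_S (values 0b00..0b11)."""
    return (mask_long << 1) | mask_short

# A static scene, then a bright object dropped and left in place.
scene = np.zeros((8, 8))
long_bg = RunningBackground(scene, lam=0.99)   # slow: still remembers the old background
short_bg = RunningBackground(scene, lam=0.60)  # fast: absorbs the new object quickly

frame = scene.copy()
frame[2:4, 2:4] = 200.0                        # the "remnant" appears and stays
for _ in range(30):
    m_long, m_short = long_bg.apply(frame), short_bg.apply(frame)

S = dual_background_state(m_long, m_short)
```

After many frames the fast short model has absorbed the object (F_S = 0) while the slow long model still flags it (F_L = 1), so the object pixels settle into state 0b10, the candidate-remnant state, while the rest of the scene stays in 0b00.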
The module 203: the finite-state-machine module receives the output states of module 202 and outputs a binary image with values 0 or 1.
According to the characteristics of a remnant (it is left by a person or vehicle during movement and then remains stationary), its temporal state is first moving and then continuously stationary; that is, the state S of the corresponding pixel points changes from 11 to 10 and then persists. Therefore the trigger condition of the state machine in module 203 is that the state from module 202 changes from 11 to 10 and state 10 then lasts for T time periods, after which module 203 outputs 1; otherwise it outputs 0 for the pixel points that do not meet the conditions and resets the state machine.
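The 11 → 10 trigger with persistence T described above can be sketched as a per-pixel state machine; the class name and interface are illustrative, not the patent's implementation:

```python
class RemnantStateMachine:
    """Per-pixel finite state machine of module 203.

    Trigger: the dual-background state changes from 11 (moving object) to
    10 (static candidate); if state 10 then persists for T consecutive
    periods the machine outputs 1, otherwise it resets and outputs 0.
    """
    def __init__(self, T):
        self.T = T
        self.prev = None     # previous dual-background state
        self.count = 0       # how long state 10 has persisted since the 11->10 change
        self.armed = False   # True once the 11->10 transition has been seen

    def step(self, s):
        if self.armed and s == 0b10:
            self.count += 1                      # state 10 persists
        elif self.prev == 0b11 and s == 0b10:
            self.armed, self.count = True, 1     # trigger: 11 -> 10
        else:
            self.armed, self.count = False, 0    # any other transition resets
        self.prev = s
        return 1 if self.count >= self.T else 0
```

Feeding the sequence 11, 11, 10, 10, 10 with T = 3 produces the output 1 only once state 10 has lasted three periods; an interruption by any other state resets the machine, as the module-203 description requires.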
The module 204: the remnant position module receives the binary image output by module 203 and outputs the position information of suspected remnants.
All connected domains are searched from the binary image of module 203; then, according to the size of an actual remnant, a valid connected-domain area range [area_min, area_max] is set to screen out all connected domains meeting the condition. For each qualifying connected domain, the corresponding remnant contour is searched on the mask image output by the long background model in module 202, the minimum rectangle enclosing the contour is solved, and finally the rectangular-frame information of the remnant is output.
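A minimal pure-Python sketch of the module-204 connected-domain search and area filtering might look as follows; 4-connectivity is assumed, and the function name and (row, col, height, width) box format are illustrative choices:

```python
def find_remnant_boxes(binary, area_min, area_max):
    """Module 204 sketch: label 4-connected components of a 0/1 grid,
    keep those with area in [area_min, area_max], and return the minimal
    enclosing rectangle of each as (row, col, height, width)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] == 1 and not seen[r][c]:
                stack, comp = [(r, c)], []       # flood fill one component
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if area_min <= len(comp) <= area_max:
                    ys = [p[0] for p in comp]
                    xs = [p[1] for p in comp]
                    boxes.append((min(ys), min(xs),
                                  max(ys) - min(ys) + 1, max(xs) - min(xs) + 1))
    return boxes
```

On a grid containing a 2×2 blob and an isolated pixel, the area range selects only the components whose size matches a plausible remnant, exactly the screening the paragraph above describes.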
As shown in fig. 3, after receiving the position information of the suspected remnant preliminarily screened by the Gaussian modeling module 103, the video backtracking verification module performs multiple video backtracking verifications of the remnant. The specific contents are as follows:
the module 301: reading and storing pictures, mainly responsible for reading the preprocessed picture data, setting the number t of pictures to be saved (t includes a complete process of discarding the remnants) according to the learning rate set by the module 202, and caching the mask information output by the long background model corresponding to the image.
The module 302: the receiving module 103 outputs the location information of the suspected relic, i.e. the relic target frame.
Module 303 and module 304: the similarity-calculation modules make a preliminary judgment on the remnant, checking whether the remnant already appears at the position output by module 302 in the first frame and the t-th frame of the backtracked video. Module 303 crops image patches at the remnant position from the two pictures, converts them to grayscale (which can effectively eliminate the influence of illumination), computes the perceptual hash value pHash of each patch, and finally computes the Hamming distance between the two hash values.
Module 304 sets a proper threshold to judge the similarity of the two patches from the Hamming distance; if the two patches are similar, a false alarm is judged and the process ends directly, otherwise the next verification link is entered.
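The similarity check of modules 303/304 can be sketched as follows. As a hedged simplification, an average hash is used in place of the DCT-based pHash of the patent; the hash size, distance threshold and function names are illustrative assumptions:

```python
import numpy as np

def ahash(gray, hash_size=8):
    """Simplified perceptual hash: downsample to hash_size x hash_size by
    block averaging and threshold at the mean (the patent's pHash uses a
    DCT; an average hash keeps this sketch short)."""
    g = np.asarray(gray, dtype=np.float64)
    h, w = g.shape
    bh, bw = h // hash_size, w // hash_size
    small = g[:bh * hash_size, :bw * hash_size] \
        .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing hash bits."""
    return int(np.count_nonzero(h1 != h2))

def is_false_alarm(patch_t0, patch_t, max_dist=5):
    """Modules 303/304: if the patches at the remnant position in the first
    and t-th backtracked frames are near-identical, the 'remnant' was
    already part of the scene, so the detection is a false alarm."""
    return hamming(ahash(patch_t0), ahash(patch_t)) <= max_dist
```

Identical patches hash to a Hamming distance of 0 and are flagged as a false alarm; a patch where an object newly appeared differs in many hash bits and passes to the next verification link.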
Module 305 and module 306: module 305 is the object-detection module; its main function is to perform object detection on each input picture with a Yolo V3 detection model, detecting non-remnant objects such as people and vehicles. Module 306 then tracks the objects detected by module 305 using Kalman filtering and the Hungarian matching algorithm, assigning each target a tracking ID.
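The Hungarian matching step of module 306 assigns detections to tracks by minimising a total association cost. The brute-force toy below illustrates that optimisation only; a real tracker would use the actual Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment) with a cost such as 1 − IoU between Kalman-predicted track boxes and new detections. The function name and cost matrix are illustrative:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Toy stand-in for the Hungarian matching of module 306: find the
    detection-to-track assignment with minimal total cost by enumerating
    permutations (only feasible for the handful of targets in a frame).
    cost[r][c] is the cost of assigning track r to detection c."""
    n = len(cost)
    best, best_cols = float("inf"), None
    for cols in permutations(range(n)):
        total = sum(cost[r][c] for r, c in enumerate(cols))
        if total < best:
            best, best_cols = total, cols
    return list(best_cols), best
```

For the cost matrix [[4, 1], [2, 3]] the optimal pairing sends track 0 to detection 1 and track 1 to detection 0 with total cost 3, beating the naive diagonal assignment of cost 7.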
Module 307: the judging module determines whether a suspected owner (person or vehicle) is still near the remnant in the current frame picture (a remnant is an object left behind by its owner, and the owner should have moved away from it). It judges whether the remnant position frame of module 302 intersects a person detection frame output by module 305; if the detection frames intersect, the verification ends, otherwise the next verification link is entered.
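The module-307 check is a plain axis-aligned rectangle overlap test; a sketch, where the (x, y, w, h) box format is an assumed convention:

```python
def boxes_intersect(a, b):
    """Module 307 sketch: axis-aligned boxes given as (x, y, w, h).
    True when the remnant frame and a person/vehicle detection frame
    overlap, i.e. a suspected owner is still next to the object and the
    alarm should be suppressed for now."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

With this strict-inequality form, boxes that merely touch edge to edge do not count as intersecting, which is one reasonable reading of "the detection frames intersect".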
The module 308: judges whether the remnant has moved. Within the long-background mask-image time series cached by module 301, it calculates whether the pixel points inside the remnant frame have moved, i.e., whether the center point of the remnant frame is displaced within the time series t, and computes the movement moment γ of the remnant. If the remnant has not moved, the process ends; otherwise the next verification is entered.
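The module-308 movement check reduces to asking whether the remnant-frame center point is displaced within the cached time series; a sketch, where the min_disp threshold and function name are assumed parameters:

```python
def movement_moment(centers, min_disp=2.0):
    """Module 308 sketch: given the remnant-frame center (x, y) in each
    cached long-background mask of the time series t, return the first
    index gamma at which the center has moved more than min_disp pixels
    from its initial position, or None if the remnant never moves."""
    x0, y0 = centers[0]
    for i, (x, y) in enumerate(centers):
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > min_disp:
            return i          # the movement moment gamma
    return None               # no displacement: the process ends here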
Module 309: the remnant-tracking module. To reduce the computational cost of tracking, the mask images of the time series t are used directly as the CamShift back-projection images, and the motion trajectory of the remnant frame is output and stored. The module counts the IDs of the detection frames tracked by module 306 that intersect the motion trajectory of the remnant before the movement moment γ determined by module 308; if an intersecting ID is found, the next verification is performed, otherwise the process ends directly. Finally, it is verified whether the movement pattern of the abandoner conforms to the characteristic of abandonment (after discarding the remnant, the abandoner moves away from it): letting D_1 be the distance between the ID obtained in the previous verification step and the remnant position of module 302 at time γ, and D_2 the distance between that ID and the remnant at time t, if D_1 ≤ D_2 holds, the object is considered abandoned, its position information is output and an alarm is generated; otherwise the verification ends and the object is judged not to be a remnant.
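The final D_1 ≤ D_2 ownership test of module 309 can be sketched directly; positions are (x, y) points, and in the description above D_1 is measured at the movement moment γ while D_2 is measured at time t (the function and argument names are illustrative):

```python
def is_abandoned(owner_pos_gamma, owner_pos_t, remnant_pos):
    """Module 309 sketch: the abandoner should move away from the remnant
    after dropping it, i.e. the owner's distance to the remnant at the
    movement moment gamma (D1) is no larger than at time t (D2)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    d1 = dist(owner_pos_gamma, remnant_pos)   # D1: at time gamma
    d2 = dist(owner_pos_t, remnant_pos)       # D2: at time t
    return d1 <= d2
```

An owner observed near the remnant at γ and far from it at t satisfies D_1 ≤ D_2 and triggers the alarm; an owner moving back toward the object fails the test and the detection is discarded.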
In summary, the invention provides a carry-over detection algorithm combining double-background modeling and video sequence time sequence information, which improves the precision of primary screening of carry-over, and simultaneously provides a multiple carry-over video backtracking verification technology, which can significantly improve the robustness and accuracy of the double-background modeling algorithm.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" or "comprising" does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A double-background modeling legacy detection method based on multiple backtracking verification is characterized in that: the method comprises the following steps:
(1) Reading a video stream from a video stream reading module, performing decoding operation, and performing image preprocessing operation on a decoded picture;
(2) Dividing the preprocessed image into regions of interest according to actual conditions, wherein the proper region of interest division can effectively eliminate partial unreasonable false alarms;
(3) Performing double-background Gaussian modeling on each pixel point in the image interesting region, and generating position information of suspected remnants according to the result of the double-background modeling;
(4) According to the position information obtained in the step (3), backtracking the video, and comprehensively analyzing the backtracked video to detect real remnants;
(5) And comprehensively analyzing the position information of the objects left in the step (4), displaying the result in the original image, and performing alarm operation by an event early warning module.
2. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: in the step (1), according to the video image data obtained by the video stream reading module, the video image data is decoded into a picture and then is subjected to frame extraction processing to obtain a target picture of the legacy object.
3. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: in the step (1), the image preprocessing operation includes a scale change process and a gaussian filter process.
4. The method for detecting the dual-background modeling survivor based on the multiple backtracking verification of claim 1, wherein: in the step (1), the shot object after pretreatment can detect consistent key points under any scale, each feature point corresponds to a scale factor, and the ratio of the scale factors of different feature points is equal to the ratio of image scales.
5. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: in the step (2), feature point extraction is performed on the preprocessed image, and the feature region is divided into regions of interest.
6. The method for detecting the dual-background modeling survivor based on the multiple backtracking verification of claim 1, wherein: in the step (3), two background models with different update speeds are established only for the RGB values of the images in the region of interest during the double-background Gaussian modeling.
7. The method for detecting the dual-background modeling survivor based on the multiple backtracking verification of claim 1, wherein: in the step (4), after receiving the position information of the suspected relics preliminarily screened by the Gaussian mixture model, multiple video backtracking verification of the relics is performed.
8. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: in the step (4), target detection is performed on each input picture by using a detection model of Yolo V3, and the detection targets are people, vehicles and articles which are not left behind.
9. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: in the step (5), after the alarm module receives the position information of the abandoned object, the position information of the abandoned object is drawn in the original image and alarm information is output on an interface.
10. The method for detecting the dual-background modeling carry-over based on the multiple backtracking verification of claim 1, wherein: the method further comprises a system structure for detecting remnants by combining deep learning with traditional image processing, which mainly comprises: a video image acquisition module, a region-of-interest setting module, a Gaussian background modeling module, a video backtracking verification module and an event alarm module.
CN202211395197.4A 2022-11-10 2022-11-10 Double-background modeling legacy detection method based on multiple backtracking verification Pending CN115909199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211395197.4A CN115909199A (en) 2022-11-10 2022-11-10 Double-background modeling legacy detection method based on multiple backtracking verification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211395197.4A CN115909199A (en) 2022-11-10 2022-11-10 Double-background modeling legacy detection method based on multiple backtracking verification

Publications (1)

Publication Number Publication Date
CN115909199A true CN115909199A (en) 2023-04-04

Family

ID=86472021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211395197.4A Pending CN115909199A (en) 2022-11-10 2022-11-10 Double-background modeling legacy detection method based on multiple backtracking verification

Country Status (1)

Country Link
CN (1) CN115909199A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704268A (en) * 2023-08-04 2023-09-05 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Strong robust target detection method for dynamic change complex scene
CN116704268B (en) * 2023-08-04 2023-11-10 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Strong robust target detection method for dynamic change complex scene

Similar Documents

Publication Publication Date Title
Li et al. Traffic light recognition for complex scene with fusion detections
CN110136449B (en) Deep learning-based traffic video vehicle illegal parking automatic identification snapshot method
De Charette et al. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates
CN104156731B (en) Vehicle License Plate Recognition System and method based on artificial neural network
CN104978567B (en) Vehicle checking method based on scene classification
CN112070074B (en) Object detection method and device, terminal equipment and storage medium
CN111967313B (en) Unmanned aerial vehicle image annotation method assisted by deep learning target detection algorithm
CN109447082B (en) Scene moving object segmentation method, system, storage medium and equipment
CN112949633B (en) Improved YOLOv 3-based infrared target detection method
CN110717863B (en) Single image snow removing method based on generation countermeasure network
Yang et al. Spatiotemporal trident networks: detection and localization of object removal tampering in video passive forensics
CN109886147A (en) A kind of more attribute detection methods of vehicle based on the study of single network multiple-task
CN108764338B (en) Pedestrian tracking method applied to video analysis
CN111008608B (en) Night vehicle detection method based on deep learning
Han et al. A method based on multi-convolution layers joint and generative adversarial networks for vehicle detection
CN107038423A (en) A kind of vehicle is detected and tracking in real time
CN115909199A (en) Double-background modeling legacy detection method based on multiple backtracking verification
Zhan et al. Pedestrian detection and behavior recognition based on vision
CN115376108A (en) Obstacle detection method and device in complex weather
Kejriwal et al. Vehicle detection and counting using deep learning basedYOLO and deep SORT algorithm for urban traffic management system
Wang et al. Real-time vehicle signal lights recognition with HDR camera
Zhao et al. An Improved Method for Infrared Vehicle and Pedestrian Detection Based on YOLOv5s
Sridevi et al. Automatic generation of traffic signal based on traffic volume
CN110858392A (en) Monitoring target positioning method based on fusion background model
CN114898290A (en) Real-time detection method and system for marine ship

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination