CN111159471A - Monitoring video concentration processing method - Google Patents
Monitoring video concentration processing method
- Publication number
- CN111159471A (application CN201811326997.4A)
- Authority
- CN
- China
- Prior art keywords
- video
- target
- track
- blocks
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention provides a monitoring video concentration processing method comprising six steps: background reconstruction, target detection, target tracking, target track post-processing, target track generation, and video abstract generation. The three-dimensional information of each moving target of interest's motion tube is extracted from the original video and composited with the background video, and the result is condensed into a short clip: a video of an hour or a day is compressed into an abstract video as short as several to tens of minutes that contains all important moving-target details and snapshot views from the original video.
Description
Technical Field
The invention relates to the technical field of video monitoring for security systems, and in particular to a method for condensing monitoring videos.
Background
Technologies such as high-definition video, video storage, and intelligent video analysis have become the main directions of current video technology development. With the large-scale deployment of video monitoring, quickly browsing massive video archives, condensing and summarizing them, and rapidly retrieving the required video data have become the most important research topics in the video field, especially in traffic and security applications.
Disclosure of Invention
The invention aims to provide a monitoring video concentration processing method built on conventional video monitoring, so that a user can quickly browse massive videos and quickly retrieve the required video data from them.
The technical scheme of the invention is as follows:
A monitoring video concentration processing method is characterized by comprising the following steps:
(1) background reconstruction: processing the video image with an adaptive Gaussian mixture model, classifying each pixel as foreground or background to obtain a foreground image and a background image, and rebuilding the background model in real time;
(2) target detection: detecting all blobs newly entering the scene in the foreground image obtained by background reconstruction, discarding blobs that are too small or overlap already-tracked blobs, sorting the remaining blobs by size, screening them by image-boundary detection and motion-consistency detection, discarding blobs that fail these checks, and storing the genuinely new blobs in a blob list;
(3) target tracking: first extracting all blobs from the foreground image; then predicting the position and size of each tracked target with a Kalman filter while performing collision analysis; finally processing each tracked trajectory, obtaining the target blob by weighted particle filtering if a collision exists, and by a nearest-neighbor method otherwise;
(4) target track post-processing: correcting the position information of the tracked blobs with Kalman filtering;
(5) target track generation: generating, at the end of each trajectory, the moving target's trajectory record, comprising its ID number, start frame number, end frame number, collision flag array, and the position and size of all blobs, and exporting it to a trajectory file;
(6) video abstract generation: jointly optimizing the trajectories of all targets according to the target trajectory file produced by the track generation module, minimizing temporal and spatial redundancy while preserving target integrity and temporal-order consistency, and fusing the moving targets with the generated background image according to the optimization result to generate the video abstract.
The invention can recast a long video file into a summary of its background (still image) and events (moving objects), compressing the preview time to a few minutes while preserving all usable information in the video. Through the video abstract, a user can conveniently and quickly find when an object of interest appears and jump to the associated playback in the historical video. Image-based retrieval over the historical video library is also supported: a query picture is matched against the library, the retrieved frame is located, and the relevant content is played back, realizing fast browsing and fast retrieval of massive videos.
Detailed Description
Video condensation extracts the three-dimensional information of each moving target of interest's motion tube from the original video, composites it with the background video, and condenses the result into a short clip: a video of an hour or a day is compressed into an abstract video as short as several to tens of minutes that contains all important moving-target details and snapshot views from the original video. Video abstract generation comprises six processes: background reconstruction, target detection, target tracking, target track post-processing, target track generation, and video abstract generation. Each is implemented as follows:
Background reconstruction: the video image is processed with an adaptive Gaussian mixture model; each pixel is classified as foreground or background to obtain a foreground image and a background image, and the background model is rebuilt in real time.
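A minimal sketch of the per-pixel classification step. The patent specifies an adaptive Gaussian mixture model; for brevity this toy version keeps a single running Gaussian per pixel, and the learning rate and deviation threshold are illustrative assumptions rather than values from the patent:

```python
# Simplified per-pixel background model: one running Gaussian (mean,
# variance) per pixel; a pixel is foreground when it deviates from the
# mean by more than k standard deviations. Background pixels are blended
# into the model, adapting it in real time.

class PixelGaussian:
    def __init__(self, init_value, alpha=0.05, k=2.5, min_var=4.0):
        self.mean = float(init_value)
        self.var = 100.0          # start with a wide variance
        self.alpha = alpha        # learning rate for background adaptation
        self.k = k                # deviation threshold in std-devs
        self.min_var = min_var

    def update(self, value):
        """Classify the new intensity, then adapt the background model.
        Returns True if the pixel is foreground."""
        diff = value - self.mean
        is_foreground = diff * diff > self.k * self.k * self.var
        if not is_foreground:
            # blend the sample into the background model
            self.mean += self.alpha * diff
            self.var = max(self.min_var,
                           (1 - self.alpha) * self.var + self.alpha * diff * diff)
        return is_foreground

def segment_frame(models, frame):
    """Return a binary foreground mask for a flat list of pixel intensities."""
    return [m.update(v) for m, v in zip(models, frame)]
```

A full mixture model would keep several weighted Gaussians per pixel and match each sample against all of them, which is what lets the background absorb repetitive motion such as swaying branches.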
Target detection: all blobs newly entering the scene are detected in the foreground image obtained by background reconstruction. Blobs that are too small or overlap already-tracked blobs are discarded; the remaining blobs are sorted by size and screened by image-boundary detection and motion-consistency detection. Blobs that fail these checks are discarded, and the genuinely new blobs are stored in a blob list.
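The screening above can be sketched as follows. The box format, the IoU overlap test, and the thresholds `min_area` and `max_overlap` are assumptions for illustration; the boundary and motion-consistency checks are omitted:

```python
# Blob screening: discard blobs that are too small or overlap an
# already-tracked blob, then keep the survivors sorted by area,
# largest first. Blobs are (x, y, w, h) boxes.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def screen_new_blobs(candidates, tracked, min_area=50, max_overlap=0.3):
    """Return candidate blobs that pass the size and overlap filters,
    ordered by decreasing area as in the text."""
    kept = [b for b in candidates
            if b[2] * b[3] >= min_area
            and all(iou(b, t) <= max_overlap for t in tracked)]
    return sorted(kept, key=lambda b: b[2] * b[3], reverse=True)
```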
Target tracking: tracking builds on background reconstruction and target detection. First, all blobs are extracted from the foreground image; then a Kalman filter predicts the position and size of each tracked target while collision analysis is performed; finally, each tracked trajectory is processed: if a collision exists, the target blob is obtained by weighted particle filtering, otherwise by a nearest-neighbor method.
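A toy version of the collision-free tracking branch, assuming a constant-velocity predictor in place of a full Kalman filter and a greedy nearest-neighbour association; the weighted-particle-filter branch used during collisions is not shown:

```python
# Tracking sketch: per-target constant-velocity prediction plus greedy
# nearest-neighbour association of predicted centres to blob centres.

import math

class Track:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = self.vy = 0.0

    def predict(self):
        """Constant-velocity prediction of the next centre."""
        return (self.x + self.vx, self.y + self.vy)

    def correct(self, x, y):
        """Update state with the matched blob centre."""
        self.vx, self.vy = x - self.x, y - self.y
        self.x, self.y = x, y

def associate(tracks, blob_centres):
    """Match each track to the nearest unclaimed blob centre."""
    free = list(blob_centres)
    matches = {}
    for i, t in enumerate(tracks):
        if not free:
            break
        px, py = t.predict()
        best = min(free, key=lambda c: math.hypot(c[0] - px, c[1] - py))
        free.remove(best)
        t.correct(*best)
        matches[i] = best
    return matches
```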
Target track post-processing: Kalman filtering corrects the position information of the tracked blobs.
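A scalar Kalman corrector in the spirit of this step, assuming a static motion model and illustrative noise variances; a real implementation would filter the full position and size state of each blob:

```python
# Post-processing sketch: a one-dimensional Kalman filter smoothing one
# position coordinate of a finished track.

def kalman_smooth(positions, process_var=1e-2, meas_var=4.0):
    """Filter a sequence of noisy position measurements."""
    x = positions[0]   # state estimate
    p = 1.0            # estimate variance
    out = [x]
    for z in positions[1:]:
        p += process_var            # predict step (static motion model)
        k = p / (p + meas_var)      # Kalman gain
        x += k * (z - x)            # correct with measurement z
        p *= (1 - k)
        out.append(x)
    return out
```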
Target track generation: when each trajectory ends, the moving target's trajectory record is generated, comprising its ID number, start frame number, end frame number, collision flag array, and the position and size of every blob, and is exported to a trajectory file.
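A possible trajectory record with the fields listed above; the field names and the JSON layout are assumptions, since the patent does not specify the file format:

```python
# Trajectory record sketch: the listed fields (ID, start/end frame,
# per-frame collision flags, per-frame blob boxes) serialized to JSON.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class Trajectory:
    track_id: int
    start_frame: int
    end_frame: int
    collisions: list = field(default_factory=list)   # one flag per frame
    boxes: list = field(default_factory=list)        # (x, y, w, h) per frame

def export_tracks(tracks):
    """Serialize finished trajectories to a JSON string."""
    return json.dumps([asdict(t) for t in tracks])
```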
Video abstract generation: using the target trajectory file produced by the track generation module, the trajectories of all targets are jointly optimized to minimize temporal and spatial redundancy while preserving target integrity and temporal-order consistency; the moving targets are then fused with the generated background image according to the optimization result to produce the video abstract.
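A toy stand-in for the joint optimization: each trajectory's start time is greedily shifted so that only a bounded number of tubes play at once, which shortens the summary while keeping the original temporal order. A real synopsis optimizer would also penalize spatial collisions between tubes:

```python
# Synopsis packing sketch: tracks are given by their duration in frames,
# in original temporal order; each is assigned the earliest start time at
# which no more than max_concurrent tubes overlap in time.

def pack_tracks(durations, max_concurrent=2):
    """Return the shifted start frame chosen for each track."""
    placed = []  # (start, end) of every tube placed so far
    starts = []
    for dur in durations:
        t = 0
        # advance until fewer than max_concurrent tubes overlap [t, t+dur)
        while sum(1 for s, e in placed if s < t + dur and t < e) >= max_concurrent:
            t += 1
        placed.append((t, t + dur))
        starts.append(t)
    return starts
```

With three 5-frame tracks and at most two concurrent tubes, the 15 frames of source activity pack into a 10-frame summary.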
Claims (1)
1. A monitoring video concentration processing method is characterized by comprising the following steps:
(1) background reconstruction: processing the video image with an adaptive Gaussian mixture model, classifying each pixel as foreground or background to obtain a foreground image and a background image, and rebuilding the background model in real time;
(2) target detection: detecting all blobs newly entering the scene in the foreground image obtained by background reconstruction, discarding blobs that are too small or overlap already-tracked blobs, sorting the remaining blobs by size, screening them by image-boundary detection and motion-consistency detection, discarding blobs that fail these checks, and storing the genuinely new blobs in a blob list;
(3) target tracking: first extracting all blobs from the foreground image; then predicting the position and size of each tracked target with a Kalman filter while performing collision analysis; finally processing each tracked trajectory, obtaining the target blob by weighted particle filtering if a collision exists, and by a nearest-neighbor method otherwise;
(4) target track post-processing: correcting the position information of the tracked blobs with Kalman filtering;
(5) target track generation: generating, at the end of each trajectory, the moving target's trajectory record, comprising its ID number, start frame number, end frame number, collision flag array, and the position and size of all blobs, and exporting it to a trajectory file;
(6) video abstract generation: jointly optimizing the trajectories of all targets according to the target trajectory file produced by the track generation module, minimizing temporal and spatial redundancy while preserving target integrity and temporal-order consistency, and fusing the moving targets with the generated background image according to the optimization result to generate the video abstract.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811326997.4A CN111159471A (en) | 2018-11-08 | 2018-11-08 | Monitoring video concentration processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111159471A true CN111159471A (en) | 2020-05-15 |
Family
ID=70555128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811326997.4A Pending CN111159471A (en) | 2018-11-08 | 2018-11-08 | Monitoring video concentration processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111159471A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111709972A (en) * | 2020-06-11 | 2020-09-25 | 石家庄铁道大学 | Space constraint-based method for quickly concentrating wide-area monitoring video |
CN111709972B (en) * | 2020-06-11 | 2022-03-11 | 石家庄铁道大学 | Space constraint-based method for quickly concentrating wide-area monitoring video |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200515 |