CN104915655A - Multi-path monitor video management method and device - Google Patents


Info

Publication number
CN104915655A
CN104915655A (Application CN201510329776.2A)
Authority
CN
China
Prior art keywords
target
detection
video
videos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510329776.2A
Other languages
Chinese (zh)
Inventor
李广鑫
王立楠
展俊领
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510329776.2A
Publication of CN104915655A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content
    • G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 — Higher-level, semantic clustering, classification or understanding of sport video content
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/24 — Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and device for managing multi-channel surveillance video, relating to server-side intelligent video detection and the management of multiple background video channels. The method comprises the following steps: a server receives the pictures of multiple surveillance cameras, performs target detection on the picture images, and carries out target tracking according to the target detection results; anomaly detection is performed on each video channel according to the detection and tracking results, the anomalies are classified, and event attributes are recorded; and a database is established and an index created according to the anomaly classification and the other anomaly attributes. The method and device help personnel quickly search for clues in massive surveillance video, saving time; they not only reduce the manpower invested in conventional video monitoring but also judge abnormal conditions in the video quickly and efficiently; and by planning the videos uniformly and globally, they spare searchers the trouble of multiple people combing through multiple videos, improving efficiency.

Description

Management method and equipment for multi-channel monitoring video
Technical Field
The invention relates to an intelligent monitoring technology, in particular to the field of video management of an intelligent monitoring server.
Background
With the development of society and the progress of human civilization, video monitoring technology has become increasingly important in daily life. It is widely applied in the security field and assists public safety departments in fighting crime and maintaining social stability. With the popularization of network technology and advances in image processing, intelligent video monitoring is spreading into many fields, such as education, government, entertainment, and hotels.
At present, video monitoring technology is maturing, but high-definition video still poses the following problems:
1) high resolution and high picture quality require a large amount of space to store video;
2) with a large monitoring range and many camera channels, video files easily become disordered in time;
3) events of the same type are not effectively linked together.
How to quickly and accurately find the key parts of multiple video channels within massive high-definition video is an important subject. The invention therefore provides a method that records the attributes of special events in video to assist video analysis and storage as well as rapid query and investigation.
In intelligent monitoring, obtaining the attributes of special events is particularly important. Mature intelligent detection of abnormal video behavior currently covers events such as two-way border crossing, one-way border crossing, entering a forbidden area, leaving a forbidden area, loitering, unattended objects, sudden scene change, crowd gathering, smoke detection, rapid movement, moving against the flow, and fighting. With the rapid development of video monitoring and analysis technology, new detection algorithms for such special events keep emerging. The main current approach is based on computer vision: the video images collected by a surveillance camera are analyzed and recognized so as to detect, extract, and track suspicious targets in a dynamic, complex scene; on this basis the behavior of the suspicious targets is analyzed and recognized, the content of the video images is understood, and the resulting image information is analyzed and organized.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a method and device for managing multi-channel surveillance video. On the basis of traditional monitoring technology, computer vision, image processing, video analysis, and pattern recognition methods are used to automatically analyze and process the video sequence captured by a camera, including detecting, extracting, marking, and tracking targets of interest in the monitored scene, analyzing and judging the targets' behavior, and storing the analysis results, so that reviewing the video later saves time and improves efficiency.
In order to achieve the purpose, the invention adopts the following technical scheme:
a management device for multiple paths of monitoring videos comprises a plurality of paths of monitoring cameras, wherein the monitoring cameras are all in communication connection with a server; the server comprises a target detection module, a target tracking module, a target classification module, an abnormality detection classification module and a database module.
Based on the management equipment, the management method of the multi-channel monitoring video comprises the following steps:
S1, the server receives the pictures of the monitoring cameras connected with the server, and the target detection module detects targets in the picture of each monitoring camera;
S2, tracking the target detected in the step S1 through a target tracking module;
S3, classifying the targets by using the target classification module according to the results obtained in the steps S1 and S2, detecting the abnormality of the targets by using the abnormality detection classification module based on the category of the targets, and classifying the detected abnormality into corresponding abnormality classifications;
S4, establishing a database through the database module, writing the abnormal attribute into the corresponding field set by the database, and establishing an index; wherein the fields in the database at least comprise the video identification to which the exception belongs and the category to which the exception belongs.
In a general video anomaly detection process, after a video image is read and a target is detected, anomaly determination and analysis are performed according to a set standard condition. The main improvement of the invention is to classify the abnormality based on the original detection process, establish a database of the abnormality attributes including the abnormality classification, and create an index structure.
Further, in step S1, the target region is detected by distinguishing the key frame from the background frame using an inter-frame difference method or a background difference method.
Further, in step S2, a Camshift tracking algorithm, an optical flow tracking algorithm, or a particle filter algorithm is used for target tracking.
Further, in step S3, the object is mainly classified into a person, a vehicle, an object, smoke, and flame.
Further, in step S3, the content of the anomaly detection mainly includes malicious occlusion, image interference, camera movement, object identification, smoke detection, fire detection, vehicle speed measurement, retrograde warning, boundary crossing identification, and human body abnormal behavior; the abnormal categories of malicious occlusion, image interference and camera movement are diagnosis categories, the abnormal categories of object identification, smoke detection and fire detection are identification categories, and the abnormal categories of vehicle speed measurement, retrograde warning, border crossing identification and human body abnormal behaviors are behavior categories.
Further, in step S3, abnormality detection is performed by a template matching method, a probability statistics method, or a semantic method.
Further, in step S4, the fields in the database further include an exception location, an exception time, an exception body, and an exception content.
The invention has the beneficial effects that: within massive surveillance video, the method helps personnel quickly search for clues, saving time; it reduces the manpower invested in traditional video monitoring and judges abnormal conditions in the video quickly and efficiently; and by planning the videos uniformly and globally, it spares searchers the trouble of multiple people combing through multiple videos when records are searched, improving efficiency.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the anomaly classification of FIG. 1;
FIG. 3 is a schematic diagram of an embodiment;
fig. 4 is a schematic diagram of the principle of object classification.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, and it should be noted that the present embodiment is based on the technical solution, and the detailed implementation and the specific operation process are provided, but the protection scope of the present invention is not limited to the present embodiment.
A management device for multiple paths of monitoring videos comprises a plurality of paths of monitoring cameras, wherein the monitoring cameras are all in communication connection with a server; the server comprises a target detection module, a target tracking module, a target classification module, an abnormality detection classification module and a database module.
As shown in fig. 1, a method for managing multiple surveillance videos includes the following steps:
s1, receiving the pictures of the monitoring cameras connected with the server through the server, and detecting the target in each picture through the target detection module. The target detection may specifically adopt an inter-frame difference method or a background difference method to distinguish a key frame from a background frame, so as to detect a target region.
S2, tracking the target detected in the step S1 through a target tracking module by adopting a Camshift tracking algorithm, an optical flow tracking algorithm or a particle filter algorithm.
S3, classifying the targets by using a target classification module according to the results obtained in the steps S1 and S2, mainly classifying the targets into people, vehicles, objects, smoke and flames, wherein the purpose of target classification is mainly to analyze and understand the high-level behaviors by adopting different algorithm strategies according to different categories.
After the targets are classified, based on the categories to which the targets belong, the targets are subjected to anomaly detection through an anomaly detection classification module, and the detected anomalies are classified into corresponding anomaly classifications. As shown in fig. 2, the content of the anomaly detection mainly includes malicious occlusion, image interference, camera movement, object identification, smoke detection, fire detection, vehicle speed measurement, retrograde warning, boundary crossing identification and abnormal behavior of human body; the abnormal categories of malicious occlusion, image interference and camera movement are diagnosis categories, the abnormal categories of object identification, smoke detection and fire detection are identification categories, and the abnormal categories of vehicle speed measurement, retrograde warning, border crossing identification and human body abnormal behaviors are behavior categories.
The detection of the anomaly can be performed by adopting a template matching-based method, a probability statistics-based method or a semantic-based method.
S4, establishing a database through the database module, writing the abnormal attribute into the corresponding field set by the database, and establishing an index; wherein the fields in the database at least comprise the video identification to which the exception belongs and the category to which the exception belongs. In addition, the method also comprises an abnormal place, an abnormal time, an abnormal subject and abnormal content.
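A minimal sketch of such a schema and index, using SQLite; the table and column names (`anomaly`, `video_id`, `category`, and so on) are illustrative assumptions, not taken from the patent:

```python
import sqlite3

# Sketch of the step-S4 anomaly database; schema names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE anomaly (
        id       INTEGER PRIMARY KEY,
        video_id TEXT NOT NULL,  -- identifier of the video the anomaly belongs to
        category TEXT NOT NULL,  -- diagnosis / identification / behavior class
        place    TEXT,           -- abnormal place
        time     TEXT,           -- abnormal time
        subject  TEXT,           -- abnormal subject
        content  TEXT            -- abnormal content
    )
""")
# Index over the fields most often queried when reviewing recordings.
conn.execute("CREATE INDEX idx_anomaly ON anomaly (category, video_id, time)")
conn.execute(
    "INSERT INTO anomaly (video_id, category, time, content) VALUES (?, ?, ?, ?)",
    ("cam-03", "behavior", "2015-06-16T12:00:00", "running"),
)
rows = conn.execute(
    "SELECT video_id, content FROM anomaly WHERE category = ?", ("behavior",)
).fetchall()
```

Queries that filter on the anomaly category and video identifier can then use the index instead of scanning every record.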
The present invention will be further described below by taking the detection of abnormal human behavior as an example, as shown in fig. 3.
Firstly, target detection is carried out.
At present, there are many methods for detecting a moving object, and there are mainly an inter-frame difference method and a background difference method:
1) The inter-frame difference method obtains the contour of a moving target by differencing two adjacent frames of a video image sequence, and adapts well to scenes with multiple moving targets or a moving camera. When abnormal object motion occurs in a monitored scene, adjacent frames differ noticeably; subtracting the two frames gives the absolute value of their brightness difference, and judging whether this value exceeds a threshold reveals the motion characteristics of the video or image sequence and determines whether object motion is present. Differencing the image sequence frame by frame is equivalent to high-pass filtering the sequence in the time domain.
The basic implementation process is as follows: the current frame $I_k(x, y)$ is differenced with the frame $I_{k-n}(x, y)$ taken n frames earlier, and each pixel is judged to be a foreground or background point according to whether its value in the resulting difference image is greater than or equal to a given threshold T. The specific decision formula is:
$$D_k(x,y)=\begin{cases}1, & |I_k(x,y)-I_{k-n}(x,y)|\ge T\\ 0, & |I_k(x,y)-I_{k-n}(x,y)|<T\end{cases}$$
where $D_k(x,y)$ is the gray value of the difference binary image at coordinate $(x, y)$; a value of 1 marks the pixel as a foreground point, and a value of 0 marks it as a background point.
The inter-frame difference method is simple to implement, has low programming complexity, is relatively insensitive to scene changes such as lighting, and adapts to various dynamic environments with good stability. However, it cannot extract the complete region of an object, only its boundary, and it depends on the chosen inter-frame interval, i.e., the key is selecting a suitable value of n and threshold T. For a fast-moving object a small interval must be chosen; if chosen poorly, an object that does not overlap between the two frames is detected as two separate objects. For a slow-moving object a larger interval should be chosen; if chosen poorly, an object that almost completely overlaps between the two frames is not detected at all.
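The decision rule above can be sketched in a few lines of NumPy; the threshold T = 25 is an arbitrary illustrative value:

```python
import numpy as np

def frame_difference(curr, prev, T=25):
    """Binary mask D_k: 1 where |I_k(x,y) - I_{k-n}(x,y)| >= T, else 0."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff >= T).astype(np.uint8)

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:5, 2:5] = 200                         # an object appears in the current frame
mask = frame_difference(curr, prev)
```

Casting to a signed type before subtracting avoids the unsigned wrap-around that would otherwise corrupt the absolute difference.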
2) Background subtraction detects moving objects by comparing the current frame of an image sequence with a background reference model; its performance depends on the background modeling technique used. The basic implementation differences the current frame $I_k(x, y)$ with the background image and judges each pixel to be a foreground or background point according to whether its value in the difference image is greater than or equal to a given threshold T. The specific decision formula is:
$$D_k(x,y)=\begin{cases}1, & |I_k(x,y)-B_k(x,y)|\ge T\\ 0, & |I_k(x,y)-B_k(x,y)|<T\end{cases}$$
where $B_k(x,y)$ is the background frame image and $D_k(x,y)$ is the gray value of the difference binary image at coordinate $(x, y)$; a value of 1 marks the pixel as a foreground point, and a value of 0 marks it as a background point.
The background difference method is fast, detects moving targets accurately, and is easy to implement; its key point is obtaining the background image. In practical applications a static background is not easily obtained directly, and because the background image changes dynamically, the background must be estimated and recovered from the inter-frame information of the video sequence (background reconstruction), so the background needs to be selectively updated:
first, a background is established, and a weighted sum of two frames of images before the initial frame is taken to establish an initial background model, namely
B0(x,y)=a×Ik-2(x,y)+b×Ik-1(x,y);
In the formula: b is0(x, y) is the pixel value of the initial background image at the (x, y) point; i isk-1(x, y) and Ik-2(x, y) are respectively the pixel values of the two frames of images before the start at the point (x, y); a and b are weighting factors, which satisfy a + b being 1, and the values of a and b can be adjusted according to actual conditions to obtain a suitable initial background image, where a being 0.5.
In real life the background changes over time; if the background model is never updated, the error will inevitably grow.
The Surendra background updating algorithm can adaptively obtain a background image, and the basic idea of the algorithm is to find a motion area of an object by an inter-frame difference method, keep the background in the motion area unchanged, and replace and update the background of a non-motion area with a current frame. The basic steps of the algorithm are as follows:
(1) take the first frame image $I_0(x,y)$ as the background $B_0(x,y)$;
(2) select a threshold T and initialize the iteration counter m = 1, with maximum number of iterations M;
(3) calculating a frame difference image of the current frame:
$$DB(x,y)=\begin{cases}1, & |I_k(x,y)-I_{k-1}(x,y)|\ge T\\ 0, & |I_k(x,y)-I_{k-1}(x,y)|<T\end{cases}$$
where $I_k(x,y)$ and $I_{k-1}(x,y)$ are the current frame and the previous frame of image, respectively;
(4) update the background image $B_k(x,y)$ using the binary image DB(x, y):
$$B_k(x,y)=\begin{cases}B_{k-1}(x,y), & DB(x,y)=1\\ \alpha I_k(x,y)+(1-\alpha)B_{k-1}(x,y), & DB(x,y)=0\end{cases}$$
where $B_k(x,y)$ is the updated background image, $B_{k-1}(x,y)$ is the background image before updating, DB(x, y) is the gray value of the difference binary image at coordinate $(x, y)$, $I_k(x,y)$ is the k-th frame image, and $\alpha$ is the iteration speed coefficient;
(5) increment the iteration counter m by 1 and return to step (3); the iteration ends when m = M.
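The update steps above can be sketched as a single Surendra iteration in NumPy; the threshold and blend coefficient are illustrative values:

```python
import numpy as np

def surendra_update(bg, prev_frame, curr_frame, T=25, alpha=0.1):
    """One Surendra iteration: where the frame difference marks motion
    (DB = 1) keep the old background; elsewhere blend the current frame in,
    B_k = alpha * I_k + (1 - alpha) * B_{k-1}."""
    db = np.abs(curr_frame.astype(float) - prev_frame.astype(float)) >= T
    return np.where(db, bg, alpha * curr_frame + (1.0 - alpha) * bg)

bg = np.full((4, 4), 10.0)
prev = np.full((4, 4), 10.0)
curr = np.full((4, 4), 20.0)                 # small global change, below T
curr[0, 0] = 200.0                           # one strongly moving pixel
bg_new = surendra_update(bg, prev, curr)
```

The moving pixel keeps the old background value, while the static pixels drift toward the current frame at the rate set by alpha.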
Secondly, after the targets in the picture have been detected, they need to be tracked; target tracking is an indispensable link in most vision systems. Current image tracking algorithms are continuously being updated and have no fixed standard. In fields such as video monitoring, the Camshift algorithm, optical flow tracking algorithms, and particle filter algorithms are commonly used.
The Camshift tracking algorithm is a mean-shift based algorithm. It is an improvement of the MeanShift algorithm, known in full as "Continuously Adaptive Mean-Shift". Its basic idea is to run the MeanShift operation on every frame of the video image and use the result of the previous frame (i.e., the center and size of the search window) as the initial search window for the MeanShift algorithm on the next frame, iterating in this way. The method comprises the following steps:
(1) firstly, selecting a region in a video frame sequence;
(2) calculating the color 2D probability distribution of the area;
(3) converging the area to be tracked by using the MeanShift algorithm;
(4) concentrating the converged region and marking it;
(5) repeating steps (3) and (4) every frame;
the key point of Camshift is that when the size of the target changes, the algorithm can adaptively adjust the target area to continue tracking.
And thirdly, classifying the targets according to the results of target detection and target tracking. The final purpose of the intelligent video monitoring system is to automatically analyze the video image sequence, locate, identify and track changes in the monitored scene, and further analyze and judge the behavior of the target without human intervention. Therefore, on the basis of realizing target detection and target tracking, in order to realize the purpose of analyzing abnormal behaviors of specific classes of targets in video monitoring, it is necessary to correctly classify the detected and tracked moving targets.
Object classification generally refers to identifying pedestrians, cars, animals, or other refined items among the detected and tracked objects in a video frame. After the category of a moving target is obtained, different algorithm strategies can be adopted for high-level behavior analysis and understanding according to the category. Target classification first obtains the feature information of the target region and then classifies the target; according to the feature information adopted, there are the following two approaches:
1. object classification based on static features of the object
The static characteristics of the moving target area are used for classification, namely the moving target does not change along with time, and target characteristic extraction can be completed only by a single frame of image, so that the algorithm is simple and the real-time performance is good. The static characteristics of the target mainly comprise shape characteristics, color characteristics, gray level characteristics, texture characteristics and the like, and the specific characteristics comprise color, contour characteristics, region characteristics, gray level mean values, gray level variances, histograms, co-occurrence matrixes and the like.
2. Object classification based on dynamic features of objects
In the analysis of a video moving target, because the change of the environment of a monitoring area where the moving target is located, such as illumination, shielding and the like, can affect the accurate extraction of the static features of the target, great difficulty is brought to target classification, and therefore, many researchers provide a target classification method based on the motion correlation characteristics of the moving target. Since the motion state changes and periodicity of different classes of objects may also be different, the dynamic characteristics of the objects may be obtained by analyzing the motion state changes and periodicity of the objects in a sequence of consecutive frame images. The motion characteristics of the target include target position, motion direction, motion speed, motion periodicity and the like.
At present, object classification and behavior analysis in video surveillance systems are still at an early stage, but great progress has been made in research on video object classification algorithms. The principle is shown in fig. 4.
In this embodiment, pedestrian detection is performed with HOG features and an SVM classifier: HOG is used for feature extraction and a linear SVM serves as the classifier. For different real-life scenes, however, a classifier suited to the specific application scene must be retrained. The main process of training a new classifier is as follows:
(1) preparing a training sample set and cutting the training sample set, wherein the training sample set comprises a positive sample set and a negative sample set;
(2) extracting HOG features of all positive samples and negative samples;
(3) sample labels are given to all positive and negative samples;
(4) inputting the HOG features and the labels of the positive and negative samples into the SVM for training.
The classifier trained on the original training samples can then be used for pedestrian detection.
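As an illustration of step (2), a toy HOG-style descriptor (per-cell gradient-orientation histograms, without the block normalization of full HOG) might look like this simplified sketch:

```python
import numpy as np

def hog_like(img, cell=8, bins=9):
    """Per-cell histograms of gradient orientation, weighted by magnitude."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    feats = []
    for r in range(0, img.shape[0] - cell + 1, cell):
        for c in range(0, img.shape[1] - cell + 1, cell):
            hist, _ = np.histogram(ang[r:r + cell, c:c + cell],
                                   bins=bins, range=(0.0, 180.0),
                                   weights=mag[r:r + cell, c:c + cell])
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # L2-normalize
    return np.concatenate(feats)

ramp = np.tile(np.arange(16.0), (16, 1))     # horizontal intensity ramp
desc = hog_like(ramp)                        # 2x2 cells * 9 bins = 36 values
```

For a purely horizontal gradient, all the weight falls into the 0-degree orientation bin of each cell, which is what a linear SVM would then separate on.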
The final purpose of target detection, target tracking and target classification is to perform anomaly detection.
Normal behavior generally refers to states with a certain periodicity and repeatability, such as walking or running in daily life. The standard for abnormal behavior differs between environments; in this embodiment, running, jumping, squatting, climbing, and loitering, behaviors that differ from a person's normal walking, are defined as abnormal behavior.
The human behavior recognition mainly comprises a template-based method, a probability statistics-based method and a semantic-based method. The simplest method for analyzing abnormal behavior is to perform pattern matching on the posture or a series of movements of the human body with a pre-trained template.
The template matching algorithm is to convert a moving image sequence into a group of static image patterns, and then compare the static image patterns with known templates, wherein the templates comprise several abnormal behaviors such as running, jumping, falling, wandering and the like, so as to obtain a recognition result.
Template matching searches for a target within a larger image: knowing that the image contains the target to be found and that the target has the same size, orientation, and appearance as the template, the target and its coordinate position can be found by a suitable algorithm. Taking an 8-bit image (1 byte per pixel) as an example, the template T (M × N pixels) is superimposed on the searched image S (W × H pixels) and translated; the region of the searched image covered by the template is called the sub-image $S_{i,j}$, where i, j are the coordinates of the upper-left corner of the sub-image in the searched image S. The search range is:
1≤i≤W-M;
1≤j≤H-N;
Template matching is completed by comparing the similarity of T and S_{i,j}. The matching degree between the template T and the sub-image S_{i,j} can be measured by the following two methods: the sum of squared differences, and the sum of absolute differences (SAD), the latter being the simplest and also the faster of the two:
D(i,j) = \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ S_{i,j}(m,n) - T(m,n) \right]^2 ;
D(i,j) = \sum_{m=1}^{M} \sum_{n=1}^{N} \left| S_{i,j}(m,n) - T(m,n) \right| ;
where m, n represent pixel coordinates.
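As a concrete illustration, the exhaustive search described above can be sketched as follows (a minimal NumPy sketch; the function name `match_template` and the 0-based indexing are our own illustrative choices, not part of the patent):

```python
import numpy as np

def match_template(S, T, metric="sad"):
    """Exhaustively slide template T over image S; return the top-left
    (row, col) of the sub-image minimizing D, plus the distance itself.
    Indices are 0-based here (the text above uses 1-based coordinates)."""
    H, W = S.shape          # searched image: H rows, W columns
    M, N = T.shape          # template: M rows, N columns
    S = S.astype(np.int64)  # avoid uint8 overflow in the differences
    T = T.astype(np.int64)
    best_d, best_pos = None, None
    for r in range(H - M + 1):
        for c in range(W - N + 1):
            diff = S[r:r + M, c:c + N] - T
            d = np.abs(diff).sum() if metric == "sad" else (diff ** 2).sum()
            if best_d is None or d < best_d:
                best_d, best_pos = d, (r, c)
    return best_pos, best_d
```

An exact match yields D = 0; in practice the minimum over all positions is taken as the match location.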
However, the SAD algorithm is poor in robustness. To address this while still accounting for real-time performance, the correlation coefficient method in template matching can meet the requirements well. The correlation coefficient r is a mathematical distance that can be used to measure the similarity between two vectors. It originates from the law of cosines: cos(A) = (b^2 + c^2 - a^2) / (2bc), where a is the side opposite angle A, and b and c are its two adjacent sides.
Two vectors are completely similar if the angle between them is 0 degrees (corresponding to r = 1), completely dissimilar if it is 90 degrees (r = 0), and completely opposite if it is 180 degrees (r = -1). Writing the law of cosines in vector form:
cos(A)=<b,c>/(|b|*|c|);
namely: cos(A) = (b_1 c_1 + b_2 c_2 + \dots + b_n c_n) / \sqrt{(b_1^2 + b_2^2 + \dots + b_n^2)(c_1^2 + c_2^2 + \dots + c_n^2)}.
Here the numerator is the inner product of the two vectors, and the denominator is the product of their moduli (magnitudes).
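The three angle cases above can be checked numerically with a short sketch (the helper `cos_angle` is hypothetical, introduced only for illustration):

```python
import math

def cos_angle(b, c):
    """cos(A) = <b, c> / (|b| * |c|) for two equal-length vectors."""
    dot = sum(bi * ci for bi, ci in zip(b, c))
    nb = math.sqrt(sum(bi * bi for bi in b))
    nc = math.sqrt(sum(ci * ci for ci in c))
    return dot / (nb * nc)

# The three cases from the text:
assert cos_angle([1, 0], [2, 0]) == 1.0     # 0 degrees   -> r = 1
assert cos_angle([1, 0], [0, 3]) == 0.0     # 90 degrees  -> r = 0
assert cos_angle([1, 0], [-2, 0]) == -1.0   # 180 degrees -> r = -1
```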
Therefore, the formula for computing the similarity from the cosine of the angle between the vectors is:
r(i,j) = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} S_{i,j}(m,n) \times T(m,n)}{\sqrt{\sum_{m=1}^{M} \sum_{n=1}^{N} \left[ S_{i,j}(m,n) \right]^2} \, \sqrt{\sum_{m=1}^{M} \sum_{n=1}^{N} \left[ T(m,n) \right]^2}}
where r(i,j) represents the similarity between the sub-image S_{i,j} and the template T.
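A minimal sketch of the correlation-coefficient matcher, following the formula above with 0-based indices (`corr_map` is an illustrative name, not from the patent):

```python
import numpy as np

def corr_map(S, T):
    """Correlation coefficient r(i, j) between template T and every
    sub-image S_{i,j} of the searched image S (0-based indices)."""
    H, W = S.shape
    M, N = T.shape
    S = S.astype(np.float64)
    T = T.astype(np.float64)
    t_norm = np.sqrt((T ** 2).sum())
    r = np.zeros((H - M + 1, W - N + 1))
    for i in range(H - M + 1):
        for j in range(W - N + 1):
            sub = S[i:i + M, j:j + N]
            denom = np.sqrt((sub ** 2).sum()) * t_norm
            r[i, j] = (sub * T).sum() / denom if denom else 0.0
    return r
```

The position where r is largest (approaching 1) is taken as the best match; because the measure is normalized, it is less sensitive to overall brightness scaling than SAD.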
The size of the template is usually chosen empirically. A template that tightly approximates the contour of the target is too sensitive to changes in the target; conversely, a template containing too much background barely reacts when the target changes. Generally, the target should preferably occupy 30% to 50% of the template.
The whole process comprises target detection, target tracking, target classification, and anomaly detection and classification, with the results finally stored and processed. A database is created, and a table is created to store the analyzed data. The form of the table is shown in Table 1:
TABLE 1
When a video needs to be browsed, the video ID of the corresponding video can be found by querying the attributes in the database with a query statement, which greatly reduces manpower and improves efficiency.
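Since the contents of Table 1 are not reproduced here, the following sqlite3 sketch uses an assumed schema based on the fields named in the claims (video ID, anomaly category, location, time, subject, content); the table and column names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE anomaly (
        id        INTEGER PRIMARY KEY,
        video_id  TEXT NOT NULL,
        category  TEXT NOT NULL,   -- diagnosis / identification / behavior
        location  TEXT,
        time      TEXT,
        subject   TEXT,
        content   TEXT
    )""")
conn.execute("CREATE INDEX idx_cat ON anomaly (category, video_id)")

# Write one detected anomaly, then look up the video by its attributes
conn.execute("INSERT INTO anomaly (video_id, category, subject, content) "
             "VALUES (?, ?, ?, ?)",
             ("cam03_20150615", "behavior", "person", "climbing"))
rows = conn.execute("SELECT DISTINCT video_id FROM anomaly "
                    "WHERE category = ? AND content = ?",
                    ("behavior", "climbing")).fetchall()
```

`rows` then holds the IDs of the videos containing the queried anomaly, so an operator can jump straight to the relevant recordings instead of browsing all channels.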
Various changes and modifications can be made by those skilled in the art based on the above technical solutions and concepts, and all such changes and modifications should be included in the scope of the present invention.

Claims (8)

1. Management equipment for multiple surveillance videos, characterized by comprising multiple surveillance cameras, all of which are in communication connection with a server; the server comprises a target detection module, a target tracking module, a target classification module, an anomaly detection and classification module, and a database module.
2. A method of managing multiple surveillance videos using the equipment of claim 1, comprising the steps of:
S1, the server receives the pictures from the surveillance cameras connected to it, and the target detection module performs target detection on the picture from each surveillance camera;
s2, tracking the target detected in the step S1 through a target tracking module;
S3, classifying the targets with the target classification module according to the results of steps S1 and S2, detecting anomalies of the targets with the anomaly detection and classification module based on the target category, and assigning each detected anomaly to the corresponding anomaly classification;
S4, establishing a database through the database module, writing the anomaly attributes into the corresponding fields of the database, and establishing an index; wherein the fields in the database at least comprise the identifier of the video to which the anomaly belongs and the category to which the anomaly belongs.
3. The method for managing multiple surveillance videos according to claim 2, wherein in step S1, the target area is detected by using an inter-frame difference method or a background difference method to distinguish key frames from background frames.
4. The method for managing multiple surveillance videos as claimed in claim 2, wherein in step S2, a Camshift tracking algorithm, an optical flow tracking algorithm or a particle filter algorithm is used for tracking the target.
5. The method for managing multiple surveillance videos as claimed in claim 2, wherein in step S3, the objects are mainly classified into people, vehicles, objects, smoke and flames.
6. The method for managing multiple surveillance videos according to claim 2, wherein in step S3, the content of the anomaly detection mainly includes malicious occlusion, image interference, camera movement, object recognition, smoke detection, fire detection, vehicle speed measurement, wrong-way driving warning, border-crossing recognition and abnormal human behavior; the anomaly categories of malicious occlusion, image interference and camera movement are diagnosis categories, those of object recognition, smoke detection and fire detection are recognition categories, and those of vehicle speed measurement, wrong-way driving warning, border-crossing recognition and abnormal human behavior are behavior categories.
7. The method for managing multiple surveillance videos as claimed in claim 2, wherein in step S3, the anomaly detection is performed by a template matching method, a probability statistics method or a semantic method.
8. The method for managing multiple surveillance videos as claimed in claim 2, wherein in step S4, the fields in the database further include the anomaly location, anomaly time, anomaly subject and anomaly content.
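The S1-S4 flow of claim 2 can be sketched as follows; `detect_targets`, `track_target`, `classify_target` and `detect_anomaly` are hypothetical stand-ins for the modules named in claim 1, with deliberately simplified logic (a real system would plug in the algorithms of claims 3-7):

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    category: str   # diagnosis / identification / behavior
    content: str

# Here a "frame" is just a list of region dicts, for illustration only.
def detect_targets(frame):                          # S1: target detection
    return [region for region in frame if region.get("moving")]

def track_target(region):                           # S2: target tracking
    return [region]                                 # trivial one-step track

def classify_target(region, track):                 # S3: classification
    return region.get("kind", "object")

def detect_anomaly(kind, track):                    # S3: anomaly check
    last = track[-1]
    if kind == "person" and last.get("behavior") == "climbing":
        return Anomaly("behavior", "climbing")
    return None

def process_frame(frame, video_id, table):
    """S1-S4 of claim 2: detect, track, classify, check, store."""
    for region in detect_targets(frame):
        track = track_target(region)
        kind = classify_target(region, track)
        anomaly = detect_anomaly(kind, track)
        if anomaly:                                 # S4: write attributes
            table.append({"video_id": video_id,
                          "category": anomaly.category,
                          "content": anomaly.content})

frame = [{"moving": True, "kind": "person", "behavior": "climbing"},
         {"moving": False}]
table = []
process_frame(frame, "cam01", table)
```

In the claimed system the `table.append` step would instead write the anomaly attributes into the indexed database fields of step S4.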
CN201510329776.2A 2015-06-15 2015-06-15 Multi-path monitor video management method and device Pending CN104915655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510329776.2A CN104915655A (en) 2015-06-15 2015-06-15 Multi-path monitor video management method and device

Publications (1)

Publication Number Publication Date
CN104915655A true CN104915655A (en) 2015-09-16

Family

ID=54084705





Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150916
