CN115424207B - Self-adaptive monitoring system and method

Info

Publication number
CN115424207B
Authority
CN
China
Prior art keywords
module
result
clustering
region
monitoring
Prior art date
Legal status
Active
Application number
CN202211076362.XA
Other languages
Chinese (zh)
Other versions
CN115424207A (en)
Inventor
Wei Zhigang (魏志刚)
Current Assignee
Nanjing Xingyun Digital Technology Co Ltd
Original Assignee
Nanjing Xingyun Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Xingyun Digital Technology Co Ltd
Priority to CN202211076362.XA
Publication of CN115424207A
Application granted
Publication of CN115424207B

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements > G06V 20/50 Context or environment of the image > G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 10/00 Arrangements for image or video recognition or understanding > G06V 10/20 Image preprocessing > G06V 10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/40 Extraction of image or video features > G06V 10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/70 Arrangements using pattern recognition or machine learning > G06V 10/762 Using clustering, e.g. of similar faces in social networks > G06V 10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/70 > G06V 10/764 Using classification, e.g. of video objects
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data > G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a self-adaptive monitoring system and method. The system comprises a monitoring video input module, a clustering module, a tracking module, an identification module and a merging analysis module. The identification module and the tracking module are both connected with the monitoring video input module; the clustering module is connected with the merging analysis module; the clustering module is connected with the identification module through the tracking module; and the identification module is connected with the merging analysis module. The self-adaptive monitoring method comprises the following steps: step one, the monitoring video input module inputs a video image; step two, the video image is processed in two parallel paths; step three, the two paths of processing results are merged and analyzed, and the analysis result is output. The invention can automatically capture, in real time, abnormal events beyond those defined in advance, efficiently identify various unexpected events that need to be monitored, automatically capture event details for defined monitoring scenes, and reduce the cost of manual retrieval and monitoring.

Description

Self-adaptive monitoring system and method
Technical Field
The invention relates to the field of monitoring, in particular to a self-adaptive monitoring system and a self-adaptive monitoring method.
Background
Automatic monitoring systems based on intelligent video analysis for indoor use, especially for the elderly and for disabled people living alone, are already on the market. Through behavior analysis of a moving target (a person) in the video monitoring area, physical states such as standing, sitting and falling are identified; when a fall is detected, an alert is sent, via wireless or wired transmission, to the terminal devices (mobile phones, computer screens, tablet computers and the like) of other family members, community managers or civil-affairs staff. Such systems can be applied in indoor settings such as homes, nursing homes, activity centers and hospitals, but they cannot exhaust all scenes and objects: once an event occurs that was not defined in advance, it is not easily captured in real time.
Also on the market is a human behavior recognition method for video monitoring, which relates to the field of computer vision and can recognize successive, distinct behaviors in video. The human behavior monitoring system based on this method comprises a video acquisition unit, a storage unit, a feature extraction unit, a correlation analysis unit, a behavior identification unit, a video output unit and an early-warning unit. However, it likewise cannot exhaust all scenes and objects; once an event beyond those defined in advance occurs, it is not easily captured in real time, and any after-the-fact analysis requires manual retrieval, which wastes time and labor. Furthermore, for the monitoring scenarios that have been defined, current technologies lack a mechanism to automatically capture event details in real time.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a self-adaptive monitoring system and a self-adaptive monitoring method.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the self-adaptive monitoring system comprises a monitoring video input module, a clustering module, a tracking module, an identification module and a merging analysis module, wherein the identification module and the tracking module are connected with the monitoring video input module, the clustering module is connected with the merging analysis module, the clustering module is connected with the identification module through the tracking module, and the identification module is connected with the merging analysis module.
The self-adaptive monitoring method comprises the following steps:
firstly, a monitoring video input module inputs a video image;
step two, performing two-way parallel processing on the video image: one path is processed according to a predefined mode, and the other path is processed after clustering according to a motion mode;
and step three, combining and analyzing the processing results of the video images and outputting the analysis results.
Preferably, the step of processing according to a predefined pattern is: the video image is first processed by the identification module to identify a plurality of objects, the objects are then classified according to a predefined classification method, and the classes are further subdivided into sub-categories as needed.
Preferably, the predefined classification methods include methods based on sliding windows and hand-crafted feature extraction, methods based on machine learning, and methods that use deep learning techniques to automatically extract hidden features from the input image.
Preferably, the step of processing after clustering according to the motion pattern is: the tracking module first monitors and tracks moving objects according to their motion states using a tracking algorithm; the clustering algorithm of the clustering module then clusters objects within the same category or sub-category according to one or more motion features; finally, it is judged whether a new clustering result exists.
Preferably, the motion features include motion direction, motion speed, motion acceleration and motion position, together with their respective statistical parameters; the statistical parameters include the maximum, minimum, mean, variance, rate of change, first-order difference mean, second-order difference mean, normalized first-order difference mean, normalized second-order difference mean, range of variation, and sum of squares of the sequence differences.
Preferably, the step of merging and analyzing the two paths of processing results of the video image and outputting the analysis result comprises: if clustering yields no new result, exit; otherwise, take the region containing the cluster with the smallest number of samples as result region I, and take the region containing the detected predefined-pattern result as result region II. A preset threshold is then user-defined, and the merging analysis module judges whether the overlap area of the two result regions exceeds the preset threshold; if it does, the larger of the two result regions is selected as the common result region. The common result region and the remaining independent result regions are then zoomed in on and continuously tracked to obtain event information, which constitutes the analysis result.
Preferably, an independent result region is a portion of result region I or result region II where the two regions do not overlap.
Preferably, the tracking algorithm includes a continuous interframe difference method, a background difference method and an optical flow method.
Preferably, the clustering algorithm includes the K-Means clustering algorithm, the mean-shift clustering algorithm, density-based clustering algorithms, the Gaussian mixture model clustering algorithm, and hierarchical clustering algorithms.
The invention has the following beneficial effects: the monitoring video input is processed in two parallel paths, one according to a predefined pattern and the other by clustering according to motion patterns, and the regions defined by the two paths of results are merged and analyzed before the result is output. The invention can therefore automatically capture, in real time, abnormal events beyond those defined in advance, efficiently identify various unexpected events that need to be monitored, automatically capture event details for defined monitoring scenes, and reduce the cost of manual retrieval and monitoring.
Drawings
FIG. 1 is a block diagram of an adaptive monitoring system according to the present invention;
fig. 2 is a flow chart of an adaptive monitoring method.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings:
the first embodiment is as follows:
as shown in fig. 1, the adaptive monitoring system includes a monitoring video input module 1, a clustering module 2, a tracking module 3, an identification module 4, and a merging analysis module 5, wherein the identification module 4 and the tracking module 3 are both connected to the monitoring video input module 1, the clustering module 2 is connected to the merging analysis module 5, the clustering module 2 is connected to the identification module 4 through the tracking module 3, and the identification module 4 is connected to the merging analysis module 5.
As shown in fig. 2, the adaptive monitoring method includes the following steps:
firstly, a monitoring video input module 1 inputs a video image;
step two, performing two-path parallel processing on the video image: one path is processed according to a predefined mode, and the other path is processed after clustering according to a motion mode;
and step three, combining and analyzing the processing results of the video images and outputting the analysis results.
The step of processing according to a predefined pattern is: the video image is first processed by the identification module 4 to identify a plurality of objects, the objects are then classified according to a predefined classification method, and the classes are further subdivided as needed. Using the prior art, multiple objects to be monitored, such as people, animals and vehicles, can be identified and classified from the input video image. Sub-categories may be further subdivided as desired; for example, vehicles may be divided into large/medium/small vehicles, and animals into flying animals and animals that walk on the ground.
The predefined classification methods include methods based on sliding windows and hand-crafted feature extraction, methods based on machine learning, and methods that use deep learning techniques to automatically extract hidden features from the input image.
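As an illustration of the first family of methods (sliding window plus hand-crafted features), the sketch below uses OpenCV's stock HOG + linear-SVM pedestrian detector. The patent does not mandate any particular detector, so this choice, and the helper name detect_people, are illustrative assumptions.

```python
import cv2

# Hypothetical illustration: HOG features scanned with a sliding window,
# classified by OpenCV's bundled linear SVM for the "person" category.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    # detectMultiScale slides the detection window over an image pyramid
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return rects  # (x, y, w, h) boxes for detected persons
```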
The step of processing after clustering according to the motion pattern is: the tracking module 3 first monitors and tracks moving objects according to their motion states using a tracking algorithm; the clustering algorithm of the clustering module 2 then clusters objects within the same category or sub-category according to one or more motion features; finally, it is judged whether a new clustering result exists. Clustering within the same category or sub-category on one or more motion features has the advantage of avoiding false positives; for example, a normally flying bird will not be judged as a pedestrian with an abnormal speed. The clustered data may be time-series data of a single object (historical and current data) or of multiple objects (historical data of all objects and/or instantaneous data of all objects). The check for new clustering results may be carried out several times with several methods combining several features, with the outcomes combined in an OR relationship: a new result is counted as long as it appears at least once.
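A minimal sketch of the per-category "new clustering result" check, assuming track-level motion-feature vectors are already available. DBSCAN stands in for whichever clustering algorithm from the list below is chosen, and baseline_n_clusters is a hypothetical record of how many clusters this category produced historically.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def has_new_cluster(track_features: np.ndarray, baseline_n_clusters: int) -> bool:
    """Cluster the motion features of all tracks in one category or
    sub-category and report whether more groups appear than before."""
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(track_features)
    n_clusters = len(set(labels) - {-1})  # label -1 marks noise points
    return n_clusters > baseline_n_clusters

# Checks over several feature combinations are OR-ed together:
# new_result = any(has_new_cluster(f, b) for f, b in feature_sets)
```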
The motion features include motion direction, motion speed, motion acceleration and motion position, together with their respective statistical parameters. The statistical parameters include the maximum, minimum, mean, variance, rate of change (positive when the value rises, negative when it falls), first-order difference mean, second-order difference mean, normalized first-order difference mean, normalized second-order difference mean, range of variation, and sum of squares of the sequence differences.
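The statistical parameters above can be computed from a per-frame motion series (for example, speed) as sketched below; the exact normalization used for the normalized difference means is not fixed by the text, so dividing by the series' standard deviation is an assumption.

```python
import numpy as np

def motion_statistics(x: np.ndarray) -> dict:
    """Statistical parameters of one motion series (e.g. per-frame speed)."""
    d1 = np.diff(x)            # first-order differences
    d2 = np.diff(x, n=2)       # second-order differences
    xn = x / (x.std() + 1e-9)  # assumed normalization
    return {
        "max": x.max(), "min": x.min(), "mean": x.mean(), "variance": x.var(),
        "rate_of_change": d1.mean(),  # positive rising, negative falling
        "d1_mean": np.abs(d1).mean(),
        "d2_mean": np.abs(d2).mean(),
        "norm_d1_mean": np.abs(np.diff(xn)).mean(),
        "norm_d2_mean": np.abs(np.diff(xn, n=2)).mean(),
        "range": x.max() - x.min(),
        "sum_sq_diff": float((d1 ** 2).sum()),
    }
```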
The step of merging and analyzing the two paths of processing results of the video image and outputting the analysis result comprises: if clustering yields no new result, exit; otherwise, take the region containing the cluster with the smallest number of samples as result region I, and take the region containing the detected predefined-pattern result as result region II. A preset threshold is then user-defined, and the merging analysis module 5 judges whether the overlap area of the two result regions exceeds the preset threshold; if it does, the larger of the two result regions is selected as the common result region. The common result region and the remaining independent result regions (the parts where result region I and result region II do not coincide) are then zoomed in on and continuously tracked to obtain event information, which constitutes the analysis result. For example, the preset threshold may be 50% of the smaller result region's area: if the overlap area of the two result regions exceeds this threshold, the larger result region is selected as the common result region.
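The merging rule can be made concrete as below, with result regions as (x, y, w, h) rectangles and the 50%-of-smaller-area threshold taken from the example; the function name and rectangle format are assumptions for illustration.

```python
def merge_result_regions(r1, r2, threshold_ratio=0.5):
    """r1 = result region I (smallest cluster), r2 = result region II
    (predefined-pattern result); each is an (x, y, w, h) rectangle."""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    ix = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))
    iy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))
    overlap = ix * iy
    smaller = min(w1 * h1, w2 * h2)
    if overlap > threshold_ratio * smaller:
        # overlap exceeds the preset threshold: keep the larger region
        return [r1 if w1 * h1 >= w2 * h2 else r2]  # common result region
    return [r1, r2]  # no merge: two independent result regions to track
```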
The tracking algorithm comprises a continuous interframe difference method, a background difference method and an optical flow method.
Continuous inter-frame difference method: the video sequence collected by the camera is continuous in time. If there is no moving object in the scene, consecutive frames change only weakly; if there is a moving object, there are significant changes from frame to frame. The inter-frame difference method is based on this idea. As an object moves through the scene, its image occupies different positions in different frames. The algorithm performs a difference operation on two or three temporally consecutive frames: pixels at corresponding positions are subtracted and the absolute value of the gray-level difference is examined. Where the absolute value exceeds a certain threshold, the pixel is judged to belong to a moving target; connecting all such pixels then delineates the moving target, thereby realizing target detection.
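A minimal two-frame difference sketch with OpenCV (version 4 assumed for the findContours signature); the threshold of 25 gray levels is an illustrative choice.

```python
import cv2

def frame_difference_boxes(prev_gray, curr_gray, thresh=25):
    """Threshold the absolute inter-frame gray-level change and connect
    the moving pixels into candidate target bounding boxes."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # connect fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```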
Background subtraction method: the background subtraction method segments moving objects by comparing the input image with a background image. Certain conditions must hold for it to apply: the gray values of foreground (moving object) pixels must differ sufficiently from those of background pixels, and the camera must be static. Background subtraction is currently the mainstream method for detecting moving objects. Its basic idea is to subtract a background image, stored in advance or acquired in real time, from each current frame, and to treat any region that deviates from the background by more than a certain threshold as a motion region. The algorithm is simple to implement; the subtraction result directly gives the position, size, shape and other information of the target and can provide a complete description of the moving target region. Particularly when the camera is static, background subtraction is the preferred method for real-time detection and extraction of moving targets. The key to implementing background subtraction is acquiring and updating the background model. Background acquisition algorithms generally need to obtain a background image even while moving objects are present in the scene, and the updating process must let the background adapt to various changes and disturbances, such as changes in external lighting, disturbance and movement of objects in the background, and the influence of shadows.
A typical background modeling method describes the distribution of background pixel values with a Gaussian mixture model. During target detection, each current pixel value is tested against this distribution: if it fits, the pixel is judged a background point, otherwise a foreground point. The background image parameters are meanwhile updated adaptively from newly acquired images. This method can reliably handle illumination changes, interference from chaotic background motion, long-term scene changes and the like. On this basis, different update strategies can be adopted for the background, static targets and moving targets, so as to weaken the influence of moving targets on the background during background updating.
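OpenCV's MOG2 background subtractor implements exactly this kind of adaptively updated Gaussian-mixture background model; the parameter values below are OpenCV's defaults, shown explicitly for clarity.

```python
import cv2

# Each background pixel is modeled by a mixture of Gaussians, and the
# model parameters are updated adaptively as new frames arrive.
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

def foreground_mask(frame_bgr):
    mask = subtractor.apply(frame_bgr)  # 255 = foreground, 127 = shadow
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
    return mask
```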
Optical flow method: the basic principle of detecting moving objects by optical flow is to detect the target from the difference between the velocity vector fields with and without a moving target in the image. Each pixel in the image is assigned a velocity vector, forming an image motion field. At a given moment of the motion, points in the image correspond one-to-one with points on the three-dimensional object, a correspondence obtained by projection, and the image can be analyzed dynamically from the velocity-vector characteristics of each pixel. If there is no moving object, the optical flow vector varies continuously over the whole image region; when an object moves relative to the background, the velocity vectors formed by the moving object differ from those of the neighboring background, which reveals the position of the moving object. Optical flow refers to the apparent motion of the image brightness pattern; the term "apparent motion" is used because optical flow cannot be uniquely determined from local information of a moving image alone.
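A dense optical flow sketch using OpenCV's Farneback method; the pyramid and window parameters are common defaults, not values prescribed by the patent.

```python
import cv2

def dense_flow_magnitude(prev_gray, curr_gray):
    """Per-pixel motion magnitude; regions whose velocity vectors differ
    from the surrounding background indicate moving objects."""
    # positional args: prev, next, flow, pyr_scale, levels, winsize,
    #                  iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude
```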
The clustering algorithm comprises a K-Means clustering algorithm, a mean shift clustering algorithm, a density-based clustering algorithm, a Gaussian mixture model clustering algorithm and a hierarchical clustering algorithm.
K-Means clustering algorithm steps: (1) First, select some classes or groups and randomly initialize their respective center points. Each center point is a vector of the same length as a data point; this requires deciding the number of classes (i.e. the number of center points) in advance. (2) Compute the distance from each data point to each center point, and assign each data point to the class of its nearest center point. (3) Recompute the center point of each class as the mean of its members. (4) Repeat the above steps until the center of each class changes little between iterations. The center points may also be randomly initialized several times, keeping the run with the best result.
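The steps above map directly onto scikit-learn's KMeans, shown as a sketch on placeholder motion features; n_init re-runs the random initialization and keeps the best result, as the last remark suggests.

```python
import numpy as np
from sklearn.cluster import KMeans

features = np.random.rand(200, 4)  # placeholder motion-feature vectors
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = km.labels_                # class of the nearest center point
centers = km.cluster_centers_      # recomputed class centers
```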
Mean-shift clustering is a sliding-window-based algorithm for finding dense regions of data points. It is a centroid-based algorithm: it locates the center point of each group/class by iteratively updating candidate center points to the mean of the points within a sliding window, then removes near-duplicate windows among the candidates to produce the final set of center points and their corresponding groups. The specific steps are: 1. Determine the radius r of the sliding window, and start sliding a circular window of radius r from a randomly selected center point C. Mean shift resembles a hill-climbing algorithm, moving toward denser regions at each iteration until convergence. 2. Each time the window slides to a new region, compute the mean of the points inside it as the new center point; the number of points inside the window is the density within the window. At each move, the window shifts toward the denser region. 3. Keep moving the window and recomputing its center and density until no direction admits more points into the window, i.e. until the density within the circle no longer increases. 4. Steps 1 to 3 produce many sliding windows; when windows overlap, keep the window containing the most points, then cluster the data points according to the windows they fall in.
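A mean-shift sketch with scikit-learn; estimate_bandwidth stands in for choosing the sliding-window radius r by hand.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

features = np.random.rand(200, 2)  # placeholder motion-feature vectors
bandwidth = estimate_bandwidth(features, quantile=0.2)  # window radius r
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit(features)
labels = ms.labels_                # window each point was clustered into
centers = ms.cluster_centers_      # surviving window centers
```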
Density-based clustering algorithm. The specific steps are: 1. Determine the radius r and minPoints. Starting from an arbitrary unvisited data point, check whether the circle of radius r centered on it contains at least minPoints points; if so, mark the point as a central (core) point, otherwise mark it as a noise point. 2. Repeat step 1; if a noise point falls within the radius-r circle of some central point, re-mark it as an edge point, otherwise it remains a noise point. 3. Repeat until all points have been visited.
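The same steps in scikit-learn's DBSCAN; eps plays the role of the radius r and min_samples that of minPoints.

```python
import numpy as np
from sklearn.cluster import DBSCAN

features = np.random.rand(200, 2)  # placeholder motion-feature vectors
db = DBSCAN(eps=0.1, min_samples=5).fit(features)
labels = db.labels_                # -1 marks noise points
core_mask = np.zeros(len(features), dtype=bool)
core_mask[db.core_sample_indices_] = True  # central (core) points
# points with labels != -1 and core_mask == False are edge points
```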
Gaussian mixture model clustering algorithm. Data points are assumed to be Gaussian distributed; compared with K-Means' assumption that clusters are circular, the Gaussian (elliptical) assumption allows more possibilities. Two parameters describe the shape of each cluster: the mean and the standard deviation. Because there is a standard deviation in both the x and y directions, a cluster can take the form of an ellipse of any orientation, and each Gaussian distribution is assigned to a single cluster. Clustering therefore requires finding the mean and standard deviation for each cluster, using an optimization algorithm called expectation-maximization (EM). The specific steps are: 1. Choose the number of clusters (as in K-Means) and randomly initialize the Gaussian distribution parameters (mean and variance) of each cluster; one may also inspect the data first to give reasonably accurate initial values. 2. Given the Gaussian distribution of each cluster, compute the probability that each data point belongs to each cluster; the closer a point is to a Gaussian's center, the more likely it belongs to that cluster. 3. Based on these probabilities, recompute the Gaussian parameters so as to maximize the likelihood of the data points; the new parameters are computed using the points weighted by their membership probabilities. 4. Repeat steps 2 and 3 until the parameters change little between iterations. The advantages of Gaussian mixture model clustering are: (1) it uses both mean and standard deviation, so clusters can be elliptical rather than restricted to circles; K-Means is the special case in which a cluster becomes circular as the variance approaches 0 in all dimensions; (2) it uses probabilities, so a data point can belong to multiple clusters, e.g. point X may belong to cluster A with probability 20 percent and to cluster B with probability 80 percent; that is, Gaussian mixture model clustering supports mixed membership.
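A Gaussian-mixture sketch with scikit-learn, whose fit method runs the expectation-maximization loop described in steps 1 to 4; predict_proba exposes the mixed-membership probabilities from advantage (2).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

features = np.random.rand(200, 2)  # placeholder motion-feature vectors
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)  # EM
hard_labels = gmm.predict(features)        # most probable cluster
soft_labels = gmm.predict_proba(features)  # e.g. 20% cluster A, 80% cluster B
```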
Hierarchical clustering algorithms fall into two categories: top-down and bottom-up. Agglomerative hierarchical clustering is a bottom-up algorithm: each data point is first treated as a single cluster, and the distances between all clusters are then computed so that the closest clusters can be merged, until all points belong to one cluster. The invention gives an example of agglomerative hierarchical clustering with the following steps: 1. Treat each data point as a single cluster and choose a metric for the distance between two clusters, e.g. average linkage, which defines the distance between two clusters as the average distance between data points in the first cluster and data points in the second. 2. In each iteration, merge the two clusters with the smallest average linkage into one cluster. 3. Repeat step 2 until all data points are merged into one cluster, then choose how many clusters to keep.
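An agglomerative sketch with scikit-learn; linkage="average" is the average-linkage criterion from step 1.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

features = np.random.rand(200, 2)   # placeholder motion-feature vectors
agg = AgglomerativeClustering(n_clusters=3, linkage="average")
labels = agg.fit_predict(features)  # merge until 3 clusters remain
```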
According to the invention, the monitoring video input is processed in two parallel paths: one is processed according to a predefined pattern, and the other is processed after clustering according to motion patterns; the regions defined by the two paths of results are merged and analyzed before the result is output. In this way, abnormal events beyond those defined in advance can be captured automatically and in real time, various unexpected events that need to be monitored can be identified efficiently, event details can be captured automatically for defined monitoring scenes, and the cost of manual retrieval and monitoring is reduced.
Example two: without any pre-identified results.
The application scenario is monitoring within an independent space. The monitored person ordinarily walks, sits and rests in this space. If the person unexpectedly stays in front of the window for a long time, this behavior is not easily defined in advance, yet it needs to be recognized. In this case, clustering against the historical motion patterns extracts the motion pattern with the fewest samples, i.e. the pattern of dwelling long at the window, and the corresponding region is enlarged for analysis.
Example three: clustering yields no new result.
In a large campus monitoring scene, suppose multi-person gathering is predefined as an event to be identified, and a gathering is the only event occurring in the campus. The system identifies it as a predefined pattern; although clustering yields no new result, the system still automatically enlarges the relevant region and can further judge how to handle the event from the recognized details, such as whether there are signs of fighting and whether weapons are being used.
Example four: clustering yields a new result alongside a predefined pattern.
In a large campus monitoring scene, suppose multi-person gathering is predefined as an event to be identified, but staying by the lake is not a defined scenario. When people gather beside the lake while others walk normally, the system finds through clustering that a new class, lakeside gathering (a new position), has appeared and outputs monitoring region I around it; recognition of the gathering event also outputs monitoring region II. The larger of monitoring regions I and II is output as the monitoring result; the system automatically enlarges the result region and can further judge how to handle the event from the recognized details, such as whether there are signs of fighting and whether weapons are being used.
The invention uses two parallel branches of machine learning (supervised and unsupervised learning) to process the same data stream simultaneously. If the two results overlap, the overlapping region is taken as the output result; if not, the results are output separately. Various events, whether defined in advance or not, can thus be identified effectively, the cost of manual retrieval and monitoring is reduced, and event details can be captured automatically and conveniently.
It should be noted that the above list is only a specific embodiment of the invention. The invention is clearly not limited to the embodiments described above; many variations are possible, and all variations that a person skilled in the art can derive or conceive directly from the disclosure of the invention are considered to fall within the scope of the invention.

Claims (7)

1. The self-adaptive monitoring system is characterized by comprising a monitoring video input module (1), a clustering module (2), a tracking module (3), an identification module (4) and a merging analysis module (5), wherein the identification module (4) and the tracking module (3) are connected with the monitoring video input module (1), the clustering module (2) is connected with the merging analysis module (5), the clustering module (2) is connected with the identification module (4) through the tracking module (3), and the identification module (4) is connected with the merging analysis module (5);
the merging analysis module (5) is used for merging and analyzing two paths of processing results of video images and outputting analysis results, if no new clustering result exists, the clustering process exits, otherwise, the region where the class object with the least number of samples is located is used as a result region I, the region where the monitored predefined mode result is located is used as a result region II, then the size of a preset threshold is defined by users, then whether the overlapping area of the two paths of result regions exceeds the preset threshold is judged through the merging analysis module (5), if the overlapping area exceeds the preset threshold, the region with the larger result region is selected as a common result region, then the common result and the remaining independent result regions are focused, amplified and continuously tracked to obtain event information, and the event information is an analysis result.
2. The adaptive monitoring method, based on the adaptive monitoring system of claim 1, is characterized by comprising the following steps:
firstly, a monitoring video input module (1) inputs a video image;
step two, performing two-way parallel processing on the video image: one path is processed according to a predefined mode, and the other path is processed after clustering according to a motion mode;
the step of processing after clustering according to the motion mode comprises the following steps: firstly, a tracking module (3) adopts a tracking algorithm to monitor and track a moving object according to a motion state, then clustering is carried out in the same category or a sub-category according to one or more motion characteristics through a clustering algorithm of a clustering module (2), and then whether a new clustering result exists is judged;
step three, combining and analyzing the processing results of the video images and outputting the analysis results;
the two paths of processing results of the video images are merged and analyzed, and the analysis result is output by the steps of: if no new clustering result exists, quitting, otherwise, taking the region where the class object with the minimum sample number is located as a first result region, taking the region where the monitored predefined mode result is located as a second result region, then customizing the size of a preset threshold, then judging whether the superposition area of the two result regions exceeds the preset threshold through a merging analysis module (5), if so, selecting the region with the large area of the result region as a common result region, then, carrying out focusing amplification on the common result and the rest independent result regions, and continuously tracking to obtain event information, wherein the event information is an analysis result;
the independent result area is a place where the first result area and the second result area do not coincide.
3. The adaptive monitoring method according to claim 2, wherein the step of processing according to a predefined pattern is: the video images are firstly identified through an identification module (4) so as to identify a plurality of objects, then the objects are classified according to a predefined classification method, and then the classification is carried out according to the requirement.
4. The adaptive monitoring method according to claim 3, wherein the predefined classification method comprises a sliding window and artificial feature extraction based method, a machine learning based method, and a method for automatically extracting hidden features in the input image by using a deep learning technique.
5. The adaptive monitoring method according to claim 2, wherein the motion characteristics include motion direction, motion speed, motion acceleration, motion position, and respective statistical parameters including maximum value, minimum value, mean value, variance, change rate, and first-order difference mean value, second-order difference mean value, normalized first-order difference mean value, normalized second-order difference mean value, change range, and the sum of squares of sequence differences.
6. The adaptive monitoring method according to claim 2, wherein the tracking algorithm comprises a continuous interframe difference method, a background difference method and an optical flow method.
7. The adaptive monitoring method according to claim 2, wherein the clustering algorithm comprises a K-Means clustering algorithm, a mean shift clustering algorithm, a density-based clustering algorithm, a gaussian mixture model clustering algorithm, a hierarchical clustering algorithm.

Priority Applications (1)

Application: CN202211076362.XA, filed 2022-09-05, priority date 2022-09-05 - Self-adaptive monitoring system and method

Publications (2)

CN115424207A (en), published 2022-12-02
CN115424207B (en), published 2023-04-14

Family

ID=84202267

Family Applications (1)

CN202211076362.XA (Active) - Self-adaptive monitoring system and method

Country Status (1)

CN: CN115424207B (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant