CN111126195A - Abnormal behavior analysis method based on scene attribute driving and time-space domain significance - Google Patents

Abnormal behavior analysis method based on scene attribute driving and time-space domain significance

Info

Publication number
CN111126195A
Authority
CN
China
Prior art keywords
scene
main body
model
subject
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911260652.8A
Other languages
Chinese (zh)
Other versions
CN111126195B (en)
Inventor
田二林
姚妮
孟颍辉
杨学冬
张永霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University of Light Industry
Original Assignee
Zhengzhou University of Light Industry
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University of Light Industry
Priority to CN201911260652.8A
Publication of CN111126195A
Application granted
Publication of CN111126195B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an abnormal behavior analysis method based on scene attribute driving and time-space domain significance. The method comprises: constructing a subject-object static situation response model; constructing a subject apparent saliency model; constructing a global-scene-oriented dynamic feature extraction model to extract the dynamic features of the global scene; constructing a subject motion saliency model; continuously detecting and tracking the subject target; judging the behavior state; and judging the intensity of the abnormal behavior by a conditional posterior probability prediction method. The invention considers the dependence relationship between subjects and objects from both the static and the dynamic situation, divides the description of the scene's static situation information into subject- and object-oriented salient expression features and subject-object category attribute features, and uses the scene attribute association as the latent state description of the driven target, thereby solving the problem that a static subject target cannot be judged abnormal.

Description

Abnormal behavior analysis method based on scene attribute driving and time-space domain significance
Technical Field
The invention relates to the technical field of new-generation information technology, in particular to an abnormal behavior analysis method based on scene attribute driving and time-space domain significance.
Background
Existing intelligent video monitoring systems can only judge dynamic and static changes or color differences in a scene with simple image processing algorithms, and offer little in the way of description, judgment and understanding of scene behavior. How to abstract and conceptualize the motion law, motion change and apparent difference of scene targets is therefore a problem that research on scene abnormal behavior urgently needs to solve.
At present, target behavior analysis methods based on video scene content fall mainly into two classes, scene behavior analysis based on deep learning and scene behavior analysis based on a visual attention mechanism, and both still have shortcomings. (1) Methods that use a deep neural network to map global scene features to behavior meanings ignore how behavior intention is generated and lack attention to local scene regions; meanwhile, the scale of the data samples and the fuzziness of the behavior labels increase the difficulty of network training. (2) Existing scene behavior analysis does not fully consider the dependency relationships between targets: current algorithms mainly use salient motion and apparent deformation to judge whether target behavior is abnormal, yet the association between targets with respect to category and apparent attributes is an important basis for forming scene dependency relationships and an important factor in expressing the scene situation. (3) Current scene behavior analysis fills the semantic gap of behavior expression mainly through scene feature extraction and state analysis; it should further consider the occurrence level and hazard degree of the target's abnormal state so as to bridge not only the semantic gap but also the intention gap. The invention therefore provides an abnormal behavior analysis method based on scene attribute driving and time-space domain significance to remedy these defects of the prior art.
Disclosure of Invention
In order to solve the above problems, the invention provides an abnormal behavior analysis method based on scene attribute driving and time-space domain significance. The method considers the dependence relationship between subjects and objects from both the static and the dynamic situation, divides the description of the scene's static situation information into subject- and object-oriented salient expression features and subject-object category attribute features, uses the scene attribute association as the latent state description of the driven target, and thereby solves the problem that a static subject target cannot be judged abnormal.
The abnormal behavior analysis method based on scene attribute driving and time-space domain significance provided by the invention comprises the following steps:
Step one: set the foreground and background targets in a typical social scene as the subject target and the object target respectively; analyze the category matching relationship between the subject and object targets based on their classification and labeling results to obtain a category matching degree; and construct a subject-object static situation response dictionary based on the category matching degree, which is used to evaluate the static situation of the typical social scene and yields the subject-object static situation response model.
Step two: to address the shortcomings of foreground target feature description in a typical social scene, perform feature mining on the apparent state attributes, namely the texture structure information of a subject target and the texture structure difference information between subjects; combine the result with the subject-object static situation response model to obtain a subject apparent saliency model, which expresses the apparent saliency features of the scene subjects.
Step three: on the basis of the subject-object static situation response model, mine the association information among global scene features, taking the mining result of the subject-object static situation response model as input; then combine a convolutional neural network model with a recurrent neural network model to construct a global-scene-oriented dynamic feature extraction model that extracts the dynamic features of the global scene.
Step four: extract features from both the time-space domain features and the shape features of the subject trajectory, and design and construct a subject motion saliency model.
Step five: combine the global-scene-oriented dynamic feature extraction model of step three with the subject motion saliency model of step four, and continuously detect and track the subject target.
Step six: on the basis of the subject apparent saliency model and the subject motion saliency model, establish an event-response-based scene behavior discrimination model for the subject saliency features of the spatial and temporal domains respectively, then design a reasonable abnormality decision discrimination function to judge the behavior state.
Step seven: according to the judgment result of the behavior state, classify the degree of the abnormal state based on probability prediction. After the region where abnormal behavior occurs is detected, establish an abnormal behavior grade judgment model of feature saliency strength for the feature response of the abnormal region: fuse the subject's apparent saliency feature, comprehensive situation matching degree feature and motion saliency feature, build an abnormal saliency bag-of-words model by proportional cascading, describe the feature distribution with the Fisher Vector method during feature coding, and finally judge the strength of the abnormal behavior with a conditional posterior probability prediction method.
A further improvement is that the recurrent neural network model in step three expresses the salient apparent features of each subject target with respect to the global time sequence, and the convolutional neural network model expresses the motion change features in the global scene.
A further improvement is that, when features are extracted from the time-space domain features of the subject trajectory in step four, an optical-flow-field moving-target detection algorithm tracks the dense sampling points of the subject region in the video to form a subject dense-sampling trajectory set; a deep convolutional network then obtains a spatial-domain deep convolutional feature map for each video frame; finally, the time-space domain features based on the subject-region sampling trajectories are constructed by combining the subject dense-sampling trajectory set, and a relation framework between global-scene macro features and local-subject micro features is established, realizing automatic mining of the subject spatial features.
A further improvement is that, when features are extracted from the shape features of the subject trajectory in step four, the shape-context feature distribution based on the subject dense-sampling trajectory set is first constructed, the motion-state difference between every pair of trajectories is judged with a dynamic programming algorithm, the trajectory motion-state difference histogram distribution is then established, and feature expression is performed for subject trajectories with large changes in appearance and speed.
A further improvement is that, when the subject target behavior is continuously detected and tracked in step five, the understanding mapping between video scene content and subject behavior state requires parsing two levels of semantics: feature-level semantic parsing from video scene content to subject behavior detection, and logic-level semantic parsing from subject behavior detection to behavior state understanding.
A further improvement is that, when the behavior state is judged in step six, if the abnormality degree of the detection interval is lower than a threshold, the normal-state behavior model is updated and the detection area is stepped forward by one minimum video cube unit; if the abnormality degree of the detection area is higher than the threshold, abnormal behavior is considered to have occurred in that area.
The beneficial effects of the invention are as follows. The method considers the dependence relationship between subjects and objects from both the static and the dynamic situation, divides the description of the scene's static situation information into subject- and object-oriented salient expression features and subject-object category attribute features, and uses the scene attribute association as the latent state description of the driven target, solving the problem that a static subject target cannot be judged abnormal. Combining the global situation mined by the deep learning model with the dense trajectory flow features of the subject region reduces the difficulty of feature detection and improves its reliability and accuracy. Classifying target behavior tendencies with an unsupervised classification method offers a new mode for situation-attribute-driven scene behavior analysis and judgment and achieves a strong discrimination effect.
Drawings
FIG. 1 is a schematic diagram of the framework of the method of the present invention.
FIG. 2 is a schematic flow chart of the method of the present invention.
FIG. 3 is a schematic diagram of the structure of the subject-object static situation response dictionary of the present invention.
FIG. 4 is a schematic diagram of the calculation of the comprehensive subject-object static situation matching degree of the present invention.
FIG. 5 is a schematic diagram of subject apparent saliency detection of the present invention.
FIG. 6 is a schematic diagram of the deep-learning-based global time-space domain dynamic feature mining of the present invention.
FIG. 7 is a schematic diagram of subject motion saliency feature mining of the present invention.
FIG. 8 is a schematic diagram of subject behavior tendency classification based on event feature response of the present invention.
FIG. 9 is a schematic diagram of subject abnormality degree classification based on probability prediction of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
According to fig. 1, 2, 3, 4, 5, 6, 7, 8, and 9, the present embodiment provides an abnormal behavior analysis method based on scene attribute driving and time-space domain saliency, including the following steps:
Step one: set the foreground and background targets in a typical social scene as the subject target and the object target respectively; analyze the category matching relationship between the subject and object targets based on their classification and labeling results to obtain a category matching degree; and construct a subject-object static situation response dictionary based on the category matching degree, which is used to evaluate the static situation of the typical social scene and yields the subject-object static situation response model.
A subject-subject and subject-object static situation response dictionary is created to express the comprehensive matching degree of scene subjects and objects with respect to category attributes, so as to mine the static scene situation. In this embodiment, when the subject-object static situation response dictionary shown in FIG. 3 is created, a category matching unit and a corresponding situation matching degree between every pair of target labels are first created. For example, dictionary unit a {subject: car; object: highway} has a better category dependency than dictionary unit b {subject: pedestrian; object: highway}, so the static situation matching degree of unit a is designed to be higher than that of unit b. The comprehensive static situation matching degree is then calculated by looking up the static situation response dictionary (as shown in FIG. 4). First, the statistical distribution of apparent features of subject and object targets in various scene types is constructed, and a category-attribute-based feature coding dictionary is formed with a visual bag-of-words model. Given the original video scene information as input, each subject-object unit of the scene is segmented with Facebook's open-source Detectron platform, and the segmented subject and object targets are labeled through the category feature coding dictionary. Finally, the situation matching degrees of all subjects and objects in the specific scene are looked up from the labeling information, and an effective mathematical model is established to express the comprehensive situation matching degree of the subject-object matching units in the specific scene.
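The dictionary lookup and averaging described above can be sketched as follows. All class names and degree values here are hypothetical illustrations; the patent publishes no concrete entries, and the "effective mathematical model" is reduced to a simple mean.

```python
# Hypothetical subject-object static situation response dictionary: each
# (subject class, object class) pair maps to a situation matching degree in [0, 1].
SITUATION_DICTIONARY = {
    ("car", "highway"): 0.9,         # unit a: subject fits the scene context
    ("pedestrian", "highway"): 0.1,  # unit b: subject-object mismatch
    ("pedestrian", "sidewalk"): 0.9,
}

def matching_degree(subject_cls, object_cls, default=0.5):
    """Look up the static situation matching degree of one subject-object pair."""
    return SITUATION_DICTIONARY.get((subject_cls, object_cls), default)

def comprehensive_matching_degree(pairs):
    """Average the pairwise degrees over every labelled subject-object unit,
    a simple stand-in for the comprehensive matching model in the text."""
    if not pairs:
        return 0.0
    return sum(matching_degree(s, o) for s, o in pairs) / len(pairs)
```

With these toy entries, a scene containing both a car and a pedestrian on a highway would receive an intermediate comprehensive degree, reflecting one well-matched and one poorly-matched unit.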
Step two: to address the shortcomings of foreground target feature description in a typical social scene, perform feature mining on the apparent state attributes, namely the texture structure information of a subject target and the texture structure difference information between subjects; combine the result with the subject-object static situation response model to obtain a subject apparent saliency model, which expresses the apparent saliency features of the scene subjects.
As shown in FIG. 5, region segmentation theory and image information descriptors are first combined to construct the subject apparent saliency features: context feature mining is performed on information such as the texture, color and shape of the target region, and the illumination intensity, color orientation and contrast of the subject target region are then described with mature image feature extraction techniques and a color probability model.
The concept of a Markov random field is then introduced to describe the apparent saliency degree of each subject. The apparent saliency of a subject is defined as the sum of the saliency conditional random fields of that individual subject acting on the global subjects and neighboring objects in the scene. The scene subject features are abstracted into a grid of random variables; the conditional distribution of each subject (grid point) with respect to the other global subjects (adjacent grid points) and the saliency fields of the neighborhood objects is considered, and the subject apparent saliency feature description is obtained by maximum likelihood estimation, providing the apparent saliency feature input for subsequent scene behavior prediction.
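The grid-of-random-variables idea can be illustrated with a deliberately simplified sketch: each grid point is scored by its mean feature contrast against its 4-neighbourhood. This stands in for the conditional-random-field formulation in the text; the actual model's field definition and maximum likelihood estimation are not reproduced here.

```python
import numpy as np

def apparent_saliency(features):
    """Score each subject (grid point) by its mean absolute feature contrast
    against its 4-neighbourhood. A simplified stand-in for the saliency
    conditional random fields described in step two."""
    h, w = features.shape
    sal = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            neigh = [features[y, x]
                     for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= y < h and 0 <= x < w]
            sal[i, j] = sum(abs(features[i, j] - v) for v in neigh) / len(neigh)
    return sal
```

A uniform feature grid yields zero saliency everywhere, while a single outlying subject receives the highest score, matching the intuition that apparent saliency measures deviation from the surrounding context.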
Step three: on the basis of the subject-object static situation response model, mine the association information among global scene features, taking the mining result of the subject-object static situation response model as input; then combine a convolutional neural network model with a recurrent neural network model to construct a global-scene-oriented dynamic feature extraction model that extracts the dynamic features of the global scene. The recurrent neural network model expresses the salient apparent features of each subject target on the global time sequence, and the convolutional neural network model expresses the motion change features in the global scene.
Step four: extract features from both the time-space domain features and the shape features of the subject trajectory, and design and construct a subject motion saliency model. When extracting the time-space domain features of the subject trajectory, an optical-flow-field moving-target detection algorithm tracks the dense sampling points of the subject region in the video to form a subject dense-sampling trajectory set; a deep convolutional network obtains a spatial-domain deep convolutional feature map for each video frame; the time-space domain features based on the subject-region sampling trajectories are then constructed by combining the subject dense-sampling trajectory set, and a relation framework between global-scene macro features and local-subject micro features is established, realizing automatic mining of the subject spatial features.
When extracting features from the shape of the subject trajectory, the shape-context feature distribution based on the subject dense-sampling trajectory set is first constructed, the motion-state difference between every pair of trajectories is judged with a dynamic programming algorithm, the trajectory motion-state difference histogram distribution is then established, and feature expression is performed for subject trajectories with large changes in appearance and speed.
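The pairwise dynamic-programming comparison can be sketched as a dynamic-time-warping alignment cost between two 1-D trajectory signals (for example, per-frame speeds). This is a plain illustration of the dynamic programming step, not the patent's exact cost function.

```python
def dtw_distance(a, b):
    """Dynamic-programming (DTW) alignment cost between two 1-D trajectory
    signals; a sketch of the pairwise motion-state comparison in step four."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

The resulting pairwise distances could then be binned into the trajectory motion-state difference histogram the text describes.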
Step five: combine the global-scene-oriented dynamic feature extraction model of step three with the subject motion saliency model of step four, and continuously detect and track the subject target. The understanding mapping between video scene content and subject behavior state requires parsing two levels of semantics: feature-level semantic parsing from video scene content to subject behavior detection, and logic-level semantic parsing from subject behavior detection to behavior state understanding.
Step six: on the basis of the subject apparent saliency model and the subject motion saliency model, establish an event-response-based scene behavior discrimination model for the subject saliency features of the spatial and temporal domains respectively, then design a reasonable abnormality decision discrimination function to judge the behavior state. When the abnormality degree of the detection interval is lower than a threshold, the normal-state behavior model is updated and the detection area is stepped forward by one minimum video cube unit; when the abnormality degree of the detection area is higher than the threshold, abnormal behavior is considered to have occurred in that area.
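The two-branch decision rule just described can be written down directly. The return labels are illustrative; the patent specifies only the threshold behavior, not an interface.

```python
def judge_region(anomaly_degree, threshold):
    """Step-six decision rule: below the threshold the normal behavior model
    is updated and the detection window steps forward by one minimum video
    cube unit; otherwise the region is flagged as abnormal."""
    if anomaly_degree < threshold:
        return "normal: update model, step one video cube"
    return "abnormal behavior detected"
```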
Step seven: according to the judgment result of the behavior state, classify the degree of the abnormal state based on probability prediction. After the region where abnormal behavior occurs is detected, establish an abnormal behavior grade judgment model of feature saliency strength for the feature response of the abnormal region: fuse the subject's apparent saliency feature, comprehensive situation matching degree feature and motion saliency feature, build an abnormal saliency bag-of-words model by proportional cascading, describe the feature distribution with the Fisher Vector method during feature coding, and finally judge the strength of the abnormal behavior with a conditional posterior probability prediction method.
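The conditional posterior probability prediction can be sketched with hypothetical 1-D Gaussian class-conditional likelihoods over a fused saliency score: Bayes' rule then yields a posterior over anomaly grades. The grade means, spreads and priors below are illustrative parameters, not the patent's trained model.

```python
import numpy as np

def grade_posterior(score, grade_means, grade_stds, priors):
    """Conditional posterior P(grade | fused saliency score) under assumed
    1-D Gaussian likelihoods; a sketch of the probability prediction in
    step seven."""
    grade_means = np.asarray(grade_means, dtype=float)
    grade_stds = np.asarray(grade_stds, dtype=float)
    priors = np.asarray(priors, dtype=float)
    # unnormalised Gaussian likelihood per grade
    likelihood = np.exp(-0.5 * ((score - grade_means) / grade_stds) ** 2) / grade_stds
    joint = likelihood * priors
    return joint / joint.sum()   # normalised posterior over anomaly grades
```

For instance, with three grades (mild, moderate, severe) centred at 0.2, 0.5 and 0.8, a fused score near 0.8 assigns most posterior mass to the severe grade.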
The method considers the dependence relationship between subjects and objects from both the static and the dynamic situation, divides the description of the scene's static situation information into subject- and object-oriented salient expression features and subject-object category attribute features, and uses the scene attribute association as the latent state description of the driven target, solving the problem that a static subject target cannot be judged abnormal. Combining the global situation mined by the deep learning model with the dense trajectory flow features of the subject region reduces the difficulty of feature detection and improves its reliability and accuracy. Classifying target behavior tendencies with an unsupervised classification method offers a new mode for situation-attribute-driven scene behavior analysis and judgment and achieves a strong discrimination effect.
The foregoing illustrates and describes the principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principle of the invention, and various changes and modifications may be made without departing from its spirit and scope. The scope of the invention is defined by the appended claims and their equivalents.

Claims (6)

1. An abnormal behavior analysis method based on scene attribute driving and time-space domain significance, characterized by comprising the following steps:
Step one: set the foreground and background targets in a typical social scene as the subject target and the object target respectively; analyze the category matching relationship between the subject and object targets based on their classification and labeling results to obtain a category matching degree; and construct a subject-object static situation response dictionary based on the category matching degree, which is used to evaluate the static situation of the typical social scene and yields the subject-object static situation response model.
Step two: to address the shortcomings of foreground target feature description in a typical social scene, perform feature mining on the apparent state attributes, namely the texture structure information of a subject target and the texture structure difference information between subjects; combine the result with the subject-object static situation response model to obtain a subject apparent saliency model, which expresses the apparent saliency features of the scene subjects.
Step three: on the basis of the subject-object static situation response model, mine the association information among global scene features, taking the mining result of the subject-object static situation response model as input; then combine a convolutional neural network model with a recurrent neural network model to construct a global-scene-oriented dynamic feature extraction model that extracts the dynamic features of the global scene.
Step four: extract features from both the time-space domain features and the shape features of the subject trajectory, and design and construct a subject motion saliency model.
Step five: combine the global-scene-oriented dynamic feature extraction model of step three with the subject motion saliency model of step four, and continuously detect and track the subject target.
Step six: on the basis of the subject apparent saliency model and the subject motion saliency model, establish an event-response-based scene behavior discrimination model for the subject saliency features of the spatial and temporal domains respectively, then design a reasonable abnormality decision discrimination function to judge the behavior state.
Step seven: according to the judgment result of the behavior state, classify the degree of the abnormal state based on probability prediction. After the region where abnormal behavior occurs is detected, establish an abnormal behavior grade judgment model of feature saliency strength for the feature response of the abnormal region: fuse the subject's apparent saliency feature, comprehensive situation matching degree feature and motion saliency feature, build an abnormal saliency bag-of-words model by proportional cascading, describe the feature distribution with the Fisher Vector method during feature coding, and finally judge the strength of the abnormal behavior with a conditional posterior probability prediction method.
2. The abnormal behavior analysis method based on scene attribute driving and time-space domain significance according to claim 1, characterized in that: the recurrent neural network model in step three expresses the salient apparent features of each subject target with respect to the global time sequence, and the convolutional neural network model expresses the motion change features in the global scene.
3. The abnormal behavior analysis method based on scene attribute driving and time-space domain significance according to claim 1, characterized in that: when features are extracted from the time-space domain features of the subject trajectory in step four, an optical-flow-field moving-target detection algorithm tracks the dense sampling points of the subject region in the video to form a subject dense-sampling trajectory set; a deep convolutional network then obtains a spatial-domain deep convolutional feature map for each video frame; finally, the time-space domain features based on the subject-region sampling trajectories are constructed by combining the subject dense-sampling trajectory set, and a relation framework between global-scene macro features and local-subject micro features is established, realizing automatic mining of the subject spatial features.
4. The abnormal behavior analysis method based on scene attribute driving and time-space domain significance according to claim 3, characterized in that: when extracting subject trajectory shape features in the fourth step, a shape-context feature distribution is constructed on the dense sampling trajectory set; a dynamic programming algorithm judges the motion-state difference between each pair of trajectories; a histogram distribution of trajectory motion-state differences is then established, and feature expression is performed for subject trajectories with large changes in appearance and speed.
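The pairwise motion-state comparison by dynamic programming in claim 4 can be sketched as a standard dynamic-time-warping alignment between trajectories, followed by a histogram over the pairwise differences. The bin count and Euclidean point distance here are illustrative choices, not taken from the patent.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming alignment cost between two trajectories (N, 2) and (M, 2)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def motion_difference_histogram(trajectories, bins=8):
    """Normalized histogram of pairwise DTW differences over a trajectory set."""
    d = [dtw_distance(trajectories[i], trajectories[j])
         for i in range(len(trajectories))
         for j in range(i + 1, len(trajectories))]
    hist, _ = np.histogram(d, bins=bins)
    return hist / max(hist.sum(), 1)
```

Identical trajectories align at zero cost, so trajectories whose appearance or speed diverges strongly populate the upper histogram bins.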
5. The abnormal behavior analysis method based on scene attribute driving and time-space domain significance according to claim 1, characterized in that: when the subject target behavior is continuously detected and tracked in the fifth step, the understanding mapping between video scene content and the subject behavior state requires parsing two levels of semantics, specifically: feature-level semantic parsing from video scene content to subject behavior detection, and logic-level semantic parsing from subject behavior detection to understanding of the behavior state.
6. The abnormal behavior analysis method based on scene attribute driving and time-space domain significance according to claim 1, characterized in that: when judging the behavior state in the sixth step, if the abnormality degree of the detection interval is below a threshold, the normal-state behavior model is updated and the detection region is stepped forward by one minimal video-cube unit; if the abnormality degree of the detection region is above the threshold, abnormal behavior is considered to have occurred in that region.
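The threshold decision and normal-model update of claim 6 can be sketched as a scan over per-interval abnormality scores: each below-threshold interval refreshes a running normal-behaviour model and the window steps by one minimal video-cube unit, while each above-threshold interval is flagged as abnormal. The exponential-moving-average update and the scalar scores are assumptions for illustration; the claim does not fix the model's form.

```python
def scan_anomalies(scores, threshold, model_mean, lr=0.1):
    """Scan detection intervals one minimal video-cube unit at a time.

    scores:     abnormality degree per detection interval (iterable of floats).
    threshold:  decision boundary between normal and abnormal.
    model_mean: running summary of the normal-state behavior model.
    Returns (flags, updated model_mean); flags[i] is True where abnormal."""
    flags = []
    for s in scores:
        if s < threshold:
            # normal interval: fold it into the normal-state behavior model
            model_mean = (1 - lr) * model_mean + lr * s
            flags.append(False)
        else:
            # abnormal interval: flag it, leave the normal model untouched
            flags.append(True)
    return flags, model_mean
```

Updating the model only on normal intervals keeps a single abnormal burst from contaminating the baseline that later intervals are judged against.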
CN201911260652.8A 2019-12-10 2019-12-10 Abnormal behavior analysis method based on scene attribute driving and time-space domain significance Active CN111126195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911260652.8A CN111126195B (en) 2019-12-10 2019-12-10 Abnormal behavior analysis method based on scene attribute driving and time-space domain significance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911260652.8A CN111126195B (en) 2019-12-10 2019-12-10 Abnormal behavior analysis method based on scene attribute driving and time-space domain significance

Publications (2)

Publication Number Publication Date
CN111126195A true CN111126195A (en) 2020-05-08
CN111126195B CN111126195B (en) 2023-03-14

Family

ID=70498147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911260652.8A Active CN111126195B (en) 2019-12-10 2019-12-10 Abnormal behavior analysis method based on scene attribute driving and time-space domain significance

Country Status (1)

Country Link
CN (1) CN111126195B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120237081A1 (en) * 2011-03-16 2012-09-20 International Business Machines Corporation Anomalous pattern discovery
CN107563345A (en) * 2017-09-19 2018-01-09 桂林安维科技有限公司 A kind of human body behavior analysis method based on time and space significance region detection
CN110287941A (en) * 2019-07-03 2019-09-27 哈尔滨工业大学 A kind of thorough perception and dynamic understanding method based on concept learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen Yunbiao et al.: "Research on Object Detection Technology Based on a Scene-Information Attention Model", Information & Computer (Theoretical Edition) *
Lu Tianran et al.: "Human Action Recognition Based on Saliency Detection and Dense Trajectories", Computer Engineering and Applications *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549591A (en) * 2022-04-27 2022-05-27 南京甄视智能科技有限公司 Method and device for detecting and tracking time-space domain behaviors, storage medium and equipment
CN114549591B (en) * 2022-04-27 2022-07-08 南京甄视智能科技有限公司 Method and device for detecting and tracking time-space domain behaviors, storage medium and equipment

Also Published As

Publication number Publication date
CN111126195B (en) 2023-03-14

Similar Documents

Publication Publication Date Title
Zhou et al. Attention-driven loss for anomaly detection in video surveillance
Murugan et al. Region-based scalable smart system for anomaly detection in pedestrian walkways
Coşar et al. Toward abnormal trajectory and event detection in video surveillance
US8374393B2 (en) Foreground object tracking
US8218819B2 (en) Foreground object detection in a video surveillance system
US20140270489A1 (en) Learned mid-level representation for contour and object detection
CN109919106B (en) Progressive target fine recognition and description method
Kazi Tani et al. Events detection using a video-surveillance ontology and a rule-based approach
CN110414367B (en) Time sequence behavior detection method based on GAN and SSN
Lin et al. Saliency detection via multi-scale global cues
Kim et al. Text detection with deep neural network system based on overlapped labels and a hierarchical segmentation of feature maps
CN111126195B (en) Abnormal behavior analysis method based on scene attribute driving and time-space domain significance
Wang et al. Bilayer sparse topic model for scene analysis in imbalanced surveillance videos
Lacabex et al. Lightweight tracking-by-detection system for multiple pedestrian targets
Wang et al. Video-based vehicle detection approach with data-driven adaptive neuro-fuzzy networks
CN116524410A (en) Deep learning fusion scene target detection method based on Gaussian mixture model
Gao et al. A combined method for multi-class image semantic segmentation
CN112232124A (en) Crowd situation analysis method, video processing device and device with storage function
Ma Research on intelligent evaluation system of sports training based on video image acquisition and scene semantics
Fu et al. Density-Aware U-Net for Unstructured Environment Dust Segmentation
Rajesh et al. Computer vision application: vehicle counting and classification system from real time videos
Indu et al. Real-time Classification and Counting of Vehicles from CCTV Videos for Traffic Surveillance Applications
Li Video-Based Object Detection in Security Monitoring System
Zhou et al. Weakly supervised salient object detection via double object proposals guidance
Shivakumara et al. A new deep CNN for 3D text localization in the wild through shadow removal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant