CN111931587A - Video anomaly detection method based on interpretable space-time self-encoder - Google Patents

Video anomaly detection method based on interpretable space-time self-encoder

Info

Publication number
CN111931587A
Authority
CN
China
Prior art keywords
interpretable
encoder
video
level
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010678292.XA
Other languages
Chinese (zh)
Other versions
CN111931587B (en)
Inventor
丰江帆
梁渝坤
熊伟
张莉
李皓辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aerospace Guosheng Technology Co ltd
Shenzhen Hongyue Enterprise Management Consulting Co ltd
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202010678292.XA
Publication of CN111931587A
Application granted
Publication of CN111931587B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a video anomaly detection method based on an interpretable spatio-temporal autoencoder, comprising the steps of: preprocessing a video; performing feature learning on the preprocessed data with a deep learning model based on the interpretable spatio-temporal autoencoder and acquiring a reconstructed video sequence; calculating a regularity score for the reconstructed video sequence; and comparing the calculated regularity score with a predefined threshold to judge whether an anomaly has occurred.

Description

Video anomaly detection method based on interpretable space-time self-encoder
Technical Field
The invention belongs to the technical field of video processing, relates to a method for detecting video anomalies, and particularly relates to a video anomaly detection method based on an interpretable spatio-temporal autoencoder.
Background
With the growing prevalence of video surveillance equipment and the increasing importance attached to security work, the demand for analyzing surveillance video, in particular for automatically detecting abnormal events or behaviors in the video, has become ever more urgent.
In recent years, many researchers have contributed to this field. For example, Xu et al. propose a deep model for abnormal event detection that uses a stacked autoencoder for feature learning and a linear classifier for event classification. Tian Wang et al. propose an algorithm for detecting abnormal events in video streams, based on a histogram of optical flow orientation descriptor and a one-class support vector machine (SVM) classifier, and demonstrate its effectiveness on a large amount of data. Mahmudul Hasan et al. address the problem of extracting valid motion features from long video sequences by learning a generative model of regular motion patterns, termed regularity. Yong Shean Chong et al. propose an effective video anomaly detection method applicable to the spatio-temporal structure of video anomaly detection, including crowded scenes.
The above methods solve the anomaly detection problem to some extent, but the prior art overlooks one issue: the internal logic of the anomaly detection process is left unexplained. Without making the feature learning process transparent, one cannot fully trust the accuracy of the results, nor take the detection results as the basis for final decisions. Interpretability of deep learning therefore needs to be combined with the anomaly detection method, which greatly improves the reliability of anomaly detection, strengthens the judgment results that serve as the basis for final decisions, and thus enhances the reliability and accuracy of the security system.
Disclosure of Invention
The invention aims to provide a video anomaly detection method, specifically a video anomaly detection method based on an interpretable spatio-temporal autoencoder.
To achieve this purpose, the invention provides the following technical solution:
a video anomaly detection method based on an interpretable spatio-temporal autoencoder, comprising the steps of:
preprocessing a video;
performing feature learning on the preprocessed data with a deep learning model based on the interpretable spatio-temporal autoencoder, and acquiring a reconstructed video sequence;
a step of calculating a regularity score for the reconstructed video sequence;
and comparing the calculated regularity score with a predefined threshold to judge whether an anomaly has occurred.
Preferably, the video anomaly detection method further comprises a step of visualizing the convolution kernels, wherein visualizing a convolution kernel comprises calculating the receptive fields of its neural activations and enlarging them to the image resolution.
Preferably, the deep learning based on the interpretable spatio-temporal autoencoder comprises processing the preprocessed video sequence successively through a spatial encoder, a temporal self-encoder, and a spatial decoder, wherein the spatial encoder consists of at least 2 interpretable convolutional layers, the temporal self-encoder consists of at least 3 interpretable convolutional LSTM layers, and the spatial decoder consists of at least 2 deconvolution layers.
Preferably, of the 2 interpretable convolutional layers of the spatial encoder, the first is set to 11 × 11 with a stride of 4 and contains 128 convolution kernels, and the second is set to 5 × 5 with a stride of 2 and contains 64 convolution kernels.
Preferably, of the 2 deconvolution layers of the spatial decoder, the first is set to 5 × 5 with a stride of 2 and contains 128 convolution kernels, and the second is set to 11 × 11 with a stride of 4 and contains 1 convolution kernel.
Preferably, the interpretable convolutional layer and/or interpretable convolutional LSTM layer includes at least one mask, and a particular mask is selected from the at least one mask to filter out noise activations.
Preferably, the particular mask is selected from the at least one mask by calculating the optimal activation position on the object part.
The method applies interpretable deep learning to video event anomaly detection: it directly interprets the network by visualizing the representations of the convolutional neural network and the convolution kernels in the convolutional layers, thereby improving the reliability of the anomaly detection results.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
To make the objects, technical solutions, and beneficial effects of the invention clearer, the following drawings are provided for explanation:
FIG. 1 is a block diagram of a conventional video anomaly detection method;
FIG. 2 is a block diagram of a video anomaly detection method according to the present invention;
FIG. 3 is a block diagram of an interpretable spatio-temporal autoencoder model according to the present invention;
fig. 4 is a block diagram of a method for implementing an interpretable convolutional layer according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the flow diagram 100 of a conventional video anomaly detection method comprises blocks 101 to 107. In block 101, the input video is preprocessed through a series of conventional operations such as decomposing the video into a sequence of frames, converting the frames to grayscale, and reducing dimensionality. The preprocessed video frames are then input into block 103 for feature learning, followed by calculating regularity scores in block 105 and detecting abnormal events in block 107.
Fig. 2 shows the flow chart 200 of the anomaly detection method based on the interpretable spatio-temporal autoencoder according to the invention. In block 201, the video data is preprocessed to convert the video into input acceptable to the interpretable spatio-temporal autoencoder model: the original video is decomposed into a frame-by-frame sequence, the frames are uniformly resized, for example to 224 × 224 pixels, and the images are converted to grayscale to reduce dimensionality. In block 203, a deep learning method based on the interpretable spatio-temporal autoencoder is introduced for feature learning, which includes obtaining a reconstructed video sequence through the spatio-temporal autoencoder and visualizing the semantics of the convolution kernels in the interpretable convolutional layers; the specific method is detailed in Fig. 3. Then, in optional block 205, after model training is complete, the semantics of a convolution kernel are visualized by computing the receptive fields of its neural activations and enlarging them to the image resolution. In block 207, the regularity score of each video frame is calculated, and finally an anomaly detection result is obtained in block 209.
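As an illustration of block 201, the following is a minimal preprocessing sketch in Python, assuming OpenCV and NumPy; the function name, the [0, 1] intensity scaling, and returning one array per video are illustrative choices rather than requirements of the patent.

```python
import cv2
import numpy as np

def preprocess(video_path, size=(224, 224)):
    """Decompose a video into grayscale 224x224 frames scaled to [0, 1]."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # reduce dimensions
        gray = cv2.resize(gray, size).astype(np.float32) / 255.0
        frames.append(gray)
    cap.release()
    return np.stack(frames)[..., np.newaxis]                  # (N, 224, 224, 1)
```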
As the comparison between Fig. 2 and Fig. 1 shows, the invention introduces a deep learning method based on the interpretable spatio-temporal autoencoder for feature learning and, optionally, visualizes the semantics of the convolution kernels in the interpretable convolutional layers.
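For the optional visualization of block 205, the sketch below takes a trained Keras model (for instance, the backbone sketched later alongside Fig. 3) and projects one convolution kernel's activations back to the 224 × 224 image resolution. Upsampling the activation map is used here as a simple stand-in for an exact receptive-field computation, and the layer and kernel indices are assumptions.

```python
import numpy as np
import tensorflow as tf

def visualize_kernel(model, clip, layer_index=1, kernel_index=0):
    """Project one convolution kernel's activations back to image resolution."""
    # Sub-model that exposes the chosen interpretable convolutional layer
    probe = tf.keras.Model(model.input, model.layers[layer_index].output)
    fmap = probe.predict(clip[np.newaxis])[0]            # (T, h, w, K)
    act = fmap[..., kernel_index]                        # the chosen kernel's map
    # Enlarge each frame's activation map to the 224x224 image resolution
    act = tf.image.resize(act[..., np.newaxis], (224, 224)).numpy()[..., 0]
    return act / (act.max() + 1e-8)                      # heatmap to overlay on frames
```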
Fig. 3 shows a block diagram of the interpretable spatio-temporal autoencoder model and its processing flow provided by the invention, i.e., the feature learning flow of block 203 in Fig. 2. In Fig. 3, the spatio-temporal autoencoder is divided into a spatial encoder (blocks 303-305), a temporal self-encoder (blocks 307-311), and a spatial decoder (blocks 313-315), which together perform feature learning, obtain the reconstructed video sequence, and allow the semantics of the convolution kernels in the interpretable layers to be visualized.
First, in block 301, a video sequence is input. The video sequence first passes through the spatial encoder, made up of at least two interpretable convolutional layers 303 and 305; an ordinary convolutional layer is turned into an interpretable one by adding a mask and setting a specific loss function for each convolution kernel, as detailed in Fig. 4. The first interpretable convolutional layer 303 may be sized 11 × 11 with a stride of 4 and contain 128 convolution kernels, and the second interpretable convolutional layer 305 may be sized 5 × 5 with a stride of 2 and contain 64 convolution kernels. After the interpretable convolutional layers have learned the spatial features of the video sequence, the encoded spatial structure is input into the temporal self-encoder.
Within the temporal self-encoder, the spatial feature maps are processed by three interpretable convolutional LSTM layers 307-311. Compared with the common fully-connected LSTM model, the three interpretable convolutional LSTM layers add convolution operations, so better spatial feature maps are obtained with fewer weights. Finally, the spatial feature maps pass through the spatial decoder, namely two deconvolution layers 313-315, to reconstruct the video sequence; the first deconvolution layer 313 is sized 5 × 5 with a stride of 2 and contains 128 convolution kernels, and the second layer 315 is sized 11 × 11 with a stride of 4 and contains only one convolution kernel. At this point, model training ends.
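A minimal Keras sketch of this backbone, under the layer sizes and strides given above, follows. The clip length T, the ConvLSTM filter counts and 3 × 3 kernels, and the choice of activations are assumptions; the interpretability masks and the filter loss of Fig. 4 are omitted here for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

T = 10  # assumed number of frames per input volume

model = models.Sequential([
    layers.Input(shape=(T, 224, 224, 1)),
    # Spatial encoder: two (interpretable) convolutional layers
    layers.TimeDistributed(layers.Conv2D(128, 11, strides=4, padding='same',
                                         activation='relu')),
    layers.TimeDistributed(layers.Conv2D(64, 5, strides=2, padding='same',
                                         activation='relu')),
    # Temporal self-encoder: three convolutional LSTM layers
    layers.ConvLSTM2D(64, 3, padding='same', return_sequences=True),
    layers.ConvLSTM2D(32, 3, padding='same', return_sequences=True),
    layers.ConvLSTM2D(64, 3, padding='same', return_sequences=True),
    # Spatial decoder: two deconvolution layers
    layers.TimeDistributed(layers.Conv2DTranspose(128, 5, strides=2,
                                                  padding='same', activation='relu')),
    layers.TimeDistributed(layers.Conv2DTranspose(1, 11, strides=4,
                                                  padding='same', activation='sigmoid')),
])
model.compile(optimizer='adam', loss='mse')  # frame-reconstruction objective
```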
The video frames to be tested are input into the trained model, and the resulting reconstructed video sequence is combined with the initial input frames to calculate the regularity score. The regularity score may be computed according to the prior art, specifically as follows:
calculating the reconstruction error of the intensity value I of the pixel at position (x, y) in the t-th frame of the video sequence:

e(x, y, t) = ‖I(x, y, t) − f_W(I(x, y, t))‖_2

where f_W is the learned model of the spatio-temporal autoencoder.
This formula gives the error of a single pixel; the reconstruction error of the current frame is obtained by summing the errors of all its pixels:

e(t) = ∑_(x,y) e(x, y, t)
Finally, the regularity score of each frame is calculated:

s(t) = 1 − (e(t) − min_t e(t)) / (max_t e(t) − min_t e(t))
where min_t e(t) denotes the reconstruction error of the frame with the smallest error in the test video, and max_t e(t) that of the frame with the largest error. This normalization limits the regularity score derived from the reconstruction error to the range of 0 to 1. A suitable threshold is then set: when the regularity score of the image sequence falls below the threshold, abnormal behavior has appeared in the video, and an alarm is sent to remind security personnel, preventing serious abnormal incidents.
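Reusing the names from the sketches above, the scoring and thresholding steps can be sketched as follows. The threshold value is an assumption to be tuned per deployment, and summing absolute per-pixel errors is one common reading of the per-pixel L2 formula for scalar grayscale intensities.

```python
import numpy as np

def regularity_scores(frames, reconstructed):
    """Per-frame regularity scores s(t) in [0, 1] from reconstruction errors."""
    # e(t): sum over all pixels of the per-pixel reconstruction error
    e = np.sum(np.abs(frames - reconstructed), axis=(1, 2, 3))
    # s(t) = 1 - (e(t) - min e) / (max e - min e)
    return 1.0 - (e - e.min()) / (e.max() - e.min())

# Usage: reconstruct a test clip and flag frames below the threshold
reconstructed = model.predict(clip[np.newaxis])[0]
scores = regularity_scores(clip, reconstructed)
threshold = 0.5                       # assumed; tuned on validation data
anomalous_frames = scores < threshold
```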
Referring to Fig. 4, the method of implementing the interpretable convolutional layer and the interpretable convolutional LSTM layer provided by the invention is described. As shown in the figure, compared with the ordinary convolutional layers and ordinary convolutional LSTM layers of the prior art, the interpretable layers of the invention add a loss on the feature map X of an ordinary convolution kernel f after the ReLU activation layer. X is an n × n matrix. Since the object part corresponding to the convolution kernel may appear at different locations in the image, n × n templates are provided for the convolution kernel f; each template is itself an n × n matrix describing the ideal activation profile of the feature map X, and the mask described above is the template selected from among them.
Therefore, as shown in Fig. 4, in the forward propagation of deep learning (left to right), an input image I first passes through the Conv layer and the ReLU activation layer; then, within the interpretable convolutional layer, a specific template is selected as the mask to filter noise activations out of the feature map X. The mask operation also supports gradient back-propagation during learning. In back-propagation, the loss for the convolution kernel (Loss for filter) pushes the kernel to represent a specific object part of one category and to stay silent on images of other categories. Each convolution kernel in the interpretable convolutional layer therefore represents a single object part rather than being activated jointly by multiple object parts of the image, which reduces the ambiguity of kernel activations and greatly improves the interpretability of the convolutional layer. The interpretable convolutional LSTM layer likewise contains many convolution kernels, so the same operation is applied to convert them into interpretable convolution kernels, increasing the interpretability of the convolutional LSTM.
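The template-and-mask mechanism can be illustrated with the following sketch in the spirit of interpretable CNNs: the template whose peak aligns with the strongest activation of X is selected as the mask. The template shape (decaying with L1 distance from its center) and the constants tau and beta are assumptions, not values fixed by the patent.

```python
import numpy as np

def build_template(n, center, tau=0.5, beta=4.0):
    """Ideal n x n activation map peaked at `center`, negative elsewhere."""
    i, j = np.indices((n, n))
    dist = np.abs(i - center[0]) + np.abs(j - center[1])   # L1 distance to peak
    return tau * np.maximum(1.0 - beta * dist / n, -1.0)

def apply_mask(X):
    """Select the template aligned with X's peak and filter noise activations."""
    center = np.unravel_index(np.argmax(X), X.shape)       # optimal activation position
    mask = build_template(X.shape[0], center)
    return np.maximum(X * mask, 0.0)                       # suppress off-part activations
```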
The loss is calculated for the convolution kernel as follows:
Loss_f = −MI(X; T) = −∑_T p(T) ∑_(x∈X) p(x|T) · log( p(x|T) / p(x) )
the loss value of the convolution kernel f can be expressed as mutual information of the feature map X and the template T. Where X represents the special mapping of the convolution kernel f after the ReLU operation, and T represents n X n +1 templates, each of which is a matrix of n X n that describes the ideal activation profile of the signature X. The prior probability p (t) is defined as a constant. p (X | T) is a conditional probability of chance expressed as the fitness between the feature map X and the template T.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic or optical disks, and the like.
The above embodiments further illustrate the objects, technical solutions, and advantages of the invention. It should be understood that they are merely preferred embodiments and are not intended to limit the invention; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (7)

1. A video anomaly detection method based on an interpretable spatio-temporal autoencoder, comprising the steps of:
preprocessing a video;
performing feature learning on the preprocessed data with a deep learning model based on the interpretable spatio-temporal autoencoder, and acquiring a reconstructed video sequence;
a step of calculating a regularity score for the reconstructed video sequence;
and comparing the calculated regularity score with a predefined threshold to judge whether an anomaly has occurred.
2. The method of claim 1, further characterized by a step of visualizing the convolution kernel, wherein visualizing the convolution kernel includes computing the receptive field of its neural activations and enlarging it to the image resolution.
3. The method of claim 1 or 2, further characterized in that the deep learning based on interpretable spatio-temporal self-coding comprises processing the pre-processed video sequence successively through a spatial encoder, a temporal self-encoder, and a spatial decoder, wherein the spatial encoder consists of at least 2 interpretable convolutional layers, the temporal self-encoder consists of at least 3 interpretable convolutional LSTM layers, and the spatial decoder consists of at least 2 deconvolution layers.
4. The method of claim 3, further characterized in that, of the 2 interpretable convolutional layers of the spatial encoder, the first is 11 × 11 with a stride of 4 and contains 128 convolution kernels, and the second is 5 × 5 with a stride of 2 and contains 64 convolution kernels.
5. A method as claimed in claim 3, further characterized in that, of the 2 deconvolution layers of the spatial decoder, the first is set to 5 × 5 with a stride of 2 and contains 128 convolution kernels, and the second is set to 11 × 11 with a stride of 4 and contains 1 convolution kernel.
6. A method as claimed in any preceding claim, further characterized in that the interpretable convolutional layer and/or interpretable convolutional LSTM layer contains at least one mask, and a particular mask is selected from the at least one mask to filter out noise activations.
7. A method as recited in claim 6, further characterized in that the particular mask is selected from the at least one mask by calculating the optimal activation position on the object part.
CN202010678292.XA 2020-07-15 2020-07-15 Video anomaly detection method based on interpretable space-time self-encoder Active CN111931587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010678292.XA CN111931587B (en) 2020-07-15 2020-07-15 Video anomaly detection method based on interpretable space-time self-encoder


Publications (2)

Publication Number Publication Date
CN111931587A true CN111931587A (en) 2020-11-13
CN111931587B CN111931587B (en) 2022-10-25

Family

ID=73312394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010678292.XA Active CN111931587B (en) 2020-07-15 2020-07-15 Video anomaly detection method based on interpretable space-time self-encoder

Country Status (1)

Country Link
CN (1) CN111931587B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190188212A1 (en) * 2016-07-27 2019-06-20 Anomalee Inc. Prioritized detection and classification of clusters of anomalous samples on high-dimensional continuous and mixed discrete/continuous feature spaces
CN107273880A (en) * 2017-07-31 2017-10-20 秦皇岛玥朋科技有限公司 A kind of multi-storied garage safety-protection system and method based on intelligent video monitoring
CN109670446A (en) * 2018-12-20 2019-04-23 泉州装备制造研究所 Anomaly detection method based on linear dynamic system and depth network
CN109615019A (en) * 2018-12-25 2019-04-12 吉林大学 Anomaly detection method based on space-time autocoder
WO2020142483A1 (en) * 2018-12-31 2020-07-09 Futurewei Technologies, Inc. Explicit address signaling in video coding
CN109902562A (en) * 2019-01-16 2019-06-18 重庆邮电大学 A kind of driver's exception attitude monitoring method based on intensified learning
CN109871799A (en) * 2019-02-02 2019-06-11 浙江万里学院 A kind of driver based on deep learning plays the detection method of mobile phone behavior
CN110889328A (en) * 2019-10-21 2020-03-17 大唐软件技术股份有限公司 Method, device, electronic equipment and storage medium for detecting road traffic condition
CN111008570A (en) * 2019-11-11 2020-04-14 电子科技大学 Video understanding method based on compression-excitation pseudo-three-dimensional network
CN111079539A (en) * 2019-11-19 2020-04-28 华南理工大学 Video abnormal behavior detection method based on abnormal tracking
CN111325347A (en) * 2020-02-19 2020-06-23 山东大学 Automatic danger early warning description generation method based on interpretable visual reasoning model
CN111401526A (en) * 2020-03-20 2020-07-10 厦门渊亭信息科技有限公司 Model-universal deep neural network representation visualization method and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JEFFERSON RYAN MEDEL et al.: "Anomaly Detection in Video Using Predictive Convolutional Long Short-Term Memory Networks", arXiv *
周培培 et al.: "Crowd Abnormal Behavior Detection and Localization in Video Surveillance", Acta Optica Sinica (《光学学报》) *
朱辉辉: "Research on Anomaly Detection Algorithms Based on Video Object Analysis in Surveillance Scenes", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117911930A (en) * 2024-03-15 2024-04-19 释普信息科技(上海)有限公司 Data security early warning method and device based on intelligent video monitoring
CN117911930B (en) * 2024-03-15 2024-06-04 释普信息科技(上海)有限公司 Data security early warning method and device based on intelligent video monitoring

Also Published As

Publication number Publication date
CN111931587B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN108805015B (en) Crowd abnormity detection method for weighted convolution self-coding long-short term memory network
CN108256562B (en) Salient target detection method and system based on weak supervision time-space cascade neural network
EP2377044B1 (en) Detecting anomalous events using a long-term memory in a video analysis system
CN111680614A (en) Abnormal behavior detection method based on video monitoring
CN107506692A (en) A kind of dense population based on deep learning counts and personnel's distribution estimation method
CN110232361B (en) Human behavior intention identification method and system based on three-dimensional residual dense network
CN111079539B (en) Video abnormal behavior detection method based on abnormal tracking
Chen et al. Adaptive convolution for object detection
CN111738054B (en) Behavior anomaly detection method based on space-time self-encoder network and space-time CNN
CN110930378B (en) Emphysema image processing method and system based on low data demand
CN116092179A (en) Improved Yolox fall detection system
Dhole et al. Anomaly detection using convolutional spatiotemporal autoencoder
CN111626134A (en) Dense crowd counting method, system and terminal based on hidden density distribution
Prawiro et al. Abnormal event detection in surveillance videos using two-stream decoder
CN115205604A (en) Improved YOLOv 5-based method for detecting wearing of safety protection product in chemical production process
CN111931587B (en) Video anomaly detection method based on interpretable space-time self-encoder
CN118155266A (en) Bank outlet monitoring and identifying method and device
CN113807203A (en) Hyperspectral anomaly detection method based on tensor decomposition network
Zhao et al. Layer-wise multi-defect detection for laser powder bed fusion using deep learning algorithm with visual explanation
CN112149596A (en) Abnormal behavior detection method, terminal device and storage medium
CN116611500A (en) Method and device for training neural network
CN115984568A (en) Target detection method in haze environment based on YOLOv3 network
CN111062408B (en) Fuzzy license plate image super-resolution reconstruction method based on deep learning
Fang et al. An attention-based U-Net network for anomaly detection in crowded scenes
Esan et al. Detection of Anomalous Behavioural Patterns In University Environment Using CNN-LSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240529

Address after: 100000 Room 601, 6th floor, building 5, Lianhua yuan, Haidian District, Beijing

Patentee after: Aerospace Guosheng Technology Co.,Ltd.

Country or region after: China

Address before: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee before: Shenzhen Hongyue Enterprise Management Consulting Co.,Ltd.

Country or region before: China

Effective date of registration: 20240528

Address after: 518000 1104, Building A, Zhiyun Industrial Park, No. 13, Huaxing Road, Henglang Community, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Hongyue Enterprise Management Consulting Co.,Ltd.

Country or region after: China

Address before: 400065 Chongwen Road, Nanshan Street, Nanan District, Chongqing

Patentee before: CHONGQING University OF POSTS AND TELECOMMUNICATIONS

Country or region before: China
