CN114581836A - Abnormal behavior detection method, device, equipment and medium - Google Patents


Info

Publication number
CN114581836A
CN114581836A
Authority
CN
China
Prior art keywords
abnormal, video, loss function, target, video segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210251079.XA
Other languages
Chinese (zh)
Inventor
丁鑫煜
翁晓俊
徐嘉禛
曾琳奕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210251079.XA priority Critical patent/CN114581836A/en
Publication of CN114581836A publication Critical patent/CN114581836A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods


Abstract

The disclosure provides an abnormal behavior detection method, which can be applied to the technical fields of artificial intelligence and financial technology. The abnormal behavior detection method comprises the following steps: acquiring a target surveillance video to be detected, wherein the target surveillance video comprises a plurality of video segments; extracting behavior features of the video segments from the target surveillance video; inputting the behavior features of the video segments into an abnormal behavior detection model, and outputting abnormal values of the video segments, wherein the abnormal behavior detection model is obtained by training based on a target loss function, and the target loss function is constructed from a plurality of abnormal value parameters; and obtaining a detection result for the target surveillance video based on the abnormal values of the plurality of video segments. The disclosure also provides an abnormal behavior detection apparatus, a device, a storage medium, and a program product.

Description

Abnormal behavior detection method, device, equipment and medium
Technical Field
The present disclosure relates to the technical field of artificial intelligence and financial technology, and in particular, to a method, apparatus, device, medium, and program product for detecting abnormal behavior.
Background
In the related art, abnormal behavior can be identified by analyzing video. When abnormal behavior is detected, an alert can be raised immediately, and security personnel can be notified to intervene and handle the abnormal situation in time, reducing the possibility that the abnormal event worsens or escalates.
In the process of implementing the present disclosure, it was found that abnormal behavior detection still suffers from problems such as a complicated detection process and low detection accuracy.
Disclosure of Invention
In view of the above, the present disclosure provides an abnormal behavior detection method, an abnormal behavior detection apparatus, a device, a medium, and a program product.
According to a first aspect of the present disclosure, there is provided an abnormal behavior detection method, including:
acquiring a target monitoring video to be detected, wherein the target monitoring video comprises a plurality of video segments;
extracting behavior characteristics of video clips from a target monitoring video;
inputting the behavior characteristics of the video clips into an abnormal behavior detection model, and outputting abnormal values of the video clips; the abnormal behavior detection model is obtained based on target loss function training, and the target loss function is obtained according to a plurality of abnormal value parameters;
and obtaining a detection result of the target monitoring video based on the abnormal values of the plurality of video segments.
According to an embodiment of the present disclosure, the abnormal value parameters include: a maximum abnormal value parameter of the normal video segments, a first order abnormal value parameter of the abnormal video segments, a second order abnormal value parameter of the abnormal video segments, a third order abnormal value parameter of the abnormal video segments, and a minimum abnormal value parameter of the abnormal video segments.
According to the embodiment of the disclosure, the constructing of the target loss function according to the plurality of abnormal value parameters includes:
constructing a first sub-loss function according to the maximum abnormal value parameter of the normal video segment and the first sequence abnormal value parameter of the abnormal video segment;
constructing a second sub-loss function according to the first order abnormal value parameter of the abnormal video segment and the minimum abnormal value parameter of the abnormal video segment;
constructing a third sub-loss function according to the maximum abnormal value parameter of the normal video segment and the second order abnormal value parameter of the abnormal video segment;
constructing a fourth sub-loss function according to the maximum abnormal value parameter of the normal video segment and the third order abnormal value parameter of the abnormal video segment;
and combining the first sub-loss function, the second sub-loss function, the third sub-loss function and the fourth sub-loss function to construct and obtain a target loss function.
According to an embodiment of the present disclosure, the constructing of the target loss function according to the plurality of outlier parameters further includes:
constructing a sparsity constraint loss function according to the abnormal value parameters of the normal video clips;
constructing a time smoothness constraint loss function according to the difference value of the abnormal value parameter of the ith abnormal video segment and the abnormal value parameter of the (i + 1) th abnormal video segment, wherein the time sequences of the ith abnormal video segment and the (i + 1) th abnormal video segment are adjacent, and i is a positive integer;
and combining the first sub-loss function, the second sub-loss function, the third sub-loss function, the fourth sub-loss function, the sparsity constraint loss function and the time smoothness constraint loss function to construct a target loss function.
According to the embodiment of the disclosure, the extracting of the behavior feature of the video clip from the target monitoring video comprises:
and extracting the behavior characteristics in the video clips in the monitoring video by utilizing a deep convolutional neural network with a space-time three-dimensional kernel.
According to the embodiment of the present disclosure, before acquiring a target surveillance video to be detected, the method further includes:
acquiring an initial monitoring video to be detected;
and dividing the initial monitoring video according to the segments to obtain the target monitoring video.
According to the embodiment of the disclosure, the training of the abnormal behavior detection model based on the target loss function includes:
acquiring a training data set, wherein the training data set comprises a plurality of video clip samples, and at least one of the video clip samples comprises abnormal behaviors;
extracting behavior characteristics of the video segment samples from the training data set;
inputting the behavior characteristics of the video segment samples into a classifier, and outputting abnormal values of the video segment samples;
and adjusting parameters of the classifier based on the abnormal values and the target loss function, and taking the classifier after parameter adjustment as an abnormal behavior detection model.
A second aspect of the present disclosure provides an abnormal behavior detection apparatus, including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a target monitoring video to be detected, and the target monitoring video comprises a plurality of video segments;
the characteristic extraction module is used for extracting the behavior characteristics of the video clips from the target monitoring video;
the classification module is used for inputting the behavior characteristics of the video clips into the abnormal behavior detection model and outputting abnormal values of the video clips; the abnormal behavior detection model is obtained based on target loss function training, and the target loss function is obtained according to a plurality of abnormal value parameters; and
and the result determining module is used for obtaining the detection result of the target monitoring video based on the abnormal values of the plurality of video segments.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-described abnormal behavior detection method.
A fourth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-mentioned abnormal behavior detection method.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the above-described abnormal behavior detection method.
According to the embodiment of the present disclosure, after the behavior features of the video segments are extracted, they are analyzed by the abnormal behavior detection model, which outputs an abnormal value for each video segment, and the detection result for the video is obtained based on the abnormal values of the plurality of video segments. By treating abnormal behavior detection as a binary classification problem, i.e., detecting whether abnormal behavior occurs in a section of video, the complexity of the detection process can be reduced. According to the embodiment of the present disclosure, the target loss function is constructed from a plurality of abnormal value parameters, and the abnormal behavior detection model trained with this target loss function has high detection accuracy. The embodiment of the present disclosure can detect an abnormal event in time when it occurs, thereby effectively reducing the economic loss caused by the abnormal event.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario diagram of an abnormal behavior detection method, apparatus, device, medium and program product according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an abnormal behavior detection method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method for constructing an objective loss function from a plurality of outlier parameters, according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a structural schematic of a deep convolutional neural network with spatiotemporal three-dimensional kernels, in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow diagram of a method by which an abnormal behavior detection model is trained based on an objective loss function, according to an embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of the structure of an abnormal behavior detection apparatus according to an embodiment of the present disclosure; and
fig. 7 schematically shows a block diagram of an electronic device adapted to implement the abnormal behavior detection method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
In the technical scheme of the embodiment of the disclosure, before the personal information of the user is obtained or collected, the authorization or the consent of the user is obtained.
In the process of implementing the present disclosure, it was found that abnormal behavior detection mainly targets behaviors such as falling and collision. In one approach, a Surendra moving-object detection algorithm combining inter-frame and background differencing is applied to detect moving objects in an ATM self-service hall and extract foreground objects; the moving objects are then tracked and located with a Kalman-filtered CamShift tracking algorithm; feature information of the moving object in the current frame is then extracted, and a behavior detector is built with a support vector machine (SVM) for classification. The modeling process is complex, effective motion feature information can only be extracted through multiple steps, undefined abnormal behaviors cannot be effectively identified, and the approach lacks generalization ability. Moreover, the abnormal behaviors that may occur need to be manually defined and trained in advance, so unexpected abnormal events cannot be handled well, which is a notable limitation. Abnormal behavior detection therefore still suffers from problems such as a complicated detection process and low detection accuracy.
An embodiment of the present disclosure provides an abnormal behavior detection method, including: acquiring a target surveillance video to be detected, wherein the target surveillance video comprises a plurality of video segments; extracting behavior features of the video segments from the target surveillance video; inputting the behavior features of the video segments into an abnormal behavior detection model, and outputting abnormal values of the video segments, wherein the abnormal behavior detection model is obtained by training based on a target loss function, and the target loss function is constructed from a plurality of abnormal value parameters; and obtaining a detection result for the target surveillance video based on the abnormal values of the plurality of video segments.
Fig. 1 schematically illustrates an application scenario diagram of an abnormal behavior detection method, apparatus, device, medium, and program product according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the abnormal behavior detection method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the abnormal behavior detection apparatus provided by the embodiment of the present disclosure may be generally disposed in the server 105. The abnormal behavior detection method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the abnormal behavior detection apparatus provided in the embodiment of the present disclosure may also be disposed in a server or a server cluster that is different from the server 105 and can communicate with the terminal devices 101, 102, and 103 and/or the server 105.
The abnormal behavior detection method provided by the embodiment of the present disclosure may also be executed by the terminal devices 101, 102, and 103. Accordingly, the abnormal behavior detection apparatus provided in the embodiments of the present disclosure may also be generally disposed in the terminal devices 101, 102, and 103. The abnormal behavior detection method provided by the embodiment of the present disclosure may also be executed by other terminals different from the terminal devices 101, 102, and 103. Accordingly, the abnormal behavior detection apparatus provided in the embodiments of the present disclosure may also be disposed in other terminals different from the terminal devices 101, 102, and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
The abnormal behavior detection method according to the embodiment of the present disclosure will be described in detail below with reference to fig. 2 to 5 based on the scenario described in fig. 1.
Fig. 2 schematically shows a flow chart of an abnormal behavior detection method according to an embodiment of the present disclosure.
As shown in fig. 2, the abnormal behavior detection method 200 of this embodiment includes operations S201 to S204.
In operation S201, a target surveillance video to be detected is acquired, where the target surveillance video includes a plurality of video segments.
According to the embodiment of the disclosure, the initial surveillance video to be detected can be divided into a plurality of video segments according to the preset number of frames, so as to obtain the target surveillance video to be detected. The preset number of frames may be 32 frames, for example.
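The segment-division step described above can be sketched as follows. The function name `split_into_segments` and the drop-remainder policy are illustrative assumptions, not details given in the patent; padding the final partial segment would be an equally valid choice.

```python
def split_into_segments(frames, segment_len=32):
    """Divide a sequence of frames into fixed-length video segments.

    frames: an ordered sequence of frames (any objects).
    segment_len: preset number of frames per segment (32 in the example above).
    Trailing frames that do not fill a whole segment are dropped here.
    """
    n_segments = len(frames) // segment_len
    return [frames[i * segment_len:(i + 1) * segment_len]
            for i in range(n_segments)]
```

For a 70-frame video and 32-frame segments, this yields two segments (frames 0-31 and 32-63), with the last 6 frames discarded.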
In operation S202, behavior features of a video clip are extracted from a target surveillance video.
According to the embodiment of the present disclosure, the behavior features of the video segments, for example the dynamic characteristics of a person in a video segment, can be extracted from the target surveillance video by a pre-trained 3D ResNet network.
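A 3D ResNet stacks many learned, multi-channel spatiotemporal convolution layers; as a minimal illustration of its core operation, the space-time three-dimensional kernel, here is a naive single-channel 'valid' 3D convolution in NumPy. This is a didactic sketch, not the patent's feature extractor.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (cross-correlation) of a single-channel
    T x H x W clip with a t x h x w spatiotemporal kernel. The kernel slides
    jointly over time and space, which is what lets 3D networks capture
    motion (behavior) features rather than per-frame appearance only."""
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out
```

Convolving a 4x5x5 clip of ones with a 3x3x3 kernel of ones gives a 2x3x3 output in which every entry is 27 (the kernel volume).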
In operation S203, inputting the behavior characteristics of the video segment into the abnormal behavior detection model, and outputting an abnormal value of the video segment; the abnormal behavior detection model is obtained by training based on a target loss function, and the target loss function is obtained by constructing according to a plurality of abnormal value parameters.
According to the embodiment of the present disclosure, the target loss function can be constructed from a plurality of abnormal value parameters. The network is trained in a weakly supervised manner using deep multiple instance learning (MIL): with only video-level labels and no frame-level or time-level labels, MIL can train the network through the target loss function, and the abnormal behavior detection model is obtained after training.
It should be noted that in multiple instance learning (MIL), the input is a series of labeled "bags", each containing multiple "instances". In the embodiments of the present disclosure, this can be seen as a binary classification problem, i.e., there are positive bags and negative bags. A positive bag contains at least one positive instance, while a negative bag contains only negative instances. One surveillance video can be regarded as one bag, and the plurality of video segments into which the surveillance video is divided are the instances. If a surveillance video contains one or more abnormal video segments, the surveillance video is regarded as a positive bag, and the abnormal video segments are positive instances. If a surveillance video contains only normal video segments, it is a negative bag, and the normal video segments are negative instances.
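The MIL labeling rule described above can be stated in a few lines of code; the helper names are illustrative, not taken from the patent.

```python
def bag_label(instance_labels):
    """MIL bag labeling: a bag (surveillance video) is positive (1) if at
    least one instance (video segment) is positive, otherwise negative (0)."""
    return 1 if any(instance_labels) else 0

def bag_score(instance_scores):
    """Under MIL ranking, a video's abnormal value is commonly taken as the
    highest abnormal value among its segments."""
    return max(instance_scores)
```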
In operation S204, a detection result of the target surveillance video is obtained based on the abnormal values of the plurality of video segments.
According to the embodiment of the present disclosure, the abnormal value of a video segment may be compared with a preset threshold. If the abnormal value is greater than the preset threshold, the video segment is an abnormal segment, and the detection result of the target surveillance video may be that an abnormal event has occurred. If the abnormal value is smaller than the preset threshold, the video segment is a normal segment, and the detection result of the target surveillance video may be that no abnormal event has occurred. The preset threshold may be determined manually according to the actual model.
For example, if the preset threshold is 4, a score below 4 may be judged a normal event and a score above 4 an abnormal event, and the detection result of the target surveillance video can be obtained accordingly.
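The thresholding rule above can be sketched as follows; `detect` is an illustrative name, and the threshold value is the example figure from this paragraph.

```python
def detect(segment_scores, threshold):
    """Per-segment decisions and a video-level result: a segment is abnormal
    if its abnormal value exceeds the threshold, and the video is flagged as
    containing an abnormal event if any of its segments is abnormal."""
    flags = [score > threshold for score in segment_scores]
    return flags, any(flags)
```

With threshold 4 and segment scores [2, 5, 3], only the second segment is abnormal, so the video as a whole is flagged.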
According to the embodiment of the present disclosure, after the behavior features of the video segments are extracted, they are analyzed by the abnormal behavior detection model, which outputs an abnormal value for each video segment, and the detection result for the video is obtained based on the abnormal values of the plurality of video segments. By treating abnormal behavior detection as a binary classification problem, i.e., detecting whether abnormal behavior occurs in a section of video, the complexity of the detection process can be reduced. According to the embodiment of the present disclosure, the target loss function is constructed from a plurality of abnormal value parameters, and the abnormal behavior detection model trained with this target loss function has high detection accuracy. The embodiment of the present disclosure can detect an abnormal event in time when it occurs, thereby effectively reducing the economic loss caused by the abnormal event.
FIG. 3 schematically illustrates a flow chart of a method for constructing a target loss function from a plurality of abnormal value parameters according to an embodiment of the present disclosure.
As shown in fig. 3, the method 300 for constructing the target loss function from the plurality of abnormal value parameters includes operations S301 to S305. It should be noted that the abnormal value parameters may include: a maximum abnormal value parameter of the normal video segments, a first order abnormal value parameter of the abnormal video segments, a second order abnormal value parameter of the abnormal video segments, a third order abnormal value parameter of the abnormal video segments, and a minimum abnormal value parameter of the abnormal video segments.
In operation S301, a first sub-loss function is constructed according to the maximum abnormal value parameter of the normal video segment and the first order abnormal value parameter of the abnormal video segment.
According to the embodiment of the present disclosure, the abnormal behavior detection problem can be treated as a regression problem, and thus all abnormal video segments can be assigned an abnormal value higher than that of the normal video segments. For example, a ranking function relationship as shown in the following equation (1) can be obtained:
S(I_a) > S(I_n)    (1)
where I_a and I_n denote an abnormal video segment and a normal video segment, respectively; S(I_a) and S(I_n) denote the abnormal value of the abnormal video segment and the abnormal value of the normal video segment, respectively; an abnormal value lies between 0 and 1.
It should be noted that, the following two false cases may exist in the abnormal behavior detection task: predicting a normal video segment as an abnormal video segment, namely a false abnormal video segment condition; and secondly, predicting the abnormal video segment as a normal video segment, namely, a false normal video segment condition. In order to reduce various error rates and complete the abnormal behavior detection task, the following ordering conditions can be proposed as shown in equations (2) and (3):
max_{i∈C_a} S(I_a^i) > max_{i∈C_n} S(I_n^i)    (2)
max_{i∈C_a} S(I_a^i) > min_{i∈C_a} S(I_a^i)    (3)
where C_a and C_n denote the abnormal video segment group and the normal video segment group, respectively. Formula (2) compares the video segments with the maximum abnormal value in the abnormal video segment group and in the normal video segment group: the top-ranked video segment in the abnormal video segment group is most likely to be a true abnormal video segment, while the top-ranked video segment in the normal video segment group is the one most likely to be a false abnormal video segment. Formula (3) compares the top-ranked and bottom-ranked video segments of the abnormal video segment group: the top-ranked video segment is most likely to be a true abnormal video segment, and the bottom-ranked video segment is likely to be a false abnormal video segment.
Therefore, the abnormal values of the video clips of the abnormal video clip group can be sorted in descending order as shown in formula (4):
M_1 ≥ M_2 ≥ … ≥ M_n    (4)
where M_n denotes the nth order abnormal value of the abnormal video segment group; n is a positive integer, and n also denotes the total number of abnormal video segments.
It should be noted that, because the video scale of the training data set is large and a video may contain multiple video segments with abnormal behavior, it is necessary to maximize the abnormal values of the other video segments of the abnormal video segment group in order to avoid the false situations described above; it is also necessary to minimize the abnormal values of all video segments of the normal video segment group.
Therefore, in order to avoid the above two false situations, the second-ranked and third-ranked abnormal values of the abnormal video segment group can be compared with the maximum abnormal value of the normal video segment group, for example as in formula (5) and formula (6), respectively:
M_2 > max_{i∈C_n} S(I_n^i)    (5)
M_3 > max_{i∈C_n} S(I_n^i)    (6)
according to an embodiment of the present disclosure, the first sub-loss function/1(Ca,Cn) For example, it can be expressed as shown in formula (7):
Figure BDA0003546888070000103
in operation S302, a second sub-loss function is constructed according to the first order outlier parameter of the abnormal video segment and the minimum outlier parameter of the abnormal video segment.
According to an embodiment of the present disclosure, the second sub-loss function l_2(C_a, C_n), for example, can be expressed as shown in formula (8):

l_2(C_a, C_n) = max(0, 1 − max_{v ∈ C_a} f(v) + min_{v ∈ C_a} f(v))    (8)
in operation S303, a third sub-loss function is constructed according to the maximum abnormal value parameter of the normal video segment and the second order abnormal value parameter of the abnormal video segment.
According to an embodiment of the present disclosure, the third sub-loss function l_3(C_a, C_n), for example, can be expressed as shown in formula (9):

l_3(C_a, C_n) = max(0, 1 − M_2 + max_{v ∈ C_n} f(v))    (9)
in operation S304, a fourth sub-loss function is constructed according to the maximum abnormal value parameter of the normal video segment and the third order abnormal value parameter of the abnormal video segment.
According to an embodiment of the present disclosure, the fourth sub-loss function l_4(C_a, C_n), for example, can be expressed as shown in formula (10):

l_4(C_a, C_n) = max(0, 1 − M_3 + max_{v ∈ C_n} f(v))    (10)
in operation S305, a target loss function is constructed by combining the first sub-loss function, the second sub-loss function, the third sub-loss function, and the fourth sub-loss function.
According to an embodiment of the present disclosure, the target loss function L(w) may be expressed as shown in equation (11):

L(w) = l(C_a, C_n) + μ_1‖w_1‖²    (11)

wherein l(C_a, C_n) = l_1(C_a, C_n) + l_2(C_a, C_n) + l_3(C_a, C_n) + l_4(C_a, C_n); w_1 denotes the weights of the abnormal behavior detection model; μ_1 denotes a hyper-parameter of the abnormal behavior detection model.
According to the embodiment of the disclosure, the two situations of false abnormal video segments and false normal video segments are fully considered when constructing the target loss function. By avoiding both false situations and keeping all true abnormal video segments as far as possible from all true normal video segments, the accuracy of the trained model can be improved.
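The four sub-losses of operations S301 to S305 can be sketched as unit-margin hinge terms over the ranked abnormal values; the margin value of 1 and all function names are assumptions for illustration, since the patent gives the exact formulas only as images.

```python
def hinge(x):
    # hinge(x) = max(0, x)
    return max(0.0, x)

def combined_sub_losses(abnormal_scores, normal_scores, margin=1.0):
    """Sketch of l(C_a, C_n) = l1 + l2 + l3 + l4 (operations S301-S305).
    The unit margin is an assumption; the pairings follow the text."""
    a = sorted(abnormal_scores, reverse=True)  # M_1 >= M_2 >= ... >= M_n
    n_max = max(normal_scores)                 # max abnormal value, normal group
    l1 = hinge(margin - a[0] + n_max)   # top abnormal vs. top normal
    l2 = hinge(margin - a[0] + a[-1])   # top vs. minimum within abnormal group
    l3 = hinge(margin - a[1] + n_max)   # second-ranked abnormal vs. top normal
    l4 = hinge(margin - a[2] + n_max)   # third-ranked abnormal vs. top normal
    return l1 + l2 + l3 + l4
```

Each term is zero only once the corresponding abnormal-value gap exceeds the margin, which pushes true abnormal segments away from all normal segments.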
According to another embodiment of the present disclosure, the method for constructing the target loss function from the plurality of abnormal value parameters may further take a sparsity constraint and a time smoothness constraint into account, and includes operations S1 to S3 in addition to operations S301 to S305 described above.
In operation S1, a sparsity constraint loss function is constructed according to the abnormal value parameters of the normal video segment.
According to an embodiment of the present disclosure, the sparsity constraint loss function may be expressed, for example, as shown in equation (12):

sparsity constraint = μ_2 Σ_{v ∈ C_n} f(v)    (12)

wherein μ_2 denotes a hyper-parameter of the abnormal behavior detection model.
In operation S2, a time smoothness constraint loss function is constructed according to the difference between the abnormal value parameter of the i-th abnormal video segment and that of the (i+1)-th abnormal video segment, where the i-th and (i+1)-th abnormal video segments are adjacent in time sequence and i is a positive integer.

According to an embodiment of the present disclosure, the time smoothness constraint loss function may be expressed, for example, as shown in equation (13):

temporal constraint = μ_3 Σ_{i=1}^{n−1} (f(v_i) − f(v_{i+1}))²    (13)

wherein μ_3 denotes a hyper-parameter of the abnormal behavior detection model.
In operation S3, the first sub-loss function, the second sub-loss function, the third sub-loss function, the fourth sub-loss function, the sparsity constraint loss function, and the time smoothness constraint loss function are combined to construct a target loss function.
According to an embodiment of the present disclosure, the target loss function L(w)′ may be expressed, for example, as shown in equation (14):

L(w)′ = l(C_a, C_n)′ + μ_4‖w_2‖²    (14)

wherein l(C_a, C_n)′ = l_1(C_a, C_n) + l_2(C_a, C_n) + l_3(C_a, C_n) + l_4(C_a, C_n) + sparsity constraint + temporal constraint; w_2 denotes the weights of the abnormal behavior detection model; μ_4 denotes a hyper-parameter of the abnormal behavior detection model.
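The full objective of formula (14) can be sketched by adding the sparsity term, the time smoothness term, and an L2 weight penalty to the combined ranking loss. Which score vector each constraint takes, the hyper-parameter values, and all names are assumptions based on the surrounding description, not the patent's exact formulation.

```python
def sparsity_term(scores, mu2):
    # sparsity constraint (operation S1): push abnormal values toward zero
    return mu2 * sum(scores)

def smoothness_term(scores, mu3):
    # time smoothness constraint (operation S2): penalize jumps between
    # the scores of temporally adjacent segments
    return mu3 * sum((scores[i] - scores[i + 1]) ** 2
                     for i in range(len(scores) - 1))

def target_loss(ranking_loss, normal_scores, abnormal_scores, weights,
                mu2=8e-3, mu3=8e-3, mu4=1e-3):
    """Sketch of L(w)' = l(C_a, C_n)' + mu4 * ||w_2||^2 (formula 14)."""
    l2_penalty = mu4 * sum(w * w for w in weights)
    return (ranking_loss
            + sparsity_term(normal_scores, mu2)
            + smoothness_term(abnormal_scores, mu3)
            + l2_penalty)
```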
According to the embodiment of the disclosure, in addition to the two situations of false abnormal video segments and false normal video segments, the time smoothness and sparsity of the abnormal values of the video segments are also considered when constructing the target loss function. The trained model therefore has high accuracy, which at least partially solves the problem of low detection accuracy of abnormal behavior detection models in the prior art.
According to the embodiment of the present disclosure, before acquiring a target surveillance video to be detected, the method further includes:
acquiring an initial monitoring video to be detected;
and dividing the initial monitoring video according to the segments to obtain the target monitoring video.
According to the embodiment of the disclosure, the initial monitoring video can be obtained through the video obtaining device, and then the initial monitoring video is divided into a plurality of video segments according to the time period or the preset video frame number, so as to obtain the target monitoring video.
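The segmentation step above can be sketched as follows; the fixed segment length and function name are assumptions, illustrating division by a preset number of video frames.

```python
def split_into_segments(num_frames, seg_len):
    """Divide an initial surveillance video of num_frames frames into
    consecutive segments of seg_len frames each; a shorter final segment
    is kept so that no frames are dropped."""
    return [(start, min(start + seg_len, num_frames))
            for start in range(0, num_frames, seg_len)]

# e.g. a 100-frame video cut into 32-frame segments:
segments = split_into_segments(100, 32)
```

Each `(start, end)` pair is a frame range of the target surveillance video that is then fed to feature extraction.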
FIG. 4 schematically illustrates a structural diagram of a deep convolutional neural network with spatiotemporal three-dimensional kernels, in accordance with an embodiment of the present disclosure.
According to the embodiment of the disclosure, the extracting of the behavior feature of the video clip from the target monitoring video comprises:
and extracting the behavior characteristics in the video clips in the monitoring video by utilizing a deep convolutional neural network with a space-time three-dimensional kernel.
The method for extracting the behavior features of the video segments from the target surveillance video according to this embodiment can utilize a deep convolutional neural network with a spatio-temporal three-dimensional kernel (a 3D ResNet network), which has a residual structure compared with a plain 3D CNN network. When a traditional 3D CNN convolves a picture many times, image features from the earlier layers are lost in the deeper layers, and simply increasing the network depth of the 3D CNN causes model degradation as the complexity of the data increases.
As shown in fig. 4, the residual network also uses the idea of cross-layer connections from highway networks, with the difference that the residual term originally carries a weight, whereas the residual network uses an identity mapping instead. The activation function ReLU serves as a pre-activation for the weight layers. Assuming the network input is x and the expected output is H(x), i.e. the expected underlying mapping, the shortcut connections in fig. 4 pass x directly to the output as an initial result, so the output is H(x) = F(x) + x; when F(x) = 0, H(x) = x, which is the identity mapping. The residual network thus no longer sets the learning target to the complete output but to the residual between the target value H(x) and the input x, so subsequent training drives the residual toward 0, and the accuracy does not degrade even as the network is deepened. Compared with a traditional neural network, in which the output of one layer can only serve as the input of the next, this solves the phenomena of vanishing gradients and model degradation while increasing the network depth. With this structure, the speed of the network can be optimized, the parameter utilization improved, and the model size reduced. The shortcut connections allow the high-dimensional features extracted by the shallow layers to be reused by the deeper layers, so the deep features of a picture can be extracted more effectively, gradients are kept from vanishing, features are extracted more efficiently, and the recognition capability of the network is enhanced.
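The identity shortcut H(x) = F(x) + x described above can be sketched numerically; the tiny weight matrix, the single weight layer, and the list-based arithmetic are placeholders for illustration, not the actual 3D ResNet layers.

```python
def relu(v):
    # element-wise ReLU activation, used here as pre-activation
    return [max(x, 0.0) for x in v]

def residual_unit(x, weight):
    """Sketch of a residual unit: output H(x) = F(x) + x, where F is a
    pre-activated (ReLU) single weight layer and the shortcut passes x
    unchanged. When the weights are all zero, F(x) = 0 and H(x) = x,
    i.e. the unit reduces to the identity mapping."""
    a = relu(x)
    f_x = [sum(a[i] * weight[i][j] for i in range(len(a)))
           for j in range(len(weight[0]))]
    return [f + xi for f, xi in zip(f_x, x)]  # shortcut adds the input back

x = [1.0, -2.0]
zero_w = [[0.0, 0.0], [0.0, 0.0]]
out = residual_unit(x, zero_w)  # F(x) = 0, so the unit acts as the identity
```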
According to the embodiment of the disclosure, the behavior characteristics in the video segments in the monitoring video are extracted by utilizing the pre-trained deep convolutional neural network with the space-time three-dimensional kernel. Wherein a deep convolutional neural network with spatio-temporal three-dimensional kernels may be pre-trained on the UCF-101 dataset.
Fig. 5 schematically illustrates a flowchart of a method of training an abnormal behavior detection model based on an objective loss function according to an embodiment of the present disclosure.
As shown in fig. 5, the method 500 of training the abnormal behavior detection model of this embodiment based on the objective loss function includes operations S501 to S504.
In operation S501, a training data set is obtained, where the training data set includes a plurality of video segment samples, and at least one of the video segment samples includes an abnormal behavior. The training data set may be divided into a normal video segment group and an abnormal video segment group, wherein at least one video segment sample in the abnormal video segment group comprises an abnormal behavior.
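The grouping described above can be sketched as follows, using video-level weak labels; the label strings and function name are assumptions for illustration.

```python
def group_training_set(samples):
    """Split (segment, video_label) pairs into an abnormal video segment
    group C_a and a normal video segment group C_n based on weak,
    video-level labels; the exact per-segment labels remain unknown."""
    c_a = [seg for seg, label in samples if label == "abnormal"]
    c_n = [seg for seg, label in samples if label == "normal"]
    return c_a, c_n

samples = [("seg1", "abnormal"), ("seg2", "normal"), ("seg3", "abnormal")]
c_a, c_n = group_training_set(samples)
```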
According to the embodiment of the disclosure, a training data set can be constructed using the indoor surveillance video data in the real-world surveillance video data set UCF-CRIME.
In operation S502, behavior features of a video segment sample are extracted from a training data set.
According to embodiments of the present disclosure, 3D ResNet may be used to extract behavioral characteristics of video segment samples in a training dataset. The behavior features include behavior features within normal video segments and abnormal video segments. The behavior characteristics can be, for example, dynamic characteristics of a person in a video clip.
In operation S503, the behavior characteristics of the video segment sample are input to the classifier, and an abnormal value of the video segment sample is output.
According to embodiments of the present disclosure, the classifier may be a multi-instance learning model. As described in operation S501 above, the training data set has already been divided into groups, i.e. the abnormal video segment group C_a and the normal video segment group C_n. Although at least one video segment sample in the abnormal video segment group contains abnormal behavior, the exact labels of the individual video segments are unknown; the following optimization function, formula (15), can therefore be used to perform a binary classification task and output the abnormal values of the video segment samples:

min_w (1/m) Σ_{j=1}^{m} max(0, 1 − y_j (w·φ(x_j) − b)) + ‖w‖²    (15)

wherein max(0, ·) denotes the hinge loss function; y_j ∈ {−1, +1} denotes the group label of sample x_j (normal video segment group C_n or abnormal video segment group C_a); φ(x) denotes the behavior feature representation of a video segment sample; b denotes the bias term; m denotes the total number of video segment samples in the training set; and w denotes the weights of the multi-instance learning model.
In operation S504, parameters of the classifier are adjusted based on the abnormal value and the objective loss function, and the classifier after parameter adjustment is used as an abnormal behavior detection model.
According to the embodiment of the present disclosure, the parameters of the classifier may be adjusted according to the abnormal value of the video segment sample output in operation S503 and the target loss function obtained by the above-described embodiment of the present disclosure, and the classifier after parameter adjustment may be used as the abnormal behavior detection model.
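The parameter adjustment of operation S504 can be sketched as a plain gradient step on the target loss; the learning rate, gradient values, and function name are placeholders, not the patent's specific optimizer.

```python
def adjust_parameters(weights, grads, lr=0.01):
    """One parameter-adjustment step for the classifier (operation S504):
    move each weight against the gradient of the target loss."""
    return [w - lr * g for w, g in zip(weights, grads)]

# e.g. one step with hypothetical gradients of the target loss:
new_w = adjust_parameters([1.0, 2.0], [0.5, -1.0], lr=0.1)
```

Repeating such steps until the target loss converges yields the classifier that serves as the abnormal behavior detection model.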
According to the embodiment of the disclosure, abnormal behavior detection is performed by extracting the behavior features of the video segments, so the targets in the surveillance video do not need to be separately modeled and detected during abnormal behavior detection. The whole process is therefore simpler and more convenient, saves computational overhead, and is better suited to lightweight detection equipment with low computing power. The abnormal behavior detection model trained according to the embodiments of the present disclosure has strong generalization capability and high accuracy. It can be applied to detecting whether abnormal events occur in an ATM self-service hall, detecting such events promptly and effectively and reducing the economic loss they cause.
Based on the abnormal behavior detection method, the disclosure also provides an abnormal behavior detection device. The apparatus will be described in detail below with reference to fig. 6.
Fig. 6 schematically shows a block diagram of the structure of an abnormal behavior detection apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the abnormal behavior detection apparatus 600 of this embodiment includes an acquisition module 610, a feature extraction module 620, a classification module 630, and a result determination module 640.
The obtaining module 610 is configured to obtain a target surveillance video to be detected, where the target surveillance video includes a plurality of video segments. In an embodiment, the obtaining module 610 may be configured to perform the operation S201 described above, which is not described herein again.
The feature extraction module 620 is configured to extract behavior features of the video segments from the target surveillance video. In an embodiment, the feature extraction module 620 may be configured to perform the operation S202 described above, which is not described herein again.
The classification module 630 is configured to input the behavior characteristics of the video segments into the abnormal behavior detection model, and output abnormal values of the video segments; the abnormal behavior detection model is obtained by training based on a target loss function, and the target loss function is obtained by constructing according to a plurality of abnormal value parameters. In an embodiment, the classification module 630 may be configured to perform the operation S203 described above, which is not described herein again.
The result determining module 640 is configured to obtain a detection result of the target surveillance video based on the abnormal values of the plurality of video segments. In an embodiment, the result determining module 640 may be configured to perform the operation S204 described above, which is not described herein again.
According to the embodiment of the present disclosure, any plurality of the obtaining module 610, the feature extracting module 620, the classifying module 630, and the result determining module 640 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 610, the feature extracting module 620, the classifying module 630, and the result determining module 640 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or an appropriate combination of any several of them. Alternatively, at least one of the obtaining module 610, the feature extraction module 620, the classification module 630 and the result determination module 640 may be at least partially implemented as a computer program module, which when executed may perform corresponding functions.
Fig. 7 schematically shows a block diagram of an electronic device adapted to implement the abnormal behavior detection method according to an embodiment of the present disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. The processor 701 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. It is noted that the programs may also be stored in one or more memories other than the ROM 702 and RAM 703. The processor 701 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 700 may also include input/output (I/O) interface 705, which input/output (I/O) interface 705 is also connected to bus 704, according to an embodiment of the present disclosure. The electronic device 700 may also include one or more of the following components connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 702 and/or the RAM 703 and/or one or more memories other than the ROM 702 and the RAM 703 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to realize the method provided by the embodiment of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 701. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via the communication section 709, and/or installed from the removable medium 711. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming language includes, but is not limited to, Java, C++, Python, the "C" language, or the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. An abnormal behavior detection method, comprising:
acquiring a target monitoring video to be detected, wherein the target monitoring video comprises a plurality of video segments;
extracting the behavior characteristics of the video clips from the target monitoring video;
inputting the behavior characteristics of the video clips into an abnormal behavior detection model, and outputting abnormal values of the video clips; the abnormal behavior detection model is obtained based on target loss function training, and the target loss function is obtained by constructing a plurality of abnormal value parameters;
and obtaining a detection result of the target monitoring video based on the abnormal values of the video segments.
2. The method of claim 1, wherein the abnormal value parameters comprise: a maximum abnormal value parameter of a normal video segment, a first order abnormal value parameter of an abnormal video segment, a second order abnormal value parameter of the abnormal video segment, a third order abnormal value parameter of the abnormal video segment, and a minimum abnormal value parameter of the abnormal video segment.
3. The method of claim 2, wherein constructing the target loss function from the plurality of abnormal value parameters comprises:
constructing a first sub-loss function according to the maximum abnormal value parameter of the normal video segment and the first sequence abnormal value parameter of the abnormal video segment;
constructing a second sub-loss function according to the first sequence abnormal value parameter of the abnormal video clip and the minimum abnormal value parameter of the abnormal video clip;
constructing a third sub-loss function according to the maximum abnormal value parameter of the normal video segment and the second order abnormal value parameter of the abnormal video segment;
constructing a fourth sub-loss function according to the maximum abnormal value parameter of the normal video segment and the third order abnormal value parameter of the abnormal video segment;
and combining the first sub-loss function, the second sub-loss function, the third sub-loss function and the fourth sub-loss function to construct the target loss function.
4. The method of claim 3, wherein constructing the target loss function from the plurality of abnormal value parameters further comprises:
constructing a sparsity constraint loss function according to the abnormal value parameters of the normal video clips;
constructing a time smoothness constraint loss function according to the difference value of the abnormal value parameter of the ith abnormal video segment and the abnormal value parameter of the (i + 1) th abnormal video segment, wherein the time sequence of the ith abnormal video segment and the (i + 1) th abnormal video segment is adjacent, and i is a positive integer;
and combining the first sub-loss function, the second sub-loss function, the third sub-loss function, the fourth sub-loss function, the sparsity constraint loss function and the time smoothness constraint loss function to construct and obtain the target loss function.
5. The method of claim 1, wherein the extracting the behavior feature of the video segment from the target surveillance video comprises:
and extracting the behavior characteristics in the video segments in the monitoring video by utilizing a deep convolutional neural network with a space-time three-dimensional kernel.
6. The method according to claim 1, wherein before the acquiring the target surveillance video to be detected, further comprising:
acquiring an initial monitoring video to be detected;
and dividing the initial monitoring video according to segments to obtain the target monitoring video.
7. The method of claim 1, wherein the abnormal behavior detection model trained based on an objective loss function comprises:
acquiring a training data set, wherein the training data set comprises a plurality of video clip samples, and at least one of the video clip samples comprises abnormal behaviors;
extracting behavior features of the video segment samples from the training dataset;
inputting the behavior characteristics of the video segment sample into a classifier, and outputting an abnormal value of the video segment sample;
and adjusting parameters of the classifier based on the abnormal values and the target loss function, and taking the classifier after parameter adjustment as the abnormal behavior detection model.
8. An abnormal behavior detection apparatus comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a target monitoring video to be detected, and the target monitoring video comprises a plurality of video clips;
the characteristic extraction module is used for extracting the behavior characteristics of the video clips from the target monitoring video;
the classification module is used for inputting the behavior characteristics of the video clips into an abnormal behavior detection model and outputting abnormal values of the video clips; the abnormal behavior detection model is obtained based on target loss function training, and the target loss function is obtained by constructing a plurality of abnormal value parameters; and
and the result determining module is used for obtaining the detection result of the target monitoring video based on the abnormal values of the video segments.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-7.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 7.
CN202210251079.XA 2022-03-15 2022-03-15 Abnormal behavior detection method, device, equipment and medium Pending CN114581836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210251079.XA CN114581836A (en) 2022-03-15 2022-03-15 Abnormal behavior detection method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210251079.XA CN114581836A (en) 2022-03-15 2022-03-15 Abnormal behavior detection method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114581836A true CN114581836A (en) 2022-06-03

Family

ID=81780053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210251079.XA Pending CN114581836A (en) 2022-03-15 2022-03-15 Abnormal behavior detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114581836A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115102730A (en) * 2022-06-10 2022-09-23 深圳市众功软件有限公司 Integrated monitoring method for multiple devices
CN115102730B (en) * 2022-06-10 2023-08-11 深圳市众功软件有限公司 Integrated monitoring method and device for multiple devices and electronic device

Similar Documents

Publication Publication Date Title
WO2018192570A1 (en) Time domain motion detection method and system, electronic device and computer storage medium
CN110431560B (en) Target person searching method, device, equipment and medium
US8660368B2 (en) Anomalous pattern discovery
US20140139633A1 (en) Method and System for Counting People Using Depth Sensor
CN108230346B (en) Method and device for segmenting semantic features of image and electronic equipment
US11256921B2 (en) System and method for identifying events of interest in images from one or more imagers in a computing network
US20190138787A1 (en) Method and apparatus for facial age identification, and electronic device
US20200394414A1 (en) Keyframe scheduling method and apparatus, electronic device, program and medium
US20220254162A1 (en) Deep learning framework for congestion detection and prediction in human crowds
Sumon et al. Violent crowd flow detection using deep learning
CN112329762A (en) Image processing method, model training method, device, computer device and medium
CN113361603A (en) Training method, class recognition device, electronic device and storage medium
CN112989987A (en) Method, apparatus, device and storage medium for identifying crowd behavior
CN114581836A (en) Abnormal behavior detection method, device, equipment and medium
Thomopoulos et al. Automated real-time risk assessment for airport passengers using a deep learning architecture
Gulghane et al. A survey on intrusion detection system using machine learning algorithms
Akash et al. Human violence detection using deep learning techniques
Joshi et al. Smart surveillance system for detection of suspicious behaviour using machine learning
US10944898B1 (en) Systems and methods for guiding image sensor angle settings in different environments
Mohandas et al. TensorFlow Enabled Deep Learning Model Optimization for enhanced Realtime Person Detection using Raspberry Pi operating at the Edge.
CN116304910A (en) Anomaly detection method, device, equipment and storage medium for operation and maintenance data
US20230072641A1 (en) Image Processing and Automatic Learning on Low Complexity Edge Apparatus and Methods of Operation
Nagulan et al. An Efficient Real-Time Fire Detection Method Using Computer Vision and Neural Network-Based Video Analysis
CN114139059A (en) Resource recommendation model training method, resource recommendation method and device
Behera et al. Characterization of dense crowd using gibbs entropy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination