CN113313919A - Alarm method and device using multilayer feedforward network model and electronic equipment - Google Patents


Info

Publication number
CN113313919A
Authority
CN
China
Prior art keywords
feature, layer, pulse, bar, feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110581280.XA
Other languages
Chinese (zh)
Other versions
CN113313919B (en)
Inventor
肖扬
罗涛
施佳子
于海燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110581280.XA
Publication of CN113313919A
Application granted
Publication of CN113313919B
Legal status: Active

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 - Status alarms
    • G08B 21/182 - Level alarms, e.g. alarms responsive to variables exceeding a threshold
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/065 - Analogue means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/292 - Multi-camera tracking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Neurology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Alarm Systems (AREA)

Abstract

The disclosure provides an alarm method, an alarm apparatus, and an electronic device using a multilayer feedforward network model. The method and apparatus are applicable to regional security, the financial field, and other fields. The alarm method comprises the following steps: acquiring a multi-pulse event stream of a target scene using a dynamic vision sensor; inputting the multi-pulse event stream to a multi-layer feedforward network model using an input layer; performing feature extraction and encoding on the multi-pulse event stream using a feature extraction layer to generate a multi-pulse feature sequence; classifying the multi-pulse feature sequence using a classification layer, generating a classification result and outputting the classification result; and determining whether the classification result indicates an abnormal condition and, if so, generating an alarm signal.

Description

Alarm method and device using multilayer feedforward network model and electronic equipment
Technical Field
The present disclosure relates to the field of regional security technologies, and in particular, to an alarm method and apparatus using a multi-layer feedforward network model, an electronic device, and a storage medium.
Background
As technology develops, area monitoring is increasingly performed by machine equipment to reduce cost and improve efficiency. In the related art, a specific target scene is typically monitored with a high-resolution camera based on frame images. Because such a camera captures frames at fixed intervals, key information occurring between adjacent frames is lost. In addition, the continuous frame images and the redundant pixels within each frame waste large amounts of storage and computing resources, resulting in high energy consumption and high latency, which is unfavorable for long-term monitoring of the target scene.
Disclosure of Invention
In view of the above, the present disclosure provides an alarm method, apparatus, electronic device and storage medium using a multi-layer feedforward network model.
An aspect of the present disclosure provides an alarm method using a multi-layer feedforward network model including an input layer, a feature extraction layer, and a classification layer, wherein the alarm method includes: acquiring a multi-pulse event stream of a target scene using a dynamic vision sensor; inputting the multi-pulse event stream to the multi-layer feedforward network model using the input layer; performing feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence; classifying the multi-pulse feature sequence using the classification layer, generating a classification result and outputting the classification result; and determining whether the classification result belongs to an abnormal condition, and generating an alarm signal based on the abnormal condition.
According to an embodiment of the present disclosure, the feature extraction layer includes a first feature extraction layer and a second feature extraction layer. Performing feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence comprises: performing an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract bar features of the multi-pulse event stream; and encoding the intensity of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence.
According to an embodiment of the present disclosure, performing the event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract the bar features of the multi-pulse event stream includes: acquiring address information of the multi-pulse event stream; and overlaying each element of the convolution kernel onto the response map of the first feature extraction layer based on the address information to update the response map, thereby obtaining the bar features.
According to an embodiment of the present disclosure, encoding the intensity of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence includes: acquiring the bar features within the convolution kernel; and encoding the intensity of the bar features, wherein the intensity of a bar feature decays linearly over time.
According to an embodiment of the present disclosure, encoding the intensity of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence further comprises: after the intensity of a bar feature is encoded, judging whether it exceeds a preset threshold; and, when it does, generating a pulse signal and resetting the intensity of that bar feature, together with the intensities of the bar features within a set surrounding range, to a set value.
According to an embodiment of the present disclosure, encoding the intensity of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence further comprises: after the intensities are reset to the set value, holding the reset intensities at the set value for a preset time.
According to an embodiment of the present disclosure, the classification results include at least two types: a normal condition and an abnormal condition.
According to an embodiment of the present disclosure, the classification layer includes a neural network composed of a plurality of Tempotron neurons.
Another aspect of the present disclosure provides an alarm apparatus using a multi-layer feedforward network model, the model including an input layer, a feature extraction layer, and a classification layer. The alarm apparatus includes: an acquisition module configured to acquire a multi-pulse event stream of a target scene using a dynamic vision sensor; an input module configured to input the multi-pulse event stream to the multi-layer feedforward network model using the input layer; a feature extraction and encoding module configured to perform feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence; a classification module configured to classify the multi-pulse feature sequence using the classification layer, generating and outputting the classification result; and an alarm module configured to determine whether the classification result belongs to an abnormal condition and to generate an alarm signal based on the abnormal condition.
According to an embodiment of the present disclosure, the feature extraction layer includes a first feature extraction layer and a second feature extraction layer; the feature extraction coding module comprises a feature extraction submodule and a feature coding submodule; wherein the feature extraction submodule is configured to perform an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract bar features of the multi-pulse event stream; the feature encoding submodule is configured to encode the intensities of the bar features using the second feature extraction layer to generate a multi-pulse feature sequence.
According to an embodiment of the present disclosure, the feature extraction submodule is configured to acquire address information of the multi-pulse event stream, and to overlay each element of the convolution kernel onto the response map of the first feature extraction layer based on the address information, updating the response map to obtain the bar features.
According to an embodiment of the present disclosure, the feature encoding submodule is configured to obtain the bar feature in a convolution kernel; and encoding the intensity of the bar feature, wherein the intensity of the bar feature decays linearly over time.
According to an embodiment of the present disclosure, the feature encoding submodule further includes a reset module configured to judge, after encoding the intensity of a bar feature, whether that intensity exceeds a preset threshold; and, when it does, to generate a pulse signal and reset the intensity of that bar feature, together with the intensities of the bar features within a set surrounding range, to a set value.
According to an embodiment of the present disclosure, the feature encoding submodule further includes a holding module configured to hold the reset intensities at the set value for a preset time after the intensity of the bar feature and the intensities of the bar features within the set surrounding range are reset.
Another aspect of the present disclosure provides an electronic device including: one or more processors; and a storage device storing executable instructions that, when executed by the processor, implement the alarm method described above.
Another aspect of the disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the alarm method described above.
Another aspect of the present disclosure provides a computer program product storing a computer program that, when executed, implements the alarm method described above.
One or more of the embodiments described above provide the following advantages: acquiring the target scene with a dynamic vision sensor reduces redundant information and improves recording precision; and processing the multi-pulse event stream with the feedforward network model effectively reduces computational complexity, improves computational efficiency and accuracy, and helps prevent false alarms.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of an alarm method and an alarm apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a structural schematic of a multi-layer feed-forward network model according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of an alarm method according to an embodiment of the disclosure;
fig. 4a to 4d schematically illustrate the event-driven convolution operation of operation S303 according to an embodiment of the present disclosure;
fig. 5a to 5b schematically illustrate the encoding process of operation S303 according to an embodiment of the present disclosure;
fig. 6 schematically illustrates the linear decay and resetting of bar features in operation S303 according to an embodiment of the present disclosure;
FIG. 7a schematically illustrates a block diagram of an alarm device according to an embodiment of the present disclosure;
FIG. 7b schematically illustrates a block diagram of a feature encoding submodule of an alarm device according to an embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features.
Embodiments of the present disclosure provide an alarm method using a multi-layer feedforward network model that includes an input layer, a feature extraction layer, and a classification layer. The alarm method comprises the following steps: a multi-pulse event stream of a target scene is acquired using a dynamic vision sensor; the multi-pulse event stream is input to the multi-layer feedforward network model using the input layer; feature extraction and encoding are performed on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence; the multi-pulse feature sequence is classified using the classification layer, and the classification result is generated and output; finally, whether the classification result belongs to an abnormal condition is determined, and an alarm signal is generated based on the abnormal condition. According to the alarm method of the embodiments of the present disclosure, acquiring the target scene with a dynamic vision sensor reduces redundant information and improves recording precision, and processing the multi-pulse event stream with the multi-layer feedforward network model reduces computational complexity, improves computational efficiency and accuracy, and helps prevent false alarms.
Fig. 1 schematically illustrates an application scenario of an alarm method and an alarm device according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of an application scenario in which the embodiment of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiment of the present disclosure may not be applied to other devices, systems, environments or scenarios. It should be noted that the alarm method and apparatus, the electronic device, and the storage medium using the multi-layer feedforward network model provided in the embodiments of the present disclosure may be used in the field of regional security technology, the related aspects of scene security in the financial field, and other fields outside the financial field.
As shown in FIG. 1, the exemplary system architecture 100 is one to which the alarm method using a multi-layer feedforward network model of the disclosed embodiments may be applied. The system architecture 100 may include dynamic vision sensors 101, 102, a network 103, and a server 104. The network 103 provides a medium for communication links between the dynamic vision sensors 101, 102 and the server 104, and may include various connection types, such as wired or wireless communication links or fiber optic cables.
The dynamic vision sensors 101, 102 interact with the server 104 over the network 103 to receive or transmit data. A dynamic vision sensor outputs a temporally continuous stream of pulse events: it generates address events, with high temporal precision, by sensing changes in the light intensity of a scene. As shown in fig. 1, dynamic vision sensors may be disposed in different application scenarios to monitor changes in different environments; for example, one or more dynamic vision sensors may be provided within any scene in which monitoring is desired.
The server 104 may be a server that provides various services, such as processing the multi-pulse event stream data supplied by the dynamic vision sensors. The server 104 is equipped with a brain-like (neuromorphic) chip that runs a Spiking Neural Network (SNN), a network model closer to the working mechanism of biological neurons, which processes the pulse signals input from the dynamic vision sensors.
It should be noted that the alarm method using the multi-layer feedforward network model provided by the embodiments of the present disclosure can generally be executed by the server 104. Accordingly, the alarm device provided by the embodiments of the present disclosure may generally be disposed in the server 104. It should be understood that the number of dynamic vision sensors (101, 102), networks 103, and servers 104 in fig. 1 is merely illustrative; there may be any number of dynamic vision sensors, networks, and servers, as required by the implementation.
FIG. 2 schematically shows a structural schematic of a multi-layer feed-forward network model according to an embodiment of the disclosure.
As shown in FIG. 2, the multi-layer feed-forward network model 200 includes an input layer 210, a feature extraction layer 220, and a classification layer 230.
The input layer 210 is configured to input the multi-pulse event stream of the target scene acquired by the dynamic vision sensor into the multi-layer feed-forward network model, so as to facilitate subsequent information processing and the like.
According to an embodiment of the present disclosure, the input layer 210 is used to input the multi-pulse event stream. Specifically, the multi-pulse event stream is a continuous stream, produced in the manner of a neuromorphic-engineering simulation of the biological retina, that encodes changes in scene light intensity and carries spatio-temporal address information. Because ordinary pictures downloaded from the internet cannot be directly converted into multi-pulse event stream information, a dynamic vision sensor or another neuromorphic vision sensor must be used for recording and calibration.
The feature extraction layer 220 is configured to perform feature extraction and encoding on the input multi-pulse event stream and generate a multi-pulse feature sequence. Specifically, event-driven convolution with a forgetting mechanism is introduced in the feature extraction layer 220 to process the continuous multi-pulse event stream, and a Winner-Take-All (WTA) operation is used during encoding to reduce computational complexity.
As shown in fig. 2, the feature extraction layer 220 includes a first feature extraction layer 221 and a second feature extraction layer 222.
The first feature extraction layer 221 extracts bar features of the multi-pulse event stream by performing an event-driven convolution operation between the input stream and a filter. The filters may be, for example, Gabor filters, each of which simulates a neuronal receptive field of fixed extent that is sensitive to a particular orientation. The Gabor filter kernel function is defined as follows:
$$ G(X, Y) = \exp\left(-\frac{X^{2} + \gamma^{2} Y^{2}}{2\sigma^{2}}\right)\cos\left(\frac{2\pi}{\lambda}X\right) $$

where X = x cos θ + y sin θ, Y = −x sin θ + y cos θ, θ denotes the orientation sensed by the neuron, γ is the aspect ratio, σ is the effective width of the Gaussian envelope, and λ is the wavelength.
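For illustration, a minimal sketch (not part of the patent text) of a bank of such orientation-tuned Gabor kernels follows; the kernel size and the values of γ, σ, and λ are assumptions chosen for demonstration only:

```python
import numpy as np

def gabor_kernel(theta: float, size: int = 5, gamma: float = 0.5,
                 sigma: float = 1.5, lam: float = 3.0) -> np.ndarray:
    """Build one orientation-tuned Gabor kernel G(X, Y) as defined above."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    Y = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(X**2 + (gamma * Y)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * X / lam)
    return envelope * carrier

# One kernel per sensed orientation, e.g. four orientations in 45-degree steps.
kernels = [gabor_kernel(t) for t in np.deg2rad([0, 45, 90, 135])]
```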
The second feature extraction layer 222 is used to encode the intensity of the bar features and generate the multi-pulse feature sequence. Specifically, as the first feature extraction layer 221 extracts bar features, their intensities are encoded; when an intensity value exceeds a set threshold, a pulse is emitted, producing the multi-pulse feature sequence. For example, the intensity of a bar feature may be encoded using precise temporal coding.
The classification layer 230 is used to classify the multi-pulse feature sequence. It classifies the acquired sequence into a set of manually defined categories. For example, if the categories in the classification layer 230 are person, animal, and vehicle, the classification layer may generate the result "person" from the multi-pulse feature sequence derived from the acquired multi-pulse event stream of the target scene. In an embodiment of the present disclosure, there are at least two classification results, at least one of which is an abnormal condition.
In an embodiment of the present disclosure, the multi-layer feedforward network model 200 also includes a learning layer 240.
The learning layer 240 is disposed between the feature extraction layer 220 and the classification layer 230. It learns to classify the multi-pulse feature sequence, performing weight adjustment based on the membrane potentials and pulse firing times of the neurons of the spiking neural network.
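Since the classification layer is described below as being built from Tempotron neurons, the weight adjustment can be read as the standard Tempotron rule (Gütig and Sompolinsky, 2006): on a misclassified trial, each synapse is nudged in proportion to its postsynaptic-potential contribution at the time of maximum membrane potential. The sketch below illustrates this under that assumption; the learning rate is an arbitrary choice:

```python
LEARNING_RATE = 1e-3  # illustrative value

def tempotron_update(weights, spike_trains, t_max, should_fire, did_fire, kernel):
    """One Tempotron weight update.

    weights[i]      -- synaptic weight of afferent i
    spike_trains[i] -- input spike times of afferent i
    t_max           -- time of maximum membrane potential in the trial
    kernel(dt)      -- postsynaptic-potential kernel K(t - t_i)
    """
    if should_fire == did_fire:
        return weights                      # correct decision: no change
    sign = 1.0 if should_fire else -1.0     # potentiate misses, depress false alarms
    for i, train in enumerate(spike_trains):
        contribution = sum(kernel(t_max - ti) for ti in train if ti <= t_max)
        weights[i] += sign * LEARNING_RATE * contribution
    return weights
```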
Fig. 3 schematically illustrates a flow chart of an alarm method according to an embodiment of the present disclosure.
As shown in fig. 3, the alert method 300 of the present disclosure includes operations S301 to S305.
In operation S301, a multi-pulse event stream of a target scene is acquired using a dynamic vision sensor.
For example, the dynamic vision sensor is installed in a specific target scene to be monitored; the target scene may be a bank's ATM room, a storage warehouse to be monitored, or a residential area.
In an embodiment of the present disclosure, the dynamic vision sensor encodes the target scene into a continuous multi-pulse event stream, generating events that carry time and address information whenever the change in scene light intensity exceeds a preset threshold.
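For concreteness, a minimal sketch of the address-event representation such a sensor is assumed to produce follows; the field names and the threshold value are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PulseEvent:
    x: int          # pixel column address
    y: int          # pixel row address
    t: float        # timestamp, e.g. in microseconds
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def maybe_emit(x: int, y: int, t: float, delta_log_intensity: float,
               threshold: float = 0.2) -> PulseEvent | None:
    """Emit an event only when the light-intensity change passes the threshold."""
    if abs(delta_log_intensity) >= threshold:
        return PulseEvent(x, y, t, 1 if delta_log_intensity > 0 else -1)
    return None
```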
In operation S302, a multi-pulse event stream is input to a multi-layer feed-forward network model using an input layer.
In operation S303, feature extraction and encoding are performed on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence.
For example, the first feature extraction layer performs an event-driven convolution operation on the multi-pulse event stream to extract its bar features, and the second feature extraction layer encodes the intensity of the bar features to generate the multi-pulse feature sequence.
In operation S304, the multi-pulse feature sequence is classified using the classification layer, and a classification result is generated and output.
In an embodiment of the present disclosure, the classification layer classifies the multi-pulse feature sequence into at least two classification results, for example, person, animal, and the like, and outputs the classification result for subsequent processing.
In operation S305, it is determined whether the classification result belongs to an abnormal condition, and an alarm signal is generated based on the abnormal condition.
In the embodiment of the disclosure, whether an abnormal condition exists is judged according to the type of the classification result. For example, the classification results may include person, animal, and vehicle. When the classification result is a person, it can be assumed that a person is present in the target scene; the result is then treated as an abnormal condition, and an alarm signal is generated to prompt the relevant personnel that the target scene is abnormal. When the classification result is an animal or a vehicle, the result is treated as a normal condition and no alarm signal is generated.
In other embodiments of the present disclosure, there may be more classification results, with a different alarm signal generated for each classification result.
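A minimal sketch of this classification-to-alarm mapping (operation S305) follows; the category names and which categories count as abnormal are illustrative assumptions:

```python
# None means the classification is treated as a normal condition.
ALARM_SIGNALS = {
    "person": "intrusion_alarm",   # abnormal condition in the example above
    "animal": None,
    "vehicle": None,
}

def handle_classification(result: str) -> str | None:
    """Return the alarm signal for an abnormal classification, otherwise None."""
    return ALARM_SIGNALS.get(result)
```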
Fig. 4a to 4d schematically show a process diagram of performing an event-driven convolution operation in operation S303 by the alarm method according to the embodiment of the present disclosure.
In an embodiment of the present disclosure, performing an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract the bar features of the multi-pulse event stream includes: acquiring address information of the multi-pulse event stream; and overlaying each element of the convolution kernel onto the response map of the first feature extraction layer based on the address information to update the response map, thereby obtaining the bar features.
Fig. 4a shows the acquisition of address information from the multi-pulse event stream. Specifically, the input multi-pulse event stream carries address information, which is read directly from the stream. As shown in fig. 4b, each element of the convolution kernel is then overlaid onto the response map of the first feature extraction layer based on the acquired address information, yielding the bar feature information shown in fig. 4c.
In the embodiment of the present disclosure, as shown in fig. 4d, a forgetting mechanism is also introduced to eliminate the influence of long-past events on the current response map: each element of the response map changes linearly with time.
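A minimal sketch of this event-driven convolution with linear forgetting follows; the map size, kernel size, and decay rate are illustrative assumptions, and the kernel would come from the Gabor bank sketched earlier:

```python
import numpy as np

H, W, K = 128, 128, 5     # response-map and kernel sizes (assumed)
DECAY_PER_STEP = 1.0      # linear decay per time step, as in the FIG. 6 example

response_map = np.zeros((H, W))
last_t = 0.0

def on_event(x: int, y: int, t: float, kernel: np.ndarray) -> None:
    """Decay the response map linearly, then overlay the kernel at the event address."""
    global last_t
    # Forgetting mechanism: every element decays linearly with elapsed time
    # (clamped at zero, matching the walkthrough in FIG. 6).
    np.maximum(response_map - DECAY_PER_STEP * (t - last_t), 0.0, out=response_map)
    last_t = t
    # Event-driven convolution: cover each kernel element onto the response
    # map, centred at the event's (x, y) address and clipped at the borders.
    r = K // 2
    y0, y1 = max(0, y - r), min(H, y + r + 1)
    x0, x1 = max(0, x - r), min(W, x + r + 1)
    response_map[y0:y1, x0:x1] += kernel[y0 - y + r:y1 - y + r,
                                         x0 - x + r:x1 - x + r]
```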
Fig. 5a to 5b schematically illustrate a process of encoding the alarm method in operation S303 according to an embodiment of the present disclosure.
In an embodiment of the present disclosure, encoding the intensity of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence includes: acquiring the bar features within the convolution kernel; and encoding the intensity of the bar features, wherein the intensity decays linearly over time.
After the bar features in the convolution kernel are acquired, their intensities are encoded, as shown in fig. 5a. Specifically, the intensity value of a bar feature is defined by its height. As shown in fig. 5b, the values in the figure represent the intensity levels after encoding. For example, the intensity value of bar feature A is 9 and that of bar feature B is 7. Whether the corresponding neuron fires a pulse can then be determined from these intensity values.
Fig. 6 schematically shows a schematic diagram of the linear attenuation and resetting of the strip feature in operation S303 by the alarm method according to the embodiment of the present disclosure.
In the embodiment of the disclosure, after the intensity of a bar feature is encoded, whether it exceeds a preset threshold is judged; when it does, a pulse signal is generated, and the intensity of that bar feature, together with the intensities of the bar features within a set surrounding range, is reset to a set value.
As shown in fig. 6, the process of linearly attenuating and resetting the bar feature includes operations S601 to S606.
In operation S601, the convolution kernel is overlaid on the response map according to the address information of the multi-pulse event stream. For example, the kernel is shown at S601 in fig. 6, with the intensity of a bar feature represented in each box.
In operation S602, the intensity information of the bar features decays linearly with time. Assuming a decay of −1 per time step, the intensity of each bar feature decreases by 1 over one time step; after two time steps, the values are lower by 2 than in operation S601.
In operation S603, when a second event arrives, its convolution kernel is overlaid on the response map and the intensities of the bar features change accordingly.
In operation S604, the intensity information of the bar features again decays linearly with time; for example, over one time step, each intensity decreases by 1.
In operation S605, when a third event arrives, its convolution kernel is overlaid on the response map and the intensities of the bar features change again.
In the embodiment of the present disclosure, it is then determined whether the intensity of a bar feature exceeds the preset threshold. For example, assuming the preset threshold is 6, the intensity of a bar feature after the third event exceeds the threshold, so a pulse signal is generated.
In operation S606, the intensity of that bar feature and the intensities of the bar features within the set surrounding range are reset to the set value. For example, the surrounding range covers the bar features in the eight directions around the firing bar feature, and the reset value is 0. Then, as shown at S606 in fig. 6, after the intensity of the bar feature exceeds the set threshold, a pulse signal is generated and the intensity values of that bar feature and of the surrounding bar features are reset to 0.
In the embodiment of the present disclosure, after the intensity of the bar feature and the intensities of the bar features within the set surrounding range are reset to the set value, the reset intensities are held at the set value for a preset time.
For example, as shown in fig. 6, after the intensity of a bar feature is reset to 0 in operation S606, it is held at 0 for a preset time. That is, the corresponding neuron enters a refractory period: within the preset time range, the neuron's response is unaffected by new events.
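The threshold, lateral reset, and refractory behaviour of operations S601 to S606 amount to a Winner-Take-All step. A minimal sketch follows, reusing response_map, H, and W from the convolution sketch above; the threshold value of 6 comes from the example in the text, while the refractory length is an assumption:

```python
THRESHOLD = 6.0          # preset threshold, taken from the example above
RESET_VALUE = 0.0        # set value applied after a pulse
REFRACTORY_STEPS = 3     # refractory period length (assumed)

refractory = np.zeros((H, W), dtype=int)   # per-location refractory counters

def fire_and_reset() -> list[tuple[int, int]]:
    """Emit a pulse wherever intensity exceeds the threshold, then reset the
    firing location and its 8 surrounding locations to the set value."""
    refractory[refractory > 0] -= 1
    spikes = []
    ys, xs = np.where((response_map > THRESHOLD) & (refractory == 0))
    for y, x in zip(ys, xs):
        if response_map[y, x] <= THRESHOLD:   # already reset by a nearby winner
            continue
        spikes.append((y, x))                          # pulse signal
        y0, y1 = max(0, y - 1), min(H, y + 2)
        x0, x1 = max(0, x - 1), min(W, x + 2)
        response_map[y0:y1, x0:x1] = RESET_VALUE       # winner + 8 neighbours
        refractory[y0:y1, x0:x1] = REFRACTORY_STEPS    # hold at the set value
    return spikes   # these (y, x) pulses form the multi-pulse feature sequence
```

A fuller implementation would also have on_event skip locations whose refractory counter is still positive, so that the reset intensity is truly held for the preset time.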
In an embodiment of the present disclosure, the classification layer includes a neural network composed of a plurality of Tempotron neurons.
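For reference, a minimal sketch of a single Tempotron neuron's membrane potential (Gütig and Sompolinsky, 2006) is given below; the time constants and threshold are illustrative assumptions, and kernel here is the K(·) passed into the learning-rule sketch earlier:

```python
import numpy as np

TAU, TAU_S = 15.0, 15.0 / 4       # membrane and synaptic time constants (assumed)
T_PEAK = TAU * TAU_S / (TAU - TAU_S) * np.log(TAU / TAU_S)
V0 = 1.0 / (np.exp(-T_PEAK / TAU) - np.exp(-T_PEAK / TAU_S))  # peak-normalises K
V_THRESH = 1.0                    # firing threshold (assumed)

def kernel(dt: float) -> float:
    """Postsynaptic-potential kernel K(t - t_i); zero for dt < 0."""
    return V0 * (np.exp(-dt / TAU) - np.exp(-dt / TAU_S)) if dt >= 0 else 0.0

def membrane_potential(t: float, weights, spike_trains) -> float:
    """V(t) = sum_i w_i * sum_{t_i <= t} K(t - t_i); the neuron assigns the
    input to its class if V(t) crosses V_THRESH anywhere in the trial."""
    return sum(w * sum(kernel(t - ti) for ti in train if ti <= t)
               for w, train in zip(weights, spike_trains))
```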
According to the alarm method of the embodiments of the present disclosure, acquiring the target scene with a dynamic vision sensor reduces redundant information and improves recording precision. Processing the multi-pulse event stream with the multi-layer feedforward network model reduces computational complexity, improves computational efficiency and accuracy, and helps prevent false alarms.
Fig. 7a schematically shows a block diagram of an alarm device according to an embodiment of the present disclosure. FIG. 7b schematically illustrates a block diagram of a feature encoding submodule of an alarm device according to an embodiment of the present disclosure.
Embodiments of the present disclosure also provide an alarm device using a multi-layer feedforward network model, which includes an input layer, a feature extraction layer, and a classification layer. As shown in fig. 7a, the alarm device 700 includes: an acquisition module 710, an input module 720, a feature extraction and encoding module 730, a classification module 740, and an alarm module 750.
The acquisition module 710 is configured to acquire a multi-pulse event stream of the target scene using the dynamic vision sensor.
The input module 720 is configured to input the multi-pulse event stream to the multi-layer feedforward network model using the input layer.
The feature extraction and encoding module 730 is configured to perform feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence.
The classification module 740 is configured to classify the multi-pulse feature sequence using the classification layer, and to generate and output the classification result.
The alarm module 750 is configured to determine whether the classification result belongs to an abnormal condition and to generate an alarm signal based on the abnormal condition.
In an embodiment of the present disclosure, the feature extraction encoding module 730 includes a feature extraction sub-module 731 and a feature encoding sub-module 732.
The feature extraction submodule 731 is configured to perform an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract the bar features of the multi-pulse event stream. The feature encoding submodule 732 is configured to encode the intensity of the bar features using the second feature extraction layer, generating the multi-pulse feature sequence.
According to an embodiment of the present disclosure, the feature extraction submodule 731 is configured to acquire address information of the multi-pulse event stream, and to overlay each element of the convolution kernel onto the response map of the first feature extraction layer based on the address information, updating the response map to obtain the bar features.
According to an embodiment of the present disclosure, the feature encoding submodule 732 is configured to obtain the bar features in the convolution kernel and to encode their intensity, wherein the intensity of a bar feature decays linearly over time.
As shown in fig. 7b, the feature encoding sub-module 732 further includes a reset module 7321 and a hold module 7322.
The resetting module 7321 is configured to judge, after encoding the intensity of a bar feature, whether that intensity exceeds a preset threshold, and, when it does, to generate a pulse signal and reset the intensity of that bar feature, together with the intensities of the bar features within the set surrounding range, to a set value.
The holding module 7322 is configured to hold the reset intensities at the set value for a preset time after the intensity of the bar feature and the intensities of the bar features within the set surrounding range are reset.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any of the obtaining module 710, the input module 720, the feature extraction coding module 730, the classification module 740, the alarm module 750, the feature extraction sub-module 731, the feature coding sub-module 732, the resetting module 7321, and the keeping module 7322 may be combined into one module to be implemented, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 710, the inputting module 720, the feature extraction coding module 730, the classifying module 740, the alarming module 750, the feature extraction sub-module 731, the feature coding sub-module 732, the resetting module 7321, and the keeping module 7322 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or implemented by a suitable combination of any of them. Alternatively, at least one of the obtaining module 710, the input module 720, the feature extraction coding module 730, the classification module 740, the alarm module 750, the feature extraction sub-module 731, the feature coding sub-module 732, the reset module 7321, and the hold module 7322 may be implemented at least in part as a computer program module that, when executed, may perform a corresponding function.
Fig. 8 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 801 may also include onboard memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing different actions of the method flows according to embodiments of the present disclosure.
In the RAM 803, various programs and data necessary for the operation of the electronic apparatus 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or RAM 803. Note that the programs may also be stored in one or more memories other than the ROM 802 and RAM 803. The processor 801 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
Electronic device 800 may also include an input/output (I/O) interface 805, which is likewise connected to bus 804. Electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication section 809, and/or installed from the removable medium 811. The computer program, when executed by the processor 801, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 802 and/or RAM 803 described above and/or one or more memories other than the ROM 802 and RAM 803.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure, when the computer program product is run on an electronic device, the program code being adapted to cause the electronic device to carry out the alarm method provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 801, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted in the form of a signal on a network medium, distributed, downloaded and installed via communication section 809, and/or installed from removable media 811. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, the program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (11)

1. An alarm method using a multi-layer feed-forward network model comprising an input layer, a feature extraction layer, and a classification layer, wherein the alarm method comprises:
acquiring a multi-pulse event stream of a target scene using a dynamic visual sensor;
inputting the multi-pulse event stream to the multi-layer feed-forward network model using the input layer;
performing feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence;
classifying the multi-pulse feature sequence using the classification layer, generating a classification result and outputting the classification result; and
determining whether the classification result belongs to an abnormal condition, and generating an alarm signal based on the abnormal condition.
2. The alarm method of claim 1, wherein the feature extraction layer comprises a first feature extraction layer and a second feature extraction layer;
the using the feature extraction layer to perform feature extraction and encoding on the multi-pulse event stream to generate a multi-pulse feature sequence comprises:
performing an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract bar features of the multi-pulse event stream; and
encoding the intensities of the bar features using the second feature extraction layer to generate the multi-pulse feature sequence.
3. The alarm method of claim 2, wherein said performing an event-driven convolution operation on the multi-pulse event stream using the first feature extraction layer to extract the bar features of the multi-pulse event stream comprises:
acquiring address information of the multi-pulse event stream; and
mapping each element of the convolution kernel onto the response map of the first feature extraction layer based on the address information to update the response map, thereby obtaining the bar features.
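
One plausible reading of this event-driven convolution, sketched here purely for illustration (the patent publishes no code; the (x, y) address format and all names below are assumptions), is to stamp the kernel onto the response map once per incoming event rather than convolving whole frames:

    import numpy as np

    def event_driven_convolution(events, kernel, height, width):
        # Stamp the kernel, centred on each event address, onto the
        # response map; events is an assumed iterable of (x, y) addresses,
        # with x taken as the row index.
        response = np.zeros((height, width))
        kh, kw = kernel.shape
        for x, y in events:
            for i in range(kh):
                for j in range(kw):
                    u, v = x + i - kh // 2, y + j - kw // 2
                    if 0 <= u < height and 0 <= v < width:
                        response[u, v] += kernel[i, j]  # cover element onto map
        return response
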
4. The alarm method of claim 3, wherein said encoding the intensities of the bar features using the second feature extraction layer to generate a multi-pulse feature sequence comprises:
acquiring the bar features within the convolution kernel; and
encoding the intensities of the bar features, wherein the intensity of each bar feature decays linearly over time.
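
The linear decay of claim 4 admits a one-line illustration; the function name, the rate parameter, and the zero floor below are editorial assumptions, not claim language:

    def decayed_intensity(i0, t0, t, rate):
        # Intensity falls at a constant rate from its value i0 at time t0;
        # flooring at zero is an assumption made for illustration.
        return max(0.0, i0 - rate * (t - t0))
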
5. The alarm method of claim 4, wherein said encoding the intensities of the bar features using the second feature extraction layer to generate a multi-pulse feature sequence further comprises:
after the intensity of a bar feature is encoded, determining whether the intensity exceeds a preset threshold; and
when the intensity exceeds the preset threshold, generating a pulse signal and resetting the intensity of the bar feature, together with the intensities of the bar features within a set surrounding range, to a set value.
6. The alarm method of claim 5, wherein said encoding the intensities of the bar features using the second feature extraction layer to generate a multi-pulse feature sequence further comprises:
after the intensities are reset to the set value, holding the reset intensity of the bar feature at the set value for a preset time.
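
Claims 5 and 6 together describe a threshold-fire-reset rule with a surrounding (lateral) reset and a refractory hold. A minimal sketch, assuming NumPy-style 2-D arrays and entirely hypothetical parameter names:

    def fire_and_reset(intensity, refractory_until, x, y, now,
                       threshold, reset_value, radius, hold_time):
        # intensity and refractory_until are assumed 2-D NumPy arrays.
        if now < refractory_until[x, y]:       # held at the set value (claim 6)
            return False
        if intensity[x, y] <= threshold:       # below threshold: no pulse
            return False
        # Pulse: reset this feature and its surrounding range (claim 5) ...
        x0, y0 = max(0, x - radius), max(0, y - radius)
        intensity[x0:x + radius + 1, y0:y + radius + 1] = reset_value
        # ... and hold the reset intensity at the set value for a preset time.
        refractory_until[x, y] = now + hold_time
        return True                            # a pulse signal was generated
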
7. The alarm method according to any one of claims 1 to 6, wherein the classification result includes at least two categories: a normal condition and an abnormal condition.
8. The alarm method according to any one of claims 1 to 6, wherein the classification layer includes a neural network composed of a plurality of Tempotron neurons.
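
For context, a Tempotron neuron (the Gütig-Sompolinsky model, which the claim names but does not spell out) sums weighted postsynaptic kernels over input spike times and signals its class by whether the potential crosses a threshold. The sketch below uses the commonly published constants (tau_s = tau/4, v0 chosen so the kernel peaks at 1), not values taken from this patent:

    import math

    def tempotron_potential(t, spike_trains, weights,
                            tau=15.0, tau_s=3.75, v0=2.12):
        # Weighted sum of kernels K(t - t_i) over all afferents and spikes.
        def kernel(dt):
            if dt < 0:
                return 0.0
            return v0 * (math.exp(-dt / tau) - math.exp(-dt / tau_s))
        return sum(w * kernel(t - ti)
                   for w, spikes in zip(weights, spike_trains)
                   for ti in spikes)

    def tempotron_fires(spike_trains, weights, t_max, threshold=1.0, dt=0.1):
        # Binary decision: does the potential ever cross the threshold?
        return any(tempotron_potential(i * dt, spike_trains, weights) >= threshold
                   for i in range(int(t_max / dt)))
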
9. An alarm device using a multi-layer feedforward network model, wherein the multi-layer feedforward network model comprises an input layer, a feature extraction layer and a classification layer; the alarm device includes:
an acquisition module configured to acquire a multi-pulse event stream of a target scene using a dynamic vision sensor;
an input module configured to input the multi-pulse event stream to the multi-layer feed-forward network model using the input layer;
a feature extraction and encoding module configured to perform feature extraction and encoding on the multi-pulse event stream using the feature extraction layer to generate a multi-pulse feature sequence;
a classification module configured to classify the multi-pulse feature sequence using the classification layer, and to generate and output a classification result; and
an alarm module configured to determine whether the classification result corresponds to an abnormal condition and to generate an alarm signal based on the abnormal condition.
10. An electronic device, comprising:
one or more processors;
storage means for storing executable instructions which, when executed by the one or more processors, implement the alarm method of any one of claims 1 to 8.
11. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, implement the alarm method of any one of claims 1 to 8.
CN202110581280.XA 2021-05-26 2021-05-26 Alarm method and device using multilayer feedforward network model and electronic equipment Active CN113313919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110581280.XA CN113313919B (en) 2021-05-26 2021-05-26 Alarm method and device using multilayer feedforward network model and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110581280.XA CN113313919B (en) 2021-05-26 2021-05-26 Alarm method and device using multilayer feedforward network model and electronic equipment

Publications (2)

Publication Number Publication Date
CN113313919A true CN113313919A (en) 2021-08-27
CN113313919B CN113313919B (en) 2022-12-06

Family

ID=77375304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110581280.XA Active CN113313919B (en) 2021-05-26 2021-05-26 Alarm method and device using multilayer feedforward network model and electronic equipment

Country Status (1)

Country Link
CN (1) CN113313919B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509827A * 2017-02-27 2018-09-07 阿里巴巴集团控股有限公司 Method for recognizing abnormal content in a video stream, and video stream processing system and method
CN109102000A * 2018-09-05 2018-12-28 杭州电子科技大学 Image recognition method based on hierarchical feature extraction and multilayer spiking neural networks
CN109948725A * 2019-03-28 2019-06-28 清华大学 Neural network object detection device based on address-event representation
US20200342252A1 (en) * 2019-04-23 2020-10-29 International Business Machines Corporation Advanced Image Recognition for Threat Disposition Scoring
CN111709967A (en) * 2019-10-28 2020-09-25 北京大学 Target detection method, target tracking device and readable storage medium
CN112597980A (en) * 2021-03-04 2021-04-02 之江实验室 Brain-like gesture sequence recognition method for dynamic vision sensor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114521868A (en) * 2022-02-24 2022-05-24 清华大学 Pulse signal classification device and wearable equipment
CN114521868B (en) * 2022-02-24 2024-04-09 清华大学 Pulse signal classification device and wearable equipment

Also Published As

Publication number Publication date
CN113313919B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
JP6867153B2 (en) Abnormality monitoring system
US8558889B2 (en) Method and system for security system tampering detection
KR20180107930A (en) Method and system for artificial intelligence based video surveillance using deep learning
US20210124914A1 (en) Training method of network, monitoring method, system, storage medium and computer device
JPWO2016002408A1 (en) Image processing apparatus, monitoring system, image processing method, and program
US11145174B2 (en) Methods and system for monitoring an environment
CN113313919B (en) Alarm method and device using multilayer feedforward network model and electronic equipment
Ezzahout et al. Conception and development of a video surveillance system for detecting, tracking and profile analysis of a person
WO2013037344A1 (en) Method for automatic real-time monitoring of marine mammals
CN111523362A (en) Data analysis method and device based on electronic purse net and electronic equipment
US11935303B2 (en) System and method for mitigating crowd panic detection
CN113392779A (en) Crowd monitoring method, device, equipment and medium based on generation of confrontation network
US20100109867A1 (en) Improvements relating to event detection
EP4099224A1 (en) A concept for detecting an anomaly in input data
KR20230103890A (en) Video surveillance system based on multi-modal video captioning and method of the same
KR102484198B1 (en) Method, apparatus and system for detecting abnormal event
Zennayi et al. Unauthorized access detection system to the equipments in a room based on the persons identification by face recognition
CN110956057A (en) Crowd situation analysis method and device and electronic equipment
Long et al. An Image-based Fall Detection System using You Only Look Once (YOLO) Algorithm to Monitor Elders’ Fall Events
US20240019602A1 (en) System and method of multi-lane elevated body temperature preventative scanning solution using gige vision and tracking
RU2808557C2 (en) Intelligent intruder identification system
Chevallier et al. Covert attention with a spiking neural network
CN111199179A (en) Target object tracking method, terminal device and medium
US11423760B2 (en) Device for detecting drowning individuals or individuals in a situation presenting a risk of drowning
Akash et al. Motion Detection and Alert System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant