CN118072255A - Intelligent park multisource data dynamic monitoring and real-time analysis system and method - Google Patents


Info

Publication number
CN118072255A
Authority
CN
China
Prior art keywords: data, monitoring, real-time, analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410498443.1A
Other languages
Chinese (zh)
Inventor
徐江峰
吴志华
杨国水
芮罗峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Pengpai Digital Intelligence Technology Co ltd
Original Assignee
Hangzhou Pengpai Digital Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Pengpai Digital Intelligence Technology Co ltd filed Critical Hangzhou Pengpai Digital Intelligence Technology Co ltd
Priority to CN202410498443.1A priority Critical patent/CN118072255A/en
Publication of CN118072255A publication Critical patent/CN118072255A/en
Pending legal-status Critical Current

Landscapes

  • Alarm Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of intelligent-park safety monitoring and data analysis, in particular to a system and method for dynamic monitoring and real-time analysis of multi-source intelligent-park data that improve park safety and response efficiency through integrated technical means. Multi-source data are collected by monitoring equipment at key positions; edge computing performs preliminary processing on the data and extracts key features; a deep learning algorithm performs behavior analysis on the feature data to generate prediction data; the prediction data are encrypted and recorded on a blockchain to ensure data security; real-time monitoring and automatic labeling are carried out on the basis of the prediction data, and cross-region interactive verification is triggered when a potential threat is detected. Implementation of the system remarkably improves the monitoring-data processing capacity and safety-response speed of the intelligent park, and automated, intelligent data analysis greatly raises the efficiency and safety level of park management.

Description

Intelligent park multisource data dynamic monitoring and real-time analysis system and method
Technical Field
The invention relates to the technical field of intelligent-park safety monitoring and data analysis, in particular to a system and method for dynamic monitoring and real-time analysis of multi-source intelligent-park data.
Background
In the operation and management of intelligent parks, dynamic monitoring and real-time analysis of multi-source data are key to improving safety, efficiency, and decision quality. Various monitoring devices are widely deployed in intelligent parks, collecting visual, acoustic, environmental, and location data to monitor traffic, environmental conditions, and other aspects. However, current intelligent-park monitoring systems face a number of challenges. First, existing systems often fall short in data integration and real-time response capability, leading to slow responses to emergency events. Furthermore, these systems typically rely on traditional data processing methods that struggle to efficiently process and analyze large, complex data sets, and thus fail to fully exploit the collected data for in-depth analysis.
The main disadvantage of the prior art (Chinese patent, publication No. CN 116453066B) is the lack of efficient cross-regional data collaboration and intelligent analysis capabilities. These techniques often lack accuracy in feature extraction and behavior analysis and cannot synthesize multi-source information into effective predictions, which hampers the accurate identification of and timely response to security threats. Furthermore, many existing schemes do not adequately account for data security and privacy protection, especially where data storage and transmission are vulnerable to attack; this is particularly important in environments such as smart parks, which depend heavily on data-driven decisions.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides a system and method for dynamic monitoring and real-time analysis of multi-source intelligent-park data. The system uses advanced machine learning techniques, including convolutional neural networks (CNN) and long short-term memory networks (LSTM), to analyze and process multi-source data from the intelligent park in real time, extracting meaningful features from visual, sound, environmental, and location data in order to identify and predict potential abnormal behavior or security threats. By combining these data with a behavior analysis algorithm, the system generates high-accuracy prediction data, encrypts them, and records them on a blockchain, ensuring the security and tamper-resistance of the information. In addition, the system performs real-time monitoring and automatic labeling according to the prediction data and rapidly carries out a cross-regional response when a potential threat is detected, optimizing the safety management of the whole park.
An intelligent campus multisource data dynamic monitoring and real-time analysis system, comprising:
The acquisition unit is used for carrying out multi-source data acquisition through monitoring equipment deployed at a key position of a park and collecting visual data, sound data, environment data and position data;
and the edge computing node receives and performs preliminary processing on the multi-source data, wherein the preliminary processing comprises the following steps: motion detection, voice recognition and environmental anomaly detection, extracting features by using a deep learning model, generating feature data, wherein the extracting features comprise: face, vehicle details, special sound patterns and environmental indicators;
The analysis and encryption unit is used for analyzing the extracted characteristic data by using a behavior analysis algorithm to generate prediction data for identifying potential abnormal behaviors or security threats, and recording the prediction data on a blockchain after encryption processing, wherein the behavior analysis algorithm adopts a combination of a convolutional neural network and a long-term and short-term memory network for analysis;
The threat identification and response unit is used for automatically labeling the monitoring video based on the prediction data and carrying out real-time monitoring by combining the audio data and the environment data; triggering cross-regional interactive verification when the potential threat is detected, and performing priority verification by the adjacent regional monitoring equipment according to the predicted data so as to respond to the potential threat;
And the feedback and optimization unit integrates the automatically marked data, the cross-region verification result and the user feedback, and continuously optimizes the deep learning model and the behavior analysis algorithm through the integrated data.
Preferably, the edge computing node analyzes the difference between video frames by using a motion detection algorithm, identifies the outline and path of the moving object, identifies the position and motion track of the moving object in the video, outputs motion detection data, and completes motion detection;
Analyzing the audio waveform using a voice recognition technique to identify human voice, vehicle noise or other significant sound events, outputting voice recognition data including the identified sound event type and associated audio time stamp, completing voice recognition;
Evaluating whether the environmental data exceed the normal operating range by applying threshold detection or a pattern recognition algorithm, recognizing potential environmental risks, and outputting environmental anomaly detection data that identify environmental parameters exceeding a preset safety range, completing environmental anomaly detection.
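The threshold-based environmental anomaly check above can be sketched as follows; the parameter names and safety ranges are illustrative assumptions, not values from the patent:

```python
# Hypothetical safe operating ranges for three environmental parameters
# (illustrative values only).
SAFE_RANGES = {
    "temperature_c": (0.0, 45.0),
    "humidity_pct": (10.0, 90.0),
    "smoke_ppm": (0.0, 50.0),
}

def detect_env_anomalies(readings):
    """Return the parameters whose readings fall outside their safe range."""
    anomalies = {}
    for name, value in readings.items():
        low, high = SAFE_RANGES[name]
        if not (low <= value <= high):
            anomalies[name] = value
    return anomalies
```

A reading of 120 ppm smoke, for instance, would be flagged while in-range temperature readings would not.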
Preferably, the extracting features using the deep learning model, generating feature data includes:
Extracting face features, analyzing face images in video data by using a convolutional neural network, and extracting face features of an individual, wherein the face features comprise: facial structure and expression recognition for authentication or emotion analysis;
Extracting vehicle detail characteristics, analyzing vehicle images in video data by using a convolutional neural network, and extracting model, color and license plate information of a vehicle for identifying and tracking the vehicle;
extracting special sound mode characteristics, and identifying key sound events from environment sounds by utilizing a long-short-term memory network, wherein the key sound events comprise: glass breaking sound and alarm sound;
Extracting environmental index features, and extracting abnormal indexes from environmental sensor data by using a long-short-term memory network and a one-dimensional convolutional neural network, wherein the abnormal indexes comprise: smoke concentration, rate of temperature change, for identifying potential environmental risks or equipment failure.
Preferably, the analysis and encryption unit performs visual data processing by using a convolutional neural network, and performs time sequence data processing by using a long-term and short-term memory network, so as to realize behavior analysis on the characteristic data; the convolutional neural network and the long-term and short-term memory network are trained through historical data, and differences between normal behaviors and abnormal behaviors are learned and identified; after training is completed, the analysis and encryption unit performs reasoning on the real-time data, identifies potential abnormal behaviors or security threats, and generates prediction data, wherein the prediction data comprises: the type, location, and time of potential abnormal behavior or security threats.
Preferably, the automatically labeling the monitoring video based on the prediction data and performing real-time monitoring in combination with the audio and the environmental data includes:
decrypting the encrypted predicted data in the block chain for real-time monitoring and automatic labeling;
The predicted data is corresponding to a specific scene in the video through the time stamp and the position information, the related event is marked in the video stream automatically based on the predicted abnormal behavior, and the marking comprises the following steps: highlighting and adding labels;
The audio data and the environment data are synchronized with the video data in time, abnormal sounds in the audio and abnormal readings of the environment sensor are analyzed corresponding to specific monitoring scenes, and the potential security threats are comprehensively evaluated and confirmed by combining the abnormal sounds in the audio and the abnormal readings with visual information in the video data.
Preferably, the expression for generating the predicted data for identifying the potential abnormal behavior or security threat is:

P = softmax(W · LSTM(CNN(X)) + b)

wherein P represents the probability distribution of the predicted data over the different types of potential abnormal behavior or security threat; X represents an input video frame; CNN(·) represents the convolutional neural network used to process visual data and extract features; LSTM(·) represents the long short-term memory network used to analyze the visual features and capture time dependencies; and W and b are training parameters.
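The variable descriptions above imply a standard classification head over CNN+LSTM features. A minimal numerical sketch, assuming the common form P = softmax(W · LSTM(CNN(X)) + b) and replacing the network outputs with an illustrative feature vector h:

```python
import numpy as np

def softmax(z):
    """Convert raw scores into a probability distribution."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# h stands in for LSTM(CNN(X)): the temporal feature vector the networks
# would produce for an input frame sequence (illustrative values).
h = np.array([0.2, -1.3, 0.7])

# W and b are the trainable output-layer parameters from the expression.
W = np.array([[0.5, 0.1, -0.2],
              [-0.3, 0.8, 0.4]])   # 2 hypothetical threat classes x 3 features
b = np.array([0.05, -0.05])

P = softmax(W @ h + b)             # probability distribution over threat types
```

The result is a valid probability vector (non-negative, summing to 1), one entry per threat category.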
Preferably, the interactive authentication process includes:
Automatically analyzing and generating alarm information when a potential threat is detected in one area, and transmitting the alarm information to monitoring equipment in an adjacent area based on a network communication protocol, wherein the alarm information comprises: the type, location, timestamp, and associated video or audio data snapshot of the threat;
the monitoring equipment of the adjacent area is dynamically adjusted according to the alarm information, and the monitoring equipment is used for capturing the activity of the designated area, and the dynamic adjustment comprises: the focal length and the angle of the camera are adjusted, so that the sensitivity of the audio equipment is enhanced;
after resource adjustment, the observation of the designated area is enhanced, the data flow from the designated area is processed preferentially, the behavior or event described in the alarm information is analyzed, whether the confirmed threat behavior exists or not is identified, and the audio data and the environment data are used for parallel analysis so as to verify whether other evidences supporting threat confirmation exist or not;
After the priority verification is completed, a threat report is generated, wherein the threat report comprises: the validation status, the specific nature of the threat, and the immediate response action taken.
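A hypothetical sketch of the alarm message assembled for cross-region verification; the field names are illustrative assumptions, since the patent only lists the categories of information carried (threat type, location, timestamp, video/audio snapshot):

```python
import json
import time

def build_alarm(threat_type, location, snapshot_ref, ts=None):
    """Serialize an alarm for transmission to neighbouring-zone devices."""
    return json.dumps({
        "threat_type": threat_type,
        "location": location,
        "timestamp": ts if ts is not None else time.time(),
        "snapshot": snapshot_ref,   # reference to an associated video/audio clip
    })

alarm = build_alarm("perimeter_intrusion", "gate_3", "clip_0042.mp4",
                    ts=1700000000.0)
```

The receiving zone's devices would parse this message and use the location and snapshot reference to prioritize verification of the indicated area.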
Preferably, the feedback and optimization unit combines the automatically marked data, the cross-region verification result and the user feedback in a unified format, aligns the time stamps, and combines the data, the cross-region verification result and the user feedback in a comprehensive data set for analysis and model training;
Wherein the automatically marked data include abnormal-event information marked in the monitoring video, comprising event type, time, and location; the cross-region verification result includes the confirmation state and specific nature of the threat identified during verification; and the user feedback includes error reports, performance evaluations, or suggestions.
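A minimal sketch of combining the three feedback sources into one timestamp-aligned dataset; the record layout is an illustrative assumption:

```python
def merge_feedback(*sources):
    """Merge record lists from several sources and sort them by timestamp."""
    combined = [rec for source in sources for rec in source]
    return sorted(combined, key=lambda rec: rec["timestamp"])

# Illustrative records from the three sources named in the text.
labels = [{"timestamp": 12.0, "source": "auto_label", "event": "loitering"}]
verifications = [{"timestamp": 13.5, "source": "cross_region",
                  "status": "confirmed"}]
feedback = [{"timestamp": 11.0, "source": "user", "note": "false alarm"}]

dataset = merge_feedback(labels, verifications, feedback)
```

The merged, time-ordered dataset is what the feedback and optimization unit would feed into model retraining.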
A method for dynamically monitoring and analyzing multi-source data in an intelligent park in real time comprises the following steps:
the method comprises the steps of carrying out multi-source data acquisition through monitoring equipment deployed at a key position of a park, and collecting visual data, sound data, environment data and position data;
Performing preliminary processing on the multi-source data, wherein the preliminary processing comprises the following steps: motion detection, voice recognition and environmental anomaly detection, extracting features by using a deep learning model, generating feature data, wherein the extracting features comprise: face, vehicle details, special sound patterns and environmental indicators;
analyzing by using a behavior analysis algorithm based on the extracted characteristic data to generate predicted data for identifying potential abnormal behaviors or security threats, and recording the predicted data on a blockchain after encryption processing, wherein the behavior analysis algorithm adopts a combination of a convolutional neural network and a long-term and short-term memory network for analysis;
automatically labeling a monitoring video based on the predicted data, and carrying out real-time monitoring by combining the audio data and the environment data; triggering cross-regional interactive verification when the potential threat is detected, and performing priority verification by the adjacent regional monitoring equipment according to the predicted data so as to respond to the potential threat;
And integrating the automatically marked data, the cross-region verification result and the user feedback, and continuously optimizing the deep learning model and the behavior analysis algorithm through the integrated data.
Compared with the prior art, the invention has the advantages that:
The invention adopts advanced Convolutional Neural Network (CNN) and long-short-term memory network (LSTM) technical means to realize the high-efficiency analysis of visual and time sequence data, and greatly improves the accuracy of behavior analysis and the accuracy of predicted data;
through the analysis method driven by deep learning, the method can identify and respond to potential security threats in real time, and effectively improve the security management level of an intelligent park;
The invention also introduces encryption and blockchain technology, ensures the safety and the integrity of the data in the storage and transmission processes, and solves the defects of the prior art in the aspect of data protection.
Drawings
FIG. 1 is a block diagram of a system of the present invention;
FIG. 2 is a schematic diagram of data processing execution in accordance with the present invention;
FIG. 3 is a schematic flow chart of the method of the present invention.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where a convention analogous to "at least one of A, B and C, etc." is used, such a convention should generally be interpreted in the sense commonly understood by one skilled in the art (e.g., "a system having at least one of A, B and C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). Where a formulation similar to "at least one of A, B or C, etc." is used, it should likewise be interpreted according to the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B or C" would include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart. The techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system.
As shown in fig. 1-2, a system for dynamically monitoring and analyzing multi-source data in an intelligent campus in real time, comprising:
The acquisition unit is used for carrying out multi-source data acquisition through monitoring equipment deployed at a key position of a park and collecting visual data, sound data, environment data and position data;
In the invention, high-definition cameras are arranged to capture video and images, monitoring pedestrian flow, vehicle movement, and other vision-related events. For example, installing cameras at entrances and exits helps track guests and vehicles entering and leaving, enhancing security management.
A microphone array is mounted to collect acoustic information in the environment. These devices may recognize specific sound patterns in emergency situations, such as shouting, glass breaking or other abnormal sounds, for real-time security alarms.
The environmental sensor is used for monitoring environmental variables such as temperature, humidity, smog and the like. This is critical for early detection of fires, chemical leaks or other environmental security threats. For example, a smoke sensor can trigger an alarm immediately at the beginning of a fire, activating the fire suppression system.
The location of a particular individual or asset is tracked using RFID or GPS technology. Within the campus, this may help a management team monitor the location of important devices in real time or to quickly locate personnel in case of emergency.
Before the data is transmitted to the edge computing node or the central processing unit, the acquisition unit performs necessary preprocessing, such as signal amplification, filtering and preliminary data formatting, so that the transmitted data is ensured to have high quality, and the subsequent analysis is convenient.
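The preprocessing step described above (amplification, filtering, preliminary formatting) can be sketched with a simple moving-average filter; this is one illustrative choice of filter, not the specific one the patent prescribes:

```python
import numpy as np

def smooth(signal, window=3):
    """Denoise a 1-D sensor signal with a centred moving average."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

# One spiky temperature trace (illustrative values): the transient 27.0
# reading is damped before the data leave the acquisition unit.
raw = np.array([20.0, 20.2, 27.0, 20.1, 19.9])
clean = smooth(raw)
```

With `mode="valid"` the output is shorter than the input by `window - 1` samples; a real deployment would also choose the window to match the sensor's sampling rate.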
According to the invention, the system can quickly respond to various conditions through real-time data acquisition and transmission, and the optimization from the processing of emergency conditions to daily operation is realized; the multi-source data provides a comprehensive view for decision making, and helps management staff make more intelligent decisions based on comprehensive information; the real-time monitoring data helps the campus management optimize resource allocation, such as human deployment and security adjustments, to cope with real-time demand changes.
Through the comprehensive and deep data acquisition, the intelligent park can realize a finer and dynamic management mode, and safety, efficiency and user experience are improved.
And the edge computing node receives and performs preliminary processing on the multi-source data, wherein the preliminary processing comprises the following steps: motion detection, voice recognition and environmental anomaly detection, extracting features by using a deep learning model, generating feature data, wherein the extracting features comprise: face, vehicle details, special sound patterns and environmental indicators;
The edge computing node is positioned near the data acquisition point, can rapidly process the received multi-source data, and reduces the need of data transmission to the central processing center, thereby reducing delay, saving bandwidth and improving data processing efficiency.
By analyzing the differences between successive video frames, the edge computing node can identify moving objects in the image. This is typically accomplished by background subtraction, optical flow, or frame-difference methods; for example, by thresholding the per-pixel difference to identify sets of moving pixels, the movement of a person or vehicle can be detected. The edge node processes audio captured from the microphones using sound recognition algorithms that can identify human voices, vehicle noise, or other critical sound events such as breaking or warning sounds; such recognition is typically based on feature extraction (e.g., MFCC) and pattern matching. Sensor data (e.g., temperature, humidity, smoke) are analyzed in real time to check whether environmental parameters fall outside a preset safety range; using statistical analysis or threshold determination, the node can quickly identify potential environmental risks or faults.
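A minimal sketch of the frame-difference motion check mentioned above; the threshold and frame contents are illustrative:

```python
import numpy as np

def moving_pixels(prev_frame, curr_frame, threshold=25):
    """Return a boolean mask of pixels whose intensity changed significantly
    between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# A bright 2x2 object enters an otherwise static 4x4 scene.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200
mask = moving_pixels(prev, curr)
```

Connected regions of flagged pixels would then be grouped to estimate the object's outline and track its path across frames.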
Face images in video data are processed using Convolutional Neural Networks (CNNs), facial features are extracted for identity authentication or emotion analysis. Also with CNN, the model, color and license plate information of the vehicle is extracted from the video for vehicle tracking and identification. The acoustic data is analyzed by a long short term memory network (LSTM) to identify key events such as glass breakage sounds or emergency alerts. Sensor data is processed using a one-dimensional convolutional neural network (1D-CNN) and LSTM to identify abnormal environmental changes such as rapid increases in smoke concentration or abnormal temperature fluctuations.
In the invention, the data is processed at the local node, and the data does not need to be transmitted to the central server for a long distance, so that the response time is obviously reduced; by carrying out preliminary processing and feature extraction on the data locally, only necessary processed data or alarms are sent to the center, so that the network load is effectively reduced; the rapid identification and response of potential security threats, such as unauthorized intrusion, emergency or environmental risks, enhances the overall security of the campus.
In summary, the intelligent park can respond to events in real time and manage them efficiently, greatly improving management quality and park safety.
Preferably, the edge computing node analyzes the difference between video frames by using a motion detection algorithm, identifies the outline and path of the moving object, identifies the position and motion track of the moving object in the video, outputs motion detection data, and completes motion detection;
Analyzing the audio waveform using a voice recognition technique to identify human voice, vehicle noise or other significant sound events, outputting voice recognition data including the identified sound event type and associated audio time stamp, completing voice recognition;
Evaluating whether the environmental data exceed the normal operating range by applying threshold detection or a pattern recognition algorithm, recognizing potential environmental risks, and outputting environmental anomaly detection data that identify environmental parameters exceeding a preset safety range, completing environmental anomaly detection.
Preferably, the extracting features using the deep learning model, generating feature data includes:
Extracting face features, analyzing face images in video data by using a convolutional neural network, and extracting face features of an individual, wherein the face features comprise: facial structure and expression recognition for authentication or emotion analysis;
Extracting vehicle detail characteristics, analyzing vehicle images in video data by using a convolutional neural network, and extracting model, color and license plate information of a vehicle for identifying and tracking the vehicle;
extracting special sound mode characteristics, and identifying key sound events from environment sounds by utilizing a long-short-term memory network, wherein the key sound events comprise: glass breaking sound and alarm sound;
Extracting environmental index features, and extracting abnormal indexes from environmental sensor data by using a long-short-term memory network and a one-dimensional convolutional neural network, wherein the abnormal indexes comprise: smoke concentration, rate of temperature change, for identifying potential environmental risks or equipment failure.
The dynamic monitoring and real-time analysis system of the intelligent park performs detailed feature extraction on the various monitoring data by deploying advanced deep learning techniques, in particular convolutional neural networks (CNN) and long short-term memory networks (LSTM).
Face images in the video data are analyzed with a convolutional neural network. Through its multi-layer structure, the CNN extracts low-level to high-level features of a face image: shallow layers recognize edges and colors, while deeper layers recognize facial structure and expression features. For example, in an access control system the CNN can perform rapid identity verification, or in an emotion analysis system it can evaluate employees' emotional states so that the working environment can be adjusted or timely interventions made.
CNN is also used to extract vehicle details from the surveillance video. These networks are trained to identify different models of vehicles, colors, and license plate information, which is critical to vehicle identification and tracking. For example: in a parking management system, this technique helps identify and track the entrance and exit of vehicles, optimizes parking resource allocation, and enhances security monitoring.
Audio data in the environment are processed using a long short-term memory network. LSTM is particularly suited to time-series data and can identify and memorize temporal patterns in sound, such as intermittent glass-breaking sounds or sustained alarm sounds. For example, in a security system this can be used for early warning, such as recognizing the sound of a window breaking during an intrusion.
And extracting key indexes from the environmental sensor data by combining LSTM and a one-dimensional convolutional neural network (1D-CNN). This combination allows the system to identify short-term and long-term environmental trends such as rapid increases in smoke concentration or abnormal changes in temperature. For example: in environmental monitoring, this can be used to detect fires or chemical leaks early, and to initiate an emergency response quickly by analyzing the rate of rise of smoke concentration or temperature.
In the invention, the accurate feature extraction enables the system to more accurately identify and respond to the event in the monitoring area, reduces false alarm and increases the identification rate of real threat; through detailed feature analysis, the management team can gain deeper insight, thereby formulating more effective security measures and operation strategies; in particular, in terms of security and environmental monitoring, the system can identify potential risks in advance by analyzing the obtained characteristic data, and deploy preventive measures in advance.
By applying the technology, the invention not only improves the safety performance of the park, but also greatly improves the operation efficiency and crisis coping capacity, and ensures the stability and safety of the park environment.
Long short-term memory (LSTM) networks are particularly suitable for processing time-series information in audio data, as they can maintain the time dependencies of audio events, such as the duration and variation of a sound. Through its gating mechanisms (input gate, forget gate, and output gate), an LSTM effectively learns when to retain old information and when to admit new information, which is critical for distinguishing transient noise from sustained emergency sounds.
In sound processing, the audio signal is first transformed from the time domain to the frequency domain by a Fourier transform, a step that helps the LSTM better capture frequency characteristics. The LSTM network is trained using a large number of labeled audio samples, including common environmental sounds and various emergency sounds (e.g., glass-breaking sounds, alarm sounds). Through supervised learning, the model learns to recognize and distinguish the characteristics of these sounds.
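The time-domain-to-frequency-domain step described above can be sketched as follows. This is a minimal illustration using NumPy's FFT; the frame length, hop size, and the 440 Hz test tone are illustrative choices, not values specified by the invention.

```python
import numpy as np

def audio_to_spectral_frames(signal, frame_len=1024, hop=512):
    """Split a mono audio signal into overlapping frames and convert each
    frame to a magnitude spectrum via the FFT, yielding one frequency-domain
    feature vector per frame (a plausible LSTM input)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Magnitude of the real FFT: frequency content of this frame.
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.stack(frames)  # shape: (num_frames, frame_len // 2 + 1)

# Example: a 440 Hz tone sampled at 8 kHz; with a 1024-point FFT its
# spectral peak should sit near bin 440 / 8000 * 1024 ~= 56.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = audio_to_spectral_frames(tone)
peak_bin = int(np.argmax(spec[0]))
```

A trained LSTM would then consume the sequence of spectral vectors frame by frame, rather than the raw waveform.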
Long short-term memory (LSTM) network optimization strategies include:
(1) Dataset augmentation: include more real-world sound events, especially sound samples recorded under noisy conditions, to improve the robustness and accuracy of the model.
(2) Model fine-tuning: model parameters are continuously fine-tuned according to feedback data collected in practical application, optimizing recognition performance.
The one-dimensional convolutional neural network (1D-CNN) is suited to processing time series of environmental data, such as temperature and smoke sensor readings. The 1D-CNN processes the sequence data through convolution layers, extracting key temporal and spatial features for subsequent pattern recognition. The 1D-CNN is configured to identify abnormal patterns in environmental parameters, such as a rapid rise in smoke concentration or abnormal fluctuations in temperature; this requires careful filter design to capture rapid changes within a small window. Environmental data is monitored in real time, the trained 1D-CNN model continuously analyzes the data stream, and the system responds rapidly to environmental changes. Alarm thresholds and response measures are adjusted dynamically according to the real-time analysis results, such as starting an automatic sprinkler system when the smoke concentration rises suddenly.
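The rapid-rise detection just described can be illustrated with a single hand-written derivative-style filter slid over the readings, the kind of kernel a 1D-CNN convolution layer might learn. This is a hedged sketch: the kernel weights and the threshold are illustrative, not trained or specified values.

```python
def detect_rapid_rise(readings, kernel=(-1.0, 0.0, 1.0), threshold=5.0):
    """Slide a derivative-style filter (akin to one learned 1D-CNN kernel)
    over a sensor series; return the window centres where the filter
    response exceeds the threshold, i.e. where the signal rises sharply."""
    k = len(kernel)
    alerts = []
    for i in range(len(readings) - k + 1):
        response = sum(w * readings[i + j] for j, w in enumerate(kernel))
        if response > threshold:
            alerts.append(i + k // 2)  # centre index of the window
    return alerts

# Smoke-concentration readings with a sudden rise starting around index 5.
smoke = [2, 2, 3, 2, 3, 10, 18, 25, 26, 26]
alerts = detect_rapid_rise(smoke)
```

A real 1D-CNN stacks many such learned filters and follows them with nonlinearities and pooling, but the sliding dot product is the same core operation.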
According to the intelligent park monitoring system, through the deep learning model, conventional security threats can be effectively identified, environmental risks can be early warned, and the safety management capacity of the park and the accuracy of environmental monitoring are greatly improved.
The analysis and encryption unit is used for analyzing the extracted characteristic data with a behavior analysis algorithm to generate prediction data identifying potential abnormal behaviors or security threats, and for recording the prediction data on a blockchain after encryption, wherein the behavior analysis algorithm uses a combination of a convolutional neural network and a long short-term memory network;
CNN is an ideal image processing algorithm that can effectively extract and analyze visual features in video data from monitoring cameras, such as faces and vehicle details. CNNs extract low-level to high-level features in images layer by layer through multi-layer filters; these features are critical for subsequent behavior recognition. LSTM is good at processing time-series data and can capture and learn time dependencies in data, such as the output of sound or environmental sensors. This enables the LSTM to identify behavior patterns with a temporal progression, such as the movement trajectory of a person or vehicle within a monitored area.
The visual features extracted by the CNN are combined with the time-series data processed by the LSTM for comprehensive analysis, enabling a more complete understanding of events occurring in the monitored area and improving the accuracy and efficiency of behavior recognition. Real-time data is analyzed with the trained model to identify behaviors inconsistent with the normal behavior patterns labeled in the training data, indicating potential security threats or abnormal conditions.
The invention combines the analysis of visual and temporal information so that the prediction data is not limited to the current frame but captures the trend of behavior development, which is particularly important for an early warning system; the security of the generated prediction data during transmission and storage is ensured, preventing tampering; by exploiting the tamper-proof property of the blockchain, the encrypted data is recorded, ensuring data integrity and improving the transparency and traceability of the whole monitoring system.
In one embodiment, vehicle abnormal behavior detection: parking-lot video is analyzed using CNN and LSTM, with the CNN responsible for identifying vehicle details (such as vehicle type and color) while the LSTM tracks the vehicle's movement trajectory. If a vehicle is detected staying in a non-parking area or moving abnormally, the system generates prediction data indicating a potential safety problem. In industrial areas, where the temperature or smoke concentration monitored by environmental sensors rises abnormally, the LSTM can help predict the trend of these readings, giving early warning of potential fire risk.
The analysis and encryption unit not only enhances the monitoring capability of the intelligent park, but also improves the safety and response speed of the whole system through prospective data analysis and encryption technology.
Preferably, the analysis and encryption unit performs visual data processing with a convolutional neural network and time-series data processing with a long short-term memory network, thereby realizing behavior analysis of the characteristic data; the convolutional neural network and the long short-term memory network are trained on historical data, learning to recognize the differences between normal and abnormal behaviors; after training is complete, the analysis and encryption unit performs inference on real-time data, identifies potential abnormal behaviors or security threats, and generates prediction data, the prediction data comprising: the type, location, and time of the potential abnormal behavior or security threat.
Convolutional Neural Networks (CNNs) are particularly suitable for processing image data because they can effectively identify and extract local features in images. By processing layer by layer, CNNs can abstract from simple edge and texture features step by step to complex object level features such as faces or vehicles.
Long short-term memory (LSTM) networks are preferred over traditional recurrent neural networks (RNNs) because they avoid the vanishing-gradient problem common in long-sequence processing. This makes the LSTM particularly suitable for processing time series such as sound and other sensor data, capturing time dependencies such as the duration and pattern of change of a sound.
By combining the CNN extracted visual features with the LSTM processed time series features, the system is able to fully analyze events occurring within the monitored area. Such fusion analysis helps to accurately identify complex patterns of behavior and potentially abnormal activity. Once the models are trained on historical data, they can be inferred on real-time data. This means that the system is able to instantly recognize abnormal behavior such as unauthorized access or unusual activity trajectories.
By means of real-time reasoning, the system not only can identify abnormal behaviors, but also can accurately predict the types, the specific occurring positions and the time of the behaviors. These predictive data are critical to the response speed and efficiency of the safety monitoring system.
In one embodiment, a person is present at night in an area where no person should normally come in and go out, CNN can identify the person by analyzing the video image, and LSTM can identify whether the person's behavior pattern is abnormal by analyzing the person's movement track over a period of time. In an industrial scenario, if the sensor data suddenly shows a sharp rise in temperature, the LSTM can quickly recognize this pattern and trigger an alarm, while the CNN can check the visual data to determine if a flame or smoke is present.
Regarding the specific application and configuration of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, the optimization and structural design of these networks are intended to improve processing efficiency and accuracy for particular types of data.
Configuration of Convolutional Neural Network (CNN):
Type and configuration of layers:
Convolution layer: these layers scan the input image with filters (or kernels) to extract low-level features such as edges, colors, and textures. Typically, early convolutional layers identify simple features, while deeper convolutional layers identify more complex ones.
Activation function: reLU (RECTIFIED LINEAR Unit) is the most commonly used activation function because it also reduces the gradient vanishing problem while accelerating training. In some cases, other activation functions, such as Sigmoid or Tanh, are also used in order to increase the non-linear characteristics of the network.
Pooling layer (Pooling Layer): the pooling layer typically follows a convolution layer to reduce the spatial dimensions of the feature map and increase the invariance of the features. The most common pooling operations are max pooling and average pooling.
The stacking of multiple convolution and pooling layers effectively extracts hierarchical features from the image. The number of feature channels increases per layer as the spatial dimensions shrink, enabling more complex features to be captured.
Configuration of the long short-term memory (LSTM) network:
The number of units (i.e., the number of neurons) in the LSTM layer determines the memory capacity of the network; more units enhance the network's ability to process long-term dependencies but also increase the computational burden.
Multi-layer LSTMs increase the expressive power of the model, enabling it to learn more complex time-series patterns. However, increasing the number of layers also makes the model harder to train, so techniques such as residual connections and batch normalization are required to stabilize training.
An LSTM regulates the flow of information through gating mechanisms (input gate, forget gate, and output gate), which enable the network to retain old information or forget irrelevant information as needed. Such a mechanism is particularly suitable for data with important time dependencies, such as continuous surveillance video or audio streams.
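The gating mechanism described above can be made concrete with a single scalar LSTM cell step. This is a minimal sketch: the weights are toy values chosen for illustration, not trained parameters, and a real LSTM layer operates on vectors and matrices rather than scalars.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W):
    """One scalar LSTM step. W holds (input-weight, hidden-weight, bias)
    triples for the input gate i, forget gate f, output gate o, and
    candidate value g."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2])
    c = f * c_prev + i * g   # forget old memory, admit new information
    h = o * math.tanh(c)     # expose a gated view of the cell state
    return h, c

# Toy weights (hypothetical, not trained values).
W = {k: (0.5, 0.5, 0.0) for k in ('i', 'f', 'o', 'g')}
h, c = 0.0, 0.0
for x in [0.1, 0.9, 0.9]:   # a short "sound intensity" sequence
    h, c = lstm_cell_step(x, h, c, W)
```

The forget gate `f` scales the old cell state while the input gate `i` scales the new candidate, which is exactly the retain-versus-admit trade-off the text attributes to the gating mechanism.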
Training these networks typically requires a large amount of labeled data. For example, image data of faces and vehicles must be annotated in detail, covering each important feature of the face and the vehicle. Data augmentation, dropout layers, or regularization methods are used to avoid overfitting, ensuring that the model performs well on unseen data.
For visual and audio data, actual surveillance recordings and ambient sounds are typically collected from the campus. Face and vehicle datasets include publicly available datasets (e.g., LFW, CelebA, MS-COCO, Cityscapes) as well as proprietary datasets, ensuring coverage of diverse scenes and lighting conditions.
The data preprocessing comprises the following steps:
(1) Visual data: image cropping, scaling, normalization, and data enhancement (including rotation, flipping, color change, etc.) are performed to increase the generalization ability of the model.
(2) Audio data: noise reduction, echo cancellation, and feature extraction (e.g., MFCCs, Mel-frequency cepstral coefficients) are required.
(3) Time series data: normalization processing, such as scaling the data to a uniform range or distribution, is performed.
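The normalization step in (3) can be sketched as a z-score transform implemented in plain Python; the example temperature series is illustrative only.

```python
def zscore_normalize(series):
    """Scale a sensor time series to zero mean and unit variance so that
    heterogeneous readings (temperature, smoke, ...) share one distribution
    before being fed to the model."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    std = var ** 0.5 or 1.0  # guard against a constant series
    return [(v - mean) / std for v in series]

temps = [20.0, 21.0, 22.0, 40.0]  # one anomalous spike at the end
norm = zscore_normalize(temps)
```

After normalization the anomalous reading still stands out (largest value), but all inputs now live on a comparable scale, which stabilizes training.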
Techniques applied in the present invention to prevent overfitting include:
Dropout: a portion of the neurons in the network are randomly dropped during training, preventing the model from over-relying on the training data and improving its generalization ability.
Regularization technique:
(1) L2 regularization (weight decay): a term proportional to the square of the weights is added to the loss function, penalizing large weight values.
(2) Early stopping (Early Stopping): training is stopped when performance on the validation set no longer improves, avoiding overfitting on the training set.
(3) Batch normalization (Batch Normalization): by normalizing the layer inputs, internal covariate shift (Internal Covariate Shift) is reduced, which can also slightly mitigate overfitting.
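The L2 penalty in (1) amounts to adding a weighted sum of squared weights to the task loss. A minimal sketch (the regularization coefficient 0.01 is an illustrative choice):

```python
def l2_regularized_loss(base_loss, weights, lam=0.01):
    """Add an L2 (weight-decay) penalty to the task loss: large weights
    are penalized in proportion to their squared magnitude."""
    penalty = lam * sum(w * w for w in weights)
    return base_loss + penalty

# The same task loss is penalized more heavily when the weights are large.
loss_small = l2_regularized_loss(1.0, [0.1, -0.2, 0.3])
loss_large = l2_regularized_loss(1.0, [3.0, -4.0, 5.0])
```

Because the gradient of the penalty is proportional to each weight, optimizing this loss continually shrinks weights toward zero, hence the name "weight decay".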
According to the invention, the analysis and encryption unit running the behavior analysis algorithm is deployed on edge computing nodes close to the data source, significantly reducing data transmission time and overall latency. The edge devices have sufficient processing and memory capability to process the collected data immediately.
Model compression techniques such as pruning (Pruning), quantization (Quantization), and knowledge distillation (Knowledge Distillation) are used to reduce model complexity and increase operating speed, thereby optimizing inference time. Specialized hardware such as GPUs, TPUs, or FPGAs, designed for parallel processing of large-scale computation tasks, is used to accelerate the inference process of the deep learning model.
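Of the compression techniques listed, quantization is the easiest to show concretely. The sketch below performs uniform affine quantization of float weights to signed 8-bit integers; it is an illustration of the general technique, not the invention's specific compression pipeline.

```python
def quantize_8bit(weights):
    """Uniform affine quantization of float weights to the signed 8-bit
    range [-128, 127], plus the dequantized reconstruction, to show the
    bounded error the technique introduces."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0
    q = [round((w - lo) / scale) - 128 for w in weights]   # int8 codes
    deq = [(v + 128) * scale + lo for v in q]              # reconstruction
    return q, deq, scale

w = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, deq, scale = quantize_8bit(w)
```

Each weight now occupies one byte instead of four (or eight), and the reconstruction error is bounded by half the quantization step, which is why accuracy typically degrades only slightly.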
The output of the inference model is directly integrated into the real-time monitoring system, ensuring that once the model identifies a potential threat, a warning is immediately issued to security personnel through a graphical user interface or alarm system.
Upon detection of a high risk event, the system automatically triggers an audio or visual alarm informing security personnel or triggering an automated security response measure, such as an automatic lock of the door access system. All identified events and associated video clips or audio recordings are automatically marked and saved for later review or further analysis.
Monitoring resources such as camera angles, focus adjustments, and sensitivity settings are dynamically adjusted based on reasoning results to better capture and track potential threats.
Time synchronization of all monitoring points is ensured, so that an inference result can be accurately associated with monitoring video and other sensor data, and rapid and accurate positioning and response are facilitated.
The prediction data specifically classifies different potential abnormal behaviors, such as unauthorized access, sudden rapid movement, abnormal aggregation, or other suspicious activity. For potential threats, the system not only identifies the type of behavior but also analyzes specific action patterns, such as running toward an exit or climbing a fence. The prediction data contains the specific location where the behavior occurred, such as GPS coordinates or a position described with reference to specific landmarks in the campus. Each behavior prediction is accompanied by an accurate timestamp indicating the exact time at which the abnormal behavior was detected. For situations involving multiple objects or multiple events, the prediction data may also contain correlation information, identifying potential links and effects between events.
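One plausible shape for a single prediction record, covering the fields enumerated above (type, action pattern, location, timestamp, correlations), is sketched below. The field names and example values are illustrative assumptions, not a schema mandated by the invention.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PredictionRecord:
    """One behavior prediction; field names are illustrative."""
    behavior_type: str              # e.g. "unauthorized_access"
    action_pattern: str             # e.g. "climbing fence"
    location: tuple                 # GPS (lat, lon) or a landmark reference
    timestamp: str                  # ISO 8601 detection time
    related_events: list = field(default_factory=list)  # correlated event IDs

rec = PredictionRecord(
    behavior_type="unauthorized_access",
    action_pattern="climbing fence",
    location=(30.2741, 120.1551),
    timestamp=datetime(2024, 4, 1, 2, 30, tzinfo=timezone.utc).isoformat(),
)
```

Serializing such records with `asdict` gives the dictionary form that would be encrypted before being written to the blockchain.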
And abnormal events are automatically marked in the monitoring video by using the prediction data, and labels, highlighting or other visual prompts are added to the video stream, so that monitoring personnel can quickly recognize and respond. The system automatically archives the detected events, generating event logs for legal, security review, or future analysis.
Upon detection of a high risk or emergency, the predictive data triggers an automatic alarm, alerting security personnel or related personnel on the campus by sound, light or other form; depending on the urgency and importance of the predicted data, the system may optimize resource allocation, such as reassigning security personnel, or adjusting the focus and coverage of the monitoring device.
The forecast data provides real-time information for a park manager and supports the establishment of quick and effective safety measures and emergency response strategies; long-term collection and analysis of predictive data can help administrators identify security vulnerabilities, optimize the overall security layout and precautions for the campus.
The generated prediction data can be encrypted before storage and transmission, so that the safety and privacy of the data in the interaction process are ensured. The encrypted data is recorded on the blockchain, which not only ensures that it is not tamperable, but also provides a verifiable source of data, increasing the transparency and reliability of the system.
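The tamper-evidence property that blockchain recording provides can be sketched with a simple hash chain: each record's hash covers the previous record's hash, so altering any earlier entry changes every later hash. This illustrates the integrity property only; a real deployment would use a blockchain platform and proper encryption (e.g., AES) for the payload, both of which are elided here.

```python
import hashlib
import json

def chain_record(prev_hash, payload):
    """Append one (already-encrypted) prediction payload to a hash chain
    and return the new block hash. Tampering with any earlier payload
    changes every subsequent hash."""
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

genesis = "0" * 64
h1 = chain_record(genesis, "ciphertext-1")
h2 = chain_record(h1, "ciphertext-2")
# Forging the first record breaks the chain: the head hash no longer matches.
h2_forged = chain_record(chain_record(genesis, "forged"), "ciphertext-2")
```

Verifiers only need to recompute the chain from the recorded payloads and compare the head hash to detect tampering.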
The behavior analysis based on the deep learning model is combined with the blockchain technology, so that the monitoring capability of an intelligent park is enhanced, and the reliability and the efficiency of the whole system are improved through advanced data processing and safety measures.
The threat identification and response unit is used for automatically labeling the monitoring video based on the prediction data and carrying out real-time monitoring by combining the audio data and the environment data; triggering cross-regional interactive verification when the potential threat is detected, and performing priority verification by the adjacent regional monitoring equipment according to the predicted data so as to respond to the potential threat;
The system of the present invention uses pre-trained models (such as convolutional neural networks and long short-term memory networks) to automatically identify and annotate potential security threats, combining visual data from surveillance video with audio data from the environment. These models can identify specific behavior patterns or sound events such as a fight, screaming, or a glass-breaking sound. Through timestamps and location information, the system synchronizes audio events with visual content, enhancing the accuracy and contextual relevance of recognition. For example, if a monitoring camera captures someone running while an alarm sound is captured at the same time, the system combines the two signals and automatically annotates this behavior in the video as a high-risk event.
Using edge computing techniques, the system is able to process and analyze data streams from cameras and sensors in real time. The real-time processing capability enables the system to respond and mark events in the monitoring video immediately, and delay is reduced. The system adjusts the monitoring strategy in real time according to the analysis result, such as changing the focus of the camera or sending an instant alarm.
When the system detects a potential threat, an alert is automatically generated and alert information is sent to monitoring devices in the adjacent area. The monitoring device receiving the alarm will adjust its monitoring priority based on the predictive data to prioritize attention to the potential threat occurrence area. For example, if a suspicious activity is detected by a camera in an area, cameras in neighboring areas automatically adjust the angle and focal length to monitor the path of the activity.
According to the invention, through automatic labeling and real-time monitoring, the monitoring efficiency is greatly improved, and safety personnel can rapidly locate and respond to potential safety threats; the cross-regional interactive verification enhances the overall security measure of the park, and increases the response speed and accuracy to potential threats through cooperation among regions; by means of predictive data and real-time analysis, the campus manager can make more intelligent security decisions based on the data, such as resource allocation and emergency response planning.
The threat identification and response system integrated with visual, audio and environmental data analysis not only improves the safety level of the intelligent park, but also optimizes the resource use and emergency response strategy in a data driving mode, thereby greatly improving the intelligent and automatic degree of park management.
Preferably, the automatically labeling the monitoring video based on the prediction data and performing real-time monitoring in combination with the audio and the environmental data includes:
decrypting the encrypted prediction data in the blockchain for real-time monitoring and automatic annotation;
mapping the prediction data to specific scenes in the video through timestamps and location information, and automatically annotating relevant events in the video stream based on the predicted abnormal behavior, the annotation comprising: highlighting and adding labels;
synchronizing the audio data and environmental data in time with the video data, analyzing abnormal sounds in the audio and abnormal environmental sensor readings in correspondence with specific monitoring scenes, and combining them with the visual information in the video data to comprehensively evaluate and confirm potential security threats.
The encrypted prediction data is stored in the blockchain to ensure its security and non-tamper resistance. At the beginning of the real-time monitoring, the system first decrypts the encrypted data for use in the monitoring and automatic labeling process. Blockchain techniques are used to ensure the security and integrity of data during transmission and storage, while decryption techniques are used to convert such data in real-time into operational information.
The system accurately corresponds the predicted data with the video data through the time stamp, so that the perfect synchronization of the event in the video monitoring and the event in the predicted data is ensured. The location information helps the system determine the specific scene location in the video. Labels, such as highlighting and adding text labels, are automatically added to the video stream based on the predicted abnormal behavior. These annotations visually indicate key events and behaviors in the surveillance video, facilitating rapid identification and response by security personnel.
Combining audio data (such as sudden alarm sounds or glass breaking sounds) with visual information in the video provides a more comprehensive analysis of security events; the data provided by the environmental sensors (e.g., smoke, temperature anomalies) is synchronized with the video and audio data so that the system can comprehensively assess potential security threats, such as fire or chemical leaks.
The automatic labeling and the real-time monitoring greatly improve the response speed to potential threats, and allow security personnel to intervene in the early stage of an event; by combining video, audio and environmental data, the system can analyze and confirm potential security threats more accurately, and reduce false alarm rate; the automatic data processing and event labeling reduce the burden of manual monitoring and improve the overall safety management efficiency.
In the invention, when the monitoring system detects a fast-moving figure in a certain area accompanied by a sudden glass-breaking sound, the system automatically marks the potential intrusion behavior in the video, and an environmental sensor (e.g., temperature change, smoke increase) verifies whether the window is damaged. This comprehensive use of multi-source data provides security personnel with a timely and accurate alarm so that immediate action can be taken.
The system of the invention integrates visual, audio and environmental data to a high degree, and utilizes advanced encryption and blockchain technology, thereby not only guaranteeing the safety of the data, but also greatly improving the efficiency and accuracy of threat detection. The multi-source data dynamic monitoring and real-time analysis system of the intelligent park is a typical representative of the development of a modern safety management system, and shows the front dynamic of the application of the technology in the safety field.
Preferably, the expression for generating the prediction data identifying potential abnormal behavior or security threats is:

$P = \mathrm{softmax}(W \cdot \mathrm{LSTM}(\mathrm{CNN}(X)) + b)$

where $P$ represents the probability distribution of the prediction data, describing different types of potential abnormal behavior or security threats; $X$ represents the input video frames; $\mathrm{CNN}(\cdot)$ denotes the convolutional neural network used to process visual data and extract features; $\mathrm{LSTM}(\cdot)$ denotes the long short-term memory network used to analyze the visual features and capture time dependencies; $W$ and $b$ are trained parameters.
The process of the present invention for generating prediction data, using a convolutional neural network (CNN) to process visual data and a long short-term memory (LSTM) network to process time-series data, may be formalized with the following steps and expressions:
convolutional Neural Networks (CNNs) process visual data:
For the video frames currently being processed, the CNN is used to extract the spatial features of each frame.
Given the input video frame sequence $X = (X_1, X_2, \ldots, X_T)$, then

$f_t = \mathrm{CNN}(X_t)$

where $f_t$ is the feature vector obtained by passing the video frame $X_t$ at time $t$ through the convolutional neural network.
Long short-term memory (LSTM) network processing of the time sequence:
LSTM is used to analyze the feature vector sequences extracted by CNN, capturing the temporal relationship therein.
Given the input $(f_1, f_2, \ldots, f_T)$ (the feature vector sequence from the CNN), then

$h_t = \mathrm{LSTM}(f_1, \ldots, f_t)$

where $h_t$ is a hidden state in which the time-series information is taken into account.
Behavior analysis prediction output:
Based on the features output by the LSTM, the final prediction is made with a fully connected layer (typically followed by a softmax activation function) to identify potential abnormal behavior or security threats.
Fully connected layer output:

$P = \mathrm{softmax}(W h_T + b)$

where $P$ is the predicted probability distribution describing different types of potential abnormal behavior or security threats.
In summary, the expression for obtaining the prediction data in the present invention is:

$P = \mathrm{softmax}(W \cdot \mathrm{LSTM}(\mathrm{CNN}(X)) + b)$
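As a numerical illustration of the final fully-connected-plus-softmax step, the sketch below uses a toy hidden state standing in for the LSTM output and toy weight values; none of the numbers are trained parameters.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

# Stand-in for h_T = LSTM(CNN(X)): a final hidden state (hypothetical values).
h_T = [0.2, -0.5, 1.3]
# Toy parameters W (classes x hidden) and b (per class).
W = [[0.4, 0.1, -0.2], [0.0, 0.3, 0.5], [-0.1, 0.2, 0.1]]
b = [0.0, 0.1, -0.1]
logits = [sum(wi * hi for wi, hi in zip(row, h_T)) + bi
          for row, bi in zip(W, b)]
P = softmax(logits)  # probability per threat class, e.g. (normal, intrusion, fire)
```

The output $P$ is a valid probability distribution over threat classes; the class with the largest logit receives the highest probability.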
Preferably, the interactive authentication process includes:
Automatically analyzing and generating alarm information when a potential threat is detected in one area, and transmitting the alarm information to monitoring equipment in an adjacent area based on a network communication protocol, wherein the alarm information comprises: the type, location, timestamp, and associated video or audio data snapshot of the threat;
the monitoring equipment of the adjacent area is dynamically adjusted according to the alarm information to capture activity in the designated area, the dynamic adjustment comprising: adjusting the focal length and angle of the cameras and enhancing the sensitivity of the audio equipment;
after resource adjustment, observation of the designated area is enhanced and the data stream from that area is processed with priority; the behavior or event described in the alarm information is analyzed to identify whether confirmed threat behavior exists, and the audio and environmental data are analyzed in parallel to verify whether other evidence supports threat confirmation;
After the priority verification is completed, a threat report is generated, wherein the threat report comprises: the validation status, the specific nature of the threat, and the immediate response action taken.
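The alarm message exchanged between zones in the steps above can be sketched as a JSON payload carrying the fields listed (type, location, timestamp, data snapshot reference). The schema and example values are illustrative assumptions, not a protocol defined by the invention.

```python
import json
from datetime import datetime, timezone

def build_alert(threat_type, location, snapshot_ref):
    """Assemble the inter-zone alarm message as JSON; the exact schema is
    illustrative, mirroring the fields listed above."""
    return json.dumps({
        "type": threat_type,                           # e.g. "illegal_intrusion"
        "location": location,                          # GPS or landmark reference
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601
        "snapshot": snapshot_ref,                      # video/audio clip reference
    })

msg = build_alert("illegal_intrusion",
                  {"lat": 30.27, "lon": 120.15},
                  "cam07/clip_0412")
decoded = json.loads(msg)
```

A receiving zone would parse the message, raise the monitoring priority of the referenced location, and attach its own verification results to the eventual threat report.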
The interactive verification process is a key component of the intelligent park monitoring system, and ensures that the system can not only detect potential threats, but also respond and coordinate resources of different areas in real time for effective treatment.
When the monitoring system detects a potential threat in an area, the system automatically analyzes and generates alert information. This process relies on advanced image and sound analysis techniques, such as using machine learning models to identify abnormal behavior patterns. The generated alert information includes the type of threat (e.g., illegal intrusion, fire, etc.), the specific location, the timestamp, and the snapshot of the audiovisual data associated with the event. The information is rapidly sent to monitoring equipment in the adjacent area through a safe network communication protocol, so that timely information transmission and data integrity are ensured.
After receiving the alarm, the monitoring system of the adjacent area can dynamically adjust resources according to the alarm content, such as changing the focal length and angle of the camera, adjusting the sensitivity of the audio equipment, and the like, so as to better capture the activity condition of the appointed area. For example, if the alarm information indicates an emergency in a certain area, the monitoring system may automatically adjust the nearest camera angle, focus on the place where the event occurred, and increase the sensitivity of audio monitoring to capture potential acoustic anomalies.
After resource adjustment, the system prioritizes and analyzes the data streams obtained from the adjusted devices. Advanced behavior analysis algorithms (such as convolutional neural networks and long short-term memory networks) are used to process visual and audio data to confirm the presence and nature of the threat. In addition, the data from environmental sensors (such as temperature and smoke concentration) is analyzed in parallel to provide more comprehensive evidence supporting threat confirmation and to enhance the accuracy of the system's judgment.
Once the existence and specific nature of the threat is confirmed, the system will generate detailed threat reports. The report content includes the validation status of the threat, specific behavior or event characteristics, and immediate response measures taken to address the threat. The report may be used to guide specific actions of the emergency response team, such as dispatch of security personnel or initiation of other security measures.
In one embodiment, an area monitoring system detects unauthorized intrusion such as climbing a fence, the system automatically records video of the event and generates an alert to the control center and adjacent monitoring points. The receiving device will instantly adjust to focus on the direction in which the intrusion occurred while increasing the sensitivity of the recording device to capture potential sound information, such as window breaking. Through the series of automatic response and resource adjustment, the monitoring system can rapidly and accurately respond to potential security threats, and the security protection level of the park is effectively improved.
The intelligent park not only can realize timely detection of abnormal behaviors, but also can provide a quick and effective response mechanism by coordinating various monitoring resources, so that the efficiency and effect of safety management are greatly improved.
In intelligent park monitoring systems, threat types are typically categorized according to a predetermined security protocol. These classifications include unauthorized intrusion, fire, environmental hazards (such as chemical spills), emergency medical situations, and the like. Each type of threat is provided with a corresponding response protocol to ensure that the system can take the most appropriate action.
The accuracy of the location information is critical to a fast and efficient threat response. The system typically uses GPS coordinates to identify the location of outdoor events, while indoor events are precisely located using RFID systems or Wi-Fi signal location. Ensuring the accuracy and real-time updating of these location information is critical to the system design.
The time stamps are in a uniform format (e.g., ISO 8601) to ensure consistency in time records for all system components. Time stamping is critical to event logging, security auditing, and subsequent analysis, which helps the security team track the order of occurrence and response time of events.
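As a minimal sketch of the uniform-format requirement above, the following Python converts timestamps from heterogeneous subsystem logs into ISO 8601 UTC strings. The input formats, subsystem names, and function name are illustrative assumptions, not taken from the patent:

```python
from datetime import datetime, timezone

# Hypothetical input formats from different monitoring subsystems; a real
# deployment would list the formats its devices actually emit.
KNOWN_FORMATS = [
    "%Y-%m-%d %H:%M:%S",      # e.g. a camera log
    "%d/%m/%Y %H:%M:%S",      # e.g. an access-control log
    "%Y%m%d%H%M%S",           # e.g. a compact sensor log
]

def to_iso8601(raw: str) -> str:
    """Parse a raw timestamp string and return it in ISO 8601 UTC form."""
    for fmt in KNOWN_FORMATS:
        try:
            dt = datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)
            return dt.isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {raw!r}")

print(to_iso8601("2024-04-25 08:30:00"))   # → 2024-04-25T08:30:00+00:00
```

With every component logging in this one canonical form, event ordering and response times can be compared directly across subsystems.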
The system may be configured to automatically adjust the focal length and angle of the camera to optimally capture the activity of the critical area. The automatic adjustment is achieved by preset scene recognition algorithms that can recognize specific abnormal activities and dynamically adjust camera settings based on the nature and location of the event. For example, if the monitoring system detects unauthorized activity in an area, the associated camera may automatically adjust the focal length to zoom in on the view of the area to obtain a clearer image.
Sensitivity adjustment in audio monitoring systems is typically achieved by Automatic Gain Control (AGC) techniques. This technique enables automatic adjustment of the receiving sensitivity of the microphone according to the sound intensity in the environment, thereby optimizing the recording quality and ensuring that key sounds are effectively captured. When a particular event is detected (such as a glass breaking or other safety warning sound), the AGC system may enhance the receiving capabilities of the device to ensure successful separation and identification of the critical audio signal from the noisy environment.
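The AGC behavior described above can be illustrated with a toy gain loop: the per-frame gain is smoothed toward whatever value would bring the frame's RMS level to a target. The target level, frame size, and smoothing factor are illustrative assumptions, not values from the patent:

```python
import math

def agc(samples, target_rms=0.2, frame=4, smoothing=0.5):
    """Scale each frame so its RMS level drifts toward target_rms."""
    gain, out = 1.0, []
    for i in range(0, len(samples), frame):
        block = samples[i:i + frame]
        rms = math.sqrt(sum(s * s for s in block) / len(block))
        if rms > 1e-9:                       # skip silent frames
            desired = target_rms / rms
            gain = smoothing * gain + (1 - smoothing) * desired
        out.extend(s * gain for s in block)
    return out

quiet = [0.01, -0.01, 0.02, -0.02] * 4       # a faint signal
boosted = agc(quiet)                          # amplified toward the target
```

A production AGC would also clamp the gain range and attack/release rates, but the core idea — sensitivity that follows ambient sound intensity — is the same.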
In the invention, the specific implementation method of parallel analysis is as follows:
Specific sound patterns, such as glass breaking, alarms, or other abnormal sounds, are monitored and identified using sound recognition techniques such as Voice Activity Detection (VAD) and Sound Event Detection (SED) algorithms. Advanced models such as long short-term memory (LSTM) networks can be used to identify time-dependent patterns and abnormal sound events in the audio data, patterns that can indicate emergency situations.
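The LSTM-based sound event detection described above requires a trained model; as a stand-in, this sketch shows the simpler energy-based idea underlying VAD: frames whose short-time energy exceeds a threshold are flagged as candidate sound events. Frame size and threshold are illustrative assumptions:

```python
def detect_events(samples, frame=4, threshold=0.1):
    """Return (start, end) sample ranges whose mean energy exceeds threshold."""
    events = []
    for i in range(0, len(samples), frame):
        block = samples[i:i + frame]
        energy = sum(s * s for s in block) / len(block)
        if energy > threshold:
            events.append((i, i + len(block)))
    return events

# Quiet background with one loud burst (e.g. a glass-break transient).
signal = [0.01] * 8 + [0.9, -0.8, 0.7, -0.9] + [0.01] * 4
print(detect_events(signal))   # → [(8, 12)]
```

An SED model would go further and classify *which* event produced the burst; this sketch only locates it in time.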
For environmental data, one-dimensional convolutional neural networks (1D-CNN) or threshold-based anomaly detection algorithms can be used to monitor and analyze environmental sensor data, such as smoke, temperature, and humidity changes, in real-time. By setting a predetermined safety threshold, the system can automatically identify anomalies or abrupt changes in the data that are indicative of environmental risks or equipment failure.
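The threshold-based branch of the check above can be sketched directly. The safe ranges and sensor names are illustrative assumptions, not thresholds specified by the patent:

```python
# Hypothetical safe operating ranges per sensor type.
SAFE_RANGES = {
    "temperature_c": (0.0, 45.0),
    "smoke_ppm":     (0.0, 50.0),
    "humidity_pct":  (10.0, 90.0),
}

def check_reading(reading: dict) -> list:
    """Return the names of sensors whose values leave the safe range."""
    anomalies = []
    for key, value in reading.items():
        low, high = SAFE_RANGES[key]
        if not (low <= value <= high):
            anomalies.append(key)
    return anomalies

print(check_reading({"temperature_c": 62.0, "smoke_ppm": 8.0}))
# → ['temperature_c']
```

A 1D-CNN detector would replace the fixed ranges with learned patterns over short windows of readings, catching abrupt changes that never cross an absolute limit.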
Multiple sensor data fusion techniques are used to integrate data streams from different sources. This fusion is typically done at the feature level, meaning that the system integrates features collected from the various sensors into one unified analytical model. The data fusion not only enhances the data interpretation capability of a single sensor, but also improves the accuracy and reliability of anomaly detection.
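Feature-level fusion, as described, amounts to concatenating per-sensor feature vectors into one vector for a single downstream model. The sensor names and feature meanings below are illustrative assumptions:

```python
def fuse_features(feature_dict: dict, order: list) -> list:
    """Concatenate per-sensor feature vectors in a fixed sensor order."""
    fused = []
    for sensor in order:
        fused.extend(feature_dict[sensor])
    return fused

features = {
    "video": [0.7, 0.1],        # e.g. motion score, face-match score
    "audio": [0.9],             # e.g. anomalous-sound score
    "env":   [0.2, 0.0, 0.1],   # e.g. smoke, temperature, humidity scores
}
vector = fuse_features(features, ["video", "audio", "env"])
print(vector)   # → [0.7, 0.1, 0.9, 0.2, 0.0, 0.1]
```

Keeping the sensor order fixed matters: the unified analytical model learns weights per position, so every fused vector must lay out its features identically.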
The integrated data is processed through a central decision support system that uses machine learning models to evaluate and interpret the fused data, outputting a comprehensive threat assessment. The system is able to integrate anomalies in sound and environmental metrics to determine if they correlate to each other, thereby assessing threat severity and urgency.
Based on the analysis results, the system may automatically generate alert and response measures, including sending notifications to security personnel and highlighting relevant areas or events on the monitoring interface. This parallel analysis and real-time feedback ensures the high efficiency of security monitoring, allowing the security team to react quickly to handle potential security threats.
And the feedback and optimization unit integrates the automatically marked data, the cross-region verification result and the user feedback, and continuously optimizes the deep learning model and the behavior analysis algorithm through the integrated data.
The automatically marked data, the cross-regional verification result and the user feedback are three key information sources in the intelligent park monitoring system. The automatically annotated data provides detailed information about the identified event in the surveillance video; the cross-region verification result provides the effect of inter-region cooperation and the confirmation condition of specific events; the user feedback includes ratings and improvement suggestions for system performance. In data integration, format unification and time stamp alignment are first required for various data to ensure consistency and comparability of the data. This involves the steps of data cleaning, normalization, missing value processing, etc.
And using the integrated data, the system retrains and fine-tunes the existing deep learning model and behavior analysis algorithm through a machine learning algorithm. This continuous training process can help the model learn over new data, thereby improving its predictive accuracy and generalization ability. According to the feedback and verification results of the user, algorithm parameters such as learning rate and regularization coefficient are adjusted, or new features and model structure improvement are introduced, so that the performance of the model in practical application is improved.
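The retraining loop above can be made concrete with plain gradient descent on a one-parameter model, so the role of the learning rate and L2 regularization coefficient is visible; a real system would fine-tune the deep model's weights by the same principle. The data and hyperparameter values are illustrative assumptions:

```python
def fine_tune(w, data, lr=0.1, l2=0.01, epochs=50):
    """Gradient descent on squared error plus an L2 penalty on w."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) + 2 * l2 * w
        w -= lr * grad / len(data)
    return w

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # roughly y = 2x
w = fine_tune(w=0.5, data=data)
print(round(w, 2))   # → 1.99
```

Lowering `lr` slows but stabilizes adaptation to new feedback data; raising `l2` pulls the parameters toward zero, which counteracts overfitting to a small batch of corrections.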
By continuously learning from new data, the model can more accurately identify and predict abnormal behaviors or security threats in the park, reducing both false alarms and missed alarms. As the model is optimized over time, it can adapt to environmental changes or new threat types, enhancing the adaptability and flexibility of the system.
In one embodiment, false positives frequently occur in a region, and user feedback and automatically annotated data indicate that the behavior recognition algorithm for that region needs to be adjusted. The feedback and optimization unit analyzes the data to identify potential causes of problems such as camera angle, illumination variation, or algorithm sensitivity to specific behavior patterns. The technical team can then adjust the algorithm parameters or add new training data to correct the false positive problem based on these analyses.
The intelligent park multi-source data dynamic monitoring and real-time analysis system can thus respond to current security threats in real time while continuously learning and adapting, improving overall monitoring efficiency and accuracy.
Preferably, the feedback and optimization unit combines the automatically marked data, the cross-region verification result and the user feedback in a unified format, aligns the time stamps, and combines the data, the cross-region verification result and the user feedback in a comprehensive data set for analysis and model training;
Wherein, the automatically marked data comprises: abnormal event information marked in the monitoring video, the abnormal event information comprising: event type, time and location; the cross-region verification result comprises: the confirmation state and specific nature of the threat identified during verification; and the user feedback comprises: error reports, performance evaluations, or improvement suggestions.
In the intelligent campus monitoring system, data from different sources (automatically tagged data, cross-regional verification results, user feedback) is first uniformly formatted and time stamped aligned. This step is critical to ensure that all data can be analyzed within a unified time frame, ensuring consistency and comparability of the data. Combining the formatted data into a comprehensive data set can make the analysis and training process more efficient while ensuring that all data points are taken into account, thereby improving the comprehensiveness and accuracy of the model.
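The merge step above — unified format, aligned timestamps, one comprehensive data set — can be sketched as records keyed by event time. The field names and record shapes are illustrative assumptions:

```python
def build_dataset(annotations, verifications, feedback):
    """Merge the three sources into one dict keyed by normalized event time."""
    dataset = {}
    for rec in annotations:
        dataset.setdefault(rec["time"], {})["annotation"] = rec
    for rec in verifications:
        dataset.setdefault(rec["time"], {})["verification"] = rec
    for rec in feedback:
        dataset.setdefault(rec["time"], {})["feedback"] = rec
    return dataset

ds = build_dataset(
    [{"time": "2024-04-25T08:30:00+00:00", "event": "intrusion"}],
    [{"time": "2024-04-25T08:30:00+00:00", "confirmed": False}],
    [{"time": "2024-04-25T08:30:00+00:00", "note": "night cleaner, false alarm"}],
)
# One merged record now holds all three views of the same event.
```

Because all three sources carry the same canonical timestamp, a single lookup yields the annotation, its verification outcome, and the human judgment together, which is exactly the shape the training step needs.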
The feedback and optimization unit continuously trains and optimizes the deep learning model and the behavior analysis algorithm by utilizing the integrated data set. By the mechanism, the model can adapt to a new data mode, and the identification capability of unknown threats is improved. Based on user feedback and cross-region verification results, model parameters, such as weight, learning rate, etc., may be adjusted to address new challenges or to improve the performance of the algorithm.
Through continuous data-driven optimization, the behavior analysis model can more accurately identify and predict potential security threats, and erroneous judgment and omission are reduced. The model integrating the user feedback and the actual operation result can better meet the actual requirements of the site, and the operation efficiency and the response speed of the system are improved.
In one embodiment, the system identifies an unauthorized intrusion by automatic tagging during a specific security event, but user feedback indicates that the identified event is actually a false positive, because the identified person is a staff member doing evening cleaning. In this case, after integrating the automatically labeled data and the user feedback, the feedback and optimization unit attributes the false alarm to the model's insufficient accuracy in recognizing personnel behavior in low-light night-time environments. The development team can then adjust the model, for example by adding training samples of night-time personnel behavior, optimizing it to reduce similar false positives and thereby improving the overall accuracy and reliability of the system.
The invention ensures that the system has high-efficiency monitoring capability in theory, can carry out self-correction and optimization according to feedback in practical application, and continuously improves the performance.
User feedback is collected through a system interface or periodic user-satisfaction surveys and includes, but is not limited to, error reports, performance evaluations, and optimization suggestions. This feedback is automatically categorized into several main categories, such as algorithm false positives, false negatives, and performance delays.
The feedback data is first evaluated by a specialized team to determine its potential impact on the current model. For directly relevant feedback, such as false positives and false negatives, the system marks these data and re-enters them into the training queue. An incremental learning strategy allows the model to gradually incorporate this new or revised data without retraining from scratch.
The system design has the ability to update the model online, i.e., without interrupting the service. This strategy uses new training data (including user-fed data) to gradually adjust and optimize the weights and parameters of the model in small batches.
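The small-batch online update described above can be sketched with a perceptron-style rule standing in for the deep model: new labeled samples (including feedback corrections) arrive in a stream and the weights are nudged batch by batch, never retrained from scratch. Batch size, learning rate, and the update rule are illustrative assumptions:

```python
def online_update(weights, stream, lr=0.5, batch_size=2):
    """Apply perceptron-style updates to `weights` in small batches."""
    batch = []
    for x, y in stream:                      # x: feature list, y: 0/1 label
        batch.append((x, y))
        if len(batch) == batch_size:
            for xs, ys in batch:
                pred = 1 if sum(w * v for w, v in zip(weights, xs)) > 0 else 0
                err = ys - pred              # 0 when already correct
                weights = [w + lr * err * v for w, v in zip(weights, xs)]
            batch = []
    return weights

w = online_update([0.0, 0.0], [([1.0, 0.0], 1), ([0.0, 1.0], 0),
                               ([1.0, 1.0], 1), ([0.0, 2.0], 0)])
```

Because each batch touches the weights only slightly, the model can keep serving predictions between updates, which is the point of the no-interruption design.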
When handling highly unbalanced data, as in abnormal behavior detection, common strategies include balancing the data with oversampling (replicating minority-class samples) or undersampling (reducing majority-class samples). In addition, a loss function designed specifically for unbalanced data, such as focal loss, may be used: it shifts the model's attention by increasing the weight of misclassified samples, making the model concentrate on the minority-class samples that are hardest to classify.
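Focal loss, as named above, has a common binary form FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); the alpha and gamma values below are the usual defaults from the literature, an assumption rather than values from the patent:

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss for predicted probability p of class 1 and label y."""
    p_t = p if y == 1 else 1.0 - p           # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct prediction contributes almost nothing, while a
# badly misclassified sample keeps a large weight:
easy = focal_loss(0.95, 1)   # tiny loss
hard = focal_loss(0.05, 1)   # large loss
```

The `(1 - p_t)^gamma` factor is what down-weights the abundant, easy negatives so that the rare abnormal-behavior samples dominate the gradient.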
Different optimization algorithms can be selected according to the specific requirements and characteristics of the model. For real-time systems that require fast response, optimizers such as Adam or RMSprop that converge faster are typically selected, which can handle large-scale and high-dimensional data more efficiently. At the same time, the adaptive learning rate of the optimizers can better cope with dynamic changes of various data.
The performance of the model is comprehensively evaluated through indexes such as accuracy, recall rate and F1 score, so that the false alarm can be effectively reduced while the recognition accuracy of the model is improved. These metrics help the team to continuously monitor the model performance and make adjustments as necessary.
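The three metrics named above compute directly from binary predictions; a sketch, with labels chosen only for illustration:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = threat, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# → all three equal 2/3 for this toy prediction set
```

In monitoring terms, precision falling signals rising false alarms, recall falling signals missed threats, and F1 summarizes the trade-off in one number.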
Through the mechanism, the system can continuously learn and adapt to the change of the environment, and simultaneously integrate the actual use feedback of the user, so that the technical solution is ensured to always meet the requirements of the user and the actual operation environment.
As shown in fig. 3, a method for dynamically monitoring and analyzing multi-source data in an intelligent park in real time comprises the following steps:
the method comprises the steps of carrying out multi-source data acquisition through monitoring equipment deployed at a key position of a park, and collecting visual data, sound data, environment data and position data;
A variety of monitoring devices are deployed at strategic locations on the campus, including video cameras, audio monitoring devices, environmental sensors (e.g., temperature, humidity, smoke sensors), and location tracking devices (e.g., GPS or RFID). The devices can collect data of vision, sound, environment state and physical position in real time, and provide omnibearing monitoring information for the system. Through the multi-source data integration, the system can acquire more abundant context information, and the response speed and the processing precision to the event are improved. For example, the video camera captures the condition that an unauthorized person enters a restricted area, and the environment sensor can detect smoke or abnormal temperature in the area at the same time to jointly confirm whether a security threat exists.
Performing preliminary processing on the multi-source data, wherein the preliminary processing comprises the following steps: motion detection, voice recognition and environmental anomaly detection, extracting features by using a deep learning model, generating feature data, wherein the extracting features comprise: face, vehicle details, special sound patterns and environmental indicators;
The preliminary processing includes motion detection, voice recognition, and environmental anomaly detection. For example, motion detection algorithms analyze changes between video frames to identify and track moving objects. Voice recognition identifies key events (e.g., breaking sounds, alarm sounds) by analyzing the audio spectrum. Environmental data analysis detects indicators such as smoke concentration and temperature anomalies. Key features, such as faces, vehicle details, special sound patterns and environmental indicators, are extracted from these processed data using deep learning models (such as convolutional neural networks and long short-term memory networks). The deep learning models make feature extraction more accurate and can effectively distinguish normal conditions from potential threats, thereby improving the overall recognition capability of the system. For example, convolutional neural networks exhibit high efficiency and accuracy in face and vehicle recognition.
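The frame-differencing idea behind motion detection can be sketched in a few lines: pixels whose intensity changes more than a threshold between two frames are counted, and a frame pair with enough changed pixels is flagged as motion. Frames are flat grayscale lists here, and both thresholds are illustrative assumptions:

```python
def has_motion(prev, curr, pixel_thresh=20, count_thresh=2):
    """Flag motion when enough pixels change between consecutive frames."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_thresh)
    return changed >= count_thresh

frame1 = [10, 10, 10, 10, 10, 10]
frame2 = [10, 10, 200, 210, 10, 10]   # a bright object enters the scene
print(has_motion(frame1, frame2))     # → True
print(has_motion(frame1, frame1))     # → False
```

Real pipelines add background modeling and contour extraction on top of this difference mask to recover the object's outline and track its path.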
Analyzing by using a behavior analysis algorithm based on the extracted characteristic data to generate predicted data for identifying potential abnormal behaviors or security threats, and recording the predicted data on a blockchain after encryption processing, wherein the behavior analysis algorithm adopts a combination of a convolutional neural network and a long-term and short-term memory network for analysis;
Based on the extracted features, behavior patterns are analyzed using a behavior analysis algorithm (in combination with convolutional neural networks and long-term memory networks) to identify potential abnormal behaviors or security threats. These algorithms learn to distinguish differences in normal and abnormal behavior through historical data training. The generated prediction data includes the type, location, and time of the potential abnormal behavior such that the monitoring system can take precautionary measures before the event occurs. For example, the system predicts and alerts of impending illegal intrusions or technical failures, notifying security personnel or maintenance teams in advance.
Automatically labeling a monitoring video based on the predicted data, and carrying out real-time monitoring by combining the audio data and the environment data; triggering cross-regional interactive verification when the potential threat is detected, and performing priority verification by the adjacent regional monitoring equipment according to the predicted data so as to respond to the potential threat;
And automatically labeling the monitoring video stream, and carrying out real-time monitoring by combining the audio and the environmental data. The system can trigger cross-regional interactive verification when the potential threat is detected, and mobilize monitoring resources of adjacent regions to respond quickly. This step improves the real-time response capability and accuracy of the monitoring system. By automatic labeling and real-time data fusion, the system can provide more comprehensive evidence support in verifying potential threats.
And integrating the automatically marked data, the cross-region verification result and the user feedback, and continuously optimizing the deep learning model and the behavior analysis algorithm through the integrated data.
And continuously optimizing the deep learning model and the behavior analysis algorithm by integrating automatic annotation data, cross-region verification results and user feedback. This process involves data cleansing, alignment, and merging to ensure consistency and availability of data. The optimized model reflects the actual operation environment and the user requirement more accurately, and the overall performance and the user satisfaction of the system are improved. For example, false alarms can be reduced and the accuracy of the alarm can be improved by the user feedback of the adjusted model.
Through the steps, the monitoring system of the intelligent park can realize high-efficiency threat detection and response, adapt to environment change and user requirements through continuous learning and optimization, and keep long-term effectiveness and reliability in a dynamic environment.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. An intelligent campus multisource data dynamic monitoring and real-time analysis system, which is characterized by comprising:
The acquisition unit is used for carrying out multi-source data acquisition through monitoring equipment deployed at a key position of a park and collecting visual data, sound data, environment data and position data;
and the edge computing node receives and performs preliminary processing on the multi-source data, wherein the preliminary processing comprises the following steps: motion detection, voice recognition and environmental anomaly detection, extracting features by using a deep learning model, generating feature data, wherein the extracting features comprise: face, vehicle details, special sound patterns and environmental indicators;
The analysis and encryption unit is used for analyzing the extracted characteristic data by using a behavior analysis algorithm to generate prediction data for identifying potential abnormal behaviors or security threats, and recording the prediction data on a blockchain after encryption processing, wherein the behavior analysis algorithm adopts a combination of a convolutional neural network and a long-term and short-term memory network for analysis;
The threat identification and response unit is used for automatically labeling the monitoring video based on the prediction data and carrying out real-time monitoring by combining the audio data and the environment data; triggering cross-regional interactive verification when the potential threat is detected, and performing priority verification by the adjacent regional monitoring equipment according to the predicted data so as to respond to the potential threat;
And the feedback and optimization unit integrates the automatically marked data, the cross-region verification result and the user feedback, and continuously optimizes the deep learning model and the behavior analysis algorithm through the integrated data.
2. The intelligent park multisource data dynamic monitoring and real-time analysis system according to claim 1, wherein the edge computing node analyzes differences between video frames by using a motion detection algorithm, identifies the outline and path of a moving object, identifies the position and motion track of the moving object in the video, outputs motion detection data, and completes motion detection;
Analyzing the audio waveform using a voice recognition technique to identify human voice, vehicle noise or other significant sound events, outputting voice recognition data including the identified sound event type and associated audio time stamp, completing voice recognition;
And (3) evaluating whether the environmental data exceeds a normal operation range by applying a threshold detection or pattern recognition algorithm, recognizing potential environmental risks, outputting environmental abnormality detection data identifying environmental parameters exceeding a preset safety range, and finishing environmental abnormality detection.
3. The intelligent campus multisource data dynamic monitoring and real-time analysis system according to claim 1, wherein the extracting features using the deep learning model, generating feature data includes:
Extracting face features, analyzing face images in video data by using a convolutional neural network, and extracting face features of an individual, wherein the face features comprise: facial structure and expression recognition for authentication or emotion analysis;
Extracting vehicle detail characteristics, analyzing vehicle images in video data by using a convolutional neural network, and extracting model, color and license plate information of a vehicle for identifying and tracking the vehicle;
extracting special sound mode characteristics, and identifying key sound events from environment sounds by utilizing a long-short-term memory network, wherein the key sound events comprise: glass breaking sound and alarm sound;
Extracting environmental index features, and extracting abnormal indexes from environmental sensor data by using a long-short-term memory network and a one-dimensional convolutional neural network, wherein the abnormal indexes comprise: smoke concentration, rate of temperature change, for identifying potential environmental risks or equipment failure.
4. The intelligent park multisource data dynamic monitoring and real-time analysis system according to claim 1, wherein the analysis and encryption unit performs visual data processing by using a convolutional neural network, and performs time sequence data processing by using a long-term and short-term memory network, so that behavior analysis on characteristic data is realized; the convolutional neural network and the long-term and short-term memory network are trained through historical data, and differences between normal behaviors and abnormal behaviors are learned and identified; after training is completed, the analysis and encryption unit performs reasoning on the real-time data, identifies potential abnormal behaviors or security threats, and generates prediction data, wherein the prediction data comprises: the type, location, and time of potential abnormal behavior or security threats.
5. The intelligent campus multisource data dynamic monitoring and real-time analysis system according to claim 4, wherein the automatic annotation of the monitoring video based on the prediction data and the real-time monitoring in combination with the audio and the environmental data comprises:
decrypting the encrypted predicted data in the block chain for real-time monitoring and automatic labeling;
The predicted data is corresponding to a specific scene in the video through the time stamp and the position information, the related event is marked in the video stream automatically based on the predicted abnormal behavior, and the marking comprises the following steps: highlighting and adding labels;
The audio data and the environment data are synchronized with the video data in time, abnormal sounds in the audio and abnormal readings of the environment sensor are analyzed corresponding to specific monitoring scenes, and the potential security threats are comprehensively evaluated and confirmed by combining the abnormal sounds in the audio and the abnormal readings with visual information in the video data.
6. The intelligent campus multisource data dynamic monitoring and real-time analysis system of claim 1, wherein the expression for generating predictive data for identifying potential abnormal behavior or security threats is:
P = softmax(W · LSTM(CNN(X)) + b)
wherein P represents the probability distribution of the prediction data, describing the different types of potential abnormal behavior or security threats; X represents an input video frame; CNN(·) represents the convolutional neural network used to process visual data and extract features; LSTM(·) represents the long short-term memory network used to analyze the visual features and capture time dependencies; and W and b are training parameters.
7. The intelligent campus multisource data dynamic monitoring and real-time analysis system according to claim 1, wherein the interactive verification process includes:
when a potential threat is detected in one area, alarm information is automatically generated through analysis and transmitted to the monitoring equipment of adjacent areas via a network communication protocol, wherein the alarm information comprises: the type, location, and timestamp of the threat, and an associated video or audio data snapshot;
the monitoring equipment of the adjacent areas is dynamically adjusted according to the alarm information to capture activity in the designated area, wherein the dynamic adjustment comprises: adjusting the focal length and angle of cameras, and enhancing the sensitivity of audio equipment;
after resource adjustment, observation of the designated area is enhanced and data streams from the designated area are processed with priority; the behavior or event described in the alarm information is analyzed to identify whether confirmed threat behavior exists, and audio data and environmental data are analyzed in parallel to verify whether other evidence supporting threat confirmation exists;
after the priority verification is completed, a threat report is generated, wherein the threat report comprises: the confirmation status, the specific nature of the threat, and the immediate response actions taken.
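The cross-area alarm hand-off described in claim 7 can be pictured with the sketch below; the `AlarmInfo` fields follow the claim, while the zone names, the `ADJACENCY` topology, and the specific adjustment factors are illustrative assumptions:

```python
from dataclasses import dataclass
import time

@dataclass
class AlarmInfo:
    """Alarm payload per the claim: threat type, location, timestamp, data snapshot."""
    threat_type: str
    location: str
    timestamp: float
    snapshot: bytes = b""

@dataclass
class MonitoringDevice:
    zone: str
    focal_length: float = 1.0
    angle: float = 0.0
    audio_sensitivity: float = 1.0

    def adjust_for(self, alarm: AlarmInfo):
        """Dynamic adjustment: retarget optics and boost audio sensitivity."""
        self.focal_length *= 2.0                  # zoom toward the designated area
        self.angle = hash(alarm.location) % 360   # illustrative retargeting
        self.audio_sensitivity *= 1.5

# Hypothetical campus topology: which zones are adjacent to which.
ADJACENCY = {"zone-A": ["zone-B", "zone-C"]}

def broadcast_alarm(alarm: AlarmInfo, devices: dict) -> list:
    """Send the alarm to devices in zones adjacent to the alarm's zone."""
    adjusted = []
    for zone in ADJACENCY.get(alarm.location, []):
        devices[zone].adjust_for(alarm)
        adjusted.append(zone)
    return adjusted

devices = {z: MonitoringDevice(z) for z in ("zone-B", "zone-C")}
alarm = AlarmInfo("intrusion", "zone-A", time.time())
notified = broadcast_alarm(alarm, devices)
print(notified)  # zones whose equipment was retargeted
```

A real deployment would carry the snapshot over the network protocol the claim mentions; here the hand-off is reduced to an in-process call so the adjustment flow is visible end to end.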
8. The intelligent campus multisource data dynamic monitoring and real-time analysis system according to claim 1, wherein the feedback and optimization unit combines the automatically labeled data, the cross-region verification results, and the user feedback into one comprehensive data set with a unified format and aligned timestamps, for analysis and model training;
wherein the automatically labeled data comprises: abnormal event information labeled in the monitoring video, the abnormal event information comprising: event type, time, and location; the cross-region verification results comprise: the confirmation status and specific nature of threats identified during the verification process; and the user feedback comprises: error reports, performance evaluations, or suggestions.
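The timestamp-aligned merge in claim 8 amounts to a join of three record streams on a shared time key. A minimal sketch, with hypothetical record shapes for the three sources named in the claim:

```python
# Hypothetical record shapes for the three sources in the claim.
auto_labels = [
    {"timestamp": 100, "event_type": "loitering", "location": "gate-1"},
]
verification_results = [
    {"timestamp": 100, "confirmation": "confirmed", "nature": "intrusion"},
]
user_feedback = [
    {"timestamp": 100, "kind": "performance", "note": "alert was 3 s late"},
]

def integrate(labels, verifications, feedback):
    """Join the three streams on aligned timestamps into one training record set."""
    by_ts = {}
    for source, rows in (("label", labels),
                         ("verification", verifications),
                         ("feedback", feedback)):
        for row in rows:
            rec = by_ts.setdefault(row["timestamp"], {"timestamp": row["timestamp"]})
            rec[source] = {k: v for k, v in row.items() if k != "timestamp"}
    return [by_ts[ts] for ts in sorted(by_ts)]

dataset = integrate(auto_labels, verification_results, user_feedback)
print(len(dataset), "merged record(s)")
```

Each merged record then carries a label, its verification outcome, and any human feedback for the same moment, which is the shape a retraining loop needs.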
9. A method for dynamic monitoring and real-time analysis of intelligent park multi-source data, characterized by comprising the following steps:
collecting multi-source data through monitoring equipment deployed at key positions of the park, the collected data comprising visual data, sound data, environmental data, and position data;
performing preliminary processing on the multi-source data, the preliminary processing comprising: motion detection, voice recognition, and environmental anomaly detection; extracting features using a deep learning model to generate feature data, the extracted features comprising: faces, vehicle details, special sound patterns, and environmental indicators;
analyzing the extracted feature data using a behavior analysis algorithm to generate predicted data for identifying potential abnormal behaviors or security threats, and recording the predicted data on a blockchain after encryption, wherein the behavior analysis algorithm analyzes using a combination of a convolutional neural network and a long short-term memory network;
automatically labeling the monitoring video based on the predicted data, and performing real-time monitoring in combination with the audio data and environmental data; triggering cross-region interactive verification when a potential threat is detected, with the monitoring equipment of adjacent regions performing priority verification according to the predicted data to respond to the potential threat;
and integrating the automatically labeled data, the cross-region verification results, and the user feedback, and continuously optimizing the deep learning model and the behavior analysis algorithm through the integrated data.
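One way to picture the method's "encrypt, then record on a blockchain" step is the sketch below, using a simple hash chain in place of a real blockchain and a toy XOR mask in place of a real cipher; the key, the ledger class, and the prediction payload are all illustrative assumptions:

```python
import hashlib
import json

def encrypt(payload: bytes, key: bytes) -> bytes:
    """Toy XOR mask, standing in for a real cipher such as AES."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

class HashChainLedger:
    """Append-only ledger where each block commits to the previous block's hash."""
    def __init__(self):
        self.blocks = []

    def append(self, ciphertext: bytes):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        h = hashlib.sha256(prev.encode() + ciphertext).hexdigest()
        self.blocks.append({"prev": prev, "data": ciphertext.hex(), "hash": h})

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "0" * 64
        for blk in self.blocks:
            data = bytes.fromhex(blk["data"])
            if blk["prev"] != prev:
                return False
            if hashlib.sha256(prev.encode() + data).hexdigest() != blk["hash"]:
                return False
            prev = blk["hash"]
        return True

key = b"demo-key"  # hypothetical shared key
ledger = HashChainLedger()
prediction = {"threat": "intrusion", "p": 0.91}
ledger.append(encrypt(json.dumps(prediction).encode(), key))
print(ledger.verify())
```

XOR with a repeating key is not secure and the hash chain has no consensus layer; the sketch only shows the ordering the claim specifies: predict, encrypt, then append an integrity-checkable record.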
CN202410498443.1A 2024-04-24 2024-04-24 Intelligent park multisource data dynamic monitoring and real-time analysis system and method Pending CN118072255A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410498443.1A CN118072255A (en) 2024-04-24 2024-04-24 Intelligent park multisource data dynamic monitoring and real-time analysis system and method


Publications (1)

Publication Number Publication Date
CN118072255A true CN118072255A (en) 2024-05-24

Family

ID=91095752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410498443.1A Pending CN118072255A (en) 2024-04-24 2024-04-24 Intelligent park multisource data dynamic monitoring and real-time analysis system and method

Country Status (1)

Country Link
CN (1) CN118072255A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050258943A1 (en) * 2004-05-21 2005-11-24 Mian Zahid F System and method for monitoring an area
US20060190419A1 (en) * 2005-02-22 2006-08-24 Bunn Frank E Video surveillance data analysis algorithms, with local and network-shared communications for facial, physical condition, and intoxication recognition, fuzzy logic intelligent camera system
CN103198605A (en) * 2013-03-11 2013-07-10 成都百威讯科技有限责任公司 Indoor emergent abnormal event alarm system
US20170220854A1 (en) * 2016-01-29 2017-08-03 Conduent Business Services, Llc Temporal fusion of multimodal data from multiple data acquisition systems to automatically recognize and classify an action
CN113706355A (en) * 2021-09-05 2021-11-26 上海远韵实业有限公司 Method for building intelligent emergency system of chemical industry park
CN114373245A (en) * 2021-12-16 2022-04-19 南京南自信息技术有限公司 Intelligent inspection system based on digital power plant
CN218450237U (en) * 2022-10-13 2023-02-03 杭州澎湃数智科技有限公司 Video monitoring equipment
CN115880602A (en) * 2022-11-04 2023-03-31 中国舰船研究设计中心 Submersible vehicle cabin fire state evaluation method based on multi-source heterogeneous data fusion
CN116308960A (en) * 2023-03-27 2023-06-23 杭州绿城信息技术有限公司 Intelligent park property prevention and control management system based on data analysis and implementation method thereof
CN116994390A (en) * 2023-07-18 2023-11-03 漳州市诺兰信息科技有限公司 Security monitoring system and method based on Internet of things
CN117236430A (en) * 2023-09-22 2023-12-15 北京环中睿驰科技有限公司 Fire safety management system based on multisource data analysis and data visualization
CN117575332A (en) * 2024-01-12 2024-02-20 唐山伟仁建筑工程有限公司 Road construction safety monitoring method and system
CN117852025A (en) * 2023-12-14 2024-04-09 深圳市格瑞邦科技有限公司 Intelligent security monitoring system and method for tablet personal computer


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈广友 (Chen Guangyou): "Design of an Intelligent Building Security Monitoring System Based on the Internet of Things", 信息技术与信息化 (Information Technology and Informatization), no. 06, 25 June 2018 (2018-06-25) *

Similar Documents

Publication Publication Date Title
CN110689054B (en) Worker violation monitoring method
JP5560397B2 (en) Autonomous crime prevention alert system and autonomous crime prevention alert method
KR102356666B1 (en) Method and apparatus for risk detection, prediction, and its correspondence for public safety based on multiple complex information
CN116308960B (en) Intelligent park property prevention and control management system based on data analysis and implementation method thereof
KR101377029B1 (en) The apparatus and method of monitoring cctv with control moudule
CN111783530A (en) Safety system and method for monitoring and identifying behaviors in restricted area
KR20200052418A (en) Automated Violence Detecting System based on Deep Learning
KR20200017594A (en) Method for Recognizing and Tracking Large-scale Object using Deep learning and Multi-Agent
KR102263512B1 (en) IoT integrated intelligent video analysis platform system capable of smart object recognition
CN116862740A (en) Intelligent prison management and control system based on Internet
CN117035419A (en) Intelligent management system and method for enterprise project implementation
CN116416281A (en) Grain depot AI video supervision and analysis method and system
US20230334966A1 (en) Intelligent security camera system
CN118072255A (en) Intelligent park multisource data dynamic monitoring and real-time analysis system and method
CN115767017A Smart sentry monitoring system
EP4367653A1 (en) Threat assessment system
Demestichas et al. Prediction and visual intelligence platform for detection of irregularities and abnormal behaviour
Ali et al. Survey of Surveillance of Suspicious Behavior from CCTV Video Recording
Heshan et al. Computerized Prison Monitoring Application Based on Knowledge Engineering
Islam et al. Carts: Constraint-based analytics from real-time system monitoring
CN117746338B (en) Property park safety management method and system based on artificial intelligence
Varun et al. Real Time Theft Detection Using YOLOv5 Object Detection Model
CN117079430A (en) Early warning automatic adjustable security monitoring device and method
US20230316726A1 (en) Using guard feedback to train ai models
Nithesh et al. Anomaly Detection in Surveillance Videos Using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination