CN111814761A - Intelligent safety monitoring method for energy storage power station - Google Patents


Info

Publication number
CN111814761A
CN111814761A
Authority
CN
China
Prior art keywords
power station
energy storage
storage power
camera
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010857536.0A
Other languages
Chinese (zh)
Inventor
李沛哲
李湘旗
程津
陈仲伟
肖振锋
王逸超
邓凯
刘浩田
伍也凡
冷阳
谢林瑾
李晨
张昀伟
徐敬文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
State Grid Hunan Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
State Grid Hunan Electric Power Co Ltd
Economic and Technological Research Institute of State Grid Hunan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, State Grid Hunan Electric Power Co Ltd, Economic and Technological Research Institute of State Grid Hunan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010857536.0A priority Critical patent/CN111814761A/en
Publication of CN111814761A publication Critical patent/CN111814761A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an intelligent safety monitoring method for an energy storage power station. A camera acquires the monitoring picture of a surveillance area in real time, collects it, and uploads it to a server; the server processes the images and applies a neural network discrimination model to obtain a recognition result and positioning data; the camera is controlled to track the target in real time; and the server communicates with a remote streaming media server to share the monitoring picture. The invention optimizes the artificial intelligence recognition model, reduces the computational burden, enables remote real-time processing, and can quickly detect and recognize human figures; at the same time, the recognized target is tracked in real time by a tracking algorithm. The method thus achieves safe real-time monitoring of the energy storage power station, with high reliability, good real-time performance and high efficiency.

Description

Intelligent safety monitoring method for energy storage power station
Technical Field
The invention belongs to the field of intelligent security and protection, and particularly relates to an intelligent safety monitoring method for an energy storage power station.
Background
With the development of the economy and technology and the improvement of living standards, electric energy has become an indispensable secondary energy source in production and daily life. Stable and reliable operation of the power system has therefore become one of its most important tasks.
At present, with the growth of new energy generation such as wind power and photovoltaics, curtailment of wind and solar power is becoming increasingly serious. Energy storage systems can significantly improve the absorption of renewable energy such as wind and solar power and maintain stable grid operation, and are a key technology for promoting the transition of primary energy from fossil fuels to renewables. Energy storage systems and energy storage power stations have therefore developed rapidly.
Electrochemical energy storage power stations have many advantages and good application prospects in the development of renewable energy. However, they also have drawbacks, the most prominent of which is safety: an energy storage power station always faces accident risks such as fire. Safety monitoring has therefore become an essential component of the energy storage power station.
At present, the safety monitoring of energy storage power stations remains at the level of extensive management. It generally adopts simple video surveillance and video storage: a large number of monitoring images of the energy storage power station are transmitted back to the master control room in real time for display and storage, and the operators on duty must manually inspect this massive volume of footage. This monitoring mode obviously suffers from low efficiency, low safety and poor operability.
Disclosure of Invention
The invention aims to provide an intelligent safety monitoring method for an energy storage power station that has high reliability, good real-time performance and high efficiency.
The invention provides an intelligent safety monitoring method for an energy storage power station, comprising the following steps:
S1, a camera acquires the monitoring picture of a monitoring area in real time;
S2, the camera collects the captured monitoring picture and uploads it to the server;
S3, the server performs image processing on the received video signal and applies a neural network discrimination model to obtain a recognition result and positioning data;
S4, the camera is controlled to track in real time according to the recognition result and positioning data obtained in step S3;
and S5, the server communicates with the remote streaming media server, pushing the video signal to it by stream pushing, so that the monitoring picture is shared.
The neural network discrimination model in step S3 is specifically constructed as follows:
A. building the neural network discrimination model: the neural network consists of a hierarchy of layers; the convolution and pooling layers extract image features, and the fully connected layers generate classification information. Each layer contains a number of neurons; the input of each neuron is the weighted output of the previous layer's neurons, and its output is the activation of the sum of all inputs. Image information is propagated forward through the layers as neuron signals, and by training and modifying the weights between neurons the network fits the required data distribution and achieves target detection and recognition. The network consists of two parts: detection and recognition;
B. the detection network is designed for the target detection task: it extracts the face data by detecting face-related features. Image features are first extracted by several convolution and pooling layers; during feature extraction, the convolution kernels are trained so that the relevant feature filters are generated automatically. Residual convolution connections are adopted, with PReLU as the activation function. A feature pyramid module is introduced, which upsamples low-level semantic features by interpolation and adds lateral connections; an SSH context module is introduced, which extracts semantic information at different receptive fields by cascading different numbers of 3x3 convolutions; finally an SE module assigns channel weights and sends the features to the detection head for regression and classification. The extracted face features are converted, through the regression and classification heads with 1x1 convolutions, into bounding-box classification information and boundary-coordinate regression information, finally yielding the coordinates and size of the face bounding box in the image;
C. the recognition network filters the face features through convolution and pooling layers. ResNet-50 is adopted as the backbone to extract a 512-dimensional target feature vector, and the network is trained with arcface_loss as the supervision signal to enlarge the differences between feature expressions in angular space, thereby distinguishing the feature expressions of different faces. The face feature vector generated by the network is compared with a face database to perform recognition.
The arcface_loss is expressed by the following formula:

L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}

where s is the scaling factor, m is the angular margin (offset) factor, and θ is the angle between the normalized weight vector and the normalized feature vector.
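The loss above can be sketched in numpy as follows. The sample data, class count, and the values of s and m used below are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def arcface_loss(embeddings, weights, labels, s=8.0, m=0.5):
    """Additive angular margin (ArcFace) loss, as a minimal sketch.

    embeddings: (N, D) feature vectors; weights: (C, D) class centres;
    labels: (N,) integer class ids. s is the scaling factor and m the
    angular margin, following the notation of the formula above."""
    # Normalise features and class weights so dot products become cos(theta).
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)              # (N, C) cosine logits
    idx = np.arange(len(labels))
    theta = np.arccos(cos[idx, labels])            # angle to the true class
    logits = s * cos
    logits[idx, labels] = s * np.cos(theta + m)    # add the margin m
    # Numerically stable softmax cross-entropy on the adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[idx, labels].mean())
```

Because the margin m shrinks the target-class logit, well-classified samples incur a strictly larger loss than under plain softmax, which is what forces the angular separation between identities.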
The neural network discrimination model adopts the following formula as a conduction formula between neuron layers:
out=w*input+b
where w is the weight matrix composed of (w1, w2, w3, w4), and b is the bias.
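The conduction formula, followed by the PReLU activation the patent adopts, can be sketched as below. The weights, bias and PReLU slope are illustrative assumptions:

```python
import numpy as np

def prelu(x, a=0.25):
    # PReLU: identity for positive inputs, slope a for negative ones.
    return np.where(x > 0, x, a * x)

def layer_forward(inputs, w, b, a=0.25):
    """One layer of the conduction formula out = w * input + b,
    followed by the PReLU activation."""
    return prelu(w @ inputs + b, a)

x = np.array([1.0, -2.0, 0.5, 3.0])      # previous-layer outputs
w = np.array([[0.1, 0.2, 0.3, 0.4],      # one weight row (w1..w4) per neuron
              [-0.5, 0.1, 0.0, 0.2]])
b = np.array([0.05, -0.1])
out = layer_forward(x, w, b)             # one positive, one negative pre-activation
```

Stacking such layers, with convolution and pooling in place of the dense product, gives the forward conduction described above.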
In step S4, the camera is controlled to perform real-time tracking through the following steps:
a. the processor receives video image frames from the camera, sends each frame to the neural network, determines the person type, and obtains the coordinates (x, y) of the power station visitor in the image;
b. the left/right position of the current target person relative to the picture centre is judged from the difference (Δx, Δy) between the coordinates (x, y) and the centre coordinates (X, Y) of the monitoring picture; the processor then generates a left-turn or right-turn PWM signal and outputs it to the motor driver, controlling the pan-tilt head to rotate so that the camera turns to follow the target.
The PWM signal is specifically calculated by the following equation:
U_av = α * U_s
V = α * V_max
where U_av is the average voltage, U_s the peak voltage, V the motor speed under PWM control, V_max the maximum motor speed, and α the duty cycle of the PWM modulation.
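The two relations above can be sketched directly; the peak voltage and maximum speed values below are illustrative assumptions, not figures from the patent:

```python
def pwm_outputs(alpha, u_s=12.0, v_max=3000.0):
    """Average voltage and motor speed for a PWM duty cycle alpha,
    per U_av = alpha * U_s and V = alpha * V_max. The peak voltage u_s
    and maximum speed v_max are illustrative values."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("duty cycle must lie in [0, 1]")
    return alpha * u_s, alpha * v_max
```

Both outputs scale linearly with the duty cycle, which is why modulating α alone suffices to steer the pan-tilt motor.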
In step S5, the server communicates with the remote streaming media server and pushes the video signal to it by stream pushing, sharing the monitoring picture through the following steps:
a remote streaming media server is set up, and the RTMP protocol is deployed on it;
a front-end page is set up on the streaming media server, receiving the pushed stream from the server and presenting it on the page.
The intelligent safety monitoring method for the energy storage power station of the invention optimizes the artificial intelligence recognition model, reduces the computational burden, enables remote real-time processing, and can quickly detect and recognize human figures; at the same time, the recognized target is tracked in real time by a tracking algorithm. The method thus achieves safe real-time monitoring of the energy storage power station, with high reliability, good real-time performance and high efficiency.
Drawings
FIG. 1 is a schematic flow diagram of the method of the present invention.
FIG. 2 is a schematic structural diagram of a neural network discriminant model of the method of the present invention.
FIG. 3 is a schematic diagram of a classical structure of a neural network discriminant model according to the method of the present invention.
FIG. 4 is a diagram of the SSH context module of the neural network discriminant model of the present invention.
Fig. 5 is a schematic diagram of the network structure of resnet50 of the neural network discriminant model of the method of the present invention.
FIG. 6 is a schematic diagram of the PWM principle of the method of the present invention.
Detailed Description
FIG. 1 is a schematic flow chart of the method of the present invention. The invention provides an intelligent safety monitoring method for an energy storage power station, comprising the following steps:
S1, a camera acquires the monitoring picture of a monitoring area in real time;
S2, the camera collects the captured monitoring picture and uploads it to the server;
S3, the server performs image processing on the received video signal and applies a neural network discrimination model to obtain a recognition result and positioning data;
in specific implementation, the neural network discrimination model is constructed by adopting the following steps:
A. a neural network discrimination model is built as shown in FIG. 2. The neural network consists of a hierarchy of layers, whose classic structure is shown in FIG. 3(a): the convolution and pooling layers extract image features, and the fully connected layers generate classification information. Each layer contains a number of neurons; each neuron is shown in FIG. 3(c): its input is the weighted output of the previous layer's neurons, and its output is the activation of the sum of all inputs. Image information is propagated forward through the layers as neuron signals, and by training and modifying the weights between neurons the network can fit the required data distribution, achieving intelligent detection and recognition. The network consists of two parts: detection and recognition; the position (x, y) of the person in the picture is found first, and the detected face is then used for recognition;
B. the detection network is designed for the target detection task, extracting the face from the background by detecting face-related features. Image features are first extracted by several convolution and pooling layers; during feature extraction, the convolution kernels are trained so that the relevant feature filters are generated automatically. To deepen the feature extraction network, ordinary convolution connections are replaced by residual connections, and the activation function is changed to PReLU, which better suits face-feature detection. To handle faces of different scales in the image, a feature pyramid module (feature pyramid network) is introduced into the network: by upsampling low-level semantic features through interpolation and adding lateral connections, low-level semantic information and multi-scale information are used more fully, giving better multi-scale face detection. For multi-scale receptive fields, an Inception-like SSH context module (shown in FIG. 4) is introduced: cascading different numbers of 3x3 convolutions extracts semantic information at different receptive fields, and concatenating this semantic information exploits the context fully and improves the accuracy of the network; finally an SE module assigns channel weights and sends the features to the detection head for regression and classification. Because the face detection algorithm is anchor-based, the extracted face features must be converted, through the regression and classification heads with 1x1 convolutions, into bounding-box classification information and boundary-coordinate regression information, from which the coordinates and size of the face bounding box in the image are obtained. A strong face detector can be trained on public datasets, so the face in the image can be located and detected;
C. the recognition network filters the face features through convolution and pooling layers, based on the assumption that similar faces have similar feature expressions. In this patent, ResNet-50 (shown in FIG. 5) is used as the backbone to extract a 512-dimensional target feature vector, and the network is trained with arcface_loss as the supervision signal to enlarge the differences between feature expressions in angular space, thereby distinguishing the feature expressions of different faces. The face feature vector generated by the network can be compared against a face database to perform recognition;
The arcface_loss expression is:

L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}

where s is the scaling factor, m is the angular margin (offset) factor, and θ is the angle between the normalized weight vector and the normalized feature vector;
neuron interlayer conduction formula:
out=w*input+b
where w is the weight matrix composed of (w1, w2, w3, w4), and b is the bias;
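The recognition step described above compares a generated 512-dimensional feature vector against a face database; this can be sketched with cosine similarity. The database contents and the match threshold below are illustrative assumptions:

```python
import numpy as np

def match_face(query, database, threshold=0.5):
    """Compare a 512-dim face embedding against a database of named
    embeddings by cosine similarity. Returns (best_name, score), or
    (None, score) when no entry clears the threshold. The threshold
    value is an illustrative assumption."""
    q = query / np.linalg.norm(query)
    best_name, best_score = None, -1.0
    for name, vec in database.items():
        score = float(q @ (vec / np.linalg.norm(vec)))
        if score > best_score:
            best_name, best_score = name, score
    if best_score < threshold:
        return None, best_score
    return best_name, best_score
```

Because arcface_loss trains the network to separate identities in angular space, cosine similarity between normalised vectors is the natural comparison metric here.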
s4, controlling the camera to track in real time according to the identification result and the positioning data acquired in the step S3; the method specifically comprises the following steps of controlling a camera to track in real time: the method specifically comprises the following steps of carrying out real-time tracking:
a. the processor receives video image frames from the camera, sends each frame to the neural network, determines the person type (such as a worker or another visitor), and obtains the coordinates (x, y) of the power station visitor in the image;
b. the left/right position of the current target person relative to the picture centre is judged from the difference (Δx, Δy) between the coordinates (x, y) and the centre coordinates (X, Y) of the monitoring picture; the processor then generates a left-turn or right-turn PWM signal and outputs it to the motor driver, and the motor drives the pan-tilt head to rotate so that the camera turns to follow the target; the PWM principle is shown in FIG. 6;
U_av = α * U_s
V = α * V_max
where U_av is the average voltage, U_s the peak voltage, V the motor speed under PWM control, V_max the maximum motor speed, and α the duty cycle of the PWM modulation;
s5, the server communicates with the remote streaming media, and the video signal is pushed to the streaming media server in a streaming pushing mode, so that sharing of monitoring pictures is achieved; the method specifically comprises the following steps of:
a remote streaming media server is set up, and the RTMP protocol is deployed on it;
a front-end page is set up on the streaming media server, receiving the pushed stream from the server and presenting it on the page.
The process of the invention is further illustrated below with reference to one example:
in the embodiment of the invention, an edge camera is arranged to capture a monitoring picture, the camera is a raspberry pi camera, and the captured picture can be directly sent to a neural network model carried by the raspberry pi camera for operation, so that the time delay and the transmission cost are reduced.
The Raspberry Pi processor captures one frame at a resolution of 320 x 240 every 0.5 seconds. The frame is sent to the neural network; the lightweight face detection algorithm detects and locates pedestrian faces in the picture, outputs segmented face bounding boxes, and filters out the image's background noise. In the test example, a 320 x 240 portrait photo was input and the regressed bounding box deviated from the true face boundary by only tens of pixels, meaning faces can be located accurately in an image while background noise is filtered out.
The face crop is sent to a lightweight recognition network for processing. It classifies with high precision based on the pedestrian's facial and clothing features, judges whether the pedestrian is a worker or another visitor, and returns the pedestrian's image coordinates (x, y). In this test example the pedestrian recognition accuracy exceeded ninety percent, accurately distinguishing whether a pedestrian is a person stored in the database.
The processor selects the (x, y) coordinates of the visitor with the largest area in the image and computes the difference (Δx, Δy) from the picture centre (160, 120). From this difference it generates the PWM signal that controls the pan-tilt head, rotating it so that the camera tracks the pedestrian. The camera is mounted on the pan-tilt head, which carries it and rotates smoothly during tracking.
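The target selection and centre-offset logic of this paragraph can be sketched as below, with the embodiment's 320 x 240 frame and (160, 120) centre. The dead band and maximum duty cycle are illustrative tuning assumptions, not values from the patent:

```python
def pan_command(boxes, frame_center=(160, 120), deadband=10, alpha_max=0.9):
    """Decide the pan direction and PWM duty cycle from detected person
    boxes given as (x, y, w, h) tuples. Returns (direction, duty) where
    direction is 'left', 'right' or 'hold'."""
    if not boxes:
        return "hold", 0.0
    # Track the visitor occupying the largest area, as in the embodiment.
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    dx = (x + w / 2) - frame_center[0]   # horizontal offset from centre
    if abs(dx) <= deadband:
        return "hold", 0.0               # close enough: keep the head still
    # Duty cycle grows with the offset, capped at alpha_max.
    duty = min(abs(dx) / frame_center[0], 1.0) * alpha_max
    return ("right" if dx > 0 else "left"), duty
```

The returned duty cycle would then drive the motor through the U_av = α * U_s, V = α * V_max relations given earlier.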
An Nginx server is built and the RTMP protocol is deployed on it, yielding a streaming media server capable of receiving video streams. A front-end page that pulls the stream is deployed on the streaming media server, so the monitored scene can be watched remotely at any time. The video stream data is pushed remotely to the streaming media server over Wi-Fi using the ffmpeg tool.
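The embodiment's push step uses ffmpeg over RTMP; one plausible way to assemble such a command from Python is sketched below. The device path, URL and encoder settings are illustrative assumptions rather than the patent's exact configuration:

```python
def ffmpeg_push_cmd(device="/dev/video0",
                    rtmp_url="rtmp://example.local/live/station1"):
    """Assemble an ffmpeg argument list that pushes a camera feed to an
    RTMP streaming server, matching the embodiment's Nginx + RTMP setup.
    Device path, URL and encoder settings are illustrative assumptions."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", device,                      # camera capture
        "-c:v", "libx264",                               # H.264 encoding
        "-preset", "ultrafast", "-tune", "zerolatency",  # favour low delay
        "-f", "flv", rtmp_url,                           # RTMP carries FLV
    ]
```

RTMP transports an FLV container, hence the "-f flv" output format; the low-latency encoder flags trade compression efficiency for the sub-second delay the embodiment reports.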
Accessing the front-end monitoring page of the streaming media server, pedestrian behaviour characteristics can be seen clearly thanks to the sharp image quality; the video frame rate stays at about 25 frames per second, giving the human eye a smooth viewing experience. The video delay is under one second, fully meeting the real-time requirement of monitoring. When a visitor passes through the monitored scene, the displayed picture follows the visitor's route without a noticeable drop in frame rate; the whole intelligent monitoring system performs well and meets the requirements of an intelligent monitoring system.

Claims (7)

1. An intelligent safety monitoring method for an energy storage power station, comprising the following steps:
S1, a camera acquires the monitoring picture of a monitoring area in real time;
S2, the camera collects the captured monitoring picture and uploads it to the server;
S3, the server performs image processing on the received video signal and applies a neural network discrimination model to obtain a recognition result and positioning data;
S4, the camera is controlled to track in real time according to the recognition result and positioning data obtained in step S3;
and S5, the server communicates with the remote streaming media server, pushing the video signal to it by stream pushing, so that the monitoring picture is shared.
2. The intelligent safety monitoring method for the energy storage power station according to claim 1, wherein the neural network discrimination model in step S3 is constructed as follows:
A. building the neural network discrimination model: the neural network consists of a hierarchy of layers; the convolution and pooling layers extract image features, and the fully connected layers generate classification information. Each layer contains a number of neurons; the input of each neuron is the weighted output of the previous layer's neurons, and its output is the activation of the sum of all inputs. Image information is propagated forward through the layers as neuron signals, and by training and modifying the weights between neurons the network fits the required data distribution and achieves target detection and recognition. The network consists of two parts: detection and recognition;
B. the detection network is designed for the target detection task: it extracts the face data by detecting face-related features. Image features are first extracted by several convolution and pooling layers; during feature extraction, the convolution kernels are trained so that the relevant feature filters are generated automatically. Residual convolution connections are adopted, with PReLU as the activation function. A feature pyramid module is introduced, which upsamples low-level semantic features by interpolation and adds lateral connections; an SSH context module is introduced, which extracts semantic information at different receptive fields by cascading different numbers of 3x3 convolutions; finally an SE module assigns channel weights and sends the features to the detection head for regression and classification. The extracted face features are converted, through the regression and classification heads with 1x1 convolutions, into bounding-box classification information and boundary-coordinate regression information, finally yielding the coordinates and size of the face bounding box in the image;
C. the recognition network filters the face features through convolution and pooling layers. ResNet-50 is adopted as the backbone to extract a 512-dimensional target feature vector, and the network is trained with arcface_loss as the supervision signal to enlarge the differences between feature expressions in angular space, thereby distinguishing the feature expressions of different faces. The face feature vector generated by the network is compared with a face database to perform recognition.
3. The intelligent safety monitoring method for the energy storage power station according to claim 2, wherein the arcface_loss is expressed by the following formula:

L = -\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}

where s is the scaling factor, m is the angular margin (offset) factor, and θ is the angle between the normalized weight vector and the normalized feature vector.
4. The intelligent monitoring method for the safety of the energy storage power station as claimed in claim 2, wherein the neural network discrimination model adopts the following formula as the conduction formula between the neuron layers:
out=w*input+b
where w is the weight matrix composed of (w1, w2, w3, w4), and b is the bias.
5. The intelligent safety monitoring method for the energy storage power station according to claim 2, wherein in step S4 the camera is controlled to perform real-time tracking through the following steps:
a. the processor receives video image frames from the camera, sends each frame to the neural network, determines the person type, and obtains the coordinates (x, y) of the power station visitor in the image;
b. the left/right position of the current target person relative to the picture centre is judged from the difference (Δx, Δy) between the coordinates (x, y) and the centre coordinates (X, Y) of the monitoring picture; the processor then generates a left-turn or right-turn PWM signal and outputs it to the motor driver, controlling the pan-tilt head to rotate so that the camera turns to follow the target.
6. The energy storage power station safety intelligent monitoring method according to claim 5, characterized in that the PWM signal is calculated by the following formula:
U_av = α * U_s
V = α * V_max
where U_av is the average voltage, U_s the peak voltage, V the motor speed under PWM control, V_max the maximum motor speed, and α the duty cycle of the PWM modulation.
7. The intelligent safety monitoring method for the energy storage power station according to any one of claims 1 to 6, wherein in step S5 the server communicates with the remote streaming media server and pushes the video signal to it by stream pushing, sharing the monitoring picture through the following steps:
setting up a streaming media server remotely and deploying the RTMP protocol on it;
setting up a front-end page on the streaming media server, which receives the pushed stream from the server and presents it on the page.
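One common way to realize the push side of step S5 is to hand the camera stream to ffmpeg for RTMP publishing. The device path, URL, and the use of ffmpeg at all are assumptions for illustration; the claim only specifies that RTMP is deployed on the streaming media server:

```python
import subprocess  # used only if the command is actually executed

def build_push_command(source, rtmp_url):
    """Assemble an ffmpeg command that pushes `source` to an RTMP endpoint.

    The tool choice and flag set are illustrative assumptions, not part
    of the claimed method.
    """
    return [
        "ffmpeg",
        "-re",                 # read the input at its native frame rate
        "-i", source,          # e.g. a camera device or capture pipeline
        "-c:v", "libx264",     # H.264 video for FLV/RTMP compatibility
        "-f", "flv",           # RTMP carries an FLV container
        rtmp_url,
    ]

cmd = build_push_command("/dev/video0", "rtmp://example.invalid/live/station1")
# To actually push: subprocess.run(cmd, check=True)
```

The front-end page would then play the stream from the streaming media server, e.g. via an RTMP/FLV-capable web player pointed at the same application path.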
CN202010857536.0A 2020-08-24 2020-08-24 Intelligent safety monitoring method for energy storage power station Pending CN111814761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010857536.0A CN111814761A (en) 2020-08-24 2020-08-24 Intelligent safety monitoring method for energy storage power station

Publications (1)

Publication Number Publication Date
CN111814761A true CN111814761A (en) 2020-10-23

Family

ID=72859147

Country Status (1)

Country Link
CN (1) CN111814761A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713292A (en) * 2016-12-13 2017-05-24 山东交通学院 Ship real-time monitoring system
CN206696995U (en) * 2017-03-23 2017-12-01 华南理工大学 The fire detection and tracks of device of small-sized depopulated helicopter
CN109168031A (en) * 2018-11-06 2019-01-08 杭州云英网络科技有限公司 Streaming Media method for pushing and device, steaming media platform
WO2019127273A1 (en) * 2017-12-28 2019-07-04 深圳市锐明技术股份有限公司 Multi-person face detection method, apparatus, server, system, and storage medium
CN110562299A (en) * 2019-09-12 2019-12-13 辽宁鼎汉奇辉电子系统工程有限公司 Intelligent inspection device for invasion of railway key regional personnel
CN210955484U (en) * 2019-11-22 2020-07-07 武汉微创光电股份有限公司 Traffic condition monitor
CN111405242A (en) * 2020-02-26 2020-07-10 北京大学(天津滨海)新一代信息技术研究院 Ground camera and sky moving unmanned aerial vehicle linkage analysis method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANKANG DENG et al.: "ArcFace: Additive Angular Margin Loss for Deep Face Recognition", arXiv *
JIANKANG DENG et al.: "RetinaFace: Single-stage Dense Face Localisation in the Wild", arXiv *
王博文; 宗碧; 雷铠伊; 伍明亮; 郭鑫伟: "Research on Intelligent Surveillance Cameras: Based on Dlib, OpenCV and a Dual-Servo Pan-Tilt Camera", Modern Information Technology (《现代信息科技》) *
赖保均 et al.: "Deep-Learning-Based Face-Tracking Security Monitoring System", Scientific and Technological Innovation (《科学技术创新》) *

Similar Documents

Publication Publication Date Title
CN109819208B (en) Intensive population security monitoring management method based on artificial intelligence dynamic monitoring
CN109447048B (en) Artificial intelligence early warning system
CN108109385B (en) System and method for identifying and judging dangerous behaviors of power transmission line anti-external damage vehicle
CN112309068B (en) Forest fire early warning method based on deep learning
CN113963315A (en) Real-time video multi-user behavior recognition method and system in complex scene
CN109905423B (en) Intelligent management system
CN108416715A (en) A kind of Campus security management system based on intelligent video camera head
CN110427814A (en) A kind of bicyclist recognition methods, device and equipment again
CN112084928A (en) Road traffic accident detection method based on visual attention mechanism and ConvLSTM network
Fawzi et al. Embedded real-time video surveillance system based on multi-sensor and visual tracking
CN210666820U (en) Pedestrian abnormal behavior detection system based on DSP edge calculation
CN115294519A (en) Abnormal event detection and early warning method based on lightweight network
CN115409992A (en) Remote driving patrol car system
CN115188066A (en) Moving target detection system and method based on cooperative attention and multi-scale fusion
CN111064928A (en) Video monitoring system with face recognition function
CN110324589A (en) A kind of monitoring system and method for tourist attraction
CN107358191B (en) Video alarm detection method and device
CN110807444A (en) Pedestrian abnormal behavior detection system and method based on DSP edge calculation
CN111814761A (en) Intelligent safety monitoring method for energy storage power station
CN110110620A (en) A kind of students ' behavior management system and design method based on recognition of face
CN113033326B (en) Photovoltaic power station construction treading assembly monitoring method
CN113920354A (en) Action recognition method based on event camera
KR102354035B1 (en) System and method for context awareness using sound source based on empirical learning
CN114037937A (en) Real-time refrigerator food material identification method based on multi-target tracking
CN202472689U (en) Face posture detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201023