CN112183313A - SlowFast-based power operation field action identification method

Info

Publication number
CN112183313A
Authority
CN
China
Prior art keywords
video
model
slowfast
action
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011030237.6A
Other languages
Chinese (zh)
Other versions
CN112183313B (en)
Inventor
王波
张迎晨
马富齐
罗鹏
周胤宇
张天
王红霞
马恒瑞
李怡凡
张嘉鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202011030237.6A
Publication of CN112183313A
Application granted
Publication of CN112183313B
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/94: Hardware or software architectures specially adapted for image or video understanding
    • G06V10/955: Hardware or software architectures specially adapted for image or video understanding using specific electronic processors

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for recognizing the actions of personnel at an electric power operation site based on the SlowFast algorithm, and belongs to the field of deep-learning video recognition. The method analyses video images of the operation site using computer vision technology and powerful computing hardware, recognizes the actions of the operator, and compares them with the actions defined in the standard operation procedure. It thereby provides real-time monitoring, effectively improves the quality of on-site operation supervision, and reduces the safety risk of field operations.

Description

SlowFast-based power operation field action identification method
Technical Field
The invention relates to the technical field of electric power operation safety control, and in particular to a SlowFast-based method for recognizing actions at an electric power operation site.
Background
In power production, safety supervision of the operation site is essential for protecting workers. Power generation work is a relatively complicated process that involves many hazardous procedures. The electric power safety regulations lay down rules for electric power operation and require safety measures to prevent electric-shock accidents during work. However, many power operators fail to establish a correct safety mindset on the job and lack sufficient protective awareness, which seriously affects the safety and reliability of the work. When the safety awareness of the personnel involved is low and they do not carry out the work properly according to the characteristics of the operation, the rules and the actual field conditions, the safety of the electric power operation and the safe running of the equipment can be seriously compromised.
At present, electric power field operations are generally supervised through a combination of manual safety monitoring and video monitoring. However, both supervisors and operators are easily distracted by external factors; their attention may lapse, which can lead to safety accidents. In summary, the problems in the current safety supervision of electric power field operators can be summarized as follows:
(1) Although the existing safety supervision methods for field operations are relatively complete, they still rely on manual execution by safety management personnel. There is therefore no guarantee that the methods are fully put into practice, that supervisors comprehensively implement every provision of the safety supervision regulations, or that they can monitor the ongoing work in real time and feed back risk early-warning information promptly.
(2) Video monitoring systems provide effective assistance for safety supervision, but the actual monitoring task still requires considerable manual effort: the content of the surveillance images must be watched and interpreted by people in real time. The information provided by existing video monitoring systems is raw video data, and such systems usually only record video for later evidence collection; they cannot fully play an active, real-time monitoring role.
The invention is based on deep-learning video recognition technology. It analyses video images of the operation site using computer vision and powerful computing hardware, recognizes the actions of the operator, compares them with the actions defined in the standard operation procedure, and thereby provides real-time monitoring. The SlowFast algorithm is a video recognition algorithm with a dual-stream structure: the temporal and spatial information in a video is processed by a fast stream and a slow stream respectively, a data transmission channel is built between the two streams to improve the model's cross-perception of temporal and spatial information, and the feature information extracted by the two streams is finally fused for recognition.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a SlowFast-based method for recognizing actions at an electric power operation site. An algorithm model is trained on the basis of the SlowFast algorithm and the electric power field operation specifications, realizing action recognition for common switching, grounding and electricity-testing operations.
The invention provides a SlowFast-based method for recognizing actions at an electric power operation site, comprising the following steps:
Step one, collect video image samples of the operation site, performing video acquisition for each operation type.
Step two, preprocess and label the video manually: manually crop the acquired video data with the worker at the centre of the frame, normalize the video size, and classify the videos by operation type; extract the actions of each operation process by video editing, and label the edited video segments in order of action.
Step three, construct the model: use the PyTorch framework to extract the image information, texture features, edge information and optical-flow information contained in the video, feed the extracted feature information into a neural network, compute the model parameters with the neural network, and output a video-based action recognition result.
Step four, train the model: perform neural network training on the processed and labelled video data, optimize the model parameters according to the recognition results on the validation and test sets, supplement and re-label the data set for samples that are particularly hard to recognize, feed the new data into the model for continued training, accelerate the computation with professional graphics accelerator cards, and obtain a high-accuracy action recognition model after multiple iterations.
Step five, evaluate the model: verify the recognition performance of the model by acquiring new video data of field operations or by testing the action model on site.
The video clipping and labelling information involved in the second step (manual video preprocessing and labelling) are based on the electric power field operation specification: the actions contained in each complete operation process are segmented according to the action descriptions in the specification, a large number of video segments corresponding to different action descriptions are obtained by clipping, and the action descriptions in the specification are used as labels to classify and annotate the video segments.
The third step, model construction, treats the action recognition task as a spatio-temporal analysis of the action and captures spatio-temporal feature information through directional filtering. Time and space are distinguished by the video frame-sampling interval, drawing on the physiological characteristics of biological vision: spatial analysis of an action concerns slowly changing information such as the colour, texture and lighting of the observed object, whereas temporal analysis concerns rapidly changing information such as the position, posture and direction of its fast-moving parts. Sampling video frames at a large interval yields a small number of frames with a large inter-frame span, which serve as the input for spatial analysis; sampling at a small interval yields a large number of frames with a small span, which serve as the input for temporal analysis. Combining the feature information extracted by the two pathways makes it possible to analyse the semantic information contained in the action and thus to analyse the action in both space and time.
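For clarity, the following sketch shows one way this dual-rate frame sampling could be implemented in PyTorch. It is an illustrative assumption rather than the patented implementation; the 64-frame clip length is hypothetical, while the strides of 16 and 2 frames match the embodiment described later.

```python
import torch

def sample_pathway_inputs(clip: torch.Tensor, slow_stride: int = 16, fast_stride: int = 2):
    """Split one decoded clip into Slow/Fast pathway inputs by temporal stride.

    clip: tensor of shape (C, T, H, W), e.g. 64 decoded RGB frames.
    Returns (slow_frames, fast_frames): a few widely spaced frames for the
    spatial (Slow) analysis, many closely spaced frames for the temporal
    (Fast) analysis.
    """
    slow_frames = clip[:, ::slow_stride]   # e.g. 64 / 16 = 4 frames
    fast_frames = clip[:, ::fast_stride]   # e.g. 64 / 2  = 32 frames
    return slow_frames, fast_frames

# Example with a dummy 64-frame RGB clip at 224x224
clip = torch.randn(3, 64, 224, 224)
slow, fast = sample_pathway_inputs(clip)
print(slow.shape, fast.shape)  # torch.Size([3, 4, 224, 224]) torch.Size([3, 32, 224, 224])
```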
The SlowFast network used in the third step keeps the original video frames as input in the spatial dimension, i.e. the spatial resolution and the visible-light colour information are preserved, which helps the Slow branch extract spatial information about the action. The video frames fed to the temporal dimension are preprocessed, including reducing the spatial resolution and removing the colour information; this both limits the Fast branch's capture of spatial information and enhances its capture of temporal information.
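As an illustration of this preprocessing asymmetry, the sketch below keeps the Slow input unchanged while reducing the spatial resolution of the Fast input and discarding its colour information. The grayscale conversion and the downscaling factor are assumptions for illustration, not the patent's exact preprocessing.

```python
import torch
import torch.nn.functional as F

def preprocess_pathway_inputs(slow_frames: torch.Tensor, fast_frames: torch.Tensor,
                              downscale: int = 2):
    """Keep the Slow input at full resolution with RGB colour, but reduce the
    spatial resolution of the Fast input and remove its colour information,
    weakening spatial capture and emphasizing temporal capture.

    Both inputs have shape (N, C, T, H, W).
    """
    # Slow branch: original frames, untouched (full resolution, RGB).
    # Fast branch: grayscale (mean over RGB channels) and spatially downsampled.
    fast_gray = fast_frames.mean(dim=1, keepdim=True)                # (N, 1, T, H, W)
    n, c, t, h, w = fast_gray.shape
    fast_small = F.interpolate(fast_gray, size=(t, h // downscale, w // downscale),
                               mode="trilinear", align_corners=False)
    return slow_frames, fast_small

slow = torch.randn(1, 3, 4, 224, 224)
fast = torch.randn(1, 3, 32, 224, 224)
slow_out, fast_out = preprocess_pathway_inputs(slow, fast)
print(fast_out.shape)  # torch.Size([1, 1, 32, 112, 112])
```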
In the spatial dimension the SlowFast network receives fewer video frames, but the key information to be analysed and extracted is larger in amount, more complex and of finer granularity, so the Slow branch generates a large amount of computation and occupies most of the computing power.
The SlowFast network contains a unidirectional lateral connection channel from the Fast branch to the Slow branch, whose purpose is to fuse the two kinds of feature information. Because the two branches receive different numbers of video frames, the dimensions of their feature maps also differ; during lateral connection the Fast-branch feature maps must therefore be rescaled by a 3D convolution operation to match the dimensions of the Slow-branch feature maps before the features can be fused.
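A minimal sketch of such a lateral connection is given below, assuming the Fast pathway samples eight times more frames than the Slow pathway, a time-strided 5x1x1 kernel as in the embodiment, and fusion by channel-wise concatenation; the channel counts and the concatenation choice are assumptions.

```python
import torch
import torch.nn as nn

class LateralConnection(nn.Module):
    """Rescale a Fast-pathway feature map so it can be fused with the Slow pathway.

    The Fast feature map has alpha times more frames than the Slow one, so a
    time-strided 3D convolution (kernel 5x1x1, stride alpha in time) matches
    the temporal dimension before the two maps are fused.
    """
    def __init__(self, fast_channels: int, alpha: int = 8):
        super().__init__()
        self.transform = nn.Conv3d(
            fast_channels, 2 * fast_channels,
            kernel_size=(5, 1, 1), stride=(alpha, 1, 1), padding=(2, 0, 0))

    def forward(self, slow_feat: torch.Tensor, fast_feat: torch.Tensor) -> torch.Tensor:
        fused = self.transform(fast_feat)            # (N, 2*Cf, T_slow, H, W)
        return torch.cat([slow_feat, fused], dim=1)  # channel-wise fusion

# Example with hypothetical sizes: Slow (1, 64, 4, 56, 56), Fast (1, 8, 32, 56, 56)
slow_feat = torch.randn(1, 64, 4, 56, 56)
fast_feat = torch.randn(1, 8, 32, 56, 56)
out = LateralConnection(fast_channels=8, alpha=8)(slow_feat, fast_feat)
print(out.shape)  # torch.Size([1, 80, 4, 56, 56])
```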
During model training of the SlowFast network, the input data can be augmented to obtain a more robust model; the augmentation methods include Gaussian blur, random illumination and horizontal flipping.
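One possible frame-level implementation of these three augmentations with torchvision is sketched below; the blur kernel, brightness range and probabilities are assumptions.

```python
import torch
from torchvision import transforms

# Frame-level augmentation: Gaussian blur, random illumination (brightness
# jitter) and horizontal flip. In practice the same random parameters should
# be shared across all frames of one clip so the clip stays consistent.
augment = transforms.Compose([
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.5),
    transforms.ColorJitter(brightness=0.4),   # random illumination change
    transforms.RandomHorizontalFlip(p=0.5),
])

frame = torch.rand(3, 224, 224)   # one RGB frame with values in [0, 1]
augmented = augment(frame)
print(augmented.shape)            # torch.Size([3, 224, 224])
```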
The model training of the SlowFast network involves 3D convolution operations, and a multi-GPU parallel computing mode is adopted. This effectively improves training efficiency, lets the model parameters converge quickly to a satisfactory minimum, and yields higher recognition accuracy on the validation and test sets.
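A minimal sketch of such a multi-GPU setup with PyTorch DistributedDataParallel is shown below, including the learning-rate adjustment by GPU count mentioned in claim 8; the base learning rate, optimizer settings and linear scaling rule are assumptions, and the script is expected to be launched with one process per GPU (e.g. via torchrun).

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_training(model: torch.nn.Module, base_lr: float = 0.01):
    """Wrap the model for multi-GPU training and scale the learning rate
    with the number of GPUs (world size)."""
    dist.init_process_group(backend="nccl")            # one process per GPU
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    lr = base_lr * dist.get_world_size()                # linear LR scaling (assumed)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                                weight_decay=1e-4)
    return model, optimizer
```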
Compared with the prior art, the invention has the following advantages:
Drawings
FIG. 1 is a schematic view of the model structure of the present invention
Detailed Description
The embodiments of the invention are further described below with reference to the accompanying drawings.
the invention provides a method for recognizing actions of an electric power operation site based on SlowFast, which comprises the following steps of:
firstly, collecting a video image sample of an operation site, and carrying out video collection aiming at each operation type;
secondly, preprocessing and labeling the manual video, cutting the acquired video data by manual work with the center of a manual picture, normalizing the video size, classifying according to the operation type, extracting the action in each operation process through video editing, and labeling the edited video segments according to the action sequence;
thirdly, constructing a model, extracting image information, texture features, edge information and optical flow information contained in the video by using a Pythrch frame, inputting the extracted feature information into a neural network, calculating model parameters by the neural network, and outputting a motion recognition result based on the video;
fourthly, model training, namely performing neural network training based on processed labeled video data, optimizing model parameters through a verification set and a test set identification result, supplementing and correcting and labeling a data set for special identification difficult samples, inputting new data into a model for continuous training, performing auxiliary calculation through a professional graphic calculation accelerator card, and obtaining a high-precision action identification model after multiple iterations;
and fifthly, evaluating the model, and verifying the identification effect of the model by acquiring new video data of field operation or testing the action model on the spot.
The first step is implemented as follows:
Five visible-light cameras are prepared and arranged on the operation site around the operator, so that the operation is filmed from five viewing angles. Each operation is repeated several times, and videos of the three operation types (switching, grounding and electricity testing) are recorded.
The second step is implemented as follows:
(1) classify the video data acquired in the first step according to operation type;
(2) obtain the standard actions of each operation from the electric power field operation specification and compile a label list; for example, in the label list of the switching operation the row headers are the serial numbers of the operation steps and the column headers are the specific action names of each step (a minimal sketch of such a label list follows this list);
(3) segment each complete operation video into actions according to the label list, so that the action contained in each video segment corresponds to a specific action name in the label list;
(4) reorganize the clipped video segments according to operation type.
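For illustration, such a label list could be held as an ordered mapping from step number to action name; the action names below are hypothetical examples, not the wording of the operation specification.

```python
# Hypothetical label list for the switching operation: key = step number
# (row header), value = action name (column header) from the specification.
switching_labels = {
    1: "verify the equipment nameplate",
    2: "insert and turn the operating key",
    3: "pull the isolating switch handle",
    4: "check the switch position indicator",
}

# Each clipped video segment is then annotated with (operation_type, step_no),
# which maps back to a specific action name:
annotation = {"operation": "switching", "step": 3}
print(switching_labels[annotation["step"]])
```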
The third step is implemented as follows:
(1) install the algorithm on a supercomputing platform and set up the required development environment;
(2) the SlowFast network captures spatio-temporal feature information through directional filtering, with time and space distinguished by the frame-sampling interval: frames sampled at an interval of 16 frames (a small number of frames with a large inter-frame span) are used as the input of the Slow branch, and frames sampled at an interval of 2 frames (a large number of frames with a small span) are used as the input of the Fast branch; combining the feature information extracted by the two pathways effectively analyses the semantic information contained in the action and realizes its analysis in time and space;
(3) in the spatial dimension the SlowFast network keeps the original video frames as input, i.e. the 1080P spatial resolution and the visible-light RGB colour information are retained;
(4) in the spatial dimension the SlowFast network receives fewer video frames, but the key information to be analysed and extracted is larger in amount, more complex and of finer granularity, so a large amount of computation is generated, accounting for roughly 80% of the computing power; in the temporal dimension more frames are input, but the key information to be extracted is smaller in amount, simpler and of coarser granularity, so little computation is generated, consuming only about 20% of the computing power;
(5) the SlowFast network contains a unidirectional lateral connection channel from the Fast branch to the Slow branch, whose purpose is to fuse the two kinds of feature information; because the two branches receive different numbers of frames, their feature maps have different dimensions, so during lateral connection the Fast-branch feature maps are rescaled with a 5x1x1 3D convolution kernel and summed with the Slow-branch feature maps to realize feature fusion;
(6) after the feature information extracted by the two branches is concatenated, it is fed into a fully connected layer for further feature extraction;
(7) the features extracted in step (6) are fed into a sigmoid regression layer for regression calculation to obtain a predicted value;
(8) the action label corresponding to the predicted value is looked up in the label list, giving the predicted action (a sketch of this recognition head follows this list).
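A simplified sketch of the recognition head in items (6) to (8): the features pooled from the two branches are concatenated, passed through a fully connected layer, and scored by a sigmoid regression layer, and the highest-scoring index is looked up in the label list. The feature dimensions, class count and label names are hypothetical.

```python
import torch
import torch.nn as nn

class RecognitionHead(nn.Module):
    """Concatenate pooled Slow/Fast features, apply a fully connected layer
    and a sigmoid regression layer to score each action class."""
    def __init__(self, slow_dim: int = 2048, fast_dim: int = 256, num_actions: int = 4):
        super().__init__()
        self.fc = nn.Linear(slow_dim + fast_dim, num_actions)

    def forward(self, slow_feat: torch.Tensor, fast_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([slow_feat, fast_feat], dim=1)  # series connection
        return torch.sigmoid(self.fc(fused))              # per-class scores

# Hypothetical label list and a dummy forward pass
action_labels = ["verify nameplate", "insert key", "pull switch handle", "check indicator"]
head = RecognitionHead(slow_dim=2048, fast_dim=256, num_actions=len(action_labels))
scores = head(torch.randn(1, 2048), torch.randn(1, 256))
predicted_action = action_labels[int(scores.argmax(dim=1))]
print(predicted_action)
```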
The fourth step is implemented as follows:
(1) feed the prepared action data set into the model, which automatically extracts video frames;
(2) apply data augmentation to the video frames extracted in step (1); the augmentation methods include Gaussian blur, random illumination and horizontal flipping;
(3) feed the augmented data from step (2) into the Slow branch and the Fast branch respectively, according to the preset configuration;
(4) after the two branches have performed a series of feature extraction and feature fusion operations, feed the feature vectors into the sigmoid regression layer for regression calculation to obtain a predicted value;
(5) look up the action label corresponding to the predicted value in the label list to obtain the predicted action;
(6) compare the predicted action labels obtained in step (5) with the ground-truth action labels of the validation set and compute the prediction accuracy on the validation set;
(7) adjust the model parameters according to the validation accuracy obtained in step (6) and continue iterative training (a minimal sketch of this validation loop follows this list);
(8) the prediction accuracy on the test set achieved by the best model parameters obtained after multiple rounds of training is taken as the prediction accuracy of the final model.
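The validation stage in items (5) to (7) can be sketched as follows; the model and data loader are placeholders, and the assumption is that the loader yields Slow frames, Fast frames and ground-truth labels per batch.

```python
import torch

@torch.no_grad()
def validate(model, val_loader, device="cuda"):
    """Compare predicted action labels with the ground-truth labels of the
    validation set and return the prediction accuracy."""
    model.eval()
    correct, total = 0, 0
    for slow_frames, fast_frames, labels in val_loader:
        scores = model(slow_frames.to(device), fast_frames.to(device))
        predicted = scores.argmax(dim=1).cpu()
        correct += (predicted == labels).sum().item()
        total += labels.numel()
    return correct / max(total, 1)

# Iterative training: adjust the parameters while the validation accuracy
# improves, then report the test-set accuracy of the best parameters as the
# final model's prediction accuracy.
```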
The fifth step is implemented as follows:
(1) arrange a camera on site to film the operator, and feed the video stream directly into the model through a data transmission interface (a streaming-inference sketch follows this list);
(2) the model predicts the operator's actions in real time and the prediction results are recorded;
(3) analyse the practical effect of the model according to the prediction results of step (2), and enlarge the data set and adjust the model parameters accordingly.
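A minimal sketch of streaming inference for this evaluation step, assuming OpenCV for capture and a sliding buffer of frames fed to the trained model; the buffer length, resizing, frame strides and the model object are assumptions.

```python
import collections
import cv2
import torch

def monitor_stream(model, action_labels, source=0, clip_len=64, device="cuda"):
    """Read frames from an on-site camera, keep a sliding clip buffer and
    predict the operator's action in real time, recording each result."""
    capture = cv2.VideoCapture(source)        # camera index or RTSP URL
    buffer = collections.deque(maxlen=clip_len)
    results = []
    model.eval()
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # OpenCV returns BGR frames; convert to RGB if the model expects RGB.
        frame = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        buffer.append(torch.from_numpy(frame).permute(2, 0, 1).float() / 255.0)
        if len(buffer) == clip_len:
            clip = torch.stack(list(buffer), dim=1).unsqueeze(0).to(device)  # (1, C, T, H, W)
            with torch.no_grad():
                slow, fast = clip[:, :, ::16], clip[:, :, ::2]
                scores = model(slow, fast)
            results.append(action_labels[int(scores.argmax(dim=1))])
    capture.release()
    return results
```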

Claims (8)

1. A SlowFast-based method for recognizing actions at an electric power operation site, characterized by comprising the following steps:
step one, collecting video image samples of the operation site: performing video acquisition for each operation type;
step two, preprocessing and labelling the video manually: manually cropping the acquired video data with the worker at the centre of the frame, normalizing the video size, and classifying the videos by operation type; extracting the actions of each operation process by video editing, and labelling the edited video segments in order of action;
step three, constructing a model: extracting the image information, texture features, edge information and optical-flow information contained in the video with the PyTorch framework, feeding the extracted feature information into a neural network, computing the model parameters with the neural network, and outputting a video-based action recognition result;
step four, model training: performing neural network training on the processed and labelled video data, further optimizing the model parameters according to the recognition results, supplementing and re-labelling the data set for samples with low recognition accuracy, feeding the new data into the model for continued training, accelerating the computation with professional graphics accelerator cards, and obtaining a high-accuracy action recognition model after multiple iterations.
2. The SlowFast-based method for recognizing actions at an electric power operation site according to claim 1, characterized in that: in the first step, the video image samples of the operation site are collected by arranging a plurality of visible-light cameras on site to film the operators, recording action videos from multiple viewpoints and against multiple backgrounds, and collecting rich action video data by varying the operation scene and the illumination conditions.
3. The SlowFast-based method for recognizing actions at an electric power operation site according to claim 1, characterized in that: the video clipping and labelling information involved in the second step (manual video preprocessing and labelling) are based on the electric power field operation specification; the actions contained in each complete operation process are segmented according to the action descriptions in the specification, a large number of video segments corresponding to different action descriptions are obtained by clipping, and the action descriptions in the specification are used as labels to classify and annotate the video segments.
4. The SlowFast-based method for recognizing actions at an electric power operation site according to claim 1, characterized in that: the third step of constructing a model adopts a SlowFast network whose backbone is a 3D convolutional neural network (3D ResNet); the model extracts frames from the input video segment, taking a series of frames with a large temporal interval as the input of the Slow branch and a series of frames with a small temporal interval as the input of the Fast branch; the two branches process the frame information in parallel and extract features; after a series of convolution operations the vectors containing the feature parameters are concatenated and fed into a fully connected layer, which in turn feeds the computed feature vectors into a sigmoid regression layer for regression calculation to obtain a classification result.
5. The model construction according to claim 4, characterized in that: the features extracted after each scale transformation of the Fast branch are simultaneously fed into the Slow branch, forming a plurality of lateral connections for feature information fusion.
6. The model construction according to claim 5, characterized in that: the feature information fusion performs scale matching by means of 3D convolution in the lateral connections.
7. The SlowFast-based method for recognizing actions at an electric power operation site according to claim 1, characterized in that: the model training in the fourth step adopts data augmentation, the augmentation methods including Gaussian blur, random illumination and horizontal flipping.
8. The SlowFast-based method for recognizing actions at an electric power operation site according to claim 7, characterized in that: the model training in the fourth step adopts a multi-GPU parallel computing mode, and the learning rate in the SlowFast network is adjusted according to the number of GPUs.
CN202011030237.6A (filed 2020-09-27, priority 2020-09-27): SlowFast-based power operation field action identification method; granted as CN112183313B; status: Expired - Fee Related

Priority Applications (1)

Application Number: CN202011030237.6A; Priority Date: 2020-09-27; Filing Date: 2020-09-27; Title: SlowFast-based power operation field action identification method (granted as CN112183313B)

Applications Claiming Priority (1)

Application Number: CN202011030237.6A; Priority Date: 2020-09-27; Filing Date: 2020-09-27; Title: SlowFast-based power operation field action identification method (granted as CN112183313B)

Publications (2)

Publication Number Publication Date
CN112183313A (en) 2021-01-05
CN112183313B CN112183313B (en) 2022-03-11

Family

ID=73945013

Family Applications (1)

Application Number: CN202011030237.6A; Title: SlowFast-based power operation field action identification method; Priority Date: 2020-09-27; Filing Date: 2020-09-27; Status: Expired - Fee Related (granted as CN112183313B)

Country Status (1)

Country Link
CN (1) CN112183313B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942009A (en) * 2019-11-22 2020-03-31 南京甄视智能科技有限公司 Fall detection method and system based on space-time hybrid convolutional network
CN111523421A (en) * 2020-04-14 2020-08-11 上海交通大学 Multi-user behavior detection method and system based on deep learning and fusion of various interaction information
CN111487624A (en) * 2020-04-23 2020-08-04 上海眼控科技股份有限公司 Method and equipment for predicting rainfall capacity
CN111680646A (en) * 2020-06-11 2020-09-18 北京市商汤科技开发有限公司 Motion detection method and device, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Christoph Feichtenhofer et al., "SlowFast Networks for Video Recognition", 2019 IEEE/CVF International Conference on Computer Vision *
Zhang Lijuan et al., "Short video classification based on deep multimodal feature fusion", Journal of Beijing University of Aeronautics and Astronautics *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111825A (en) * 2021-04-22 2021-07-13 北京房江湖科技有限公司 Construction monitoring method and device, electronic equipment and storage medium
CN113297914A (en) * 2021-04-26 2021-08-24 云南电网有限责任公司信息中心 Distribution network field operation electricity testing action recognition method
CN113723169A (en) * 2021-04-26 2021-11-30 中国科学院自动化研究所 Behavior identification method, system and equipment based on SlowFast
CN113723169B (en) * 2021-04-26 2024-04-30 中国科学院自动化研究所 SlowFast-based behavior recognition method, system and equipment
WO2022235593A3 (en) * 2021-05-02 2022-12-29 The Trustees Of Dartmouth College System and method for detection of health-related behaviors
CN113158970A (en) * 2021-05-11 2021-07-23 清华大学 Action identification method and system based on fast and slow dual-flow graph convolutional neural network
CN113743306A (en) * 2021-09-06 2021-12-03 浙江广厦建设职业技术大学 Method for analyzing abnormal behaviors of real-time intelligent video monitoring based on slowfast double-frame rate
CN114937028A (en) * 2022-06-21 2022-08-23 苏州上舜精密工业科技有限公司 Intelligent identification-based quality detection method and system for linear sliding table module
CN114937028B (en) * 2022-06-21 2023-12-08 苏州上舜精密工业科技有限公司 Intelligent identification and recognition linear sliding table module quality detection method and system
CN115035458A (en) * 2022-07-06 2022-09-09 中国安全生产科学研究院 Safety risk evaluation method and system
CN115035458B (en) * 2022-07-06 2023-02-03 中国安全生产科学研究院 Safety risk evaluation method and system

Also Published As

Publication number Publication date
CN112183313B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN112183313B (en) SlowFast-based power operation field action identification method
CN110826538B (en) Abnormal off-duty identification system for electric power business hall
CN110084165B (en) Intelligent identification and early warning method for abnormal events in open scene of power field based on edge calculation
CN113324864B (en) Pantograph carbon slide plate abrasion detection method based on deep learning target detection
CN111652225A (en) Non-invasive camera reading method and system based on deep learning
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
CN111401308B (en) Fish behavior video identification method based on optical flow effect
CN113920097A (en) Power equipment state detection method and system based on multi-source image
CN114298948A (en) Ball machine monitoring abnormity detection method based on PSPNet-RCNN
CN113139489A (en) Crowd counting method and system based on background extraction and multi-scale fusion network
CN114882440A (en) Human head detection method and system
CN115035088A (en) Helmet wearing detection method based on yolov5 and posture estimation
CN108174198B (en) Video image quality diagnosis analysis detection device and application system
CN115311740A (en) Method and system for recognizing abnormal human body behaviors in power grid infrastructure site
CN115661757A (en) Automatic detection method for pantograph arcing
CN108764287B (en) Target detection method and system based on deep learning and packet convolution
CN113077423A (en) Laser selective melting pool image analysis system based on convolutional neural network
CN113469938A (en) Pipe gallery video analysis method and system based on embedded front-end processing server
CN109934172B (en) GPS-free full-operation line fault visual detection and positioning method for high-speed train pantograph
Guo et al. Anomaly detection of trackside equipment based on semi-supervised and multi-domain learning
CN114140731B (en) Traction substation abnormality detection method
CN114841932A (en) Foreign matter detection method, system, equipment and medium for photovoltaic panel of photovoltaic power station
CN114743257A (en) Method for detecting and identifying image target behaviors
CN114677667A (en) Transformer substation electrical equipment infrared fault identification method based on deep learning
CN110598569B (en) Action recognition method based on human body posture data

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
CF01  Termination of patent right due to non-payment of annual fee

Granted publication date: 20220311