CN113920461A - Power grid operation and maintenance process image monitoring system and monitoring method - Google Patents
- Publication number
- CN113920461A (application CN202111177600.1A)
- Authority
- CN
- China
- Prior art keywords: image, unit, module, camera, storage unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A42—HEADWEAR
- A42B—HATS; HEAD COVERINGS
- A42B3/00—Helmets; Helmet covers; Other protective head coverings
- A42B3/04—Parts, details or accessories of helmets
- A42B3/0406—Accessories for helmets
- A42B3/042—Optical devices
- A—HUMAN NECESSITIES
- A42—HEADWEAR
- A42B—HATS; HEAD COVERINGS
- A42B3/00—Helmets; Helmet covers; Other protective head coverings
- A42B3/04—Parts, details or accessories of helmets
- A42B3/30—Mounting radio sets or communication systems
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The invention provides an image monitoring system and method for the power grid operation and maintenance process, belonging to the field of power grid operation and maintenance monitoring. The system comprises an image acquisition unit, a first image storage unit, an image recognition unit, a second image storage unit, an image analysis unit, a remote monitoring unit and an early warning guidance unit, connected in sequence. The image acquisition unit transmits acquired real-time images to the first image storage unit for temporary storage; the first storage unit passes the image data to the image recognition unit, which identifies it and stores the results in the second image storage unit by category; the image analysis unit processes the stored image data; the remote monitoring unit retrieves the analysis results of the image analysis unit and displays site image information in real time; and the early warning guidance unit issues on-site warnings upon receiving warning instructions from the remote monitoring unit. The system can monitor real-time image information in a power transformation operation and maintenance scene and warn and guide workers on site.
Description
Technical Field
The invention belongs to the field of power grid operation and maintenance monitoring, and particularly relates to a power grid operation and maintenance process image monitoring system and a monitoring method.
Background
Image acquisition is the process by which a camera captures an optical image and converts it into a video signal, which is transmitted to an image acquisition card and digitized into image data that a computer can process and store. Image acquisition has two key indicators: gray scale and acquisition resolution. The optical signals collected by the vision sensor are converted into electrical signals which, after spatial sampling and amplitude quantization, form a digital image. Image acquisition generally falls into two categories: static image acquisition, i.e. taking a picture to obtain an image at a single moment; and dynamic image acquisition, i.e. recording video to obtain a sequence of images over a period of time.
Image acquisition technology for power equipment centers on capturing high-quality images, aiming to include in each captured image as much effective information about the equipment's condition as possible. Existing acquisition technology suffers from low efficiency, a single acquisition means and poor image quality, and cannot automatically sense the environment and adjust accordingly. More importantly, the captured images must be transmitted back to the system's backend for centralized processing, which affects the stability and complexity of the system.
At present, large numbers of video monitoring systems are installed in power plants and substations; they can acquire images of field equipment, monitor the site, control the motion of remote cameras and record digital video. However, most of these video acquisition and monitoring systems provide only a video monitoring function and lack a video image recognition module.
Disclosure of Invention
The invention aims to provide an image monitoring system and monitoring method for the power grid operation and maintenance process, used for remote monitoring, early warning and guidance during the power transformation operation and maintenance process.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
An image monitoring system for the power grid operation and maintenance process comprises an image acquisition unit, a first image storage unit, an image recognition unit, a second image storage unit, an image analysis unit, a remote monitoring unit and an early warning guidance unit, connected in sequence. The image acquisition unit transmits acquired real-time images to the first image storage unit for temporary storage; the first storage unit passes the temporarily stored image data to the image recognition unit, which identifies it and stores the results in the second image storage unit by category; the image analysis unit retrieves data from the second image storage unit for analysis and processing; the remote monitoring unit retrieves the stored data and displays, in real time, the image information acquired on site together with the analysis results of the image analysis unit; and the early warning guidance unit issues an on-site warning upon receiving a warning instruction from the remote monitoring unit.
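The unit chain above is, in effect, a linear dataflow: acquire, buffer, recognize and classify, analyze, monitor, warn. A minimal Python sketch of that flow follows; all class, function and threshold names here are illustrative assumptions, not anything specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source: str          # which camera captured the frame (illustrative)
    data: bytes          # raw image payload
    label: str = ""      # filled in by the recognition unit

class ImageMonitoringPipeline:
    """Toy sketch of the unit chain: acquisition -> first storage ->
    recognition -> second storage (by category) -> analysis -> warning."""

    def __init__(self, recognize, analyze, alert_threshold):
        self.first_storage = []      # temporary store for raw frames
        self.second_storage = {}     # category label -> recognized frames
        self.recognize = recognize   # stands in for the image recognition unit
        self.analyze = analyze       # stands in for the image analysis unit
        self.alert_threshold = alert_threshold
        self.warnings = []           # instructions issued to the warning unit

    def ingest(self, frame):
        self.first_storage.append(frame)           # acquisition -> first storage
        frame.label = self.recognize(frame)        # recognition unit classifies
        self.second_storage.setdefault(frame.label, []).append(frame)

    def monitor(self):
        # The remote monitoring unit retrieves each category's analysis
        # result and, when a risk score crosses the threshold, instructs
        # the early warning guidance unit.
        for label, frames in self.second_storage.items():
            risk = self.analyze(frames)
            if risk >= self.alert_threshold:
                self.warnings.append((label, risk))
        return self.warnings
```

The `recognize` and `analyze` callables are deliberately left abstract; in the patent they correspond to the recognition modules and the image analysis unit respectively.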
Further, in some embodiments of the present invention, the image acquisition unit includes a substation device state acquisition camera, a substation environment state acquisition infrared camera, a human behavior state and gesture motion acquisition camera, and a tool state acquisition camera.
Further, in some embodiments of the present invention, the image capturing unit further comprises a wearable video capturing intelligent terminal.
Preferably, in some embodiments of the present invention, the wearable video capture intelligent terminal is an intelligent safety helmet. The helmet comprises a helmet body together with a WIFI wireless communication module, a 4G wireless communication module, a differential global satellite navigation positioning module, a battery module, a computing module, a vision processing module, a 9-axis inertial sensor, a fisheye camera, an eye-tracking camera and a TOF camera disposed on the helmet body. The vision processing module is electrically connected to the 9-axis inertial sensor, the fisheye camera, the eye-tracking camera and the TOF camera; the computing module is electrically connected to the WIFI wireless communication module, the 4G wireless communication module and the differential global satellite navigation positioning module. The 9-axis inertial sensor, the fisheye camera, the eye-tracking camera and the TOF camera are disposed on the front side of the helmet body, while the vision processing module, the WIFI wireless communication module, the 4G wireless communication module, the differential global satellite navigation positioning module, the battery module and the computing module are disposed on the top of the helmet body.
Further, in some embodiments of the present invention, the helmet body is further provided with an infrared thermal imaging camera, which is disposed on the front side of the helmet body and electrically connected to the vision processing module.
Further, in some embodiments of the present invention, an ultra-wideband wireless positioning module is further disposed on the top of the helmet body and is electrically connected to the computing module.
Further, in some embodiments of the present invention, the helmet body is further provided with a speaker, a microphone and an indicator light for warning workers; the speaker and the microphone are each electrically connected to the vision processing module through a codec, and the indicator light is electrically connected to the vision processing module.
Further, in some embodiments of the present invention, the image recognition unit includes a meter reading recognition module, a disconnecting switch on/off state recognition module, a protection pressing plate state recognition module for switch cabinets and protection screens, a multi-target personnel operation behavior recognition module, and a tool posture and trajectory recognition module.
The invention also provides a method for monitoring the operation and maintenance of the power grid by using the image monitoring system in the operation and maintenance process of the power grid, which comprises the following steps:
(1) transformer substation model construction
Substation equipment in the substation operation and maintenance scene is scanned with a vision-based three-dimensional laser radar to obtain point cloud data of the substation; a scene base map is modeled in Unity3d to establish an image base map; a three-dimensional live-action substation is generated through data registration, noise removal, model generation and texture mapping; and the resulting data are stored in the first image storage unit;
(2) image acquisition
The image acquisition unit of the power grid operation and maintenance process image monitoring system acquires video images of the substation equipment state, the behavior state and gesture actions of workers, the state and trajectory of tools and instruments, and the environmental temperature, and stores them in the first image storage unit;
(3) state recognition
The image recognition unit fuses the video image data collected by the image acquisition unit with the substation equipment model data to recognize the substation equipment state, the worker state and the tool state, then stores the recognized data in the second image storage unit by category;
(4) guiding early warning
The remote monitoring unit retrieves the data stored in the second image storage unit, the image analysis unit analyzes and processes the data, the processing result is sent through the remote monitoring unit to the early warning guidance unit, and the early warning guidance unit warns or guides the power transformation operation and maintenance workers on site.
The beneficial technical effects of the invention are as follows. The power grid operation and maintenance process image monitoring system can monitor real-time image information in a power transformation operation and maintenance scene, and can remind and remotely guide, in real time, operators who enter the work site with non-standard behaviors such as an incorrectly worn safety helmet, irregular work clothes, missing or unfastened safety belts during work at height, improper use of safety ropes, or failure to wear insulating shoes and gloves when operating grounding wires, thereby improving the standardization of the field operation behaviors of power transformation operation and maintenance personnel. During field work, the system can automatically identify equipment in the scene, analyze personnel motion trajectories and the current task progress, intelligently prompt work danger points, operation steps and operation methods, and interactively guide operation and maintenance personnel through inspection, examination and operation, reducing memory load and improving the speed, accuracy and standardization of field operations.
Drawings
Fig. 1 is a schematic structural diagram of an image monitoring system in a power grid operation and maintenance process according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an intelligent safety helmet in an image monitoring system in a power grid operation and maintenance process according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of an intelligent safety helmet in an image monitoring system in a power grid operation and maintenance process according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for detecting joint points of a human body according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the detection of joint points of a hand according to an embodiment of the present invention;
FIG. 6 is a flowchart of an exemplary tool trajectory extraction algorithm;
the labels in the figures are: 1-helmet body, 101-vision processing module, 102-9-axis inertial sensor, 103-infrared thermal imaging camera, 104-microphone, 105-loudspeaker, 106-fisheye camera, 107-eye-tracking camera, 108-TOF camera, 109-indicator light, 110-key, 111-codec, 112-computing unit, 113-WIFI wireless communication unit, 114-4G wireless communication unit, 115-ultra-wideband wireless positioning unit, 116-differential global satellite navigation positioning unit, 117-battery.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Referring to figs. 1-3, the image monitoring system for the power grid operation and maintenance process provided in this embodiment includes an image acquisition unit, a first image storage unit, an image recognition unit, a second image storage unit, an image analysis unit, a remote monitoring unit and an early warning guidance unit, connected in sequence. The image acquisition unit transmits acquired real-time images to the first image storage unit for temporary storage; the first storage unit passes the temporarily stored image data to the image recognition unit, which identifies it and stores the results in the second image storage unit by category; the image analysis unit retrieves data from the second image storage unit for analysis and processing; the remote monitoring unit retrieves the stored data and displays, in real time, the image information acquired on site together with the analysis results of the image analysis unit; and the early warning guidance unit issues an on-site warning upon receiving a warning instruction from the remote monitoring unit.
The image recognition unit comprises a meter reading recognition module, a disconnecting switch on/off state recognition module, a protection pressing plate state recognition module for switch cabinets and protection screens, a multi-target personnel operation behavior recognition module, and a tool posture and trajectory recognition module.
The image acquisition unit comprises a transformer substation equipment state acquisition camera, a transformer substation environment state acquisition infrared camera, a human body behavior state and gesture motion acquisition camera and a tool state acquisition camera.
The image acquisition unit further comprises a wearable video capture intelligent terminal, here an intelligent safety helmet. The helmet comprises a helmet body together with a WIFI wireless communication module, a 4G wireless communication module, a differential global satellite navigation positioning module, a battery module, a computing module, a vision processing module, a 9-axis inertial sensor, a fisheye camera, an eye-tracking camera and a TOF camera arranged on the helmet body. The vision processing module is electrically connected to the 9-axis inertial sensor, the fisheye camera, the eye-tracking camera, the TOF camera, the computing module and the battery module; the computing module is electrically connected to the WIFI wireless communication module, the 4G wireless communication module and the differential global satellite navigation positioning module. The 9-axis inertial sensor, the fisheye camera, the eye-tracking camera and the TOF camera are arranged on the front side of the helmet body, while the vision processing module, the WIFI wireless communication module, the 4G wireless communication module, the differential global satellite navigation positioning module, the battery module and the computing module are arranged on the top of the helmet body.
The helmet body further carries an infrared thermal imaging camera, arranged on the front side of the helmet body and electrically connected to the vision processing module. An ultra-wideband wireless positioning module is arranged on the top of the helmet body and electrically connected to the computing module. The helmet body is also provided with a speaker, a microphone and an indicator light for warning workers; the speaker and the microphone are each electrically connected to the vision processing module through a codec, and the indicator light is electrically connected to the vision processing module.
In this embodiment, there are two fisheye cameras and two eye-tracking cameras. Goggles are provided under the brim of the intelligent safety helmet. The vision processing module is arranged on the top of the helmet, the 9-axis inertial sensor on the front side, and the key on the left side. The two fisheye cameras are symmetrically arranged on the two sides of the top of the goggles under the brim, the two eye-tracking cameras are symmetrically arranged on the two sides of the goggles under the brim, and the TOF camera is located between the two fisheye cameras.
the 9-axis inertial sensor is used for tracking the movement of the head-mounted cap body, the fisheye camera is used for acquiring large-view-angle video data, and the fisheye camera and the 9-axis inertial sensor are combined to realize instant positioning and map construction. The fisheye camera can also be used for three-dimensional posture recognition, gesture recognition, reconstruction of other three-dimensional objects and three-dimensional object recognition of operators. The TOF camera is a depth sensor and is used for acquiring three-dimensional depth data in a scene and is used for gesture recognition and gesture interaction, three-dimensional posture recognition and three-dimensional object reconstruction of an operator and three-dimensional object recognition. The eye movement tracking camera adopts infrared LED illumination and an infrared camera to identify and position pupils and light spots on an eye image in real time, calculates the sight direction, judges which equipment a user is watching based on the coordinates of each equipment in a three-dimensional scene, records the observation action and the observation content of an operator, and realizes the interaction of the operator and the real three-dimensional scene; and realizing the interaction between the operating personnel and the virtual three-dimensional scene according to the sight direction of the human eyes and the coordinates of the virtual display content.
In this embodiment, the intelligent safety helmet is further provided with a speaker, a microphone and a codec. The speaker and microphone are symmetrically arranged on the left and right sides of the helmet brim, the codec is arranged on the top of the helmet, and the speaker and microphone are each electrically connected to the vision processing unit through the codec. The microphone collects voice signals for operator speech recognition, enabling voice interaction between operators and the system; a stereo speaker plays the sound signals of that voice interaction. The helmet body also carries an indicator light, located in the middle of the goggles under the brim, used to warn workers.
The infrared thermal imaging camera is arranged on the front side of the helmet, below the 9-axis inertial sensor; it measures the temperature of substation operating equipment and detects overheating. An ultra-wideband wireless positioning unit is arranged on the top of the helmet; workers are positioned with the ultra-wideband wireless positioning unit in indoor three-dimensional scenes and with the differential global satellite navigation positioning unit in outdoor three-dimensional scenes.
Example 2
The present embodiment provides a method for monitoring power grid operation and maintenance using the power grid operation and maintenance process image monitoring system of Example 1, comprising:
(1) transformer substation model construction
Substation equipment in the substation operation and maintenance scene is scanned with a vision-based three-dimensional laser radar to obtain point cloud data of the substation; a scene base map is modeled in Unity3d to establish an image base map; a three-dimensional live-action substation is generated through data registration, noise removal, model generation and texture mapping; and the resulting data are stored in the first image storage unit;
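Of the modeling steps above, noise removal on the lidar point cloud is the most self-contained. The patent does not specify which filter is used, so the numpy sketch below shows one common choice, statistical outlier removal, purely as an illustration; the parameter names and defaults are assumptions.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than std_ratio standard deviations above the cloud-wide average.

    points: (N, 3) array of lidar returns. O(N^2) pairwise distances, so this
    sketch is only suitable for small clouds; real pipelines use a k-d tree.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                        # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)  # mean dist to k NN
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```

Registration and texture mapping are library-level operations (e.g. ICP and UV projection) and are not sketched here.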
(2) image acquisition
The image acquisition unit of the power grid operation and maintenance process image monitoring system acquires video images of the substation equipment state, the behavior state and gesture actions of workers, the state and trajectory of tools and instruments, and the environmental temperature, and stores them in the first image storage unit;
(3) state recognition
The image recognition unit fuses the video image data collected by the image acquisition unit with the substation equipment model data to recognize the substation equipment state, the worker state and the tool state, then stores the recognized data in the second image storage unit by category;
(4) guiding early warning
The remote monitoring unit retrieves the data stored in the second image storage unit, the image analysis unit analyzes and processes the data, the processing result is sent through the remote monitoring unit to the early warning guidance unit, and the early warning guidance unit warns or guides the power transformation operation and maintenance workers on site.
1. Modeling and acquisition of human body, hand motion and body state of operation and maintenance personnel of transformer substation
2D posture estimation of personnel in the substation operation and maintenance scene is completed based on part affinity fields (component relation fields); acquisition and modeling of the body, hand motion and body state of substation operation and maintenance personnel are then achieved through the constructed personnel meta-posture set and a 3D human posture estimation algorithm based on a graph convolutional network.
1.1 human 2D Joint Point estimation
Because multiple people may work simultaneously in a substation scene, a bottom-up approach is selected: a supervised joint point detection scheme based on a convolutional neural network, implemented in Python (a C++ interface can also be provided), which performs multi-person body joint detection and hand joint detection. The input to body joint detection is an image captured by a field camera; the camera should be mounted high in the scene, looking down. Each frame acquired by the camera is fed into a neural network for prediction. Features are first extracted, and a multi-stage convolutional neural network with two branches then predicts a confidence map in one branch and a part affinity field in the other. From the features, a joint point heat map is generated; specific joint positions are extracted from the heat map by applying non-maximum suppression, with the peak values taken as confidences. Limbs are assembled from the joint points and part affinity fields: for each candidate limb, the two joint points and the corresponding part affinity field are determined, the field information between the two points is integrated, and the result is taken as the limb's confidence; all scores are then sorted to decide which parts are connected, each connection being regarded as a limb. Limbs sharing a joint point are regarded as belonging to the same human body; finally, the joint coordinates belonging to the same person are collected into a set, the joint coordinates of each frame are combined with the live video, the result is visualized, and a working video containing the human joint points is output.
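The limb-scoring step, integrating the part affinity field (partial affinity domain) between two candidate joint points and taking the result as the limb confidence, can be sketched as follows. This is a simplified illustration of the published part-affinity-field idea, not the patent's exact implementation; the array names and sample count are assumptions.

```python
import numpy as np

def limb_score(paf_x, paf_y, p1, p2, num_samples=10):
    """Score a candidate limb by integrating the part affinity field along
    the segment p1 -> p2, as in bottom-up multi-person pose estimation.

    paf_x, paf_y: (H, W) components of the unit-vector field for this limb type
    p1, p2:       (x, y) joint candidates in pixel coordinates
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    v = p2 - p1
    norm = np.linalg.norm(v)
    if norm == 0:
        return 0.0
    v /= norm                                     # unit vector of the limb
    ts = np.linspace(0.0, 1.0, num_samples)
    pts = p1[None, :] + ts[:, None] * (p2 - p1)   # samples along the segment
    xs, ys = pts[:, 0].astype(int), pts[:, 1].astype(int)
    field = np.stack([paf_x[ys, xs], paf_y[ys, xs]], axis=1)
    # Mean dot product of the field with the limb direction: close to 1 when
    # the field consistently points along the candidate limb.
    return float((field @ v).mean())
```

In a full pipeline this score is computed for every pair of candidate joints of the two endpoint types, and the pairs are matched greedily by descending score.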
For hand joint point detection, the camera is preferably mounted above the worker, looking down at a near-vertical angle to capture a close-up of the worker's hands. The working image sequence shot by the camera is used as model input, and a hand joint detector performs coordinate regression to obtain the coordinates of the workers' hand joint points in the video; if 3D coordinates are required, they are obtained by mapping through a calibrated camera or a depth camera. Finally, the result is visualized.
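Where the 3D coordinates come from a depth camera, the mapping reduces to pinhole back-projection with the camera intrinsics; a minimal sketch (the intrinsic values used in the example are placeholders):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D joint pixel (u, v) with its depth reading into
    camera-frame 3D coordinates using pinhole intrinsics
    (focal lengths fx, fy and principal point cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A point at the principal point maps to the optical axis; off-centre pixels scale linearly with depth.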
1.2 Human 3D joint point estimation
The working scene of a transformer substation is complex, with problems such as dense crowds of workers and mutual occlusion, which pose a severe test for worker detection. The scheme is implemented in a Python environment: a convolutional neural network extracts the joint points, and a recurrent convolutional network is added to retain time-dimension information, realizing multi-person body and hand joint point detection and reflecting the motion state of substation workers along the time axis.
First, video of substation work is collected, cut into frames, and input into a ResNet feature extraction network. The per-frame features are fed into the recurrent convolutional neural network, which adds temporal information so that each frame's features contain known information from past frames and latent information about future frames, improving the robustness of feature extraction along the time axis. The parameters of a human joint point model are then regressed from the features produced by the recurrent network, yielding the 3D joint coordinates of the workers in each frame together with a matching 3D joint point model. The hand joint point detection scheme first separates the captured work-site video frame by frame; to obtain complete hand information, the camera must be mounted vertically above the workplace. The separated images are then fed into a target detection network, which localizes the hands as rectangular boxes, and the detected hand regions are input into a hand joint point detector for coordinate regression to obtain 3D hand joint coordinates.
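The effect of the recurrent network, carrying past- and future-frame information into each frame's features, can be approximated by a simple context-window sketch; this is a stand-in for illustration, not the actual recurrent architecture:

```python
import numpy as np

def temporal_context(features, radius=1):
    """For each frame, concatenate the features of the surrounding frames
    (past and future, within `radius`) so that each frame's representation
    carries bidirectional temporal information. features: (T, D) array."""
    T, D = features.shape
    padded = np.pad(features, ((radius, radius), (0, 0)), mode='edge')
    # Stack the shifted views side by side: output shape (T, (2*radius+1)*D)
    return np.concatenate([padded[t:t + T] for t in range(2 * radius + 1)], axis=1)
```

Boundary frames reuse their nearest neighbour (edge padding), mirroring how a recurrent model has less context at sequence ends.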
2. Video-frame-based method for identifying illegal actions and states of power transformation operation and maintenance personnel
Based on the popular YOLO-v4 convolutional neural network architecture, a customized model improvement scheme is developed for the actual scene requirements of power transformation operation and maintenance. To address two defects of the model observed in experiments, namely a large number of label-rewriting problems and an invalid anchor allocation problem, improvement strategies are adopted in three respects: 1) a high-resolution single-scale output layer replaces the original network's neck and head portions; 2) a stepwise aggregation mode achieves smoother multi-scale feature fusion; 3) an SE (squeeze-and-excitation) module is added to the original CSPDarknet53 backbone network. With these improvements, the number of parameters of the video-frame-based target detection network is reduced by 40%, and the recognition accuracy in the power transformation operation and maintenance scene is improved by 20% over the original algorithm.
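For reference, the channel recalibration performed by an SE (squeeze-and-excitation) module can be sketched in plain NumPy; the weight shapes are illustrative and untrained, standing in for the learned bottleneck of the real module:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) are the two bottleneck FC weights."""
    squeeze = feature_map.mean(axis=(1, 2))          # global average pooling -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck (excitation)
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid channel weights in (0, 1)
    return feature_map * scale[:, None, None]        # per-channel reweighting
```

The module leaves spatial resolution untouched and only rescales channels, which is why it can be dropped into an existing backbone such as CSPDarknet53 at low cost.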
3. Time-series behavior recognition of power transformation operation and maintenance workers based on video streams
3.1 Time-series behavior recognition of power transformation operation and maintenance personnel based on human skeleton features
In the operation behavior recognition process, a human body joint point is first taken as the convolution center, with its spatially adjacent joint points in the same frame taken as the points participating in the convolution; at the temporal level, the same joint point in the previous and next frames also participates. Second, once the convolution center and the participating points are determined, a convolution operation is performed on them to aggregate pose information in both the time and space dimensions. Finally, a spatio-temporal graph convolutional network is designed on the skeleton spatio-temporal sequence graph to complete the behavior recognition task.
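The spatial-plus-temporal neighborhood aggregation described above can be sketched as a simple averaging operation over the skeleton graph; this is a simplification of the actual learned spatio-temporal graph convolution, with uniform weights in place of trained kernels:

```python
import numpy as np

def st_aggregate(joints, adjacency):
    """joints: (T, V, C) per-frame joint features; adjacency: (V, V) 0/1
    skeleton graph. Each joint is averaged with its spatial neighbours in
    the same frame and with itself in the previous/next frame."""
    T, V, C = joints.shape
    a = adjacency + np.eye(V)                    # include the convolution centre
    a = a / a.sum(axis=1, keepdims=True)         # row-normalised spatial kernel
    spatial = np.einsum('uv,tvc->tuc', a, joints)
    out = np.empty_like(spatial)
    for t in range(T):
        lo, hi = max(t - 1, 0), min(t + 1, T - 1)
        out[t] = (spatial[lo] + spatial[t] + spatial[hi]) / 3.0  # temporal window
    return out
```

A real ST-GCN replaces the uniform averages with learned per-partition weights, but the neighborhood structure is the same.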
In multi-target operation behavior recognition, the proposed framework can take the multi-target tracking result as prior information and complete the whole video analysis task by cyclically executing the behavior recognition step for each specific target. However, this inevitably increases the computational cost, so this strategy suits working environments with low real-time requirements or few personnel targets. Where real-time requirements are higher, the target tracking step under the framework can be replaced by a simple sorting of joint spatio-temporal Euclidean distances, which markedly improves the recognition efficiency for each operation ID while maintaining recognition accuracy.
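The joint spatio-temporal Euclidean distance sorting that replaces explicit tracking can be sketched as a greedy closest-pair assignment between consecutive frames; the function and data layout here are illustrative:

```python
import numpy as np

def match_by_distance(prev_skeletons, curr_skeletons):
    """Greedy ID assignment: sort every (previous, current) skeleton pair by
    the mean Euclidean distance of corresponding joints and link the closest
    unclaimed pairs first. Each skeleton is a (V, 2) array of joint coords."""
    pairs = []
    for i, p in enumerate(prev_skeletons):
        for j, c in enumerate(curr_skeletons):
            pairs.append((np.linalg.norm(p - c, axis=1).mean(), i, j))
    pairs.sort()                                 # closest pairs come first
    used_prev, used_curr, assignment = set(), set(), {}
    for _, i, j in pairs:
        if i not in used_prev and j not in used_curr:
            assignment[j] = i                    # current skeleton j inherits ID i
            used_prev.add(i); used_curr.add(j)
    return assignment
```

Because workers move little between adjacent frames, this O(n^2 log n) sort is usually sufficient and far cheaper than a full tracker.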
3.2 Time-series behavior recognition of power transformation operation and maintenance personnel based on an efficient convolutional neural network (ECN)
Compared with skeleton-feature-based behavior recognition, this method describes the behavior of people in the scene from a more macroscopic perspective. However, time-series behavior classification is a spatio-temporal 3D pattern recognition problem, which corresponds to 3D convolution operations whose consumption of computational resources is far larger than that of typical 2D convolutions. Therefore, an efficient convolutional neural network structure dedicated to time-series behavior recognition is proposed, which substantially improves the network's inference speed while preserving recognition accuracy.
Model construction mainly considers improvements in two respects: a) dense sampling of video frames avoids information loss but generates a large amount of inter-frame redundancy, and testing shows that a single frame generally already achieves near-optimal initial classification performance, so the recognition model uses only a single frame as input within each temporal neighborhood; b) the simple decision-level fusion strategy of classical 3D models is insufficient to capture the full long-range inter-frame context of a video, so the model achieves end-to-end fusion between distant frames by applying 3D convolution to the feature maps.
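Point a), one frame per temporal neighborhood, amounts to segment-based sparse sampling, sketched below; the centre-frame choice is one possible deterministic variant (training pipelines often sample randomly within each segment instead):

```python
def sample_segments(num_frames, num_segments):
    """Divide the frame index range into equal temporal neighbourhoods and
    keep the centre frame of each, avoiding dense inter-frame redundancy."""
    seg_len = num_frames / num_segments
    return [int((k * seg_len + (k + 1) * seg_len) / 2) for k in range(num_segments)]
```

For a 100-frame clip and 4 segments this keeps 4 frames spread evenly across the clip instead of all 100.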
To further improve the algorithm's behavior recognition accuracy, a parallel 2D convolutional network, named 2D-Nets, is added at the rear of the model on top of the 3D-convolution information fusion. Its final pooling layer generates a 1024-dimensional feature vector for each video frame; these vectors are average-pooled into a video-level description of the behavior, which is concatenated with the global representation generated by the 3D-Net and then classified to obtain the prediction label of the current behavior.
In the testing stage, unlike traditional time-series behavior recognition methods, which improve test results through data enhancement strategies such as repeated crop sampling and horizontal flipping for each video segment, the recognition model computes only a single forward pass for a given video, without additional fusion or enhancement steps. This markedly improves execution efficiency and makes the algorithm better suited to online scenarios.
In addition, an online video analysis method is designed on top of the behavior recognition model. The method maintains two image groups: a working memory group storing previous images and a new image group storing unprocessed images. At each prediction, half of the video frames are sampled from each of the two sequences to update the working memory group, which is then fed to the behavior recognition model to obtain the current prediction result P. P is then averaged with the running average prediction P_A to generate the final prediction score vector, and P_A is updated as P_A = (P + P_A) / 2. Finally, a threshold vector of the same dimension is applied to P_A at any time to filter out interference predictions, and the behavior class corresponding to the maximum value of the processed P_A is taken as the behavior detection result at the current moment. The method accounts for past temporal information at two levels, the working memory group and the averaged output, while keeping the current input image dominant in the prediction, effectively exploiting the real-time advantage of the recognition algorithm while ensuring the accuracy of online behavior detection.
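The update and thresholding of the running average P_A can be sketched as follows; the threshold value and class layout are illustrative:

```python
import numpy as np

class OnlineBehaviorFilter:
    """Maintains the running average prediction P_A and thresholds it to
    suppress spurious behaviour classes at each time step."""
    def __init__(self, num_classes, threshold=0.4):
        self.pa = np.zeros(num_classes)
        self.threshold = threshold

    def update(self, p):
        self.pa = (p + self.pa) / 2.0          # P_A = (P + P_A) / 2
        filtered = np.where(self.pa >= self.threshold, self.pa, 0.0)
        if filtered.max() == 0.0:
            return None                        # no confident behaviour this step
        return int(filtered.argmax())          # behaviour class at this moment
```

Because each update halves the weight of older predictions, the current input stays dominant while past history still smooths the output.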
3.3 Multi-target personnel operation behavior recognition based on fusion of human skeleton features and deep convolutional features
Human behaviors usually have multiple attributes, and one or a few spatio-temporal features are not enough to describe all their characteristics; a reasonable multi-feature fusion strategy can fully exploit the complementarity between different features to improve the algorithm's behavior recognition performance. Current multi-feature fusion strategies fall broadly into three categories: descriptor-level fusion, video-representation-level fusion, and score-level fusion. Which strategy to adopt depends on the degree of correlation between the features to be fused. Since there is no obvious correlation between the human skeleton features and the proposed deep convolutional features, a score-level fusion strategy is adopted.
Most score-level fusion strategies obtain the weight vectors for the different features through a learning step, in which the randomness and incompleteness of the training data are usually undervalued. Dempster-Shafer (DS) evidence theory addresses information uncertainty by continuously narrowing the hypothesis range through evidence accumulation, and can infer a decision that satisfies the objective conditions without prior probabilities. In view of these advantages, DS evidence theory is applied in the designed weighted-score multi-feature fusion framework. First, deep convolutional features and optimized human pose features are extracted from a validation set selected from the training set to obtain reliable evidence; second, the weight vectors of the two features for each operation behavior category are calculated by an evidence synthesis method, and the vectors are optimized through a survival-of-the-fittest rule; finally, the classification label of the current behavior is inferred by a weighted summation strategy.
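The evidence-combination and weighted-summation steps can be sketched for the simplest case of normalised masses over mutually exclusive singleton hypotheses; real DS mass functions may also assign mass to composite hypotheses, which this sketch omits:

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for two normalised mass vectors over mutually
    exclusive singleton hypotheses: agreeing mass is kept and renormalised
    by 1 - K, where K is the total conflicting mass."""
    agree = m1 * m2
    k = 1.0 - agree.sum()                  # conflict between the two sources
    if k >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return agree / (1.0 - k)

def weighted_score_fusion(score_a, score_b, weights):
    """Per-class weighted summation of two classifiers' score vectors,
    with weights in [0, 1] (e.g. derived from the evidence step)."""
    return weights * score_a + (1.0 - weights) * score_b
```

Combining two sources that mildly disagree sharpens the shared hypothesis, which is the effect the framework exploits when weighting the skeleton and deep-convolution scores.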
4. Substation tool posture and track recognition method
This subject researches a track recognition technology for power transformation operation and maintenance site tools based on video semantic information. An online multi-target tracking method is designed for the real-time and accuracy requirements of multi-target tracking, improving algorithm efficiency while maintaining accuracy. The method constructs an end-to-end network structure that integrates track prediction and detection into one backbone network, simultaneously outputting the target's detection box and its coordinate offset relative to the previous frame to achieve track prediction. Subsequently, the depth features of the corresponding detection and prediction boxes are extracted by a target re-identification network; then the intersection over union (IoU) between the detection and prediction boxes and the cosine distance between the detection box and the historical track are calculated, the features are spliced, and the spliced result is sent to a classifier that outputs a probability value. Finally, a bipartite graph between detections and predictions is constructed from the probability values, and the best match is found to form a new track. For the single-target tracking module, the SiamMask framework is adopted, which simultaneously obtains the target's predicted position and an instance segmentation result, from which the posture information of the tool is estimated.
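The two affinity measures used in the matching step, intersection over union between boxes and cosine distance between re-identification features, can be sketched as:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def cosine_distance(f1, f2):
    """1 - cosine similarity between two re-identification feature vectors:
    0 for identical directions, 1 for orthogonal ones."""
    return 1.0 - float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))
```

These two scalars are what get concatenated (feature splicing) before the classifier produces the match probability used to build the bipartite graph.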
The above description is not intended to limit the present invention, but rather, the present invention is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention.
Claims (9)
1. An image monitoring system for the power grid operation and maintenance process, characterized by comprising an image acquisition unit, a first image storage unit, an image identification unit, a second image storage unit, an image analysis unit, a remote monitoring unit and an early warning guide unit which are connected in sequence; the image acquisition unit transmits acquired real-time images to the first image storage unit for temporary storage; the first image storage unit transmits the temporarily stored image data to the image identification unit for identification, after which the image data are stored in the second image storage unit in a classified manner; the image analysis unit calls the data stored in the second image storage unit for data analysis and processing; the remote monitoring unit calls the data stored in the second image storage unit and displays in real time the image information acquired on site and the analysis results of the image analysis unit; and the early warning guide unit performs on-site early warning after receiving an early warning instruction from the remote monitoring unit.
2. The power grid operation and maintenance process image monitoring system according to claim 1, wherein: the image acquisition unit comprises a transformer substation equipment state acquisition camera, a transformer substation environment state acquisition infrared camera, a human body behavior state and gesture motion acquisition camera and a tool state acquisition camera.
3. The power grid operation and maintenance process image monitoring system according to claim 2, wherein: the image acquisition unit further comprises a wearable video acquisition intelligent terminal.
4. The power grid operation and maintenance process image monitoring system according to claim 3, wherein: the wearable video acquisition intelligent terminal is an intelligent safety helmet, the intelligent safety helmet comprising a helmet body and, arranged on the helmet body, a WIFI wireless communication module, a 4G wireless communication module, a differential global satellite navigation positioning module, a battery module, a calculation module, a visual processing module, a 9-axis inertial sensor, a fisheye camera, an eye tracking camera and a TOF camera; the visual processing module is electrically connected with the 9-axis inertial sensor, the fisheye camera, the eye tracking camera, the TOF camera, the calculation module and the battery module respectively; the calculation module is electrically connected with the WIFI wireless communication module, the 4G wireless communication module and the differential global satellite navigation positioning module respectively; the 9-axis inertial sensor, the fisheye camera, the eye tracking camera and the TOF camera are arranged on the front side of the helmet body; and the visual processing module, the WIFI wireless communication module, the 4G wireless communication module, the differential global satellite navigation positioning module, the battery module and the calculation module are arranged at the top of the helmet body.
5. The power grid operation and maintenance process image monitoring system according to claim 4, wherein: the helmet body is further provided with an infrared thermal imaging camera, the infrared thermal imaging camera is arranged on the front side of the helmet body, and the infrared thermal imaging camera is electrically connected with the vision processing module.
6. The power grid operation and maintenance process image monitoring system according to claim 5, wherein: the top of the helmet body is further provided with an ultra wide band wireless positioning module, and the ultra wide band wireless positioning module is electrically connected with the calculation module.
7. The power grid operation and maintenance process image monitoring system according to claim 6, wherein: the helmet body is further provided with a speaker, a microphone and an indicator lamp for early warning to the workers; the speaker and the microphone are each electrically connected with the visual processing module through a codec, and the indicator lamp is electrically connected with the visual processing module.
8. The power grid operation and maintenance process image monitoring system according to claim 7, wherein: the image recognition unit comprises a meter reading recognition module, a disconnector on/off state recognition module, a protection pressing plate state recognition module for switch cabinets and protection screens, a multi-target personnel operation behavior recognition module, and a tool posture and track recognition module.
9. A method for monitoring power grid operation and maintenance by using the power grid operation and maintenance process image monitoring system as claimed in claim 8, comprising the following steps:
(1) transformer substation model construction
Substation equipment in the substation operation and maintenance scene is scanned by a vision-based three-dimensional laser radar to obtain point cloud data of the substation; scene base-map modeling is performed with Unity3d to establish an image base map; a three-dimensional live-action substation is generated through data registration, noise removal, model generation and texture mapping; and the data are stored in the first image storage unit;
(2) image acquisition
The image acquisition unit of the power grid operation and maintenance process image monitoring system acquires video images of the substation equipment state, the workers' behavior states and gesture actions, the states and tracks of tools and instruments, and the environment temperature, and stores them in the first image storage unit;
(3) state recognition
On the basis of the substation equipment model data, the image recognition unit fuses the video image data collected by the image acquisition unit to recognize the substation equipment state, the worker state and the tool state, and stores the recognized data in the second image storage unit in a classified manner;
(4) guiding early warning
The remote monitoring unit calls the data stored in the second image storage unit; the image analysis unit carries out data analysis and processing; the processing result is sent to the early warning guide unit through the remote monitoring unit; and the early warning guide unit carries out on-site early warning for, or guidance of, the power transformation operation and maintenance workers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111177600.1A CN113920461A (en) | 2021-10-09 | 2021-10-09 | Power grid operation and maintenance process image monitoring system and monitoring method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113920461A true CN113920461A (en) | 2022-01-11 |
Family
ID=79239117
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115131874A (en) * | 2022-06-29 | 2022-09-30 | 深圳市神州云海智能科技有限公司 | User behavior recognition prediction method and system and intelligent safety helmet |
CN115376161A (en) * | 2022-08-22 | 2022-11-22 | 北京航空航天大学 | Home companion optical system based on low-resolution infrared array sensor |
CN115471769A (en) * | 2022-08-16 | 2022-12-13 | 上海航翼高新技术发展研究院有限公司 | Visual identification method for existing state of tool in tool cabinet |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||