CN117036650B - AR (augmented reality) glasses-based power grid maintenance navigation method, medium and system - Google Patents


Info

Publication number
CN117036650B
CN117036650B · CN202310979975.2A
Authority
CN
China
Prior art keywords
maintenance
image
glove
glasses
safety index
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202310979975.2A
Other languages
Chinese (zh)
Other versions
CN117036650A (en)
Inventor
杨朝翔
李大成
田霖
刘其良
Current Assignee (listed assignees may be inaccurate)
State Grid Jibei Integrated Energy Service Co ltd
Original Assignee
State Grid Jibei Integrated Energy Service Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by State Grid Jibei Integrated Energy Service Co ltd filed Critical State Grid Jibei Integrated Energy Service Co ltd
Priority to CN202310979975.2A priority Critical patent/CN117036650B/en
Publication of CN117036650A publication Critical patent/CN117036650A/en
Application granted granted Critical
Publication of CN117036650B publication Critical patent/CN117036650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T19/006 Mixed reality
    • G02B27/017 Head-up displays, head mounted
    • G02B2027/0178 Eyeglass type
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06Q10/20 Administration of product repair or maintenance
    • G06Q50/06 Energy or water supply
    • G06V10/26 Segmentation of patterns in the image field
    • G06V10/774 Generating sets of training patterns
    • G06V10/82 Image or video recognition using neural networks
    • Y04S10/50 Systems or methods supporting power network operation or management


Abstract

The invention provides a power grid maintenance navigation method, medium and system based on AR glasses, belonging to the technical field of power grid maintenance. The method comprises the following steps: acquiring the scene image and position information collected by AR glasses worn by operation and maintenance personnel; preprocessing the scene image to obtain a preprocessed image; extracting features from the preprocessed image and identifying the information of a plurality of devices in the scene image as well as the gloves and operation tools of the operation and maintenance personnel; generating an AR image from the scene image according to the device information, the AR image being marked with the operation and maintenance steps, the absolute electrification (live) condition, and the safety indexes of the personnel's gloves and operation and maintenance tools; and sending the generated AR image to the AR glasses worn by the operation and maintenance personnel for output. The system comprises the AR glasses and an electroscope. The invention addresses the technical problem that the current AR-glasses-based power grid maintenance process lacks a judgment of whether power grid equipment is live and an assessment of the safety indexes of gloves and operation and maintenance tools.

Description

AR (augmented reality) glasses-based power grid maintenance navigation method, medium and system
Technical Field
The invention belongs to the technical field of power grid maintenance, and particularly relates to a power grid maintenance navigation method, medium and system based on AR (augmented reality) glasses.
Background
With the rapid development of technology, the power industry has become an important pillar of the national economy, making the safe operation and maintenance of its equipment particularly important. The traditional mode of power grid equipment maintenance relies mainly on the experience and skill of maintenance personnel, who manually perform operations such as equipment inspection, maintenance and replacement. However, this approach has several problems. First, maintenance personnel need a deep understanding of the structure, performance and working principles of power grid equipment, which demands a high level of skill and experience; newly hired personnel need a long time to learn and master this knowledge. Second, because power grid equipment is numerous and varied, maintenance personnel often need to finish a large number of maintenance tasks in a short time; the working intensity is high and mistakes occur easily. Finally, the operating environment of power grid equipment is often severe (high temperature, high voltage, high humidity and the like), which places high demands on the physical condition and psychological resilience of maintenance personnel and increases the danger of the work. The traditional maintenance mode depends mainly on periodic inspection and fault handling by operation and maintenance personnel; although this guarantees the normal operation of power grid equipment to some extent, manual inspection is highly subjective, inefficient and error-prone, so a new maintenance mode that can effectively improve the efficiency and accuracy of power grid equipment maintenance is urgently needed.
In recent years, with the rapid development of artificial intelligence and augmented reality technology, their application in the power industry has become increasingly widespread. Augmented Reality (AR) superimposes virtual information on the real environment so that the user can receive virtual information while observing the real scene, improving the user's cognitive ability and operating efficiency. However, current applications of AR in the power industry focus mainly on fault diagnosis and maintenance assistance for power equipment; daily maintenance and inspection of power grid equipment still lacks a judgment of whether equipment is live and an assessment of the safety indexes of gloves and operation and maintenance tools.
Disclosure of Invention
In view of the above, the invention provides a power grid maintenance navigation method, medium and system based on AR glasses, which solve the technical problem that the current AR-glasses-based power grid maintenance process lacks a judgment of whether power grid equipment is live and an assessment of the safety indexes of gloves and operation and maintenance tools.
The invention is realized in the following way:
the first aspect of the invention provides an AR (augmented reality) -glasses-based power grid maintenance navigation method, which comprises the following steps of:
s10, acquiring acquired scene images and position information of AR glasses worn by operation and maintenance personnel;
s20, preprocessing the scene image to obtain a preprocessed image;
S30, extracting features from the preprocessed image and identifying the information of a plurality of devices in the scene image as well as the gloves and operation tools of the operation and maintenance personnel, wherein the device information comprises the device name, device position, device state and device structure;
s40, generating an AR image by utilizing the scene image according to the equipment information, wherein the AR image is marked with an operation and maintenance step, an absolute electrification condition, and safety indexes of gloves and operation and maintenance tools of operation and maintenance personnel;
and S50, sending the generated AR image to AR glasses worn by the operation and maintenance personnel and outputting the AR image.
The step of preprocessing the scene image includes image noise reduction, image filtering and image enhancement.
On the basis of the technical scheme, the power grid maintenance navigation method based on the AR glasses can be further improved as follows:
In the step of extracting features from the preprocessed image and performing identification, the feature-extraction algorithm is SIFT, and identification is performed with a convolutional neural network.
The step of generating an AR image by using the scene image according to the device information specifically includes:
s41, marking equipment information and a preset scene operation and maintenance step on the scene image, and updating the scene operation and maintenance step according to the operation history of operation and maintenance personnel;
s42, acquiring a safety index of the glove of the operation and maintenance personnel, and marking the corresponding position of the glove of the scene image;
s43, acquiring a safety index of the operation and maintenance tool, and marking the corresponding position of the operation and maintenance tool of the scene image;
s44, judging whether equipment in the scene image is charged or not, and if so, carrying out absolute charging marking on the scene image;
s45, carrying out position adjustment on the mark information on the scene image to avoid shielding the equipment, wherein the mark information comprises an operation and maintenance step, a glove safety index, an operation and maintenance tool safety index and an absolute electrification condition;
s46, adjusting the brightness of the marked scene image to generate an AR image.
Further, the safety index of the operation and maintenance personnel's gloves is obtained by applying a pre-trained glove safety index model to the gloves in the preprocessed image. The steps of establishing and training the glove safety index model specifically comprise:
establishing a first training sample, wherein the first training sample comprises multiple images of gloves of different types, with several images per type; each glove exhibits varying degrees of wear and oil staining, and each glove also has a corresponding safety index, which is manually labeled according to safe operation and maintenance regulations;
establishing a glove safety index model prototype by using a convolutional neural network;
and training the model prototype of the glove safety index model by using a first training sample to obtain the glove safety index model, wherein the training input is a glove image, and the training output is a safety index corresponding to the glove image.
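The construction of the first training sample described above can be sketched as follows. This is a hypothetical illustration: the record field names, the placeholder random images, and the simple label formula are assumptions for demonstration, not taken from the patent.

```python
import numpy as np

def make_glove_sample(glove_type, wear_level, oil_stain_level, safety_index, size=64):
    """One labeled training record; safety_index is the manual label (e.g. 1-10)."""
    rng = np.random.default_rng(hash((glove_type, wear_level)) % (2 ** 32))
    return {
        "image": rng.random((size, size, 3)),   # stand-in for a real glove photo
        "glove_type": glove_type,
        "wear_level": wear_level,               # degree of wear, 0 = new
        "oil_stain_level": oil_stain_level,     # degree of oil staining
        "safety_index": safety_index,           # label per safe-O&M regulations
    }

# Several images per glove type, each with varying wear/staining, as the
# patent describes; the label formula here is purely illustrative.
first_training_sample = [
    make_glove_sample("insulated", wear, stain, safety_index=10 - 2 * wear - stain)
    for wear in range(3) for stain in range(2)
]

inputs = np.stack([r["image"] for r in first_training_sample])   # training input
targets = [r["safety_index"] for r in first_training_sample]     # training output
```

Training the model prototype then maps `inputs` to `targets`, matching the input/output pairing stated in the text.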
Further, the safety index of the operation and maintenance tool is obtained by applying a pre-trained operation and maintenance tool safety index model to the tool in the preprocessed image. The steps of establishing and training the tool safety index model specifically comprise:
establishing a second training sample, wherein the second training sample comprises multiple images of operation and maintenance tools of different types; each type of tool has several parts, each exhibiting varying degrees of wear and damage, and each tool also has a corresponding safety index, which is manually labeled according to safe operation and maintenance regulations;
establishing an operation and maintenance tool safety index model prototype by using a convolutional neural network;
and training the operation and maintenance tool safety index model prototype by using a second training sample to obtain an operation and maintenance tool safety index model, wherein the training input is an operation and maintenance tool image, and the training output is a safety index corresponding to the operation and maintenance tool image.
Further, the step of determining whether the device in the scene image is charged or not, if so, performing absolute charging mark on the scene image, specifically includes:
acquiring the electricity-testing results collected by the electroscope during the power grid operation and maintenance process;
screening the live (charged) results out of the electricity-testing results, and marking the scene image at the corresponding devices.
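The screening-and-marking step above can be sketched in a few lines. The record fields (`device_id`, `live`, `mark`) are illustrative assumptions, not names from the patent.

```python
def screen_live_results(electroscope_results):
    """Keep only the readings in which the electroscope detected voltage."""
    return [r for r in electroscope_results if r["live"]]

def mark_scene(devices, live_results):
    """Attach an 'absolutely live' marker to each matching device in the scene."""
    live_ids = {r["device_id"] for r in live_results}
    for dev in devices:
        dev["mark"] = "ABSOLUTELY LIVE" if dev["id"] in live_ids else None
    return devices

results = [{"device_id": "breaker-1", "live": True},
           {"device_id": "switch-3", "live": False}]
devices = [{"id": "breaker-1"}, {"id": "switch-3"}]
marked = mark_scene(devices, screen_live_results(results))
```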
Further, the step of adjusting the position of the marker information on the scene image specifically includes:
image segmentation: segmenting the scene image into a plurality of areas;
target detection: detecting the devices and the marker information in the segmented scene image;
occlusion judgment and position optimization: judging whether any device is occluded by the marker information, and if so, adjusting the position of the marker information.
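A minimal sketch of the occlusion judgment and position optimization: if a label's bounding box overlaps a device's box, shift the label until it no longer occludes the device. Boxes are (x, y, w, h); the upward-shift policy and step size are illustrative assumptions.

```python
def overlaps(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def adjust_label(label_box, device_boxes, step=10, max_tries=50):
    """Move the marker up until it occludes no detected device."""
    x, y, w, h = label_box
    for _ in range(max_tries):
        if not any(overlaps((x, y, w, h), d) for d in device_boxes):
            break
        y -= step  # move the marker up, away from the device
    return (x, y, w, h)

device = (100, 100, 80, 60)   # detected device region
label = (110, 120, 60, 20)    # marker initially occluding the device
print(adjust_label(label, [device]))   # -> (110, 80, 60, 20)
```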
Further, in the step of establishing the glove safety index model prototype by using the convolutional neural network, a network skeleton adopted by the convolutional neural network is DenseNet121.
A second aspect of the present invention provides a computer readable storage medium having stored therein program instructions that, when executed, are configured to perform an AR glasses-based power grid maintenance navigation method as described above.
The third aspect of the invention provides an AR glasses-based power grid maintenance navigation system, which comprises AR glasses, an electroscope, an image acquisition device and a positioning device, wherein the AR glasses are provided with the image acquisition device, the electroscope is provided with a bluetooth device, and the electroscope is in communication connection with the AR glasses through the bluetooth device, and the power grid maintenance navigation system further comprises the computer readable storage medium.
Compared with the prior art, the AR-glasses-based power grid maintenance navigation method, medium and system have the beneficial effects that:
the power grid maintenance navigation method based on the AR glasses can effectively assist power grid operation and maintenance personnel in carrying out maintenance work of power grid equipment, can monitor states of gloves and operation and maintenance tools of the operation and maintenance personnel in real time, avoids safety accidents caused by equipment or tools, and improves efficiency and safety of power grid operation and maintenance. The technical effects of the present invention will be described in detail below.
Firstly, the invention acquires the scene image and the position information through the AR glasses, and compared with the traditional handheld equipment, the acquisition mode can release both hands of operation and maintenance personnel in the operation process, and can more conveniently carry out maintenance operation of the equipment. Meanwhile, the collection mode of the AR glasses is more in line with the visual angle of people, so that operation and maintenance personnel can more intuitively know the actual state of the equipment.
Secondly, preprocessing and feature extraction are carried out on the scene image, and equipment information in the scene image, and gloves and operation tools of operation and maintenance personnel are identified. Therefore, when the equipment is maintained, the state of the equipment can be known more accurately, and meanwhile, the states of the gloves and the operation and maintenance tools of the operation and maintenance personnel can be monitored in real time, so that safety accidents caused by the problems of the equipment or tools are avoided.
And thirdly, generating an AR image by utilizing the scene image according to the equipment information, wherein the AR image is marked with an operation and maintenance step, an absolute electrification condition and the safety indexes of gloves and operation and maintenance tools of operation and maintenance personnel. Therefore, when the operation and maintenance personnel carry out equipment maintenance, the operation and maintenance steps of the equipment, whether electricity exists or not and the safety indexes of the gloves and tools can be clearly seen only through the AR glasses, the operation and maintenance efficiency is greatly improved, and safety accidents caused by undefined information are reduced.
In addition, the position of the mark information on the scene image is adjusted, and the shielding of the equipment is avoided, so that the equipment can be prevented from being shielded by the mark information on the AR glasses when operation and maintenance personnel observe the state of the equipment, and the operation and maintenance efficiency is further improved.
Finally, the marked scene image is adjusted in brightness to generate an AR image, and the AR image is sent to AR glasses worn by operation and maintenance personnel for output. Therefore, when the operation and maintenance personnel maintain the equipment, the state of the equipment can be intuitively seen, and the operation and maintenance efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a power grid maintenance navigation method based on AR glasses provided by the invention;
fig. 2 is a flowchart showing the specific steps of step S40.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, based on the embodiments of the invention, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the invention.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Fig. 1 shows an embodiment of a power grid maintenance navigation method based on AR glasses according to a first aspect of the present invention, where the method includes the following steps:
s10, acquiring acquired scene images and position information of AR glasses worn by operation and maintenance personnel;
s20, preprocessing a scene image to obtain a preprocessed image;
S30, extracting features from the preprocessed image and identifying the information of a plurality of devices in the scene image as well as the gloves and operation tools of the operation and maintenance personnel, wherein the device information comprises the device name, device position, device state and device structure;
s40, generating an AR image by utilizing a scene image according to the equipment information, wherein the AR image is marked with an operation and maintenance step, an absolute electrification condition, and safety indexes of gloves and operation and maintenance tools of operation and maintenance personnel;
and S50, sending the generated AR image to AR glasses worn by the operation and maintenance personnel and outputting the AR image.
Specifically, in step S20, the scene image needs to be preprocessed first, including noise filtering, image enhancement, and other operations, and then the device in the scene image is identified and the position, state, and structure information of the device are acquired through feature extraction and pattern recognition techniques.
First, a scene image is preprocessed. In actual operation, the acquired scene image may have problems of noise, blurring, low contrast and the like due to environmental illumination, equipment camera performance and the like. Therefore, noise filtering and image enhancement processing are required for the scene image. The noise filtering can adopt methods such as median filtering, mean filtering and the like, and the image enhancement can adopt methods such as histogram equalization, gamma correction and the like.
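The noise-filtering and enhancement methods named above can be sketched with plain numpy (no image library assumed); these are minimal reference implementations of median filtering and histogram equalization, not the patent's code.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter on a 2-D grayscale array (edges padded by reflection)."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale array."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

noisy = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 10]], dtype=np.uint8)
print(median_filter3(noisy)[1, 1])   # the isolated "salt" pixel is removed
```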
In the above technical solution, in the step of extracting and identifying the features of the preprocessed image, the algorithm of feature extraction is SIFT, and the identification method is convolutional neural network identification.
Specifically, in step S30, it is necessary to perform feature extraction on the preprocessed image and recognize information of a plurality of devices in the scene image and gloves and tools of the operation and maintenance personnel. The device information includes a device name, a device location, a device status, and a device structure. This step involves two main tasks: feature extraction and object recognition.
First, feature extraction is performed. The goal of feature extraction is to transform the input data into a set of features that reflect the primary characteristics of the original data. In the case of feature extraction, the goal is to convert the preprocessed image into a set of features that can reflect the main characteristics of the equipment and the gloves, tools of the operator in the image.
The feature extraction part can adopt a common feature extraction algorithm, namely SIFT (Scale-Invariant Feature Transform). The SIFT algorithm can extract keypoints of an image and generate descriptors of the keypoints, which have invariance to scaling, rotation, brightness change and the like, and are very suitable for device identification.
The main steps of the SIFT algorithm are as follows:
and (3) detecting a scale space extremum: searching key points of the image in different scale spaces;
positioning key points: accurately positioning the detected key points and removing unstable key points;
direction distribution: assigning one or more directions to each keypoint;
descriptor generation: and generating descriptors of the key points according to the directions of the key points.
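The first of the steps above, scale-space extremum detection, can be illustrated with a toy numpy check: a pixel in a difference-of-Gaussians stack is a candidate keypoint when it is strictly larger (or smaller) than all 26 neighbors in its 3x3x3 scale/space cube. Real SIFT (e.g. OpenCV's `cv2.SIFT_create`) adds keypoint refinement, orientation assignment, and descriptor generation on top of this; the sketch below covers only the extremum test.

```python
import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """dog: (scales, H, W) difference-of-Gaussians stack; test point (s, y, x)."""
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    center = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)  # drop the center of the 3x3x3 cube
    return bool((center > others).all() or (center < others).all())

dog = np.zeros((3, 5, 5))
dog[1, 2, 2] = 1.0   # a lone peak across scale and space
print(is_scale_space_extremum(dog, 1, 2, 2))
```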
Feature extraction is performed using Convolutional Neural Networks (CNNs) in deep learning. CNN is a special neural network, which is characterized by automatically extracting the features of an image. A typical CNN structure includes an input layer, a convolution layer, a pooling layer, a full connection layer, and an output layer.
The main function of the convolution layer is feature extraction. Assuming the image is N×N, the convolution kernel is F×F and the convolution stride is S, the feature map after convolution has size ((N-F)/S+1) × ((N-F)/S+1). The convolution operation can be expressed as:
$$a_{i,j}^{(l)} = \sigma\Big(\sum_{m=0}^{F-1}\sum_{n=0}^{F-1} w_{m,n}^{(l)}\, a_{i\cdot S+m,\, j\cdot S+n}^{(l-1)} + b^{(l)}\Big)$$
where $a_{i,j}^{(l)}$ is the i,j-th element of the feature map of layer l, $w_{m,n}^{(l)}$ is the m,n-th element of the convolution kernel of layer l, $b^{(l)}$ is the bias term of layer l, and σ(·) is an activation function, e.g. the ReLU function.
The main function of the pooling layer is downsampling: it reduces the size of the feature map and the computational complexity of the model. Common pooling operations include max pooling and average pooling. For example, max pooling with an F×F window and stride S can be expressed as:
$$p_{i,j}^{(l)} = \max_{0 \le m,n < F}\; a_{i\cdot S+m,\, j\cdot S+n}^{(l)}$$
where $p_{i,j}^{(l)}$ is the i,j-th element of the pooled feature map of layer l and $a_{i\cdot S+m,\, j\cdot S+n}^{(l)}$ are the corresponding elements of the input feature map of layer l.
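The two formulas above can be checked directly with a small numpy sketch: the output size (N-F)/S+1, one convolution channel with a ReLU activation, and max pooling.

```python
import numpy as np

def conv2d(x, w, b=0.0, stride=1):
    """Single-channel convolution with ReLU, per the formula in the text."""
    n, f = x.shape[0], w.shape[0]
    out = (n - f) // stride + 1                      # (N - F)/S + 1
    y = np.empty((out, out))
    for i in range(out):
        for j in range(out):
            patch = x[i * stride:i * stride + f, j * stride:j * stride + f]
            y[i, j] = np.maximum((patch * w).sum() + b, 0.0)  # ReLU
    return y

def max_pool(x, f=2, stride=2):
    """Max pooling over f x f windows with the given stride."""
    out = (x.shape[0] - f) // stride + 1
    return np.array([[x[i * stride:i * stride + f, j * stride:j * stride + f].max()
                      for j in range(out)] for i in range(out)])

x = np.arange(16, dtype=float).reshape(4, 4)
y = conv2d(x, np.ones((3, 3)))   # 4x4 image, 3x3 kernel -> 2x2 feature map
print(y.shape, max_pool(x).shape)
```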
Then, object recognition is performed. The goal is to recognize the devices and the gloves and operation tools of the operation and maintenance personnel in the image according to the extracted features. Target recognition is performed using the YOLO (You Only Look Once) algorithm from deep learning.
The key idea of the YOLO algorithm is to transform the object-detection problem into a regression problem, directly predicting the class and location of each object. For example, images to be recognized are input into the model, which predicts whether each glove or tool region belongs to a given safety class, such as shadowed or exposed areas on a glove that may carry a risk of electric shock, or glue-failure and paint-stripping positions on a tool.
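The location regression described above is conventionally evaluated with intersection over union (IoU) between predicted and ground-truth boxes; a minimal implementation (standard metric, not code from the patent), with boxes as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / (100 + 100 - 25) ≈ 0.143
```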
In the above technical solution, the step of generating the AR image by using the scene image according to the device information specifically includes:
s41, marking equipment information and a preset scene operation and maintenance step on a scene image, and updating the scene operation and maintenance step according to the operation history of operation and maintenance personnel;
s42, acquiring a safety index of the operation and maintenance personnel's glove, and marking it at the corresponding glove position in the scene image; the safety indices mentioned in the embodiments may be numbers, e.g. safety levels from 1 to 10, graded from fatal hazard, severe injury, disability, minor injury, equipment damage, and malfunction down to safe;
S43, acquiring a safety index of the operation and maintenance tool, and marking the corresponding position of the operation and maintenance tool of the scene image;
s44, judging whether equipment in the scene image is live, and if so, applying an absolute live-line mark to the scene image; an electroscope may be used in the field for voltage testing.
S45, adjusting the position of the marking information on the scene image to avoid occluding the equipment, wherein the marking information comprises the operation and maintenance steps, the glove safety index, the operation and maintenance tool safety index, and the absolute live-line condition;
s46, adjusting the brightness of the marked scene image to generate an AR image.
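Steps S41-S46 can be outlined as a pipeline skeleton. The function bodies below are placeholders, and the mark structure, field names, and stub return values are assumptions purely for illustrating the data flow:

```python
def preset_steps(device_info):          # placeholder preset step list (S41)
    return [f"inspect {device_info['name']}", "isolate", "repair"]

def update_steps(steps, op_history):    # drop steps already completed
    return steps[len(op_history):]

def glove_safety(image):  return 7      # stub: a pretrained model would run here (S42)
def tool_safety(image):   return 8      # stub (S43)
def device_is_live(info): return info.get("live", False)  # stub (S44)

def adjust_positions(image, marks):     # stub: occlusion avoidance of S45
    return marks

def adjust_brightness(image, marks):    # stub: brightness adjustment of S46
    return {"image": image, "marks": marks}

def generate_ar_image(scene_image, device_info, op_history):
    marks = [{"type": "steps", "payload": update_steps(preset_steps(device_info), op_history)},
             {"type": "glove_safety", "index": glove_safety(scene_image)},
             {"type": "tool_safety", "index": tool_safety(scene_image)}]
    if device_is_live(device_info):
        marks.append({"type": "live", "payload": "ABSOLUTELY LIVE"})
    return adjust_brightness(scene_image, adjust_positions(scene_image, marks))

ar = generate_ar_image("scene.png", {"name": "transformer", "live": True},
                       ["inspect transformer"])
```

Because one step of the operation history is already recorded, the updated step list starts at "isolate", which mirrors S41's history-based update.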
In S41, the step of updating the scene operation and maintenance step according to the operation history of the operation and maintenance personnel specifically includes:
step 1: acquiring a standard operation video of operation and maintenance personnel, where the standard operation is an operation performed in the same scene by experienced operation and maintenance personnel in accordance with the operation and maintenance rules;
step 2: performing OpenPose extraction on each frame of the standard operation video to obtain an operation and maintenance Pose sequence, in particular hand poses accurate to each finger;
step 3: extracting a key Pose from the operation and maintenance Pose sequence to obtain a key Pose sequence;
step 4: matching the operation history of the current operation and maintenance personnel with the obtained key Pose sequence to obtain the most matched key Pose;
step 5: acquiring the subsequence after the best-matched key Pose in the key Pose sequence as the updated operation and maintenance steps; here, the operation and maintenance steps include but are not limited to text descriptions, position indications, and action indications;
the method for extracting the key Pose from the operation and maintenance Pose sequence comprises the following steps of:
step 3.1, acquiring a hand center of each operation and maintenance Pose in the operation and maintenance Pose sequence;
step 3.2, clustering the hand centers of the operation and maintenance Poses using a neighborhood clustering algorithm to obtain M cluster centers, where M ∈ [5, 20], preferably M = 10;
step 3.3, using the M calculated clustering centers as M key Pose, and forming a key Pose sequence;
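The clustering of hand centers in steps 3.1-3.3 can be sketched with a plain k-means loop. The patent names a neighborhood clustering algorithm; k-means is substituted here for illustration, and the synthetic hand-center data is an assumption:

```python
import numpy as np

def kmeans(points, m, iters=50, seed=0):
    """Cluster 2-D hand centres into m groups; the m cluster centres
    serve as the m key Poses of step 3.3."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), m, replace=False)]
    for _ in range(iters):
        # assign each hand centre to its nearest cluster centre
        labels = np.argmin(np.linalg.norm(points[:, None] - centres[None], axis=2), axis=1)
        for k in range(m):
            if np.any(labels == k):
                centres[k] = points[labels == k].mean(axis=0)
    return centres

# synthetic hand centres from an operation and maintenance Pose sequence,
# drawn around three distinct hand positions
rng = np.random.default_rng(1)
pose_centres = np.vstack([rng.normal(c, 0.1, (30, 2))
                          for c in [(0, 0), (5, 5), (10, 0)]])
key_poses = kmeans(pose_centres, m=3)
```

Each returned centre stands in for one key Pose; in the patent M would typically be 10 rather than the 3 used in this toy example.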
the step of matching the operation history of the operation and maintenance personnel with the obtained key Pose sequence to obtain the best matched key Pose comprises the following steps:
step 4.1, performing OpenPose extraction on the operation history of the operation and maintenance personnel to obtain an operation-history Pose sequence;
step 4.2, acquiring a hand center of each operation history Pose in the operation history Pose sequence;
step 4.3, connecting the hand centers of the Poses in the operation-history Pose sequence into a curve, denoted the first curve;
step 4.4, connecting the hand centers of the key Poses in the key Pose sequence into a curve, denoted the second curve;
step 4.5, searching the second curve for the segment that best matches the first curve; the key Pose whose hand center is nearest, by coordinates, to the end point of the best-matching segment is taken as the marker Pose, and the key Pose to the left of the marker Pose is taken as the best-matched key Pose;
Through the above steps, it can be dynamically determined whether the user has omitted an action, and the omitted action is marked on the image of the AR glasses.
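The segment search of step 4.5 can be sketched as a sliding-window comparison between the two curves. The mean point-to-point distance used below is a simplification; the patent does not fix a distance measure, and the example curves are assumptions:

```python
import numpy as np

def best_match_offset(first, second):
    """Slide the shorter first curve (operation history) along the second
    curve (key Poses) and return the offset minimising the mean
    point-to-point distance."""
    n, m = len(first), len(second)
    best, best_off = float("inf"), 0
    for off in range(m - n + 1):
        d = np.linalg.norm(second[off:off + n] - first, axis=1).mean()
        if d < best:
            best, best_off = d, off
    return best_off

second = np.array([[0, 0], [1, 0], [2, 1], [3, 1], [4, 0]], dtype=float)  # key-Pose curve
first = second[1:4] + 0.05                                                # noisy history curve
off = best_match_offset(first, second)
# the end of the matched segment marks the user's progress; the key Poses
# after it are the operation and maintenance steps that remain to be done
remaining = len(second) - (off + len(first))
```

Here the history curve matches key Poses 1-3, so one key Pose remains, i.e. one step still has to be performed.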
The operation history of the operation and maintenance personnel is an operation record of the operation and maintenance personnel, and the operation record can be empty, which indicates that the operation and maintenance personnel has not started operation and maintenance operation yet.
Here, the glove safety index calculation is a complex process involving image processing and machine learning. First, the glove image is separated from the scene image; the glove's safety index is then calculated from characteristics such as its color, texture, and shape, together with its degree of wear and soiling.
Separation of glove images: an image segmentation algorithm, such as GrabCut, watershed, may be used to separate the glove image from the scene image. Specifically, an initial glove region is set, and then the image of the glove is gradually separated from the background by using information such as color, texture, and the like. The aim of the image segmentation is to make the separated glove image as consistent as possible with the original glove image, i.e. the segmentation error is as small as possible.
Extracting glove characteristics: a feature extraction algorithm, such as SIFT, SURF, HOG, may be used to extract the color, texture, shape, etc. features of the glove. In particular, it is necessary to calculate a color histogram, texture histogram, shape descriptor, etc. of the glove image to describe color, texture, shape information of the glove. Furthermore, it is also necessary to calculate the abrasion and soiling characteristics of the glove image, such as the area proportion of the abrasion region, the area proportion of the soiling region, etc.
Calculation of the glove safety index: a machine learning algorithm, such as a support vector machine, random forest, or neural network, may be used to calculate the glove safety index from the glove characteristics. Specifically, a prediction model is built whose input is the glove's characteristics and whose output is its safety index; the model is trained on known glove characteristics and safety index data so that its prediction error is as small as possible. Historical accident records can serve as training data by assigning safety labels to the gloves, tools, and equipment involved; the safety index of a glove in a new image is then obtained by comparing its appearance against these labeled historical examples.
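The feature-extraction and prediction steps above can be sketched end to end with toy features and a 1-nearest-neighbour lookup against labeled historical examples. The feature choices (intensity histogram plus a dark-pixel "wear" ratio), the darkness threshold, and the safety-index labels are all assumptions for illustration, not the patent's model:

```python
import numpy as np

def glove_features(gray):
    """Toy feature vector: 4-bin intensity histogram plus the area ratio
    of dark pixels, used here as a stand-in for a wear/soiling region."""
    hist, _ = np.histogram(gray, bins=4, range=(0, 256))
    hist = hist / gray.size
    wear_ratio = np.mean(gray < 64)  # threshold chosen for illustration only
    return np.append(hist, wear_ratio)

def predict_safety(features, train_feats, train_index):
    """1-nearest-neighbour prediction against historically labelled gloves."""
    d = np.linalg.norm(train_feats - features, axis=1)
    return train_index[int(np.argmin(d))]

# two labelled historical gloves: a clean one (index 9) and a worn one (index 3)
clean = np.full((8, 8), 200.0)
worn = np.full((8, 8), 200.0); worn[:4, :] = 30.0
train = np.stack([glove_features(clean), glove_features(worn)])
labels = np.array([9, 3])

query = np.full((8, 8), 200.0); query[:3, :] = 30.0  # moderately worn glove
pred_idx = predict_safety(glove_features(query), train, labels)
print(pred_idx)
```

The moderately worn query glove lands nearer the worn historical example, so it inherits the lower safety index, mirroring the "compare against labeled history" idea.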
Further, in the above technical solution, the method for obtaining the safety index of the glove of the operation and maintenance personnel is obtained by calculating the glove in the pretreatment image by using a pre-trained glove safety index model, wherein the steps of establishing and training the glove safety index model specifically include:
establishing a first training sample, wherein the first training sample comprises glove images of a plurality of different types, with multiple images per type, each glove exhibiting different degrees of wear and oil staining, and each glove having a corresponding safety index obtained by manual labeling according to safe operation and maintenance regulations;
establishing a glove safety index model prototype by using a convolutional neural network;
and training the model prototype of the glove safety index model by using a first training sample to obtain the glove safety index model, wherein the training input is a glove image, and the training output is a safety index corresponding to the glove image.
Further, in the above technical solution, the method for obtaining the security index of the operation and maintenance tool is obtained by calculating the operation and maintenance tool in the preprocessed image by using a pre-trained operation and maintenance tool security index model, where the steps of establishing and training the operation and maintenance tool security index model specifically include:
establishing a second training sample, wherein the second training sample comprises operation and maintenance tool images of a plurality of different types, each type of tool having a plurality of parts with different degrees of wear and damage, and each tool image having a corresponding safety index obtained by manual labeling according to safe operation and maintenance regulations;
establishing an operation and maintenance tool safety index model prototype by using a convolutional neural network;
and training the operation and maintenance tool safety index model prototype by using a second training sample to obtain an operation and maintenance tool safety index model, wherein the training input is an operation and maintenance tool image, and the training output is a safety index corresponding to the operation and maintenance tool image.
Further, in the above technical solution, the step of determining whether the device in the scene image is charged, if so, performing absolute charging marking on the scene image, specifically includes:
acquiring an electricity testing result acquired by an electroscope in the operation and maintenance process of a power grid;
and screening the charged electricity test result from the electricity test result, and marking the electricity test result in the scene image according to the corresponding equipment.
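The live-line screening of S44 reduces to filtering the electroscope readings for live results and attaching an absolute-live mark to each corresponding device. The record structure and field names below are illustrative assumptions:

```python
def live_marks(test_results):
    """Keep only live-line readings and turn each into an absolute-live
    mark for the corresponding device in the scene image."""
    return [{"device": r["device"], "mark": "ABSOLUTELY LIVE"}
            for r in test_results if r["live"]]

# electroscope readings collected during grid operation and maintenance
readings = [
    {"device": "busbar A", "live": True},
    {"device": "transformer T1", "live": False},
    {"device": "isolator Q3", "live": True},
]
marks_out = live_marks(readings)
print(marks_out)
```

Only the two live devices receive marks; the de-energized transformer is screened out.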
Further, in the above technical solution, the step of adjusting the position of the marker information on the scene image specifically includes:
dividing the scene image into a plurality of areas;
performing target detection in the segmented scene image to identify the equipment and the marking information;
and judging whether the equipment is shielded by the mark information or not through shielding judgment and position optimization, and if so, adjusting the position of the mark information.
In step S45, position adjustment of the marker information on the scene image is required to avoid occlusion of the device. The marking information includes operation and maintenance steps, glove safety index, operation and maintenance tool safety index and absolute electrification condition. First, some variables and parameters need to be defined. Let I denote the scene image, M denote the marker information, P denote the position of the device in the image, O denote the original position of the marker information in the image, N denote the new position of the marker information in the image.
The equipment itself can also be assigned a safety index; for example, on a sudden discharge sound on site, the equipment's safety index is raised to the highest danger level, and the user, seeing the danger index on the AR glasses, can withdraw quickly.
The position P of the device can be obtained by feature extraction and recognition in step S30. The position O of the original marking information can be obtained by AR image generation and device information marking in steps S40 and S41, with the aim of finding a new position N so that the marking information M does not obscure the device.
To achieve this goal, techniques of image segmentation and object detection may be utilized. Image segmentation may divide the image I into a plurality of regions, each region representing an object or a background. Target detection can identify and locate specific objects in the image. In this step, image segmentation may be used to divide the scene image I and target detection is used to identify device and marker information.
First, the scene image I is divided into a plurality of regions using an image segmentation algorithm. This can be achieved by some classical image segmentation algorithms like Watershed algorithm, K-means clustering, grabCut algorithm, etc. Then, the device and the tag information are identified in the scene image I using the object detection algorithm. This can be achieved by some classical target detection algorithms like Faster R-CNN, YOLO, SSD, etc.
Through image segmentation and object detection, the locations of the equipment and the marker information in the scene image I can be obtained. Then, it is necessary to determine whether the marker information M occludes the equipment. This may be achieved by computing the Intersection over Union (IoU) of the equipment and the marker information. IoU is an index that measures the degree of overlap of two regions, defined as the ratio of their intersection area to their union area. If the IoU of the equipment and the marker information is greater than a preset threshold, the marker information M is considered to occlude the equipment.
Let I_M and I_P denote the regions of the marker information M and the equipment P in the scene image I, let A(I_M) and A(I_P) denote their areas, A(I_M ∩ I_P) their intersection area, and A(I_M ∪ I_P) their union area. The IoU of the equipment and the marker information is then defined as:

IoU = A(I_M ∩ I_P) / A(I_M ∪ I_P)
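For the common simplification where marker and device regions are axis-aligned rectangles, the IoU definition above can be computed directly:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # A(I_M ∩ I_P)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)         # union by inclusion-exclusion

device = (0, 0, 10, 10)
marker = (5, 5, 15, 15)
print(iou(device, marker))  # 25 / (100 + 100 - 25) = 1/7 ≈ 0.143
```

With a threshold θ of, say, 0.1, this marker would be judged to occlude the device and would need to be moved.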
if IoU > θ, where θ is a preset threshold, then the marker information M is considered to obscure the device. In this case, it is necessary to move the position of the mark information M from O to N so that IoU is smaller than θ. This may be achieved by some optimization algorithm such as gradient descent, simulated annealing, genetic algorithms, etc.
Let f (N) = IoU, the goal is to find a new position N, such that f (N) < θ. This is a constraint optimization problem, where the constraint condition can be introduced into the objective function by a lagrangian multiplier method to construct a lagrangian function L (N, λ) =f (N) +λ (θ -f (N)), where λ is the lagrangian multiplier. Then, by solving the minimum value point of L (N, λ), a new position N can be obtained.
In general, the embodiment of step S45 includes the following steps: image segmentation, target detection, occlusion judgment and position optimization. Through the steps, the mark information shielding equipment can be effectively avoided, and the use experience of AR glasses and the working efficiency of operation and maintenance personnel are improved.
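The position-optimization part of S45 can be sketched as a simple search over candidate label positions. The patent suggests gradient descent, simulated annealing, or genetic algorithms; a greedy grid search over growing shift radii is substituted here for brevity, and the image bounds, step size, and threshold are assumptions:

```python
def iou(a, b):  # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    ua = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / ua

def relocate_marker(marker, device, theta=0.1, step=5, img=(0, 0, 100, 100)):
    """Try shifts of growing radius until the marker's IoU with the device
    drops below theta; return the first acceptable in-image position N."""
    for r in range(0, 100, step):
        for dx, dy in [(r, 0), (-r, 0), (0, r), (0, -r), (r, r), (-r, -r)]:
            cand = (marker[0]+dx, marker[1]+dy, marker[2]+dx, marker[3]+dy)
            inside = (cand[0] >= img[0] and cand[1] >= img[1]
                      and cand[2] <= img[2] and cand[3] <= img[3])
            if inside and iou(cand, device) < theta:
                return cand
    return marker  # no better position found; keep original O

device = (20, 20, 60, 60)
marker = (30, 30, 50, 45)            # label currently inside the device region
pos = relocate_marker(marker, device)
print(pos)
```

The marker starts fully inside the device region (IoU ≈ 0.19) and is shifted outward until the overlap drops below the threshold, which is the role of the new position N in the text.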
Further, in the above technical solution, in the step of establishing the glove safety index model prototype by using the convolutional neural network, the network skeleton adopted by the convolutional neural network is DenseNet121.
DenseNet121 is a variant of DenseNet (Densely Connected Convolutional Networks), where 121 denotes the depth of the network, i.e. the network has 121 layers. DenseNet is a convolutional neural network (CNN) architecture whose main characteristic is dense connections between network layers: each layer is directly connected to all layers before it, which allows the network to better utilize the features of earlier layers.
Major advantages of DenseNet121 include:
Parameter efficiency: since each layer has direct access to the features of all previous layers, fewer parameters are required per layer.
Improved gradient flow: since gradients can be passed directly to all earlier layers, the vanishing-gradient problem is reduced.
Improved feature reuse: each layer can reuse the features of earlier layers when needed.
More effective feature fusion: features can be fused more effectively across all layers of the network.
DenseNet121 has shown good performance on many image classification tasks, such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), and is therefore suitable for the recognition and analysis of gloves and operation and maintenance tools.
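The "121" can be verified by a quick layer accounting, assuming the standard DenseNet-121 configuration (dense blocks of 6, 12, 24, and 16 dense layers, growth rate 32, each dense layer containing a 1×1 bottleneck and a 3×3 convolution):

```python
# Layer count of DenseNet-121: stem conv + 2 convs per dense layer
# + 1 transition conv after each of the first three blocks + classifier.
blocks = [6, 12, 24, 16]        # dense layers per dense block
growth_rate, init_channels = 32, 64

depth = 1                        # 7x7 stem convolution
channels = init_channels
for i, n_layers in enumerate(blocks):
    depth += 2 * n_layers        # each dense layer: 1x1 bottleneck + 3x3 conv
    channels += n_layers * growth_rate   # dense connectivity grows channels
    if i < len(blocks) - 1:
        depth += 1               # 1x1 transition convolution
        channels //= 2           # transition halves the channel count
depth += 1                       # final fully connected classifier
print(depth, channels)           # 121 layers, 1024 output channels
```

The channel growth (each dense layer adding growth-rate channels to the concatenated input of all later layers) is exactly the feature-reuse property listed above; for the glove safety index task, the 1024-channel head would feed a regression output instead of the ImageNet classifier.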
Preferably, the marks can be adjusted as the user moves and as image recognition updates. For example, the user's working period may change (e.g. the user leaves midway and later returns to the scene), the devices in the surrounding environment may differ, and the running state of each device may differ before and after the return, such as temperature rise, voltage fluctuation, or discharge changes of electrical equipment; the resulting state changes of the devices before and after the user's movement cause the marks to change accordingly.
The preset scene's operation and maintenance steps, the personnel's operations on the equipment, and the marks derived from gloves, operation tools, and equipment states (temperature, voltage, discharge changes, etc.) all assume an ideal, minimum-risk state, so as to ensure the safety of personnel.
Differences in the marks are caused by the user's movement, such as returning after temporarily leaving during work, wearing equipment incorrectly, or taking another person's equipment, as well as by changes in device state during that period.
At this time, if the user continues the subsequent operation and maintenance work using the marks from before the movement, a safety accident may occur. Therefore, at regular intervals, or whenever a device's temperature rise or voltage rise exceeds its threshold or a sensor detects discharge, mark regeneration is triggered to check whether the glove and operation tools still conform to the device's changed state. The marks of the current scene image are compared with the preset marks representing safe operation and maintenance, for example by weighted summation, and the final sums are compared to determine whether continuing to operate after the change poses a hidden safety hazard: if the current sum is greater than the weighted sum of the historical operation and maintenance step, a hidden safety hazard is indicated.
For example, before and after a user goes out for a meal, the same overhaul operation on a transformer carries different risk because of the high load on the grid equipment at noon; the marks therefore differ between 10:00 and 13:00, the danger level is raised, and a safety accident involving the user's gloves or tools is avoided.
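The weighted-summation comparison described above can be sketched as follows. The weights, the per-mark risk values, and the two scenarios are hypothetical numbers chosen only to illustrate the comparison rule:

```python
def weighted_mark_score(marks, weights):
    """Weighted sum of the per-mark risk values of a scene (higher = riskier)."""
    return sum(weights[k] * v for k, v in marks.items())

weights = {"glove": 0.3, "tool": 0.3, "device": 0.4}   # illustrative weights

baseline = {"glove": 2, "tool": 2, "device": 1}        # preset safe marks (10:00)
current = {"glove": 2, "tool": 3, "device": 5}         # after returning at 13:00

# a hidden safety hazard is flagged when the current weighted sum exceeds
# the historical (preset) weighted sum
hazard = weighted_mark_score(current, weights) > weighted_mark_score(baseline, weights)
print(hazard)
```

Here the noon load raises the device's risk value, the current sum (3.5) exceeds the baseline (1.6), and the hazard flag would trigger mark regeneration on the AR glasses.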
A second aspect of the present invention provides a computer readable storage medium having stored therein program instructions that, when executed, are configured to perform an AR glasses-based power grid maintenance navigation method as described above.
The third aspect of the invention provides an AR glasses-based power grid maintenance navigation system, which comprises AR glasses, an electroscope, an image acquisition device, and a positioning device, wherein the AR glasses are provided with the image acquisition device, the electroscope is provided with a Bluetooth device through which it is communicatively connected to the AR glasses, and the system further comprises the computer readable storage medium described above.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (7)

1. The power grid maintenance navigation method based on the AR glasses is characterized by comprising the following steps of:
s10, acquiring acquired scene images and position information of AR glasses worn by operation and maintenance personnel;
s20, preprocessing the scene image to obtain a preprocessed image;
s30, extracting characteristics of the preprocessed images, and identifying and obtaining information of a plurality of devices in the scene images and gloves and operation tools of operation and maintenance personnel, wherein the information of the devices comprises device names, device positions, device states and/or device structures;
s40, generating an AR image by utilizing the scene image according to the equipment information, wherein the AR image is marked with an operation and maintenance step, an absolute electrification condition, a safety index of gloves of operation and maintenance personnel and a safety index of operation and maintenance tools;
s50, sending the generated AR image to AR glasses worn by operation and maintenance personnel and outputting the AR image;
in the step of extracting and identifying the features of the preprocessed image, the algorithm of the feature extraction is SIFT, and the identification method is convolutional neural network identification;
the step of generating an AR image using the scene image according to the device information specifically includes:
s41, marking equipment information and a preset scene operation and maintenance step on the scene image, and updating the scene operation and maintenance step according to the operation history of operation and maintenance personnel;
s42, acquiring a safety index of the glove of the operation and maintenance personnel, and marking the position of the glove of the scene image;
s43, acquiring a safety index of the operation and maintenance tool, and marking the position of the operation and maintenance tool of the scene image;
s44, judging whether equipment in the scene image is charged or not, and if so, carrying out absolute charging marking on the scene image;
s45, adjusting the position of a mark on the scene image to a position that does not occlude the marked object, wherein the mark carries the following information: operation and maintenance steps, glove safety index, operation and maintenance tool safety index, and absolute live-line condition;
s46, adjusting brightness of the marked scene image to generate the AR image.
2. The AR-glasses-based power grid maintenance navigation method according to claim 1, wherein the step of obtaining the safety index of the glove of the operation and maintenance person is obtained by calculating the glove in the pre-processing image by using a pre-trained glove safety index model, and the steps of establishing and training the glove safety index model specifically comprise:
establishing a first training sample, wherein the first training sample comprises glove images of a plurality of different types, with multiple images per type, each glove exhibiting different degrees of wear and oil staining, and each glove having a corresponding safety index obtained by manual labeling according to safe operation and maintenance regulations;
establishing a glove safety index model prototype by using a convolutional neural network;
and training the model prototype of the glove safety index model by using a first training sample to obtain the glove safety index model, wherein the training input is a glove image, and the training output is a safety index corresponding to the glove image.
3. The AR glasses-based power grid maintenance navigation method according to claim 1, wherein the step of obtaining the security index of the operation and maintenance tool is obtained by calculating the operation and maintenance tool in the pre-processed image by using a pre-trained operation and maintenance tool security index model, and the steps of establishing and training the operation and maintenance tool security index model specifically include:
establishing a second training sample, wherein the second training sample comprises operation and maintenance tool images of a plurality of different types, each type of tool having a plurality of parts with different degrees of wear and damage, and each tool image having a corresponding safety index obtained by manual labeling according to safe operation and maintenance regulations;
establishing an operation and maintenance tool safety index model prototype by using a convolutional neural network;
and training the operation and maintenance tool safety index model prototype by using a second training sample to obtain an operation and maintenance tool safety index model, wherein the training input is an operation and maintenance tool image, and the training output is a safety index corresponding to the operation and maintenance tool image.
4. The AR glasses-based power grid maintenance navigation method according to claim 1, wherein the step of determining whether the equipment in the scene image is charged or not, if so, performing absolute charging mark on the scene image, specifically comprises:
acquiring an electricity testing result acquired by an electroscope in the operation and maintenance process of a power grid;
and screening the charged electricity inspection result from the electricity inspection result, and marking the scene image according to the corresponding equipment.
5. The AR-glasses-based power grid maintenance navigation method according to claim 1, wherein the step of adjusting the position of the mark on the scene image specifically comprises:
dividing the scene image into a plurality of regions;
in the segmented scene image in each region, carrying out marked object detection, and identifying the equipment, the mark and information thereof;
judging whether the marks and the information thereof shield the glove, the operation and maintenance tool and the equipment, and if yes, adjusting the positions of the marks and the information thereof.
6. A computer readable storage medium, wherein program instructions are stored in the computer readable storage medium, and when the program instructions are executed, the program instructions are configured to perform an AR glasses-based power grid maintenance navigation method according to any one of claims 1-5.
7. An AR (augmented reality) glasses-based power grid maintenance navigation system, characterized by comprising AR glasses, an electroscope, an image acquisition device, and a positioning device, wherein the AR glasses are provided with a Bluetooth device, the electroscope is in communication connection with the AR glasses through the Bluetooth device, and the AR glasses-based power grid maintenance navigation system further comprises the computer-readable storage medium according to claim 6.
CN202310979975.2A 2023-08-04 2023-08-04 AR (augmented reality) glasses-based power grid maintenance navigation method, medium and system Active CN117036650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310979975.2A CN117036650B (en) 2023-08-04 2023-08-04 AR (augmented reality) glasses-based power grid maintenance navigation method, medium and system

Publications (2)

Publication Number Publication Date
CN117036650A CN117036650A (en) 2023-11-10
CN117036650B true CN117036650B (en) 2024-03-12

Family

ID=88638474


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952529A (en) * 2017-05-25 2017-07-14 广东电网有限责任公司教育培训评价中心 The safe drilling method of transformer station's crane operation, device and system
CN107333111A (en) * 2017-08-07 2017-11-07 国家电网公司 A kind of method of inspecting substation equipment, apparatus and system
CN107610269A (en) * 2017-09-12 2018-01-19 国网上海市电力公司 A kind of power network big data intelligent inspection system and its intelligent polling method based on AR
CN109858636A (en) * 2018-12-28 2019-06-07 中国电力科学研究院有限公司 Power circuit livewire work method and apparatus based on mixed reality
CN110413122A (en) * 2019-07-30 2019-11-05 厦门大学嘉庚学院 A kind of AR eyewear applications method and system with operative scenario identification
CN110502119A (en) * 2019-08-28 2019-11-26 国网上海市电力公司 Transformer fault case virtual interactive interface method and system based on virtual reality
CN110728670A (en) * 2019-10-14 2020-01-24 贵州电网有限责任公司 Low-voltage equipment operation and maintenance method based on AR technology
CN111722714A (en) * 2020-06-17 2020-09-29 贵州电网有限责任公司 Digital substation metering operation inspection auxiliary method based on AR technology
CN112018892A (en) * 2020-09-04 2020-12-01 南京太司德智能电气有限公司 Electric power operation and maintenance remote guidance system
CN114220117A (en) * 2021-11-01 2022-03-22 浙江大华技术股份有限公司 Wearing compliance detection method and device and computer readable storage medium
WO2023097016A2 (en) * 2021-11-23 2023-06-01 Strong Force Ee Portfolio 2022, Llc Ai-based energy edge platform, systems, and methods

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106952529A (en) * 2017-05-25 2017-07-14 广东电网有限责任公司教育培训评价中心 The safe drilling method of transformer station's crane operation, device and system
CN107333111A (en) * 2017-08-07 2017-11-07 国家电网公司 A kind of method of inspecting substation equipment, apparatus and system
CN107610269A (en) * 2017-09-12 2018-01-19 国网上海市电力公司 A kind of power network big data intelligent inspection system and its intelligent polling method based on AR
CN109858636A (en) * 2018-12-28 2019-06-07 中国电力科学研究院有限公司 Power circuit livewire work method and apparatus based on mixed reality
CN110413122A (en) * 2019-07-30 2019-11-05 厦门大学嘉庚学院 A kind of AR eyewear applications method and system with operative scenario identification
CN110502119A (en) * 2019-08-28 2019-11-26 国网上海市电力公司 Transformer fault case virtual interactive interface method and system based on virtual reality
CN110728670A (en) * 2019-10-14 2020-01-24 贵州电网有限责任公司 Low-voltage equipment operation and maintenance method based on AR technology
CN111722714A (en) * 2020-06-17 2020-09-29 贵州电网有限责任公司 Digital substation metering operation inspection auxiliary method based on AR technology
CN112018892A (en) * 2020-09-04 2020-12-01 南京太司德智能电气有限公司 Electric power operation and maintenance remote guidance system
CN114220117A (en) * 2021-11-01 2022-03-22 浙江大华技术股份有限公司 Wearing compliance detection method and device and computer readable storage medium
WO2023097016A2 (en) * 2021-11-23 2023-06-01 Strong Force Ee Portfolio 2022, Llc Ai-based energy edge platform, systems, and methods

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Application of AR Augmented Reality Technology in Operation and Inspection of Secondary Equipment in Substations; Jian Xuezhi; Liu Zijun; Wen Minghao; Zhang Huixian; Power System Protection and Control; Vol. 48, No. 15; pp. 170-176 *
Application of AR Technology in Operation and Maintenance of Electricity Information Acquisition Systems; Zhang Qiuyan; Zhang Junwei; Ding Chao; Qian Wei; Li Shihua; Automation & Instrumentation; Vol. 35, No. 04; pp. 30-33 *
Efficiency and Safety Improvement of Power Equipment Smart Inspection and Operation via Augmented Reality Glasses based on AI Technology; Xiaoxiong Lu et al.; 2022 the 4th World Symposium on Artificial Intelligence; pp. 18-23 *
Research on Key Technologies of Dynamic Substation Operation and Maintenance Based on Augmented Reality (AR); Peng Bo; China Master's Theses Full-text Database, Engineering Science and Technology II; full text *
Application Analysis of Smart Glasses in Intelligent Inspection of Substation Power Equipment; Wang Yongming; Huang Chunhong; Li Peng; Li Kuanhong; Lin Lihui; Science & Technology Vision, No. 18; pp. 42-43 *
A Preliminary Study on the Knowledge Graph Framework and Key Technologies for Power Grid Dispatching Fault Handling; Qiao Ji; Wang Xinying; Min Rui; Bai Shuhua; Yao Dong; Pu Tianjiao; Proceedings of the CSEE; Vol. 40, No. 18; pp. 5837-5849 *

Also Published As

Publication number Publication date
CN117036650A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
Racki et al. A compact convolutional neural network for textured surface anomaly detection
CN110569837B (en) Method and device for optimizing damage detection result
KR101549645B1 (en) Method and apparatus of recognizing facial expression using motion dictionary
CN112149514B (en) Method and system for detecting safety dressing of construction worker
CN107133569A (en) The many granularity mask methods of monitor video based on extensive Multi-label learning
Yoo et al. Development of a crack recognition algorithm from non-routed pavement images using artificial neural network and binary logistic regression
CN106709518A (en) Android platform-based blind way recognition system
CN116108397B (en) Electric power field operation violation identification method integrating multi-mode data analysis
Han et al. Recognition and location of steel structure surface corrosion based on unmanned aerial vehicle images
CN112069988A (en) Gun-ball linkage-based driver safe driving behavior detection method
CN113688797A (en) Abnormal behavior identification method and system based on skeleton extraction
Vasseur et al. Perceptual organization approach based on Dempster–Shafer theory
Ngxande et al. Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms
Ali et al. Substation Danger Sign Detection and Recognition using Convolutional Neural Networks
CN116872961B (en) Control system for intelligent driving vehicle
KR101967858B1 (en) Apparatus and method for separating objects based on 3D depth image
CN117036650B (en) AR (augmented reality) glasses-based power grid maintenance navigation method, medium and system
CN112966618A (en) Dressing identification method, device, equipment and computer readable medium
Moussa et al. Manmade objects classification from satellite/aerial imagery using neural networks
Zhang et al. Semantic segmentation of point clouds of field obstacle-crossing terrain for multi-legged rescue equipment based on random forest
CN116862952B (en) Video tracking method for substation operators under similar background conditions
Shamila Ebenezer et al. Identification of Civil Infrastructure Damage Using Ensemble Transfer Learning Model
CN114004963B (en) Target class identification method and device and readable storage medium
Gowtham et al. Text Detection and Language Identification on Natural Scene Images using Faster R-CNN
Ebenezer et al. Identification of civil infrastructure damage using ensemble transfer learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant