CN113327240A - Visual guidance-based wire lapping method and system and storage medium

Info

Publication number
CN113327240A
CN113327240A
Authority
CN
China
Prior art keywords
operated
component
prediction frame
task
image
Prior art date
Legal status
Pending
Application number
CN202110656321.7A
Other languages
Chinese (zh)
Inventor
何忠良
傅晓飞
李大武
水炜
周李刚
蒋晨平
冯旭洲
王刚
陆凯磊
颜旭昊
Current Assignee
Gsg Intelligent Technology Co ltd
Shanghai South Power Group Co ltd
State Grid Shanghai Electric Power Co Ltd
CSG Smart Science and Technology Co Ltd
Original Assignee
Gsg Intelligent Technology Co ltd
Shanghai South Power Group Co ltd
State Grid Shanghai Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Gsg Intelligent Technology Co ltd, Shanghai South Power Group Co ltd, State Grid Shanghai Electric Power Co Ltd
Priority to CN202110656321.7A
Publication of CN113327240A
Legal status: Pending

Classifications

    • G06T 7/0004 — Industrial image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06N 3/045 — Neural network architectures; combinations of networks
    • G06N 3/08 — Neural network learning methods
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/20068 — Projection on vertical or horizontal image axis
    • G06T 2207/20081 — Training; learning

Abstract

The invention discloses a visual guidance-based wire lapping method, system, and storage medium, belonging to the technical field of machine learning. The method comprises the following steps: acquiring an image to be detected; analyzing the image to be detected with a YOLOv3 network to obtain a 2D prediction frame of the power distribution network task component to be operated; processing the 2D prediction frame with a spatial detection model to obtain a 3D prediction frame of the component to be operated; and performing wire-lapping positioning guidance according to the 3D prediction frame to complete the operation task. The invention effectively improves operation efficiency and operation safety and realizes automated, intelligent power distribution network operation.

Description

Visual guidance-based wire lapping method and system and storage medium
Technical Field
The invention relates to the technical field of machine learning, and in particular to a visual guidance-based wire lapping method, system, and storage medium.
Background
At present, high-voltage live working still relies on the insulating-glove method, which requires operators to climb high-voltage towers or perform uninterrupted work from an insulating bucket-arm vehicle. Manual live working places the operator in a dangerous environment of high altitude, high voltage, and strong electromagnetic fields; the labor intensity is high and the body posture is difficult to control. Even when safety regulations are strictly followed and insulation protection measures are added, the operator's mental pressure and physical strain cannot be fully relieved; a moment of carelessness can cause casualty accidents, bringing serious loss to families and society. To support measures such as strengthening power grid transformation, improving the working environment, and enhancing uninterrupted distribution network operation management, replacing manual work with intelligent robots for uninterrupted operation is an urgent link to be solved in the distribution network maintenance system.
When a robot performs live working, an operator can control it remotely, with the robot control signals transmitted over wireless links or optical fiber, ensuring that the operator is isolated from the high-voltage electric field. However, the operator of a live working robot sits in a master control room: the field working environment is transmitted to the master console through field cameras, the operator watches the field video on the master display, and drives the working manipulator with a master arm or a keyboard and mouse to complete the live work. For operations requiring precise positioning, judging by vision alone whether the target point has been reached and then manually steering the manipulator easily causes position errors, reducing operation efficiency and quality. Operations such as disconnecting and connecting isolating switches, drop-out fuses, the leads at both ends of lightning arresters, and disconnection leads must be performed according to precise spatial information; it is therefore essential to determine the spatial information of the components to be operated and of the wiring area through autonomous detection and positioning.
Disclosure of Invention
The invention aims to overcome the defects described in the background art and to improve the operation efficiency and operation quality of power distribution network work.
To achieve the above object, in a first aspect, a visual guidance-based wire lapping method is provided, which includes:
acquiring an image to be detected;
analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
processing the 2D prediction frame of the task component to be operated by using the spatial detection model to obtain a 3D prediction frame of the task component to be operated;
and performing wire-lapping positioning guidance according to the 3D prediction frame of the component to be operated to complete the operation task.
Further, the loss functions of the YOLOv3 network include a loss function for determining coordinate errors, an iouError loss function for determining IOU errors, and a classError loss function for determining classification errors.
Further, the spatial detection model adopts a 3D target detection network model based on the M3D-RPN network.
Further, the loss functions of the M3D-RPN network include a classification loss function, a 2D box regression loss function, and a 3D box regression loss function.
Further, performing wire-lapping positioning guidance according to the 3D prediction frame of the component to be operated to complete the operation task includes:
calculating the wiring area information of the component to be operated based on the 3D prediction frame and prior information of the component, and performing wire-lapping positioning guidance based on the wiring area information to complete the operation task.
Further, after the 2D prediction frame of the task component to be operated is processed by using the spatial detection model to obtain the 3D prediction frame of the task component to be operated, the method further includes:
performing two-dimensional spatial projection of the 3D prediction frame of the component to be operated, i.e., projecting the spatial information of the component onto the plane image to obtain a plane projection;
carrying out target detection on components to be operated of the power distribution network by using a locally constructed target detection network to obtain a 2D prediction frame;
calculating the IOU (degree of coincidence) between the plane projection and the 2D prediction frame;
judging whether the IOU value is greater than a set threshold;
if so, outputting the 3D prediction frame of the to-be-operated task component as the space coordinate information of the to-be-operated component;
and if not, correcting the 3D prediction frame of the component to be operated.
Further, the correcting the 3D prediction frame of the component to be operated includes:
re-acquiring the image to be detected;
calculating the projection of a new 3D prediction frame of the component to be operated on a two-dimensional plane according to the newly acquired image to be detected to obtain a new plane projection;
carrying out target detection on components to be operated of the power distribution network by using a locally constructed target detection network to obtain a new 2D prediction frame;
calculating the IOU coincidence degree between the new plane projection and the new 2D prediction frame;
judging whether the IOU degree of coincidence is greater than the set threshold;
If so, outputting the new 3D prediction frame of the component to be operated as the space coordinate information of the component to be operated;
and if not, performing compensation calculation on the new 3D prediction frame to obtain the space coordinate information of the component to be operated.
In a second aspect, a visual guidance-based wire lapping system is provided, comprising a camera and an operation robot, wherein the camera is mounted on the operation robot and a data processing module is provided in the operation robot, the data processing module comprising an image acquisition unit, a target detection unit, and a spatial information detection unit, wherein:
the image acquisition unit is used for acquiring the image to be detected captured by the camera;
the target detection unit is used for analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
the spatial information detection unit processes the 2D prediction frame of the component to be operated according to the spatial detection model to obtain a 3D prediction frame of the component, so that the operation robot performs wire-lapping positioning guidance according to the 3D prediction frame to complete the operation task.
In a third aspect, a computer-readable storage medium is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the visual guidance-based wire lapping method described above.
Compared with the prior art, the invention has the following technical effects: it effectively solves the technical problems of low efficiency and low automation of traditional manual inspection, and of positioning results being easily influenced by the subjective factors of the inspector; it can accurately and autonomously position the components to be operated, improving operation quality and reducing the investment of human resources. The method effectively improves operation efficiency, reduces the investment cost of power distribution network operation, and reduces potential safety hazards.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
FIG. 1 is a flow chart of the visual guidance-based wire lapping method;
FIG. 2 is a schematic diagram of the visual guidance-based wire lapping method;
FIG. 3 is a data processing flow diagram of the visual guidance-based wire lapping method;
FIG. 4 is a schematic diagram of the data analysis strategy;
FIG. 5 is a schematic diagram of the operation of the target detection unit;
FIG. 6 is a schematic diagram of the operation of the spatial information detection unit;
FIG. 7 is a schematic diagram of spatial information calibration and wiring area positioning.
Detailed Description
To further illustrate the features of the present invention, refer to the following detailed description and the accompanying drawings. The drawings are for reference and illustration only and are not intended to limit the scope of the present disclosure.
As shown in fig. 1 to 3, the present embodiment discloses a wire joining method based on visual guidance, which includes the following steps S1 to S4:
S1, acquiring an image to be detected;
S2, analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
S3, processing the 2D prediction frame of the task component to be operated by using the spatial detection model to obtain a 3D prediction frame of the task component to be operated;
and S4, performing wire-lapping positioning guidance according to the 3D prediction frame of the component to be operated to complete the operation task.
It should be noted that, in practical application, image acquisition equipment such as a camera may be used to capture an image of the power distribution network component to be detected, thereby obtaining the image to be detected.
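For illustration only (not part of the claimed method), the following is a minimal Python sketch of the S1-S4 flow. The yolo_detector, m3d_rpn_detector, and robot objects and all of their method names are hypothetical placeholders, not an API disclosed by this patent.

```python
# Hypothetical sketch of the S1-S4 pipeline; the detector wrappers and the
# robot guidance interface are placeholders, not an API from this patent.
import cv2  # OpenCV, assumed available for image capture


def run_wire_lapping_pipeline(camera_index, yolo_detector, m3d_rpn_detector, robot):
    # S1: acquire the image to be detected from the robot-mounted camera
    cap = cv2.VideoCapture(camera_index)
    ok, image = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("failed to acquire image")

    # S2: YOLOv3 analysis -> 2D prediction frame of the task component
    boxes_2d = yolo_detector.detect(image)  # e.g. list of (x1, y1, x2, y2, cls, score)
    if not boxes_2d:
        return None  # no component in view; keep searching

    # S3: spatial detection model -> 3D prediction frame
    box_3d = m3d_rpn_detector.detect_3d(image, boxes_2d[0])

    # S4: wire-lapping positioning guidance from the 3D prediction frame
    robot.guide_to(box_3d)
    return box_3d
```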
As a further preferable technical scheme, in this embodiment, a YOLOv3 network for detecting the components to be detected of the power distribution network is built on the open-source deep learning platform PyTorch, and the backbone network Darknet53 is replaced with CSPDarknet53 to suit the particularity of power distribution network scene images.
As a more preferable embodiment, in step S2: the method comprises the following steps of analyzing an image to be detected by using a YOLOv3 network, and before obtaining a 2D prediction frame of a task component to be operated of the power distribution network, further comprising:
labeling the component images of a local power distribution network operation task component image database with annotation software, and establishing a training set covering the multiple types of power distribution network components to be detected, the image database storing image information of components such as isolating switches (disconnecting links), drop-out fuses, leads at both ends of lightning arresters, and disconnection leads;
and inputting the labeled training sample set into the YOLOv3 network and training it to obtain a trained model for detecting the power distribution network components to be detected, as sketched below.
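A minimal fine-tuning sketch consistent with the above description follows; the dataset object's interface and the convention that the model returns its combined loss in training mode are assumptions made for illustration.

```python
# Hypothetical training sketch for the YOLOv3 model on the labeled power
# distribution network component set; the loss-returning forward pass is an
# assumed convention, not the patent's disclosed training code.
import torch
from torch.utils.data import DataLoader


def train_yolov3(model, dataset, epochs=50, lr=1e-3, device="cuda"):
    model = model.to(device).train()
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, targets in loader:
            images = images.to(device)
            loss = model(images, targets)  # assumed: model returns combined loss in train mode
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```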
As a more preferable embodiment, in step S2: the method comprises the following steps of analyzing an image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of a task component to be operated of the power distribution network, and specifically comprises the following steps:
inputting the image to be detected into the trained YOLOv3 network model, and judging whether an operation task component to be detected of the power distribution network appears in the image;
if so, adjusting the operation task component to the center of the camera field of view (see the sketch below) and saving the image of the centered component as input to the spatial information detection unit of the power distribution network operation robot; if not, judging that no power distribution network component to be operated exists in the current image, and continuing the operation.
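The centering adjustment can be illustrated with the pixel-offset computation below; the proportional gain, the 5-pixel tolerance, and the camera pan/tilt interface are hypothetical.

```python
# Hypothetical sketch: drive the camera so the detected component moves to the
# image center; gain, tolerance and the pan/tilt interface are assumptions.
def center_component(box_2d, image_w, image_h, camera, gain=0.001, tol=5):
    x1, y1, x2, y2 = box_2d
    box_cx, box_cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    dx = box_cx - image_w / 2.0  # positive: component lies right of center
    dy = box_cy - image_h / 2.0  # positive: component lies below center
    camera.pan(gain * dx)        # placeholder pan/tilt commands
    camera.tilt(gain * dy)
    return abs(dx) < tol and abs(dy) < tol  # True once roughly centered
```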
As a further preferred technical solution, the loss function of the YOLOv3 network includes a loss function for determining the coordinate error, an iouError loss function for determining the IOU error, and a classError loss function for determining the classification error, where:
the penalty function used to determine the coordinate error is:
Figure BDA0003112943990000061
wherein λ iscoordThe representation is a coordination coefficient set for coordinating the inconsistency of the contribution of rectangular frames with different sizes to the error function, I represents all grids, j represents all rectangular frames, B represents the maximum number of the rectangular frames, S represents the unilateral number of the maximum grids, I represents a target object predicted by the rectangular frames, obj represents the target object, and x represents the maximum number of the rectangular framesiThe x coordinate of the center of the rectangular box representing the network prediction,
Figure BDA0003112943990000062
representing the central x-coordinate, y, of the rectangular box of the markiThe y coordinate of the center of the rectangular box representing the network prediction,
Figure BDA0003112943990000071
denotes the center y coordinate, w, of the rectangular frame of the markiThe size of the rectangular box width representing the network prediction,
Figure BDA0003112943990000072
indicates the width, h, of the rectangular frame of the markiThe size of the rectangular box height representing the network prediction,
Figure BDA0003112943990000073
indicating the high size of the marked rectangular box.
The iouError loss function used to determine the IOU error is:

$$iouError = \sum_{i=0}^{S^{2}}\sum_{j=0}^{B} I_{ij}^{obj}\left(C_{i}-\hat{C}_{i}\right)^{2}+\lambda_{noobj}\sum_{i=0}^{S^{2}}\sum_{j=0}^{B} I_{ij}^{noobj}\left(C_{i}-\hat{C}_{i}\right)^{2}$$

where $C_{i}$ is the predicted value of the confidence and $\hat{C}_{i}$ is its true value; $I_{ij}^{noobj}$ indicates that the $j$-th rectangular frame of the $i$-th cell contains no object, and $\lambda_{noobj}$ down-weights the confidence error of such frames.
The classError loss function used to determine the classification error is:

$$classError = \sum_{i=0}^{S^{2}} I_{i}^{obj}\sum_{c \in classes}\left(p_{i}(c)-\hat{p}_{i}(c)\right)^{2}$$

where $p_{i}(c)$ is the predicted probability that the prediction frame belongs to class $c$, and $\hat{p}_{i}(c)$ is the true value of class-$c$ membership of the marked frame: it equals 1 if the marked frame belongs to class $c$ and 0 otherwise. A sketch combining these three error terms in code follows.
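The sketch below renders coordError, iouError, and classError in PyTorch, assuming predictions and ground truth are already aligned as tensors shaped [S, S, B, ...]; it is a simplified reading of the formulas above, with assumed example weights λ_coord = 5 and λ_noobj = 0.5, not the network's actual training code.

```python
# Simplified sketch of coordError + iouError + classError; assumes prediction
# and ground-truth tensors are already aligned per grid cell and box.
import torch


def yolo_loss(pred_xywh, gt_xywh, pred_conf, gt_conf, pred_cls, gt_cls,
              obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    # obj_mask: 1 where frame j of cell i is responsible for an object, else 0
    noobj_mask = 1.0 - obj_mask

    # coordError: squared error on centers, square-root error on width/height
    xy_err = ((pred_xywh[..., :2] - gt_xywh[..., :2]) ** 2).sum(-1)
    wh_err = ((pred_xywh[..., 2:].clamp(min=0).sqrt()
               - gt_xywh[..., 2:].sqrt()) ** 2).sum(-1)
    coord_error = lambda_coord * (obj_mask * (xy_err + wh_err)).sum()

    # iouError: confidence error, down-weighted where no object is present
    conf_err = (pred_conf - gt_conf) ** 2
    iou_error = (obj_mask * conf_err).sum() + lambda_noobj * (noobj_mask * conf_err).sum()

    # classError: squared error on the per-class probabilities
    class_error = (obj_mask.unsqueeze(-1) * (pred_cls - gt_cls) ** 2).sum()

    return coord_error + iou_error + class_error
```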
As a further preferred technical solution, the spatial detection model adopts a 3D target detection network model based on the M3D-RPN network. This model establishes a standalone monocular 3D region proposal network (M3D-RPN) using a shared 2D and 3D detection space, and strongly initializes each 3D parameter with prior statistics. Depth-aware convolution supports the 3D parameter estimation, enabling the network to learn higher-order spatial features, and a simple post-processing orientation estimation algorithm uses the 3D projection and the 2D detection to refine the estimate.
As a further preferred technical solution, the specific process of detecting by using the spatial detection model is as follows:
1) Using M3D-RPN, 3D anchor points are constructed to operate in image space, each initialized with prior statistical information of its 3D parameters. Each discrete anchor point therefore has strong prior reasoning ability in 3D, based on the consistency of the camera view and the correlation between 2D scale and 3D depth.
2) Spatially aware features are learned with the new depth-aware convolution layer. To predict the 2D frame and the 3D frame simultaneously, anchor templates need to be defined in the respective dimensional spaces.
To place an anchor point and define a complete 2D and 3D frame, a shared center pixel location (x, y) must be specified, where the parameters of the 2D representation are expressed in pixel coordinates. Given the known projection matrix P, the 3D center position (x, y, z) in the camera coordinate system is projected into the image, encoding the depth information parameters (see the sketch below). Mean statistics are computed over the coordinate information of each point position, so that strong prior information reduces the difficulty of 3D parameter estimation.
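The projection step can be sketched as follows with NumPy, assuming a known 3×4 projection matrix P in the usual pinhole convention:

```python
# Sketch: project a 3D center (x, y, z) in camera coordinates into pixel
# coordinates using a known 3x4 projection matrix P (pinhole convention).
import numpy as np


def project_center(P, center_3d):
    X = np.append(np.asarray(center_3d, dtype=float), 1.0)  # homogeneous [x, y, z, 1]
    u, v, w = P @ X                                          # homogeneous image coordinates
    return u / w, v / w                                      # pixel coordinates
```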
Specifically, for each anchor point, matching statistics are computed over the ground truths whose IOU with the anchor exceeds 0.5. The anchor points thus represent discrete templates in which the 3D priors serve as strong initial guesses, under the assumption of reasonably consistent scene geometry. The anchor generation formula and the pre-computed 3D priors can be visualized;
3) The loss function of the M3D-RPN network is formulated as a multi-task learning problem composed of a classification loss, a 2D frame regression loss function, and a 3D frame regression loss function.
For each generated box, check whether its IOU with a ground truth (GT) box is at least 0.5. The best-matching GT, if any, defines the object's class index τ, 2D box, and 3D box; otherwise the target class index is the background class and bounding box regression is disregarded. A softmax-based multinomial logistic loss function is employed for Lc:
$$L_{c} = -\log\left(\frac{e^{s_{\tau}}}{\sum_{c=1}^{N_{c}} e^{s_{c}}}\right)$$

where $s_{c}$ is the predicted score for class $c$, $\tau$ is the class index of the matched target, and $N_{c}$ is the number of classes.
4) The 3D frames are classified and screened, the target bounding frames are refined by regression, and candidate frames are predicted for each ROI region to obtain the final target region, completing the 3D target detection.
As a more preferable embodiment, in step S3: the method comprises the following steps of processing a 2D prediction frame of a task component to be operated by using a space detection model, and before obtaining a 3D prediction frame of the task component to be operated, further comprising:
performing 3D target detection labeling on the component images of the local power distribution network operation task component image database, and constructing a 3D target detection training set;
constructing a deep-learning M3D-RPN 3D target detection network for power distribution network operation task components;
and inputting the labeled sample set of power distribution network operation task components into the M3D-RPN model for training, to obtain a trained 3D target detection model for power distribution network operation task components.
As a more preferable mode, as shown in fig. 4, in the step S3: after the 2D prediction frame of the task component to be operated is processed by utilizing the space detection model to obtain the 3D prediction frame of the task component to be operated, the method further comprises the following steps:
performing two-dimensional spatial projection of the 3D prediction frame of the component to be operated, i.e., projecting the spatial information of the component onto the plane image to obtain a plane projection;
calculating the IOU (degree of coincidence) between the plane projection and the 2D prediction frame;
judging whether the IOU value is greater than a set threshold;
if so, outputting the 3D prediction frame of the task component to be operated as the spatial coordinate information of the component to be operated;
and if not, correcting the 3D prediction frame of the component to be operated. A sketch of this coincidence check follows.
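This check can be rendered in Python as follows; the eight 3D corner points, the example threshold of 0.5, and the function names are assumptions of the sketch, not values fixed by this embodiment.

```python
# Sketch of the calibration check: IOU between the plane projection of the 3D
# prediction frame and the locally detected 2D prediction frame.
import numpy as np


def box_iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def check_3d_box(P, corners_3d, box_2d, threshold=0.5):
    # Project the eight 3D corners and take their 2D bounding rectangle
    pts = np.hstack([corners_3d, np.ones((8, 1))]) @ P.T  # (8, 3) homogeneous
    px, py = pts[:, 0] / pts[:, 2], pts[:, 1] / pts[:, 2]
    projection = (px.min(), py.min(), px.max(), py.max())
    # Accept the 3D frame if projection and 2D frame coincide sufficiently
    return box_iou(projection, box_2d) > threshold, projection
```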
As a further preferred technical scheme, correcting the 3D prediction frame of the component to be operated specifically includes:
adjusting the camera angle and re-shooting the image of the task component to be detected; running the 3D target detection network and the 2D target detection network to obtain, respectively, the projection of a new 3D prediction frame on the two-dimensional plane and a new 2D prediction frame; and calculating the degree of coincidence between the plane projection of the new 3D prediction frame and the new 2D prediction frame. If the value is greater than the threshold, the new 3D prediction frame is output as the spatial coordinate information of the component to be operated. If it is smaller than the threshold, the IOU difference and degree of coincidence between the 2D plane projection and the 2D prediction frame are calculated; the projection center point of the 3D prediction frame is moved to a 2D coordinate at which the IOU meets the standard, this point is back-projected to obtain a new 3D coordinate, and a compensation calculation yields the final 3D prediction frame information. The wiring area information of the component to be operated is then calculated from the prior information of the component (i.e., its parameter information, such as type, model, and size), and the spatial coordinate information for wire-lapping guidance is output to the robot to complete the operation task.
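One plausible reading of this compensation is a pinhole back-projection: the pixel offset between the projection center and the qualifying 2D center is lifted back to 3D at the estimated depth and used to shift the 3D frame. The focal lengths fx and fy are assumed known from camera calibration; this is a sketch, not the patented formula.

```python
# Sketch of the compensation: lift the 2D center offset back to 3D at depth z
# and shift the 3D prediction frame center accordingly (assumed intrinsics).
def compensate_3d_center(center_3d, proj_center_2d, target_center_2d, fx, fy):
    x, y, z = center_3d
    du = target_center_2d[0] - proj_center_2d[0]  # pixel offset in u
    dv = target_center_2d[1] - proj_center_2d[1]  # pixel offset in v
    # Pinhole back-projection of the pixel offset at the current depth
    return (x + du * z / fx, y + dv * z / fy, z)
```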
As shown in fig. 5 to 7, the present embodiment discloses a visual guidance-based wire lapping system, comprising a camera and an operation robot, wherein the camera is mounted on the operation robot and a data processing module is provided in the operation robot, the data processing module comprising an image acquisition unit, a target detection unit, and a spatial information detection unit, wherein:
the image acquisition unit is used for acquiring the image to be detected captured by the camera;
the target detection unit is used for analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
the spatial information detection unit processes the 2D prediction frame of the component to be operated according to the spatial detection model to obtain a 3D prediction frame of the component, so that the operation robot performs wire-lapping positioning guidance according to the 3D prediction frame to complete the operation task.
As a further preferred technical solution, the system further includes an information calibration unit and a correction unit connected to the spatial information detection unit, wherein:
the information calibration unit is specifically configured to:
performing two-dimensional spatial projection of the 3D prediction frame of the component to be operated, i.e., projecting the spatial information of the component onto the plane image to obtain a plane projection;
carrying out target detection on components to be operated of the power distribution network by using a locally constructed target detection network to obtain a 2D prediction frame;
calculating the IOU (degree of coincidence) between the plane projection and the 2D prediction frame;
judging whether the IOU value is greater than a set threshold;
if so, outputting the 3D prediction frame of the to-be-operated task component as the space coordinate information of the to-be-operated component;
if not, the 3D prediction frame of the component to be operated is corrected by the correction unit.
As a further preferred technical solution, the correcting unit is specifically configured to:
adjusting the position of a camera, and reacquiring the image to be detected;
calculating the projection of a new 3D prediction frame of the component to be operated on a two-dimensional plane according to the newly acquired image to be detected to obtain a new plane projection;
carrying out target detection on components to be operated of the power distribution network by using a locally constructed target detection network to obtain a new 2D prediction frame;
calculating the IOU coincidence degree between the new plane projection and the new 2D prediction frame;
judging whether the IOU degree of coincidence is greater than the set threshold;
If so, outputting the new 3D prediction frame of the component to be operated as the space coordinate information of the component to be operated;
and if not, performing compensation calculation on the new 3D prediction frame to obtain the space coordinate information of the component to be operated.
Through the camera of the power distribution network operation robot, this scheme can collect images of the components to be operated in the distribution network area and perform data processing and analysis to obtain the spatial information of the target components and the wiring area for the robot's autonomous operation. During operation it can efficiently locate the distribution network area and the components to be operated and, based on detection and recognition of image information, provide the spatial information of the robot's autonomous operation target, effectively improving operation efficiency and safety and realizing automated, intelligent power distribution network operation.
The embodiment also discloses a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the visual guidance-based wire lapping method described above.
Those skilled in the art will understand that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalents, and improvements made within the spirit and principles of the present invention are intended to be included within its scope.

Claims (9)

1. A visual guidance-based wire lapping method, comprising:
acquiring an image to be detected;
analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
processing the 2D prediction frame of the task component to be operated by using the spatial detection model to obtain a 3D prediction frame of the task component to be operated;
and performing wire-lapping positioning guidance according to the 3D prediction frame of the component to be operated to complete the operation task.
2. The visual guidance-based wire lapping method of claim 1, wherein the loss functions of the YOLOv3 network comprise a loss function for determining coordinate errors, an iouError loss function for determining IOU errors, and a classError loss function for determining classification errors.
3. The visual guidance-based wire lapping method of claim 1, wherein the spatial detection model employs a 3D target detection network model based on the M3D-RPN network.
4. The visual guidance-based wire lapping method of claim 3, wherein the loss functions of the M3D-RPN network comprise a classification loss function, a 2D box regression loss function, and a 3D box regression loss function.
5. The visual guidance-based wire lapping method according to claim 1, wherein performing wire-lapping positioning guidance according to the 3D prediction frame of the component to be operated to complete the operation task comprises:
calculating the wiring area information of the component to be operated based on the 3D prediction frame and prior information of the component, and performing wire-lapping positioning guidance based on the wiring area information to complete the operation task.
6. The visual guidance-based wire lapping method according to any one of claims 1-5, wherein after the 2D prediction frame of the task component to be worked is processed by using the spatial detection model to obtain the 3D prediction frame of the task component to be worked, the method further comprises:
performing two-dimensional spatial projection of the 3D prediction frame of the component to be operated, i.e., projecting the spatial information of the component onto the plane image to obtain a plane projection;
calculating the IOU (degree of coincidence) between the plane projection and the 2D prediction frame;
judging whether the IOU value is greater than a set threshold;
if so, outputting the 3D prediction frame of the to-be-operated task component as the space coordinate information of the to-be-operated component;
and if not, correcting the 3D prediction frame of the component to be operated.
7. The visual guidance-based wire lapping method according to claim 6, wherein the correcting the 3D prediction frame of the component to be worked comprises:
re-acquiring the image to be detected;
calculating the projection of a new 3D prediction frame of the component to be operated on a two-dimensional plane according to the newly acquired image to be detected to obtain a new plane projection;
carrying out target detection on components to be operated of the power distribution network by using a locally constructed target detection network to obtain a new 2D prediction frame;
calculating the IOU coincidence degree between the new plane projection and the new 2D prediction frame;
judging whether the IOU degree of coincidence is greater than the set threshold;
If so, outputting the new 3D prediction frame of the component to be operated as the space coordinate information of the component to be operated;
and if not, performing compensation calculation on the new 3D prediction frame to obtain the space coordinate information of the component to be operated.
8. A visual guidance-based wire lapping system, comprising a camera and an operation robot, wherein the camera is mounted on the operation robot and a data processing module is provided in the operation robot, the data processing module comprising an image acquisition unit, a target detection unit, and a spatial information detection unit, wherein:
the image acquisition unit is used for acquiring the image to be detected captured by the camera;
the target detection unit is used for analyzing the image to be detected by using a YOLOv3 network to obtain a 2D prediction frame of the task component to be operated of the power distribution network;
the spatial information detection unit processes the 2D prediction frame of the component to be operated according to the spatial detection model to obtain a 3D prediction frame of the component, so that the operation robot performs wire-lapping positioning guidance according to the 3D prediction frame to complete the operation task.
9. A computer-readable storage medium, on which a computer program is stored, the computer program being executable by a processor to implement the visual guidance-based wire lapping method according to any of claims 1-8.
CN202110656321.7A 2021-06-11 2021-06-11 Visual guidance-based wire lapping method and system and storage medium Pending CN113327240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110656321.7A CN113327240A (en) 2021-06-11 2021-06-11 Visual guidance-based wire lapping method and system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110656321.7A CN113327240A (en) 2021-06-11 2021-06-11 Visual guidance-based wire lapping method and system and storage medium

Publications (1)

Publication Number Publication Date
CN113327240A true CN113327240A (en) 2021-08-31

Family

ID=77420624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110656321.7A Pending CN113327240A (en) 2021-06-11 2021-06-11 Visual guidance-based wire lapping method and system and storage medium

Country Status (1)

Country Link
CN (1) CN113327240A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677586A (en) * 2022-03-15 2022-06-28 南京邮电大学 Automatic identification method for physical circuit experiment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145756A (en) * 2018-07-24 2019-01-04 湖南万为智能机器人技术有限公司 Object detection method based on machine vision and deep learning
CN111178206A (en) * 2019-12-20 2020-05-19 山东大学 Building embedded part detection method and system based on improved YOLO
CN111929314A (en) * 2020-08-26 2020-11-13 湖北汽车工业学院 Wheel hub weld visual detection method and detection system
CN112149514A (en) * 2020-08-28 2020-12-29 中国地质大学(武汉) Method and system for detecting safety dressing of construction worker
CN112561885A (en) * 2020-12-17 2021-03-26 中国矿业大学 YOLOv 4-tiny-based gate valve opening detection method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145756A (en) * 2018-07-24 2019-01-04 湖南万为智能机器人技术有限公司 Object detection method based on machine vision and deep learning
CN111178206A (en) * 2019-12-20 2020-05-19 山东大学 Building embedded part detection method and system based on improved YOLO
CN111929314A (en) * 2020-08-26 2020-11-13 湖北汽车工业学院 Wheel hub weld visual detection method and detection system
CN112149514A (en) * 2020-08-28 2020-12-29 中国地质大学(武汉) Method and system for detecting safety dressing of construction worker
CN112561885A (en) * 2020-12-17 2021-03-26 中国矿业大学 YOLOv 4-tiny-based gate valve opening detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GARRICK BRAZIL ET AL.: "M3D-RPN: Monocular 3D Region Proposal Network for Object Detection", 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9286-9295 *
FANG ZHIJUN, GAO YONGBIN, WU CHENMOU: "TensorFlow Application Case Tutorial", China Railway Publishing House Co., Ltd., 30 September 2020, pages 106-111 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677586A (en) * 2022-03-15 2022-06-28 南京邮电大学 Automatic identification method for physical circuit experiment
CN114677586B (en) * 2022-03-15 2024-04-05 南京邮电大学 Automatic identification method for physical circuit experiment

Similar Documents

Publication Publication Date Title
CN107742093B (en) Real-time detection method, server and system for infrared image power equipment components
CN111275759B (en) Transformer substation disconnecting link temperature detection method based on unmanned aerial vehicle double-light image fusion
WO2021092397A1 (en) System and method for vegetation modeling using satellite imagery and/or aerial imagery
CN110411339B (en) Underwater target size measuring equipment and method based on parallel laser beams
US20160321827A1 (en) Method for Determining Dimensions in an Indoor Scene from a Single Depth Image
CN107767374A (en) A kind of GIS disc insulators inner conductor hot-spot intelligent diagnosing method
CN115331002A (en) Method for realizing remote processing of heating power station fault based on AR glasses
CN116563386A (en) Binocular vision-based substation worker near-electricity distance detection method
CN111158358A (en) Method and system for self-optimization routing inspection of transformer/converter station based on three-dimensional model
CN112950504A (en) Power transmission line inspection haze weather monocular hidden danger object distance measurement method and system
CN113327240A (en) Visual guidance-based wire lapping method and system and storage medium
CN118115945B (en) Binocular vision ranging-based substation construction safety management and control system and method
CN115965578A (en) Binocular stereo matching detection method and device based on channel attention mechanism
CN113723389B (en) Pillar insulator positioning method and device
CN113569801B (en) Distribution construction site live equipment and live area identification method and device thereof
CN113536842B (en) Electric power operation personnel safety dressing identification method and device
KR102366396B1 (en) RGB-D Data and Deep Learning Based 3D Instance Segmentation Method and System
JP5872401B2 (en) Region dividing device
CN113191336B (en) Electric power hidden danger identification method and system based on image identification
CN113807244B (en) Cabinet layout drawing method based on deep learning
CN113674349B (en) Steel structure identification and positioning method based on depth image secondary segmentation
CN114677667A (en) Transformer substation electrical equipment infrared fault identification method based on deep learning
CN115690573A (en) Base station acceptance method, device, equipment and storage medium
CN117823741B (en) Pipe network non-excavation repairing method and system combined with intelligent robot
KR102677976B1 (en) Method and system for detecting opening and creating virtual fence preventing fall at construction site in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination