CN111242080B - Power transmission line identification and positioning method based on binocular camera and depth camera - Google Patents

Power transmission line identification and positioning method based on binocular camera and depth camera

Info

Publication number
CN111242080B
CN111242080B (application number CN202010068432.1A)
Authority
CN
China
Prior art keywords
image
binocular
cable
camera
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010068432.1A
Other languages
Chinese (zh)
Other versions
CN111242080A (en)
Inventor
李开宇
胡广亮
田裕鹏
朱鹏飞
施金辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202010068432.1A priority Critical patent/CN111242080B/en
Publication of CN111242080A publication Critical patent/CN111242080A/en
Application granted granted Critical
Publication of CN111242080B publication Critical patent/CN111242080B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • G06V20/653Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The application discloses a power transmission line identification and positioning method based on a binocular camera and a depth camera, belonging to the field of machine vision.

Description

Power transmission line identification and positioning method based on binocular camera and depth camera
Technical Field
The application belongs to the field of machine vision, and relates to a power transmission line identification and positioning method based on a binocular camera and a depth camera.
Background
At present, the maintenance of high-voltage transmission lines in China mainly relies on manual work: operators wearing special protective equipment carry out the operation while exposed to high voltage, strong electromagnetic fields and other dangerous conditions, so the labor intensity is high, the danger is great, and accidents occur easily. With the continuous improvement of automation, live-working robots for distribution networks offer obvious advantages. However, most working robots currently used on distribution lines are semi-automatic and require a worker to operate them remotely, and such teleoperation can hardly complete complicated and fine tasks such as disconnecting drainage lines or replacing electrical equipment. The main difficulty for a fully automatic live-working robot for distribution networks lies in the identification and positioning of the transmission line, that is, in recognizing and accurately positioning the cable against a complex background.
Disclosure of Invention
In view of the deficiencies of the prior art, the application aims to provide a power transmission line identification and positioning method based on a binocular camera and a depth camera, so as to solve the prior-art problems of low cable-positioning accuracy and the difficulty of identifying a target against a complex background.
To achieve the above purpose, the application is realized by the following technical solutions:
a power line identification and positioning method based on a binocular camera and a depth camera, the method comprising the steps of:
acquiring a color image and a depth image of a cable in a depth camera;
acquiring a binocular image of the cable in the binocular camera;
performing depth filtration on the depth image to obtain a filtered depth image;
mapping the filtered depth image onto a color image to obtain a mapped color image;
mapping the mapping color image onto a binocular image to obtain a filtered binocular image;
processing the filtered binocular image to obtain pixel coordinates of the cable edge;
obtaining corresponding homonymous points of the cable in the binocular left and right images according to the pixel coordinates and the epipolar constraint;
and calculating according to the homonymy points to obtain the three-dimensional coordinates of the cable.
Further, the mapped color image is mapped onto the binocular images by a registration model.
Further, the method for establishing the registration model comprises the following steps:
acquiring the coordinates of points in space, at different depths, in the depth-camera color image and in the binocular left and right images;
and putting the coordinates into a BP neural network for training to obtain the registration model.
Further, the three-dimensional coordinate calculation process is as follows:
acquiring corresponding points on two edges of the cable;
taking the midpoints of all the corresponding points as an input point set;
and calculating according to the input point set to obtain the three-dimensional coordinates of the cable.
Further, the processing of the filtered binocular images comprises image preprocessing, color segmentation, morphological filtering, edge extraction and broken-line region connection.
Further, the process of connecting the broken-line regions is as follows:
judging whether two cables in the filtered binocular images are the same cable;
if so, connecting the two cables into the same cable by a straight line segment.
Further, the judging process is as follows:
acquiring the pixel coordinates of the head and tail points of the two cable edges respectively;
calculating the distances between the head and tail points of the two cables according to the pixel coordinates;
taking the two edge endpoints with the minimum distance as a first target point and a second target point;
calculating the absolute values of the slopes of the cable at the first target point and at the second target point;
calculating the absolute value of the slope of the line connecting the first target point and the second target point;
calculating the differences between the slope absolute values at the first target point, at the second target point and of the connecting line;
if the differences are within a threshold, the two cables belong to the same cable.
Compared with the prior art, the application has the following beneficial effects:
firstly, the method can realize the identification of the target in the complex background, and aiming at the complex background, for example, when the cable is in the shade background, the color of the shade is very close to the color of the cable, the cable is difficult to identify, the background pixels outside the target area are filtered by using the depth camera, and then the depth map is mapped onto the binocular left and right map through the two mapping relations, so that the binocular left and right map with the background filtered is obtained, the identification process is greatly simplified, and the identification success rate and the accuracy rate are improved; secondly, the depth distance accuracy obtained by the depth camera is limited, and the target is positioned by adopting the binocular principle, so that the method has high accuracy and low cost.
Drawings
FIG. 1 is a schematic diagram of a system in an embodiment of the application;
fig. 2 is a schematic diagram of a broken wire connection in an embodiment of the present application.
Reference numerals: 1 - binocular right camera; 2 - binocular left camera; 3 - Kinect depth camera; 4 - color camera; 5 - infrared camera; 6 - infrared projector.
Detailed Description
The application is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present application, and should not be construed as limiting the scope of the present application.
A power transmission line identification and positioning method based on a binocular camera and a depth camera, the method comprising the following steps:
acquiring a color image and a depth image of a cable with the depth camera;
acquiring binocular images of the cable with the binocular camera;
performing depth filtering on the depth image to obtain a filtered depth image;
mapping the filtered depth image onto the color image to obtain a mapped color image;
mapping the mapped color image onto the binocular images to obtain filtered binocular images;
processing the filtered binocular images to obtain the pixel coordinates of the cable edges;
obtaining the corresponding homonymous points of the cable in the binocular left and right images according to the pixel coordinates and the epipolar constraint;
and calculating the three-dimensional coordinates of the cable from the homonymous points.
The processing of the filtered binocular images comprises image preprocessing, color segmentation, morphological filtering, edge extraction and broken-line region connection.
The process of connecting the broken-line regions is as follows:
judging whether two cables in the filtered binocular images are the same cable;
if so, connecting the two cables into the same cable by a straight line segment.
The judging process is as follows:
acquiring the pixel coordinates of the head and tail points of the two cable edges respectively;
calculating, from the pixel coordinates, the distances between the head and tail points of the two cables; there are four such distances in total: from the head of the first cable to the head of the second, from the head of the first cable to the tail of the second, from the tail of the first cable to the head of the second, and from the tail of the first cable to the tail of the second;
taking the two edge endpoints with the minimum distance as a first target point and a second target point;
calculating the absolute values of the slopes of the cable at the first target point and at the second target point;
calculating the absolute value of the slope of the line connecting the first target point and the second target point;
calculating the differences between the slope absolute values at the first target point, at the second target point and of the connecting line;
if the differences are within a threshold, the two cables belong to the same cable.
The method specifically comprises the following steps:
(1) The resolution and field of view of the Kinect depth camera differ from those of the binocular camera. The binocular camera and the Kinect depth camera are fixed so that their relative position cannot move; the system is shown schematically in FIG. 1;
(2) The left and right images of the binocular camera and the color and depth images of the depth camera are acquired, several groups of images are collected at different depths, the color image of the depth camera is registered with the left and right images of the binocular camera to obtain a registration model, and the binocular camera is calibrated to obtain its intrinsic and extrinsic parameters;
(3) After this preparation, the power transmission line can be identified and positioned. Taking the processing of one frame as an example, the Kinect depth image and color image and the binocular images are first acquired; the depth image is depth-filtered so that only the depth range of the target to be extracted (for example, 0.5 m to 1.5 m) is retained, giving a filtered depth image (a minimal sketch of this filtering step is given after this list);
(4) The Kinect depth image is mapped onto the Kinect color image, and the Kinect color image is then mapped onto the binocular left and right images using the registration model obtained in step (2), giving depth-filtered binocular left and right images;
(5) Image preprocessing, color segmentation, edge extraction and the like are performed on the depth-filtered binocular left and right images to obtain the pixel coordinates of the line contours in the left and right images;
(6) The corresponding homonymous points in the left and right images are obtained using the epipolar constraint;
(7) After the homonymous points are found, three-dimensional reconstruction is carried out to obtain the three-dimensional coordinates of the cable.
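As a concrete illustration of the depth filtering in step (3), the following minimal sketch keeps only the pixels whose depth lies inside the target band. It assumes the depth map is delivered in millimeters (as Kinect SDKs typically provide) and uses the 0.5 m to 1.5 m band of the example above; it is an illustrative sketch rather than the implementation of the patent.

```python
import numpy as np

def depth_band_filter(depth_mm: np.ndarray, near_m: float = 0.5, far_m: float = 1.5) -> np.ndarray:
    """Zero out every depth pixel outside [near_m, far_m]; depth_mm is assumed to be in millimeters."""
    depth_m = depth_mm.astype(np.float32) / 1000.0
    keep = (depth_m >= near_m) & (depth_m <= far_m)
    return np.where(keep, depth_mm, 0)   # filtered depth image: background pixels become 0

# Example: a synthetic 4x4 depth map in millimeters.
demo = np.array([[400, 800, 1200, 1600]] * 4, dtype=np.uint16)
print(depth_band_filter(demo))           # only the 800 mm and 1200 mm columns survive
```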
Registration of the depth camera and the binocular camera: because the Kinect depth camera and the binocular camera differ in resolution and field of view, the two must be registered. Take a point P in space whose pixel coordinates are (X_K, Y_K) in the Kinect depth camera, (X_L, Y_L) in the binocular left camera and (X_R, Y_R) in the binocular right camera. The essence of registration is to find the mapping between the pixel coordinates of a point in the depth camera and those of the corresponding point in the binocular camera. There are two mappings, one from the Kinect depth camera to the binocular left camera and one from the Kinect depth camera to the binocular right camera. Because these mappings are not consistent at different depths and different angles, depth information is added so that the mappings to be determined, (X_K, Y_K, Z_K) to (X_L, Y_L) and (X_K, Y_K, Z_K) to (X_R, Y_R), are uniquely defined. The registration procedure is as follows. A 12 x 9 calibration board is used; by photographing the board with the binocular camera and the Kinect camera, the 108 grid points of the board can easily be located, and with the depth information of the depth camera added, 108 groups of data are obtained from each image. Keeping the depth basically unchanged, the board is moved and several images are collected so that the board is distributed uniformly over the binocular field of view; the operation is then repeated at different depths to obtain a large amount of data. The data are put into a BP neural network for training, with the (X_K, Y_K, Z_K) data as input and with (X_L, Y_L) and (X_R, Y_R) respectively as output, which yields the two mappings. In the experiment, about 20,000 samples were collected at different depths and angles, of which 18,000 were used as the training set and the rest as the test set. The network consists of an input layer, a hidden layer and an output layer; the hidden layer contains 7 neurons, the maximum number of training iterations is 3000, the learning rate is 0.01, and the required training precision is 0.0004.
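A registration model of this kind can be reproduced, under stated assumptions, with any back-propagation network implementation. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the BP network, with the architecture and training parameters quoted above (one hidden layer of 7 neurons, learning rate 0.01, at most 3000 iterations, a stopping tolerance mirroring the 0.0004 precision target); the randomly generated arrays are placeholders for the real calibration-board samples.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholders for the roughly 20,000 collected samples:
# inputs are Kinect pixel coordinates plus depth (X_K, Y_K, Z_K),
# targets are the matching binocular-left pixel coordinates (X_L, Y_L).
rng = np.random.default_rng(0)
XK = rng.random((20000, 3))
XL = rng.random((20000, 2))

train_X, test_X = XK[:18000], XK[18000:]
train_y, test_y = XL[:18000], XL[18000:]

left_map = MLPRegressor(hidden_layer_sizes=(7,),   # single hidden layer, 7 neurons
                        learning_rate_init=0.01,
                        max_iter=3000,
                        tol=4e-4,
                        random_state=0)
left_map.fit(train_X, train_y)
print("test MSE:", np.mean((left_map.predict(test_X) - test_y) ** 2))

# A second regressor trained with (X_R, Y_R) as targets gives the
# Kinect-to-binocular-right mapping in the same way.
```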
Filtering the background outside the target depth from the binocular images: taking the left image as an example, a mask image with the same resolution and type as the binocular color image is first generated and all of its pixel values are set to 0. The points that satisfy the depth-range requirement are mapped onto the mask image with the registration model obtained above, and the pixel values of the mapped points are set to 255. A morphological dilation is applied to the mask image, and the result is denoted image A; the left image of the binocular camera is denoted image B. The processed mask image is ANDed with the left image of the binocular camera to give image C, i.e. C = A & B, and the inverse of image A is then added to image C to obtain the final result image D, i.e. D = C + (~A). Image D is the binocular color image with the background filtered out.
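A minimal OpenCV sketch of this mask-and-combine step for the left image is given below. The 5 x 5 dilation kernel and the single-channel mask are illustrative assumptions; images A, B, C and D correspond to the notation of the paragraph above.

```python
import cv2
import numpy as np

def filter_left_background(left_bgr: np.ndarray, mapped_xy: np.ndarray) -> np.ndarray:
    """mapped_xy: N x 2 integer (x, y) pixel coordinates in the left image that
    passed the depth filter and were projected through the registration model."""
    mask = np.zeros(left_bgr.shape[:2], dtype=np.uint8)       # all pixel values set to 0
    mask[mapped_xy[:, 1], mapped_xy[:, 0]] = 255              # mapped points set to 255
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    A = cv2.dilate(mask, kernel)                              # morphological dilation -> image A
    C = cv2.bitwise_and(left_bgr, left_bgr, mask=A)           # C = A & B, B being the left image
    D = cv2.add(C, cv2.cvtColor(cv2.bitwise_not(A), cv2.COLOR_GRAY2BGR))  # D = C + (~A)
    return D                                                  # background turned white, target kept
```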
Connection of broken cable regions: owing to illumination and similar influences, the cable in the image may appear broken; when this happens, the broken region is connected with a line segment. If two cables are detected in the current field of view, it is judged whether they are the same cable, and if so they are connected. The judgment conditions are as follows. Let the two end points of one cable edge be P and Q, and the two end points of the other cable edge be M and N. The distances PM, PN, QM and QN are calculated, and the two points with the smallest distance are taken as the target points. The slopes of the cable at these two points, K_Q and K_M, are calculated, as well as the slope K_QM of the line connecting them. If the differences between the three slopes are within a threshold, i.e. the three slopes are approximately equal, the two cables can be considered to belong to the same cable, and Q and M are connected by a straight line segment.
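The following sketch expresses this endpoint-distance and slope test in code. The slope threshold value and the use of the immediate neighbor point to estimate the local slope are assumptions made for illustration; the description above only requires the three slope differences to stay within a threshold.

```python
import math

def belong_to_same_cable(edge1, edge2, slope_tol=0.2):
    """edge1, edge2: ordered lists of (x, y) pixel points along the two detected cable edges."""
    P, Q = edge1[0], edge1[-1]        # head and tail of the first edge
    M, N = edge2[0], edge2[-1]        # head and tail of the second edge

    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])

    def abs_slope(a, b):
        dx = a[0] - b[0]
        return abs((a[1] - b[1]) / dx) if dx else float("inf")

    # Among PM, PN, QM, QN keep the closest endpoint pair as the candidate gap.
    t1, t2 = min([(P, M), (P, N), (Q, M), (Q, N)], key=lambda pair: dist(*pair))

    # Local slope of each cable at its target point, and the slope of the bridging segment.
    k1 = abs_slope(t1, edge1[1] if t1 == P else edge1[-2])
    k2 = abs_slope(t2, edge2[1] if t2 == M else edge2[-2])
    k_bridge = abs_slope(t1, t2)

    # Same cable if the three slopes are approximately equal.
    return abs(k1 - k_bridge) <= slope_tol and abs(k2 - k_bridge) <= slope_tol
```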
When calculating the three-dimensional coordinates, the selected input points are not the edge points of the cable but the midpoints of its two edges. Starting from the first point of one edge, the corresponding point on the opposite edge is found and the midpoint of the two points is taken as an input point; traversing the whole line in this way yields a set of center points, from which the spatial coordinates of the cable are calculated.
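Assuming that the stereo calibration of step (2) provides the 3 x 4 projection matrices of the left and right cameras, and that corresponding edge points are already paired index by index (standing in for the epipolar matching described above), the center-line points can be triangulated as sketched below with OpenCV.

```python
import numpy as np
import cv2

def cable_centerline_3d(left_edge_a, left_edge_b, right_edge_a, right_edge_b,
                        P_left, P_right):
    """left_edge_a/b, right_edge_a/b: N x 2 arrays of the two cable edges in each image;
    P_left, P_right: 3 x 4 projection matrices from the binocular calibration."""
    # Midpoints of the corresponding edge points form the input point set.
    mid_left = (np.asarray(left_edge_a, np.float64) + np.asarray(left_edge_b, np.float64)) / 2.0
    mid_right = (np.asarray(right_edge_a, np.float64) + np.asarray(right_edge_b, np.float64)) / 2.0

    # cv2.triangulatePoints takes 2 x N arrays and returns homogeneous 4 x N coordinates.
    pts4d = cv2.triangulatePoints(P_left, P_right, mid_left.T, mid_right.T)
    return (pts4d[:3] / pts4d[3]).T      # divide by the homogeneous coordinate -> N x 3 points
```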
The system consists of a binocular camera and a Kinect depth camera 3. The binocular camera comprises a binocular right camera 1 and a binocular left camera 2; the Kinect depth camera 3 comprises a color camera 4, an infrared camera 5 and an infrared projector 6. The binocular camera collects color images, while the Kinect depth camera collects color images and depth images. The binocular images are registered with the Kinect images, and the depth information of the depth camera is used to filter out the image information in the binocular images that lies outside the designated depth region, thereby removing the complex background from the binocular images; the processed binocular images are then processed, segmented and matched to obtain the three-dimensional coordinates and attitude information of the power transmission line.
The foregoing has outlined the principles and embodiments of the present application so that the method and its core idea may be better understood; the present application should not be construed as being limited to the details of the illustrated embodiments and applications.

Claims (4)

1. A power transmission line identification and positioning method based on a binocular camera and a depth camera, characterized by comprising the following steps:
acquiring a color image and a depth image of a cable with the depth camera;
acquiring binocular images of the cable with the binocular camera;
performing depth filtering on the depth image to obtain a filtered depth image;
mapping the filtered depth image onto the color image to obtain a mapped color image;
mapping the mapped color image onto the binocular images to obtain filtered binocular images;
processing the filtered binocular images to obtain the pixel coordinates of the cable edges;
obtaining the corresponding homonymous points of the cable in the binocular left and right images according to the pixel coordinates and the epipolar constraint;
calculating the three-dimensional coordinates of the cable from the homonymous points;
the processing of the filtered binocular images comprises image preprocessing, color segmentation, morphological filtering, edge extraction and broken-line region connection;
the process of connecting the broken-line regions is as follows:
judging whether two cables in the filtered binocular images are the same cable;
if so, connecting the two cables into the same cable by a straight line segment;
the judging process is as follows:
acquiring the pixel coordinates of the head and tail points of the two cable edges respectively;
calculating the distances between the head and tail points of the two cables according to the pixel coordinates;
taking the two edge endpoints with the minimum distance as a first target point and a second target point;
calculating the absolute values of the slopes of the cable at the first target point and at the second target point;
calculating the absolute value of the slope of the line connecting the first target point and the second target point;
calculating the differences between the slope absolute values at the first target point, at the second target point and of the connecting line;
if the differences are within a threshold, the two cables belong to the same cable.
2. The power transmission line identification and positioning method based on a binocular camera and a depth camera according to claim 1, wherein the mapped color image is mapped onto the binocular images by a registration model.
3. The power transmission line identification and positioning method based on a binocular camera and a depth camera according to claim 2, wherein the method for establishing the registration model comprises the following steps:
acquiring the coordinates of points in space, at different depths, in the depth-camera color image and in the binocular left and right images;
and putting the coordinates into a BP neural network for training to obtain the registration model.
4. The method for identifying and positioning a power transmission line based on a binocular camera and a depth camera according to claim 1, wherein the three-dimensional coordinates are calculated as follows:
acquiring corresponding points on two edges of the cable;
taking the midpoints of all the corresponding points as an input point set;
and calculating according to the input point set to obtain the three-dimensional coordinates of the cable.
CN202010068432.1A 2020-01-21 2020-01-21 Power transmission line identification and positioning method based on binocular camera and depth camera Active CN111242080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010068432.1A CN111242080B (en) 2020-01-21 2020-01-21 Power transmission line identification and positioning method based on binocular camera and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010068432.1A CN111242080B (en) 2020-01-21 2020-01-21 Power transmission line identification and positioning method based on binocular camera and depth camera

Publications (2)

Publication Number Publication Date
CN111242080A CN111242080A (en) 2020-06-05
CN111242080B (en) 2023-09-12

Family

ID=70869947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010068432.1A Active CN111242080B (en) 2020-01-21 2020-01-21 Power transmission line identification and positioning method based on binocular camera and depth camera

Country Status (1)

Country Link
CN (1) CN111242080B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112422818B (en) * 2020-10-30 2022-01-07 上海大学 Intelligent screen dropping remote detection method based on multivariate image fusion
CN112595265B (en) * 2020-12-07 2022-03-29 新拓三维技术(深圳)有限公司 Method and equipment for measuring bending radius of cable
CN112907559B (en) * 2021-03-16 2022-06-07 湖北工程学院 Depth map generation device based on monocular camera
CN113065521B (en) * 2021-04-26 2024-01-26 北京航空航天大学杭州创新研究院 Object identification method, device, equipment and medium
CN113534176A (en) * 2021-06-22 2021-10-22 武汉工程大学 Light field high-precision three-dimensional distance measurement method based on graph regularization
CN113192131A (en) * 2021-06-24 2021-07-30 南京航空航天大学 Cable position determining method and system based on adaptive vector transformation
CN113674349B (en) * 2021-06-30 2023-08-04 南京工业大学 Steel structure identification and positioning method based on depth image secondary segmentation
CN114359758B (en) * 2022-03-18 2022-06-14 广东电网有限责任公司东莞供电局 Power transmission line detection method and device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108234984A (en) * 2018-03-15 2018-06-29 百度在线网络技术(北京)有限公司 Binocular depth camera system and depth image generation method
CN109472776A (en) * 2018-10-16 2019-03-15 河海大学常州校区 A kind of isolator detecting and self-destruction recognition methods based on depth conspicuousness
CN109615652A (en) * 2018-10-23 2019-04-12 西安交通大学 A kind of depth information acquisition method and device
CN110390719A (en) * 2019-05-07 2019-10-29 香港光云科技有限公司 Based on flight time point cloud reconstructing apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋子豪 (Song Zihao) et al. Research on target ranging and recognition based on automotive binocular stereo vision. Journal of Wuhan University of Technology (武汉理工大学学报), 2019, full text. *

Also Published As

Publication number Publication date
CN111242080A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111242080B (en) Power transmission line identification and positioning method based on binocular camera and depth camera
CN111089569B (en) Large box body measuring method based on monocular vision
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN110850723B (en) Fault diagnosis and positioning method based on transformer substation inspection robot system
CN113052903B (en) Vision and radar fusion positioning method for mobile robot
CN109523528B (en) Power transmission line extraction method based on unmanned aerial vehicle binocular vision SGC algorithm
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
CN110136211A (en) A kind of workpiece localization method and system based on active binocular vision technology
CN111996883B (en) Method for detecting width of road surface
CN112949478A (en) Target detection method based on holder camera
CN110202560A (en) A kind of hand and eye calibrating method based on single feature point
CN112288815B (en) Target die position measurement method, system, storage medium and device
CN110617772A (en) Non-contact type line diameter measuring device and method
CN111640156A (en) Three-dimensional reconstruction method, equipment and storage equipment for outdoor weak texture target
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN114473309A (en) Welding position identification method for automatic welding system and automatic welding system
CN113516716A (en) Monocular vision pose measuring and adjusting method and system
CN109727282A (en) A kind of Scale invariant depth map mapping method of 3-D image
Ann et al. Study on 3D scene reconstruction in robot navigation using stereo vision
Orteu et al. Camera calibration for 3D reconstruction: application to the measurement of 3D deformations on sheet metal parts
CN104614372B (en) Detection method of solar silicon wafer
CN112102473A (en) Operation scene modeling method and system for distribution network live working robot
CN108180871A (en) A kind of method of quantitative assessment surface of composite insulator dusting roughness
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN109360269B (en) Ground three-dimensional plane reconstruction method based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant