CN117152647A - Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion - Google Patents


Info

Publication number
CN117152647A
CN117152647A (application number CN202311439313.2A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
images
tower
view
Prior art date
Legal status
Granted
Application number
CN202311439313.2A
Other languages
Chinese (zh)
Other versions
CN117152647B (en
Inventor
熊道洋
胡浩瀚
郭正雄
李斌
秦娜
魏伟
张溦
黄凯
王迎亮
宋森燏
纪姗姗
李雪松
Current Assignee
Tianjin Richsoft Electric Power Information Technology Co ltd
State Grid Information and Telecommunication Co Ltd
Original Assignee
Tianjin Richsoft Electric Power Information Technology Co ltd
State Grid Information and Telecommunication Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Richsoft Electric Power Information Technology Co ltd and State Grid Information and Telecommunication Co Ltd
Priority to CN202311439313.2A priority Critical patent/CN117152647B/en
Publication of CN117152647A publication Critical patent/CN117152647A/en
Application granted granted Critical
Publication of CN117152647B publication Critical patent/CN117152647B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion, comprising two methods that run synchronously: 1. a distribution network key component identification and counting method based on multi-view fusion, which preprocesses multi-view images of the same tower acquired by an unmanned aerial vehicle, extracts feature maps of all views using the adaptive feature extraction capability of deep learning, associates the views in one total feature map based on a projection principle, performs detection on the total feature map, and corrects identification errors caused by components occluding one another on multi-circuit towers; 2. a completion acceptance method for distribution network key components based on multi-view fusion, which combines the application scenarios of tension insulators and line insulators in the distribution network and selects the unmanned aerial vehicle's path to the next tower according to the detected insulator type, facilitating real-time data acquisition and feedback of acceptance results.

Description

Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion
Technical Field
The invention relates to the technical field of electric power inspection, and in particular to an unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion.
Background
With the continuing acceleration of power grid infrastructure construction in China, the scale of the distribution network has grown sharply, placing higher demands on the safety and reliability of grid power equipment. Quality acceptance of distribution network equipment, an important link in engineering construction, is a key precondition for guaranteeing normal electricity use by consumers. Formulating a standardized quality acceptance workflow, innovating the technical means of engineering acceptance, and raising the management level of engineering quality acceptance are therefore of great significance to the safe and stable operation of the grid. Traditional completion quality acceptance of distribution network equipment relies on manual inspection, which involves a heavy workload, low efficiency, and high investment cost; working at height over complex terrain also carries safety risks that seriously threaten the lives of operators. At the same time, completion quality inspection of the distribution network covers many items, and manual inspection results depend to a large extent on the professional competence of personnel and vary in quality, which can affect the safety and stability of subsequent distribution.
In recent years, multi-rotor unmanned aerial vehicles have achieved breakthrough development in safety and operability, offering advantages such as safety, high efficiency, flexible control, and strong maneuverability. Carrying high-resolution sensors and controlled by a pilot, they can rapidly perform safety inspection, state detection, obstacle clearance, and related work on key distribution network components such as towers and insulators, relieving manual workload to some extent; they are now widely applied in fields such as power inspection, corridor clearing, state detection, and power surveying.
However, the following problems remain in unmanned aerial vehicle distribution network acceptance: (1) acceptance still requires control by an experienced pilot, and fully automatic inspection technology is not yet mature; (2) counting the key components involved in acceptance is still mainly done manually, which is tedious and time-consuming given the many newly built distribution network circuits; (3) when photographing a complex multi-circuit tower from a single view angle, occlusion is common, so the tower must be photographed from multiple view angles to observe its full structure, but the resulting double-counting of components across views keeps the accuracy of manual counting low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion.
The unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion comprises a counting method for key components of a tower, which specifically comprises the following steps:
s1, an unmanned aerial vehicle identifies the tower top of a tower, hovers at a position 5m above the central point of the tower top, and shoots an image of a first visual angle with a cradle head facing downwards by 90 degrees, and meanwhile acquires coordinate information of the unmanned aerial vehicle;
s2, the unmanned aerial vehicle sequentially shoots images of a second view angle and a third view angle at a position, which is opposite to the direction of the tower, of a pitch angle of the cradle head by 0 degrees at a distance of 5m from the left side to the right side of the tower according to the setting, and acquires coordinate information of the two points;
s3, sending the three images collected in the S1 and the S2 into a pre-trained ResNet-50 model on an ImageNet data set for feature extraction;
s4, obtaining coordinate information during shooting, performing view angle transformation on three images with different view angles, and projecting the 3D feature images into corresponding 2D feature images;
s5, three groups of 2D feature images are obtained in S4, meanwhile, a 2-channel coordinate image is generated by using coordinate information to specify the X-Y coordinates of the ground plane position, and a group of 3 multiplied by 64+2 channel ground plane feature images are obtained by combining projection 64 channel feature images from three visual angles;
s6, a group of feature graphs after feature aggregation is obtained in S5, and the final prediction feature graph is obtained by convolution operation of a large convolution kernel, wherein the number of output channels of the convolution operation is 1;
and S7, the prediction feature map in S6 comprises all key components including insulators, and the key components in the image are directly counted.
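The channel bookkeeping of S5 and S6 can be sketched as follows; the ground-plane grid size Hg × Wg, the random feature values, and the trivial 1×1 stand-in for the large-kernel convolution are all hypothetical illustrations, not the patented implementation:

```python
import numpy as np

# Hypothetical grid size and the per-view channel count from S3/S5.
Hg, Wg, C, VIEWS = 120, 120, 64, 3
rng = np.random.default_rng(0)

# Projected 64-channel feature maps from the three views (random stand-ins).
view_feats = [rng.random((C, Hg, Wg)) for _ in range(VIEWS)]

# 2-channel coordinate map giving each cell's X-Y ground-plane position (S5).
ys, xs = np.meshgrid(np.arange(Hg), np.arange(Wg), indexing="ij")
coord_map = np.stack([xs, ys]).astype(float)

# Concatenate along the channel axis: 3 * 64 + 2 = 194 channels.
ground_plane = np.concatenate(view_feats + [coord_map], axis=0)

# A single-output-channel weighting (a 1x1 stand-in for the large-kernel
# convolution of S6) collapses the stack to one prediction map.
weights = rng.random(VIEWS * C + 2) / (VIEWS * C + 2)
prediction = np.tensordot(weights, ground_plane, axes=([0], [0]))
```

The point of the sketch is the shape arithmetic: three 64-channel projections plus the 2-channel coordinate map give a 194-channel ground-plane tensor, and the final convolution has exactly one output channel.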
Preferably, the specific feature extraction method of S3 is as follows:
three images of different view angles, each of size [Hi, Wi], where Hi and Wi denote the image height and width, are taken as input, and ResNet-50 extracts three 64-channel feature maps; to maintain a relatively high spatial resolution in the feature maps, the last three strided convolutions of ResNet-50 are replaced with dilated (hole) convolutions.
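Why dilated convolutions preserve resolution can be checked with the standard convolution output-size formula; the 256 × 256 input and the three 3 × 3 stages below are hypothetical numbers chosen only for illustration:

```python
def conv_out_size(n, k, stride=1, pad=0, dilation=1):
    """Standard convolution output-size formula."""
    effective_k = dilation * (k - 1) + 1
    return (n + 2 * pad - effective_k) // stride + 1

n = 256  # hypothetical input feature-map side length

# Three strided 3x3 stages halve the resolution each time ...
strided = n
for _ in range(3):
    strided = conv_out_size(strided, k=3, stride=2, pad=1)

# ... while dilated 3x3 stages (stride 1, matching padding) preserve it
# and still enlarge the receptive field.
dilated = n
for _ in range(3):
    dilated = conv_out_size(dilated, k=3, stride=1, pad=2, dilation=2)
```

With stride 2 the map shrinks from 256 to 32 after three stages, whereas the dilated variant stays at 256, which is the motivation for the replacement in S3.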
Preferably, the view transformation method in S4 is:
a pixel point in the image is located on a straight line in the three-dimensional world; to determine the exact three-dimensional position of the pixel, a common reference plane is set: a base plane, z=0; for all three-dimensional positions (x, y, 0) on the ground plane, the point state transformation is expressed as:
wherein s is a real scale factor, wherein P θ,0 Representing the slave P θ The 3 x 3 perspective transformation matrix of the third column is cancelled;
for implementation in a neural network, positions on the ground plane are quantized into a grid of shape [Hg, Wg]; for the images taken from the three different view angles, the above formula parameterizes a sampling grid of shape [Hg, Wg] that projects each image onto the z = 0 ground plane, and the resulting sampling grid generates a projected feature map on the ground plane, with the remaining positions filled with zeros.
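The transformation above can be sketched numerically; the intrinsics, the straight-down orientation, and the 5 m camera height below are hypothetical parameters, not calibration values from the patent:

```python
import numpy as np

# Hypothetical camera: hovering 5 m above the ground plane (z = 0),
# optical axis pointing straight down.
A = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])               # assumed intrinsics
R = np.diag([1., -1., -1.])                # rotation: looking down
C = np.array([0., 0., 5.])                 # camera center, 5 m up
t = -R @ C

P_theta = A @ np.hstack([R, t[:, None]])   # full 3x4 perspective matrix

# Deleting the third column gives the 3x3 image <-> ground-plane homography.
P_theta0 = P_theta[:, [0, 1, 3]]

# Forward: project ground point (x, y, 0): s [u, v, 1]^T = P_theta0 [x, y, 1]^T
x, y = 1.0, -1.0
uvw = P_theta0 @ np.array([x, y, 1.0])
u, v = uvw[:2] / uvw[2]

# Inverse: map the pixel back to the ground plane. This inverse mapping is
# what resamples each view's features onto the common [Hg, Wg] grid.
xy1 = np.linalg.inv(P_theta0) @ np.array([u, v, 1.0])
x_back, y_back = xy1[:2] / xy1[2]
```

Because z = 0 removes one degree of freedom, the reduced 3×3 matrix is invertible (for a camera not in the plane), which is what makes the ground-plane sampling grid well defined.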
Preferably, the unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion further comprises an unmanned aerial vehicle route-free acceptance data acquisition method based on visual detection, which specifically comprises the following steps:
a1, starting the unmanned aerial vehicle from a predicted flying spot, and simultaneously acquiring 1080p and 60 frames of video streams acquired by a camera of the unmanned aerial vehicle in real time;
a2, manually controlling the unmanned aerial vehicle to fly to a position 5m above the first pole tower and hover;
a3, the unmanned aerial vehicle recognizes the tower and starts a route-free data acquisition algorithm;
a4, performing frame extraction sampling on the 60 frames of images, and extracting 5 frames per second;
a5, executing a multi-view target detection module on the five images extracted in the A4;
a6, judging whether the multi-view target detection module detects tension insulators, if the result is negative, outputting the number of insulators to be 0, executing a channel detection algorithm, identifying the direction of the unmanned aerial vehicle going to the next base tower, and meanwhile, automatically increasing the number of the inspected towers by 1, if the result is positive, outputting the number of insulators of the towers by the multi-view target detection module, judging whether a plurality of channels are led out of the current towers, if the result is positive, manually selecting the channel of the next base tower by operation and maintenance personnel, and if the result is negative, identifying the direction of the unmanned aerial vehicle going to the next base tower according to a channel identification algorithm;
a7, collecting the number of the pole insulators returned in the A6, and storing the number of the pole insulators serving as parameters of the pole in a database;
a8, judging whether the number of the inspected towers reaches a threshold value, if so, carrying out A9, and if not, repeating A3 to A5 until the number of the towers reaches the threshold value;
a9, completing acceptance of the current tower group, and automatically flying the unmanned aerial vehicle to a preset drop point;
a10, deriving the collected insulator data on each tower, comparing the insulator data with the expected number of construction, and judging the acceptance effect according to the situation.
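The branching in A6 can be sketched as follows; the helper name, the detection-label format, and the channel representation are hypothetical, and the real channel detection and recognition algorithms are not modeled here:

```python
def accept_tower(detections, outgoing_channels, choose_channel_manually=None):
    """Return (insulator_count, next_heading) for one tower.

    detections        -- labels output by the multi-view detection module
                         (assumed string format)
    outgoing_channels -- candidate directions to the next tower
    choose_channel_manually -- operator callback used when several channels
                         lead out of the current tower
    """
    tension = [d for d in detections if d == "tension_insulator"]
    if not tension:
        # No tension insulators: count is 0 and the channel detection
        # algorithm (here: the single detected channel) picks the path.
        return 0, outgoing_channels[0]
    if len(outgoing_channels) > 1:
        # Several channels leave this tower: operator selects the next one.
        return len(tension), choose_channel_manually(outgoing_channels)
    # One channel: the channel recognition algorithm determines the heading.
    return len(tension), outgoing_channels[0]

# Single-channel tower with two tension insulators detected.
count, heading = accept_tower(
    ["tension_insulator", "tension_insulator", "pin_insulator"], ["NE"])

# Multi-channel tower: the operator callback resolves the branch.
count2, heading2 = accept_tower(
    ["tension_insulator"], ["NE", "SE"],
    choose_channel_manually=lambda chans: chans[0])
```

The sketch only captures the control flow of A6: the count comes from the detector, and the path decision is automatic except when the tower fans out into multiple channels.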
The advantages and technical effects of the invention are:
(1) In the distribution network key component identification and counting algorithm based on multi-view fusion provided by the invention, remote sensing equipment carried by the unmanned aerial vehicle collects the images, and target detection on the collected images is performed at the remote controller end of the unmanned aerial vehicle; no dedicated personnel are required, the burden on operators is reduced, and completion acceptance efficiency is effectively improved.
(2) The distribution network key component identification and counting algorithm based on multi-view fusion enlarges the field of view of the target detection model through multi-view images, corrects identification errors caused by components occluding one another on multi-circuit towers, and improves acceptance accuracy.
(3) The unmanned aerial vehicle route-free acceptance data acquisition method based on visual detection selects paths according to the identified insulator types, identifies and counts with high accuracy, feeds back acceptance results in real time, and effectively reduces the time wasted on path switching.
(4) The completion acceptance method for distribution network key components based on multi-view fusion combines road network, traffic, and power grid data, and is efficient and feasible.
Drawings
FIG. 1 is a schematic diagram of the unmanned aerial vehicle photographing a tower from three view angles in the multi-view-fusion-based distribution network key component counting method of the present invention;
FIG. 2 is an algorithm flow chart of the multi-view-fusion-based distribution network key component counting method of the present invention;
FIG. 3 is a flowchart of the unmanned aerial vehicle path selection algorithm based on visual detection according to the present invention.
Detailed Description
For a further understanding of the nature, features, and effects of the present invention, the following examples are given to illustrate the invention without limiting it. The embodiments are to be considered illustrative rather than restrictive, and the scope of the invention is not limited thereto.
The unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion comprises a counting method for key components of a tower, which specifically comprises the following steps:
S1, the unmanned aerial vehicle identifies the top of a tower, hovers 5 m above the center point of the tower top, captures an image of the first view angle with the gimbal pitched 90 degrees downward, and simultaneously records its coordinate information;
S2, the unmanned aerial vehicle then moves to the preset positions 5 m to the left and right of the tower and, with the gimbal at a pitch angle of 0 degrees facing the tower, sequentially captures images of the second and third view angles, recording the coordinate information of both points;
S3, the three images collected in S1 and S2 are fed into a ResNet-50 model pre-trained on the ImageNet data set for feature extraction;
S4, using the coordinate information recorded at capture time, a view transformation is applied to the three images of different view angles, projecting the 3D feature maps into corresponding 2D feature maps;
S5, from the three groups of 2D feature maps obtained in S4, a 2-channel coordinate map specifying the X-Y coordinates of each ground-plane position is generated from the coordinate information and concatenated with the projected 64-channel feature maps of the three view angles, yielding a (3×64+2)-channel ground-plane feature map;
S6, the aggregated feature map obtained in S5 is passed through a convolution with a large kernel and a single output channel to obtain the final prediction feature map;
S7, the prediction feature map from S6 contains all key components, including insulators, which are counted directly in the image.
Preferably, the specific feature extraction method of S3 is as follows:
three images of different view angles, each of size [Hi, Wi], where Hi and Wi denote the image height and width, are taken as input, and ResNet-50 extracts three 64-channel feature maps; to maintain a relatively high spatial resolution in the feature maps, the last three strided convolutions of ResNet-50 are replaced with dilated (hole) convolutions.
Preferably, the view transformation method in S4 is:
A pixel in an image corresponds to a straight line in the three-dimensional world; to determine the exact three-dimensional position of a pixel, a common reference plane is set: the ground plane, z = 0. For all three-dimensional positions (x, y, 0) on the ground plane, the point transformation is expressed as:
s [u, v, 1]^T = P_θ,0 [x, y, 1]^T
where s is a real-valued scale factor, (u, v) is the pixel position in the image, and P_θ,0 is the 3×3 perspective transformation matrix obtained from the 3×4 matrix P_θ by deleting its third column;
for implementation in a neural network, positions on the ground plane are quantized into a grid of shape [Hg, Wg]; for the images taken from the three different view angles, the above formula parameterizes a sampling grid of shape [Hg, Wg] that projects each image onto the z = 0 ground plane, and the resulting sampling grid generates a projected feature map on the ground plane, with the remaining positions filled with zeros.
Preferably, the unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion further comprises an unmanned aerial vehicle route-free acceptance data acquisition method based on visual detection, which specifically comprises the following steps:
A1, the unmanned aerial vehicle takes off from a preset takeoff point while the 1080p, 60 fps video stream from its camera is acquired in real time;
A2, the unmanned aerial vehicle is manually flown to a position 5 m above the first tower, where it hovers;
A3, the unmanned aerial vehicle recognizes the tower and starts the route-free data acquisition algorithm;
A4, the 60 fps stream is subsampled by frame extraction, taking 5 frames per second;
A5, the multi-view target detection module is run on the five images extracted in A4;
A6, it is judged whether the multi-view target detection module has detected tension insulators. If not, the insulator count is output as 0, the channel detection algorithm is executed to identify the direction of the next tower, and the count of inspected towers is incremented by 1. If so, the module outputs the number of insulators on the tower and it is judged whether multiple channels lead out of the current tower: if yes, operation and maintenance personnel manually select the channel to the next tower; if no, the direction to the next tower is identified by the channel recognition algorithm;
A7, the insulator count returned in A6 is collected and stored in a database as a parameter of the tower;
A8, it is judged whether the number of inspected towers has reached the threshold; if so, proceed to A9; if not, repeat A3 to A5 until the threshold is reached;
A9, acceptance of the current tower group is completed, and the unmanned aerial vehicle automatically flies to the preset landing point;
A10, the collected insulator data for each tower are exported and compared with the expected number from construction, and the acceptance result is judged accordingly.
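The frame-extraction step in A4 reduces to simple stride sampling: a 60 fps stream downsampled to 5 frames per second keeps every 12th frame. A minimal sketch, with a list of integers standing in for one second of decoded frames:

```python
FPS_IN, FPS_OUT = 60, 5
STRIDE = FPS_IN // FPS_OUT  # keep every 12th frame

def sample_frames(frames):
    """Keep every STRIDE-th frame of one second of video."""
    return frames[::STRIDE]

one_second = list(range(FPS_IN))  # stand-in for 60 decoded frames
sampled = sample_frames(one_second)
```

Five frames per second survive, which matches the five images fed to the multi-view detection module in A5.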
To describe the specific embodiments of the present invention more clearly, an example is provided below:
the invention discloses an unmanned aerial vehicle network completion acceptance method based on multi-view fusion, which comprises two sets of methods running synchronously, namely: 1. a gateway key matching component identification counting method based on multi-view fusion utilizes multi-view images acquired by the same tower to realize high-accuracy counting of insulators on the tower of a complex line through feature extraction and projection; 2. the unmanned aerial vehicle route-free acceptance data acquisition method based on visual detection realizes unmanned aerial vehicle autonomous data acquisition and greatly improves distribution network acceptance efficiency.
In the key component identification and counting method, multi-view images of the same tower, acquired by an unmanned aerial vehicle, are preprocessed, feature images of all views are extracted by utilizing self-adaptive feature extraction capability of deep learning, multi-view association is carried out based on a projection principle, multi-view forms are integrated in one total feature image, the total feature image is detected, identification errors caused by mutual shielding of components under the condition of multiple loops are corrected, and acceptance accuracy is improved.
In addition, in the unmanned aerial vehicle no-route acceptance data acquisition method based on visual detection, on the basis of a completion acceptance method of key parts of the multi-view fused power distribution network, application scenes of tension insulators and linear insulators in the distribution network are combined, and a path of the unmanned aerial vehicle to the next base tower is selected according to the detected insulation subtype, so that real-time data acquisition and acceptance result feedback are facilitated.
In actual completion acceptance work, the two methods are carried out synchronously; the overall steps are as follows:
(1) The unmanned aerial vehicle takes off from the preset takeoff point while the 1080p, 60 fps video stream from its camera is acquired in real time;
(2) The unmanned aerial vehicle is manually flown to a position 5 m above the first tower;
(3) The unmanned aerial vehicle recognizes the tower, begins executing the algorithm of fig. 3, and invokes the multi-view target detection module of fig. 2;
(4) The insulator count returned in step (3) is collected and stored in a database as a parameter of the tower;
(5) It is judged whether the number of inspected towers has reached the threshold; if so, proceed to step (6), otherwise repeat steps (3) to (5) until the threshold is reached;
(6) Acceptance of the current tower group is completed, and the unmanned aerial vehicle automatically flies to the preset landing point;
(7) The collected insulator data for each tower are exported and compared with the expected number from construction, and the acceptance result is judged accordingly.
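The final comparison in step (7) amounts to matching collected counts against expected construction counts per tower; the tower names and counts below are hypothetical illustrative data:

```python
# Hypothetical per-tower insulator counts: collected by the drone vs.
# expected from the construction design.
collected = {"tower_01": 6, "tower_02": 4, "tower_03": 6}
expected  = {"tower_01": 6, "tower_02": 6, "tower_03": 6}

# Towers whose collected count differs from the design fail acceptance.
failed = sorted(t for t in expected if collected.get(t) != expected[t])
passed = len(expected) - len(failed)
```

A mismatch on any tower flags it for re-inspection, while matching counts confirm that tower's acceptance.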
Finally, the invention otherwise adopts mature products and established technical means from the prior art.
It will be understood that modifications and variations will be apparent to those skilled in the art from the foregoing description, and all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (4)

1. An unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion, characterized by comprising a counting method for key components of a tower, which comprises the following specific steps:
S1, the unmanned aerial vehicle identifies the top of a tower, hovers 5 m above the center point of the tower top, captures an image of the first view angle with the gimbal pitched 90 degrees downward, and simultaneously records its coordinate information;
S2, the unmanned aerial vehicle then moves to the preset positions 5 m to the left and right of the tower and, with the gimbal at a pitch angle of 0 degrees facing the tower, sequentially captures images of the second and third view angles, recording the coordinate information of both points;
S3, the three images collected in S1 and S2 are fed into a ResNet-50 model pre-trained on the ImageNet data set for feature extraction;
S4, using the coordinate information recorded at capture time, a view transformation is applied to the three images of different view angles, projecting the 3D feature maps into corresponding 2D feature maps;
S5, from the three groups of 2D feature maps obtained in S4, a 2-channel coordinate map specifying the X-Y coordinates of each ground-plane position is generated from the coordinate information and concatenated with the projected 64-channel feature maps of the three view angles, yielding a (3×64+2)-channel ground-plane feature map;
S6, the aggregated feature map obtained in S5 is passed through a convolution with a large kernel and a single output channel to obtain the final prediction feature map;
S7, all key components, including insulators, in the prediction feature map obtained in S6 are counted directly.
2. The unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion according to claim 1, characterized in that the specific feature extraction method of S3 is:
three images of different view angles, each of size [Hi, Wi], where Hi and Wi denote the image height and width, are taken as input, and ResNet-50 extracts three 64-channel feature maps; to maintain the spatial resolution of the feature maps, the last three strided convolutions of ResNet-50 are replaced with dilated (hole) convolutions.
3. The unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion according to claim 1, characterized in that the view transformation method in S4 is:
A pixel in an image corresponds to a straight line in the three-dimensional world; to determine the exact three-dimensional position of a pixel, a common reference plane is set: the ground plane, z = 0. For all three-dimensional positions (x, y, 0) on the ground plane, the point transformation is expressed as:
s [u, v, 1]^T = P_θ,0 [x, y, 1]^T
where s is a real-valued scale factor, (u, v) is the pixel position in the image, and P_θ,0 is the 3×3 perspective transformation matrix obtained from the 3×4 matrix P_θ by deleting its third column;
for implementation in a neural network, positions on the ground plane are quantized into a grid of shape [Hg, Wg]; for the images taken from the three different view angles, the above formula parameterizes a sampling grid of shape [Hg, Wg] that projects each image onto the z = 0 ground plane, and the resulting sampling grid generates a projected feature map on the ground plane, with the remaining positions filled with zeros.
4. The unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion as claimed in claim 1, wherein the route-free unmanned aerial vehicle acceptance data acquisition method based on visual detection comprises the following steps:
A1, the unmanned aerial vehicle takes off from a preset takeoff point, and the 1080p, 60-frame video stream captured by the unmanned aerial vehicle camera is acquired in real time;
A2, manually controlling the unmanned aerial vehicle to fly to a position 5 m above the first tower and hover;
A3, the unmanned aerial vehicle recognizes the tower and starts the route-free data acquisition algorithm;
A4, performing frame-extraction sampling on the 60-frame video stream, extracting 5 frames per second;
A5, executing the multi-view target detection module on the five images extracted in A4;
A6, judging whether the multi-view target detection module has detected tension insulators; if not, outputting an insulator count of 0, executing the channel detection algorithm to identify the direction in which the unmanned aerial vehicle should head for the next tower, and incrementing the inspected-tower count by 1; if so, the multi-view target detection module outputs the insulator count of the tower, and it is then judged whether multiple channels lead out of the current tower: if so, operation and maintenance personnel manually select the channel to the next tower; if not, the direction to the next tower is identified by the channel identification algorithm;
A7, collecting the tower insulator count returned in A6 and storing it in a database as a parameter of the tower;
A8, judging whether the number of inspected towers has reached the threshold; if so, proceeding to A9; if not, repeating A3 to A5 until the number of towers reaches the threshold;
A9, completing acceptance of the current tower group, and the unmanned aerial vehicle automatically flying to the preset landing point;
A10, exporting the collected insulator data for each tower, comparing it with the expected number from the construction design, and judging the acceptance result accordingly.
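Steps A1 to A10 can be summarized as a control loop; everything here is an illustrative sketch, and all helper names (`sample_frames`, the `drone` and `db` objects and their methods) are hypothetical stand-ins for the drone SDK and the detection modules named in the claim:

```python
FRAME_RATE = 60    # camera stream, frames per second (A1)
SAMPLE_FPS = 5     # frames kept per second after decimation (A4)

def sample_frames(one_second_of_frames):
    """A4: keep 5 evenly spaced frames out of each 60-frame second."""
    step = FRAME_RATE // SAMPLE_FPS             # every 12th frame
    return one_second_of_frames[::step][:SAMPLE_FPS]

def accept_tower_group(drone, db, tower_threshold):
    """A3-A9: inspect towers until the inspected count reaches the threshold."""
    inspected = 0
    while inspected < tower_threshold:                     # A8
        views = sample_frames(drone.latest_second())       # A4
        count = drone.detect_insulators(views)             # A5/A6
        db[inspected] = count                              # A7: store per tower
        inspected += 1
        if inspected < tower_threshold:
            drone.fly_along(drone.next_tower_direction())  # channel detection
    drone.fly_to_landing_point()                           # A9

def check_acceptance(db, expected_counts):
    """A10: compare collected counts against the construction design."""
    return {tid: db.get(tid) == n for tid, n in expected_counts.items()}
```

The manual branch of A6 (operator choosing among multiple channels) and the initial manual approach of A2 are omitted, since they happen outside the automated loop.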
CN202311439313.2A 2023-11-01 2023-11-01 Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion Active CN117152647B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311439313.2A CN117152647B (en) 2023-11-01 2023-11-01 Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion


Publications (2)

Publication Number Publication Date
CN117152647A true CN117152647A (en) 2023-12-01
CN117152647B CN117152647B (en) 2024-01-09

Family

ID=88908598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311439313.2A Active CN117152647B (en) 2023-11-01 2023-11-01 Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion

Country Status (1)

Country Link
CN (1) CN117152647B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110123658A (en) * 2010-05-07 2011-11-15 한국전자통신연구원 Method and system for transmitting/receiving 3-dimensional broadcasting service
CN113267779A (en) * 2021-05-17 2021-08-17 南京师范大学 Target detection method and system based on radar and image data fusion
CN113887641A (en) * 2021-10-11 2022-01-04 山东信通电子股份有限公司 Hidden danger target determination method, device and medium based on power transmission channel


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENBIN WANG: "The Application of UAV in Intelligent Distribution Line Acceptance System", IEEE, pages 289-293 *
WANG JINHUI: "Research Progress of UAV Inspection Technology for Overhead Transmission Lines", Metallurgy and Materials, pages 54-58 *


Similar Documents

Publication Publication Date Title
CN114035614B (en) Unmanned aerial vehicle autonomous inspection method and system based on prior information and storage medium
CN109284739B (en) Power transmission line external damage prevention early warning method and system based on deep learning
He et al. Research of multi-rotor UAVs detailed autonomous inspection technology of transmission lines based on route planning
CN105407278A (en) Panoramic video traffic situation monitoring system and method
CN107590835A (en) Mechanical arm tool quick change vision positioning system and localization method under a kind of nuclear environment
CN106403942B (en) Personnel indoor inertial positioning method based on substation field depth image identification
CN109739254B (en) Unmanned aerial vehicle adopting visual image positioning in power inspection and positioning method thereof
CN106547814A (en) A kind of power transmission line unmanned machine patrols and examines the structuring automatic archiving method of image
CN105551032B (en) The shaft tower image capturing system and its method of a kind of view-based access control model servo
CN112327906A (en) Intelligent automatic inspection system based on unmanned aerial vehicle
CN108919367A (en) Transmission line of alternation current inversion method based on current field
CN209174850U (en) The device of big packet collector nozzle is positioned using machine vision
CN115240093B (en) Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion
CN108960134A (en) A kind of patrol UAV image mark and intelligent identification Method
CN110519582A (en) A kind of crusing robot data collection system and collecting method
CN109919038A (en) Power distribution cabinet square pressing plate state identification method based on machine vision and deep learning
CN112734370A (en) BIM-based project supervision information management method and system
CN117152647B (en) Unmanned aerial vehicle distribution network completion acceptance method based on multi-view fusion
CN116736891A (en) Autonomous track planning system and method for multi-machine collaborative inspection power grid line
CN115471573A (en) Method for correcting presetting bit offset of transformer substation cloud deck camera based on three-dimensional reconstruction
CN112613334A (en) High-precision autonomous inspection image identification method of unmanned aerial vehicle on power transmission line
CN114708216A (en) Construction progress intelligent identification method and system based on image identification
CN114387558A (en) Transformer substation monitoring method and system based on multi-dimensional video
Fang et al. A framework of power pylon detection for UAV-based power line inspection
CN115297303B (en) Image data acquisition and processing method and device suitable for power grid power transmission and transformation equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant