CN114217641B - Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment - Google Patents


Info

Publication number
CN114217641B
CN114217641B (application CN202111274981.5A)
Authority
CN
China
Prior art keywords
inspection
candidate
point
points
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111274981.5A
Other languages
Chinese (zh)
Other versions
CN114217641A (en)
Inventor
马磊
王耀东
王勇
孟大鹏
缑培培
付治宇
左魁生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Henan Electric Power Co Zhengzhou Power Supply Co
Zhongmu Power Supply Co Of State Grid Henan Electric Power Co
State Grid Corp of China SGCC
Original Assignee
State Grid Henan Electric Power Co Zhengzhou Power Supply Co
Zhongmu Power Supply Co Of State Grid Henan Electric Power Co
State Grid Corp of China SGCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Henan Electric Power Co Zhengzhou Power Supply Co, Zhongmu Power Supply Co Of State Grid Henan Electric Power Co, State Grid Corp of China SGCC filed Critical State Grid Henan Electric Power Co Zhengzhou Power Supply Co
Priority to CN202111274981.5A priority Critical patent/CN114217641B/en
Publication of CN114217641A publication Critical patent/CN114217641A/en
Application granted granted Critical
Publication of CN114217641B publication Critical patent/CN114217641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 — Target-seeking control


Abstract

The invention discloses a method and a system for inspecting power transmission and transformation equipment with an unmanned aerial vehicle in an unstructured environment. S1, environment information is acquired through a sensor; S2, features are extracted from the environment-information image and the power transmission and transformation equipment to be inspected is marked as candidate inspection points; if candidate inspection points exist, go to S3, otherwise go to S4; S3, the optimal inspection point is selected from all candidate inspection points, the robot travels to that point, marks it as inspected, and returns to S1; S4, the inspection robot queries again whether candidate inspection points exist; if so, go to S3, otherwise go to S5; S5, check whether all candidate inspection points are marked as inspected; if uninspected candidate inspection points remain, the inspection robot travels to such a point and returns to S1, otherwise the inspection ends. Optimal planning of the inspection path is achieved, and the unknown inspection environment can be explored during inspection, thereby realizing autonomous inspection in an unknown unstructured environment.

Description

Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment
Technical field:
The invention relates to the field of inspection of power transmission and transformation equipment, in particular to an inspection method and system of unmanned aerial vehicle power transmission and transformation equipment in an unstructured environment.
Background art:
To ensure a stable power supply, maintenance personnel must periodically patrol power transmission and transformation lines to find defects in the equipment. With the development of unmanned robots, inspection is shifting from manual patrol toward machine patrol, which greatly improves inspection efficiency, reduces labor intensity, and can even locate a defect or fault position directly and raise an alarm, so that maintenance personnel handle faults far more efficiently and the stable supply of electric power is ensured. Machine patrol includes aerial unmanned-aerial-vehicle inspection and ground-robot inspection. In either mode, however, the prior art requires the inspection coordinates to be entered into the inspection robot through manual real-time remote control, and the robot then inspects the entered coordinates one by one; this presupposes that the inspection environment is already known, and there is no provision for the robot to inspect autonomously in an unknown environment. Moreover, because inspection robots are battery-powered, commanding the robot between manually chosen coordinate points wastes electric power on unnecessary travel, forces frequent recharging or battery replacement, and ties up human operators.
Summary of the invention:
The technical problem to be solved by the invention is as follows: the unknown unstructured environment is sensed through sensors, and the optimal inspection point is selected from the detected power transmission and transformation equipment requiring inspection, so that optimal planning of the inspection path is achieved; at the same time the unknown inspection environment can be explored during inspection, realizing autonomous inspection in an unknown unstructured environment and solving the problems of excessive power consumption and wasted human resources caused by manual remote-controlled inspection and inspection from imported inspection-point coordinates.
In order to solve the technical problems, the invention provides a technical scheme as follows: an unmanned aerial vehicle power transmission and transformation equipment inspection method in an unstructured environment comprises the following steps: step one, the unmanned aerial vehicle collects environment information of the power transmission and transformation equipment through an onboard sensor;
Step two, the obtained environment-information image is processed and image features are extracted from the processed image; the power transmission and transformation equipment to be inspected is marked as candidate inspection points; if candidate inspection points exist in the explored environment, jump to step three, otherwise jump to step four;
Step three, the optimal inspection point is selected from all candidate inspection points, the robot travels to that inspection point, marks it as inspected, and returns to step one;
Step four, the inspection robot queries again whether a candidate inspection point exists; if so, return to step three, otherwise jump to step five;
Step five, check whether all candidate inspection points are marked as inspected; if an uninspected candidate inspection point remains, the inspection robot travels to that point and then jumps to step one; if none remains, the inspection ends.
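By way of a non-limiting illustration only, the step-one-to-five loop above can be sketched as the following state machine. The sensing step is stubbed out with a hypothetical `sense` callback (not part of the patent), and nearest-distance is used here as a placeholder for the multi-criteria "optimal point" selection described later:

```python
import math

def inspection_loop(robot_pos, sense):
    """Sketch of the five-step loop: sense -> mark candidates ->
    visit best -> re-query -> finish when all are inspected.
    `sense(pos)` is a hypothetical stand-in for steps one and two:
    it returns candidate inspection points visible from `pos`."""
    candidates = {}          # point -> "inspected" / "uninspected"
    visited_order = []
    while True:
        for p in sense(robot_pos):                     # steps one and two
            candidates.setdefault(p, "uninspected")
        todo = [p for p, s in candidates.items() if s == "uninspected"]
        if todo:                                       # step three
            best = min(todo, key=lambda p: math.dist(robot_pos, p))
            candidates[best] = "inspected"
            robot_pos = best
            visited_order.append(best)
            continue
        # step four: re-query found nothing new; step five: all inspected
        if all(s == "inspected" for s in candidates.values()):
            return visited_order

def demo_sense(pos):
    # toy environment: each visited point reveals further candidates
    world = {(0, 0): [(1, 0), (0, 2)], (1, 0): [(3, 0)], (0, 2): [], (3, 0): []}
    return world.get(pos, [])

print(inspection_loop((0, 0), demo_sense))
```

The loop terminates exactly when no uninspected candidate remains, mirroring the end condition of step five.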
In the first step, the sensors are a three-dimensional camera SR-3000, a laser range finder and a gyroscope.
Further, in the second step, the steps of image processing and feature extraction are as follows: 1) Preprocessing an image to obtain image gray scale and three-dimensional information;
2) Marking the ground, the sky and the distant view in the gray level image according to a threshold value, and deleting the sky, the distant view and the ground area;
3) Binarizing images of the marked sky and the ground, wherein the gray value of the region of interest is represented by a non-zero value, and the gray value of the region of non-interest is represented by zero;
4) Clustering the extracted region of interest in the gray level image by using the three-dimensional information of the pixels, and separating the region of interest from the non-region of interest;
5) The current point and its adjacent point in the region of interest are compared; if the distance between the two points lies within a certain range, the two data points are considered to belong to the same class; if it exceeds the threshold, the two points are considered to belong to different classes, and the current data point becomes the starting point of a newly added class for the next round of comparison, completing the secondary cluster analysis;
6) And determining and extracting edges through edge detection, and outlining a target object.
Further, the image preprocessing step comprises the following steps: noise generated in signal acquisition is eliminated by mean value filtering or median value filtering, and gray scale conversion is carried out on the image by linear gray scale conversion, nonlinear gray scale conversion or piecewise linear gray scale conversion.
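As one possible sketch of this preprocessing (median filtering followed by a linear gray-scale transform), using NumPy/SciPy; the filter size and the gain/offset values are illustrative choices, not values fixed by the patent:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(gray):
    """Median filtering to suppress acquisition noise, then a
    linear gray-scale transform g' = a*g + b (a, b illustrative),
    clipped back to the 8-bit range."""
    denoised = median_filter(gray, size=3)
    a, b = 1.5, -20.0                      # illustrative contrast stretch
    stretched = np.clip(a * denoised.astype(float) + b, 0, 255)
    return stretched.astype(np.uint8)

img = np.array([[10, 10, 200], [10, 250, 10], [10, 10, 10]], dtype=np.uint8)
out = preprocess(img)
print(out.shape, out.dtype)
```

Mean filtering or a nonlinear/piecewise-linear transform could be substituted in the same place.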
Further, the image marking step comprises the following steps: 1) Marking out an area belonging to sky on the image according to the height information of the pixel points of the image, and setting the gray value corresponding to the pixel point with the height value of the pixel larger than the set threshold value to zero;
2) Marking the ground area on the image according to the height information of the image pixel points, and setting to zero the gray value of pixel points whose height value is smaller than the set threshold;
3) And marking a distant view area on the image according to the distance information of the image pixel points, and setting the gray value corresponding to the pixel points with the distance value larger than the threshold value to zero.
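The three marking rules above amount to zeroing pixels by per-pixel height and distance thresholds. A minimal sketch follows; the threshold values are arbitrary placeholders, not figures from the patent:

```python
import numpy as np

def mark_regions(gray, height, dist, sky_h=4.0, ground_h=0.2, far_d=7.5):
    """Zero out sky pixels (height above sky_h), ground pixels
    (height below ground_h) and distant-view pixels (range beyond
    far_d). All three threshold values are placeholders."""
    out = gray.copy()
    out[height > sky_h] = 0      # sky
    out[height < ground_h] = 0   # ground
    out[dist > far_d] = 0        # distant view
    return out

gray   = np.full((2, 2), 100, dtype=np.uint8)
height = np.array([[5.0, 1.0], [0.1, 1.0]])
dist   = np.array([[3.0, 8.0], [3.0, 3.0]])
print(mark_regions(gray, height, dist))
```

Only the lower-right pixel survives here: the others are marked as sky, distant view and ground respectively.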
Further, during marking the ground, there are partial discontinuities in the non-region of interest and the binary image is processed with erosion and dilation operations to remove these discontinuities.
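A minimal sketch of that clean-up, assuming a binary image and SciPy's morphology operations (the 3x3 structuring element is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def remove_discontinuities(binary):
    """Erosion followed by dilation (opening) removes small spurious
    foreground specks; a closing then fills small holes."""
    s = np.ones((3, 3), dtype=bool)
    return binary_closing(binary_opening(binary, structure=s), structure=s)

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True   # a solid region of interest
img[0, 0] = True       # an isolated speck that should belong to ground
print(remove_discontinuities(img).astype(int))
```

The isolated speck is erased while the solid 3x3 region survives unchanged.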
Further, in the third step, the method for determining the optimal candidate inspection point includes: and an MCDM (Multi-Criteria Decision Making) Multi-index decision system is adopted, a plurality of evaluation indexes of each candidate inspection point are comprehensively considered according to evaluation conditions to obtain evaluation values, the evaluation values of the plurality of candidate inspection points are compared, the candidate inspection point with the largest evaluation value is selected as an optimal inspection point, and the optimal inspection point is selected from the plurality of candidate inspection points.
Further, the evaluation conditions are: path consumption, information gain, and rotation angle, wherein,
Path consumption: the distance the inspection robot travels from its current inspection point to the target inspection point during one inspection cycle;
Information gain: the new environment information acquired at the inspection point, comprising the newly acquired environment area and the length of the free boundary at the inspection point;
Rotation angle: the angle through which the robot must rotate from its heading at the current position in order to face the selected inspection point.
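As one hedged illustration of combining these three conditions into a single MCDM score (the weights, the min-max normalization, and the complement used for cost-type conditions are assumptions for illustration; the patent does not fix them at this point):

```python
def evaluate(points, weights=(0.4, 0.4, 0.2)):
    """Score candidates {name: (path_cost, info_gain, rotation)}.
    Cost-type conditions (path, rotation) are normalized and then
    inverted so that a larger score is always better."""
    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [0.5 if hi == lo else (v - lo) / (hi - lo) for v in vals]
    names = list(points)
    cost, gain, rot = zip(*(points[n] for n in names))
    u_cost = [1 - u for u in norm(cost)]   # smaller path cost is better
    u_gain = norm(gain)                    # larger information gain is better
    u_rot  = [1 - u for u in norm(rot)]    # smaller rotation is better
    w1, w2, w3 = weights
    score = {n: w1 * c + w2 * g + w3 * r
             for n, c, g, r in zip(names, u_cost, u_gain, u_rot)}
    return max(score, key=score.get), score

best, _ = evaluate({"A": (2.0, 10.0, 30.0),
                    "B": (5.0, 12.0, 90.0),
                    "C": (9.0,  3.0, 10.0)})
print(best)
```

The candidate with the largest combined evaluation value is selected as the optimal inspection point, as the method prescribes.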
In order to solve the technical problems, the invention provides another technical scheme as follows: an unmanned aerial vehicle power transmission and transformation equipment inspection system in an unstructured environment, characterized in that it comprises a perception module, an autonomous module and a movement module, wherein:
Perception module: a sensor assembly that acquires environment information of the power transmission and transformation equipment;
Autonomous module: processes the obtained environment-information image, extracts image features from the processed image, marks the power transmission and transformation equipment to be inspected as candidate inspection points, selects the optimal inspection point from all candidate inspection points according to the evaluation conditions, and sends a control instruction for moving to the optimal inspection point to the movement module;
Movement module: receives the control instruction sent by the autonomous module and adjusts the corresponding power to move to the optimal inspection point.
The beneficial effects of the invention are as follows:
Environment information of the power transmission and transformation equipment is acquired through the sensor carried by the unmanned aerial vehicle; the obtained environment-information image is processed, image features are extracted, and the power transmission and transformation equipment to be inspected is marked as candidate inspection points; if candidate inspection points exist in the explored environment, jump to step three, otherwise jump to step four; the optimal inspection point is selected from all candidate inspection points, the robot travels to that inspection point, marks it as inspected, and returns to step one; the inspection robot queries again whether a candidate inspection point exists; if so, return to step three, otherwise jump to step five; check whether all candidate inspection points are marked as inspected; if an uninspected candidate inspection point remains, the inspection robot travels to that point and then jumps to step one, and if none remains, the inspection ends. Optimal planning of the inspection path is thus achieved, and the unknown inspection environment can be explored during inspection, realizing autonomous inspection in an unknown unstructured environment and solving the problems of excessive power consumption and wasted human resources caused by manual remote-controlled inspection and inspection from imported inspection-point coordinates.
Because the background in the unstructured environment is similar in color to the power transmission and transformation equipment, a three-dimensional camera is used to obtain a gray-scale image together with depth information. Based on the gray-scale information and the three-dimensional information, a three-dimensional-information threshold method divides the image into a region of interest and a non-region of interest, which improves image quality and removes redundant, low-value information. The power transmission and transformation equipment inside the region of interest is then separated a second time and the image segmented, so that the individual pieces of equipment in the region of interest are independent of one another, which facilitates the subsequent marking of inspection points and a smooth inspection.
When the inspection robot inspects autonomously in an unknown environment, the unknown environment is sensed mainly by the sensors carried on the robot during inspection. Because the sensing range of the sensors is limited, the robot only perceives the environment around the inspection path, so the inspection of the power transmission and transformation equipment is at the same time a process of sensing the unknown environment. On the basis of frontier theory, the optimal inspection point among the power transmission and transformation equipment in the sensed environment is selected for inspection according to the evaluation conditions, so that the inspection path is planned and a comprehensive optimum of electric-power consumption, environment sensing and equipment inspection is achieved.
Description of the drawings:
In order to more clearly illustrate the invention or the technical solutions of the prior art, the drawings needed in the description of the embodiments are briefly introduced below; it is obvious that the drawings described below are only some embodiments of the invention, and that other drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of a method for inspecting power transmission and transformation equipment of an unmanned aerial vehicle in an unstructured environment;
FIG. 2 is a flowchart of the optimal inspection point acquisition step;
FIG. 3 is a schematic diagram of a simulation experiment exploration process;
FIG. 4 is a schematic diagram of a simulation experiment exploration process II;
FIG. 5 is a schematic diagram III of a simulation experiment exploration process;
Fig. 6 is a system connection block diagram of the inspection system of the unmanned aerial vehicle power transmission and transformation equipment in an unstructured environment.
The specific embodiment is as follows:
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Referring to fig. 1-5, the application discloses an inspection method for power transmission and transformation equipment of an unmanned aerial vehicle in an unstructured environment, which comprises the following steps: S1, the unmanned aerial vehicle acquires environment information of the power transmission and transformation equipment through an onboard sensor;
S2, the obtained environment-information image is processed and image features are extracted from the processed image; the power transmission and transformation equipment to be inspected is marked as candidate inspection points; if candidate inspection points exist in the explored environment, jump to S3, otherwise jump to S4;
S3, the optimal inspection point is selected from all candidate inspection points, the robot travels to that inspection point, marks it as inspected, and returns to S1;
S4, the inspection robot queries again whether a candidate inspection point exists; if so, return to S3, otherwise jump to S5;
S5, check whether all candidate inspection points are marked as inspected; if an uninspected candidate inspection point remains, the inspection robot travels to that point and then jumps to S1; if none remains, the inspection ends.
According to the application, the unknown unstructured environment is sensed through the sensors, and the optimal inspection point is selected from the detected power transmission and transformation equipment requiring inspection, so that optimal planning of the inspection path is achieved; at the same time the unknown inspection environment can be explored during inspection, realizing autonomous inspection in an unknown unstructured environment and solving the problems of excessive power consumption and wasted human resources caused by manual remote-controlled inspection and inspection from imported inspection points.
If several candidate exploration points exist during exploration, the selected optimal exploration point should be marked as explored, while the unselected candidate points remain marked as unexplored and are visited in later explorations. If there is only one candidate point, that point is the optimal exploration point and is marked as explored.
When the inspection robot finds no candidate points after an exploration, then if the point where it stands is the only candidate point generated by the previous exploration, the robot should rotate in place to check whether candidate points exist around it; if the point where it stands is one of several candidate points generated by the previous exploration, the robot can travel directly to the next-best exploration point from that exploration without rotating in place.
The motion toward a candidate point comprises the following phases: first, because the target point may not lie in the robot's initial heading, the robot rotates so that its heading points toward the target point; second, it travels straight and stops when it reaches the target point; finally, it rotates again after reaching the candidate point so that it holds the best posture for the exploration task. Note that if the current heading already coincides with the desired heading, the corresponding rotation can be omitted.
Preferably, in the step S1, the sensors are a three-dimensional camera SR-3000, a laser range finder and a gyroscope.
The SR-3000 three-dimensional camera from MESA Imaging AG (Switzerland) uses an array of 55 built-in LEDs as an infrared light source. Based on the time-of-flight (TOF) ranging principle, it simultaneously obtains the gray-scale information and the corresponding depth information of the scene points. The image resolution is 176 x 144 pixels and the maximum effective distance is 7.5 m; the processing speed reaches 30 frames/second, the power consumption is only 1 W, and the field of view is 47.5 x 39.6 degrees. At a depth of field of 7.5 m the imaged area is 6.5 m (width) x 5 m (height) and the imaging precision reaches 12.8 cm2; this moderate precision is suitable for short-range obstacle detection by the inspection robot.
The laser range finder is an external sensor with high precision and high resolution. It first emits a laser beam toward the target and then receives the beam reflected by the target with a photocell; a timer computes the distance from the observer to the target from the measured time difference between emission and reception of the beam. Compared with sonar, the laser range finder offers dense data points, rapid scanning, a short sampling period, a long detection distance and higher angular resolution; compared with a vision sensor, it is not affected by ambient light and its data processing is simple with a small computational load, which is why it is the main ranging sensor adopted by many current inspection robots.
Because the working environment changes in real time, the inspection robot can acquire pitch and roll angles relative to the horizontal plane; the mobile inspection robot therefore needs to know its own attitude in addition to the external information about the surroundings, and the gyroscope is used to measure the attitude parameters of the inspection robot.
Further, the power transmission and transformation equipment refers to power equipment used for power transmission and transformation, such as transformers, circuit breakers, insulators, transmission lines and towers.
Preferably, in step S2, the image of the scene in the unstructured environment acquired by the three-dimensional camera is divided into distant view (beyond the detection range of the robot's line of sight), sky (above the height the robot can reach), ground, and obstacle regions. The sky, distant-view and ground areas do not influence the robot's current behavior or its route and are therefore the non-region of interest; the position, type, shape, size and so on of the objects in the remaining area are closely related to the robot's next action and require careful analysis and attention, and this area is called the region of interest.
Therefore, the optimal inspection point is obtained by the following steps:
Step S31, the ground, the sky and the distant view are marked in the gray-scale image according to thresholds, and the sky, distant-view and ground areas are deleted. This helps to reduce the amount of computation in the subsequent algorithmic steps.
(1) Marking out an area belonging to sky on the image according to the height information of the pixel points of the image, and setting the gray value corresponding to the pixel point with the height value of the pixel larger than the set threshold value to zero;
(2) Marking the ground area on the image according to the height information of the image pixel points, and setting to zero the gray value of pixel points whose height value is smaller than the set threshold;
(3) And marking a distant view area on the image according to the distance information of the image pixel points, and setting the gray value corresponding to the pixel points with the distance value larger than the threshold value to zero.
Step S32, binarizing the images of the marked sky and the ground, namely, representing the gray value of the region of interest by a non-zero value and representing the gray value of the region of non-interest by zero.
In step S33, because the ground is uneven, partial discontinuities may appear in the non-region of interest while marking the ground: spots that should belong to the ground appear as region of interest. The binary image is therefore processed with successive erosion and dilation operations to remove these discontinuities.
In step S34, the region of interest has been separated from the non-region of interest by clustering, but the individual pieces of power transmission and transformation equipment inside the region of interest have not yet been separated from each other. The arrangement of the three-dimensional data points reflects the geometric position of each spatial point, and the pixel points of an equipment area appear as data points close to one another, so the three-dimensional information of the pixels can be used to cluster the regions of interest already extracted from the gray-scale image. The premise is that each piece of equipment is a separate object, such as an insulator or a transmission line. Two currently adjacent points in the region of interest are compared: if the distance between them lies within a certain range, the two data points are considered to belong to the same class; if it exceeds the threshold, they are considered to belong to different classes, and the current data point becomes the starting point of a newly added class for the next round of comparison.
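A toy sketch of that secondary clustering step: points are scanned in order and a distance threshold decides whether a point joins the current class or starts a new one (the 3-D points and the threshold value here are illustrative assumptions):

```python
import math

def cluster_points(points, threshold):
    """Single-pass threshold clustering: a point within `threshold`
    of the previously seen point joins its class, otherwise it
    becomes the starting point of a new class."""
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) <= threshold:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

# two insulator-like clumps of 3-D points separated by a gap
pts = [(0, 0, 0), (0.1, 0, 0), (0.2, 0.1, 0),
       (5, 0, 0), (5.1, 0.1, 0)]
print(len(cluster_points(pts, threshold=0.5)))
```

With a suitable threshold the two clumps come out as two separate classes, i.e. two independent pieces of equipment.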
The Euclidean distance is used to calculate the distance between any two points in the image, and edge detection is performed with the Canny operator or the Sobel operator.
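For instance, a Sobel-based edge map can be sketched as follows with SciPy (the gradient-magnitude threshold of 50 is an illustrative value, and this is only one of the two operators the text mentions):

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_edges(gray, thresh=50.0):
    """Gradient magnitude via the Sobel operator in both axes;
    pixels above `thresh` (illustrative) are marked as edges."""
    g = gray.astype(float)
    mag = np.hypot(sobel(g, axis=0), sobel(g, axis=1))
    return mag > thresh

img = np.zeros((6, 6), dtype=np.uint8)
img[:, 3:] = 200                # a vertical step edge
edges = sobel_edges(img)
print(edges[:, 2:4].any(), edges[:, 0].any())
```

The edge map responds along the intensity step and stays quiet in the flat regions, which is what the contour-extraction step relies on.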
Further, the image preprocessing step comprises the following steps: noise generated in signal acquisition is eliminated by mean value filtering or median value filtering, and gray scale conversion is carried out on the image by linear gray scale conversion, nonlinear gray scale conversion or piecewise linear gray scale conversion.
Preferably, in the step S3, an MCDM (Multi-Criteria Decision Making) Multi-index decision system is adopted, multiple evaluation indexes of each candidate inspection point are comprehensively considered according to evaluation conditions to obtain evaluation values, the evaluation values of the multiple candidate inspection points are compared, the candidate inspection point with the largest evaluation value is selected as an optimal inspection point, and the optimal inspection point is selected from the multiple candidate inspection points.
Frontier theory originates from the greedy strategy proposed by Thrun, in which the inspection robot selects, with a certain probability q, the unexplored area closest to itself as the next target point, and with probability 1-q selects some other area. Frontier theory was proposed systematically by Yamauchi: a frontier is the edge between the explored, detected area and the unexplored area, so the frontier region can usually provide a large information gain. Using this theory, environment information can be acquired quickly and effectively, and the robot's exploration behavior becomes active and purposeful, effectively exploring the unknown environment and converting it into a known one.
The detected area is the area that the sensors of the inspection robot have scanned; the undetected area is the area not yet detected by the robot's sensors, which may lie outside the sensor range or behind an obstacle.
Although frontier theory can obtain a large information gain, it ignores other performance indicators: the driving distance is longer, the path consumption comparatively larger, and an excessive pursuit of maximum information gain cannot satisfy the requirement of traversing the environment. Therefore, if a frontier point is selected directly as a candidate inspection point, a large information gain can be obtained, but environment information is easily lost, the requirement of traversal of the working environment cannot be met after exploration, and the map produced later is incomplete; moreover, the frontier point generally lies at the edge of the detected area, so the distance cost of driving there is large. On the whole, the frontier point is not the most ideal candidate inspection point and frontier theory is not the most ideal theory; it merely provides an active-detection idea that drives the inspection robot toward the unknown environment.
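To make the notion of a frontier concrete, a small occupancy-grid sketch (the free/unknown/occupied encoding is an assumption for illustration): frontier cells are the free cells that border an unknown cell.

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, -1, 1

def frontier_cells(grid):
    """Return (row, col) of free cells with at least one 4-connected
    unknown neighbour: the edge between explored and unexplored space."""
    rows, cols = grid.shape
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neigh = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= i < rows and 0 <= j < cols and grid[i, j] == UNKNOWN
                   for i, j in neigh):
                out.append((r, c))
    return out

grid = np.array([[0,  0, -1],
                 [0,  1, -1],
                 [0,  0,  0]])
print(frontier_cells(grid))
```

These frontier cells are the points the pure frontier strategy would aim at, before the multi-criteria evaluation described below refines the choice.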
Because there may be one or more candidate inspection points in the explored area, candidate points at different positions have different attributes; the main attribute parameters are the distance from the candidate point to the robot's current position, the maximum information gain obtainable at that point, and the rotation angle the robot needs to reach it.
Different evaluation criteria may be employed when evaluating a candidate inspection point. The simplest is usually path consumption alone: this criterion selects the candidate inspection point with the smallest path cost as the optimal one. Other evaluation criteria combine path consumption with further criteria such as information gain; the optimal candidate point selected by such criteria takes into account both the path cost of reaching the candidate point and its information gain.
Path consumption: the distance between the current position and the target candidate inspection point p, i.e., the distance travelled by the inspection robot in one exploration step.
Information gain: the new environment information acquired at the candidate inspection point p. It can be expressed in two ways: by the newly acquired environment area, or by the length of the free boundary on which the candidate inspection point p lies.
Rotation angle: the angle through which the robot must rotate from its current heading in order to face the selected candidate inspection point.
Assume that the evaluation value corresponding to the i-th evaluation condition of candidate inspection point p is u_i(p), with u_i(p) between 0 and 1; it serves as the quality measure of the i-th evaluation condition for candidate inspection point p. In general, the larger the evaluation value of an evaluation condition at a candidate inspection point, the more favourable that condition is for the point; conversely, the smaller the evaluation value, the less favourable. For the information-gain condition, a larger gain value means that more unknown environmental information can be acquired at the candidate inspection point. Such an evaluation value is typically calculated by the following formula.
u_c(p) = c(p) / max_{q∈L} c(q)        (1)
In formula (1), u_c(p) denotes the information-gain evaluation value of candidate inspection point p, L is the set of all candidate inspection points, and c(p) is the information-gain value of the candidate inspection point. The formula contracts the information gain of the candidate inspection points to between 0 and 1, which facilitates the subsequent calculation.
However, for some evaluation conditions a larger raw value means a less ideal condition, e.g., distance consumption. If such values were still computed with formula (1), a larger evaluation value would represent a less ideal point, which conflicts with the convention above and hinders the later comprehensive calculation.
This applies to the rotation-angle and path-consumption conditions, which are therefore generally calculated with formula (2).
u_t(p) = 1 − t(p) / max_{q∈L} t(q)        (2)
In formula (2), u_t(p) denotes the path-loss or rotation-angle evaluation value of candidate inspection point p, L is the set of all candidate inspection points, and t(p) is the path-loss value or rotation angle of the candidate inspection point. The path consumption or rotation angle of the candidate inspection points is likewise contracted to between 0 and 1, with smaller raw costs yielding larger evaluation values.
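A minimal sketch of the two evaluation-value computations, assuming the max-normalization forms implied by the 0-to-1 contraction described for formulas (1) and (2); the candidate data below are hypothetical, not taken from the patent:

```python
def gain_eval(p, candidates, c):
    """Formula (1): contract information gain to [0, 1]; larger is better."""
    return c(p) / max(c(q) for q in candidates)

def cost_eval(p, candidates, t):
    """Formula (2): contract path loss or rotation angle to [0, 1].
    The raw cost is inverted so that a larger evaluation value is better."""
    return 1 - t(p) / max(t(q) for q in candidates)

# Hypothetical candidate points with raw information gains and path costs.
gains = {"p1": 18.0, "p2": 10.0, "p3": 12.0}   # newly visible area (m^2)
costs = {"p1": 8.0, "p2": 2.0, "p3": 5.0}      # travel distance (m)

u_c = {p: gain_eval(p, gains, gains.get) for p in gains}
u_t = {p: cost_eval(p, costs, costs.get) for p in costs}
```

Under these assumed values, p1 has the best gain evaluation (1.0) but the worst cost evaluation (0.0), mirroring the tension between the two criteria discussed below.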
To obtain the optimal candidate inspection point, the inspection robot must consider the evaluation values of all candidate inspection points, calculate and compare them, and select the candidate inspection point with the largest evaluation value as the optimal one. The evaluation function must take into account all candidate inspection points, and all evaluation conditions of each candidate inspection point; only then is the selected candidate inspection point optimal.
Let N be a set of n candidate-inspection-point criteria, with j denoting the j-th criterion in N (e.g., path loss). Let L be a set of l candidate inspection points, with p denoting the p-th candidate inspection point in L. Then u_j(p) denotes the evaluation value of the p-th candidate inspection point under the j-th criterion.
Three evaluation conditions are applied: path loss, information gain and rotation angle. Combining the above, N = {path loss, information gain, rotation angle}. The simplest approach is to choose a weighted-sum function as the evaluation function. If the inspection robot is to favour information gain, a larger value is chosen as the coefficient of information gain, as shown in Table 1.
Table 1 Weighting coefficient table of the evaluation indices
Judgment condition       Path loss    Information gain    Rotation angle
Weighting coefficient    0.3          0.4                 0.3
Assume three candidate inspection points, i.e., L = {p1, p2, p3}; their evaluation values are shown in Table 2.
Table 2 Weighted evaluation value table
Candidate inspection point    Path loss    Information gain    Rotation angle    Weighted sum
p1                            0.2          0.9                 0.7               0.63
p2                            0.6          0.5                 0.1               0.41
p3                            0.9          0.6                 0.3               0.60
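The weighted sums in Table 2 can be reproduced with a short sketch (criterion names are shorthand for the columns of Tables 1 and 2):

```python
weights = {"path": 0.3, "gain": 0.4, "rotation": 0.3}   # Table 1 coefficients
# Evaluation values u_j(p) from Table 2, already contracted to [0, 1].
points = {
    "p1": {"path": 0.2, "gain": 0.9, "rotation": 0.7},
    "p2": {"path": 0.6, "gain": 0.5, "rotation": 0.1},
    "p3": {"path": 0.9, "gain": 0.6, "rotation": 0.3},
}

def weighted_sum(u):
    """Weighted-sum evaluation function over the three criteria."""
    return sum(weights[j] * u[j] for j in weights)

scores = {p: weighted_sum(u) for p, u in points.items()}
best = max(scores, key=scores.get)   # p1, with weighted sum 0.63
```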
The calculation shows that the evaluation value of candidate inspection point p1 is the largest, so p1 is selected as the optimal candidate inspection point. However, although p1 has a large information gain, the path loss required to reach p1 is also large. Whether a point becomes the optimal candidate inspection point is tied to the chosen weighting coefficients: because the optimal point is expected to have a large information gain, p1 becomes the optimal candidate. The problem is that, although information gain is obtained, path is sacrificed and the inspection robot travels further. Likewise, the coefficients may favour candidate inspection points with small path loss, which then yield little information gain; this is a common shortcoming of the weighted-average method. The evaluation conditions constrain one another, so a weighted sum cannot give simultaneous consideration to all of them.
MCDM (multi-criteria decision making) provides a solution to this problem; the method and related concepts are described next. First, an integration function μ: P(N) → [0,1] is defined on the power set P(N) of the criteria set N, satisfying the following conditions.
(1) μ(∅) = 0, μ(N) = 1
(2) if A ⊆ B ⊆ N, then μ(A) ≤ μ(B)
For A belonging to P(N), μ(A) denotes the weight of the criteria subset A. In this way, weights are assigned not only to single criteria but jointly to sets of criteria. The evaluation function u(p) corresponding to candidate inspection point p is expressed as:
u(p) = Σ_{j=1..n} (u_j(p) − u_{j−1}(p)) · μ(A_j)
The evaluation values corresponding to the n evaluation conditions of candidate inspection point p are arranged in ascending order:
u_1(p) ≤ … ≤ u_n(p) ≤ 1, with u_0(p) = 0 and A_j = {i ∈ N | u_j(p) ≤ u_i(p) ≤ u_n(p)}. Different integration functions μ lead to different evaluation values. Let the integration-function values of criteria c1 and c2 be μ(c1) and μ(c2), respectively; then:
(3) If μ({c1, c2}) < μ(c1) + μ(c2), the two criteria are redundant;
(4) If μ({c1, c2}) > μ(c1) + μ(c2), the two criteria are synergistic;
The same holds for more than two criteria. The weighted-average method is the special case μ({c1, c2}) = μ(c1) + μ(c2), in which the integration function reduces to a weighted average.
The above example will be further described.
Table 3 Integration coefficient table of single indices
Judgment condition         Path loss    Information gain    Rotation angle
Integration coefficient    0.3          0.4                 0.3
Assume redundancy exists between path loss and rotation angle, and synergy exists between path loss and information gain, and between information gain and rotation angle; the specific integration-function values are shown in Table 4.
Table 4 Integration coefficient table of index pairs
Judgment condition         Path loss and rotation angle    Path loss and information gain    Information gain and rotation angle
Integration coefficient    0.5                             0.8                               0.8
The following values are obtained from the defining formula of the integration function.
Table 5 Integration function values
A series of calculations finally yields p1 as the optimal exploration point. Reviewing the judgment conditions of p1: it has the largest information gain, and although its path loss is large, under comprehensive consideration p1 remains the optimal exploration point among the three candidate inspection points.
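The integration-function evaluation described above can be sketched as follows. The scores below are recomputed from Tables 2-4 under the ascending-order formula given earlier; since the Table 5 values were not reproduced in this text, the numbers are an assumption of this sketch, not a quotation from the patent:

```python
# Integration-function (fuzzy-measure) values over subsets of criteria,
# taken from Tables 3 and 4; mu of the full set N is 1 by definition.
mu = {
    frozenset({"path"}): 0.3,
    frozenset({"gain"}): 0.4,
    frozenset({"rotation"}): 0.3,
    frozenset({"path", "rotation"}): 0.5,   # redundant pair
    frozenset({"path", "gain"}): 0.8,       # synergistic pair
    frozenset({"gain", "rotation"}): 0.8,   # synergistic pair
    frozenset({"path", "gain", "rotation"}): 1.0,
}

def evaluate(u):
    """u(p) = sum_j (u_j(p) - u_{j-1}(p)) * mu(A_j), u_j in ascending order."""
    order = sorted(u, key=u.get)    # criteria sorted by ascending value
    score, prev = 0.0, 0.0
    remaining = set(u)              # A_j: criteria whose value is >= u_j(p)
    for crit in order:
        score += (u[crit] - prev) * mu[frozenset(remaining)]
        prev = u[crit]
        remaining.discard(crit)
    return score

points = {                          # evaluation values from Table 2
    "p1": {"path": 0.2, "gain": 0.9, "rotation": 0.7},
    "p2": {"path": 0.6, "gain": 0.5, "rotation": 0.1},
    "p3": {"path": 0.9, "gain": 0.6, "rotation": 0.3},
}
scores = {p: evaluate(u) for p, u in points.items()}
best = max(scores, key=scores.get)  # p1 under these assumptions
```

With these inputs the sketch yields approximately 0.68 for p1, 0.45 for p2 and 0.63 for p3, so p1 is again selected, consistent with the conclusion above.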
Experiments were performed in a MATLAB simulation environment. The working environment is a 10 m × 10 m unknown environment in which several irregular obstacles are randomly placed. Because the working environment has not yet been explored, the obstacles in the figure are shown as yellow areas in the unknown state. The inspection robot used in the experiment carries a laser range finder, a three-dimensional camera and a gyroscope; the detection distance of the laser sensor is 2 m, the scanning angle is 180°, and the angular resolution is 0.5°. The minimum step length of the inspection robot is assumed to be 0.2 m, and the laser range finder takes a sensor reading every 5° to simplify the computation. The specific exploration process is shown in figures 3-5.
In the simulation diagrams, the line connecting the small circles represents the inspection robot's walking route, the fan-shaped area is the scanning area of the sensor component, the black points at the edge of the fan-shaped area represent sensor scanning points, and the remaining graphics represent objects to be detected. The simulation experiment is one exploration pass of the inspection robot through the working environment; the three simulation diagrams represent three steps of this exploration.
To solve the above technical problems, the invention provides the following technical solution: an unmanned aerial vehicle power transmission and transformation equipment inspection system in an unstructured environment, shown in figure 6, comprising a perception module A, an autonomous module B and a moving module C, wherein:
Perception module A: acquires environmental information of the power transmission and transformation equipment through the sensor component;
Autonomous module B: processes the acquired environment-information image, extracts image features from the processed image, marks the power transmission and transformation equipment to be detected as candidate inspection points, selects the optimal inspection point from all candidate inspection points according to the judgment conditions, and sends a control instruction for moving to the optimal inspection point to the moving module;
Moving module C: receives the control instruction sent by the autonomous module and adjusts the corresponding power to move to the optimal inspection point.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing description of the embodiments emphasizes the differences between them; for identical or similar parts, the embodiments may be referred to one another, which is not repeated herein for brevity.
The above is only a preferred embodiment of the present invention, and the present invention is not limited in any way, and any simple modification, equivalent variation and modification made to the above embodiment according to the technical substance of the present invention still falls within the scope of the technical solution of the present invention.

Claims (7)

1. An unmanned aerial vehicle power transmission and transformation equipment inspection method under an unstructured environment comprises the following steps: step one, an unmanned aerial vehicle collects environment information of power transmission and transformation equipment through a carried sensor;
Step two, processing the obtained environment information image, extracting image features in the processed image, marking power transmission and transformation equipment to be detected as candidate inspection points, jumping to step three if the candidate inspection points exist in the exploration environment, and jumping to step four if the candidate inspection points do not exist;
step three, selecting an optimal inspection point from all candidate inspection points, driving to that inspection point, marking the point as inspected, and returning to step one;
In the third step, the method for determining the optimal candidate inspection point comprises the following steps: an MCDM multi-criteria decision system is adopted; a plurality of evaluation indices of each candidate inspection point are comprehensively considered according to the evaluation conditions to obtain evaluation values, the evaluation values of the plurality of candidate inspection points are compared, and the candidate inspection point with the largest evaluation value is selected as the optimal inspection point;
The evaluation conditions are: path loss, information gain, and rotation angle, wherein,
Path consumption: the distance between the inspection robot's current point and the target inspection point in one inspection pass;
Information gain: the new environment information acquired at the inspection point, comprising the newly acquired environment area and the length of the free boundary at the inspection point;
Rotation angle: the angle through which the robot must rotate from its current heading to face the selected inspection point;
the information gain is calculated by the following formula:
u_c(p) = c(p) / max_{q∈L} c(q)        (1)
In formula (1), u_c(p) represents the information-gain evaluation value of candidate inspection point p, L is the set of all candidate inspection points, and c(p) is the information-gain value of the candidate inspection point; the formula contracts the information gain of the candidate inspection points to between 0 and 1, facilitating the subsequent calculation;
The distance consumption or the rotation angle is calculated by adopting a formula (2);
u_t(p) = 1 − t(p) / max_{q∈L} t(q)        (2)
In formula (2), u_t(p) represents the path-loss or rotation-angle evaluation value of candidate inspection point p, L is the set of all candidate inspection points, and t(p) is the path-loss value or rotation angle of the candidate inspection point; the path loss or rotation angle of the candidate inspection points is likewise contracted to between 0 and 1;
First, an integration function μ: P(N) → [0,1] is defined on the power set P(N) of the criteria set N, satisfying the following conditions:
(1) μ (empty) =0, μ (N) =1
(2) if A ⊆ B ⊆ N, then μ(A) ≤ μ(B)
Assuming that A belongs to P(N), μ(A) represents the weight of the criteria subset A; in this way, weights are assigned not only to a single criterion but jointly to sets of criteria; the evaluation function u(p) corresponding to candidate inspection point p is expressed as:
u(p) = Σ_{j=1..n} (u_j(p) − u_{j−1}(p)) · μ(A_j)
the evaluation values corresponding to the n evaluation conditions of the candidate inspection points p are arranged in the order from small to large, as follows:
u_1(p) ≤ … ≤ u_n(p) ≤ 1, with u_0(p) = 0 and A_j = {i ∈ N | u_j(p) ≤ u_i(p) ≤ u_n(p)}; different integration coefficients μ will lead to different evaluation values; assuming that the integration function values of c1 and c2 are μ(c1) and μ(c2), respectively;
(1) If μ({c1, c2}) < μ(c1) + μ(c2), then the two criteria are redundant;
(2) If μ({c1, c2}) > μ(c1) + μ(c2), then the two criteria are synergistic;
The same principle is true for more than two criteria, where the weighted average method is a special case: mu ({ c 1,c2})=μ(c1)+μ(c2) is the effect of the integration function on the weighted average at this time;
step four, the inspection robot inquires whether a candidate inspection point exists again, if so, the step three is returned, and if not, the step five is skipped;
Step five, checking whether all candidate inspection points are marked as inspected; if an uninspected candidate inspection point exists, the inspection robot drives to that point and then jumps to step one; if none exists, the inspection is ended.
2. The inspection method for the power transmission and transformation equipment of the unmanned aerial vehicle in the unstructured environment according to claim 1, which is characterized in that: in the first step, the sensors are a three-dimensional camera SR-3000, a laser range finder and a gyroscope.
3. The inspection method for the power transmission and transformation equipment of the unmanned aerial vehicle in the unstructured environment according to claim 1, which is characterized in that: in the second step, the image processing and feature extraction steps are as follows: 1) Preprocessing an image to obtain image gray scale and three-dimensional information;
2) Marking the ground, the sky and the distant view in the gray level image according to a threshold value, and deleting the sky, the distant view and the ground area;
3) Binarizing images of the marked sky and the ground, wherein the gray value of the region of interest is represented by a non-zero value, and the gray value of the region of non-interest is represented by zero;
4) Clustering the extracted region of interest in the gray level image by using the three-dimensional information of the pixels, and separating the region of interest from the non-region of interest;
5) Comparing two current adjacent points of the region of interest in the image, if the distance between the two points is within a certain range, considering that the two data points belong to the same class, if the distance exceeds a threshold value, considering that the two points belong to different classes, and starting the next round of data comparison by taking the current data point as a starting point of a newly added class, so as to complete the secondary clustering analysis;
6) And determining and extracting edges through edge detection, and outlining a target object.
4. The inspection method for the power transmission and transformation equipment of the unmanned aerial vehicle in the unstructured environment according to claim 3, which is characterized in that: the image preprocessing comprises the following steps: noise generated in signal acquisition is eliminated by mean value filtering or median value filtering, and gray scale conversion is carried out on the image by linear gray scale conversion, nonlinear gray scale conversion or piecewise linear gray scale conversion.
5. The inspection method for the power transmission and transformation equipment of the unmanned aerial vehicle in the unstructured environment according to claim 3, which is characterized in that: the image marking step comprises the following steps: 1) Marking out an area belonging to sky on the image according to the height information of the pixel points of the image, and setting the gray value corresponding to the pixel point with the height value of the pixel larger than the set threshold value to zero;
2) Marking the ground area on the image according to the height information of the image pixel points, and setting to zero the gray value corresponding to pixel points whose height value is smaller than the set threshold;
3) And marking a distant view area on the image according to the distance information of the image pixel points, and setting the gray value corresponding to the pixel points with the distance value larger than the threshold value to zero.
6. The inspection method for the unmanned aerial vehicle power transmission and transformation equipment in the unstructured environment according to claim 5, which is characterized in that: in marking the ground, partial discontinuities exist in the non-region of interest and the binary image is processed with erosion and dilation operations to remove the discontinuities.
7. An unmanned aerial vehicle power transmission and transformation equipment inspection system in a non-structural environment based on the unmanned aerial vehicle power transmission and transformation equipment inspection method in the non-structural environment according to any one of claims 1-6, which is characterized in that: including perception module, autonomous module and removal module, wherein:
and a perception module: the method comprises the steps that environmental information of power transmission and transformation equipment is acquired for a sensor component;
And (3) an autonomous module: processing the obtained environment information image, extracting image features in the processed image, marking power transmission and transformation equipment to be detected as candidate inspection points, selecting an optimal inspection point from all candidate inspection points according to judging conditions, and sending a control instruction for moving the optimal inspection point to a moving module;
and a moving module: receiving the control instruction sent by the autonomous module, and adjusting the corresponding power to move to the optimal inspection point.
CN202111274981.5A 2021-10-29 2021-10-29 Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment Active CN114217641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111274981.5A CN114217641B (en) 2021-10-29 2021-10-29 Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment


Publications (2)

Publication Number Publication Date
CN114217641A CN114217641A (en) 2022-03-22
CN114217641B true CN114217641B (en) 2024-05-07

Family

ID=80696376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111274981.5A Active CN114217641B (en) 2021-10-29 2021-10-29 Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment

Country Status (1)

Country Link
CN (1) CN114217641B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115685975A (en) * 2022-09-14 2023-02-03 国家电网公司西南分部 No-signal off-line operation method and system for power transmission line inspection robot

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460033A (en) * 2018-12-14 2019-03-12 杭州申昊科技股份有限公司 A kind of intelligent inspection robot
CN110580717A (en) * 2019-08-15 2019-12-17 成都优艾维智能科技有限责任公司 Unmanned aerial vehicle autonomous inspection route generation method for electric power tower
CN110610556A (en) * 2018-06-15 2019-12-24 北京京东尚科信息技术有限公司 Robot inspection management method and system, electronic device and storage medium
CN110879601A (en) * 2019-12-06 2020-03-13 电子科技大学 Unmanned aerial vehicle inspection method for unknown fan structure
CN110908401A (en) * 2019-12-06 2020-03-24 电子科技大学 Unmanned aerial vehicle autonomous inspection method for unknown tower structure


Also Published As

Publication number Publication date
CN114217641A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN110531760B (en) Boundary exploration autonomous mapping method based on curve fitting and target point neighborhood planning
Levinson et al. Traffic light mapping, localization, and state detection for autonomous vehicles
CN108303096B (en) Vision-assisted laser positioning system and method
CN109001757B (en) Parking space intelligent detection method based on 2D laser radar
CN115049700A (en) Target detection method and device
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN113298035A (en) Unmanned aerial vehicle electric power tower detection and autonomous cruise method based on image recognition
CN110705385B (en) Method, device, equipment and medium for detecting angle of obstacle
CN111709988B (en) Method and device for determining characteristic information of object, electronic equipment and storage medium
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
Wang et al. Multi-cue road boundary detection using stereo vision
CN114089330A (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
CN116503803A (en) Obstacle detection method, obstacle detection device, electronic device and storage medium
CN114217641B (en) Unmanned aerial vehicle power transmission and transformation equipment inspection method and system in non-structural environment
CN115151954A (en) Method and device for detecting a drivable region
Nitsch et al. 3d ground point classification for automotive scenarios
CN112987720A (en) Multi-scale map construction method and construction device for mobile robot
Feng et al. Automated extraction of building instances from dual-channel airborne LiDAR point clouds
CN115453570A (en) Multi-feature fusion mining area dust filtering method
CN112651986B (en) Environment recognition method, recognition device, recognition system, electronic equipment and medium
CN112766100A (en) 3D target detection method based on key points
CN113836975A (en) Binocular vision unmanned aerial vehicle obstacle avoidance method based on YOLOV3
Zhang Photogrammetric point clouds: quality assessment, filtering, and change detection
CN114782626B (en) Transformer substation scene map building and positioning optimization method based on laser and vision fusion
Drulea et al. An omnidirectional stereo system for logistic plants. Part 2: stereo reconstruction and obstacle detection using digital elevation maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant