CN112050814A - Unmanned aerial vehicle visual navigation system and method for indoor transformer substation

Unmanned aerial vehicle visual navigation system and method for indoor transformer substation

Info

Publication number
CN112050814A
Authority
CN
China
Prior art keywords: unmanned aerial vehicle, identification code, inspection, visual
Prior art date
Legal status: Pending
Application number
CN202010885682.4A
Other languages
Chinese (zh)
Inventor
刘天立
刘越
李豹
赵金龙
刘俍
吕俊涛
王安山
刘丙伟
王人杰
孙晓斌
黄振宁
张飞
高绍楠
Current Assignee
State Grid Intelligent Technology Co Ltd
Original Assignee
State Grid Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by State Grid Intelligent Technology Co Ltd
Priority to CN202010885682.4A
Publication of CN112050814A

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention discloses an unmanned aerial vehicle (UAV) visual navigation system and method for an indoor transformer substation. The system comprises a forward binocular perception module and a downward binocular perception module, both connected to the flight control end of the UAV. The flight control end includes a path planning unit, configured to acquire binocular images from the UAV's forward visual perception module in real time during flight, construct a depth map, and plan paths based on the depth map, and a visual odometer unit, configured to acquire images from the downward visual perception module in real time during flight and calculate the UAV's current mileage. Because obstacle avoidance and path planning require only a local map, the amount of computation is greatly reduced, and a plurality of identification codes make it convenient to correct, in a timely manner, the error accumulated in the UAV positioning process.

Description

Unmanned aerial vehicle visual navigation system and method for indoor transformer substation
Technical Field
The invention belongs to the technical field of transformer substation inspection, and particularly relates to an unmanned aerial vehicle visual navigation system and method for an indoor transformer substation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
SLAM (simultaneous localization and mapping) is a technology for indoor navigation that was first applied in the field of robotics. Its aim is to construct a map of the surrounding environment in real time from sensor data, without any prior knowledge, and to infer the robot's own position from that map. SLAM that uses only a camera as the external perception sensor is called visual SLAM; cameras offer rich visual information at low hardware cost. A classic visual SLAM system generally comprises four main parts, as shown in FIG. 1: 1) visual odometer: pose estimation from visual input only; 2) back-end optimization: receiving the camera poses measured by the visual odometer at different moments together with loop-closure detection information, and optimizing them to obtain a globally consistent trajectory and map; 3) loop-closure detection: while the robot builds the map, using sensor information such as vision to detect whether the trajectory closes a loop, i.e., whether the robot has returned to a previously visited place; 4) mapping: building a map matched to the task requirements from the estimated trajectory. In theory, visual SLAM enables reliable perception and mapping of indoor environments, and since localization and mapping are performed together, obstacle avoidance can be realized at the same time.
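For orientation only, a minimal Python sketch of this four-part structure follows; the class and method names are illustrative inventions of this description, and the bodies are deliberately left as stubs, since the pipeline above is prior art rather than the invention's own method.

```python
# Structural sketch only: names are illustrative, bodies are stubs.
class ClassicVisualSLAM:
    def process(self, frame):
        pose = self.visual_odometry(frame)                         # 1) pose from visual input only
        loop = self.loop_closure_detection(frame)                  # 3) have we revisited a known place?
        trajectory, landmarks = self.backend_optimize(pose, loop)  # 2) globally consistent result
        self.build_map(trajectory, landmarks)                      # 4) map matched to the task

    def visual_odometry(self, frame): ...
    def loop_closure_detection(self, frame): ...
    def backend_optimize(self, pose, loop): ...
    def build_map(self, trajectory, landmarks): ...
```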
However, as the inventors recognize, applying traditional visual SLAM to UAV substation inspection raises the following problems. 1) The loop-closure detection step is critical but unreliable: its success rate is not 100%, and once loop closure fails, the accumulated positioning error cannot be corrected, the UAV may suffer positioning failure, and the inspection task cannot be completed. 2) The indoor substation environment is complex, and during flight the UAV may need to detour above, to the left of, or to the right of equipment, which places high demands on environmental perception; consequently, if path planning relies on a fully constructed map, the computational load of mapping is enormous and real-time processing is difficult to achieve on the onboard processor.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an unmanned aerial vehicle (UAV) visual navigation system and method for an indoor substation. Taking a UAV as the platform, it realizes automatic positioning, obstacle avoidance, and navigation in an indoor environment through visual perception, solving the problem that positioning devices such as GNSS (global navigation satellite system) receivers and magnetic compasses may fail in an indoor substation inspection scenario.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
An unmanned aerial vehicle visual navigation system for an indoor transformer substation includes: a forward binocular perception module and a downward binocular perception module connected with the flight control end of the unmanned aerial vehicle; the flight control end includes:
a path planning unit configured to acquire binocular images from the unmanned aerial vehicle's forward visual perception module in real time during flight, construct a depth map, and plan paths based on the depth map;
and a visual odometer unit configured to acquire images from the downward visual perception module in real time during flight and calculate the current mileage of the unmanned aerial vehicle.
Furthermore, the system also comprises a plurality of identification codes which are pre-arranged at each inspection position of the transformer substation; the flight control terminal further comprises:
and an identification code recognition unit configured to perform identification code recognition on an image acquired by the forward or downward visual perception module after inspection at the current inspection position is finished and, once the identification code corresponding to the inspection position is recognized, to adjust position and attitude according to the identification code's position.
Further, the path planning based on the depth map comprises:
acquiring a two-dimensional obstacle map corresponding to a certain height section according to the depth map, wherein the map comprises obstacle marking information;
determining the current unmanned aerial vehicle position and target position information in a two-dimensional obstacle map;
and planning a path in the two-dimensional obstacle map based on the current unmanned aerial vehicle position and the target position information.
Further, if an effective path cannot be obtained from the two-dimensional obstacle map of the current height section, a new height section is selected above or below the current one and a new two-dimensional obstacle map is generated for path planning.
Further, calculating the current mileage of the unmanned aerial vehicle comprises:
a reference frame is designated; during flight, the total displacement offset of the unmanned aerial vehicle relative to the reference frame is sent to the flight control end at a fixed frequency, and the flight control end corrects the unmanned aerial vehicle's position according to the offset; the reference frame is automatically updated when the displacement deviation from the current reference frame exceeds a certain threshold.
Further, to obtain the total displacement offset of the unmanned aerial vehicle relative to the reference frame, the actual rotation and displacement parameters of the unmanned aerial vehicle between the two frames are calculated from the currently acquired binocular image, the positions of the feature points in the reference frame, and the three-dimensional spatial positions of those feature points.
Further, the flight control end prestores the position information of the identification codes in the three-dimensional space of the transformer substation; adjusting the position and attitude according to the identification code's position comprises:
adjusting the position and attitude according to the identification code's position in the indoor transformer substation and the unmanned aerial vehicle's position relative to the identification code, and correcting the mileage data.
One or more embodiments provide a navigation method based on the visual navigation system of the unmanned aerial vehicle, comprising the following steps:
acquiring an inspection task, wherein the inspection task comprises a plurality of positions to be inspected; starting the inspection and controlling the unmanned aerial vehicle to take off;
when each inspection position is reached, the unmanned aerial vehicle is controlled to hover and inspect at the inspection position, and after inspection is finished, the unmanned aerial vehicle flies to the next inspection point;
during flight, the downward visual perception module acquires images of the area below in real time and the current mileage of the unmanned aerial vehicle is calculated; simultaneously, paths are planned from the images acquired by the forward visual perception module.
Furthermore, after inspection at each inspection position is finished, an image of the position is acquired through the forward or downward visual perception module and identification code recognition is performed; after the identification code corresponding to the inspection position is recognized, position and attitude are adjusted according to the identification code's position.
Further, adjusting the position and attitude according to the identification code's position comprises:
adjusting the position and attitude according to the identification code's position in the indoor transformer substation and the unmanned aerial vehicle's position relative to the identification code, and correcting the mileage data.
The above one or more technical solutions have the following beneficial effects:
the utility model provides an indoor transformer substation unmanned aerial vehicle vision navigation, adopt forward, down to two mesh perception systems, with vision odometer technique, identification code recognition technology, barrier detection and path planning technique combine together, based on vision odometer navigation and carry out the position correction with a plurality of identification codes, utilize the depth map after two mesh matchings to keep away barrier path planning, unmanned aerial vehicle automatic positioning under the indoor environment of transformer substation, keep away the barrier, the navigation has been realized, the problem of locating device such as unmanned aerial vehicle GNSS, magnetic compass probably became invalid under the indoor transformer substation patrols and examines the scene is solved, self location navigation has been realized, keep away the barrier function.
Compared with SLAM, obstacle avoidance and path planning require only a local map rather than a global one, which greatly reduces the amount of computation; in addition, placing identification codes at the key positions to be inspected makes it convenient to correct, in a timely manner, the error accumulated in the UAV positioning process; and by combining visual odometry with indoor path planning, the UAV always maintains the correct position and attitude, so the indoor inspection task can be completed quickly and accurately.
The UAV visual navigation method for the indoor substation sets identification codes at the key positions to be inspected and recognizes them through the downward or forward binocular cameras, so the UAV can check and correct its own position and attitude after inspecting each device. During flight, the downward binocular visual odometer calculates the UAV's current mileage in real time while the forward binocular cameras perform obstacle perception and path planning, so the UAV can accurately adjust its position and attitude in a timely manner throughout the inspection.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
FIG. 1 is a flow chart of a method of a prior art visual SLAM technique;
FIG. 2 is a schematic diagram of an ARTag identification code;
FIG. 3 is a flow chart of a binocular vision based obstacle sensing technique;
FIG. 4 is a schematic diagram of the calculation of translation and rotation parameters of a camera based on binocular vision odometry technology;
FIG. 5 is a framework diagram of the unmanned aerial vehicle visual navigation system for an indoor substation in one or more embodiments of the present invention;
FIG. 6 is a schematic diagram of the layout of identification codes in one or more embodiments of the invention;
FIG. 7 is a two-dimensional obstacle map in accordance with one or more embodiments of the invention;
FIG. 8 is a schematic diagram of a path based on a two-dimensional obstacle map in accordance with one or more embodiments of the present invention;
FIG. 9 is a schematic diagram of depth map height interval selection in one or more embodiments of the invention.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
The core technology of the invention is vision-based indoor navigation, which divides into three main techniques: ARTag identification code recognition, visual obstacle perception, and visual odometry. These techniques are explained first.
The ARTag identification code recognition technology recognizes a special identification code (ARTag, as shown in FIG. 2) in a picture and calculates its spatial coordinates, from which the spatial position of the unmanned aerial vehicle relative to the identification code is obtained.
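As a concrete illustration, the sketch below uses OpenCV's ArUco module (a close relative of ARTag; OpenCV does not ship an ARTag detector, and the 4.7-style ArucoDetector API is assumed) to detect a marker and recover the camera's pose relative to it. The intrinsic matrix K, distortion vector, and marker size are placeholder assumptions, not values from the patent.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
DIST = np.zeros(5)                                           # assumed zero distortion
MARKER_SIZE = 0.20                                           # assumed marker edge length (m)

detector = cv2.aruco.ArucoDetector(cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def locate_relative_to_markers(image):
    """Return {marker_id: (rvec, tvec)}: the pose of each marker relative to the camera."""
    corners, ids, _ = detector.detectMarkers(image)
    poses = {}
    if ids is None:
        return poses
    # Marker corners in the marker's own frame (z = 0 plane), in detectMarkers order:
    # top-left, top-right, bottom-right, bottom-left.
    obj = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float64) * MARKER_SIZE / 2
    for c, marker_id in zip(corners, ids.flatten()):
        ok, rvec, tvec = cv2.solvePnP(obj, c.reshape(4, 2).astype(np.float64), K, DIST)
        if ok:
            poses[int(marker_id)] = (rvec, tvec)
    return poses
```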
The visual obstacle perception technology is based on binocular stereo vision: images are collected by left and right cameras, and the three-dimensional spatial information of an object is obtained through the triangulation principle (the image planes of the two cameras form a triangle with the measured object). The left and right images are collected simultaneously, then rectified, matched, and de-noised to generate a depth map, from which the bearing of any threatening obstacle is calculated. The algorithm flow is shown in FIG. 3.
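A minimal sketch of that pipeline follows, assuming already-rectified grayscale image pairs; the focal length and baseline are placeholder calibration values, and OpenCV's semi-global block matcher stands in for whatever matcher the implementation actually uses.

```python
import cv2
import numpy as np

FOCAL_PX = 800.0    # focal length in pixels (assumed)
BASELINE_M = 0.10   # stereo baseline in metres (assumed)

matcher = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=5,
    P1=8 * 5 * 5, P2=32 * 5 * 5,
    uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)

def depth_from_stereo(left_gray, right_gray):
    """Disparity via semi-global matching, then depth = f * B / disparity."""
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp = cv2.medianBlur(disp, 5)          # simple de-noising step
    depth = np.full_like(disp, np.inf)
    valid = disp > 0.5                      # reject invalid / near-zero disparities
    depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]
    return depth                            # metres; inf where disparity is invalid
```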
The visual odometry technology adopts a binocular visual odometer: the camera's translation and rotation parameters are calculated by matching features across images taken at different moments. The principle is shown in FIG. 4, where T_t and T_{t-1} denote the two binocular image pairs at the current and previous moments. The three-dimensional coordinates of the feature points are obtained by matching feature points across the pictures, the inter-frame rotation and translation parameters are computed using the camera intrinsics, and the travel mileage of the camera (i.e., of the unmanned aerial vehicle) is obtained by accumulation.
Example one
This embodiment discloses an unmanned aerial vehicle visual navigation system for an indoor transformer substation which, as shown in FIG. 5, comprises a forward binocular perception module and a downward binocular perception module carried on the UAV, both connected with the UAV's flight control end, together with a plurality of identification codes arranged in advance at each inspection position of the substation.
The identification codes are arranged at the positions to be inspected in the substation to mark those positions; they need not be arranged along the inspection route, and each carries different identification information. In this embodiment, the identification codes are placed on the wall surfaces near the positions to be inspected; ARTag identification codes are used, with the identification information represented by an ID, and as shown in FIG. 6, ARTags with IDs 0, 1, and 2 are set. Those skilled in the art will understand that identification codes may equally be arranged on the ground, or even on the equipment, near the positions to be inspected, without limitation.
Before inspection, a three-dimensional coordinate system must be established for the indoor substation in advance. In this embodiment, an xyz coordinate system is established with a chosen position O of the indoor substation as the origin. With ARTags of IDs 0, 1, and 2, the spatial coordinates of each marker are fixed; for example, in centimeters, the ARTag with ID 0 is at (0, 150, 50), the ARTag with ID 1 at (150, 0, 50), and the ARTag with ID 2 at (300, 150, 50).
The UAV's flight control end prestores the coordinate information of these identification codes. During subsequent inspection, after a given ARTag is recognized, an image processing algorithm yields the UAV's xyz position t relative to that ARTag; combining this with the ARTag's coordinate w0 in the substation, superposing w0 and t gives the UAV's position in the overall coordinate system.
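In code, this correction reduces to a vector sum, sketched below with the example coordinates given above; the offset t is assumed to be already expressed in the substation frame (in practice it must first be rotated into that frame using the marker's known orientation).

```python
import numpy as np

# Pre-stored marker coordinates w0 (cm) from the example above.
MARKER_COORDS_CM = {0: (0, 150, 50), 1: (150, 0, 50), 2: (300, 150, 50)}

def uav_position_in_substation(marker_id, t_cm):
    """UAV position w = w0 + t in the substation xyz frame (cm)."""
    return np.asarray(MARKER_COORDS_CM[marker_id], float) + np.asarray(t_cm, float)

# Example: the UAV measures itself 120 cm along y from marker 1:
# uav_position_in_substation(1, (0, 120, 0)) -> array([150., 120., 50.])
```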
The flight control end specifically comprises:
the route planning unit is configured to acquire an inspection task comprising a plurality of positions to be inspected; while the task is executed and the UAV flies toward the next position to be inspected, it acquires binocular images from the forward visual perception module in real time, constructs a depth map, and performs obstacle recognition and path planning on the basis of the depth map;
the identification code recognition unit is configured to acquire an image from the forward or downward visual perception module after inspection at the current position is finished and to recognize the identification code corresponding to that position; once the code is recognized, it calculates the UAV's position relative to the code and controls the UAV to accurately adjust its position and attitude based on the code's position in the indoor substation and the UAV's position relative to it;
the visual odometer unit is configured so that, while the inspection task is executed and the UAV flies toward the next inspection position, images of the area below are acquired in real time through the downward visual perception module and the UAV's current mileage is calculated, ensuring that the UAV accurately follows the planned path.
Whenever the forward binocular perception module performs obstacle perception, a depth map is obtained in real time and path planning is carried out. In an indoor scene near the ground with no GNSS signal, the UAV automatically starts the downward binocular perception module and enables the visual odometer unit and the identification code recognition unit; the identification code recognition unit is likewise enabled for the forward binocular camera module.
Specifically, the path planning unit obtains a depth map from the left and right forward binocular images through rectification, matching, and filtering, realizing the UAV's perception of obstacles, and performs path planning on the depth map to obtain a flight path. In most cases, obstacle avoidance only requires a detour over the obstacle; when complicated maneuvers such as detouring left, right, above, or below are required (as in indoor inspection scenes), the depth map must be analyzed and an optimal flight path planned. To achieve fast and effective path planning, the mathematical model is simplified by combining the UAV's attitude information: the three-dimensional depth map is reduced to a two-dimensional obstacle map. The steps are as follows:
(1) acquire the two-dimensional obstacle map corresponding to a certain height interval from the depth map, and map the UAV's current position (the current flight start point) and the target position (the next inspection position) into that map; in FIG. 7, the squares marked with crosses represent the start point (the UAV position) and the target point, and the dark filled area is an obstacle;
(2) in the two-dimensional obstacle map, fast and optimal path planning is realized with the A-star algorithm, as shown in FIG. 8 (the path is drawn as diagonal squares); if no effective path can be obtained from the two-dimensional obstacle map at the current height (depth map area 1 in FIG. 9), i.e., there is no route at this height by which the UAV can reach the target, a region of interest is reselected above or below with a certain step length (depth map areas 2 and 3 in FIG. 9) and a new two-dimensional obstacle map is generated, until an effective path is obtained;
(3) calculate the UAV's speed ratios in the x, y, and z directions from the path, and guide the flight according to these values.
This path planning method is particularly suitable for indoor scenes requiring frequent, complex maneuvers such as detouring left, right, above, or below an obstacle; the UAV's flight height and its speed ratios in the three directions can all be calculated from the depth map, enabling flexible indoor flight.
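The sketch below illustrates steps (1) and (2) under simplifying assumptions: the depth map is taken to be already back-projected to 3D points in the body frame, the grid size, cell size, and height band are placeholder values, and a plain 4-connected A-star stands in for the planner.

```python
import heapq
import itertools
import numpy as np

def occupancy_slice(points_xyz, z_lo, z_hi, cell=0.2, size=50):
    """Step (1): project 3D points whose height lies in [z_lo, z_hi] onto a 2D grid."""
    grid = np.zeros((size, size), dtype=bool)
    band = points_xyz[(points_xyz[:, 2] >= z_lo) & (points_xyz[:, 2] <= z_hi)]
    ij = np.floor(band[:, :2] / cell).astype(int)
    ij = ij[(ij >= 0).all(axis=1) & (ij < size).all(axis=1)]
    grid[ij[:, 0], ij[:, 1]] = True           # mark obstacle cells
    return grid

def astar(grid, start, goal):
    """Step (2): 4-connected A-star with a Manhattan heuristic; None means no path at this height."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                    # tiebreaker keeps heap entries comparable
    frontier = [(h(start), 0, next(tie), start, None)]
    parent, best = {}, {start: 0}
    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue                           # already expanded at a better cost
        parent[cur] = prev
        if cur == goal:                        # reconstruct the path back to the start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None   # triggers reselecting the height interval, as described in step (2)
```

When astar returns None, the caller re-slices the depth map at a neighboring height band, matching the fallback described in step (2).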
The visual odometer unit localizes the UAV itself, realizing stable, safe, drift-free hovering, and provides the UAV's flight mileage while it moves. When mileage timing starts, the onboard processor automatically acquires a reference frame and computes the displacement offset relative to it during subsequent positioning; when the offset exceeds a certain threshold, a new reference frame is automatically acquired. Throughout the task, the total displacement offset of the UAV relative to the reference frame is sent to the flight control end at a fixed frequency, and flight control corrects the UAV's position according to the offset. After an identification code is recognized, the pose is accurately adjusted and the odometer is corrected, so the UAV's mileage can be calculated in real time throughout the inspection.
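A minimal sketch of this reference-frame bookkeeping, assuming a hypothetical estimate_offset(ref, cur) routine (for example, the PnP computation described below) and a placeholder re-anchoring threshold:

```python
import numpy as np

REF_SWITCH_THRESHOLD_M = 0.5   # assumed threshold for acquiring a new reference frame

class ReferenceFrameOdometer:
    def __init__(self, first_frame):
        self.ref_frame = first_frame
        self.locked_in = np.zeros(3)   # displacement accumulated at past reference switches
        self.offset = np.zeros(3)      # displacement relative to the current reference frame

    def update(self, frame, estimate_offset):
        """Called at a fixed rate; the return value is sent to the flight control end."""
        self.offset = estimate_offset(self.ref_frame, frame)
        if np.linalg.norm(self.offset) > REF_SWITCH_THRESHOLD_M:
            self.locked_in += self.offset          # re-anchor on the current frame
            self.ref_frame, self.offset = frame, np.zeros(3)
        return self.locked_in + self.offset        # total displacement offset
```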
For the binocular visual odometer, an optical-flow matching algorithm between consecutive frames is adopted for real-time performance. If the matched feature points have 3D coordinates, the camera's motion can be estimated; the 3D point coordinates are obtained from the binocular-matched depth map. For the binocular visual odometer between two frames, the calculation steps are as follows:
1) extract the feature points of the previous frame and compute their 2D coordinate set P_{t-1}; compute the feature points' 3D spatial coordinate set C_{t-1} through binocular matching;
2) extract the feature points P_t of the current frame and match them with the previous frame's feature points via LK optical flow;
3) for the feature-point set successfully matched across the two frames, compute the current frame's 2D coordinates P'_t and take the corresponding previous-frame 3D spatial coordinate set C'_{t-1};
4) from P'_t and C'_{t-1}, combined with the camera intrinsic matrix K, compute the rotation and translation parameters between the two frames.
The most central calculation in the above steps is step 4), which is solved by PnP: combining the calibration parameters with the 2D image points and the matched depth information (3D), the actual rotation and displacement between the two frames are computed. For a binocular camera, PnP (Perspective-n-Point) is a highly efficient method for solving 3D-to-2D point-pair motion.
It describes how the camera's pose is estimated when n 3D spatial points and their projected positions are known. PnP is computed in real time during operation, realizing the binocular visual odometer.
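The following sketch walks through steps 1) to 4) with OpenCV, using Shi-Tomasi corners, pyramidal LK optical flow, and solvePnPRansac; the intrinsic matrix K and the depth-map lookup are placeholder assumptions rather than the calibrated values an actual system would use.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

def two_frame_vo(prev_gray, cur_gray, prev_depth):
    """Rotation matrix and translation of the camera between two frames, or (None, None)."""
    # 1) feature points of the previous frame (their depth comes from the stereo depth map)
    p_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                     qualityLevel=0.01, minDistance=10)
    if p_prev is None:
        return None, None
    # 2) LK optical-flow matching into the current frame
    p_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p_prev, None)
    ok = status.ravel() == 1
    pts3d, pts2d = [], []
    # 3) keep matched pairs: 3D point in the previous frame, 2D point in the current frame
    for (u0, v0), (u1, v1) in zip(p_prev[ok].reshape(-1, 2), p_cur[ok].reshape(-1, 2)):
        z = prev_depth[int(v0), int(u0)]
        if np.isfinite(z) and z > 0:
            # back-project the previous-frame pixel to 3D using K and its depth
            pts3d.append([(u0 - K[0, 2]) * z / K[0, 0], (v0 - K[1, 2]) * z / K[1, 1], z])
            pts2d.append([u1, v1])
    if len(pts3d) < 6:
        return None, None   # too few valid correspondences this frame
    # 4) PnP (with RANSAC for robustness) gives the inter-frame rotation and translation
    found, rvec, tvec, _ = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float64), np.asarray(pts2d, np.float64), K, None)
    return (cv2.Rodrigues(rvec)[0], tvec) if found else (None, None)
```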
The identification code recognition unit is the key to correcting the UAV's visual odometer drift (analogous to loop detection in SLAM). Identification codes are distributed over the inspection area in advance; when the camera recognizes a code, the UAV's spatial position relative to it is calculated, and since the placement of each code in the inspection area is known, the UAV's true position in the area follows directly. Identification code recognition is thus an important supplement to the visual odometer: after a code is recognized, the UAV can correct the position error the odometer has accumulated. The layout scheme is as follows:
1) Attach ARTags with different IDs at a number of inspection positions in the substation, the position of each ID's ARTag in the scene being known.
2) The unmanned aerial vehicle acquires images through its camera, processes them, and obtains its position relative to the ARTag through the recognition algorithm.
3) From the ARTag's position in the indoor substation space, the unmanned aerial vehicle obtains its own position in the substation.
This embodiment designs an indoor inspection scheme that navigates by visual odometry, corrects position with a plurality of identification codes, and plans obstacle-avoidance paths from the binocular-matched depth map. Compared with SLAM, obstacle avoidance and path planning require only a local map rather than a global one, which greatly reduces the amount of computation, while the identification codes conveniently enable timely correction of the accumulated error in the UAV positioning process.
Example two
Building on the navigation system of the first embodiment, this embodiment provides a navigation method applied to an indoor substation. Identification codes are laid out in advance at each position to be inspected; in this embodiment they are ARTag identification codes with different IDs, and the three-dimensional spatial information of the indoor substation together with the coordinates of each code is stored. The method comprises the following steps:
Step 1: acquire an inspection task comprising a plurality of positions to be inspected; start the inspection, control the unmanned aerial vehicle to take off, and hover and inspect at the first inspection position;
Step 2: after inspection at the first position is finished, acquire an image of the position through the forward visual perception module and recognize the identification code corresponding to the position;
Step 3: after the identification code is recognized, calculate the unmanned aerial vehicle's position relative to it; accurately adjust position and attitude based on the code's position in the indoor substation and the unmanned aerial vehicle's position relative to the code; then proceed to the next inspection point;
Step 4: while flying to the next inspection point, acquire images of the area below in real time through the downward visual perception module and calculate the unmanned aerial vehicle's current mileage; meanwhile, perform obstacle perception and path planning with the images acquired by the forward visual perception module;
Step 5: upon reaching the next inspection position, hover and inspect; after inspection is finished, acquire an image of the position through the forward or downward visual perception module and recognize the corresponding identification code; repeat Steps 3-5 until all inspection positions have been inspected.
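Purely as a schematic summary, the loop below strings Steps 1 to 5 together; every attribute of the drone object is an assumed, hypothetical interface standing in for the units of the first embodiment, not an actual API.

```python
def run_inspection(drone, positions_to_inspect):
    drone.take_off()                                      # Step 1
    for target in positions_to_inspect:
        while not drone.reached(target):                  # Step 4: fly to the next point
            depth = drone.forward_stereo.depth_map()      #   obstacle perception
            path = drone.plan_path(depth, target)         #   (re-)plan on the 2D slice
            mileage = drone.downward_odometer.update()    #   visual odometry from below
            drone.follow(path, mileage)
        drone.hover_and_inspect(target)                   # Steps 1/5: hover and inspect
        marker = drone.detect_identification_code()       # Steps 2/5: find the position's code
        if marker is not None:
            drone.correct_pose_and_odometer(marker)       # Step 3: correct accumulated drift
    drone.return_home()
```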
With this inspection system and method, no global map of the indoor substation needs to be constructed, which greatly reduces the data processing load on the flight control end; UAV navigation is realized from the path information acquired by the forward visual perception module and the odometry information acquired by the downward visual perception module. Furthermore, because a unique identification code is placed at each inspection position in advance, the UAV can correct its position and attitude after every inspection, so the accumulated error in the positioning process is corrected in time before the flight to the next inspection position; this guarantees the UAV's flight accuracy during inspection and the smooth execution of the inspection task.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented using general purpose computer means, or alternatively, they can be implemented using program code that is executable by computing means, such that they are stored in memory means for execution by the computing means, or they are separately fabricated into individual integrated circuit modules, or multiple modules or steps of them are fabricated into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (10)

1. An unmanned aerial vehicle visual navigation system for an indoor transformer substation, characterized by comprising: a forward binocular perception module and a downward binocular perception module connected with the flight control end of the unmanned aerial vehicle; the flight control end comprising:
a path planning unit configured to acquire binocular images from the unmanned aerial vehicle's forward visual perception module in real time during flight, construct a depth map, and plan paths based on the depth map;
and a visual odometer unit configured to acquire images from the downward visual perception module in real time during flight and calculate the current mileage of the unmanned aerial vehicle.
2. The unmanned aerial vehicle visual navigation system of the indoor substation of claim 1, wherein the system further comprises a plurality of identification codes pre-arranged at each inspection position of the substation; the flight control terminal further comprises:
and an identification code recognition unit configured to perform identification code recognition on an image acquired by the forward or downward visual perception module after inspection at the current inspection position is finished and, once the identification code corresponding to the inspection position is recognized, to adjust position and attitude according to the identification code's position.
3. The indoor substation unmanned aerial vehicle visual navigation system of claim 1, wherein path planning based on the depth map comprises:
acquiring a two-dimensional obstacle map corresponding to a certain height section according to the depth map, wherein the map comprises obstacle marking information;
determining the current unmanned aerial vehicle position and target position information in a two-dimensional obstacle map;
and planning a path in the two-dimensional obstacle map based on the current unmanned aerial vehicle position and the target position information.
4. The unmanned aerial vehicle visual navigation system for the indoor substation of claim 3, wherein if an effective path cannot be obtained from the two-dimensional obstacle map of the current height interval, a new height interval is selected above or below the current one and a new two-dimensional obstacle map is generated for path planning.
5. The indoor substation unmanned aerial vehicle visual navigation system of claim 1, wherein calculating the current mileage of the unmanned aerial vehicle comprises:
a reference frame is designated; during flight, the total displacement offset of the unmanned aerial vehicle relative to the reference frame is sent to the flight control end at a fixed frequency, and the flight control end corrects the unmanned aerial vehicle's position according to the offset; wherein the reference frame is automatically updated when the displacement deviation from the current reference frame exceeds a certain threshold.
6. The unmanned aerial vehicle visual navigation system for the indoor substation of claim 5, wherein the total displacement offset of the unmanned aerial vehicle relative to the reference frame is obtained by calculating the actual rotation and displacement parameters of the unmanned aerial vehicle between the two frames, according to the currently acquired binocular image, the positions of the feature points in the reference frame, and the three-dimensional spatial positions of those feature points.
7. The unmanned aerial vehicle visual navigation system for the indoor substation of claim 2, wherein the flight control end prestores position information of the plurality of identification codes in the three-dimensional space of the substation; adjusting the position and attitude according to the identification code's position comprises:
adjusting the position and attitude according to the identification code's position in the indoor transformer substation and the unmanned aerial vehicle's position relative to the identification code, and correcting the mileage data.
8. A navigation method based on the visual navigation system of the unmanned aerial vehicle according to any one of claims 1 to 7, characterized by comprising the following steps:
acquiring an inspection task, wherein the inspection task comprises a plurality of positions to be inspected; starting the inspection and controlling the unmanned aerial vehicle to take off;
when each inspection position is reached, the unmanned aerial vehicle is controlled to hover and inspect at the inspection position, and after inspection is finished, the unmanned aerial vehicle flies to the next inspection point;
during flight, the downward visual perception module acquires images of the area below in real time and the current mileage of the unmanned aerial vehicle is calculated; simultaneously, paths are planned from the images acquired by the forward visual perception module.
9. The navigation method according to claim 8, characterized in that after the inspection of each inspection position is finished, the image of the inspection position is acquired by a forward visual perception module or a downward visual perception module, and identification code recognition is performed; after the identification code corresponding to the inspection position is identified, position and attitude adjustment is carried out according to the position of the identification code.
10. The navigation method of claim 8, wherein adjusting the position and attitude according to the identification code's position comprises:
adjusting the position and attitude according to the identification code's position in the indoor transformer substation and the unmanned aerial vehicle's position relative to the identification code, and correcting the mileage data.
CN202010885682.4A (filed 2020-08-28, priority 2020-08-28): Unmanned aerial vehicle visual navigation system and method for indoor transformer substation, published as CN112050814A (Pending)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010885682.4A | 2020-08-28 | 2020-08-28 | Unmanned aerial vehicle visual navigation system and method for indoor transformer substation

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010885682.4A | 2020-08-28 | 2020-08-28 | Unmanned aerial vehicle visual navigation system and method for indoor transformer substation

Publications (1)

Publication Number | Publication Date
CN112050814A | 2020-12-08

Family

ID=73607164

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010885682.4A | Unmanned aerial vehicle visual navigation system and method for indoor transformer substation (Pending) | 2020-08-28 | 2020-08-28

Country Status (1)

Country Link
CN (1) CN112050814A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN105492985A (en) * 2014-09-05 2016-04-13 深圳市大疆创新科技有限公司 Multi-sensor environment map building
CN106687878A (en) * 2014-10-31 2017-05-17 深圳市大疆创新科技有限公司 Systems and methods for surveillance with visual marker
CN108594851A (en) * 2015-10-22 2018-09-28 飞智控(天津)科技有限公司 A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane
CN106933243A (en) * 2015-12-30 2017-07-07 湖南基石信息技术有限公司 A kind of unmanned plane Real Time Obstacle Avoiding system and method based on binocular vision
CN109923589A (en) * 2016-11-14 2019-06-21 深圳市大疆创新科技有限公司 Building and update hypsographic map
CN108445905A (en) * 2018-03-30 2018-08-24 合肥赛为智能有限公司 A kind of UAV Intelligent avoidance regulator control system
CN109029417A (en) * 2018-05-21 2018-12-18 南京航空航天大学 Unmanned plane SLAM method based on mixing visual odometry and multiple dimensioned map
CN111356903A (en) * 2019-01-25 2020-06-30 深圳市大疆创新科技有限公司 Visual positioning method, device and system
CN111487642A (en) * 2020-03-10 2020-08-04 国电南瑞科技股份有限公司 Transformer substation inspection robot positioning navigation system and method based on three-dimensional laser and binocular vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
F. Valenti et al., "Enabling Computer Vision-Based Autonomous Navigation for Unmanned Aerial Vehicles in Cluttered GPS-Denied Environments", 2018 21st International Conference on Intelligent Transportation Systems (ITSC). *
Fabrizio Lamberti et al., "Mixed Marker-Based/Marker-Less Visual Odometry System for Mobile Robots", International Journal of Advanced Robotic Systems. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562348A (en) * 2022-11-03 2023-01-03 国网福建省电力有限公司漳州供电公司 Unmanned aerial vehicle image technology method based on transformer substation

Similar Documents

Publication Publication Date Title
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
US20210012520A1 (en) Distance measuring method and device
CN106774431B (en) Method and device for planning air route of surveying and mapping unmanned aerial vehicle
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
JP7263630B2 (en) Performing 3D reconstruction with unmanned aerial vehicles
Martínez et al. On-board and ground visual pose estimation techniques for UAV control
KR20200041355A (en) Simultaneous positioning and mapping navigation method, device and system combining markers
Merino et al. Vision-based multi-UAV position estimation
CN109753076A (en) A kind of unmanned plane vision tracing implementing method
CN111958592A (en) Image semantic analysis system and method for transformer substation inspection robot
CN107390704B (en) IMU attitude compensation-based multi-rotor unmanned aerial vehicle optical flow hovering method
CN104298248A (en) Accurate visual positioning and orienting method for rotor wing unmanned aerial vehicle
WO2019152149A1 (en) Actively complementing exposure settings for autonomous navigation
CN105182992A (en) Unmanned aerial vehicle control method and device
CN103941746A (en) System and method for processing unmanned aerial vehicle polling image
CN112184812B (en) Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
CN109164825A (en) A kind of independent navigation barrier-avoiding method and device for multi-rotor unmanned aerial vehicle
Acuna et al. Dynamic Markers: UAV landing proof of concept
US20230206491A1 (en) Information processing device, mobile device, information processing system, method, and program
CN111413708A (en) Unmanned aerial vehicle autonomous landing site selection method based on laser radar
CN117152249A (en) Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency
Springer et al. Autonomous Drone Landing with Fiducial Markers and a Gimbal-Mounted Camera for Active Tracking
CN107741233A (en) A kind of construction method of the outdoor map of three-dimensional

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2020-12-08)