CN108873931A - UAV vision-based collision avoidance method combining subjective and objective control - Google Patents



Publication number
CN108873931A
Authority
CN
China
Prior art keywords
image
UAV
power line
distance
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810570734.1A
Other languages
Chinese (zh)
Inventor
王长杰
肖鹏飞
唐林波
王亚男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING POLYTECHNIC LEIKE ELECTRONIC INFORMATION TECHNOLOGY Co Ltd
Original Assignee
BEIJING POLYTECHNIC LEIKE ELECTRONIC INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING POLYTECHNIC LEIKE ELECTRONIC INFORMATION TECHNOLOGY Co Ltd
Priority to CN201810570734.1A
Publication of CN108873931A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/10 — Simultaneous control of position or course in three dimensions
    • G05D 1/101 — Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a UAV vision-based collision avoidance method combining subjective and objective control. Using visual ranging, images are acquired directly by high-definition cameras, and algorithms such as target detection and feature point matching are run in real time, so that range data are obtained in real time; the UAV can thus be guided in real time to keep a safe distance from targets and avoid obstacles, completing its mission safely. At the same time, virtual reality glasses serve as a terminal for monitoring in real time the image data acquired by the airborne binocular system: through the stereoscopic display the operator perceives the surrounding scene realistically and, when necessary, intervenes in the flight path in time through the UAV platform control assembly, further improving safety.

Description

UAV vision-based collision avoidance method combining subjective and objective control
Technical field
The invention belongs to the field of aircraft technology, and in particular relates to a UAV vision-based collision avoidance method combining subjective and objective control.
Background art
The anti-collision system is an important safety guarantee when a UAV executes a flight mission, and its reliability largely reflects the intelligence level of the UAV. For a UAV anti-collision system, the UAV needs to monitor the surrounding environment continuously in real time during flight, detect obstacles in time, and re-plan its flight path according to the depth information of the obstacles, so as to complete the mission autonomously. The core problem of autonomous UAV collision avoidance is therefore the perception of obstacle distance.
Current UAV anti-collision systems fall into three classes: 1) Active anti-collision systems, which project beams at obstacles and obtain their range information from the returned echoes, typically using infrared, radar or ultrasound. Infrared ranging has a limited measurement range; ultrasonic systems are easily disturbed by the external environment; radar ranging systems are heavy, power-hungry and costly. 2) Passive anti-collision systems, which acquire the scene directly and recover its depth information from the captured images through complex computation. Their most typical representative is the vision anti-collision system, which imitates the human visual system: image data from visual sensors are processed to obtain the depth of the external scene, and this depth information is used to realise collision avoidance; owing to the limitations of image processing algorithms, however, a vision system alone cannot meet the requirements in some situations. 3) Composite anti-collision systems, which combine the characteristics of active and passive avoidance and require multiple sensors, generally including a laser rangefinder, a binocular stereo vision system, an IMU, etc.; the multiple sensors raise cost and increase volume and mass, making it difficult to meet the requirements of small UAVs.
Summary of the invention
In view of this, the object of the present invention is to provide a UAV vision-based collision avoidance method combining subjective and objective control, which can improve the reliability of a UAV anti-collision system.
A UAV vision-based collision avoidance method combining subjective and objective control comprises: carrying a binocular camera on the UAV and processing the acquired binocular images to obtain a three-dimensional image from binocular imaging;
obtaining the distance between the UAV and obstacles from the binocular images, and transmitting the distance information to the flight control system of the UAV, which performs obstacle avoidance automatically;
superimposing the obstacle distance information on the three-dimensional image, video-compressing the three-dimensional image, transmitting the compressed video stream over a wireless data link, receiving and decompressing the compressed video stream on the ground in real time, and feeding the decompressed image data to VR glasses for stereoscopic display; by observing the real-time three-dimensional image and the superimposed distance values with the naked eye, subjective collision avoidance by manual control of the UAV is achieved in an emergency;
Wherein, after the binocular images are obtained, when the obstacle is a power line, ranging is performed as follows:
the power line is detected in each of the two binocular images; for one of the two images, a point P1 is selected on the power line, and its corresponding point P2 in the other image is determined by the epipolar line constraint; the distance between the power line and the UAV is then determined from the disparity between P1 and P2;
When the obstacle is not a power line, the distance between the obstacle and the UAV is determined by feature point ranging based on the SIFT algorithm, specifically:
A. Gaussian filtering and down-sampling are applied to the binocular images to build a Gaussian pyramid, from which a difference-of-Gaussians (DoG) pyramid is obtained;
B. For each image of the DoG pyramid, the horizontal gradient Dxx and vertical gradient Dyy of each pixel are computed; pixels whose product Dxx*Dyy is less than or equal to a set threshold are rejected, completing the coarse screening of extreme points;
C. Feature points of the binocular images are extracted from the coarsely screened DoG pyramid;
D. In one of the two binocular images, the different targets are segmented with the watershed algorithm;
E. The feature points obtained in step C are assigned to the targets they belong to;
F. For each target, part of its feature points are selected and matched, giving matched feature point pairs; an obstacle distance is computed from the disparity of each matched pair;
G. For each target, the multiple obstacle distances are clustered to obtain the distance of that target; the minimum distance over all targets is taken as the final obstacle distance, sent to the flight control system and displayed on the VR glasses.
Preferably, when several power lines are detected, the obstacle distance of each power line is computed separately, and the minimum distance is sent to the flight control system and the VR glasses.
Preferably, when several power lines are detected, power lines whose spacing is below a set value are merged into one power line, completing power line clustering; the obstacle distance of each power line is then computed separately, and the minimum distance is sent to the flight control system and the VR glasses.
The present invention has the following advantages:
1) The UAV vision-based collision avoidance method combining subjective and objective control provided by the invention uses visual ranging: images are acquired directly by high-definition cameras, and algorithms such as target detection and feature point matching are run in real time, so that range data are obtained in real time; the UAV can thus be guided in real time to keep a safe distance from targets and avoid obstacles, completing its mission safely. At the same time, virtual reality glasses serve as a terminal for monitoring in real time the image data acquired by the airborne binocular system: through the stereoscopic display the operator perceives the surrounding scene realistically and, when necessary, intervenes in the flight path in time through the UAV platform control assembly, further improving safety.
2) Exploiting the specific flight scenario of the UAV, a power line ranging method is used: power lines are detected in the binocular images, corresponding points are found via the epipolar line constraint, and the disparity of the two points yields the range. By innovatively integrating the epipolar constraint into obstacle ranging for the special flight environment of the UAV, the invention not only opens up a new ranging approach for UAVs, but also avoids computations such as matching of corresponding feature points, reducing the computation load and improving real-time performance. When no power line is present, feature point ranging based on SIFT matching is used, with a new coarse screening step added before extremum detection; this eliminates a large number of non-extreme points, reduces computation and further improves real-time performance.
3) The invention segments the image into different targets with the watershed algorithm; of all the feature points of one target, only a selected subset is used for distance computation, avoiding repeated computation over many feature points of the same target. All distances computed for one target are clustered to obtain the distance of that target, and finally the smallest of all target distances is found and displayed, improving real-time performance.
Brief description of the drawings
Fig. 1 is a schematic diagram of the anti-collision system on which the method of the invention is based.
Fig. 2 is the flow chart of three-dimensional imaging in the present invention.
Fig. 3 is the flow chart of power line ranging in the present invention.
Fig. 4 is the flow chart of feature point ranging in the present invention.
Detailed description of the embodiments
The present invention will now be described in detail with reference to the accompanying drawings and examples.
The technical solution of the UAV vision-based collision avoidance method combining subjective and objective control of the invention is as follows: an embedded platform carried on the UAV acquires binocular images in real time and performs three-dimensional imaging and binocular visual ranging on them. On the one hand, the minimum distance obtained is transmitted directly to the flight control system of the UAV, which avoids obstacles automatically; on the other hand, the obstacle distance value is superimposed on the three-dimensional image, the image is H.265-compressed, and the compressed video stream is transmitted over a wireless data link. The ground station receives and decompresses the compressed video stream in real time and feeds the decompressed image data to VR glasses for stereoscopic display; by observing the real-time three-dimensional image and the superimposed distance values with the naked eye, subjective collision avoidance by manual control can be achieved in an emergency.
Referring to Fig. 1, the subjective-objective anti-collision system used by the method of the invention comprises a binocular camera module, an embedded processing platform module, a wireless transmitting module, a wireless receiving module, a decompression/display module and virtual reality glasses. The embedded processing platform mainly performs image processing such as image acquisition, image compression and target ranging; the wireless transmitting module transmits the compressed video image over the network; the wireless receiving module receives the compressed video stream in real time; the decompression module decompresses and displays the received video stream; and the virtual reality glasses display the decompressed video stereoscopically.
Referring to Fig. 2, the video processing on the embedded platform proceeds as follows.
Step 1: three-dimensional imaging.
(1a) Configure the two CMOS sensors (IMX222) and obtain the Bayer video data they output.
(1b) Convert the Bayer video data output by the CMOS sensors to RGB format.
(1c) Apply stereo (3D) rectification to the RGB components of the two video channels.
(1d) Convert the rectified RGB to the YCbCr 4:2:2 picture format.
(1e) Downscale each 1920x1080 channel to 960x1080 and splice the two downscaled images side by side into a single 1920x1080 video.
(1f) Send the spliced 1920x1080 video to the ARM processing platform over the PCIe interface.
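The downscale-and-splice of step (1e) can be sketched in NumPy (column decimation stands in for the proper resampling filter that the camera pipeline would use):

```python
import numpy as np

def splice_stereo(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Halve the width of each 1920x1080 frame and splice the two halves
    side by side into one 1920x1080 frame, as in step (1e).
    Column decimation is used here for brevity; a real pipeline would
    apply a proper low-pass resampling filter before decimating."""
    assert left.shape == right.shape and left.shape[1] % 2 == 0
    return np.hstack([left[:, ::2], right[:, ::2]])
```

The result keeps the full frame height, so the downstream encoder sees one ordinary 1920x1080 stream carrying both eyes.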
Step 2: when the obstacle is a power line, power line ranging is used. The power line is detected in each of the two binocular images; for one image, a point P1 is selected on the power line and its corresponding point P2 in the other image is determined by the epipolar line constraint; the distance Z between the power line and the UAV is then determined from the disparity between P1 and P2. Referring to Fig. 3, the power line ranging module is realised as follows:
(2a) After the ARM platform receives the image data, the images are read in and loaded into GPU memory, and Ratio edge detection is run on the GPU: the binocular power line images are convolved with a 9x9 Ratio template, yielding a binary image containing the edge information.
(2b) A Hough transform is performed on the binary image, mapping the spatial-domain information into the parameter domain for voting. The screening stage after voting applies two assumed constraints:
(2b1) A power line appears as a long straight line running through the entire picture, because the binocular vision system used in the experiments has a small field of view.
(2b2) Within a single corridor, the high-voltage power lines run almost parallel except for the lightning-protection ground wire, so most power lines agree in angle.
Under these two constraints, the Hough-transform line detection rejects the interfering background lines in the experimental scene well, and the power lines are finally detected.
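As a sketch of the voting stage in (2b), a minimal Hough accumulator can be built as follows (pure NumPy; a real implementation would additionally extract the accumulator peaks and screen them with the two constraints above):

```python
import numpy as np

def hough_lines(edges: np.ndarray, n_theta: int = 180):
    """Minimal Hough voting for straight lines: every edge pixel votes for
    all (rho, theta) pairs it could lie on. rho is the perpendicular
    offset of the line, theta its normal angle in degrees 0..n_theta-1."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # shift rho to a valid index
    return acc, thetas, diag
```

A horizontal edge at row y produces a peak at (rho = y, theta = 90 deg) whose height equals the number of edge pixels on the line.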
(2c) Power lines whose spacing is below a set value are merged into one power line, completing the power line clustering.
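A one-dimensional sketch of this clustering over the detected line offsets (the spacing threshold `gap` is an assumed parameter; the patent only says "a set value"):

```python
def cluster_lines(rhos: list, gap: float) -> list:
    """Merge detected lines whose perpendicular offsets (e.g. Hough rho
    values, in pixels) lie within `gap` of each other, returning one
    representative offset (the cluster mean) per power line."""
    clusters: list[list[float]] = []
    for r in sorted(rhos):
        if clusters and r - clusters[-1][-1] < gap:
            clusters[-1].append(r)   # close to the previous line: same cluster
        else:
            clusters.append([r])     # far away: start a new power line
    return [sum(c) / len(c) for c in clusters]
```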
(2d) For one of the two binocular images, a point P1 is selected on the power line; its corresponding point P2 in the other image is determined by the epipolar line constraint; the distance Z between the power line and the UAV is then determined from the disparity between P1 and P2:
Z = f·b / ((u1 − u2)·dx)
where f is the focal length of the binocular camera; b is the distance between the optical centres of the two cameras (the baseline); dx is the physical width of one pixel; u1 and u2 are the coordinates of the corresponding points P1 and P2 along the u axis; and Z is the obstacle distance.
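The disparity-to-distance relation of (2d) in code, under assumed camera parameters (the focal length, baseline and pixel pitch below are illustrative defaults, not values from the patent):

```python
def stereo_distance(u1: float, u2: float,
                    f: float = 8e-3,        # focal length in metres (assumed)
                    b: float = 0.12,        # baseline in metres (assumed)
                    dx: float = 4e-6) -> float:  # pixel pitch in metres (assumed)
    """Z = f*b / ((u1 - u2)*dx): distance from the horizontal disparity
    of corresponding points with pixel coordinates u1 (left) and u2 (right)."""
    disparity_px = u1 - u2
    if disparity_px <= 0:
        raise ValueError("left-image coordinate must exceed right-image coordinate")
    return (f * b) / (disparity_px * dx)
```

With these example parameters, a 24-pixel disparity corresponds to a range of 10 m; range falls as disparity grows.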
Step 3: when the obstacle is not a power line, feature point ranging is used. Referring to Fig. 4, specifically:
(3a) For feature point ranging, an improved SIFT feature extraction algorithm based on CUDA is designed. The image received by the CPU is on the one hand kept in CPU memory and on the other hand stored in the texture memory of the GPU.
(3b) Gaussian filtering and down-sampling are performed on the binocular images on the GPU, yielding a Gaussian pyramid, from which the difference-of-Gaussians (DoG) pyramid is obtained.
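One octave of the Gaussian/DoG construction in (3b) can be sketched in NumPy (the scale values are illustrative; the CUDA version parallelises the same separable filtering per pixel):

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian filtering with an explicit 1-D kernel."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)

def dog_octave(img: np.ndarray, sigmas=(1.0, 1.6, 2.56)) -> list:
    """One octave of the difference-of-Gaussians pyramid: blur the image at
    successive scales and subtract adjacent levels (scale values are
    illustrative, not taken from the patent)."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    return [b1 - b0 for b0, b1 in zip(blurred, blurred[1:])]
```

On a constant image the DoG response is (numerically) zero away from the borders, which is why only structured regions survive the later extremum screening.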
The conventional approach would now screen the DoG pyramid for extreme points directly. In the present invention, a coarse screening is performed before extremum detection: for each image of the DoG pyramid, the horizontal gradient Dxx = |f(x+1) − f(x−1)| and vertical gradient Dyy = |f(y+1) − f(y−1)| of each pixel are computed.
Extreme points are points where the pixel value changes sharply, and the DoG operator produces a strong edge response; to retain the extreme points in the image while reducing computation, and to suppress the unstable edge points that would otherwise interfere, the difference-of-Gaussians images are screened as follows. At a true extreme point both Dxx and Dyy are large, so the product Dxx·Dyy takes a large value; at an edge point only one of Dxx, Dyy is large and the other is small, so the product is small; at a point that is neither an extremum nor an edge point both Dxx and Dyy are small, so the product is again small. Pixels whose product Dxx·Dyy is less than or equal to a set threshold T are therefore rejected, completing the coarse screening of extreme points; T is an empirical value.
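A minimal NumPy sketch of this product-of-gradients screen (the threshold value used in the test is hypothetical):

```python
import numpy as np

def coarse_screen(img: np.ndarray, T: float) -> np.ndarray:
    """Coarse extremum screening before DoG extremum detection: keep pixels
    whose product of absolute central-difference gradients exceeds T.
    Corners/blob boundaries (both gradients large) survive; flat regions
    and straight edges (at least one gradient small) are rejected."""
    Dxx = np.abs(np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1))
    Dyy = np.abs(np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0))
    keep = (Dxx * Dyy) > T
    # np.roll wraps around, so ignore the border ring
    keep[0, :] = keep[-1, :] = keep[:, 0] = keep[:, -1] = False
    return keep
```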
This coarse screening eliminates a large number of non-extreme points and thereby avoids computing on them, reducing the computation load and improving real-time performance.
On the coarsely screened DoG pyramid, extremum detection and accurate localisation are then carried out.
(3c) The principal and auxiliary orientations of the obtained extreme points are computed; the orientations of the resulting feature points are on the one hand kept on the CPU and on the other hand stored on the GPU.
(3d) Feature vectors are generated on the GPU from the principal and auxiliary orientations of the feature points.
Step 4: the image is segmented with the watershed algorithm.
(4a) One of the two binocular images, stored on the CPU, is converted to a grey-scale image.
(4b) The gradient magnitude image is obtained with the Sobel operator, which convolves the image with two 3x3 kernels, one per direction:
Dx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A,   Dy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
where A is the original image and Dx and Dy are the horizontal and vertical edge-detection images. From these per-pixel gradient approximations the gradient magnitude is computed as
G = sqrt(Dx^2 + Dy^2)
and the gradient direction is therefore
theta = arctan(Dy / Dx).
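A direct NumPy transcription of these formulas (border pixels are left at zero for simplicity):

```python
import numpy as np

def sobel(A: np.ndarray):
    """Sobel gradient magnitude G and direction theta of image A, as in
    step (4b): convolve with the two 3x3 kernels, then combine."""
    Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    Ky = Kx.T  # [[-1,-2,-1],[0,0,0],[1,2,1]]
    h, w = A.shape
    Dx = np.zeros((h, w))
    Dy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = A[i - 1:i + 2, j - 1:j + 2]
            Dx[i, j] = (Kx * patch).sum()
            Dy[i, j] = (Ky * patch).sum()
    G = np.hypot(Dx, Dy)            # sqrt(Dx^2 + Dy^2)
    theta = np.arctan2(Dy, Dx)      # gradient direction
    return G, theta
```

On a vertical step edge the horizontal kernel responds with magnitude 4 (kernel weights 1+2+1) and the vertical kernel cancels, giving direction 0.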
(4c) Foreground marker computation: foreground objects and the background are marked separately, and a morphological opening is applied to the image.
(4d) Dark spots and edge artefacts are removed from the markers.
(4e) The image is complemented, and the reconstructed image is complemented as well.
(4f) The foreground markers are superimposed on the original image.
(4g) Background markers are computed: the image is segmented by thresholding, and the watershed ridge lines are obtained by computing a distance transform and applying the watershed transform to it.
(4h) The segmentation function of the watershed transform is computed: the gradient magnitude image is modified so that it has local minima exactly at the foreground and background marker pixels, completing the target segmentation of the image.
(4i) The feature points obtained in step 3 are assigned to the individual targets according to their positions.
(4j) For each target, part of its feature points are selected and matched using the feature vectors computed in (3d), giving matched feature point pairs; an obstacle distance is computed from the disparity of each matched pair, so each matched pair yields one obstacle distance and multiple distances are obtained per target.
(4k) For each target, the multiple obstacle distances are clustered to obtain the distance of that target; the minimum over all target distances is taken as the final obstacle distance, sent to the flight control system and displayed on the VR glasses.
Segmenting the targets by watershed enhances the observation capability of the UAV, since the image then reflects the range of each specific target intuitively. Because a large number of feature points are detected in the scene, many of which belong to the same target, ranging on a selected subset of each target's feature points avoids running the ranging computation on all of them, which reduces computation and improves real-time performance.
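The final distance-fusion step (clustering each target's per-feature-pair distances, then taking the minimum over all targets) can be sketched as follows; the median is used here as a simple robust stand-in for the clustering rule, which the patent does not specify:

```python
import statistics

def fuse_target_distances(per_target: dict[str, list[float]]) -> tuple[dict[str, float], float]:
    """Fuse each target's per-feature-pair distances (metres) into one value
    per target, then return (per-target distances, global minimum). The
    minimum is what is sent to the flight control system and the VR glasses."""
    fused = {name: statistics.median(ds) for name, ds in per_target.items()}
    return fused, min(fused.values())
```

Using a robust centre means a single mismatched feature pair (e.g. the 100 m outlier below) does not corrupt a target's distance.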
Step 5: H.265 video compression and transmission.
At the same image quality, H.265 halves the transmission bandwidth relative to H.264. On the ARM, GStreamer is used to compress the processed video into H.265 at a bit rate of 4 Mbps, and the compressed video stream is transmitted over the network by the wireless transmitting module.
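A GStreamer pipeline of this kind might look as follows. This is a sketch, not the patent's actual pipeline: the test source, host address and port are placeholders, and `x265enc` takes its bitrate in kbit/s:

```shell
# H.265-encode a 1080p source at ~4 Mbps and stream it as RTP over UDP.
# Replace videotestsrc with the real capture source (e.g. v4l2src) and
# 192.0.2.10/5600 with the ground-station address and port.
gst-launch-1.0 videotestsrc \
    ! video/x-raw,width=1920,height=1080 \
    ! videoconvert \
    ! x265enc bitrate=4000 \
    ! h265parse \
    ! rtph265pay \
    ! udpsink host=192.0.2.10 port=5600
```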
Step 6: stereoscopic display.
At the ground station the compressed video stream is received by the wireless receiving module, decompressed and displayed by software on the ground, and passed to the VR virtual reality glasses over HDMI, so that the distance values can be observed intuitively and collisions prevented by manual control of the UAV. On the other hand, the minimum distance value is sent to the flight control system, which controls the UAV to avoid collisions automatically.
In conclusion, the above are merely preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (3)

1. A UAV vision-based collision avoidance method combining subjective and objective control, characterised by comprising: carrying a binocular camera on the UAV and processing the acquired binocular images to obtain a three-dimensional image from binocular imaging;
obtaining the distance between the UAV and obstacles from the binocular images, and transmitting the distance information to the flight control system of the UAV, which performs obstacle avoidance automatically;
superimposing the obstacle distance information on the three-dimensional image, video-compressing the three-dimensional image, transmitting the compressed video stream over a wireless data link, receiving and decompressing the compressed video stream on the ground in real time, and feeding the decompressed image data to VR glasses for stereoscopic display; by observing the real-time three-dimensional image and the superimposed distance values with the naked eye, subjective collision avoidance by manual control of the UAV is achieved in an emergency;
wherein, after the binocular images are obtained, when the obstacle is a power line, ranging is performed as follows:
the power line is detected in each of the two binocular images; for one of the two images, a point P1 is selected on the power line, and its corresponding point P2 in the other image is determined by the epipolar line constraint; the distance between the power line and the UAV is then determined from the disparity between P1 and P2;
when the obstacle is not a power line, the distance between the obstacle and the UAV is determined by feature point ranging based on the SIFT algorithm, specifically:
A. Gaussian filtering and down-sampling are applied to the binocular images to build a Gaussian pyramid, from which a difference-of-Gaussians pyramid is obtained;
B. for each image of the difference-of-Gaussians pyramid, the horizontal gradient Dxx and vertical gradient Dyy of each pixel are computed; pixels whose product Dxx*Dyy is less than or equal to a set threshold are rejected, completing the coarse screening of extreme points;
C. feature points of the binocular images are extracted from the coarsely screened difference-of-Gaussians pyramid;
D. in one of the two binocular images, the different targets are segmented with the watershed algorithm;
E. the feature points obtained in step C are assigned to the targets they belong to;
F. for each target, part of its feature points are selected and matched, giving matched feature point pairs; an obstacle distance is computed from the disparity of each matched pair;
G. for each target, the multiple obstacle distances are clustered to obtain the distance of that target; the minimum distance over all targets is taken as the final obstacle distance, sent to the flight control system and displayed on the VR glasses.
2. The UAV vision-based collision avoidance method combining subjective and objective control of claim 1, characterised in that, when several power lines are detected, the obstacle distance of each power line is computed separately and the minimum distance is sent to the flight control system and the VR glasses.
3. The UAV vision-based collision avoidance method combining subjective and objective control of claim 1, characterised in that, when several power lines are detected, power lines whose spacing is below a set value are merged into one power line, completing the power line clustering; the obstacle distance of each power line is then computed separately and the minimum distance is sent to the flight control system and the VR glasses.
CN201810570734.1A 2018-06-05 2018-06-05 UAV vision-based collision avoidance method combining subjective and objective control Pending CN108873931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810570734.1A CN108873931A (en) 2018-06-05 2018-06-05 A kind of unmanned plane vision avoiding collision combined based on subjectiveness and objectiveness


Publications (1)

Publication Number Publication Date
CN108873931A (en) 2018-11-23

Family

ID=64336755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810570734.1A Pending CN108873931A (en) 2018-06-05 2018-06-05 UAV vision-based collision avoidance method combining subjective and objective control

Country Status (1)

Country Link
CN (1) CN108873931A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110244760A (en) * 2019-06-06 2019-09-17 深圳市道通智能航空技术有限公司 A kind of barrier-avoiding method, device and electronic equipment
CN112148033A (en) * 2020-10-22 2020-12-29 广州极飞科技有限公司 Method, device and equipment for determining unmanned aerial vehicle air route and storage medium
CN114115278A (en) * 2021-11-26 2022-03-01 东北林业大学 Obstacle avoidance system based on FPGA (field programmable Gate array) for forest fire prevention robot during traveling

Citations (8)

Publication number Priority date Publication date Assignee Title
EP0459295A2 (en) * 1990-05-25 1991-12-04 Toshiba Electronic Systems Co., Ltd. Aircraft docking guidance system which takes position reference in anti-collision light of aircraft
EP2130766A2 (en) * 2008-06-04 2009-12-09 Honeywell International Inc. Anti-collision lighting systems and methods for a micro aerial vehicle
CN101763119A (en) * 2009-12-16 2010-06-30 东南大学 Obstacle avoidance aiding method based on teleoperation mobile robot
CN106340009A (en) * 2016-08-18 2017-01-18 河海大学常州校区 Parallel-binocular-based power line detection method and system
CN106681353A (en) * 2016-11-29 2017-05-17 南京航空航天大学 Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion
CN106708074A (en) * 2016-12-06 2017-05-24 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle based on VR glasses
CN107462217A (en) * 2017-07-07 2017-12-12 北京航空航天大学 Unmanned aerial vehicle binocular vision barrier sensing method for power inspection task
CN108037765A (en) * 2017-12-04 2018-05-15 国网山东省电力公司电力科学研究院 A kind of unmanned plane obstacle avoidance system for polling transmission line


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GULIANG89: "Marker-controlled watershed segmentation algorithm", Baidu Wenku *
周士超: "Research on visual collision avoidance for UAV power line inspection", China Masters' Theses Full-text Database, Engineering Science and Technology II *
牛明花 et al.: "Lane line recognition algorithm based on machine vision", Journal of Tianjin University of Technology and Education *

Similar Documents

Publication Publication Date Title
US10678257B2 (en) Generating occlusion-aware bird eye view representations of complex road scenes
US20230014874A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
US11106203B2 (en) Systems and methods for augmented stereoscopic display
CN113359810B (en) Unmanned aerial vehicle landing area identification method based on multiple sensors
Vandapel et al. Natural terrain classification using 3-d ladar data
US20210150227A1 (en) Geometry-aware instance segmentation in stereo image capture processes
US9269145B2 (en) System and method for automatically registering an image to a three-dimensional point set
US9275267B2 (en) System and method for automatic registration of 3D data with electro-optical imagery via photogrammetric bundle adjustment
CN111527463A (en) Method and system for multi-target tracking
US20100305857A1 (en) Method and System for Visual Collision Detection and Estimation
CN112740269A (en) Target detection method and device
Rong et al. Intelligent detection of vegetation encroachment of power lines with advanced stereovision
CN108873931A (en) A UAV visual collision avoidance method combining subjective and objective approaches
CN109213138B (en) Obstacle avoidance method, device and system
CN111983603A (en) Motion trajectory relay method, system and device, and central processing equipment
CN114611635B (en) Object identification method and device, storage medium and electronic device
CN114862952B (en) Unmanned aerial vehicle detection and defense method and system
Kim et al. Fusing lidar data and aerial imagery with perspective correction for precise localization in urban canyons
JP2022511147A (en) Systems and methods to facilitate the generation of geographic information
KR102588386B1 (en) Method and apparatus for detecting obscured object using a lidar
Shimoni et al. Detection of vehicles in shadow areas
CN117423077A (en) BEV perception model, construction method, device, equipment, vehicle and storage medium
Baeck et al. Drone based near real-time human detection with geographic localization
CN113836975A (en) Binocular vision unmanned aerial vehicle obstacle avoidance method based on YOLOV3
Su Vanishing points in road recognition: A review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181123