CN108830257A - Potential obstacle detection method based on monocular optical flow - Google Patents

Potential obstacle detection method based on monocular optical flow

Info

Publication number
CN108830257A
CN108830257A (application CN201810695216.2A)
Authority
CN
China
Prior art keywords
optical flow
unmanned aerial vehicle
image
detection
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810695216.2A
Other languages
Chinese (zh)
Inventor
叶润
闫斌
金钊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201810695216.2A
Publication of CN108830257A
Legal status: Pending

Classifications

    • G06V20/00 Scenes; Scene-specific elements
    • G06T7/215 Motion-based segmentation
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/269 Analysis of motion using gradient-based methods
    • G06T7/579 Depth or shape recovery from multiple images, from motion
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20004 Adaptive image processing
    • G06T2207/30261 Obstacle (vehicle exterior; vicinity of vehicle)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of aerial vehicles, and specifically relates to a potential obstacle detection method based on monocular optical flow. Using only the UAV's onboard sensor, the method first simplifies the environment seen by the optical flow detector through preprocessing with a global adaptive iterative threshold and a frame difference method, then improves the accuracy and real-time performance of target detection and tracking with a pyramid-model optical flow algorithm, and finally estimates the relative distance between the UAV and the detected corner points through the optical flow focus of expansion (FOE) and time-to-contact (TTC) formulas. Compared with existing visual ranging techniques, the invention reduces the number of cameras and saves cost; compared with traditional optical flow depth information detection algorithms, it improves ranging accuracy and enhances the UAV's environment perception during autonomous line inspection, so it has broad application prospects in UAV power line inspection.

Description

Potential obstacle detection method based on monocular optical flow
Technical field
The invention belongs to the technical field of aerial vehicles, and specifically relates to a potential obstacle detection method based on monocular optical flow.
Background art
In forward depth information detection with monocular optical flow, a traditional optical flow algorithm requires the detection environment to satisfy three preconditions: constant brightness, spatial consistency, and temporal continuity. During UAV power line inspection, however, random environmental disturbances frequently affect the visual detection environment and introduce noise, so traditional optical flow algorithms often cannot meet the requirement of real-time environment detection during the UAV's autonomous inspection operation.
In addition, optical flow target detection and tracking algorithms mainly perform detection and tracking over the UAV's full cruising field of view. In a complex environment this reduces detection accuracy and is accompanied by a huge amount of computation and a long computing time, so such algorithms are not suitable for obstacle detection in practical field environments.
Summary of the invention
The purpose of the present invention is to propose a potential obstacle detection method based on optical flow for forward potential-threat detection with a monocular camera, inspired by the variation of depth information in consecutive frame images within the visual sensor's detection environment.
The technical scheme of the present invention is as follows:
A potential obstacle detection method based on monocular optical flow, as shown in Fig. 1, comprising the following steps (a schematic end-to-end sketch is given after this list):
S1. Image preprocessing: extract the video image in the UAV's direction of advance, then take three consecutive frames and preprocess them; segment the foreground of the UAV's current travelling environment from the background using an adaptive iterative threshold and the three-frame difference method, and extract the coordinates of the target region moving relative to the UAV as the region of interest (ROI);
S2. Apply the LK optical flow algorithm to the relative-motion target region ROI, perform corner detection and tracking, and obtain the optical flow vector depth information of the corner points detected in the ROI;
S3. Judge whether a potential risk exists in the forward field of view by computing the optical flow focus of expansion in the current field of view and the UAV's time of contact with the detected corner points; judge whether a potential obstacle lies ahead from the UAV's current motion speed and the time of contact; if the forward corner points are far away or no potential obstacle is detected, return to step S1.
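For illustration only (not part of the original disclosure), the following Python sketch shows one way steps S1 to S3 could be chained in a single loop; every function name, parameter, and the 2-second safety margin are assumptions, and the helper functions correspond to the sketches given with the detailed steps further below.

    import numpy as np

    def detect_potential_obstacles(frames, frame_dt, ttc_margin=2.0):
        # frames: list of grayscale images from the forward-facing camera.
        # frame_dt: time between frames in seconds.
        # The helpers are the illustrative sketches given with the detailed steps below.
        for f_prev, f_cur, f_next in zip(frames, frames[1:], frames[2:]):
            roi = extract_moving_roi(f_prev, f_cur, f_next)        # S1: threshold + 3-frame difference
            if roi is None:
                continue                                           # no relative motion detected
            p0, p1 = track_roi_corners(f_cur, f_next, roi)         # S2: pyramidal LK corner tracking
            if p0 is None or len(p0) < 2:
                continue
            flows = p1 - p0
            foe = focus_of_expansion(p0, flows)                    # S3: least-squares FOE
            ttc = time_to_contact(p0, flows, foe, frame_dt)        # S3: per-corner time to contact
            if ttc.min() < ttc_margin:                             # close corner -> potential obstacle
                yield roi, float(ttc.min())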
Further, the specific method of step S1 is as follows:
S11. Select three adjacent consecutive frames from the input image sequence;
S12. Preprocess the selected images and compute the optimal threshold T, specifically:
S121. Choose an initial estimated threshold T_0 = (t_min + t_max) / 2 (iteration index k = 0), where t_min and t_max are the minimum and maximum gray values in the input image;
S122. Use the threshold T_k to divide the input image into two regions R_1 and R_2:
R_1 = { f(x, y) | f(x, y) ≥ T_k }
R_2 = { f(x, y) | f(x, y) < T_k }
where f(x, y) denotes the gray value of the pixel at coordinate (x, y) in the input image;
S123. Compute the average gray values t_1 and t_2 of regions R_1 and R_2:
t_1 = Σ_{(x,y)∈R_1} N(x, y) f(x, y) / Σ_{(x,y)∈R_1} N(x, y),  t_2 = Σ_{(x,y)∈R_2} N(x, y) f(x, y) / Σ_{(x,y)∈R_2} N(x, y)
where N(x, y) is the weight coefficient of coordinate point (x, y), here set to N(x, y) = 1;
S124. Update the threshold: T_{k+1} = (t_1 + t_2) / 2.
If T_{k+1} = T_k or the maximum number of iterations is reached, the iteration terminates; otherwise jump back to S122 and continue the loop;
S13. Apply the three-frame difference method pairwise to the selected consecutive frames, and output the calculation results;
The frame difference formula is D_k(x, y) = | f_k(x, y) − f_{k−1}(x, y) |, evaluated for each pair of adjacent frames;
S14. Combine the three-frame difference result with segmentation based on the global adaptive iterative threshold T, and perform foreground/background segmentation;
S15. After segmentation, an accurate moving target is obtained.
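A minimal OpenCV/NumPy sketch of one possible realisation of steps S11 to S15 follows; it is not taken from the patent. The function names, the convergence tolerance, and the choice to threshold the two difference images with the adaptive threshold are illustrative assumptions (the OpenCV 4 return signature of findContours is also assumed).

    import cv2
    import numpy as np

    def iterative_threshold(img, max_iters=100):
        # Global adaptive iterative threshold (S12): start from the mid-gray
        # value T0 = (t_min + t_max) / 2 and iterate T_{k+1} = (t1 + t2) / 2.
        t = (float(img.min()) + float(img.max())) / 2.0
        for _ in range(max_iters):
            r1, r2 = img[img >= t], img[img < t]
            if r1.size == 0 or r2.size == 0:
                break
            t_new = (r1.mean() + r2.mean()) / 2.0
            if abs(t_new - t) < 0.5:      # converged (T_{k+1} == T_k)
                break
            t = t_new
        return t

    def three_frame_difference(f_prev, f_cur, f_next):
        # Three-frame difference (S13/S14): binarise the two pairwise
        # differences with the adaptive threshold and AND them together.
        d1 = cv2.absdiff(f_cur, f_prev)
        d2 = cv2.absdiff(f_next, f_cur)
        _, b1 = cv2.threshold(d1, iterative_threshold(d1), 255, cv2.THRESH_BINARY)
        _, b2 = cv2.threshold(d2, iterative_threshold(d2), 255, cv2.THRESH_BINARY)
        return cv2.bitwise_and(b1, b2)    # foreground mask of the moving target

    def extract_moving_roi(f_prev, f_cur, f_next):
        # Wrap the foreground mask into a single bounding box used as the ROI (S15).
        mask = three_frame_difference(f_prev, f_cur, f_next)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)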
Further, in the LK optical flow target detection and tracking, the pyramid model is shown in Fig. 2, and the mathematical model for computing the optical flow depth information is as follows. Let I(x) = I(x, y) and J(x) = J(x, y) be the gray values of two consecutive gray-level frames; the velocity of a detected corner in image I is u = (u_x, u_y)^T, the corresponding corner velocity in image J is v = u + d, and d is the optical flow value. With w the neighborhood window radius (w taken as 2 to 7 pixels), the optical flow residual E(d) is:
E(d) = Σ_{x = u_x − w}^{u_x + w} Σ_{y = u_y − w}^{u_y + w} ( I(x, y) − J(x + d_x, y + d_y) )^2
The optimal optical flow vector d_L is:
d_L = G^{-1} b
where G is the 2×2 spatial gradient matrix and b is the image mismatch vector accumulated over the window.
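As an illustration only (not from the patent), the pyramidal LK step can be realised with OpenCV's built-in implementation, which internally solves the same d_L = G^{-1} b system on each pyramid level; the corner-detector parameters, window size, and pyramid depth below are assumed defaults.

    import cv2
    import numpy as np

    def track_roi_corners(prev_gray, next_gray, roi):
        # Detect corners inside the ROI of the previous frame and track them
        # into the next frame with pyramidal Lucas-Kanade optical flow (S2).
        rx, ry, rw, rh = roi
        mask = np.zeros_like(prev_gray)
        mask[ry:ry + rh, rx:rx + rw] = 255
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                     qualityLevel=0.01, minDistance=5, mask=mask)
        if p0 is None:
            return None, None
        # winSize plays the role of the neighborhood window radius w (2-7 pixels).
        p1, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, p0, None,
            winSize=(15, 15), maxLevel=3,   # 3-level pyramid model
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        good = status.ravel() == 1
        return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)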
Further, in step S3 the optical flow focus of expansion (FOE) is obtained from the following formula:
(c_x, c_y)^T = (A^T A)^{-1} A^T b
where (c_x, c_y) is the FOE coordinate obtained by least squares, the n-th row of A is (v_n, −u_n), b = (v_1 x_1 − u_1 y_1, v_2 x_2 − u_2 y_2, …, v_n x_n − u_n y_n)^T, u_n and v_n are the horizontal and vertical optical flow components of pixel n, and x_n, y_n are the coordinates of pixel n in the current machine-vision image plane.
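A least-squares FOE sketch follows (variable names are mine, not the patent's): each flow vector (u, v) at pixel (x, y) is assumed to point away from the FOE, giving the constraint v·(x − c_x) − u·(y − c_y) = 0, which stacks into the system solved by (A^T A)^{-1} A^T b above.

    import numpy as np

    def focus_of_expansion(points, flows):
        # points: (N, 2) corner coordinates; flows: (N, 2) optical flow vectors.
        # Each flow should point away from the FOE: v*(x - cx) - u*(y - cy) = 0,
        # i.e. v*cx - u*cy = v*x - u*y, solved for (cx, cy) by least squares.
        x, y = points[:, 0], points[:, 1]
        u, v = flows[:, 0], flows[:, 1]
        A = np.stack([v, -u], axis=1)
        b = v * x - u * y
        (cx, cy), *_ = np.linalg.lstsq(A, b, rcond=None)
        return cx, cy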
The principle of computing the time to contact (TTC) is shown in Fig. 3; the formula is:
TTC = Z / v
where TTC is the time to contact, Z is the relative depth between the pixel and the UAV's forward vision, and v is the UAV's intrinsic uniform translation speed in forward translational motion.
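Because Z is not directly observable with a single camera, a common image-based estimate (assumed here, not quoted from the patent) evaluates TTC = Z / v per corner as the corner's image distance from the FOE divided by the magnitude of its optical flow, scaled by the frame period:

    import numpy as np

    def time_to_contact(points, flows, foe, frame_dt):
        # Per-corner TTC estimate under pure forward translation:
        # distance from the FOE divided by flow magnitude (pixels per frame),
        # scaled by the frame period to obtain seconds.
        d = np.linalg.norm(points - np.asarray(foe), axis=1)
        speed = np.linalg.norm(flows, axis=1) + 1e-9   # avoid division by zero
        return d / speed * frame_dt

    # Usage (illustrative): flag a potential obstacle when the smallest TTC
    # drops below a safety margin, e.g. ttc.min() < 2.0 seconds.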
The beneficial effects of the present invention are as follows. Using only the UAV's onboard sensor, the invention simplifies the environment seen by the optical flow detector through preprocessing with a global adaptive iterative threshold and a frame difference method, then improves the accuracy and real-time performance of target detection and tracking with the pyramid-model optical flow algorithm, and finally estimates the relative distance between the UAV and the detected corner points through the optical flow FOE and TTC formulas. Compared with existing visual ranging techniques, the invention reduces the number of cameras and saves cost; compared with traditional optical flow depth information detection algorithms, it improves ranging accuracy and enhances the UAV's environment perception during autonomous line inspection, so it has broad application prospects in UAV power line inspection.
Description of the drawings
Fig. 1 is the block diagram of the monocular optical flow potential obstacle detection of the present invention;
Fig. 2 is a schematic diagram of the pyramid-model optical flow depth information calculation;
Fig. 3 is the ranging principle of the monocular optical flow of the present invention.
Detailed description of the embodiments
The technical solution of the present invention has been described in detail in the Summary of the invention and is not repeated here.

Claims (4)

1. A potential obstacle detection method based on monocular optical flow, characterized by comprising the following steps:
S1. Image preprocessing: extract the video image in the UAV's direction of advance, then take three consecutive frames and preprocess them; segment the foreground of the UAV's current travelling environment from the background using an adaptive iterative threshold and the three-frame difference method, and extract the coordinates of the target region moving relative to the UAV as the region of interest (ROI);
S2. Apply the LK optical flow algorithm to the relative-motion target region ROI, perform corner detection and tracking, and obtain the optical flow vector depth information of the corner points detected in the ROI;
S3. Judge whether a potential risk exists in the forward field of view by computing the optical flow focus of expansion in the current field of view and the UAV's time of contact with the detected corner points; judge whether a potential obstacle lies ahead from the UAV's current motion speed and the time of contact; if the forward corner points are far away or no potential obstacle is detected, return to step S1.
2. The potential obstacle detection method based on monocular optical flow according to claim 1, characterized in that the specific method of step S1 is:
S11. Select three adjacent consecutive frames from the input image sequence;
S12. Preprocess the selected images and compute the optimal threshold T, specifically:
S121. Choose an initial estimated threshold T_0 = (t_min + t_max) / 2 (iteration index k = 0), where t_min and t_max are the minimum and maximum gray values in the input image;
S122. Use the threshold T_k to divide the input image into two regions R_1 and R_2:
R_1 = { f(x, y) | f(x, y) ≥ T_k }
R_2 = { f(x, y) | f(x, y) < T_k }
where f(x, y) denotes the gray value of the pixel at coordinate (x, y) in the input image;
S123. Compute the average gray values t_1 and t_2 of regions R_1 and R_2:
t_1 = Σ_{(x,y)∈R_1} N(x, y) f(x, y) / Σ_{(x,y)∈R_1} N(x, y),  t_2 = Σ_{(x,y)∈R_2} N(x, y) f(x, y) / Σ_{(x,y)∈R_2} N(x, y)
where N(x, y) is the weight coefficient of coordinate point (x, y), here set to N(x, y) = 1;
S124. Update the threshold: T_{k+1} = (t_1 + t_2) / 2. If T_{k+1} = T_k or the maximum number of iterations is reached, the iteration terminates; otherwise jump back to S122 and continue the loop;
S13. Apply the three-frame difference method pairwise to the selected consecutive frames, and output the calculation results, where the frame difference formula is D_k(x, y) = | f_k(x, y) − f_{k−1}(x, y) | for each pair of adjacent frames;
S14. Combine the three-frame difference result with segmentation based on the global adaptive iterative threshold T, and perform foreground/background segmentation;
S15. After segmentation, an accurate moving target is obtained.
3. The potential obstacle detection method based on monocular optical flow according to claim 2, characterized in that in step S2 the optical flow vector depth information of the corner points detected in the ROI is obtained from the following formula:
d_L = G^{-1} b;
where I(x) = I(x, y) and J(x) = J(x, y) are the gray values of two consecutive gray-level frames, the velocity of a detected corner in image I is u = (u_x, u_y)^T, the corresponding corner velocity in image J is v = u + d, d is the optical flow value, w is the neighborhood window radius, E(d) is the optical flow residual, and d_L is the optimal optical flow vector.
4. The potential obstacle detection method based on monocular optical flow according to claim 3, characterized in that in step S3 the optical flow focus of expansion is obtained from the following formula:
(c_x, c_y)^T = (A^T A)^{-1} A^T b
where (c_x, c_y) is the FOE coordinate obtained by least squares, the n-th row of A is (v_n, −u_n), b = (v_1 x_1 − u_1 y_1, v_2 x_2 − u_2 y_2, …, v_n x_n − u_n y_n)^T, u_n and v_n are the horizontal and vertical optical flow components of pixel n, and x_n, y_n are the coordinates of pixel n in the current machine-vision image plane;
and the time of contact is obtained from the following formula:
TTC = Z / v
where TTC is the time to contact, Z is the relative depth between the pixel and the UAV's forward vision, and v is the UAV's intrinsic uniform translation speed in forward translational motion.
CN201810695216.2A 2018-06-29 2018-06-29 Potential obstacle detection method based on monocular optical flow Pending CN108830257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810695216.2A CN108830257A (en) 2018-06-29 2018-06-29 Potential obstacle detection method based on monocular optical flow

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810695216.2A CN108830257A (en) 2018-06-29 2018-06-29 Potential obstacle detection method based on monocular optical flow

Publications (1)

Publication Number Publication Date
CN108830257A true CN108830257A (en) 2018-11-16

Family

ID=64134599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810695216.2A Pending CN108830257A (en) 2018-06-29 2018-06-29 Potential obstacle detection method based on monocular optical flow

Country Status (1)

Country Link
CN (1) CN108830257A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101881615A (en) * 2010-05-28 2010-11-10 清华大学 Method for detecting visual barrier for driving safety
CN101909206A (en) * 2010-08-02 2010-12-08 复旦大学 Video-based intelligent flight vehicle tracking system
US20120314071A1 (en) * 2011-04-27 2012-12-13 Mobileye Technologies Ltd. Pedestrian collision warning system
CN103149940A (en) * 2013-03-27 2013-06-12 清华大学 Unmanned plane target tracking method combining mean-shift algorithm and particle-filter algorithm
CN103925920A (en) * 2014-04-10 2014-07-16 西北工业大学 Image perspective-based micro unmanned aerial vehicle indoor autonomous navigation method
CN104021388A (en) * 2014-05-14 2014-09-03 西安理工大学 Reversing obstacle automatic detection and early warning method based on binocular vision
CN105306500A (en) * 2014-06-19 2016-02-03 赵海 Express transportation system based on quadrirotor, express transportation method and monocular obstacle avoidance method
CN104331910A (en) * 2014-11-24 2015-02-04 沈阳建筑大学 Track obstacle detection system based on machine vision
CN104881645A (en) * 2015-05-26 2015-09-02 南京通用电器有限公司 Vehicle front target detection method based on characteristic-point mutual information content and optical flow method
CN104880187A (en) * 2015-06-09 2015-09-02 北京航空航天大学 Dual-camera-based motion estimation method of light stream detection device for aircraft
CN106933233A (en) * 2015-12-30 2017-07-07 湖南基石信息技术有限公司 A kind of unmanned plane obstacle avoidance system and method based on interval flow field
CN106155082A (en) * 2016-07-05 2016-11-23 北京航空航天大学 A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream
CN106200672A (en) * 2016-07-19 2016-12-07 深圳北航新兴产业技术研究院 A kind of unmanned plane barrier-avoiding method based on light stream
CN106708084A (en) * 2016-11-24 2017-05-24 中国科学院自动化研究所 Method for automatically detecting and avoiding obstacles for unmanned aerial vehicle under complicated environments

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAOQUN WANG et al., "Obstacle avoidance for quadrotor using improved method based on optical flow", 2015 IEEE International Conference on Information and Automation *
孟介成 et al., "Adaptive moving target detection system based on DM642", Journal of Guangxi University: Natural Science Edition *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383257A (en) * 2018-12-29 2020-07-07 顺丰科技有限公司 Method and device for determining loading and unloading rate of carriage
CN111383257B (en) * 2018-12-29 2024-06-07 顺丰科技有限公司 Carriage loading and unloading rate determining method and device
CN110414392A (en) * 2019-07-15 2019-11-05 北京天时行智能科技有限公司 A kind of determination method and device of obstacle distance
CN111368883A (en) * 2020-02-21 2020-07-03 浙江大华技术股份有限公司 Obstacle avoidance method based on monocular camera, computing device and storage device
CN111368883B (en) * 2020-02-21 2024-01-19 浙江大华技术股份有限公司 Obstacle avoidance method based on monocular camera, computing device and storage device
CN112560769A (en) * 2020-12-25 2021-03-26 北京百度网讯科技有限公司 Method for detecting obstacle, electronic device, road side device and cloud control platform
CN112560769B (en) * 2020-12-25 2023-08-29 阿波罗智联(北京)科技有限公司 Method for detecting obstacle, electronic device, road side device and cloud control platform
CN113361504A (en) * 2021-08-10 2021-09-07 南京邮电大学 Edge group intelligent method based on unmanned aerial vehicle cooperative networking

Similar Documents

Publication Publication Date Title
CN108830257A (en) Potential obstacle detection method based on monocular optical flow
CN111693972B (en) Vehicle position and speed estimation method based on binocular sequence images
CN107272021B (en) Object detection using radar and visually defined image detection areas
EP3208635A1 (en) Vision algorithm performance using low level sensor fusion
US8395659B2 (en) Moving obstacle detection using images
WO2014114923A1 (en) A method of detecting structural parts of a scene
CN108398672A (en) Road surface based on the 2D laser radar motion scans that lean forward and disorder detection method
CN112115889B (en) Intelligent vehicle moving target detection method based on vision
Wang Research of vehicle speed detection algorithm in video surveillance
CN104331907B (en) A kind of method based on ORB feature detections measurement bearer rate
CN104200492A (en) Automatic detecting and tracking method for aerial video target based on trajectory constraint
McManus et al. Distraction suppression for vision-based pose estimation at city scales
CN109917359A (en) Robust vehicle distances estimation method based on vehicle-mounted monocular vision
CN116978009A (en) Dynamic object filtering method based on 4D millimeter wave radar
Omar et al. Detection and localization of traffic lights using YOLOv3 and Stereo Vision
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
Wu et al. Research progress of obstacle detection based on monocular vision
CN112837374B (en) Space positioning method and system
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
Hu et al. A novel lidar inertial odometry with moving object detection for dynamic scenes
CN102034245A (en) Method for tracking landmark on unmanned helicopter platform
Xin et al. Vehicle ego-localization based on the fusion of optical flow and feature points matching
CN112595312A (en) Method and system for filtering pseudo star target of large-field-of-view star sensor
CN110488320A (en) A method of vehicle distances are detected using stereoscopic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181116)