AU679321B2 - System and method for detecting obstacles in a road - Google Patents

System and method for detecting obstacles in a road

Info

Publication number
AU679321B2
AU679321B2 AU14904/95A
Authority
AU
Australia
Prior art keywords
road
vehicle
image plane
edge points
scan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU14904/95A
Other versions
AU1490495A (en)
Inventor
Walter J. Bradbury
Dana A. Christensen
Richard G. Clow
Lonnie J. Devier
Adam J. Gudat
Carl A. Kemner
Karl W. Kleimenhagen
Craig L. Koehrsen
Christos N. Kyrtsos
Norman K. Lay
Joel L. Peterson
Prithvi N. Rao
Larry E. Schmidt
James W. Sennott
Gary K. Shaffer
Wenfan Shi
Dong Hun Shin
Sanjiv J. Singh
Darrell E. Stafford
Louis J. Weinbeck
Jay H. West
William L. Whittaker
Baoxin Wu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Caterpillar Inc
Original Assignee
Caterpillar Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Caterpillar Inc filed Critical Caterpillar Inc
Priority to AU14904/95A priority Critical patent/AU679321B2/en
Publication of AU1490495A publication Critical patent/AU1490495A/en
Application granted granted Critical
Publication of AU679321B2 publication Critical patent/AU679321B2/en
Anticipated expiration legal-status Critical
Expired legal-status Critical Current


Landscapes

  • Traffic Control Systems (AREA)

Description

Regulation 3.2
AUSTRALIA
Patents Act 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name of Applicant: CATERPILLAR INC.

Actual Inventors: Adam J. GUDAT, Walter J. BRADBURY, Dana A. CHRISTENSEN, Richard G. CLOW, Lonnie J. DEVIER, Carl A. KEMNER, Karl W. KLEIMENHAGEN, Craig L. KOEHRSEN, Christos N. KYRTSOS, Norman K. LAY, Joel L. PETERSON, Prithvi N. RAO, Larry E. SCHMIDT, James W. SENNOTT, Gary K. SHAFFER, WenFan SHI, Dong Hun SHIN, Sanjiv J. SINGH, Darrell E. STAFFORD, Louis J. WEINBECK, Jay H. WEST, William L. WHITTAKER, BaoXin WU

Address for service: CARTER SMITH BEADLE, 2 Railway Parade, Camberwell, Victoria 3124, Australia
The following statement is a full description of this invention, including the best method of performing it known to us in Australia:

Invention Title: System and method for detecting obstacles in a road

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present application is a divisional application of Australian Patent Application No. 46045/93, which was divided from Australian Patent No. 642638.
The present application relates to systems for navigating an autonomous vehicle.
Other inventions relating to apparatus/methods for determining terrestrial position information are discussed and claimed in other co-pending applications e.g.
Australian Patent Application No. 46043/93. For an overview of the interrelated navigation and positioning systems/apparatus/methods, the reader should refer to the specification accompanying Australian Patent No. 642638.
2. Related Art

There is presently under development a terrestrial position determining system, referred to as the global positioning system (GPS), designated NAVSTAR by the U.S. Government. In this system, a multitude of orbiting satellites will be used to determine the terrestrial position of receivers on the Earth. In the planned system, there will be eight orbiting satellites in each of three sets of orbits, 21 satellites on line and three spares, for a total of 24 satellites. The three sets of orbits will have mutually orthogonal planes relative to the Earth. The orbits are neither polar orbits nor equatorial orbits, however. The satellites will be in 12-hour orbits.
The position of each satellite at all times will be precisely known. The longitude, latitude, and altitude with respect to the center of the Earth, of a receiver at any point close to Earth at the time of transmission, will be calculated by determining the propagation time of transmissions from at least four of the satellites to the receiver. The more satellites used the better. A current constraint on the number of satellites is that the currently available receiver has only five channels.
Rudimentary autonomous, meaning unmanned or machine controlled, vehicle navigation is also known in the field.
Systems exist which rely on vision based positioning. For instance, vision based positioning is used in the Martin Marietta Autonomous Land Vehicle, as described in "Obstacle Avoidance Perception Processing for the Autonomous Land Vehicle," by R. Terry Dunlay, IEEE, CH2555-1/88/0000/0912$01.00, 1988. (See also "A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision," by E. D. Dickmanns and A. Zapp, as reported at SPIE's Cambridge Symposium on Optical and Optoelectronic Engineering, October 26-31, 1986.) Some of these vision based positioning systems may use fixed lines or markings on, for instance, a factory floor, to navigate from point to point. Others may involve complex pattern recognition hardware and software. Other systems may navigate by keeping track of their position relative to a known starting point by measuring the distance they have travelled and the direction from the starting point.
These measuring type systems are known as "dead-reckoning" systems.
Such navigation systems suffer from numerous drawbacks and limitations.
For instance, if the system cannot recognize where it is, loses track of where it has been, or miscalculates its starting point, it will become unable to accurately reach its goal. Because errors in position accumulate in such systems, they may require frequent, time-consuming initializations. Such systems may require patterns and markers to be placed along their route, which is also time-consuming and costly, and limits their usefulness to small, controlled areas.
SUMMARY OF THE INVENTION

The present invention is an integrated vehicle positioning and navigation system which, as used throughout, means apparatus, method or a combination of both apparatus and method. The present invention overcomes many of the limitations present in conventional technology in the fields of positioning and navigation, and thereby provides for highly accurate and autonomous positioning and navigation of a vehicle.
In accordance with a first aspect of the present invention, there is disclosed a computer based method of detecting an object in a road, said computer based method adapted for use with a land based vehicle, said computer based method comprising the steps of:
determining a stopping distance of said vehicle;
scanning said road ahead of said vehicle to obtain an image plane of the road, said image plane comprised of scan lines, said scan lines including range information;
selecting those scan lines in said image plane that include range information indicating distances that are beyond said stopping distance;
projecting left and right edge points of said road into said image plane; and
processing said selected scan lines to detect obstacles between said left and right edge points.
In accordance with a second aspect of the present invention, there is disclosed a computer based system for detecting an object in a road, said computer based system adapted for use with a land based vehicle, said computer based system comprising:
means for determining a stopping distance of said vehicle;
means for scanning said road ahead of said vehicle to obtain an image plane of the road, said image plane comprised of scan lines, said scan lines including range information;
means for selecting those scan lines in said image plane that include range information indicating distances that are beyond said stopping distance;
means for projecting left and right edge points of said road into said image plane; and
means for processing said selected scan lines to detect obstacles between said left and right edge points.
In accordance with a third aspect of the present invention, there is disclosed a method for detecting an object in a road ahead of a vehicle, said method comprising the steps of:
calculating edge points of said road from stored road data;
constructing an image plane comprised of a plurality of scan lines, each scan line including range information at various points along each scan line;
projecting said calculated edge points into said image plane;
determining a left edge point and a right edge point of each scan line in said image plane using said projected edge points, said left and right edge points corresponding to left and right edges of the road; and
detecting a discontinuity in said range information along each scan line between said left and right edge points, thereby detecting said object ahead of said vehicle.
In accordance with a fourth aspect of the present invention, there is disclosed a system for detecting an object in a road ahead of a vehicle, said system comprising:
means for calculating edge points of said road from stored road data;
means for constructing an image plane comprised of a plurality of scan lines, each scan line including range information at various points along each scan line;
means for projecting said calculated edge points into said image plane;
means for determining a left edge point and a right edge point for each scan line in said image plane using said projected edge points, said left and right edge points corresponding to left and right edges of said road; and
means for detecting a discontinuity in said range information along each scan line between said left and right edge points, thereby detecting said object ahead of said vehicle.
The superior positioning data capability disclosed herein is used to aid the autonomous vehicle navigation system of the present invention. The navigation system of the present invention provides for highly accurate vehicle path-tracking on predetermined, or dynamically determined, vehicle paths.
There are also disclosed novel systems for determining paths or roads for an autonomous vehicle to follow at a work site, both dynamically during operation and/or in advance. These include systems for efficiently storing and accessing these paths, and systems for accurately tracking these paths. The vehicle also possesses the capability to travel autonomously without pre-set paths from beginning to ending points.
The present invention contemplates a totally autonomous work site, where numerous autonomous vehicles and equipment may operate with minimal human supervision. This reduces labor costs. This also makes the present invention suitable for hazardous or remote work sites, such as at the site of a nuclear accident, for example.
The present invention provides the capability for monitoring of vehicle and equipment control and mechanical systems, such as oil, air, and/or hydraulic pressures and temperatures. It provides for indicating a problem in any one of its systems, and for the scheduling of regular and emergency maintenance.
The present invention also provides for efficient scheduling of work and dispatching of vehicles and equipment at an autonomous work site, for instance, by providing a host processing system at a base station.
The host processing system coordinates various positioning and navigation tasks. It also acts as the supervisory interface between the autonomous system and its human managers.
A preferred embodiment or "best mode" of the present invention would be in an autonomous mining vehicle, such as is used for hauling ore, at a mining site.
However, it is noted that it is contemplated that the present invention may be implemented in other vehicles performing other tasks. Examples are, for instance, in other types of heavy equipment for mining or construction, or in other totally different types of vehicles, such as farm tractors or automobiles.
There are also disclosed novel systems for ensuring safe operation of a manned or unmanned autonomous vehicle. These may be embodied in manual override features, failsafe features, error checking features, supervisory features, obstacle detection and avoidance, and redundancy features.
The vehicle positioning and navigation system disclosed includes systems which provide for a hierarchy of control such that manual control when asserted takes precedence as a safety feature.
The vehicle positioning and navigation system includes systems which provide for a safe failure mode. This insures that should a system or subsystem malfunction, or drift unacceptably out of calibration, the system will detect this condition and bring the vehicle to a stop. The brake system, as an example, is designed with the parking brakes normally set. Any failure or loss of power will result in the brakes being returned to their normal set mode, stopping the vehicle in the shortest distance possible.
The vehicle positioning and navigation system includes systems which provide for error checking to detect, for instance, garbled messages or out-of-limit commands, which may indicate system malfunction.
The vehicle positioning and navigation system includes systems which monitor and supervise the operation of other systems and sub-systems to assure proper and safe operation. This may include checking actual performance against a multi-state model of ideal performance of the vehicle and its systems.
The vehicle positioning and navigation system of the present invention includes systems for detecting obstacles in the vehicle path and enabling the vehicle to successfully avoid colliding with those obstacles, thereafter returning to the path.
Such obstacles may be fallen rocks, people, other vehicles, and so on.
The present invention also includes novel scanning systems for use in detecting obstacles. These may include scanner systems designed around laser, sonar, radar, or other electromagnetic or acoustical wave apparatus.
The vehicle positioning and navigation system includes redundancy of functional units, as backup and/or as checks on the system operation.
The navigation portion provides control flexibility by providing for at least three modes of operation. These modes include a manual mode, a remote control or "tele" mode, and a fully autonomous mode. The "tele" mode may be over a remote radio, or cable, with the controlled vehicle in the line of sight of the "tele" operator. There is also a transitional or intermediate "ready" mode, used when going between certain other modes.
This facilitates on-board, tele, and host displays for status and control information and manipulation.
There is also disclosed novel vehicle control sub-systems, including speed control, steering control, and shutdown systems.
The system disclosed is able to achieve some of its advantages by providing for vehicle control which serves to decouple speed and steering control so that they may be controlled separately. This simplifies the problems that may be associated with accurate path tracking, for instance.
There is also disclosed a novel global memory structure for communicating data between plural processing units, or tasks, and/or processors. This communication structure enables real-time, multi-tasking control of vehicle navigation.
A better appreciation of these and other advantages and features of the present invention, as well as how the present invention realizes them, will be gained from the following detailed description and drawings of the various embodiments, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is better understood by reference to the following drawings in conjunction with the accompanying text.
Fig. 1 is a diagrammatic view of a typical autonomous work site in which the present invention is practiced.
Fig. 2 is a diagrammatic representation of the interrelationships between the navigator, the vehicle positioning system (VPS), and the vehicle controls.
Fig. 3 is a context diagram, illustrating the various elements and their interrelationship in an autonomous control system according to the present invention.
Fig. 4 is a diagram showing how an error vector including curvature is computed.
Fig. 5 is a diagram showing how an error vector including curvature is computed with the vehicle path included.
Fig. 6 is a context diagram of an embodiment of the navigator of the present invention.
Fig. 7 is a context diagram of a path tracking structure of the present invention.
Figs. 8A-8D are navigator data flow summaries.
Fig. 9A is an illustration of a vehicle mounted scanner.
Fig. 9B is an illustration of an autonomous vehicle scanning for obstacles.
Fig. 10 is a diagram of selected scan lines in an embodiment of a laser scanner system of the present invention.
Fig. 11 is a diagram of an autonomous vehicle avoiding obstacles.
Fig. 12 is a diagram of obstacle handling according to an embodiment of the present invention.
Fig. 13 is a schematic of a laser scanner system used for obstacle detection in an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

TABLE OF CONTENTS

I. General Overview
II. Navigation
   A. Overview
   B. Obstacle Handling
      1. Introduction
      2. Detection of Obstacles
         a. Clearance checking
         b. Filtering and edge detection
         c. Obstacle extraction
            Finding the road
            Modeling road height
            Thresholding
            Blob extraction
            Applications
      3. Avoidance of Obstacles
      4. Return to Path
      5. Scanner System
         a. Introduction
         b. Laser scanner
         c. Scanner system interface
         d. Scanner system buffer circuit

I. General Overview

In the following description of the present invention, and throughout the specification, the term "system" is used for shorthand purposes to mean apparatus, method, or a combination of both apparatus and method.
Autonomous is used herein in its conventional sense. It indicates operation which is either completely automatic or substantially automatic, that is, without significant human involvement in the operation. An autonomous vehicle will generally be unmanned, that is without a human pilot or co-pilot. However, an autonomous vehicle may be driven or otherwise operated automatically, and have one or more human passengers.
The task of guiding an autonomous vehicle along a prescribed path requires that one accurately knows the current position of the vehicle. Once the current position is known, one can command the vehicle to proceed to its next destination.
For a truly autonomous vehicle to be practical, very accurate position information is required. In the past, it was not believed possible to obtain this nearly absolute position information without using a prohibitively large number of reference points. All positioning was done relative to the last reference point using inertial navigation or dead reckoning.
Using the systems discussed in Australian Patent No. 642638, we are able to obtain the nearly absolute position information required for a truly autonomously navigated vehicle. This led us to the development of simpler, more reliable vehicle controllers and related systems as well.
The development and implementation of the GPS (global positioning system) was a necessary and vital link which allowed us to develop our inventive systems to obtain more accurate position information. The GPS satellite position information, significantly enhanced through our inventive techniques, is combined and filtered with IRU (Inertial Reference Unit) inertial navigation information, and vehicle odometer information, resulting in a highly accurate system for determining position and effecting navigation. See Australian Patent No. 642638 for a discussion of positioning systems.
Integration of both positioning and navigation systems, apparatus, methods and techniques provides for the highly accurate transportation of unmanned vehicles.
The navigation system, using the highly accurate position information obtained from the positioning system, is able to provide for accurate navigation between points along pre-established or dynamically generated paths.
Use is made of various models or conceptual representations to generate and follow these paths. For example, linked arcs may be used to establish paths between objective points the vehicle is to attain. B-splines or clothoid curves may be used to model the actual path the vehicle is to navigate. Using modelling or representational techniques provides for enhanced data communication, storage and handling efficiencies. It allows for simplification of supervisory tasks by providing a hierarchy of control and communication such that the higher the level of control, the simpler the task, and the more compact the commands.
At the base station, where GPS positioning data is also received from the satellites, a host processing system provides for the highest level of control of the navigation system. The host handles scheduling and dispatching of vehicles with much the same results as a human dispatcher would achieve. The host thereby determines the work cycle of each vehicle.
The host commands each of the vehicles to proceed from a current position to another position taking a specified route in order that they effect their work goals.
The host specifies the routes by name, and not by listing each point along the route.
Each vehicle has an on-board system which looks up the named route and translates the named route into a set of nodes and segments along the route, using, for example, the models or representations discussed above to connect the nodes.
The on-board system also provides for controlling the vehicle systems, such as brakes, steering, and engine and transmission, to effect the necessary physical acts required to move, stop, and steer the vehicle. These may be designed to be retrofitted onto existing vehicles, such as the Caterpillar, Inc. 785 off-highway truck, for instance.
The on-board system also checks actual position against desired position and corrects vehicle control to stay on route. The system may run multi-state models to enhance this checking capability. The system also checks for errors or failures of the system and vehicle components. If detected, the system provides for fail-safe shutdown, bringing the vehicle to a stop.
The on-board navigation system provides for different modes of control.
These include a fully autonomous mode, where navigation of the vehicle is automatically handled by the on-board system; a tele or remote control mode, where a remote human operator may control the direction and motion, and so on, of the vehicle; and a manual mode, where a human operator sitting in the cab can take control of the vehicle and drive it manually.
In the autonomous mode, obstacle detection is critical, and is provided for in the navigation system of the present invention. Boulders, animals, people, trees, or other obstructions may enter the path of a vehicle unexpectedly. The on-board system is capable of detecting these, and either stopping, or plotting a path around the obstruction, to return to its original route when it is safe.
Accurately tracking the desired route is another function of the on-board navigational system. System function and architecture has been designed for real time tracking of the path at speeds up to approximately 30 mph safely.
The vehicle positioning system (VPS) is a positioning system based on the incorporation of Inertial Reference Unit (IRU) Data, Odometer Data and Global Positioning System (GPS) Data.
The VPS incorporates these three subsystems with extensive innovative methodology to create a highly accurate position determination system for moving or stationary vehicles, or any point on the Earth.
The GPS comprises a space segment and a land or atmospheric based processing system. The space segment includes 24 special purpose satellites (not yet fully implemented) which are being launched and operated by the U.S. Government.
These satellites continually transmit data to the Earth that can be read by a GPS receiver. The GPS receiver is part of a processing system which produces an estimate of the vehicle position based on the transmitted data.
When multiple satellites are in view, the GPS receiver can read each satellite's navigation messages and determine position based on known triangulation methods.
The accuracy of the process is in part dependent on how many satellites are in view.
The IRU comprises laser gyroscopes and accelerometers which produce position, velocity, roll, pitch and yaw data. The IRU combines this information into an estimate of vehicle position. The odometer produces data on distance travelled that is incorporated with the IRU data.
A base station provides a geographically proximate reference point and reads the data transmitted by each satellite. Using the transmitted data, the base station makes improvements in accuracy on the estimate of the vehicle position.
The position of any satellite can be predicted at the current time or any future time. Using that information, the GPS receiver can determine the optimum satellite constellation.
Vehicle "wandering" may also be reduced by using the path history of the vehicle to statistically determine the accuracy of future estimates of position.
There are also numerous methods for determining how accurate the data transmitted from the satellite is. Included in these methods are techniques to compensate for data with error.
II. Navigation

A. Overview

In considering implementation of an autonomous navigation system, there are some basic questions which any autonomous system must be able to answer in order to successfully navigate from point A to point B. The first question is "where are we (the vehicle) now?". This first question is answered by the positioning system portion of the present invention, as fully discussed in Australian Patent No. 642638 and other co-pending applications.
The next or second question is "where do we go and how do we get there?".
This second question falls within the domain of the navigation system portion of the present invention, discussed in this section in part and more fully in the Australian patent referred to above.
A further (third) question, really a refinement of the second one, is "how do we actually physically move the vehicle, for example, what actuators are involved (steering, speed, braking, and so on), to get there?". This is in the domain of the vehicle controls subsystem of the navigation system, also discussed in the Australian patent referred to above.
In the preceding and following discussions of the present invention, recall that, "system(s)" may include apparatus and/or methods.
As has been discussed implicitly above, autonomous navigation, of a mining vehicle as an example, may provide certain significant advantages over conventional navigation. Among them is an increased productivity from round the clock, 24 hr. operation of the vehicles. The problems presented by dangerous work environments, or work environments where visibility is low, are particularly well suited to solution by an autonomous system.
There are, for instance, some mining sites where visibility is so poor that work is not possible 200 days of the year. There are other areas which may be hazardous to human life because of being contaminated by industrial or nuclear pollution. An area may be so remote or desolate that requiring humans to work there may pose severe hardships or be impractical. The application of the present invention could foreseeably include extraterrestrial operations, for example, mining on the Moon, provided that the necessary satellites were put in Moon orbit.
In a typical application of the present invention, as shown in Fig. 1, with regard to the navigation of a mining vehicle at a mining site, there are three basic work areas: the load site, the haul segment, and the dump site. At the load site, a hauling vehicle may be loaded with ore in any number of ways, by human operated shovels for instance, controlled either directly or by remote control, or by autonomous shovels. The hauling vehicle then must traverse an area called the haul segment which may be only a few hundred meters or may be several kilometers. At the end of the haul segment is the dump site, where the ore is dumped out of the hauling vehicle to be crushed, or otherwise refined, for instance. In the present invention, autonomous positioning and navigation may be used to control the hauling vehicle along the haul segment. Autonomously navigated refueling and maintenance vehicles are also envisioned.
Referring now to Figs. 2 and 3, navigation of the AMT (Autonomous Mining Truck) encompasses several systems, apparatus and/or functions. The VPS (Vehicle Positioning System) 1000 subsystem of the overall AMT system outputs position data that indicates where the vehicle is located, including, for example, a North and an East position.
Referring now to Figs. 2 and 3, position data output from the VPS is received by a navigator 406. The navigator determines where the vehicle wants to go (from route data) and how to get there, and in turn outputs data composed of steer and speed commands to a vehicle controls functional block 408 to move the vehicle.
The vehicle controls block then outputs low level commands to the various vehicle systems, 310, such as the governor, brakes and transmission. As the vehicle is moving towards its destination, the vehicle controls block and the VPS receive feed-back information from the vehicle indicative of, for example, any fault conditions in the vehicle's systems, current speed, and so on.
Navigation also must include an obstacle handling (detection and avoidance) capability to deal with the unexpected. A scanning system 409 detects obstacles in the vehicle's projected trajectory, as well as obstacles which may be approaching from the sides, and informs the navigator of these.
The navigator may be required to then decide if action is required to avoid the obstacle. If action is required, the navigator decides how to avoid the obstacle.
And after avoiding the obstacle, the navigator decides how to get the vehicle back onto a path towards its destination.
Referring now to Fig. 6, titled the context diagram, and Figs. 8A-8D, definitions of the communications, which are shown as circles with numbers in them, are provided below:

502. Host commands & queries: Commands given by the host to the vehicle manager. These commands could be of several types: initiate/terminate; supply parameters; emergency actions; and directives.
Queries inquire about the status of various parts of the navigator.
504. replies to host: These are responses to the queries made by the host.
432. position data: This is streamed information provided by the VPS system.
416. Range data: This is range data from the line laser scanner.
432. VPS control: These are commands given to the VPS system to bring it up, shut it down and switch between modes.
416. scanner control: These are commands sent to the laser scanner to initiate motion and set follow velocity profile.
420. steering & speed commands: These are commands given to the vehicle to control steering and speed. These commands are issued at the rate of 2-5 Hz.
Referring to Fig. 3, both the VPS and the navigator are located on the vehicle and communicate with the base station 314 to receive high level GPS position information and directives from a host processing system. The system gathers GPS position information from the satellites 200-206 at the base station and on-board the vehicle so that common-mode error can be removed and positioning accuracy enhanced.
Alternatively, portions of the VPS and navigator may be located at the base station.
The host at the base station may tell the navigator to go from point A to point B, for instance, and may indicate one of a set of fixed routes to use. The host also handles other typical dispatching and scheduling activities, such as coordinating vehicles and equipment to maximize efficiency, avoid collisions, schedule maintenance, detect error conditions, and the like. The host also has an operations interface for a human manager.
It was found to be desirable to locate the host at the base station and the navigator on the vehicle to avoid a communications bottleneck, and a resultant degradation in performance and responsiveness. Since the host sends relatively high-level commands and simplified data to the navigator, it requires relatively little communication bandwidth. However, in situations where broad-band communication is available to the present invention, this may not be a factor.
Another factor in determining the particular location of elements of the system of the present invention, is the time-criticality of autonomous navigation. The navigation system must continually check its absolute and relative locations to avoid unacceptable inaccuracies in following a route. The required frequency of checking location increases with the speed of the vehicle, and communication speed may become a limiting factor even at a relatively moderate vehicle speed.
However, in applications where maximum vehicle speed is not a primary consideration and/or a high degree of route following accuracy is not critical, this communication factor may not be important. For example, in rapidly crossing large expanses of open, flat land, in a relatively straight path, it may not be necessary to check position as often in the journey as it would be in navigating a journey along a curvaceous mountain road.
Conceptually, the navigation aspects of the present invention can be arbitrarily divided into the following major functions: route planning/path generation; path tracking; and obstacle handling.
See Australian Patent No. 642638 for a full discussion of these aspects.
Obstacle handling is discussed below.
B. Obstacle Handling

1. Introduction

Obstacle handling involves at least three major functions: detection of obstacles 4002, avoidance of obstacles 4002, and returning to path 3312. The returning to path function is similar to path generation and tracking, as described above.
In addition to path tracking (following), successful navigation of vehicle 310 requires that vehicle 310 be able to detect obstacles 4002 in its path, thus allowing the vehicle to stop or otherwise avoid such an obstacle before a collision occurs.
In one embodiment of the present invention, a single line infra-red laser scanner 404 (see Fig. 9) is used in a configuration where the scan is horizontal (not shown). The scan line 3810 does not contact the ground, so any discontinuities in the range data can be attributed to objects 4002 in the environment.
Since a reference path 3312 is available and the vehicle position is known relative to the reference path, only the range data within a region bounding the reference path 3312 is processed for threatening objects 4002. Objects outside of this region, or boundary zone, are ignored. The width of the boundary zone (not shown) is equal to the vehicle width plus some selected safety buffer to allow for tracking and positioning errors. This method is limited in its usefulness and is referred to as "clearance checking".
2. Detection of Obstacles

a. Clearance checking: In the simplest case of the present invention, the laser 404 may be used in a single line scan mode with successive range measurements being made at regular angular intervals as the laser scans over the field of view. Again for simplicity, these scans can commence at regular time intervals. The term "clearance checking" has been used to describe this method. In this version of the present invention, the method has been limited to processing only two dimensional data.
This type of obstacle method is limited to checking to see if the path 3312 is clear using a single line scan mode with successive range measurements being made at regular angular intervals as the scanner 404 scans over the field of view.
It does not include any methods to establish the existence of any obstacle 4002 or to create a path around it if the path is not clear. This type of method is not deemed to be a particularly useful obstacle detection method, except in very rigidly controlled environments, such as on a factory floor.
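By way of illustration only, the clearance checking idea described above can be sketched as follows; this is not the patented implementation, and the function names, threshold values and coordinate conventions are assumptions:

```python
# Hypothetical sketch of "clearance checking" on a single horizontal scan line.
import math

def clearance_check(ranges, angles, half_width, max_range, jump_threshold=0.5):
    """Return True if the boundary zone ahead appears clear.

    ranges: range readings (metres) at successive, regular angular intervals
    angles: beam angle (radians) of each reading, relative to the vehicle centreline
    half_width: half the boundary-zone width (vehicle half-width plus a safety buffer)
    max_range: readings at or beyond this value are treated as "no return"
    """
    last_in_zone = None
    for r, a in zip(ranges, angles):
        y = r * math.sin(a)               # lateral offset from the vehicle centreline
        if abs(y) > half_width:
            continue                      # outside the boundary zone: ignored
        if r < max_range and last_in_zone is not None:
            # A large jump between neighbouring in-zone returns suggests an object,
            # since a horizontal scan line should not contact the ground.
            if abs(r - last_in_zone) > jump_threshold:
                return False
        last_in_zone = r
    return True
```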
b. Filtering and edge detection scheme: A second obstacle detection embodiment of the present invention uses a multiple-line scanner 3804 (see Fig. ) whose scan 3810 contacts the ground at some distance in front of the vehicle 310. Since the scan line contacts the ground, discontinuities in range data can no longer be attributed to threatening objects 4002.
For example, profiles from natural objects such as hills and banked or crowned roads can cause discontinuities in range data. This technique of the present invention can discern between discontinuities in range data caused by threatening objects 4002 and those caused by natural objects (not shown).
In this embodiment of the present invention, a filtering scheme is used to decrease the amount of data processed and is independent of the scanner configuration used. The edges of the boundary zone are found by transferring the range data to an image plane representation 3900 (see Fig. 10), where each range value is located by a row number 3908 and a column number 3910 (a matrix representation).
Processing load is minimized by selecting a relatively small number of the scan lines available in a range image 3900. The scan lines are selected by vehicle speed, and are concentrated at, and beyond, the vehicle stopping distance. The selected scan lines from successive frames of data can overlap.
In this method, if the vehicle 310 is moving fast, the selected scan lines 3906 are far in front of the vehicle (near the top of the range image 3900). In contrast, when the vehicle is traveling slowly, the selected scan lines 3906 are closer to the vehicle (near the bottom of the range image 3900).
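A minimal sketch of this speed-dependent selection is given below; the stopping-distance model, names and parameter values are illustrative assumptions rather than the specification's own:

```python
# Illustrative sketch: pick the scan lines (rows) of the range image whose
# nominal ranges lie at or beyond the vehicle stopping distance.
def stopping_distance(speed, reaction_time=1.0, decel=2.0):
    # simple kinematic estimate: reaction distance + braking distance (m, m/s, m/s^2)
    return speed * reaction_time + speed ** 2 / (2.0 * decel)

def select_scan_lines(row_ranges, speed, num_lines=8):
    """row_ranges: nominal ground range (metres) of each image row.
    Returns indices of rows concentrated at and beyond the stopping distance."""
    d_stop = stopping_distance(speed)
    candidates = [i for i, r in enumerate(row_ranges) if r >= d_stop]
    # keep the closest qualifying rows first so coverage starts at the stopping distance
    return sorted(candidates, key=lambda i: row_ranges[i])[:num_lines]
```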
Each scan line is made up of many pixels of data. Each pixel has two parameters associated with it. First, the actual value of the pixel is the range value returned by the scanner 3804. Second, the location of the pixel on the scan line gives an indication of the angle, relative to the vehicle centerline, at which the range value was recorded. This corresponds to a cylindrical coordinate frame (R, THETA, Z) description.
Given the cylindrical description and the known scanner location with respect to the vehicle 310, the range values can be converted to a cartesian coordinate system. The result is a road profile description which can be used by a novel filtering scheme to determine if threatening objects 4002 are present in the vehicle path 3812, while ignoring effects due to natural hills and valleys in a typical roadway.
After the scanner data is converted to cartesian coordinates, the data is processed to determine which part of the scan is actually on the road 3312 and which part of the scan line is outside of the vehicle path and therefore safely ignored. Given the vehicle position and the width of a boundary (which is equal to the vehicle width plus some safety margin), the coordinates of the boundary on either side of the vehicle path can be determined. The coordinates of the boundary can be compared to the coordinates of each pixel on the current scan line. The pixels which have coordinates outside of the boundary are ignored.
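The conversion and boundary test just described might look like the following sketch, assuming a simple scanner geometry (a fixed mounting height and a known downward tilt per scan line); the frame conventions and all names are assumptions:

```python
# Hedged sketch: convert one scan line from the cylindrical description
# (range, bearing) to vehicle-frame cartesian coordinates, then drop pixels
# whose lateral position falls outside the boundary on either side of the path.
import math

def scan_to_cartesian(ranges, bearings, elevation, scanner_height):
    """bearings: horizontal angles from the vehicle centreline (radians);
    elevation: assumed downward tilt of this scan line (radians)."""
    points = []
    for r, theta in zip(ranges, bearings):
        ground = r * math.cos(elevation)
        x = ground * math.cos(theta)                  # ahead of the vehicle
        y = ground * math.sin(theta)                  # left/right of the centreline
        z = scanner_height - r * math.sin(elevation)  # height above the road plane
        points.append((x, y, z))
    return points

def inside_boundary(points, vehicle_width, safety_margin=0.5):
    half = (vehicle_width + safety_margin) / 2.0
    return [(x, y, z) for (x, y, z) in points if abs(y) <= half]
```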
The filtering scheme builds an expectation of the road profile from previously sensed road profiles. This expectation is based on three parameters which were found to adequately describe typical paved roads. These three parameters are:
o road crown: the curvature of the road cross section (perpendicular to the road centerline).
o road bank: the 'tilt' of the road profile (perpendicular to the centerline).
o road height: the height of the road centerline above a reference plane described by the location of the four tires of the vehicle 310.
Expected values of the road crown and the road bank are determined by performing a standard, least-squares Kalman filtering technique on previously sensed scanner data. The Kalman filter basically keeps a type of running average of the two parameters based on the values determined from the previous data.
In accordance with the present invention, the expected road height for a particular scan can be determined through one of two similar methods.
One is to average the road height at each pixel within the current scan line to determine a characteristic height of the scan line in question.
The second method is to filter the road height using the standard Kalman filter similar to that used when determining crown and bank expectations.
These three parameters can be used to determine a second order equation which describes the expected road profile. This expected profile is compared to the actual road profile. Any deviations between the two which exceed a preset threshold value are assumed to be threatening objects.
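As a hedged illustration of this expectation scheme, the sketch below keeps running estimates of crown, bank and height with a simple first-order update standing in for the Kalman filter, evaluates the second order expected profile, and flags deviations beyond a preset threshold; the gain and threshold values are assumptions:

```python
# Minimal sketch of the road-profile expectation; not the patented filter.
def update_estimate(previous, measured, gain=0.2):
    # running-average style update: blend the new measurement into the estimate
    return previous + gain * (measured - previous)

def expected_profile(crown, bank, height, y):
    # second order cross-section model: expected road height at lateral offset y
    return crown * y ** 2 + bank * y + height

def flag_threats(scan_points, crown, bank, height, threshold=0.3):
    """scan_points: (y, z) pairs for one scan line inside the boundary zone.
    Points deviating from the expected profile by more than the threshold
    are flagged as potential threatening objects."""
    return [(y, z) for (y, z) in scan_points
            if abs(z - expected_profile(crown, bank, height, y)) > threshold]
```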
This scheme of the present invention is viable given the assumption that any detected objects 4002 are small in comparison to the width of the road. Then, when these averaging or least squares methods are used, the effects due to objects are negligible in comparison to natural road data.
This filtering scheme also includes a very simple edge detection method which convolves the selected range data with a simple seven point weighting function.
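A sketch of such a seven point convolution is shown below; the particular kernel weights and threshold are assumptions chosen only to illustrate the idea of responding to abrupt range changes:

```python
# Hedged sketch of a seven-point convolution edge detector over one scan line.
import numpy as np

def edge_detect(ranges, threshold=1.0):
    # an antisymmetric 7-point kernel reacts to abrupt changes (edges) in range
    kernel = np.array([-1.0, -1.0, -1.0, 0.0, 1.0, 1.0, 1.0])
    response = np.convolve(np.asarray(ranges, dtype=float), kernel, mode="same")
    return np.nonzero(np.abs(response) > threshold)[0]   # indices of edge pixels
```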
c. Obstacle extraction: An additional technique of the present invention processes an entire range image 3900 from a multi-line scanner 3804 for objects. This method of the present invention accomplishes three goals:
1. Do not detect obstacles 4002 when none exist,
2. Detect obstacles 4002 when obstacles do exist, and
3. Detect the correct obstacles 4002 when obstacles exist.
Obstacle extraction is obstacle detection through the use of blob extraction.
Blob extraction is well known in the art of computer graphics. Obstacles are found by clustering similar pixels into groups called blobs. The goal of obstacle extraction is to store and process obstacles as units rather than as individual pixels.
The obstacle extraction of the present invention may be done by performing the following steps in the image plane 3900:
1. Project the vehicle path into the image plane 3900,
2. Transform range data into height data,
3. Fit a curve to the height at the center of the road (this represents the expected road height at each row),
4. Threshold the actual road height against the height expectation, and
5. Extract the obstacles (indicated by differences in actual and expected road heights which exceed the threshold).
Finding the road: In order to process all the available data, the images 3900 must be processed at the frame rate of the scanner 3804. For this reason, most of the computations in the obstacle extraction method are done in the image plane 3900. By projecting the path into the image, a large portion of the image can be ignored, and many needless computations avoided.
Assuming that the vehicle path 3812 is specified at regular intervals, the current vehicle position can be used to locate the path segment line 3902 in front of the scanner. This path 3812 is transformed from world coordinates into image coordinates by projecting the points corresponding to the road or boundary edges 3902 into the image plane 3900 (see Fig. ). A cubic spline is used to interpolate between the gaps. Thus, the center and edges of the road 3902 are found for each row 3908 in the image 3900. The pixels isolated between the road edges 3902 are converted (cylindrical to cartesian coordinates) from range to height data. The outlying pixels are discarded and not processed any further.
This has the effect of modeling the general trend of the road (up and down hills) as well as fillering out the effects of noise and small objects lying in the center of the road.
:i Thresholding: Obstacles may be located by using a height threshold. A straight height threshold would be meaningless since the surrounding terrain is not necessarily flat.
15 Hence, th- threshold is referenced against the expected height, as predicted by the third order fit, at the row number 3908 of the given pixel.
In this manner, a hill is not considered an obstacle since the height expectation and the actual height should match very closely. On the other hand, a ~real obstacle 4002 would barely reflect the expected road height (due to the least squares fit), and therefore is readily found by thresholding. The result of this thresholding is a binary image (not shown) suitable for a "blob extraction." The binary image only indicates where an object is or is not present in the image.
Blob extraction: Blob extraction works by clustering adjacent set pixels (indicating zn obstacle 4002 is present) together and treating them as a unit. Two pixels are adjacent if they are either: 1. In the same column 3910 and have consecutive row numbers 3908, or 2. In the same row 3908 and have consecutive column numbers 3910.
By grouping pixels together into blobs, the obstacles 4002 can be treated as a whole unit and are suitable for further processing.
DC:TJII:TUI:#17557.SPC 16 Mrch 1995 hB -L II I 24 Applications: One way to use extracted blobs is to pipe them as input into another program.
For example, the objects 4002 can be parsed into coordinates and used to accumulate a global object map 4004 (see Fig. 11). This map 4002 is then passed into another program, and used to do collision avoidance or path planning.
3. Avoidance of Obstacles Once the present invention detects an obstacle 4002 in the path of the vehicle 310 (see Fig. 11), it must then avoid a collision with the object. Certain assumptions are made concerning the obstacle avoidance problem: 1. The obstacle environment is populated with obstacles 4002 that can be represented by convex-polygons or convex lines; 2. The navigation methods only have access to the local environment information in the form of a local map representing all of the visible faces of the obstacle from the position of the vehicle 310, which can 15 be obtained from unprocessed laser range data or from data processed through blob-extraction; 3. The vehicle 310 is a conventionally steered type which has constrainis on its speed and acceleration and constraints on its steering angle and the rate of change in the steering angle.
To deal with the obstacle avoidance problem, the present invention divides it into two sub-problems.
First, to decide if any obstacles are in the way, and if so, which side should the vehicle pass on. Then select a sub-goal 4006, which will lead the vehicle 310 around the obstacle 4002, leading towards a higher level goal 4008, which is to get back on the desired path.
Second, once a sub-goal 4006 is selected, make a steering decision which drives the vehicle 310 towards the sub-goal 4006, while steering clear of the obstacle 4002. A sub-goal selection method and a steering decision method of the present invention solve these two sub-problems.
The above enumerated assumptions are managed in the following process: DCC:TJII:TG:f17557.SPC 16 MTch .995 1 I 25 The obstacle locations are obtained from the laser range scanner 3804 or 404.
The range data generated by the scanner 3804 or 404 are processed to produce a list of polygonal faces, modeling the visible portions of the obstacle 4002 from the vehicle position. Each time new range data become available, a sub-goal selection method is executed to generate a sub-goal 4006 and determine regions of safe navigation (free-space 4010) for the steering decision method. The frequency at which the sub-goal selection method can be executed depends on the rate at which the scanner 3804 or 404 can collect data. The achievable vehicle speed, in turn, depends on this frequency of execution.
For the steering decision method, a higher sampling rate is desirable in order to produce a smooth path. Therefore, the steering decision method is executed more frequently than the sub-goal method.
The basic flow of the sub-goal method is the following: 1 save last initial-subgoal, subgoal, and free-space, set goal_blocked 15 flag to true; 2 if final goal is visible a.
generate direct goal 0 if direct goal is visible set goal_blocked flag to false; 3 otherwise generate an initial subgoal set subgoal to initial subgoal recursively generate subgoals until the latest one is visible, if subgoal not feasible, abort; 4 if goal_blocked flag is true restore old initial-subgoal, subgoal, and freespace; otherwise generate free-space if free-space is not safe restore old initial-subgoal, subgoal, and freespace.
DCC:nTJI:TG:l#17557.SPC 16 March 1995
M
26 Sub-goal Method: First (step 1 above), the initial-subgoal, subgoal, and freespace generated from the previous iteration is saved. This assures that when the newly generated subgoal is not safe, the old subgoal can continue to be pursued.
Next (step 2 above), when the final goal is visible, attempt to generate a direct goal which is not associated with any obstacles 4002. Although the final goal is visible in the local map, it does not necessarily mean that no obstacle is blocking the final goal because obstacles outside the scanner range (both distance and angular wise) will not be represented in the local map. Therefore, when generating a direct goal, ensure that the goal is located in the cone area which is covered by the scanner 3804 or 404 to avoid placing a subgoal on or behind an obstacle 4002 that is not in the kocal map.
:i The next step (step 3 above) handles the situation where the final goal is blocked by an obstacle 4002 in the local map. In this case, the obstacle 4002 that blocks the line of sight to the final goal is first determined.
15 Given a blocking obstacle, there are two possible ways of going around it.
If both edges of the obstacle are in the range of the scanner 3804 or 404, we may choose to go around the edge which gives the minimum sum of the distances from the vehicle 310 to edge and from the edge to the final distance. If only one edge of the obstacle 4002 is in the range, choose that edge to go around. If none of the edges is visible, always arbitrarily choose the left edged to go around. Once the edge to go around is determined, place the initial subgoal away from the edge at a 00*000 distan that is proportional to the vehicle size.
Because of this displacement, the resulting subgoal may be blocked by other obstacles 4002. This calls for the recursive generation of subgoa! on the obstacle, which blocks the line of sight to the subgoals just generated. This recursive process continues until a subgoal visible to the vehicle 310 is generated. Each subgoal so generated is checked for viability. By viability it is meant that the subgoal does not lead the vehicle 310 towards a gap between two obstacles 4002 which is too small for the vehicle to pass throu3h. When such a condition is detected, the vehicle 310 will stop.
DCC:TJIH:T:#17557.SPC 16 March 1995
I-
27 9409 9.9.
9* 99 *c 4 0* 9. 0 4. *0 *o 9 9 9 9 0949 The direct subgoal generated in the second step (step 2 above) could possibly be obscured from the vehicle 310. If such is indeed the case, the old subgoals from the previous iteration is restored and used next (step 4 above).
In the final step (step 5 above), generate the free-space 4010 for the visible subgoal, which is a triangular region that contains no obstacles. Once the free-space 4010 is generated, the safeness of the subgoal and free-space 4010 can be determined. When the new subgoal and free-space 4010 is not safe, the old subgoal and free-space is again retained. Otherwise, the new subgoal and free-space is used.
The steering decision method of the present invention is composed of two major components: transferring state constraints to control constraints; and determination of the desired control vector.
Once the control constraints and the desired control vector are computed, the control vectors can be determined using optimization techniques well known in the art.
4. Return to Path

The present invention includes a method, as shown diagrammatically in Fig. 11, whereby a safe path around a detected object 4002 will be plotted and navigated so that the vehicle 310 will reacquire the reference path after avoiding the object 4002.
5. Scanner System

a. Introduction: Referring to Figs. 9 and 13, the present invention also includes a laser scanner system 404. The scanner 404 is used to find obstructions 4002 (see Fig. 11) that randomly crop up in the path of the vehicle 310, as previously discussed.
Sources of such obstructions 4002 may be varied and numerous depending on the particular work site. They may include fallen trees and branches, boulders, moving and parked vehicles, and people.
The scanner 404 gives the autonomous vehicle 310 the ability to detect and deal with the external world as conditions require.
b. LASER Scanner: The major components of the laser scanner system 404 are depicted in Fig. 13.
A laser range finder 3804 uses an infra-red beam 3810 to measure distances between the range finder unit 3804 and the nearest object 4002. A brief pulse is transmitted by the unit 3804 and the time for the beam 3810 to reflect off an object 4002 and return gives the distance.
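For illustration, the time-of-flight relation implied here is simply that the measured distance is half the round-trip travel of the pulse; the snippet below is an assumption-laden aside, not part of the specification:

```python
# Time-of-flight range: the beam travels out and back, so distance = c * t / 2.
C = 299_792_458.0          # speed of light, m/s

def range_from_pulse(round_trip_seconds):
    return C * round_trip_seconds / 2.0

print(range_from_pulse(200e-9))   # a 200 ns round trip is roughly 30 m
```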
The beam 3810 from the range finder 404 is reflected by a rotating mirror 4222, giving the range finder 404 a 360 degree view of the world. Mirror rotation is accomplished through a motor 4206. The motor speed is controlled via a terminal 4210, which communicates with a motor amplifier/controller 4220 through a standard RS232C serial link 4224. Synchronization between laser firings and mirror angular position is done with an encoder.
Distance data on a line 4226 from the laser range finder 404 is taken by an interface circuit 4228, which transmits the data differentially to a buffer circuit 4214.
Individual pieces of data are collected by the buffer circuit 4214 until the mirror 4222 makes one full revolution. This set of data comprises one scan. When a scan is complete, the buffer circuit 4214 signals a processor 4212, whereupon data for the entire scan is transferred to the processor 4212 for processing.
c. Scanner System Interface: The interface circuit 4228 has three functions.
First, it acts as a safety monitor. A situation could occur where the mirror 4222 would stop rotating, as in the case of the drive belt 4230 between the motor 4206 and mirror 4222 breaking. Under this condition, the laser 4204 would continue to fire, and since the mirror 4222 is stationary, it would fire at a single point (dangerous for anyone looking directly into the laser beam). The interface circuit 4228, however, senses when the angular velocity of the mirror 4222 falls below half a revolution per second, and disables the laser 4204 if such a condition occurs.
The second function is to disable the laser 4204 from firing for part of the 360 degree scan area. Typically, the laser scanner unit 404 will be mounted in front of a vehicle 310, and the field of interest is in the 180 degree area in front of the vehicle. The vehicle itself will block the back portion of the 360 degree scan area. In this case, the circuitry 4228 will prevent the laser 4204 from firing into the vehicle, extending the life of the laser diode while receiving range data for the area in front of the vehicle. The enabling and disabling of the laser range-finder 4204 is done through two sensors (not shown) mounted near the mirror housing 4222. For testing purposes, or for applications where a 360 degree scan is desirable, the disable feature can be turned off through a DIP switch.
The third function of the circuit 4228 is to convert signals between single-ended and differential form. TTL signals from the laser unit 4204 are differentially transmitted to the buffer circuit 4214, and differentially transmitted signals from the buffer circuit 4214 are converted to TTL levels. This prevents noise contamination along the cable 4226 connecting the two circuits.
d. Scanner System Buffer Circuit:
The function of the buffer circuit 4214 is to synchronize laser 4204 firings with the angular position of the mirror 4222, to collect data for one complete scan, and to transmit the scan to the computer 4212 for processing.
The angular position of the mirror 4222 can be determined through signals sent by the encoder 4208. The buffer circuit 4214 uses two signals from the encoder 4208: the Z and A channels.
The Z channel is the encoder index; it gets asserted once per revolution of the encoder 4208, and is used to signal the beginning of the scan area.
The A channel is one line of the two-line quadrature output of the encoder 4208, and pulses 1000 times per revolution of the encoder. This channel is used to trigger laser firings.
One additional signal is needed to fully synchronize the scan field with the encoder signals. There is a gearing ratio of 2:1 between the encoder/motor 4206 and the mirror 4222. Two revolutions of the encoder 4208 rotate the mirror 4222 once.
This translates to 2 Z channel pulses and 2000 A channel pulses per revolution of the mirror 4222, and the inability to differentiate the beginning of the first half of the scan from the beginning of the second half.
To fully synchronize the scan field, the DB (dead band) signal generated by the interface circuit 4228 is used. The DB signal, used to disable the laser 4204 from firing in the back half of the scan, allows the differentiation of the front and back halves of the scan. The Z and DB signals together signal the beginning of the scan area.
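Under these figures, the mirror angle corresponding to a given A pulse count can be recovered arithmetically. The following C sketch assumes the pulse counter is reset at the combined Z and DB scan-start mark; the names are illustrative:

    /* 1000 A pulses per encoder revolution and 2:1 gearing give 2000
     * pulses per mirror revolution. */
    #define A_PULSES_PER_ENCODER_REV 1000u
    #define ENCODER_REVS_PER_MIRROR_REV 2u

    static double mirror_angle_deg(unsigned pulses_since_scan_start)
    {
        const unsigned per_rev =
            A_PULSES_PER_ENCODER_REV * ENCODER_REVS_PER_MIRROR_REV; /* 2000 */
        return 360.0 * (double)(pulses_since_scan_start % per_rev)
                     / (double)per_rev;
    }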
The second task of the buffer circuit 4214, to collect data for one complete scan, is accomplished through the A channel of the encoder 4208. The 2000 pulses of the channel are divided by either 2, 4, 8, or 16, selected through DIP switches (not shown) on the circuit board 4228. This allows the number of data points per scan to be varied between 1000, 500, 250, and 125. The divided signal is used to trigger the laser range-finder 4204 at appropriate angular intervals, and to store the resulting range data in memory 4214.
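The arithmetic implied by the divide ratios is shown in the following sketch (illustrative names only; no claim is made about the actual firmware):

    /* 2000 A pulses per mirror revolution divided by 2, 4, 8, or 16
     * yield 1000, 500, 250, or 125 laser firings per scan. */
    #define A_PULSES_PER_MIRROR_REV 2000u

    static unsigned points_per_scan(unsigned divide_ratio) /* 2, 4, 8, 16 */
    {
        return A_PULSES_PER_MIRROR_REV / divide_ratio;
    }

    static double degrees_between_firings(unsigned divide_ratio)
    {
        return 360.0 / (double)points_per_scan(divide_ratio);
    }

For example, a divide ratio of 4 gives 500 points per scan, or one firing roughly every 0.72 degrees of mirror rotation.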
The sequence of events is as follows. W (write) is asserted for one clock cycle upon a rising edge on the divided A signal. At this point, data from a previous T (laser trigger) is available and is stored in memory 4214. T is asserted the following clock cycle, triggering the laser and putting the resulting range data onto the memory input bus 4226. This data is written on the next W pulse, repeating the cycle.
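Expressed as software rather than the clocked logic actually used, the pipelined cycle amounts to the following sketch; store_previous_range and fire_laser are placeholders, not real routines:

    /* On each rising edge of the divided A signal, write the range word
     * produced by the previous trigger, then issue a new trigger. */
    void store_previous_range(void);
    void fire_laser(void);

    static void on_divided_a_rising_edge(void)
    {
        store_previous_range(); /* W: latch data from the previous T */
        fire_laser();           /* T: new range data onto the memory bus */
    }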
The final task of the buffer circuit 4214 is to transmit the scan data to a computer 4212 for processing. Completed scans are signaled by the Z and the DB signals (the beginning of a scan is also the end of a previous one). Upon a completed scan, an interrupt request line is asserted, and remains asserted until either the mirror 4222 has made half a revolution, or the processor 4212 acknowledges the interrupt. In the first case, the half revolution of the mirror 4222 is signaled by a subsequent Z pulse and indicates a timeout condition; the processor 4212 has failed to respond and the data is lost.
In the normal case, the interrupt is acknowledged. Upon receipt of the acknowledgement, STR (data strobe) is asserted and held until IBF (input buffer full) is received. During this time, data is put on the data bus 4230 and may be read by the computer 4212. Data is valid on the bus 4230 until IBF is asserted, at which time STR is de-asserted and the data removed from the bus 4230. Once the processor 4212 detects the de-assertion of STR, it de-asserts IBF. This causes STR to be asserted for the next piece of data, repeating the cycle.
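From the processor side, the handshake can be sketched as a simple polling loop; the register accessors below are hypothetical stand-ins for the actual bus interface:

    #include <stdint.h>

    int      read_str(void);       /* non-zero while the buffer asserts STR */
    uint16_t read_data_bus(void);  /* range word currently on the bus */
    void     write_ibf(int level); /* drive the IBF line */

    static void read_scan(uint16_t *scan, unsigned n_points)
    {
        for (unsigned i = 0; i < n_points; i++) {
            while (!read_str())        /* wait for the data strobe */
                ;
            scan[i] = read_data_bus(); /* data is valid while STR is held */
            write_ibf(1);              /* acknowledge: input buffer full */
            while (read_str())         /* buffer drops STR, removes data */
                ;
            write_ibf(0);              /* ready for the next word */
        }
    }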
Scan data is collected and stored in two memory banks 4214. This avoids shared memory and synchronization problems between scan storage and scan transmission. Data for a new scan is stored in one bank, while the previous scan is being transmitted from the other bank.
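A minimal double-buffering sketch of this arrangement; transmit_scan is a placeholder, and the bank size assumes the maximum of 1000 points per scan:

    #include <stdint.h>

    #define MAX_POINTS_PER_SCAN 1000

    void transmit_scan(const uint16_t *scan, unsigned n_points);

    static uint16_t bank[2][MAX_POINTS_PER_SCAN];
    static int fill_bank; /* index of the bank currently being filled */

    static void on_scan_complete(unsigned n_points)
    {
        int done = fill_bank; /* bank holding the scan just completed */
        fill_bank ^= 1;       /* new samples go to the other bank */
        transmit_scan(bank[done], n_points);
    }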
The buffer circuit 4214 removes from the processor 4212 the responsibility of synchronizing laser firings with mirror position and collecting individual pieces of data. It allows more efficient use of CPU time, as data is received in scan-sized chunks. The processor 4212 spends its time processing the data, not collecting it.

Claims (14)

1. A computer based method of detecting an object in a road, said computer based method adapted for use with a land based vehicle, said computer based method comprising the steps of:
determining a stopping distance of said vehicle;
scanning said road ahead of said vehicle to obtain an image plane of said road, said image plane comprised of scan lines, said scan lines including range information;
selecting those scan lines in said image plane that include range information indicating distances that are beyond said stopping distance;
projecting left and right edge points of said road into said image plane; and
processing said selected scan lines to detect obstacles between said left and right edge points.
2. A computer based system for detecting an object in a road, said computer based system adapted for use with a land based vehicle, said computer based system comprising:
means for determining a stopping distance of said vehicle;
means for scanning said road ahead of said vehicle to obtain an image plane of said road, said image plane comprised of scan lines, said scan lines including range information;
means for selecting those scan lines in said image plane that include range information indicating distances that are beyond said stopping distance;
means for projecting left and right edge points of said road into said image plane; and
means for processing said selected scan lines to detect obstacles between said left and right edge points.
3. A method for detecting an object in a road ahead of a vehicle, said method comprising the steps of:
calculating edge points of said road from stored road data;
constructing an image plane comprised of a plurality of scan lines, each scan line including range information at various points along each scan line;
projecting said calculated edge points into said image plane;
determining a left edge point and a right edge point of each scan line in said image plane using said projected edge points, said left and right edge points corresponding to left and right edges of the road; and
detecting a discontinuity in said range information along each scan line between said left and right edge points thereby detecting said object ahead of said vehicle.
4. The method of claim 3, wherein said step of constructing an image plane comprises the step of:
constructing said image plane comprised of said plurality of scan lines, each scan line including range information to positions located on said road in front of said vehicle at various points along each scan line.
5. The method of claim 3, further comprising the step of:
modeling said road using said range information in said scan line between said left edge point and said right edge point with an equation.
6. The method of claim 5, wherein said step of modeling further comprises modeling said road based on road parameters including at least one of road crown, road tilt, and road height.
7. The method of claim 5, wherein said step of modeling further comprises modeling said road with a third order polynomial equation based on road parameters including road crown, road tilt, and road height.
8. A system for detecting an object in a road ahead of a vehicle, said system comprising:
means for calculating edge points of said road from stored road data;
means for constructing an image plane comprised of a plurality of scan lines, each scan line including range information at various points along each scan line;
means for projecting said calculated edge points into said image plane;
means for determining a left edge point and a right edge point for each scan line in said image plane using said projected edge points, said left and right edge points corresponding to left and right edges of said road; and
means for detecting a discontinuity in said range information along each scan line between said left and right edge points thereby detecting said object ahead of said vehicle.
9. The system of claim 8, wherein said range information includes range information to positions located on said road in front of said vehicle.
10. The system of claim 8, further comprising the step of:
modeling said road using said range information in said scan line between said left edge point and said right edge point with an equation.
11. The system of claim 10, wherein said means for modeling further comprises means for modeling said road based on road parameters including at least one of road crown, road tilt, and road height.
12. The system of claim 10, wherein said step of modeling further comprises modeling said road with a third order polynomial equation based on road parameters including road crown, road tilt, and road height.
13. The method as claimed in claims 1 or 3 substantially as hereinbefore described with reference to the accompanying figures.
14. The system as claimed in claims 2 or 8 substantially as hereinbefore described with reference to the accompanying figures.

DATED: 14 April 1997
CARTER SMITH BEADLE
Patent Attorneys for the Applicant: CATERPILLAR INC.

ABSTRACT
A computer based method is used to project road edges ahead of a vehicle (310) onto an image plane (3900) to thereby detect obstacles (4002) ahead of the vehicle (310). The method involves calculating the edge points of the road from stored road data and projecting these edge points onto the image plane (3900). From a scan line of range data, a left edge point and a right edge point are determined, which points correspond to left and right edges of the road.
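For illustration only, the detection step summarized in the abstract and claims (flagging a discontinuity in range between the projected left and right road edges of a scan line) might be sketched as follows; the column indices and the jump threshold are assumed inputs, not values taken from the specification:

    #include <math.h>
    #include <stdbool.h>

    /* Flag any abrupt range change between the road edge columns. */
    static bool scan_line_has_obstacle(const double *range, int left_col,
                                       int right_col, double jump_threshold_m)
    {
        for (int c = left_col + 1; c <= right_col; c++) {
            if (fabs(range[c] - range[c - 1]) > jump_threshold_m)
                return true; /* discontinuity within the road edges */
        }
        return false;
    }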
AU14904/95A 1989-12-11 1995-03-16 System and method for detecting obstacles in a road Expired AU679321B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU14904/95A AU679321B2 (en) 1989-12-11 1995-03-16 System and method for detecting obstacles in a road

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU642638 1989-12-11
AU14904/95A AU679321B2 (en) 1989-12-11 1995-03-16 System and method for detecting obstacles in a road

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU46045/93A Division AU658325B2 (en) 1989-12-11 1993-09-02 Integrated vehicle positioning and navigation system apparatus and method

Publications (2)

Publication Number Publication Date
AU1490495A AU1490495A (en) 1995-06-29
AU679321B2 true AU679321B2 (en) 1997-06-26

Family

ID=3704970

Family Applications (1)

Application Number Title Priority Date Filing Date
AU14904/95A Expired AU679321B2 (en) 1989-12-11 1995-03-16 System and method for detecting obstacles in a road

Country Status (1)

Country Link
AU (1) AU679321B2 (en)

Also Published As

Publication number Publication date
AU1490495A (en) 1995-06-29

Similar Documents

Publication Publication Date Title
EP0936518B1 (en) Integrated vehicle positioning and navigation system, apparatus and method
US5610815A (en) Integrated vehicle positioning and navigation system, apparatus and method
US5646845A (en) System and method for controlling an autonomously navigated vehicle
EP2338029B1 (en) Control and systems for autonomously driven vehicles
US20070219720A1 (en) Navigation and control system for autonomous vehicles
AU679321B2 (en) System and method for detecting obstacles in a road
AU677755B2 (en) Integrated vehicle positioning and navigation system, apparatus and method
AU666033B2 (en) Integrated vehicle positioning and navigation system apparatus and method
AU658325B2 (en) Integrated vehicle positioning and navigation system apparatus and method
AU687634B2 (en) Integrated vehicle positioning and navigation system, apparatus and method
AU677889B2 (en) Integrated vehicle positioning and navigation system, apparatus and method
AU672731B2 (en) Integrated vehicle positioning and navigation system, apparatus and method