GB2603133A - Free Space Estimation - Google Patents
Free Space Estimation
- Publication number
- GB2603133A GB2603133A GB2101041.8A GB202101041A GB2603133A GB 2603133 A GB2603133 A GB 2603133A GB 202101041 A GB202101041 A GB 202101041A GB 2603133 A GB2603133 A GB 2603133A
- Authority
- GB
- United Kingdom
- Prior art keywords
- object detection
- free space
- polygon
- segment
- segments
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/93—Radar or analogous systems specially adapted for specific applications for anti-collision purposes
- G01S13/931—Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/06—Systems determining position data of a target
- G01S13/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/93—Lidar systems specially adapted for specific applications for anti-collision purposes
- G01S17/931—Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Image Analysis (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
A free space determination module and method for estimating free space in an occupancy grid. The module comprises a memory configured to store object detection data relating to one or more object detections by an object detection system; and a controller configured to: obtain one or more object detections in a field of view of the object detection system 5.1; divide the field of view of the object detection system into multiple segments 5.2; select an object detection of the object detection system in each of the multiple segments of the field of view containing an object detection 5.3; use locations of the selected object detections to plot a polygon representing a free space envelope 5.4; and designate cells of an occupancy grid as representing free space by applying a triangle rasterization algorithm to the polygon representing the free space envelope 5.5. The controller preferably selects the object detection determined to be closest to the sensor in each of the segments of the field of view containing an object detection. It may be determined that no object detections are present in a segment of the field of view if no objects are detected within a threshold distance of a sensor.
Description
Free space estimation
Technical field
Aspects of the disclosure relate to estimating free space. In particular, embodiments described herein pertain to estimating free space in occupancy grids in the field of autonomous vehicles.
Background
With the ongoing development of motor vehicles that are fully autonomous and/or fitted with ADAS (Advanced Driver Assistance Systems), many techniques have been developed for reliably estimating the vehicle environment on the basis of a large amount of data coming from one or more sensors of the vehicle, such as lidars and/or radars. A widely used approach consists of detecting objects or obstacles in the environment of the vehicle using one or more sensors, then converting the raw sensor data into an occupancy grid containing cells associated with respective occupancy probabilities. The occupancy probability of each cell is calculated using an algorithm called an "Inverse Sensor Model", which converts the detection data and any additional information into occupancy probabilities. Such an occupancy grid 100 is shown in Figure 1.
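By way of illustration, inverse sensor models are commonly implemented in log-odds form so that successive observations of a cell can be fused by simple addition. The following is a minimal sketch, not taken from the patent: the grid layout, the `P_FREE`/`P_OCC` values and the helper names are illustrative assumptions.

```python
import numpy as np

# Minimal inverse-sensor-model sketch: fuse per-cell observations into a
# log-odds occupancy grid. P_FREE and P_OCC are illustrative values only.
P_FREE, P_OCC = 0.2, 0.8

def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_cell(log_odds: np.ndarray, ix: int, iy: int, occupied: bool) -> None:
    """Accumulate one observation of cell (ix, iy); addition in log-odds
    space corresponds to Bayesian fusion of independent observations."""
    log_odds[iy, ix] += logit(P_OCC if occupied else P_FREE)

def occupancy(log_odds: np.ndarray) -> np.ndarray:
    """Recover occupancy probabilities from the accumulated log-odds."""
    return 1.0 / (1.0 + np.exp(-log_odds))
```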
The object detection system used to detect objects or obstacles may obtain such object detections directly using sensors such as cameras capturing images that are then analysed using image recognition software. Object detection systems using indirect sensors may also be employed. Such systems detect reflections of waves emitted by the object detection system. Examples of such indirect methods include radar and lidar systems.
In radar systems, radio waves may be emitted by an array of antennas across a field of view. Radio waves that have interacted with objects in the field of view may be reflected or scattered and detected by one or more receiving antennas. The detected radio signals are then processed to obtain object detection information indicating the location of the detected object. In lidar systems, a similar principle applies except that, instead of radio waves, laser light is emitted and reflections and/or scatterings thereof are detected and processed to obtain object detection data.
In order to determine free space from a set of object detections, an approach known as ray casting or line casting has hitherto been employed and is illustrated in Figure 2A. Figure 2A shows a plan view of a field of view of a sensor used to obtain object detection data. A notional line is cast from the sensor origin and terminates at an object detection detected by the object detection system. A line may be cast to each object detection in the field of view.
A line rasterization algorithm, such as the Bresenham algorithm, is then employed to convert the line cast to the object detection into a set of cells of the occupancy grid to be designated as free space. The use of a line rasterization algorithm to determine the cells designated as free space is illustrated in Figure 2B. As shown in Figure 2B, a line is cast from the triangle representing the location of a sensor of an object detection system to the star representing the location of an object detection.
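For reference, a standard integer implementation of the Bresenham algorithm is sketched below; it returns the grid cells crossed by a cast line, which the ray casting approach would then designate as free. The cell-index convention is an assumption.

```python
def bresenham(x0: int, y0: int, x1: int, y1: int) -> list[tuple[int, int]]:
    """Cells visited by a line from (x0, y0) to (x1, y1), endpoints included."""
    cells = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:   # step in x
            err += dy
            x0 += sx
        if e2 <= dx:   # step in y
            err += dx
            y0 += sy
    return cells
```

Note that this loop runs once per cast line, so the cost of the ray casting approach scales with the number of object detections — the bottleneck discussed below.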
The approach of using a line rasterization algorithm to determine free space in an occupancy grid has several drawbacks. Since a line must be cast to each object detection in the detection set, the ray casting approach is computationally expensive. This step in the processing of object detections can lead to a significant bottleneck in the process.
Furthermore, regions of the field of view in which no objects are detected, such as long expanses of road directly in front of a vehicle in which there are no obstacles, cannot be modelled as free space using this approach, since there are no object detections to which a line may be cast. Such a region 200 is shown in Figure 2A. This is a significant drawback of the ray casting approach. Furthermore, visual artefacts may be encountered, such as those arising from the Moiré effect. The Moiré effect may arise especially in peripheral regions of the field of view, where there may be a higher density of object detections.
An improved approach is therefore required.
Summary
In accordance with a first aspect of the present disclosure there is provided a free space determination module for estimating free space in an occupancy grid, the free space determination module comprising: a memory configured to store object detection data relating to one or more object detections by an object detection system; and a controller configured to: obtain one or more object detections in a field of view of the object detection system; divide the field of view of the object detection system into multiple segments; select an object detection of the object detection system in each of the multiple segments of the field of view containing an object detection; use locations of the selected object detections to plot a polygon representing a free space envelope; and designate cells of an occupancy grid as representing free space by applying a triangle rasterization algorithm to the polygon representing the free space envelope.
The controller may be further configured to select the object detection determined to be closest to a sensor origin of the object detection system in each of the segments of the field of view containing an object detection.
The controller may be further configured to: determine that no object detections are present in a segment of the field of view if no objects are detected within a threshold distance of a sensor origin of the object detection system in that segment.
Plotting a polygon representing a free space envelope may comprise: plotting a preliminary polygon; identifying areas of the preliminary polygon located beyond the threshold distance; and plotting a final polygon representing a free space envelope excluding areas of the preliminary polygon located beyond the threshold distance.
The controller may be configured, in response to determining that no object detection is present in a segment of the field of view, to plot an edge of the polygon as a transverse straight line extending between a first point located at the threshold distance along a first radial boundary of the segment and a second point located at the threshold distance along a second radial boundary of the segment.
The controller may be configured to: plot successive vertices of the polygon comprising a first vertex at a sensor origin and further vertices at the locations of the selected object detections in each of the segments containing an object detection; and plot edges of the polygon as a series of straight lines connecting the successive vertices.
The controller may be configured to: plot, for each segment containing a selected object detection, an edge of the polygon as a transverse straight line passing through the selected object detection and connecting radial boundaries of that segment; plot edges of the polygon as radial lines connecting transverse straight lines of neighbouring segments along radial boundaries separating the neighbouring segments; and plot edges of the polygon as respective radial lines connecting a sensor origin of the object detection system to the transverse straight line of each peripheral segment along a peripheral radial boundary of each peripheral segment.
The cells of the occupancy grid designated as free space may be assigned an occupancy probability according to a uniform distribution, a Gaussian distribution or an exponential distribution.
The multiple segments may comprise peripheral segments and one or more central segments, and each of the multiple segments may subtend substantially the same angle.
The multiple segments may comprise peripheral segments and one or more central segments, and each peripheral segment may subtend an angle greater than the angle subtended by the or each central segment.
The object detection system may be a lidar or radar system.
In accordance with a second aspect of the present disclosure there is provided an autonomous vehicle system comprising: a vehicle control system; an object detection system; and the free space determination module of the first aspect.
In accordance with a third aspect of the present disclosure there is provided a method of estimating free space in an occupancy grid, the method comprising: obtaining one or more object detections in a field of view of an object detection system; dividing the field of view of the object detection system into multiple segments; selecting an object detection of the object detection system in each of the multiple segments of the field of view containing an object detection; using locations of the selected object detections to plot a polygon representing a free space envelope; and designating cells of an occupancy grid as representing free space by applying a triangle rasterization algorithm to the polygon representing the free space envelope.
Selecting an object detection may comprise selecting the object detection determined to be closest to a sensor of the object detection system in each of the multiple segments of the field of view containing an object detection.
The method may further comprise: determining that no object detections are present in a segment of the field of view if no objects are detected within a threshold distance of a sensor of the object detection system in that segment.
Plotting a polygon representing a free space envelope may comprise: plotting a preliminary polygon; identifying areas of the preliminary polygon located beyond the threshold distance; and plotting a final polygon representing a free space envelope excluding areas of the preliminary polygon located beyond the threshold distance.
The method may further comprise: in response to determining that no object detections are present in a segment of the field of view, plotting an edge of the polygon as a transverse straight line extending between a first point located at the threshold distance along a first radial boundary of the segment and a second point located at the threshold distance along a second radial boundary of the segment.
Plotting the polygon may comprise: plotting successive vertices of the free space polygon comprising a first vertex at a sensor origin and further vertices at the locations of the selected object detections in each of the segments containing an object detection; and plotting edges of the free space polygon as a series of straight lines connecting the successive vertices.
Plotting the polygon may comprise: plotting, for each segment containing a selected object detection, an edge of the free space polygon as a transverse straight line passing through the selected object detection and connecting radial boundaries of that segment; plotting edges of the free space polygon as radial lines connecting transverse straight lines of neighbouring segments along radial boundaries separating the neighbouring segments; and plotting edges of the free space polygon as respective radial lines connecting a sensor origin of the object detection system to the transverse straight line of each peripheral segment along a peripheral radial boundary of each peripheral segment.
The cells of the occupancy grid designated as free space may be assigned an occupancy probability according to a uniform distribution, a Gaussian distribution or an exponential distribution.
The multiple segments may comprise peripheral segments and one or more central segments, and wherein each of the multiple segments subtends substantially the same angle.
The multiple segments may comprise peripheral segments and one or more central segments, wherein each peripheral segment subtends an angle greater than the angle subtended by the or each central segment.
The object detection system may be a lidar or a radar system.
In accordance with a fourth aspect of the present disclosure there is provided a computer readable medium comprising computer readable instructions that, when executed by a computing apparatus, cause the computing apparatus to perform the method of the third aspect of the present disclosure.
Brief Description of the Drawings
So that the present disclosure may be fully understood, example aspects will be described with reference to the accompanying drawings, in which:
- Figure 1 shows an occupancy grid;
- Figure 2A illustrates free space estimation using a ray casting technique;
- Figure 2B illustrates an occupancy grid populated using a line rasterization algorithm;
- Figure 3 is a schematic block diagram illustrating components of an automated vehicle system;
- Figures 4A-4E illustrate a field of view of an object detection system during successive steps of an example process;
- Figure 5 is a flow chart illustrating steps of an example process;
- Figure 6 illustrates a rasterization of a triangular segment of a free space envelope; and
- Figure 7 illustrates an occupancy grid containing a free space envelope.
Detailed Description
Example aspects of the present disclosure involve selecting certain object detections obtained from an object detection system (such as a radar or lidar system) to form a polygon representing a free space envelope. The free space envelope is rasterized using a triangle rasterization algorithm to designate certain cells of the occupancy grid as free space. As will be discussed herein, this arrangement is beneficial since a free space envelope may be formed covering areas of a field of view that, using prior art methods, could not previously be modelled as free space. Furthermore, the approach described herein has been found to be computationally efficient in comparison with prior approaches.
Automated vehicles described herein may be fully automated vehicles requiring little or no driver intervention. Alternatively, an automated vehicle may be partially automated: the driver does not directly control the vehicle but must still supervise its behaviour and intervene in certain circumstances. Furthermore, in vehicles controlled by the driver, driver assistance systems may be provided. For example, a parking assistance system uses sensors to detect the vehicle's environment and outputs information to assist the driver when parking the vehicle.
Figure 3 is a schematic block diagram illustrating an automated vehicle system in which a free space determination module 300 is provided. The free space determination module 300 is in communication with an object detection system 310 and a vehicle control system 320.
The free space determination module 300 includes a memory 301. The memory 301 has stored therein object detection data 302 obtained from the object detection system 310.
The object detection data 302 may include a list of object detections indicating the locations of objects within a field of view of the object detection system 310. The memory 301 also has stored therein computer readable instructions 303 in the form of programming code that, when executed, cause the free space determination module 300 to perform the actions described herein.
The free space determination module 300 includes a controller 304. The controller 304 may be implemented as processing circuitry and executes the computer readable instructions 303 stored in the memory 301. The controller 304 includes a field of view divider 305 for dividing the field of view of the object detection system 310 into multiple segments. The controller 304 includes an object detection selector 306 for selecting one or more object detections from each segment of the field of view containing an object detection. The controller 304 includes a polygon plotter 307 for determining a polygon using the locations of the selected object detections. The controller 304 includes a triangle rasterizer 308 configured to convert the determined polygon into a series of triangles and rasterize the series of triangles to identify cells of an occupancy grid to be designated as representing free space.
In embodiments described herein, the object detection system 310 is a radar or lidar system in which object or obstacle information is provided using an indirect sensor approach. The radar and/or lidar systems used in association with embodiments described herein operate in a standard manner known in the art, and so extended discussion thereof will not be presented herein. In embodiments using a radar system, the object detection system 310 includes hardware such as a transmitter, one or more transmitting antennas, one or more receiving antennas (which may be the same antennas used for transmission), a receiver, and signal processing apparatus to process the received radio signal to obtain object detection information. In embodiments using a lidar system, the object detection system 310 includes a laser emitter, one or more sensors and appropriate signal processing components to obtain object detection information. In both radar and lidar examples, the object detection system 310 may further include memory storing appropriate software, and the processing circuitry required for the object detection system 310 to perform the actions necessary to determine object detection information.
The vehicle control system 320 includes hardware and software components used in controlling the vehicle's behaviour. In examples where the vehicle is fully or substantially automated, the vehicle control system 320 may be operable to control the execution of actions of the vehicle by generating appropriate commands without intervention from the driver. Such actions include steering the vehicle, controlling the speed of the vehicle, gear selection, activation of vehicle lights such as headlights and indicator lights and so forth.
The commands generated by the vehicle control system 320 may be generated based on information obtained from the object detection system 310, for example an occupancy grid indicating the probability of the presence of obstacles or free space in a field of view of the object detection system 310.
In vehicles having lower levels of automation, the vehicle control system 320 may control the vehicle's behaviour in response to a combination of driver input and information obtained by the vehicle control system 320 autonomously.
Figures 4A, 4B, 4C, 4D and 4E illustrate, in plan view, the field of view of the object detection system 310 during successive steps of an example process. The steps of this process will also be described with reference to the flow chart provided in Figure 5.
At step 5.1, the controller 304 obtains a set of object detections from the object detection system 310. The object detections may be stored as object detection data 302 in the memory 301 of the free space determination module 300. The object detection data indicate the location and time of an object detection. The location of an object detection may be expressed in polar coordinates (r, θ), where r is the distance of the object detection from a sensor of the object detection system 310 and θ is an azimuthal angle in the ground plane of the field of view of the object detection system 310.
In addition to the object detection data, parameters of the object detection system 310 may be provided to the free space determination module 300, such as field of view information to enable the free space determination module 300 to reconstruct the field of view of the object detection system 310.
At step 5.2, the field of view divider 305 divides the field of view of the object detection system 310 into multiple segments. Figure 4A shows a field of view (FOV) in plan view and illustrates a ground plane of the object detection system 310. The position of the sensor of the object detection system 310 is referred to as the sensor origin 410. Radial boundaries extend outwardly from the sensor origin; the radial boundaries of a segment delimit the angular range of that segment. A circumferential boundary is provided at a threshold distance from the sensor of the object detection system 310. The threshold distance may be a distance beyond which object detections are deemed not to be sufficiently reliable for use in determining an occupancy grid.
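A minimal sketch of this division for the equal-angle case is given below; the field-of-view extent, segment count and threshold distance are illustrative assumptions, not values from the patent.

```python
import math

FOV_MIN, FOV_MAX = -math.pi / 3, math.pi / 3   # assumed +/-60 degree field of view
N_SEGMENTS = 5                                 # assumed equal-angle segments
THRESHOLD = 50.0                               # assumed reliability threshold (metres)

def segment_index(theta: float) -> int | None:
    """Segment containing azimuth theta, or None if outside the field of view."""
    if not FOV_MIN <= theta < FOV_MAX:
        return None
    width = (FOV_MAX - FOV_MIN) / N_SEGMENTS
    return min(int((theta - FOV_MIN) // width), N_SEGMENTS - 1)
```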
In the field of view 400 shown in Figure 4A, each of the multiple segments subtends substantially the same angle. In alternative embodiments that have peripheral segments at the edges of the field of view and one or more central segments, each peripheral segment may subtend an angle greater than the angle subtended by the central segment or segments. In automated vehicles there is a tendency to detect more objects at the fringes of the field of view than at the centre of the field of view. By providing narrower segments towards the centre of the field of view, a greater number of object detections will be selected and included in the polygon used to represent the free space envelope. This is beneficial given that object detections tend to be sparser towards the centre of the field of view. Furthermore, having wider-angle segments covering peripheral portions of the field of view reduces the number of object detections selected for inclusion in the polygon from regions that tend to have a higher density of object detections, in comparison with embodiments having segments subtending the same angle. By reducing the number of object detections from this region, processing resources are used more efficiently.
Figure 4B shows the field of view 400 on which a set of object detections obtained at step 5.1 has been superimposed. At step 5.3, the object detection selector 306 selects an object detection in each of the segments containing an object detection. In the embodiment illustrated in Figure 4B, the object detection closest to the sensor of the object detection system 310 is selected in each segment containing at least one object detection. In other words, the object detection having the lowest r distance value from the sensor origin 410 in each segment containing an object detection is selected. The selected object detections have been labelled S in Figure 4B. In the central segment, there are no object detections within the threshold distance and so no object detection is selected for this segment.
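Building on the segment helper sketched above, step 5.3 reduces to a per-segment minimum over range; detections are assumed here to be (r, θ) pairs as described at step 5.1.

```python
def select_closest(detections: list[tuple[float, float]]) -> dict[int, tuple[float, float]]:
    """Keep, per segment, the in-threshold detection with the smallest range r."""
    selected: dict[int, tuple[float, float]] = {}
    for r, theta in detections:
        seg = segment_index(theta)
        if seg is None or r > THRESHOLD:
            continue                      # outside the FOV or beyond the threshold
        if seg not in selected or r < selected[seg][0]:
            selected[seg] = (r, theta)
    return selected
```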
At step 5.4, a polygon representing a free space envelope is plotted. Figures 4C, 4D and 4E represent alternative schemes for plotting the polygon that may be employed by the polygon plotter 307 with respect to a set of object detections.
In Figure 4C, a polygon has been plotted. The sensor origin 410 is adopted as a vertex at the origin of the field of view 400. The vertex located at the sensor origin may be assigned an index value of i=0. Each of the selected object detections in each segment may then be designated as a further vertex of the polygon. In this case, the left-most segment in Figure 4C may be processed as the "first" segment (containing vertex i=1), followed by successive segments, with the right-most segment being considered the "last" segment. The second segment to be processed contains vertex i=2. Since there is no object detection selected in the central segment, points located at the threshold distance along the radial boundaries of that segment are assigned as vertices (i=3 and i=4) for that portion of the polygon. Vertex i=5 is situated in the fourth segment to be processed. Vertex i=6 is located in the final segment to be processed. The successive vertices are then connected in order via straight lines to plot the polygon. Since vertex i=6 is the vertex in the final segment to be processed, a straight line is plotted between vertex i=6 and sensor origin vertex i=0.
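The vertex chain of Figure 4C can be sketched as follows, continuing the helpers above; Cartesian conversion with x = r·cos θ, y = r·sin θ is an assumed convention. Empty segments contribute the two threshold points on their radial boundaries.

```python
import math

def polygon_vertices(selected: dict[int, tuple[float, float]]) -> list[tuple[float, float]]:
    """Ordered polygon vertices: sensor origin (i=0), then one vertex per
    occupied segment, or two threshold points for an empty segment."""
    width = (FOV_MAX - FOV_MIN) / N_SEGMENTS
    verts = [(0.0, 0.0)]                                  # i=0: sensor origin
    for seg in range(N_SEGMENTS):                         # left-most segment first
        if seg in selected:
            r, theta = selected[seg]
            verts.append((r * math.cos(theta), r * math.sin(theta)))
        else:
            for theta in (FOV_MIN + seg * width, FOV_MIN + (seg + 1) * width):
                verts.append((THRESHOLD * math.cos(theta), THRESHOLD * math.sin(theta)))
    return verts   # the polygon closes implicitly from the last vertex back to i=0
```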
In Figure 4D, an alternative scheme is used to plot the polygon representing the free space envelope. Again, the object detection in each segment containing an object detection that is closest to the sensor origin 410 is selected. In each segment containing a selected object detection, a transverse straight line is plotted to pass through the selected object detection and to connect the radial boundaries of the segment in which that selected object detection is situated. The transverse straight line may be plotted so that it traverses the segment across the shortest distance between the radial boundaries delimiting the segment containing the object detection. Transverse straight lines in neighbouring segments may be connected via a radial line extending along respective radial boundaries of the neighbouring segments. In a segment not containing any object detections, such as the central segment of Figure 4D, a transverse straight line may be plotted between points located at a threshold distance from the sensor origin along the radial boundaries of the central segment. The transverse straight lines of the peripheral segments may each be connected to the sensor origin 410 via a radial line extending along the peripheral radial boundary of the respective peripheral segment.
Figure 4E shows an alternative scheme for forming the free space polygon. In general, the approach taken is similar to that described above with reference to Figure 4C: successive object detections are connected via a series of straight lines, with detections in peripheral segments of the field of view being connected to the sensor origin 410 by respective straight lines. In this scheme, however, object detections situated beyond the threshold distance may be used as vertices of a preliminary polygon, connected via straight lines to neighbouring object detection locations. Regions of the preliminary polygon located beyond the threshold distance are then excluded from the final polygon that is to be rasterized. In the case shown in Figure 4E, radial lines are plotted from the sensor origin to each object detection (including object detections located beyond the threshold distance from the sensor origin). The points at which radial lines connecting the sensor origin to an object detection cross the threshold distance may form vertices of the final polygon.
Furthermore, the points at which a straight line connecting an object detection within the threshold distance and an object detection beyond the threshold distance crosses the circumferential boundary may also form a vertex of the final polygon. In this example, the final polygon represents the free space envelope.
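Where a chord of the preliminary polygon runs from an in-range detection p to an out-of-range detection q, the crossing point with the circumferential boundary solves |p + t(q − p)| = R for t. A sketch of that clipping step, under the same illustrative conventions as above:

```python
import math

def clip_to_threshold(p: tuple[float, float], q: tuple[float, float],
                      radius: float = 50.0) -> tuple[float, float] | None:
    """Outward crossing of segment p->q with the circle of the given radius,
    assuming p lies inside the circle and q outside; None if degenerate."""
    px, py = p
    dx, dy = q[0] - px, q[1] - py
    a = dx * dx + dy * dy
    if a == 0.0:
        return None                          # p == q: no direction to clip along
    b = 2.0 * (px * dx + py * dy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4.0 * a * c               # discriminant of |p + t(q-p)|^2 = R^2
    if disc < 0.0:
        return None                          # line never reaches the boundary
    t = (-b + math.sqrt(disc)) / (2.0 * a)   # positive root: the outward crossing
    return (px + t * dx, py + t * dy)
```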
The polygons plotted in Figures 4C, 4D and 4E are examples of schemes for plotting polygons representing a free space envelope. It should be understood that alternative schemes for plotting free space polygons may be used. The logic for such schemes may be stored in the memory 301 and executed by the polygon plotter 307.
At step 5.5, the triangle rasterizer 308 rasterizes the polygon representing the free space envelope. The triangle rasterizer 308 uses a triangle rasterization algorithm to designate cells of an occupancy grid as free space. Triangle rasterization algorithms are known in the art and any suitable algorithm may be used. The triangle rasterizer 308 breaks down the polygon into triangles and rasterizes each triangle in turn in order to rasterize the entire polygon.
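One common realisation is sketched below, under assumptions: the polygon is fanned into triangles from its first vertex (valid here because the envelope is star-shaped about the sensor origin), and each triangle is rasterized by testing cell centres against edge functions. The dictionary-based grid and the cell-size convention are illustrative, not the patent's implementation.

```python
def edge(a, b, p) -> float:
    """Signed area test: which side of edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize_triangle(grid: dict, a, b, c, cell_size: float) -> None:
    """Mark as free every cell whose centre lies inside triangle (a, b, c)."""
    xs, ys = [a[0], b[0], c[0]], [a[1], b[1], c[1]]
    ix0, ix1 = int(min(xs) // cell_size), int(max(xs) // cell_size)
    iy0, iy1 = int(min(ys) // cell_size), int(max(ys) // cell_size)
    for iy in range(iy0, iy1 + 1):
        for ix in range(ix0, ix1 + 1):
            p = ((ix + 0.5) * cell_size, (iy + 0.5) * cell_size)   # cell centre
            w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
            # inside if all edge functions agree in sign (either winding order)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                grid[(ix, iy)] = True        # designate the cell as free space

def rasterize_polygon(grid: dict, verts, cell_size: float) -> None:
    """Fan triangulation from verts[0] (the sensor origin)."""
    for i in range(1, len(verts) - 1):
        rasterize_triangle(grid, verts[0], verts[i], verts[i + 1], cell_size)
```

Note that the cost scales with the area of the envelope rather than with the number of object detections, which is the source of the efficiency gain discussed below.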
Each cell of the occupancy grid designated as free space may be assigned a uniform occupancy probability, for example 0.2. Alternatively, the cells closest to the sensor origin may be assigned a very low occupancy probability, with cells further away from the sensor origin having a higher occupancy probability. The variation of occupancy probability with distance from the sensor origin may follow a Gaussian distribution or an exponential distribution. Figure 6 shows an occupancy grid 600 in which a triangular portion of the polygon has been rasterized. The shaded cells of the occupancy grid are designated as free space.
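The distance-dependent probability assignment described above might look as follows; the near/far probabilities, spread and decay rate are illustrative assumptions.

```python
import math

def free_space_probability(r: float, scheme: str = "uniform") -> float:
    """Occupancy probability assigned to a free cell at range r from the
    sensor origin: lowest near the sensor, rising with distance."""
    p_near, p_far = 0.05, 0.5             # assumed near/far occupancy values
    if scheme == "uniform":
        return 0.2
    if scheme == "gaussian":
        sigma = 30.0                      # assumed spread in metres
        return p_far - (p_far - p_near) * math.exp(-(r * r) / (2 * sigma * sigma))
    if scheme == "exponential":
        rate = 0.05                       # assumed decay per metre
        return p_far - (p_far - p_near) * math.exp(-rate * r)
    raise ValueError(f"unknown scheme: {scheme}")
```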
Figure 7 illustrates a high resolution view of an occupancy grid 700 in which the grid lines have been omitted to aid clarity. The occupancy grid 700 displays a free space envelope 710 determined in accordance with an example of the present disclosure. As can be seen from Figure 7, there are no gaps in the free space envelope 710 corresponding to regions in which free space cannot be modelled. This compares favourably with the prior art approach of ray casting, whereby regions in which there were no object detections could not be modelled as free space.
Further advantages of the approach described herein will be apparent to the person skilled in the art. The approach described herein is more efficient than systems that employ ray casting because it is not necessary to perform a rasterization for every object detection in the object detection set. Instead, a single object detection is selected for every segment of the field of view containing an object detection. Thus, the number of object detections processed to determine the free space envelope is greatly reduced in comparison to the number of object detections that are processed in approaches that employ ray casting.
The approach described herein has been found to remove many of the visual artefacts that arise using ray casting, such as the Moiré effect. The process for determining free space in occupancy grids described herein can be incorporated into existing algorithms used for the determination of occupancy grids. Furthermore, free space estimation can be performed in areas in which there are sparse radar or lidar detections.
The examples described herein may be performed on detections received from indirect sensor systems such as a radar or lidar system. Alternatively, a polygon representing free space may be plotted using data obtained from direct sensors such as camera images. Appropriate image recognition software may be used to process the image and extract features from the image. A depth map may be obtained by appropriate means to determine a distance to the feature. A polygon may then be plotted assuming the sensor origin as a first vertex of the polygon and extracted features from the image being used as successive vertices. The plotted polygon may then be rasterized as described above to obtain an occupancy grid indicating a free space envelope.
In the foregoing description, example aspects are described with reference to several example embodiments. Accordingly, the specification should be regarded as illustrative, rather than restrictive. Similarly, the figures illustrated in the drawings, which highlight the functionality and advantages of the example embodiments, are presented for example purposes only. The architecture of the example embodiments is sufficiently flexible and configurable, such that it may be utilized in ways other than those shown in the accompanying figures.
Software embodiments of the examples presented herein may be provided as a computer program, or software, such as one or more programs having instructions or sequences of instructions, included or stored in an article of manufacture such as a machine-accessible or machine-readable medium, an instruction store, or computer-readable storage device, each of which can be non-transitory, in one example embodiment. The program or instructions on the non-transitory machine-accessible medium, machine-readable medium, instruction store, or computer-readable storage device may be used to program a computer system or other electronic device. The machine- or computer-readable medium, instruction store, and storage device may include, but are not limited to, floppy diskettes, optical disks, and magneto-optical disks or other types of media/machine-readable medium/instruction store/storage device suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms "computer-readable", "machine-accessible medium", "machine-readable medium", "instruction store", and "computer-readable storage device" used herein shall include any medium that is capable of storing, encoding, or transmitting instructions or a sequence of instructions for execution by the machine, computer, or computer processor and that causes the machine/computer/computer processor to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on), as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result.
Some embodiments may also be implemented by the preparation of application-specific integrated circuits, field-programmable gate arrays, or by interconnecting an appropriate network of conventional component circuits.
Some embodiments include a computer program product. The computer program product may be a storage medium or media, instruction store(s), or storage device(s), having instructions stored thereon or therein which can be used to control, or cause, a computer or computer processor to perform any of the procedures of the example embodiments described herein. The storage medium/instruction store/storage device may include, by example and without limitation, an optical disc, a ROM, a RAM, an EPROM, an EEPROM, a DRAM, a VRAM, a flash memory, a flash card, a magnetic card, an optical card, nanosystems, a molecular memory integrated circuit, a RAID, remote data storage/archive/warehousing, and/or any other type of device suitable for storing instructions and/or data.
Stored on any one of the computer-readable medium or media, instruction store(s), or storage device(s), some implementations include software for controlling both the hardware of the system and for enabling the system or microprocessor to interact with a human user or other mechanism utilizing the results of the example embodiments described herein. Such software may include without limitation device drivers, operating systems, and user applications. Ultimately, such computer-readable media or storage device(s) further include software for performing example aspects of the disclosure, as described above.
Included in the programming and/or software of the system are software modules for implementing the procedures described herein. In some example embodiments herein, a module includes software, although in other example embodiments herein, a module includes hardware, or a combination of hardware and software.
While various example embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art(s) that various changes in form and detail can be made therein. Thus, the present disclosure should not be limited by any of the above described example embodiments but should be defined only in accordance with the following claims and their equivalents.
Further, the purpose of the Abstract is to enable the Patent Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the example embodiments presented herein in any way. It is also to be understood that any procedures recited in the claims need not be performed in the order presented.
While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of features specific to particular embodiments described herein. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Having now described some illustrative embodiments and embodiments, it is apparent that the foregoing is illustrative and not limiting, having been presented by way of example. In particular, although many of the examples presented herein involve specific combinations of apparatus or software elements, those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
The apparatuses described herein may be embodied in other specific forms without departing from the characteristics thereof. The foregoing embodiments are illustrative rather than limiting of the described systems and methods. Scope of the apparatuses described herein is thus indicated by the appended claims, rather than the foregoing description, and changes that come within the meaning and range of equivalence of the claims are embraced therein.
Claims (24)
- Claims 1. A free space determination module for estimating free space in an occupancy grid, the free space determination module comprising: a memory configured to store object detection data relating to one or more object detections by an object detection system; and a controller configured to: obtain one or more object detections in a field of view of the object detection system; divide the field of view of the object detection system into multiple segments; select an object detection of the object detection system in each of the multiple segments of the field of view containing an object detection; use locations of the selected object detections to plot a polygon representing a free space envelope; and designate cells of an occupancy grid as representing free space by applying a triangle rasterization algorithm to the polygon representing the free space envelope.
- 2. The module of claim 1, wherein the controller is further configured to select the object detection determined to be closest to a sensor origin of the object detection system in each of the segments of the field of view containing an object detection.
- 3. The module of claim 1 or claim 2, wherein the controller is further configured to: determine that no object detections are present in a segment of the field of view if no objects are detected within a threshold distance of a sensor origin of the object detection system in that segment.
- 4. The module of claim 3, wherein plotting a polygon representing a free space envelope comprises: plotting a preliminary polygon; identifying areas of the preliminary polygon located beyond the threshold distance; and plotting a final polygon representing a free space envelope excluding areas of the preliminary polygon located beyond the threshold distance.
- 5. The module of claim 3, wherein the controller is configured, in response to determining that no object detection is present in a segment of the field of view, to plot an edge of the polygon as a transverse straight line extending between a first point located at the threshold distance along a first radial boundary of the segment and a second point located at the threshold distance along a second radial boundary of the segment.
- 6. The module of any preceding claim, wherein the controller is configured to: plot successive vertices of the free space polygon comprising a first vertex at a sensor origin and further vertices at the locations of the selected object detections in each of the segments containing an object detection; and plot edges of the free space polygon as a series of straight lines connecting the successive vertices.
- 7. The module of any one of claims 1-5, wherein the controller is configured to: plot, for each segment containing a selected object detection, an edge of the free space polygon as a transverse straight line passing through the selected object detection and connecting radial boundaries of that segment; plot edges of the free space polygon as radial lines connecting transverse straight lines of neighbouring segments along radial boundaries separating the neighbouring segments; and plot edges of the free space polygon as respective radial lines connecting a sensor origin of the object detection system to the transverse straight line of each peripheral segment along a peripheral radial boundary of each peripheral segment.
- 8. The module of any preceding claim, wherein the cells of the occupancy grid designated as free space are assigned an occupancy probability according to a uniform distribution, a Gaussian distribution or an exponential distribution.
- 9. The module of any preceding claim, wherein the multiple segments comprise peripheral segments and one or more central segments, and wherein each of the multiple segments subtends substantially the same angle.
- 10. The module of any of claims 1-8, wherein the multiple segments comprise peripheral segments and one or more central segments, wherein each peripheral segment subtends an angle greater than the angle subtended by the or each central segment.
- 11. The module of any preceding claim, wherein the object detection system is a lidar or radar system.
- 12. An autonomous vehicle system comprising: a vehicle control system; an object detection system; and the free space determination module of any preceding claim.
- 13. A method of estimating free space in an occupancy grid, the method comprising: obtaining one or more object detections in a field of view of an object detection system; dividing the field of view of the object detection system into multiple segments; selecting an object detection of the object detection system in each of the multiple segments of the field of view containing an object detection; using locations of the selected object detections to plot a polygon representing a free space envelope; and designating cells of an occupancy grid as representing free space by applying a triangle rasterization algorithm to the polygon representing the free space envelope.
- 14. The method of claim 13, wherein selecting an object detection comprises selecting the object detection determined to be closest to a sensor of the object detection system in each of the multiple segments of the field of view containing an object detection.
- 15. The method of claim 13 or claim 14, further comprising: determining that no object detections are present in a segment of the field of view if no objects are detected within a threshold distance of a sensor of the object detection system in that segment.
- 16. The method of claim 15, wherein plotting a polygon representing a free space envelope comprises: plotting a preliminary polygon; identifying areas of the preliminary polygon located beyond the threshold distance; and plotting a final polygon representing a free space envelope excluding areas of the preliminary polygon located beyond the threshold distance.
- 17. The method of claim 15, further comprising: in response to determining that no object detections are present in a segment of the field of view, plotting an edge of the polygon as a transverse straight line extending between a first point located at the threshold distance along a first radial boundary of the segment and a second point located at the threshold distance along a second radial boundary of the segment.
- 18. The method of any of claims 13-17, wherein plotting the free space polygon comprises: plotting successive vertices of the free space polygon comprising a first vertex at a sensor origin and further vertices at the locations of the selected object detections in each of the segments containing an object detection; and plotting edges of the free space polygon as a series of straight lines connecting the successive vertices.
- 19. The method of any one of claims 13-17, wherein plotting the free space polygon comprises: plotting, for each segment containing a selected object detection, an edge of the free space polygon as a transverse straight line passing through the selected object detection and connecting radial boundaries of that segment; plotting edges of the free space polygon as radial lines connecting transverse straight lines of neighbouring segments along radial boundaries separating the neighbouring segments; and plotting edges of the free space polygon as respective radial lines connecting a sensor origin of the object detection system to the transverse straight line of each peripheral segment along a peripheral radial boundary of each peripheral segment.
- 20. The method of any of claims 13-19, wherein the cells of the occupancy grid designated as free space are assigned an occupancy probability according to a uniform distribution, a Gaussian distribution or an exponential distribution.
- 21. The method of any of claims 13-20, wherein the multiple segments comprise peripheral segments and one or more central segments, and wherein each of the multiple segments subtends substantially the same angle.
- 22. The method of any of claims 13-20, wherein the multiple segments comprise peripheral segments and one or more central segments, wherein each peripheral segment subtends an angle greater than the angle subtended by the or each central segment.
- 23. The method of any of claims 13-22, wherein the object detection system is a lidar or radar system.
- 24. Computer readable medium comprising computer readable instructions that, when executed by a computing apparatus, cause the computing apparatus to perform the method of any of claims 13-23.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2101041.8A GB2603133A (en) | 2021-01-26 | 2021-01-26 | Free Space Estimation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2101041.8A GB2603133A (en) | 2021-01-26 | 2021-01-26 | Free Space Estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202101041D0 GB202101041D0 (en) | 2021-03-10 |
GB2603133A true GB2603133A (en) | 2022-08-03 |
Family
ID=74858847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2101041.8A Pending GB2603133A (en) | 2021-01-26 | 2021-01-26 | Free Space Estimation |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2603133A (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180356508A1 (en) * | 2015-12-17 | 2018-12-13 | Autoliv Development Ab | A vehicle radar system arranged for determining an unoccupied domain |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180356508A1 (en) * | 2015-12-17 | 2018-12-13 | Autoliv Development Ab | A vehicle radar system arranged for determining an unoccupied domain |
Non-Patent Citations (4)
- IVANJKO E ET AL: "Experimental comparison of sonar based occupancy grid mapping methods", Automatika - Journal for Control, Measurement, Electronics, Computing and Communications, Taylor & Francis, GB, vol. 50, no. 1-2, 1 January 2009, pages 65-79, XP009530142, ISSN: 0005-1144 *
- JUNGNICKEL RUBEN ET AL: "Efficient automotive grid maps using a sensor ray based refinement process", 2016 IEEE Intelligent Vehicles Symposium (IV), IEEE, 19 June 2016, pages 668-675, XP032939037, DOI: 10.1109/IVS.2016.7535459 *
- KUBERTSCHAK TIM ET AL: "Fusion routine independent implementation of advanced driver assistance systems with polygonal environment models", 2016 19th International Conference on Information Fusion (FUSION), ISIF, 5 July 2016, pages 347-354, XP032935033 *
- KUBERTSCHAK TIM ET AL: "Towards a unified architecture for mapping static environments", 17th International Conference on Information Fusion (FUSION), International Society of Information Fusion, 7 July 2014, pages 1-8, XP032653908 *
Also Published As
Publication number | Publication date |
---|---|
GB202101041D0 (en) | 2021-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10832064B2 (en) | Vacant parking space detection apparatus and vacant parking space detection method | |
CN111223135B (en) | System and method for enhancing range estimation by monocular cameras using radar and motion data | |
EP3812793B1 (en) | Information processing method, system and equipment, and computer storage medium | |
CN111352110B (en) | Method and device for processing radar data | |
CN108116410B (en) | Method and apparatus for controlling speed of vehicle | |
EP3881226A1 (en) | Object classification using extra-regional context | |
CN112183180A (en) | Method and apparatus for three-dimensional object bounding of two-dimensional image data | |
CN110826386A (en) | LIDAR-based object detection and classification | |
KR102054926B1 (en) | System and method for detecting close cut-in vehicle based on free space signal | |
US20190065878A1 (en) | Fusion of radar and vision sensor systems | |
US11313696B2 (en) | Method and apparatus for a context-aware crowd-sourced sparse high definition map | |
US11598877B2 (en) | Object recognition device and vehicle control system | |
CN114384491B (en) | Point cloud processing method and device for laser radar and storage medium | |
KR20220035894A (en) | Object recognition method and object recognition device performing the same | |
CN114384492B (en) | Point cloud processing method and device for laser radar and storage medium | |
GB2599939A (en) | Method of updating the existance probability of a track in fusion based on sensor perceived areas | |
CN110954912B (en) | Method and apparatus for optical distance measurement | |
CN114325642A (en) | Laser radar scanning method, scanning apparatus, and computer-readable storage medium | |
GB2603133A (en) | Free Space Estimation | |
US20230008457A1 (en) | Occupancy Grid Calibration | |
EP3416094B1 (en) | Information processing apparatus, information processing method, and computer readable medium | |
EP4191274A1 (en) | Radar-based estimation of the height of an object | |
CN112651405B (en) | Target detection method and device | |
WO2023033040A1 (en) | Flare detection system, flare detection device, flare detection method, and flare detection program | |
US20240371150A1 (en) | Brake Light Detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 732E | Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977) | Free format text: REGISTERED BETWEEN 20240321 AND 20240327 |