US10823855B2 - Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics - Google Patents
- Publication number: US10823855B2
- Authority: US (United States)
- Prior art keywords: point cloud, cloud data, lidar, lidar point, controller
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/04—Systems determining the presence of a target
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G01S7/4802—Using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/4808—Evaluating distance, position or velocity data
- G06K9/00825
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of vehicle lights or traffic lights
- G06T2200/04—Indexing scheme for image data processing or generation involving 3D image data
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- FIG. 1 is a functional block diagram of an example vehicle having an advanced driver assistance system (ADAS) with a light detection and ranging (LIDAR) system according to some implementations of the present disclosure
- FIG. 2 is a functional block diagram of an example traffic recognition and adaptive ground removal architecture according to some implementations of the present disclosure
- FIG. 3 is an example overhead view of a vehicle and a plurality of cells corresponding to distinct regions surrounding the vehicle for which height differences are calculated using LIDAR point cloud data;
- FIG. 4 is a flow diagram of an example method of traffic recognition and adaptive ground removal based on LIDAR point cloud statistics according to some implementations of the present disclosure.
- Referring now to FIG. 2, a functional block diagram of an example traffic level recognition and adaptive ground removal architecture 200 is illustrated. It will be appreciated that this architecture 200 could be implemented by the ADAS 124 or the controller 116.
- the 3D LIDAR point cloud data is captured using the LIDAR system 128 . This could include, for example, analyzing return times and wavelengths of the reflected laser light pulses.
- the LIDAR system 128 has a FOV setting that specifies horizontal and/or vertical scanning angles. This FOV setting could be adjusted, for example, by the controller 116 via a command to the LIDAR system 128. A narrower FOV produces higher resolution over a smaller field, whereas a wider FOV produces lower resolution over a larger field.
- the narrower FOV could be particularly useful, for example, for long distance scanning, such as during high speed driving on a highway.
- the wider FOV could be particularly useful for other operating scenarios, such as low speed driving in crowded environments (parking lots, heavy traffic jams, etc.).
- the 3D LIDAR point cloud data is divided into a plurality of cells, each cell representing a distinct region surrounding the vehicle 100 .
- FIG. 3 illustrates an overhead view 300 of an example plurality of cells 304 surrounding the vehicle 100 .
- the angle and radius increments between the cells 304 are 2 degrees and 50 centimeters (cm), respectively. It will be appreciated, however, that any suitable division of the 3D LIDAR point cloud data into the plurality of cells 304 could be utilized.
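As a sketch of this cell division (assuming a NumPy (N, 3) representation of the point cloud and the 2-degree / 50 cm increments as defaults; the function name is illustrative), each point can be assigned a polar (angle, radius) cell index:

```python
import numpy as np

def assign_cells(points, angle_deg=2.0, radius_m=0.5):
    """Assign each (x, y, z) point to a polar grid cell around the vehicle.

    Returns an (N, 2) array of (angle_bin, radius_bin) indices using the
    2-degree angle and 50 cm radius increments described above.
    """
    x, y = points[:, 0], points[:, 1]
    angles = np.degrees(np.arctan2(y, x)) % 360.0   # heading in [0, 360)
    radii = np.hypot(x, y)                          # distance from vehicle
    angle_bins = (angles // angle_deg).astype(int)
    radius_bins = (radii // radius_m).astype(int)
    return np.stack([angle_bins, radius_bins], axis=1)
```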
- a histogram is generated at 212 .
- the histogram includes a height difference between maximum and minimum heights (z-coordinates) of the 3D LIDAR point cloud data for each respective cell. A small height difference in a cell is indicative of 3D LIDAR point cloud data corresponding to a ground surface.
- the histogram is then used to perform at least one of adaptive ground removal at 216 and traffic level recognition at 220 . It will be appreciated, however, that the filtered 3D LIDAR point cloud data (after adaptive ground removal) could be used for processing tasks other than traffic level recognition, such as, but not limited to, object detection and tracking.
- the histogram data is analyzed to determine a height threshold indicative of a ground surface.
- This height threshold could be dynamic in that it is repeatedly recalculated for different scenes. Any points in the 3D LIDAR point cloud data having heights (z-coordinates) less than this height threshold could then be removed from the 3D LIDAR point cloud data (thereby obtaining filtered 3D LIDAR point cloud data) or otherwise ignored.
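A minimal sketch of this dynamic thresholding, assuming a NumPy point array, precomputed integer cell indices, and an illustrative flatness tolerance (the patent does not specify one):

```python
import numpy as np

def adaptive_ground_removal(points, cell_ids, flat_tol=0.15):
    """Filter likely-ground points using per-cell height statistics.

    points: (N, 3) array of x/y/z coordinates.
    cell_ids: integer cell index for each point (e.g., from a polar grid).
    Cells whose max-min z spread is below flat_tol are treated as ground;
    the dynamic threshold is the highest z seen among those flat cells.
    flat_tol is an illustrative assumption, not a value from the patent.
    """
    z = points[:, 2]
    ground_tops = []
    for cell in np.unique(cell_ids):
        zc = z[cell_ids == cell]
        if zc.max() - zc.min() < flat_tol:  # flat cell -> likely ground
            ground_tops.append(zc.max())
    threshold = max(ground_tops) if ground_tops else z.min()
    # Remove (or a caller could instead ignore) points at or below ground.
    return points[z > threshold], threshold
```

Because the threshold is recomputed per scene, banked roads or multi-level ground surfaces shift it automatically rather than defeating a fixed z cutoff.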
- the removal or ignoring of any 3D LIDAR point cloud data that is likely a ground surface allows for faster processing due to the smaller dataset. This also may facilitate the use of a less expensive controller 116 due to the reduced throughput requirements.
- one example of the processing of the filtered 3D LIDAR point cloud is the traffic level recognition at 220 . It will be appreciated, however, that the histogram and the unfiltered 3D LIDAR point cloud data could also be fed directly to the traffic level recognition at 220 .
- the traffic level recognition 220 involves detecting a quantity of nearby objects (i.e., other vehicles) based on the height differences.
- the histogram is a feature of a model classifier for traffic level recognition.
- This model classifier is much less complex than a DNN as used by conventional methods.
- One example of the model classifier is a support vector machine (SVM), but it will be appreciated that any suitable model classifier could be utilized.
- the model classifier is trained using known traffic level data. This known traffic level data could be training histograms that are each labeled with a traffic level. For example, a binary labeling system could be used where each training histogram is labeled as a “1” (e.g., heavy traffic or a traffic jam) or a “0” (e.g., very light traffic or no traffic).
- the model classifier is then applied to the filtered (or unfiltered) 3D LIDAR point cloud data to recognize a traffic level near the vehicle 100 .
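The training and recognition steps described above might be sketched as follows using scikit-learn's SVC; the library choice, the 16-cell feature length, and the synthetic training data are all assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Each sample is a fixed-length vector of per-cell height differences
# (the histogram feature); labels use the binary scheme described above:
# 1 = heavy traffic, 0 = light/no traffic. The data here is synthetic.
rng = np.random.default_rng(42)
light = rng.uniform(0.0, 0.2, size=(20, 16))   # mostly flat cells
heavy = rng.uniform(0.5, 2.0, size=(20, 16))   # many tall objects nearby
X = np.vstack([light, heavy])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear").fit(X, y)

# Recognize the traffic level for a new scene's histogram feature.
scene = rng.uniform(0.5, 2.0, size=(1, 16))
level = int(clf.predict(scene)[0])
```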
- the FOV of the LIDAR system 128 could then be adjusted at 224 depending on the recognized traffic level.
- the controller 116 optionally determines whether a set of one or more preconditions are satisfied. These could include, for example only, the vehicle 100 being operated (e.g., the torque generating system 104 being activated) and no malfunctions being present.
- the controller 116 obtains the 3D LIDAR point cloud data using the LIDAR system 128 .
- the controller divides the 3D LIDAR point cloud data into a plurality of cells (e.g., cells 304 ) each indicative of a distinct region surrounding the vehicle 100 .
- the controller 116 generates a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell.
- the controller 116 determines whether adaptive ground removal is to be performed. When true, the method 400 proceeds to 424 . Otherwise, the method 400 proceeds to 432 .
- the controller 116 determines a dynamic height threshold indicative of a ground surface based on the height differences in the histogram. Typically, most of the height differences are within a certain statistical range and therefore can be considered ground cells.
- any data points having a height (z-coordinate) less than this height threshold are removed from the 3D LIDAR point cloud data (to obtain filtered 3D LIDAR point cloud data) or are otherwise ignored.
- the controller 116 determines whether traffic level recognition is to be performed. When true, the method 400 proceeds to 436 . Otherwise, the method 400 ends or returns to 404 .
- the controller 116 uses a trained model classifier (e.g., previously trained using the histogram as the model feature) to recognize a traffic level from the filtered (or unfiltered) 3D LIDAR point cloud data.
- the controller 116 optionally adjusts a FOV of the LIDAR system 128 based on the recognized traffic level. This could include, for example, narrowing the FOV for no traffic or light traffic levels and widening the FOV for heavy traffic levels.
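The FOV adjustment could be as simple as a mapping from the recognized traffic level to a scanning angle; the specific angle values here are illustrative assumptions, not values from the patent:

```python
def adjust_fov(traffic_level, narrow_deg=60.0, wide_deg=120.0):
    """Map a recognized traffic level to a horizontal LIDAR FOV setting.

    A narrow FOV concentrates resolution for long-range sensing in no/light
    traffic; a wide FOV covers more of the surroundings in heavy traffic.
    traffic_level follows the binary labeling: 1 = heavy, 0 = light/none.
    """
    return wide_deg if traffic_level == 1 else narrow_deg
```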
- the method 400 then ends or returns to 404 .
- the term “controller” as used herein refers to any suitable control device or set of multiple control devices that is/are configured to perform at least a portion of the techniques of the present disclosure.
- Non-limiting examples include an application-specific integrated circuit (ASIC), one or more processors and a non-transitory memory having instructions stored thereon that, when executed by the one or more processors, cause the controller to perform a set of operations corresponding to at least a portion of the techniques of the present disclosure.
- the one or more processors could be either a single processor or two or more processors operating in a parallel or distributed architecture.
Abstract
An advanced driver assistance system (ADAS) and method for a vehicle utilize a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data, and a controller configured to receive the 3D LIDAR point cloud data, divide the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle, generate a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells, and, using the histogram, perform at least one of adaptive ground removal from the 3D LIDAR point cloud data and traffic level recognition.
Description
The present application generally relates to vehicle advanced driver assistance systems (ADAS) and, more particularly, to techniques for traffic recognition and adaptive ground removal based on light detection and ranging (LIDAR) point cloud data.
Some vehicle advanced driver assistance systems (ADAS) utilize light detection and ranging (LIDAR) systems to capture information. LIDAR systems emit laser light pulses and capture pulses that are reflected back by surrounding objects. By analyzing the return times and wavelengths of the reflected pulses, three-dimensional (3D) LIDAR point clouds are obtained. Each point cloud comprises a plurality of reflected pulses in a 3D (x/y/z) coordinate system. These point clouds could be used to detect objects (other vehicles, pedestrians, traffic signs, etc.). Ground removal is typically one of the first tasks performed on the point cloud data, but conventional ground removal techniques do not account for non-flat surfaces (curved/banked roads, speed bumps, etc.) or scenes having multiple ground surfaces at different levels.
It is also typically difficult, however, to distinguish between different types of objects without using extensively trained deep neural networks (DNNs). This requires a substantial amount of labeled training data (e.g., manually annotated point clouds) and also substantial processing power, which increases costs. This is also a particularly difficult task in heavy traffic scenarios where there are many other vehicles nearby. Accordingly, while such ADAS systems work well for their intended purpose, there remains a need for improvement in the relevant art.
According to one example aspect of the invention, an advanced driver assistance system (ADAS) for a vehicle is presented. In one exemplary implementation, the ADAS comprises: a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data, and a controller configured to: receive the 3D LIDAR point cloud data, divide the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle, generate a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells, and, using the histogram, perform at least one of adaptive ground removal from the 3D LIDAR point cloud data and traffic level recognition.
In some implementations, the adaptive ground removal comprises determining a dynamic height threshold indicative of a ground surface based on the height differences. In some implementations, the adaptive ground removal further comprises removing or ignoring any 3D LIDAR point cloud data having a z-coordinate that is less than the dynamic height threshold.
In some implementations, the histogram is a feature of a model classifier for traffic level recognition. In some implementations, the controller is further configured to train the model classifier based on known traffic level data. In some implementations, the model classifier is a support vector machine (SVM). In some implementations, the traffic level recognition comprises using the trained model classifier to recognize a traffic level based on the 3D LIDAR point cloud data. In some implementations, the controller is further configured to adjust a field of view (FOV) of the LIDAR system based on the recognized traffic level. In some implementations, the controller is configured to narrow the FOV of the LIDAR system for light traffic levels and to widen the FOV of the LIDAR system for heavy traffic levels.
In some implementations, the controller does not utilize a deep neural network (DNN).
According to another example aspect of the invention, a method of performing at least one of adaptive ground removal from 3D LIDAR point cloud data and traffic level recognition by a vehicle is presented. In one exemplary implementation, the method comprises: receiving, by a controller of the vehicle and from a LIDAR system of the vehicle, the 3D LIDAR point cloud data, wherein the 3D LIDAR point cloud data collectively represents reflected laser light pulses captured by the LIDAR system after the emitting of laser light pulses from the LIDAR system, dividing, by the controller, the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle, generating, by the controller, a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells, and using the histogram, performing, by the controller, at least one of adaptive ground removal from the 3D LIDAR point cloud data and traffic level recognition.
In some implementations, the adaptive ground removal comprises determining a dynamic height threshold indicative of a ground surface based on the height differences. In some implementations, the adaptive ground removal further comprises removing or ignoring any 3D LIDAR point cloud data having a z-coordinate that is less than the dynamic height threshold. In some implementations, the histogram is a feature of a model classifier for traffic level recognition. In some implementations, the method further comprises training, by the controller, the model classifier based on known traffic level data. In some implementations, the model classifier is an SVM. In some implementations, the traffic level recognition comprises using the trained model classifier to recognize a traffic level based on the 3D LIDAR point cloud data. In some implementations, the method further comprises adjusting, by the controller, an FOV of the LIDAR system based on the recognized traffic level. In some implementations, adjusting the FOV of the LIDAR system comprises narrowing the FOV of the LIDAR system for light traffic levels and widening the FOV of the LIDAR system for heavy traffic levels.
In some implementations, the controller does not utilize a deep neural network (DNN).
Further areas of applicability of the teachings of the present disclosure will become apparent from the detailed description, claims and the drawings provided hereinafter, wherein like reference numerals refer to like features throughout the several views of the drawings. It should be understood that the detailed description, including disclosed embodiments and drawings referenced therein, is merely exemplary in nature, intended for purposes of illustration only, and is not intended to limit the scope of the present disclosure, its application or uses. Thus, variations that do not depart from the gist of the present disclosure are intended to be within the scope of the present disclosure.
As discussed above, there exists a need for improvement in advanced driver assistance systems (ADAS) that utilize light detection and ranging (LIDAR) for object detection. It will be appreciated that the term “ADAS” as used herein includes driver assistance systems (lane keeping, adaptive cruise control, etc.) as well as partially and fully autonomous driving systems. A conventional ADAS for object detection utilizes a deep neural network (DNN) trained by machine learning with training data that is annotated (e.g., human labeled) for every different type of object. This requires a substantial amount of resources, both from a processing standpoint and a labeled training data standpoint, which increases costs.
In a heavy traffic scenario, for example, there will be a large number of objects (i.e., other vehicles) present in the three-dimensional LIDAR point cloud. Having to detect and track each of these objects requires substantial processing power. Additionally, the first processing step for 3D LIDAR point cloud data is typically to identify and remove (or ignore) the ground surface. This is typically performed using a fixed z-coordinate threshold (specific to a particular LIDAR sensor mounting configuration) or plane model segmentation. These conventional techniques, however, fail to account for non-flat ground surfaces (curved/banked surfaces, speed bumps, etc.) or scenes having multiple ground surfaces at different heights.
Accordingly, improved techniques for traffic recognition and adaptive ground removal are presented. These techniques determine LIDAR point cloud statistics (e.g., data point height differences in various distinct cells surrounding the vehicle) and use these statistics to adaptively detect and remove (or ignore) all types of ground surfaces and to recognize different traffic scenarios (heavy traffic, light traffic, no traffic, etc.). The adaptive removal of all types of ground surfaces provides for faster LIDAR point cloud processing and improved ground surface detection and removal accuracy (e.g., reduced false ground surface detections). The detection of different traffic scenarios can be leveraged to control other ADAS features. For example, a heavy traffic scenario may warrant more conservative behavior by the ADAS features, and vice versa. In one exemplary implementation, a field of view (FOV) of the LIDAR sensors is adjusted based on the detected traffic scenario. For example, a no traffic or light traffic scenario may allow the LIDAR FOV to be tuned for more accurate long distance sensing, which could be helpful for higher speed driving, such as on a highway.
Referring now to FIG. 1 , a functional block diagram of an example vehicle 100 is illustrated. The vehicle 100 comprises a torque generating system 104 (an engine, an electric motor, combinations thereof, etc.) that generates drive torque that is transferred to a driveline 108 via a transmission 112. A controller 116 controls operation of the torque generating system 104, such as to generate a desired drive torque based on a driver input via a driver interface 120 (a touch display, an accelerator pedal, combinations thereof, etc.). The vehicle 100 further comprises an ADAS 124 having a LIDAR system 128. While the ADAS 124 is illustrated as being separate from the controller 116, it will be appreciated that the ADAS 124 could be incorporated as part of the controller 116, or the ADAS 124 could have its own separate controller. The LIDAR system 128 emits laser light pulses and captures reflected laser light pulses (from other vehicles, structures, traffic signs, ground surfaces, etc.) that collectively form captured 3D LIDAR point cloud data.
Referring now to FIG. 2 , a functional block diagram of an example traffic level recognition and adaptive ground removal architecture 200 is illustrated. It will be appreciated that this architecture 200 could be implemented by the ADAS 124 or the controller 116. At 204, the 3D LIDAR point cloud data is captured using the LIDAR system 128. This could include, for example, analyzing return times and wavelengths of the reflected laser light pulses. The LIDAR system 128 has a FOV setting that specifies horizontal and/or vertical scanning angles. This FOV setting could be adjusted, for example, by the controller 116 via a command to the LIDAR system 128. A narrower FOV produces higher resolution over a smaller field, whereas a wider FOV produces lower resolution over a larger field. The narrower FOV could be particularly useful, for example, for long distance scanning during high speed driving, such as on a highway. The wider FOV, on the other hand, could be particularly useful for other operating scenarios, such as low speed driving in crowded environments (parking lots, heavy traffic jams, etc.).
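The FOV trade-off described above can be sketched as a simple mapping from a recognized traffic level to commanded scanning angles. The specific angle values and the function interface below are illustrative assumptions, not values taken from the disclosure:

```python
def select_fov(traffic_level):
    """Return hypothetical (horizontal_deg, vertical_deg) LIDAR scanning
    angles for a recognized traffic level (illustrative values only)."""
    if traffic_level == "heavy":
        # Wide FOV: lower resolution over a larger field, suited to
        # crowded, low-speed environments.
        return (120.0, 40.0)
    if traffic_level == "light":
        return (80.0, 25.0)
    # No traffic: narrow FOV for higher-resolution long-distance sensing,
    # e.g., highway driving.
    return (40.0, 15.0)
```

A controller implementing block 224 of FIG. 2 could call such a function after each traffic level recognition cycle and forward the result to the LIDAR system as a configuration command.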
At 208, the 3D LIDAR point cloud data is divided into a plurality of cells, each cell representing a distinct region surrounding the vehicle 100. FIG. 3 illustrates an overhead view 300 of an example plurality of cells 304 surrounding the vehicle 100. In one exemplary implementation, the angle and radius increments between the cells 304 are 2 degrees and 50 centimeters (cm), respectively. It will be appreciated, however, that any suitable division of the 3D LIDAR point cloud data into the plurality of cells 304 could be utilized. Referring again to FIG. 2 , a histogram is generated at 212. The histogram includes a height difference between maximum and minimum heights (z-coordinates) of the 3D LIDAR point cloud data for each respective cell. Very small height differences likely indicate that the 3D LIDAR point cloud data in a given cell corresponds to a ground surface. The histogram is then used to perform at least one of adaptive ground removal at 216 and traffic level recognition at 220. It will be appreciated, however, that the filtered 3D LIDAR point cloud data (after adaptive ground removal) could be used for processing tasks other than traffic level recognition, such as, but not limited to, object detection and tracking.
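A minimal sketch of the cell division and per-cell height-difference computation, using the exemplary 2-degree/50-cm increments, might look as follows. NumPy is assumed, and the polar cell-indexing scheme is an illustrative choice rather than the patented implementation:

```python
import numpy as np

def cell_height_differences(points, angle_step_deg=2.0,
                            radius_step_m=0.5, max_radius_m=50.0):
    """Bin 3D points (N x 3 array, x/y/z in meters, vehicle at origin) into
    polar cells and return the max-min z difference for each non-empty cell.
    These per-cell differences are the raw data for the histogram at 212."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    angles = np.degrees(np.arctan2(y, x)) % 360.0   # heading of each point
    radii = np.hypot(x, y)                           # planar range

    in_range = radii < max_radius_m
    angle_idx = (angles[in_range] / angle_step_deg).astype(int)
    radius_idx = (radii[in_range] / radius_step_m).astype(int)
    n_r = int(max_radius_m / radius_step_m)
    cell_ids = angle_idx * n_r + radius_idx          # flat cell index

    diffs = {}
    zs = z[in_range]
    for cid in np.unique(cell_ids):
        cell_z = zs[cell_ids == cid]
        diffs[cid] = float(cell_z.max() - cell_z.min())
    return diffs
```

Cells containing only a flat ground patch yield near-zero differences, while cells containing a vehicle or other object yield differences on the order of the object height.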
For adaptive ground removal, the histogram data is analyzed to determine a height threshold indicative of a ground surface. This height threshold could be dynamic in that it is repeatedly recalculated for different scenes. Any points in the 3D LIDAR point cloud data having heights (z-coordinates) less than this height threshold could then be removed from the 3D LIDAR point cloud data (thereby obtaining filtered 3D LIDAR point cloud data) or otherwise ignored. The removal or ignoring of any 3D LIDAR point cloud data that is likely a ground surface allows for faster processing due to the smaller dataset. This also may facilitate the use of a less expensive controller 116 due to the reduced throughput requirements. As shown in FIG. 2 , one example of the processing of the filtered 3D LIDAR point cloud is the traffic level recognition at 220. It will be appreciated, however, that the histogram and the unfiltered 3D LIDAR point cloud data could also be fed directly to the traffic level recognition at 220. The traffic level recognition 220 involves detecting a quantity of nearby objects (i.e., other vehicles) based on the height differences.
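One way to apply the statistics described above is sketched below: cells with a tiny height spread are treated as ground cells, the dynamic height threshold is derived from the highest z-coordinate seen in those cells, and every point below the threshold is dropped. The regular x-y grid, `cell_size`, `diff_tol`, and `margin` values are illustrative assumptions, not parameters specified in the patent:

```python
import numpy as np

def remove_ground(points, cell_size=0.5, diff_tol=0.15, margin=0.05):
    """Adaptive ground removal sketch: returns the filtered point cloud
    (N x 3 array) with likely ground points removed."""
    ij = np.floor(points[:, :2] / cell_size).astype(int)
    keys = ij[:, 0] * 100003 + ij[:, 1]   # combine (i, j) into one cell key
    z = points[:, 2]

    ground_z = []
    for k in np.unique(keys):
        cell_z = z[keys == k]
        if cell_z.max() - cell_z.min() < diff_tol:  # flat cell: likely ground
            ground_z.append(cell_z.max())
    if not ground_z:
        return points  # nothing recognized as ground; keep everything

    z_threshold = max(ground_z) + margin  # dynamic, scene-dependent threshold
    return points[z >= z_threshold]
```

Because the threshold is recomputed from each scene's own statistics, the same code adapts to different LIDAR mounting heights and to mildly non-flat ground, in line with the motivation above.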
In one exemplary implementation, the histogram is a feature of a model classifier for traffic level recognition. This model classifier is much less complex than a DNN as used by conventional methods. One example of the model classifier is a support vector machine (SVM), but it will be appreciated that any suitable model classifier could be utilized. The model classifier is trained using known traffic level data. This known traffic level data could be training histograms that are each labeled with a traffic level. For example, a binary labeling system could be used where each training histogram is labeled as a “1” (e.g., heavy traffic or a traffic jam) or a “0” (e.g., very light traffic or no traffic). The model classifier is then applied to the filtered (or unfiltered) 3D LIDAR point cloud data to recognize a traffic level near the vehicle 100. As previously mentioned, the FOV of the LIDAR system 128 could then be adjusted at 224 depending on the recognized traffic level.
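The binary-labeled classifier described above can be sketched with scikit-learn's SVM implementation. The histogram bin edges, the synthetic training data, and the use of scikit-learn are all illustrative assumptions; the patent only requires some model classifier (an SVM being one example) trained on labeled histograms:

```python
import numpy as np
from sklearn.svm import SVC  # any suitable model classifier could be used

BINS = np.linspace(0.0, 3.0, 16)  # 15 height-difference bins (meters), assumed

def to_feature(height_diffs):
    """Turn raw per-cell height differences into a normalized histogram."""
    hist, _ = np.histogram(height_diffs, bins=BINS)
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
# Synthetic stand-ins for labeled scenes: heavy traffic shows many cells
# with roughly vehicle-height spreads; light traffic is mostly flat.
heavy = [to_feature(rng.uniform(0.8, 2.2, 200)) for _ in range(40)]
light = [to_feature(rng.uniform(0.0, 0.2, 200)) for _ in range(40)]
X = np.vstack(heavy + light)
y = np.array([1] * 40 + [0] * 40)   # 1 = heavy traffic, 0 = light/no traffic

clf = SVC(kernel="rbf").fit(X, y)
level = clf.predict([to_feature(rng.uniform(0.9, 2.0, 200))])[0]  # heavy-like
```

Such a classifier operates on a 15-dimensional histogram feature rather than raw point clouds, which is why it is far cheaper to train and evaluate than a DNN.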
Referring now to FIG. 4 , a flow diagram of a method 400 for performing at least one of adaptive ground removal from 3D LIDAR point cloud data and traffic level recognition is illustrated. At 404, the controller 116 optionally determines whether a set of one or more preconditions are satisfied. These could include, for example only, the vehicle 100 being operated (e.g., the torque generating system 104 being activated) and no malfunctions being present. At 408, the controller 116 obtains the 3D LIDAR point cloud data using the LIDAR system 128. At 412, the controller divides the 3D LIDAR point cloud data into a plurality of cells (e.g., cells 304) each indicative of a distinct region surrounding the vehicle 100. At 416, the controller 116 generates a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell. At 420, the controller 116 determines whether adaptive ground removal is to be performed. When true, the method 400 proceeds to 424. Otherwise, the method 400 proceeds to 432.
At 424, the controller 116 determines a dynamic height threshold indicative of a ground surface based on the height differences in the histogram. Typically, most of the height differences fall within a certain statistical range, and the cells exhibiting those height differences can therefore be considered ground cells. At 428, any data points having a height (z-coordinate) less than this height threshold are removed from the 3D LIDAR point cloud data (to obtain filtered 3D LIDAR point cloud data) or are otherwise ignored. At 432, the controller 116 determines whether traffic level recognition is to be performed. When true, the method 400 proceeds to 436. Otherwise, the method 400 ends or returns to 404. At 436, the controller 116 uses a trained model classifier (e.g., previously trained using the histogram as the model feature) to recognize a traffic level from the filtered (or unfiltered) 3D LIDAR point cloud data. At 440, the controller 116 optionally adjusts a FOV of the LIDAR system 128 based on the recognized traffic level. This could include, for example, narrowing the FOV for no traffic or light traffic levels and widening the FOV for heavy traffic levels. The method 400 then ends or returns to 404.
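One way to identify the "statistical range" containing most height differences at 424 is a robust outlier cutoff. The choice of the median/MAD estimator and the factor `k` are assumptions for illustration; the patent does not mandate a specific statistic:

```python
import numpy as np

def ground_diff_cutoff(height_diffs, k=3.0):
    """Return a cutoff on per-cell height differences: cells at or below the
    cutoff lie within the dominant statistical range and can be treated as
    ground cells; cells above it likely contain objects."""
    diffs = np.asarray(height_diffs, dtype=float)
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med))  # robust spread estimate
    # 1.4826 * MAD approximates one standard deviation for normal data,
    # so the cutoff sits roughly k sigma above the median.
    return med + k * 1.4826 * mad
```

Because the cutoff is recomputed per scene, it adapts automatically as the distribution of height differences shifts (e.g., on banked roads or around speed bumps).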
It will be appreciated that the term “controller” as used herein refers to any suitable control device or set of multiple control devices that is/are configured to perform at least a portion of the techniques of the present disclosure. Non-limiting examples include an application-specific integrated circuit (ASIC), one or more processors and a non-transitory memory having instructions stored thereon that, when executed by the one or more processors, cause the controller to perform a set of operations corresponding to at least a portion of the techniques of the present disclosure. The one or more processors could be either a single processor or two or more processors operating in a parallel or distributed architecture.
It should be understood that the mixing and matching of features, elements, methodologies and/or functions between various examples may be expressly contemplated herein so that one skilled in the art would appreciate from the present teachings that features, elements and/or functions of one example may be incorporated into another example as appropriate, unless described otherwise above.
Claims (16)
1. An advanced driver assistance system (ADAS) for a vehicle, the ADAS comprising:
a light detection and ranging (LIDAR) system configured to emit laser light pulses and capture reflected laser light pulses collectively forming three-dimensional (3D) LIDAR point cloud data; and
a controller configured to:
receive the 3D LIDAR point cloud data,
divide the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle,
generate a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells, and
using the histogram, perform at least one of:
(i) adaptive ground removal from the 3D LIDAR point cloud data based on a dynamic height threshold indicative of a ground surface, the dynamic height threshold being dynamically determined based on the calculated height differences of the histogram, and
(ii) traffic level recognition based on a model classifier having a feature trained using the histogram.
2. The ADAS of claim 1 , wherein the adaptive ground removal further comprises removing or ignoring any 3D LIDAR point cloud data having a z-coordinate that is less than the dynamic height threshold.
3. The ADAS of claim 1 , wherein the controller is further configured to train the model classifier based on known traffic level data.
4. The ADAS of claim 3 , wherein the model classifier is a support vector machine (SVM).
5. The ADAS of claim 3 , wherein the traffic level recognition comprises using the trained model classifier to recognize a traffic level based on the 3D LIDAR point cloud data.
6. The ADAS of claim 5 , wherein the controller is further configured to adjust a field of view (FOV) of the LIDAR system based on the recognized traffic level.
7. The ADAS of claim 6 , wherein the controller is configured to narrow the FOV of the LIDAR system for light traffic levels and to widen the FOV of the LIDAR system for heavy traffic levels.
8. The ADAS of claim 1 , wherein the controller does not utilize a deep neural network (DNN).
9. A method of performing at least one of adaptive ground removal from three-dimensional (3D) light detection and ranging (LIDAR) point cloud data and traffic level recognition by a vehicle, the method comprising:
receiving, by a controller of the vehicle and from a LIDAR system of the vehicle, the 3D LIDAR point cloud data, wherein the 3D LIDAR point cloud data collectively represents reflected laser light pulses captured by the LIDAR system after the emitting of laser light pulses from the LIDAR system;
dividing, by the controller, the 3D LIDAR point cloud data into a plurality of cells corresponding to distinct regions surrounding the vehicle;
generating, by the controller, a histogram comprising a calculated height difference between a maximum height and a minimum height in the 3D LIDAR point cloud data for each cell of the plurality of cells; and
using the histogram, performing, by the controller, at least one of:
(i) adaptive ground removal from the 3D LIDAR point cloud data based on a dynamic height threshold indicative of a ground surface, the dynamic height threshold being dynamically determined based on the calculated height differences of the histogram, and
(ii) traffic level recognition based on a model classifier having a feature trained using the histogram.
10. The method of claim 9 , wherein the adaptive ground removal further comprises removing or ignoring any 3D LIDAR point cloud data having a z-coordinate that is less than the dynamic height threshold.
11. The method of claim 9 , further comprising training, by the controller, the model classifier based on known traffic level data.
12. The method of claim 11 , wherein the model classifier is a support vector machine (SVM).
13. The method of claim 11 , wherein the traffic level recognition comprises using the trained model classifier to recognize a traffic level based on the 3D LIDAR point cloud data.
14. The method of claim 13 , further comprising adjusting, by the controller, a field of view (FOV) of the LIDAR system based on the recognized traffic level.
15. The method of claim 14 , wherein adjusting the FOV of the LIDAR system comprises narrowing the FOV of the LIDAR system for light traffic levels and widening the FOV of the LIDAR system for heavy traffic levels.
16. The method of claim 9 , wherein the controller does not utilize a deep neural network (DNN).
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/194,465 US10823855B2 (en) | 2018-11-19 | 2018-11-19 | Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/194,465 US10823855B2 (en) | 2018-11-19 | 2018-11-19 | Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200158874A1 US20200158874A1 (en) | 2020-05-21 |
| US10823855B2 true US10823855B2 (en) | 2020-11-03 |
Family
ID=70726530
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/194,465 Active 2039-03-05 US10823855B2 (en) | 2018-11-19 | 2018-11-19 | Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10823855B2 (en) |
Families Citing this family (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10902625B1 (en) | 2018-01-23 | 2021-01-26 | Apple Inc. | Planar surface detection |
| EP3824621B1 (en) | 2018-07-19 | 2025-06-11 | Activ Surgical, Inc. | Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots |
| CN113906479A (en) * | 2018-12-28 | 2022-01-07 | 艾科缇弗外科公司 | Generate synthetic 3D imaging from local depth maps |
| US11500385B2 (en) * | 2019-09-30 | 2022-11-15 | Zoox, Inc. | Collision avoidance perception system |
| US11353592B2 (en) | 2019-09-30 | 2022-06-07 | Zoox, Inc. | Complex ground profile estimation |
| CN111860321B (en) * | 2020-07-20 | 2023-12-22 | 浙江光珀智能科技有限公司 | An obstacle recognition method and system |
| CN112099050A (en) * | 2020-09-14 | 2020-12-18 | 北京魔鬼鱼科技有限公司 | Vehicle appearance recognition device and method, vehicle processing apparatus and method |
| US20220114762A1 (en) * | 2020-10-12 | 2022-04-14 | Electronics And Telecommunications Research Institute | Method for compressing point cloud based on global motion prediction and compensation and apparatus using the same |
| CN112348781A (en) * | 2020-10-26 | 2021-02-09 | 广东博智林机器人有限公司 | Method, device and equipment for detecting height of reference plane and storage medium |
| CN112556726B (en) * | 2020-12-07 | 2023-04-11 | 中国第一汽车股份有限公司 | Vehicle position correction method and device, vehicle and medium |
| CN115147567A (en) * | 2021-03-29 | 2022-10-04 | 北京四维图新科技股份有限公司 | Traffic scene recognition method, equipment and storage medium |
| CN113486741B (en) * | 2021-06-23 | 2022-10-11 | 中冶南方工程技术有限公司 | Stock yard stock pile point cloud step identification method |
| TWI786765B (en) * | 2021-08-11 | 2022-12-11 | 中華電信股份有限公司 | Radar and method for adaptively configuring radar parameters |
| US20230092198A1 (en) * | 2021-09-23 | 2023-03-23 | Aeye, Inc. | Systems and methods of real-time detection of and geometry generation for physical ground planes |
| CN116309199B (en) * | 2021-12-03 | 2025-10-31 | 咪咕文化科技有限公司 | Point cloud processing method and device and electronic equipment |
| US12504541B2 (en) * | 2021-12-15 | 2025-12-23 | Rivian Ip Holdings, Llc | Systems and methods for determining a drivable surface |
| CN114563796B (en) * | 2022-03-03 | 2022-11-04 | 北京华宜信科技有限公司 | Power line identification method and device based on unmanned aerial vehicle laser radar detection data |
| WO2023182674A1 (en) * | 2022-03-21 | 2023-09-28 | 현대자동차주식회사 | Method and device for lidar point cloud coding |
| CN114758096B (en) * | 2022-04-15 | 2025-03-04 | 长沙行深智能科技有限公司 | A roadside detection method, device, terminal equipment and storage medium |
| KR20240020376A (en) * | 2022-08-08 | 2024-02-15 | 현대자동차주식회사 | Object recognition apparatus and vehicle having the same |
Patent Citations (37)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5831748A (en) * | 1994-12-19 | 1998-11-03 | Minolta Co., Ltd. | Image processor |
| US20070002075A1 (en) * | 2005-07-04 | 2007-01-04 | Fuji Xerox Co., Ltd. | Image processing device and method for the same |
| US20090310867A1 (en) | 2008-06-12 | 2009-12-17 | Bogdan Calin Mihai Matei | Building segmentation for densely built urban regions using aerial lidar data |
| US8620089B1 (en) * | 2009-12-22 | 2013-12-31 | Hrl Laboratories, Llc | Strip histogram grid for efficient segmentation of 3D point clouds from urban environments |
| US20130218472A1 (en) | 2010-06-10 | 2013-08-22 | Autodesk, Inc. | Segmentation of ground-based laser scanning points from urban environment |
| US8761991B1 (en) | 2012-04-09 | 2014-06-24 | Google Inc. | Use of uncertainty regarding observations of traffic intersections to modify behavior of a vehicle |
| US9221461B2 (en) | 2012-09-05 | 2015-12-29 | Google Inc. | Construction zone detection using a plurality of information sources |
| US9383753B1 (en) | 2012-09-26 | 2016-07-05 | Google Inc. | Wide-view LIDAR with areas of special attention |
| US20140118716A1 (en) * | 2012-10-31 | 2014-05-01 | Raytheon Company | Video and lidar target detection and tracking system and method for segmenting moving targets |
| US20160093101A1 (en) * | 2013-05-23 | 2016-03-31 | Mta Szamitastechnikai Es Automatizalasi Kutato Intezet | Method And System For Generating A Three-Dimensional Model |
| US9558564B1 (en) * | 2014-05-02 | 2017-01-31 | Hrl Laboratories, Llc | Method for finding important changes in 3D point clouds |
| US20160104049A1 (en) * | 2014-10-14 | 2016-04-14 | Here Global B.V. | Lateral Sign Placement Determination |
| US20170124781A1 (en) * | 2015-11-04 | 2017-05-04 | Zoox, Inc. | Calibration for autonomous vehicle operation |
| US20190154439A1 (en) * | 2016-03-04 | 2019-05-23 | May Patents Ltd. | A Method and Apparatus for Cooperative Usage of Multiple Distance Meters |
| US20190251743A1 (en) * | 2016-11-01 | 2019-08-15 | Panasonic Intellectual Property Corporation Of America | Display method and display device |
| US20180188043A1 (en) * | 2016-12-30 | 2018-07-05 | DeepMap Inc. | Classification of surfaces as hard/soft for combining data captured by autonomous vehicles for generating high definition maps |
| US20180203113A1 (en) * | 2017-01-17 | 2018-07-19 | Delphi Technologies, Inc. | Ground classifier system for automated vehicles |
| US10108867B1 (en) * | 2017-04-25 | 2018-10-23 | Uber Technologies, Inc. | Image-based pedestrian detection |
| US20180349715A1 (en) * | 2017-05-31 | 2018-12-06 | Carmera, Inc. | System of vehicles equipped with imaging equipment for high-definition near real-time map generation |
| US20190130182A1 (en) * | 2017-11-01 | 2019-05-02 | Here Global B.V. | Road modeling from overhead imagery |
| US10628671B2 (en) * | 2017-11-01 | 2020-04-21 | Here Global B.V. | Road modeling from overhead imagery |
| US20190258251A1 (en) * | 2017-11-10 | 2019-08-22 | Nvidia Corporation | Systems and methods for safe and reliable autonomous vehicles |
| US20190145765A1 (en) * | 2017-11-15 | 2019-05-16 | Uber Technologies, Inc. | Three Dimensional Object Detection |
| US20190250626A1 (en) * | 2018-02-14 | 2019-08-15 | Zoox, Inc. | Detecting blocking objects |
| US20190258878A1 (en) * | 2018-02-18 | 2019-08-22 | Nvidia Corporation | Object detection and detection confidence suitable for autonomous driving |
| US20190266418A1 (en) * | 2018-02-27 | 2019-08-29 | Nvidia Corporation | Real-time detection of lanes and boundaries by autonomous vehicles |
| US20190278292A1 (en) * | 2018-03-06 | 2019-09-12 | Zoox, Inc. | Mesh Decimation Based on Semantic Information |
| US20190286915A1 (en) * | 2018-03-13 | 2019-09-19 | Honda Motor Co., Ltd. | Robust simultaneous localization and mapping via removal of dynamic traffic participants |
| US20190286921A1 (en) * | 2018-03-14 | 2019-09-19 | Uber Technologies, Inc. | Structured Prediction Crosswalk Generation |
| US20190286153A1 (en) * | 2018-03-15 | 2019-09-19 | Nvidia Corporation | Determining drivable free-space for autonomous vehicles |
| US20190384304A1 (en) * | 2018-06-13 | 2019-12-19 | Nvidia Corporation | Path detection for autonomous machines using deep neural networks |
| US20190384303A1 (en) * | 2018-06-19 | 2019-12-19 | Nvidia Corporation | Behavior-guided path planning in autonomous machine applications |
| US20200026960A1 (en) * | 2018-07-17 | 2020-01-23 | Nvidia Corporation | Regression-based line detection for autonomous driving machines |
| US20200090322A1 (en) * | 2018-09-13 | 2020-03-19 | Nvidia Corporation | Deep neural network processing for sensor blindness detection in autonomous machine applications |
| US20200118278A1 (en) * | 2018-10-15 | 2020-04-16 | TuSimple | Tracking and modeling processing of image data for lidar-based vehicle tracking system and method |
| US20200125845A1 (en) * | 2018-10-22 | 2020-04-23 | Lyft, Inc. | Systems and methods for automated image labeling for images captured from vehicles |
| US10625748B1 (en) * | 2019-06-28 | 2020-04-21 | Lyft, Inc. | Approaches for encoding environmental information |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12123950B2 (en) | 2016-02-15 | 2024-10-22 | Red Creamery, LLC | Hybrid LADAR with co-planar scanning and imaging field-of-view |
| US12399278B1 (en) | 2016-02-15 | 2025-08-26 | Red Creamery Llc | Hybrid LIDAR with optically enhanced scanned laser |
| US12399279B1 (en) | 2016-02-15 | 2025-08-26 | Red Creamery Llc | Enhanced hybrid LIDAR with high-speed scanning |
| US11933967B2 (en) | 2019-08-22 | 2024-03-19 | Red Creamery, LLC | Distally actuated scanning mirror |
| US20220268888A1 (en) * | 2021-02-19 | 2022-08-25 | Mando Mobility Solutions Corporation | Radar control device and method |
| US12140695B2 (en) * | 2021-02-19 | 2024-11-12 | Hl Klemove Corp. | Radar control device and method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10823855B2 (en) | Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics | |
| US11543531B2 (en) | Semi-automatic LIDAR annotation system for autonomous driving | |
| US10983215B2 (en) | Tracking objects in LIDAR point clouds with enhanced template matching | |
| KR102109941B1 (en) | Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera | |
| CN107272021B (en) | Object detection using radar and visually defined image detection areas | |
| KR102463720B1 (en) | System and Method for creating driving route of vehicle | |
| US10255510B2 (en) | Driving assistance information generating method and device, and driving assistance system | |
| CN102016921B (en) | image processing device | |
| US8818702B2 (en) | System and method for tracking objects | |
| EP3252658A1 (en) | Information processing apparatus and information processing method | |
| EP3282228A1 (en) | Dynamic-map constructing method, dynamic-map constructing system, and moving terminal | |
| US10929986B2 (en) | Techniques for using a simple neural network model and standard camera for image detection in autonomous driving | |
| CN105182364A (en) | Collision avoidance with static targets in narrow spaces | |
| JP2014146326A (en) | Detecting method and detecting system for multiple lane | |
| JP6413898B2 (en) | Pedestrian determination device | |
| US20210133947A1 (en) | Deep neural network with image quality awareness for autonomous driving | |
| CN110869258B (en) | Vehicle speed control device and vehicle speed control method | |
| US20230260132A1 (en) | Detection method for detecting static objects | |
| US11460544B2 (en) | Traffic sign detection from filtered birdview projection of LIDAR point clouds | |
| JP2011196699A (en) | Device for detecting road edge | |
| JP2018147399A (en) | Target detection device | |
| US12466433B2 (en) | Autonomous driving LiDAR technology | |
| KR102461439B1 (en) | Detection of objects in the car's surroundings | |
| Markiewicz et al. | Review of tracking and object detection systems for advanced driver assistance and autonomous driving applications with focus on vulnerable road users sensing | |
| JP2020066246A (en) | Road condition estimation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |