WO2020157135A1 - Environmental awareness system for a vehicle - Google Patents

Environmental awareness system for a vehicle

Info

Publication number
WO2020157135A1
WO2020157135A1 (Application PCT/EP2020/052171)
Authority
WO
WIPO (PCT)
Prior art keywords
attribute
sensor
processor
vehicle
data
Application number
PCT/EP2020/052171
Other languages
French (fr)
Inventor
Karsten Behrendt
Original Assignee
Robert Bosch GmbH
Application filed by Robert Bosch GmbH
Publication of WO2020157135A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for anti-collision purposes
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to group G01S17/00
    • G01S7/4802: Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Definitions

  • This disclosure relates to machine-learning functions and artificial intelligence systems for a vehicle having at least one driving function.
  • the driving function may be controlled by the artificial intelligence system of the vehicle.
  • Vehicles having automated functions may be helpful in providing safer and more efficient transportation of associated passengers.
  • Vehicles having driving functions that may be controlled in an automated manner rely upon sensors to detect other objects within the operating environment.
  • Past solutions have previously been reliant upon recreation of training data each time that a system in a training phase is adjusted or updated.
  • Past solutions have previously made use of monotype sensor arrays that may suffer from inabilities to provide data capable of distinguishing certain attributes of objects within the environment.
  • a more reliable object-detection and object-prediction system is desired to improve safety and reliability of vehicle driving functions.
  • the environmental mapping system may comprise a processor, a memory, a first sensor, and a second sensor.
  • the first sensor and the second sensor may be operable to detect objects within the environment surrounding the vehicle, and may be operable to detect attributes of each detected object.
  • the processor may be operable to utilize the data collected by the first sensor and second sensor to predict the values of a third attribute of an object, or additional attributes of an object, using a machine-learning model.
  • the machine-learning model may generate a hierarchy of attributes, some of the attributes being predictable based upon the value of one or more other attributes within the hierarchy.
  • the machine-learning model may have been trained using a corpus of groundtruth annotation data cultivated by a human observer of the training input data.
  • the processor may be operable to utilize the detected and predicted attributes of the objects to control one or more self-driving functions of the vehicle.
  • the first sensor and second sensor may comprise a lidar sensor or a camera sensor.
  • Another aspect of this disclosure is directed to an environmental awareness system associated with a vehicle comprising a processor, a memory, a first sensor, and a second sensor.
  • the vehicle may be operable to perform one or more driving functions autonomously.
  • the processor may be operable to utilize the data collected by the first sensor and second sensor to predict the values of a third attribute of an object, or additional attributes of an object, using a machine-learning model.
  • the machine-learning model may generate a hierarchy of attributes, including at least a first number of attributes and a second number of attributes.
  • the machine-learning model may have been trained using a corpus of groundtruth annotation data cultivated by a human observer of the training input data.
  • the processor may be operable to utilize the detected and predicted attributes of the objects to control one or more self-driving functions of the vehicle.
  • the first sensor and second sensor may comprise a lidar sensor or a camera sensor.
  • a further aspect of this disclosure is directed to a method of controlling a driving function of a vehicle.
  • the method comprises the steps of collecting first sensor data from a first sensor and second sensor data from a second sensor.
  • the sensor data is utilized to define an object within the environment of the vehicle, and a hierarchy of attributes is defined upon detection and definition of the object.
  • the hierarchy may comprise a number of junctions, each of the junctions representing an attribute. Some of the junction values may be defined in response to analysis of the collected sensor data. Some of the junction values may be predicted in response to the defined values of other junctions.
  • the completed hierarchy may be utilized to determine how the driving function should be controlled.
  • FIG. 1 is a diagrammatic view of a vehicle having an environmental awareness system.
  • Fig. 2 is a diagrammatic illustration of an exemplary operating environment for an environmental awareness system.
  • FIG. 3 is a diagrammatic illustration of the exemplary operating environment of Fig. 2, with additional groundtruth annotations overlaid thereon.
  • FIG. 4 is a diagrammatic illustration of the exemplary operating environment of Fig. 2 with additional detection point data overlaid thereon.
  • FIG. 5 is a diagrammatic illustration of the exemplary operating environment of Fig. 2 and additional detection point data of Fig. 4, with additional detection contours overlaid thereon.
  • Fig. 6 is a diagrammatic illustration of the detection contours of Fig. 5 and detection point data of Fig. 4 without additional overlaid environmental information.
  • Fig. 7 is a diagrammatic illustration of a comparison of the detection contours of Fig. 5 with the groundtruth annotations of Fig. 3.
  • Fig. 8 is a flowchart illustrating a method of controlling driving functions of a vehicle based upon data collected by an environmental awareness system associated with the vehicle.
  • Fig. 1 illustrates a diagrammatic view of a vehicle 100 having an environmental mapping awareness system comprising a processor 101, a number of first sensors 103 and second sensors 105, and a memory 107.
  • first sensors 103, second sensors 105, and memory 107 may be in data communication with processor 101.
  • the number of first sensors 103 may comprise lidar sensors and the number of second sensors 105 may comprise camera sensors.
  • Other embodiments may comprise different configurations having other or additional sensors, such as motion sensors, radar sensors, infrared sensors, or any other sensor for detecting or measuring objects within an environment known to one of ordinary skill in the art.
  • first sensors 103 are configured to detect objects within the environment and disposed at different positions with respect to vehicle 100.
  • sensor 103a is forward-facing
  • sensor 103b is rear-facing
  • sensor 103c faces to the left-hand side
  • sensor 103d faces to the right-hand side of vehicle 100, but other embodiments may comprise a different number of sensors, or sensors having alternative arrangements without deviating from the teachings disclosed herein.
  • Each of sensors 103a, 103b, 103c, and 103d may comprise identical specifications, or in some embodiments one or more of the sensors may comprise distinct specification from other similar sensors without deviating from the teachings disclosed herein.
  • second sensors 105 are configured to detect objects within the environment and disposed at different positions with respect to vehicle 100.
  • sensor 105a is forward-facing
  • sensor 105b is rear-facing
  • sensor 105c faces to the left-hand side
  • sensor 105d faces to the right-hand side of vehicle 100, but other embodiments may comprise a different number of sensors, or sensors having alternative arrangements without deviating from the teachings disclosed herein.
  • Each of sensors 105a, 105b, 105c, and 105d may comprise identical specifications, or in some embodiments one or more of the sensors may comprise distinct specification from other similar sensors without deviating from the teachings disclosed herein.
  • memory 107 may comprise processor-executable instructions that may be executed by processor 101. Memory 107 may additionally provide data storage for data generated by other components of the environmental mapping system.
  • memory 107 may be embodied as a non-transitory computer-readable storage medium or a machine-readable medium for carrying or having computer-executable instructions or data structures stored thereon.
  • Such non-transitory computer-readable storage media or machine-readable medium may be any available media embodied in a hardware or physical form that can be accessed by a general purpose or special purpose computer.
  • non-transitory computer-readable storage media or machine-readable medium may comprise random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), optical disc storage, magnetic disk storage, linear magnetic data storage, magnetic storage devices, flash memory, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. Combinations of the above should also be included within the scope of the non-transitory computer-readable storage media or machine-readable medium.
  • vehicle 100 may also comprise a global navigation satellite system (GNSS) sensor 109.
  • GNSS sensor 109 may be operable to provide processor 101 with location data useful in determining some conditions of the surrounding environment.
  • memory 107 may comprise map data, such as high-density map data, that may be used by processor 101 in determining some conditions of the surrounding environment.
  • Processor 101 may be operable to utilize first sensors 103 and second sensors 105 to detect objects within the surrounding environment.
  • Objects may comprise fixtures, vehicular traffic, pedestrian traffic, stationary items, or non-stationary items within the environment.
  • Fixtures may comprise permanent elements within the environment, such as buildings, signage, traffic markings, light posts, roadway features, roadway hazards, potholes, roadway flooding, sidewalks, curbs, bridge overpasses, or any other permanent element within the environment known to one of ordinary skill in the art.
  • Vehicular traffic may comprise moving or stationary vehicles within the bounds of a roadway or parking space, such as cars, trucks, motorcycles, bicycles, or other vehicles known to one of ordinary skill in the art.
  • Pedestrian traffic may comprise moving or stationary people or animals (such as dogs) within the environment, including within the drivable portions of the environment or on non-drivable portions of the environment (such as sidewalks, playgrounds, or walkways). Pedestrian traffic may further comprise users of personal mobility devices such as wheelchairs, skateboards, or baby strollers or any other personal mobility device known to one of ordinary skill in the art.
  • Stationary items may comprise items within the environment that are non-permanent but also not moving, such as traffic cones, picnic tables, patio furniture, statuary, or any other non-permanent, non-moving item known to one of ordinary skill in the art.
  • Non-stationary items may comprise items within the environment that move within the environment but are not permanent elements within the environment.
  • Non-stationary items may comprise windswept debris, flowing water, a ball thrown between pedestrians, or any other non-stationary item known to one of ordinary skill in the art.
  • Because objects may comprise all of the above forms, some objects may belong to multiple overlapping forms without deviating from the teachings disclosed herein. Some objects may belong to multiple overlapping forms based on contextual understanding of the object. By way of example and not limitation, a car may be considered vehicular traffic while moving within a roadway, a stationary item when parked, and a non-stationary item when moving outside of the roadway (such as within a driveway or parking structure). Some objects may belong to multiple overlapping forms irrespective of context.
  • a moveable railway crossing barricade may be considered a fixture, with some parts (e.g., the barricade) being a non-stationary item, and other parts (e.g., the base) being a stationary item.
  • processor 101 may be operable to analyze data acquired by the first sensors 103 and second sensors 105 to detect objects within the surrounding environment and determine additional attributes associated with each detected object. The determined attributes may be utilized by processor 101 to predict additional attributes about a particular object. Both determined attributes and predicted attributes may be utilized by processor 101 to control a driving function of vehicle 100. In some embodiments, a vehicle processor different from processor 101 may be utilized to directly control the driving functions of vehicle 100 without deviating from the teachings disclosed herein.
  • Processor 101 may be operable to utilize a machine-learning model to analyze sensor data for object detection and attribute determination.
  • the machine-learning model may be trained and tested using known input data that is accompanied by groundtruth annotations.
  • Groundtruth annotations represent data annotations provided to the known input and represent the results of an analysis of the known input that are considered to be correct by the machine-learning model.
  • groundtruth annotations may be generated by expert human observers analyzing the data.
  • groundtruth annotations may be advantageously re-used along with their associated known input data to train or test different permutations of an environmental awareness system, such as after a change in software, firmware, or hardware.
  • an updated environmental awareness system may automatically re-train its machine-learning model using the existing groundtruth annotations in response to a change in software, firmware, or hardware.
  • the machine-learning model may be trained using groundtruth annotations, but not tested using groundtruth annotations without deviating from the teachings disclosed herein.
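  • As an illustration of this re-use, a stored annotation corpus might be replayed against a new system version as in the following Python sketch. The sketch is hypothetical: the sample layout and the model's fit_step/predict interface are assumptions, not elements of the disclosure.

```python
# Hypothetical sketch: re-using a stored groundtruth corpus to re-train and
# re-test a new version of an environmental awareness system without
# re-annotating the underlying sensor data.
from dataclasses import dataclass

@dataclass
class AnnotatedSample:
    sensor_frame: dict   # raw lidar/camera data for one captured scene
    groundtruth: list    # human-curated annotations (e.g., boxes and labels)

def retrain_and_retest(model, train_set, test_set):
    """Re-train, then re-test, a model against the existing annotation corpus.

    Because the corpus is retained, a change in software, firmware, or
    hardware only requires re-running this routine. The model is assumed
    to expose fit_step() and predict().
    """
    for sample in train_set:                                # training phase
        model.fit_step(sample.sensor_frame, sample.groundtruth)
    matches = sum(model.predict(s.sensor_frame) == s.groundtruth
                  for s in test_set)                        # testing phase
    return matches / len(test_set)
```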
  • Fig. 2 is a diagrammatic illustration of an exemplary operating environment for a vehicle having an environmental awareness system, such as vehicle 100 (see Fig. 1).
  • the exemplary operating environment depicted in Fig. 2 may correspond to what is detectable using sensors of the environmental awareness system from the perspective of the front of an associated vehicle.
  • the analysis presented herein may be achieved using the environmental awareness system of Fig. 1.
  • this perspective of the exemplary environment may be observable using a lidar sensor such as first sensor 103a and a camera sensor such as second sensor 105a (see Fig. 1).
  • Other embodiments may comprise other or additional sensor configurations, or other vantage points of the environment observable using other sensors of the environmental awareness system without deviating from the teachings disclosed herein.
  • Fig. 2 comprises a number of moving objects within the environment, such as a pedestrian 201, a vehicle 203, a vehicle 205, and a vehicle 207.
  • Fig. 2 comprises a number of stationary objects within the environment, such as a sign 209, a sign 211, a crosswalk 213, a building 215, a building 217, a streetlight 219, and a streetlight 221.
  • pedestrian 201 is crossing the roadway within crosswalk 213.
  • Vehicle 203 is making a right-hand turn through the intersection in front of the perspective vehicle from an intersecting roadway.
  • Vehicle 205 is approaching the perspective vehicle from the opposite direction in another lane of the same roadway.
  • Vehicle 207 is stopped before traffic light 221 to make a left-hand turn through the associated intersection.
  • the environmental awareness system may be operable to recognize each of the objects depicted, and make a prediction about their movements that can be utilized to inform a driving function of the perspective vehicle.
  • Fig. 3 is a diagrammatic illustration of the environment of Fig. 2 with additional groundtruth annotations for some of the objects depicted therein.
  • groundtruth annotations provide approximate dimensions of associated objects, but other embodiments may comprise other attributes of the objects within the data without deviating from the teachings disclosed herein.
  • Pedestrian 201 is bounded by a pedestrian annotation 301.
  • Vehicle 203, vehicle 205, and vehicle 207 are bounded by a vehicle annotation 303, a vehicle annotation 305, and a vehicle annotation 307 respectively.
  • Sign 209 and sign 211 are bounded by a sign annotation 309 and a sign annotation 311 respectively.
  • Though in the depicted embodiment crosswalk 213, building 215, building 217, streetlight 219, and streetlight 221 are not bounded by groundtruth annotations, some embodiments may include groundtruth annotations for all objects within the environment without deviating from the teachings disclosed herein. In the depicted embodiment, only selected objects may have been illustrated with an associated groundtruth annotation for the purpose of clarity of the teachings herein. Some embodiments may comprise selective groundtruth annotations associated with only a selected number of objects within the environment without deviating from the teachings disclosed herein.
  • Fig. 4 provides a diagrammatic illustration of detection points generated by a machine-learning model of an environmental awareness system.
  • a cluster of pedestrian detections 401 may result from sensor measurements of pedestrian 201 within the environment.
  • An example of a pedestrian detection 401a is provided herein for the purposes of illustration.
  • a cluster of vehicle detections 403 may result from sensor measurements of vehicle 203 within the environment.
  • An example of a vehicle detection 403a is provided herein for the purposes of illustration.
  • a cluster of vehicle detections 405 may result from sensor measurements of vehicle 205 within the environment.
  • An example of a vehicle detection 405a is provided herein for the purposes of illustration.
  • a cluster of vehicle detections 407 may result from sensor measurements of vehicle 207 within the environment.
  • An example of a vehicle detection 407a is provided herein for the purposes of illustration.
  • a cluster of sign detections 409 may result from sensor measurements of sign 209 within the environment.
  • An example of a sign detection 409a is provided herein for the purposes of illustration.
  • a cluster of sign detections 411 may result from sensor measurements of sign 211 within the environment.
  • An example of a sign detection 411a is provided herein for the purposes of illustration.
  • Different sensors may provide data useful to distinguish the objects detected within the environment.
  • pedestrian detection 401a and vehicle detection 405a may not be readily distinguishable by a camera sensor because of their close proximity within a two-dimensional representation, but a lidar sensor may provide data describing their relative distances to the perspective vehicle, providing an associated processor with sufficient data to distinguish between pedestrian 201 and vehicle 205.
  • a lidar sensor may not be able to distinguish crosswalk 213 from the road surface, but a camera sensor may utilize differences in coloration on the road surface to provide data to distinguish crosswalk 213 from the rest of the road surface.
  • Other embodiments may utilize additional or different sensors to provide different data to a processor for determining distinct objects without deviating from the teachings disclosed herein.
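  • As a concrete sketch of this complementarity, detections that overlap in the two-dimensional camera image can be separated using the per-point range that only the lidar supplies. The tuple layout and the 2 m gap threshold below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: separating image-overlapping detections into distinct
# objects by lidar range, as in the pedestrian 201 / vehicle 205 example.

def split_by_range(detections, gap_m=2.0):
    """Group detections into objects by range discontinuity.

    Each detection is a (u, v, range_m) tuple: an image position associated
    by the camera plus a distance measured by the lidar. Range-sorted points
    separated by more than gap_m metres are treated as different objects.
    """
    groups = []
    for det in sorted(detections, key=lambda d: d[2]):
        if groups and det[2] - groups[-1][-1][2] <= gap_m:
            groups[-1].append(det)
        else:
            groups.append([det])
    return groups

# Two points near 8 m and one point at 23 m resolve into two objects.
print(len(split_by_range([(410, 300, 8.1), (415, 305, 8.4), (412, 280, 23.0)])))
```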
  • Fig. 5 depicts a diagrammatic illustration of how a processor may utilize detection points of an object to generate object contours estimating the real-world dimensions of each object that are perceivable by the sensors of an environmental awareness system.
  • a pedestrian contour 501 may be estimated using pedestrian detections 401.
  • a vehicle contour 503 may be detected using vehicle detections 403.
  • a vehicle contour 505 may be detected using vehicle detections 405.
  • a vehicle contour 507 may be detected using vehicle detections 407.
  • a sign contour 509 may be detected using sign detections 409.
  • a sign contour 511 may be detected using sign detections 411.
  • In the depicted embodiment, only the objects for which detections are provided are illustrated with a contour for purposes of illustration, and all detected objects may be associated with an estimated contour without deviating from the teachings disclosed herein.
  • Object contours may be used to estimate dimensional sizes of objects within the environment.
  • the height and width of an object may be estimated based upon the contours utilizing the measured distance of the object detections.
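  • A minimal sketch of this size estimation under a pinhole-camera assumption follows; the focal length is an assumed calibration constant, not a figure from the disclosure.

```python
# Hypothetical sketch: converting a contour's extent in image pixels into
# metric dimensions using the distance measured to the object (pinhole model).

FOCAL_PX = 1000.0  # assumed camera focal length, in pixels

def contour_size_m(pixel_width, pixel_height, distance_m, focal_px=FOCAL_PX):
    """Estimate real-world width and height of a contour at a known range."""
    width_m = pixel_width * distance_m / focal_px
    height_m = pixel_height * distance_m / focal_px
    return width_m, height_m

# A contour 220 px wide and 180 px tall at 12.5 m is roughly 2.75 m x 2.25 m,
# a plausible first-order size attribute for a vehicle such as vehicle 203.
print(contour_size_m(220, 180, 12.5))
```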
  • the dimensions of a detected object are an example of a first-order attribute that may be recognized by analysis of the data acquired by the sensors of an environmental awareness system.
  • Additional attributes of the object may comprise second-order attributes, which are predicted based upon the values of at least one other attribute.
  • Second-order attributes may be predicted based upon the values of first-order attributes, second-order attributes, direct analysis of sensor data, or some combination thereof without deviating from the teachings disclosed herein.
  • attributes may further be associated with a confidence value indicating a level of confidence that a particular value accurately reflects the conditions of the associated object.
  • first-order attributes and second-order attributes may be arranged in a hierarchy of attributes accessible by a processor of an environmental awareness system, such as processor 101 (see Fig. 1).
  • the hierarchy may be stored in a data storage such as memory 107, or any other form known to one of ordinary skill in the art without deviating from the teachings disclosed herein.
  • the hierarchy may comprise a data structure having a matrix of junctions, each of the junctions representing a predetermined attribute that is known to be useful for controlling a driving function of an associated vehicle.
  • a value of a junction representing a first-order attribute may be determined using a direct analysis of received sensor data.
  • a value of a junction representing a second-order attribute may be predicted based at least upon the value of one other junction within the hierarchy.
  • the values of junctions representing second-order attributes may be predicted based upon a number of values of junctions representing first-order attributes, a number of values of junctions representing second-order attributes, analysis of sensor data, or some combination of the above without deviating from the teachings disclosed herein.
  • the hierarchy may comprise a predetermined structure of attribute junctions relevant to an object. Different detected objects may be associated with different hierarchy structures based upon the form of the object.
  • the hierarchy’s structure may be defined based upon the expected contextual correlation of attribute values. For example, a second-order attribute indicating whether a vehicle is parked may only be relevant if a first-order attribute indicates that the object is in fact a vehicle. Other contextual correlations will be recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
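  • One way such a hierarchy of junctions might be represented is sketched below. The class and attribute names are illustrative assumptions; the sketch shows first-order junctions filled by sensor analysis, a second-order junction predicted from other junctions, the per-attribute confidence value, and the contextual gating in which a "parked" attribute is only evaluated for vehicle objects.

```python
# Hypothetical sketch of a hierarchy of attribute junctions. First-order
# junctions are filled from direct sensor analysis; second-order junctions
# are predicted from the values of other junctions in the hierarchy.
from dataclasses import dataclass, field

@dataclass
class Junction:
    value: object = None
    confidence: float = 0.0  # confidence that the value reflects the object

@dataclass
class AttributeHierarchy:
    junctions: dict = field(default_factory=dict)

    def set_first_order(self, name, value, confidence):
        self.junctions[name] = Junction(value, confidence)

    def predict_second_order(self):
        j = self.junctions
        # Contextual gating: "parked" is only relevant for vehicle objects.
        if j.get("type") and j["type"].value == "vehicle":
            is_parked = (not j["moving"].value) and j["in_parking_area"].value
            conf = min(j["moving"].confidence, j["in_parking_area"].confidence)
            j["parked"] = Junction(is_parked, conf)

h = AttributeHierarchy()
h.set_first_order("type", "vehicle", 0.97)
h.set_first_order("moving", False, 0.92)
h.set_first_order("in_parking_area", False, 0.88)
h.predict_second_order()
print(h.junctions["parked"])  # Junction(value=False, confidence=0.88)
```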
  • Examples of attributes associated with vehicle objects may comprise: confidence that the vehicle is moving or stopped, velocity, acceleration, size of the vehicle and covariance thereof, a type of vehicle, distance from a sensor of the perspective vehicle, an existence probability, whether the vehicle is empty, open/closed status of doors, open/closed status of trunk, open/closed status of a cargo area, whether a vehicle light is illuminated, whether a hazard signal is activated, whether a turn signal is activated, whether brake lights are illuminated, whether reverse lights are illuminated, the status of cabin lights, a detection of vehicle occupancy, probability of vehicle occupancy, a numeric vehicle occupancy, a wheel angle, duration of motion, speed of motion, direction of motion, duration of non-motion, color of a body of the vehicle, predicted railway crossing behavior, probability of a special condition (such as involvement in a collision or a break-down), or any other attribute associated with a vehicle known to one of ordinary skill in the art without deviating from the teachings disclosed herein.
  • Examples of attributes associated with objects representing the lane of a roadway may comprise: lane width, distance to a lane boundary from the perspective vehicle, type of lane boundary, lane assignment confidence for an object detected within the lane, lane assignment confidence for the perspective vehicle, or any other lane-related attribute recognized by one of ordinary skill without deviating from the teachings disclosed herein.
  • Examples of attributes associated with objects representing roadway elements may comprise: the number of lanes to the left of the perspective vehicle, the number of lanes to the right of the perspective vehicle, permissible traffic flow of the roadway, distance to the next upcoming turn opportunity from a perspective vehicle, identification of a shoulder of the roadway, identification of a parking area, identification of a bridge section, identification of a moveable bridge section, status of a moveable bridge section, the width of a lane on the roadway, permissible traffic maneuvers (such as parking, U-turns, Y-turns, standing, or stopping) of the perspective vehicle at its current location, the classification of street, the location of crosswalks, a speed limit, the localization of stop lines within the roadway, or any other roadway-related attributes recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
  • Examples of attributes associated with objects representing a pedestrian may comprise: a special designation of a pedestrian (such as a police officer or construction worker), a recognized behavior (such as street-crossing, waiting to cross, entering/exiting a vehicle, providing technical assistance to a vehicle), a relative position with respect to the perspective vehicle, a relative position with respect to one or more other vehicles in the environment, or any other pedestrian-related attributes recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
  • a processor of an environmental awareness system may be operable to utilize the data acquired indicating the environment depicted in Fig. 5 to associate attributes with the objects therein.
  • pedestrian contour 501 may be utilized to determine that the associated object is pedestrian 201. Repeated measurements may determine the speed and direction of motion of pedestrian 201. Camera data may detect crosswalk 213, and the processor may determine that pedestrian 201 is walking within crosswalk 213, and thus additional attributes of pedestrian 201 may be predicted, such as a prediction that pedestrian 201 is crossing the road within crosswalk 213.
  • contour 503 may be utilized to determine that the associated object is vehicle 203.
  • Repeated measurements may determine the speed and direction of travel for vehicle 203 as it approaches an intersection of the roadway.
  • Camera data may indicate that a turn signal of vehicle 203 is active when the signal is perceivable to a camera sensor of the perspective vehicle.
  • These first-order attributes may be utilized to predict a second-order attribute that vehicle 203 is performing a turning maneuver.
  • contour 507 may be utilized to determine that the associated object is vehicle 207.
  • Camera and lidar data may be analyzed to determine that vehicle 207 is not moving, it has a turn signal on, and it is currently positioned within a left-hand turning lane at an intersection of the roadway.
  • Camera data may also be analyzed to determine that streetlight 221 is illuminated with a “stop” signal.
  • These first-order attributes may be utilized to predict a second-order attribute that vehicle 207 is waiting to perform a turning maneuver until streetlight 221 indicates that it is legal to turn.
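  • Reduced to code, the vehicle 207 example might collapse to a simple predicate over first-order attribute values, as in the hypothetical sketch below; the attribute names are illustrative placeholders.

```python
# Hypothetical sketch: predicting the second-order "waiting to turn"
# attribute for a vehicle such as vehicle 207 from first-order attributes
# determined by direct analysis of camera and lidar data.

def predict_waiting_to_turn(attrs):
    """Return True if the object appears to be waiting to perform a turn."""
    return (not attrs["moving"]
            and attrs["turn_signal_active"]
            and attrs["in_turn_lane"]
            and attrs["signal_state"] == "stop")

print(predict_waiting_to_turn({
    "moving": False,
    "turn_signal_active": True,
    "in_turn_lane": True,
    "signal_state": "stop",
}))  # True
```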
  • contour 505 may be utilized to determine that the associated object is vehicle 205.
  • Camera and lidar data may be analyzed to determine that vehicle 205 is in motion on a lane of the roadway different from the current lane of the perspective vehicle.
  • Additional camera data or GNSS data may be analyzed to determine that vehicle 205 is not within a designated parking area.
  • These first-order attributes may be utilized to predict a second-order attribute that vehicle 205 is not in a parked condition.
  • each of the above examples may be combined, and the combined detected and predicted attributes may be utilized to control a driving function of the perspective vehicle.
  • the attributes of pedestrian 201, vehicle 203, vehicle 205, and vehicle 207 may be utilized to predict when the perspective vehicle is able to safely navigate down the roadway towards its current orientation.
  • additional data may be gathered that is useful for controlling a driving function of the perspective vehicle.
  • the processor may be operable to analyze the visual content of sign 209 and sign 211 in order to assist driving functions such as navigation, or maneuvering through a portion of the environment.
  • Fig. 6 depicts an illustration of detection points and their associated contours to estimate the dimensions and locations of objects within the environment.
  • the contour estimation may follow a predetermined method for interpolating related detection points into a single contour.
  • each of the contours is estimated based upon detection points at the extreme portions of the detected objects, as in the sketch below.
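```python
# Hypothetical sketch: estimating a rectangular contour from the extreme
# coordinates of a cluster of detection points. Deployed systems might
# instead fit convex hulls or oriented boxes; this form is an assumption
# for illustration only.

def estimate_contour(points):
    """Return (min_x, min_y, max_x, max_y) bounding the detection points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

# A cluster such as pedestrian detections 401 collapses to a single contour.
print(estimate_contour([(2.0, 1.1), (2.3, 1.9), (1.8, 1.4)]))  # (1.8, 1.1, 2.3, 1.9)
```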
  • the attributes associated with the detection points may be utilized to distinguish contours.
  • contour 505 is understood in the depicted embodiment to be separate from contour 501 because each of the detection points is associated with an attribute of distance. Because detection points 405 are measured to be further away from the perspective vehicle than detection points 401, the processor of the environmental awareness system can distinguish the two objects in the environment by analyzing the sensor data and generating first-order attribute data to associate with the objects.
  • a testing phase may also be utilized to ensure reliable accuracy consistent with operations observed in the training phase.
  • Groundtruth annotations may comprise training groundtruth annotations used during the training phase, and testing groundtruth annotations used during the testing phase.
  • Fig. 7 illustrates a comparison between contours estimated for detected objects, and groundtruth annotations for the same environmental data overlaid thereon.
  • the contours from Fig. 5 and the groundtruth annotations from Fig. 3 are provided for illustration of the comparison, but other embodiments may comprise other comparisons of the same data, and other environmental data may result in different comparisons.
  • Contours may be judged by their accuracy based upon their correlation to the groundtruth annotation associated with the same object.
  • Groundtruth annotations may be generated in order to encompass the entirety of a detected object, and therefore another metric for determining accuracy may be whether the contour is encapsulated by the corresponding groundtruth annotation.
  • contour 501 and contour 507 may be considered very accurate estimations when compared to the corresponding groundtruth annotation 301 and groundtruth annotation 307 respectively.
  • contour 509 may be determined to be less accurate because a large portion of the contour exceeds the bounds of its corresponding groundtruth annotation 309
  • contour 511 may be determined to be less accurate because a large portion of the contour does not correlate well with its corresponding groundtruth annotation 311.
  • estimations may be considered poor if an erroneous contour is estimated that corresponds to no groundtruth annotation, or if a groundtruth annotation does not have a corresponding estimated contour generated by the processor.
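  • This comparison resembles an intersection-over-union score combined with an encapsulation test. The following sketch is an assumed formalization, not a metric recited in the disclosure; boxes are (min_x, min_y, max_x, max_y) tuples.

```python
# Hypothetical sketch: scoring an estimated contour against its groundtruth
# annotation. A high intersection-over-union rewards correlation (contours
# 501 and 507); the encapsulation test flags contours that spill outside
# their annotation (contour 509).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def encapsulated(contour, annotation):
    """True if the contour lies entirely inside the groundtruth annotation."""
    return (contour[0] >= annotation[0] and contour[1] >= annotation[1]
            and contour[2] <= annotation[2] and contour[3] <= annotation[3])

gt, est = (0, 0, 4, 2), (0.2, 0.1, 3.8, 1.9)
print(round(iou(est, gt), 2), encapsulated(est, gt))  # 0.81 True
```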
  • This comparative analysis of contours and corresponding groundtruth annotations may additionally be performed during testing phases without deviating from the teachings disclosed herein.
  • the same training groundtruth annotations and testing groundtruth annotations may be utilized for respective training and testing phases of different versions of an environmental system in order to track the relative accuracy between the different versions.
  • changes to the system may be made to the software, firmware, or hardware of the system to distinguish each version.
  • changes may be made to attribute hierarchies that result in different attribute predictions. Changes to attribute hierarchies may comprise different weighting of input data on attribute determination, or different weighting of first-order attribute values when predicting second-order attributes. Other changes to attribute hierarchies may be utilized without deviating from the teachings disclosed herein.
  • Fig. 8 is a flowchart illustrating a method of operating an environmental awareness system to control a driving function of a vehicle, such as vehicle 100 (see Fig. 1).
  • the method begins at step 800, and continues to steps 802 and 804 where sensor data is collected that describes the environment as perceived by the sensors.
  • step 802 may correspond to a collection of lidar data
  • step 804 may correspond to a collection of camera data, but other embodiments may comprise different or other steps directed to other sensor types of the environmental awareness system without deviating from the teachings disclosed herein.
  • step 802 and step 804 may be performed concurrently, but other embodiments may perform the steps in a different sequence without deviating from the teachings disclosed herein.
  • the method proceeds to step 806 where the sensor data is analyzed to detect an object and a hierarchy of attributes is defined with respect to the detected object.
  • the environmental awareness system may utilize a number of hierarchies, each hierarchy being applicable to a particular type of object, such as a vehicle, pedestrian, fixture, or the like. After detection of the object and type, a corresponding hierarchy may be assigned having junctions thereof to represent the attributes of that object.
  • the junctions may comprise first-order junctions representing attributes that may be defined by analyzing the sensor data and second-order junctions representing attributes that may be predicted based upon the sensor data, the value of another junction, or a combination thereof.
  • At step 808, the first-order junction values are determined using analysis of the sensor data. Once all first-order junction values have been updated, the method proceeds to step 810, where second-order junction values are predicted based upon first-order junction values, the sensor data analysis, or a combination thereof.
  • the attribute values may be utilized to control a vehicle function at step 812, and the method returns to the beginning to continue monitoring the environment during operation of the vehicle. In some embodiments, this method is concurrently active for each detected object within the environment without deviating from the teachings disclosed herein.
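  • The flow of Fig. 8 might be expressed as the following loop body, executed repeatedly while the vehicle operates. The sensor, detector, hierarchy, and controller interfaces are assumed placeholders, and the step numbers refer to the flowchart described above.

```python
# Hypothetical sketch of the Fig. 8 method as one pass of a control loop.
# All collaborating objects are injected placeholders, not a real API.

def awareness_cycle(lidar, camera, detect, hierarchies, controller):
    """Run one cycle of the environmental awareness method of Fig. 8."""
    lidar_data = lidar.read()         # step 802: collect lidar data
    camera_data = camera.read()       # step 804: collect camera data
    for obj in detect(lidar_data, camera_data):           # step 806: detect
        h = hierarchies.for_type(obj)                     # assign hierarchy
        h.fill_first_order(lidar_data, camera_data, obj)  # step 808: define
        h.predict_second_order()                          # step 810: predict
        controller.update(obj, h)     # step 812: control driving function
```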

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

An environmental awareness system associated with a vehicle comprising a processor, a memory, a first number of sensors, and a second number of sensors different from the first number of sensors. The processor may be operable to collect data generated by the sensors to detect and identify objects within the environment. The processor may further be operable to determine and predict attributes of detected objects in order to predict behaviors of the objects within the environment. The predicted behaviors may be utilized to control a driving function of the vehicle.

Description

ENVIRONMENTAL AWARENESS SYSTEM FOR A VEHICLE
TECHNICAL FIELD
[0001] This disclosure relates to machine-learning functions and artificial intelligence systems for a vehicle having at least one driving function. The driving function may be controlled by the artificial intelligence system of the vehicle.
BACKGROUND
[0002] Vehicles having automated functions may be helpful in providing safer and more efficient transportation of associated passengers. Vehicles having driving functions that may be controlled in an automated manner rely upon sensors to detect other objects within the operating environment. Past solutions have previously been reliant upon recreation of training data each time that a system in a training phase is adjusted or updated. Past solutions have previously made use of monotype sensor arrays that may suffer from inabilities to provide data capable of distinguishing certain attributes of objects within the environment. A more reliable object-detection and object-prediction system is desired to improve safety and reliability of vehicle driving functions.
SUMMARY
[0003] One aspect of this disclosure is directed to an environmental mapping system associated with a vehicle, the environmental mapping system operable to provide data useful for a self-driving function of the vehicle. The environmental mapping system may comprise a processor, a memory, a first sensor, and a second sensor. The first sensor and the second sensor may be operable to detect objects within the environment surrounding the vehicle, and may be operable to detect attributes of each detected object. The processor may be operable to utilize the data collected by the first sensor and second sensor to predict the values of a third attribute of an object, or additional attributes of an object, using a machine-learning model. The machine-learning model may generate a hierarchy of attributes, some of the attributes being predictable based upon the value of one or more other attributes within the hierarchy. The machine-learning model may have been trained using a corpus of groundtruth annotation data cultivated by a human observer of the training input data. The processor may be operable to utilize the detected and predicted attributes of the objects to control one or more self-driving functions of the vehicle. In some embodiments, the first sensor and second sensor may comprise a lidar sensor or a camera sensor.
[0004] Another aspect of this disclosure is directed to an environmental awareness system associated with a vehicle comprising a processor, a memory, a first sensor, and a second sensor. The vehicle may be operable to perform one or more driving functions autonomously. The processor may be operable to utilize the data collected by the first sensor and second sensor to predict the values of a third attribute of an object, or additional attributes of an object, using a machine-learning model. The machine-learning model may generate a hierarchy of attributes, including at least a first number of attributes and a second number of attributes. The machine-learning model may have been trained using a corpus of groundtruth annotation data cultivated by a human observer of the training input data. The processor may be operable to utilize the detected and predicted attributes of the objects to control one or more self-driving functions of the vehicle. In some embodiments, the first sensor and second sensor may comprise a lidar sensor or a camera sensor.
[0005] A further aspect of this disclosure is directed to a method of controlling a driving function of a vehicle. The method comprises the steps of collecting first sensor data from a first sensor and second sensor data from a second sensor. The sensor data is utilized to define an object within the environment of the vehicle, and a hierarchy of attributes is defined upon detection and definition of the object. The hierarchy may comprise a number of junctions, each of the junctions representing an attribute. Some of the junction values may be defined in response to analysis of the collected sensor data. Some of the junction values may be predicted in response to the defined values of other junctions. The completed hierarchy may be utilized to determine how the driving function should be controlled.
[0006] The above aspects of this disclosure and other aspects will be explained in greater detail below with reference to the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Fig. 1 is a diagrammatic view of a vehicle having an environmental awareness system. [0008] Fig. 2 is a diagrammatic illustration of an exemplary operating environment for an environmental awareness system.
[0009] Fig. 3 is a diagrammatic illustration of the exemplary operating environment of Fig. 2, with additional groundtruth annotations overlaid thereon.
[0010] Fig. 4 is a diagrammatic illustration of the exemplary operating environment of Fig. 2 with additional detection point data overlaid thereon.
[0011] Fig. 5 is a diagrammatic illustration of the exemplary operating environment of Fig. 2 and additional detection point data of Fig. 4, with additional detection contours overlaid thereon.
[0012] Fig. 6 is a diagrammatic illustration of the detection contours of Fig. 5 and detection point data of Fig. 4 without additional overlaid environmental information.
[0013] Fig. 7 is a diagrammatic illustration of a comparison of the detection contours of Fig. 5 with the groundtruth annotations of Fig. 3.
[0014] Fig. 8 is a flowchart illustrating a method of controlling driving functions of a vehicle based upon data collected by an environmental awareness system associated with the vehicle.
DETAILED DESCRIPTION
[0015] The illustrated embodiments are disclosed with reference to the drawings. However, it is to be understood that the disclosed embodiments are intended to be merely examples that may be embodied in various and alternative forms. The figures are not necessarily to scale and some features may be exaggerated or minimized to show details of particular components. The specific structural and functional details disclosed are not to be interpreted as limiting, but as a representative basis for teaching one skilled in the art how to practice the disclosed concepts.
[0016] Fig. 1 illustrates a diagrammatic view of a vehicle 100 having an environmental mapping awareness system comprising a processor 101, a number of first sensors 103 and second sensors 105, and a memory 107. Each of first sensors 103, second sensors 105, and memory 107 may be in data communication with processor 101. In the depicted embodiment, the number of first sensors 103 may comprise lidar sensors and the number of second sensors 105 may comprise camera sensors. Other embodiments may comprise different configurations having other or additional sensors, such as motion sensors, radar sensors, infrared sensors, or any other sensor for detecting or measuring objects within an environment known to one of ordinary skill in the art.
[0017] In the depicted embodiment first sensors 103 are configured to detect objects within the environment and disposed at different positions with respect to vehicle 100. In the depicted embodiment, sensor 103a is forward-facing, sensor 103b is rear-facing, sensor 103c faces to the left-hand side, and sensor 103d faces to the right-hand side of vehicle 100, but other embodiments may comprise a different number of sensors, or sensors having alternative arrangements without deviating from the teachings disclosed herein. Each of sensors 103a, 103b, 103c, and 103d may comprise identical specifications, or in some embodiments one or more of the sensors may comprise distinct specification from other similar sensors without deviating from the teachings disclosed herein.
[0018] In the depicted embodiment second sensors 105 are configured to detect objects within the environment and disposed at different positions with respect to vehicle 100. In the depicted embodiment, sensor 105a is forward-facing, sensor 105b is rear-facing, sensor 105c faces to the left-hand side, and sensor 105d faces to the right-hand side of vehicle 100, but other embodiments may comprise a different number of sensors, or sensors having alternative arrangements without deviating from the teachings disclosed herein. Each of sensors 105a, 105b, 105c, and 105d may comprise identical specifications, or in some embodiments one or more of the sensors may comprise distinct specification from other similar sensors without deviating from the teachings disclosed herein.
[0019] In the depicted embodiment, memory 107 may comprise processor-executable instructions that may be executed by processor 101. Memory 107 may additionally provide data storage for data generated by other components of the environmental mapping system. In the depicted embodiment, memory 107 may be embodied as a non-transitory computer-readable storage medium or a machine-readable medium for carrying or having computer-executable instructions or data structures stored thereon. Such non-transitory computer-readable storage media or machine-readable medium may be any available media embodied in a hardware or physical form that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such non-transitory computer-readable storage media or machine-readable medium may comprise random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), optical disc storage, magnetic disk storage, linear magnetic data storage, magnetic storage devices, flash memory, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. Combinations of the above should also be included within the scope of the non-transitory computer-readable storage media or machine-readable medium.
[0020] In the depicted embodiment, vehicle 100 may also comprise a global navigation satellite system (GNSS) sensor 109. GNSS sensor 109 may be operable to provide processor 101 with location data useful in determining some conditions of the surrounding environment. In some embodiments, memory 107 may comprise map data, such as high-density map data, that may be used by processor 101 in determining some conditions of the surrounding environment.
[0021] Processor 101 may be operable to utilize first sensors 103 and second sensors 105 to detect objects within the surrounding environment. Objects may comprise fixtures, vehicular traffic, pedestrian traffic, stationary items, or non-stationary items within the environment. Fixtures may comprise permanent elements within the environment, such as buildings, signage, traffic markings, light posts, roadway features, roadway hazards, potholes, roadway flooding, sidewalks, curbs, bridge overpasses, or any other permanent element within the environment known to one of ordinary skill in the art. Vehicular traffic may comprise moving or stationary vehicles within the bounds of a roadway or parking space, such as cars, trucks, motorcycles, bicycles, or other vehicles known to one of ordinary skill in the art. Pedestrian traffic may comprise moving or stationary people or animals (such as dogs) within the environment, including within the drivable portions of the environment or on non-drivable portions of the environment (such as sidewalks, playgrounds, or walkways). Pedestrian traffic may further comprise users of personal mobility devices such as wheelchairs, skateboards, or baby strollers or any other personal mobility device known to one of ordinary skill in the art. Stationary items may comprise items within the environment that are non-permanent but also not moving, such as traffic cones, picnic tables, patio furniture, statuary, or any other non-permanent, non-moving item known to one of ordinary skill in the art. Non-stationary items may comprise items within the environment that move within the environment but are not permanent elements within the environment. Non-stationary items may comprise windswept debris, flowing water, a ball thrown between pedestrians, or any other non-stationary item known to one of ordinary skill in the art. [0022] Because objects may comprise all of the above forms, some objects may belong to multiple overlapping forms without deviating from the teachings disclosed herein. Some objects may belong to multiple overlapping forms based on contextual understanding of the object. By way of example and not limitation, a car may be considered vehicular traffic while moving within a roadway, a stationary item when parked, and a non-stationary item when moving outside of the roadway (such as within a driveway or parking structure). Some objects may belong to multiple overlapping forms irrespective of context. By way of example, and not limitation, a moveable railway crossing barricade may be considered a fixture, with some parts (e.g., the barricade) being a non-stationary item, and other parts (e.g., the base) being a stationary item.
[0023] Objects may exhibit different characteristics and expected behaviors based upon the forms to which they belong. For this reason, processor 101 may be operable to analyze data acquired by the first sensors 103 and second sensors 105 to detect objects within the surrounding environment and determine additional attributes associated with each detected object. The determined attributes may be utilized by processor 101 to predict additional attributes about a particular object. Both determined attributes and predicted attributes may be utilized by processor 101 to control a driving function of vehicle 100. In some embodiments, a vehicle processor different from processor 101 may be utilized to directly control the driving functions of vehicle 100 without deviating from the teachings disclosed herein.
[0024] Processor 101 may be operable to utilize a machine-learning model to analyze sensor data for object detection and attribute determination. The machine-learning model may be trained and tested using known input data that is accompanied by groundtruth annotations. Groundtruth annotations represent data annotations provided to the known input and represent the results of an analysis of the known input that are considered to be correct by the machine-learning model. In some embodiments, groundtruth annotations may be generated by expert human observers analyzing the data. In some embodiments, groundtruth annotations may be advantageously re-used along with their associated known input data to train or test different permutations of an environmental awareness system, such as after a change in software, firmware, or hardware. In some embodiments, an updated environmental awareness system may automatically re-train its machine-learning model using the existing groundtruth annotations in response to a change in software, firmware, or hardware. In some embodiments, the machine-learning model may be trained using groundtruth annotations, but not tested using groundtruth annotations without deviating from the teachings disclosed herein. [0025] Fig. 2 is a diagrammatic illustration of an exemplary operating environment for a vehicle having an environmental awareness system, such as vehicle 100 (see Fig. 1). The exemplary operating environment depicted in Fig. 2 may correspond to what is detectable using sensors of the environmental awareness system from the perspective of the front of an associated vehicle. By way of example, and not limitation, the analysis presented herein may be achieved using the environmental awareness system of Fig. 1, and in particular this perspective of the exemplary environment may be observable using a lidar sensor such as first sensor 103a and a camera sensor such as second sensor 105a (see Fig. 1). Other embodiments may comprise other or additional sensor configurations, or other vantage points of the environment observable using other sensors of the environmental awareness system without deviating from the teachings disclosed herein.
[0026] Fig. 2 comprises a number of moving objects within the environment, such as a pedestrian 201, a vehicle 203, a vehicle 205, and a vehicle 207. Fig. 2 comprises a number of stationary objects within the environment, such as a sign 209, a sign 211, a crosswalk 213, a building 215, a building 217, a streetlight 219, and a streetlight 221. These objects within the environment are provided by way of example, and not limitation, and the environmental awareness system may be operable to detect other objects within other environments without deviating from the teachings disclosed herein.
[0027] In the depicted embodiment, pedestrian 201 is crossing the roadway within crosswalk
213. Vehicle 203 is making a right-hand turn through the intersection in front of the perspective vehicle from an intersecting roadway. Vehicle 205 is approaching the perspective vehicle from the opposite direction in another lane of the same roadway. Vehicle 207 is stopped before streetlight 221 to make a left-hand turn through the associated intersection. The environmental awareness system may be operable to recognize each of the objects depicted, and make a prediction about their movements that can be utilized to inform a driving function of the perspective vehicle.
[0028] Fig. 3 is a diagrammatic illustration of the environment of Fig. 2 with additional groundtruth annotations for some of the objects depicted therein. In the depicted embodiment, groundtruth annotations provide approximate dimensions of associated objects, but other embodiments may comprise other attributes of the objects within the data without deviating from the teachings disclosed herein. Pedestrian 201 is bounded by a pedestrian annotation 301. Vehicle 203, vehicle 205, and vehicle 207 are bounded by a vehicle annotation 303, a vehicle annotation 305, and a vehicle annotation 307 respectively. Sign 209 and sign 211 are bounded by a sign annotation 309 and a sign annotation 311 respectively. Though in the depicted embodiment crosswalk 213, building 215, building 217, streetlight 219, and streetlight 221 are not bounded by groundtruth annotations, some embodiments may include groundtruth annotations for all objects within the environment without deviating from the teachings disclosed herein. In the depicted embodiment, only selected objects may have been illustrated with an associated groundtruth annotation for the purpose of clarity of the teachings herein. Some embodiments may comprise selective groundtruth annotations associated with only a selected number of objects within the environment without deviating from the teachings disclosed herein.
[0029] The groundtruth annotations of Fig. 3 provide a reference for detection and dimensional analysis of their respective associated objects. Fig. 4 provides a diagrammatic illustration of detection points generated by a machine-learning model of an environmental awareness system. A cluster of pedestrian detections 401 may result from sensor measurements of pedestrian 201 within the environment. An example of a pedestrian detection 401a is provided herein for the purposes of illustration. A cluster of vehicle detections 403 may result from sensor measurements of vehicle 203 within the environment. An example of a vehicle detection 403a is provided herein for the purposes of illustration. A cluster of vehicle detections 405 may result from sensor measurements of vehicle 205 within the environment. An example of a vehicle detection 405a is provided herein for the purposes of illustration. A cluster of vehicle detections 407 may result from sensor measurements of vehicle 207 within the environment. An example of a vehicle detection 407a is provided herein for the purposes of illustration. A cluster of sign detections 409 may result from sensor measurements of sign 209 within the environment. An example of a sign detection 409a is provided herein for the purposes of illustration. A cluster of sign detections 411 may result from sensor measurements of sign 211 within the environment. An example of a sign detection 411a is provided herein for the purposes of illustration.
[0030] Different sensors may provide data useful to distinguish the objects detected within the environment. For example, pedestrian detection 401a and vehicle detection 405a may not be readily distinguishable by a camera sensor because of their close proximity within a two-dimensional representation, but a lidar sensor may provide data describing their relative distances to the perspective vehicle, providing an associated processor with sufficient data to distinguish between pedestrian 201 and vehicle 205. In another example, while a lidar sensor may not be able to distinguish crosswalk 213 from the road surface, a camera sensor may utilize differences in coloration on the road surfaces to provide data to distinguish crosswalk 213 from the rest of the road surface. Other examples will be recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein. Other embodiments may utilize additional or different sensors to provide different data to a processor for determining distinct objects without deviating from the teachings disclosed herein.
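A minimal sketch of the range-based separation described above follows; the detection record format, the field names, and the 2.0 m gap threshold are illustrative assumptions rather than disclosed values.

```python
# Sketch: separating detections that overlap in the camera image by
# comparing their lidar-measured ranges. Field names and the threshold
# are assumptions for illustration.
def split_by_range(detections, min_gap_m=2.0):
    """Group detections whose measured distances agree; detections whose
    ranges differ by more than `min_gap_m` are treated as distinct
    objects even if they overlap in the two-dimensional image."""
    groups = []
    for det in sorted(detections, key=lambda d: d["range_m"]):
        if groups and det["range_m"] - groups[-1][-1]["range_m"] < min_gap_m:
            groups[-1].append(det)
        else:
            groups.append([det])
    return groups

# e.g. a pedestrian at ~8 m and a vehicle at 25 m that overlap in the
# camera image fall into separate groups:
print(split_by_range([{"range_m": 8.1}, {"range_m": 8.4}, {"range_m": 25.0}]))
```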
[0031] Fig. 5 depicts a diagrammatic illustration of how a processor may utilize detection points of an object to generate object contours estimating the real-world dimensions of each object that are perceivable by the sensors of an environmental awareness system. A pedestrian contour 501 may be estimated using pedestrian detections 401. A vehicle contour 503 may be estimated using vehicle detections 403. A vehicle contour 505 may be estimated using vehicle detections 405. A vehicle contour 507 may be estimated using vehicle detections 407. A sign contour 509 may be estimated using sign detections 409. A sign contour 511 may be estimated using sign detections 411. In the depicted embodiment, only the objects for which detections are illustrated are provided a contour for purposes of illustration, and all detected objects may be associated with an estimated contour without deviating from the teachings disclosed herein.
[0032] Object contours may be used to estimate dimensional sizes of objects within the environment. The height and width of an object may be estimated based upon the contours utilizing the measured distance of the object detections. The dimensions of a detected object are an example of a first-order attribute that may be recognized by analysis of the data acquired by the sensors of an environmental awareness system. Additional attributes of the object may comprise second-order attributes, which are predicted based upon the values of at least one other attribute. Second-order attributes may be predicted based upon the values of first-order attributes, second-order attributes, direct analysis of sensor data, or some combination thereof without deviating from the teachings disclosed herein. In some embodiments, attributes may further be associated with a confidence value indicating a level of confidence that a particular value accurately reflects the conditions of the associated object.
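The following sketch illustrates one way such a first-order dimension estimate and its accompanying confidence value might be computed, assuming a small-angle pinhole model and a simple range-noise confidence heuristic; neither the model nor the heuristic is prescribed by the disclosure.

```python
# Sketch: first-order width estimate from a contour's angular extent and
# its measured distance, plus an assumed confidence heuristic.
import math

def estimate_width(angular_extent_rad, distance_m):
    """Physical width from the angular extent of a contour and its
    lidar-measured distance (pinhole approximation)."""
    return 2.0 * distance_m * math.tan(angular_extent_rad / 2.0)

def with_confidence(value, range_noise_m, distance_m):
    """Attach a confidence value that falls off with relative range
    noise; the falloff model is an assumption for illustration."""
    confidence = max(0.0, 1.0 - range_noise_m / distance_m)
    return {"value": value, "confidence": confidence}

# A contour spanning ~3.4 degrees at 30 m is roughly 1.8 m wide:
width = estimate_width(math.radians(3.4), 30.0)
print(with_confidence(width, range_noise_m=0.5, distance_m=30.0))
```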
[0033] For the purpose of processor-based determinations of object conditions, first-order attributes and second-order attributes may be arranged in a hierarchy of attributes accessible by a processor of an environmental awareness system, such as processor 101 (see Fig. 1). The hierarchy may be stored in a data storage such as memory 107, or any other form known to one of ordinary skill in the art without deviating from the teachings disclosed herein. The hierarchy may comprise a data structure having a matrix of junctions, each of the junctions representing a predetermined attribute that is known to be useful for controlling a driving function of an associated vehicle. A value of a junction representing a first-order attribute may be determined using a direct analysis of received sensor data. A value of a junction representing a second-order attribute may be predicted based at least upon the value of one other junction within the hierarchy. In some embodiments, the values of junctions representing second-order attributes may be predicted based upon a number of values of junctions representing first-order attributes, a number of values of junctions representing second-order attributes, analysis of sensor data, or some combination of the above without deviating from the teachings disclosed herein.
[0034] The hierarchy may comprise a predetermined structure of attribute junctions relevant to an object. Different detected objects may be associated with different hierarchy structures based upon the form of the object. The hierarchy’s structure may be defined based upon the expected contextual correlation of attribute values. For example, a second-order attribute indicating whether a vehicle is parked may only be relevant if a first-order attribute indicates that the object is in fact a vehicle. Other contextual correlations will be recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
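By way of illustration, and not limitation, the junction hierarchy of paragraphs [0033] and [0034] might be sketched as below; the Junction/AttributeHierarchy design, the junction names, and the is_parked rule are assumptions for illustration only, not the disclosed implementation.

```python
# Sketch of a hierarchy of attribute junctions. Junction names and the
# simple callable-predictor design are illustrative assumptions.
class Junction:
    def __init__(self, name, order, predictor=None, depends_on=()):
        self.name = name
        self.order = order            # 1 = from sensor data, 2 = predicted
        self.predictor = predictor    # callable for second-order junctions
        self.depends_on = depends_on  # contextually correlated junctions
        self.value = None

class AttributeHierarchy:
    """Matrix of junctions for one object form (e.g. 'vehicle')."""
    def __init__(self):
        self.junctions = {}

    def add(self, junction):
        self.junctions[junction.name] = junction

    def set_first_order(self, name, value):
        self.junctions[name].value = value

    def predict_second_order(self):
        for j in self.junctions.values():
            if j.order == 2:
                inputs = {d: self.junctions[d].value for d in j.depends_on}
                j.value = j.predictor(inputs)

# 'is_parked' is only meaningful when 'is_vehicle' holds:
h = AttributeHierarchy()
h.add(Junction("is_vehicle", order=1))
h.add(Junction("speed_mps", order=1))
h.add(Junction("is_parked", order=2, depends_on=("is_vehicle", "speed_mps"),
               predictor=lambda v: bool(v["is_vehicle"]) and v["speed_mps"] == 0))
h.set_first_order("is_vehicle", True)
h.set_first_order("speed_mps", 0.0)
h.predict_second_order()
print(h.junctions["is_parked"].value)   # True
```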
[0035] Examples of attributes associated with vehicle objects may comprise: confidence that the vehicle is moving or stopped, velocity, acceleration, size of the vehicle and covariance thereof, a type of vehicle, distance from a sensor of the perspective vehicle, an existence probability, whether the vehicle is empty, open/closed status of doors, open/closed status of a trunk, open/closed status of a cargo area, whether a vehicle light is illuminated, whether a hazard signal is activated, whether a turn signal is activated, whether brake lights are illuminated, whether reverse lights are illuminated, the status of cabin lights, a detection of vehicle occupancy, probability of vehicle occupancy, a numeric vehicle occupancy, a wheel angle, duration of motion, speed of motion, direction of motion, duration of non-motion, color of a body of the vehicle, predicted railway crossing behavior, probability of a special condition (such as involvement in a collision or a break-down), or any other attribute associated with a vehicle known to one of ordinary skill in the art without deviating from the teachings disclosed herein.

[0036] Examples of attributes associated with objects representing the lane of a roadway may comprise: lane width, distance to a lane boundary from the perspective vehicle, type of lane boundary, lane assignment confidence for an object detected within the lane, lane assignment confidence for the perspective vehicle, or any other lane-related attribute recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
[0037] Examples of attributes associated with objects representing roadway elements may comprise: the number of lanes to the left of the perspective vehicle, the number of lanes to the right of the perspective vehicle, permissible traffic flow of the roadway, distance to the next upcoming turn opportunity from the perspective vehicle, identification of a shoulder of the roadway, identification of a parking area, identification of a bridge section, identification of a moveable bridge section, status of a moveable bridge section, the width of a lane on the roadway, permissible traffic maneuvers (such as parking, U-turns, Y-turns, standing, or stopping) of the perspective vehicle at its current location, the classification of the street, the location of crosswalks, a speed limit, the localization of stop lines within the roadway, or any other roadway-related attributes recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
[0038] Examples of attributes associated with objects representing a pedestrian may comprise: a special designation of a pedestrian (such as a police officer or construction worker), a recognized behavior (such as street-crossing, waiting to cross, entering/exiting a vehicle, providing technical assistance to a vehicle), a relative position with respect to the perspective vehicle, a relative position with respect to one or more other vehicles in the environment, or any other pedestrian-related attributes recognized by one of ordinary skill in the art without deviating from the teachings disclosed herein.
[0039] Returning to Fig. 5, a processor of an environmental awareness system may be operable to utilize the data acquired indicating the environment depicted in Fig. 5 to associate attributes with the objects therein. By way of example, and not limitation, pedestrian contour 501 may be utilized to determine that the associated object is pedestrian 201. Repeated measurements may determine the speed and direction of motion of pedestrian 201. Camera data may detect crosswalk 213, and the processor may determine that pedestrian 201 is walking within crosswalk 213, and thus additional attributes of pedestrian 201 may be predicted, such as a prediction that pedestrian 201 is crossing the road within crosswalk 213.

[0040] In another example, contour 503 may be utilized to determine that the associated object is vehicle 203. Repeated measurements may determine the speed and direction of travel for vehicle 203 as it approaches an intersection of the roadway. Camera data may indicate that a turn signal of vehicle 203 is active when the signal is perceivable to a camera sensor of the perspective vehicle. These first-order attributes may be utilized to predict a second-order attribute that vehicle 203 is performing a turning maneuver.
[0041] In another example, contour 507 may be utilized to determine that the associated object is vehicle 207. Camera and lidar data may be analyzed to determine that vehicle 207 is not moving, that it has a turn signal on, and that it is currently positioned within a left-hand turning lane at an intersection of the roadway. Camera data may also be analyzed to determine that streetlight 221 is illuminated with a “stop” signal. These first-order attributes may be utilized to predict a second-order attribute that vehicle 207 is waiting to perform a turning maneuver until streetlight 221 indicates that it is legal to turn.
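By way of example, the second-order prediction for vehicle 207 might be expressed as a simple rule over the first-order attributes named above; the rule and the attribute names are illustrative assumptions, not a prescribed prediction method.

```python
# Sketch: a rule-style second-order prediction combining first-order
# attributes for the vehicle 207 example. Attribute names are assumed.
def predict_waiting_to_turn(attrs):
    """True when the vehicle is stopped in a left-turn lane with a turn
    signal active while the traffic signal shows 'stop'."""
    return (attrs["speed_mps"] == 0
            and attrs["turn_signal_on"]
            and attrs["in_left_turn_lane"]
            and attrs["signal_state"] == "stop")

vehicle_207 = {"speed_mps": 0.0, "turn_signal_on": True,
               "in_left_turn_lane": True, "signal_state": "stop"}
print(predict_waiting_to_turn(vehicle_207))   # True
```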
[0042] In another example, contour 505 may be utilized to determine that the associated object is vehicle 205. Camera and lidar data may be analyzed to determine that vehicle 205 is in motion on a lane of the roadway different from the current lane of the perspective vehicle. Additional camera data or GNSS data may be analyzed to determine that vehicle 205 is not within a designated parking area. These first-order attributes may be utilized to predict a second-order attribute that vehicle 205 is not in a parked condition.
[0043] In another example, each of the above examples may be combined, and the combined detected and predicted attributes may be utilized to control a driving function of the perspective vehicle. For example, the attributes of pedestrian 201, vehicle 203, vehicle 205, and vehicle 207 may be utilized to predict when the perspective vehicle is able to safely navigate down the roadway along its current orientation. In some embodiments, additional data may be gathered that is useful for controlling a driving function of the perspective vehicle. By way of example, and not limitation, the processor may be operable to analyze the visual content of sign 209 and sign 211 in order to assist driving functions such as navigation or maneuvering through a portion of the environment.
[0044] Fig. 6 depicts an illustration of detection points and their associated contours to estimate the dimensions and locations of objects within the environment. The contour estimation may follow a predetermined method for interpolating related detection points into a single contour. In the depicted embodiment, each of the contours is estimated based upon detection points at the extreme portions of the detected objects. The attributes associated with the detection points may be utilized to distinguish contours. By way of example, and not limitation, contour 505 is understood in the depicted embodiment to be separate from contour 501 because each of the detection points is associated with an attribute of distance. Because detection points 405 are measured to be further away from the perspective vehicle than detection points 401, the processor of the environmental awareness system can distinguish the two objects in the environment by analyzing the sensor data and generating first-order attribute data to associate with the objects.
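A minimal sketch of the distance-based separation and extreme-point contour estimation described in paragraph [0044] follows; the point format and the gap threshold are assumptions for illustration.

```python
# Sketch: (1) group detection points whose distance attributes agree,
# then (2) form each contour from the extreme points of its group.
def cluster_and_contour(points, max_gap_m=1.5):
    """points: list of (x, y, distance_m) detections from one scan."""
    clusters = []
    for p in sorted(points, key=lambda p: p[2]):
        if clusters and p[2] - clusters[-1][-1][2] <= max_gap_m:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    contours = []
    for c in clusters:
        xs, ys = [p[0] for p in c], [p[1] for p in c]
        contours.append({"xmin": min(xs), "xmax": max(xs),
                         "ymin": min(ys), "ymax": max(ys),
                         "distance_m": sum(p[2] for p in c) / len(c)})
    return contours

# near detections (cf. 401) and a farther detection (cf. 405) separate
# cleanly into two contours:
print(cluster_and_contour([(1, 2, 8.0), (1.2, 2.1, 8.2), (3, 1, 24.0)]))
```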
[0045] Prior to general use of the environmental awareness system, its accuracy may be considered and improved during a training phase. In some embodiments, a testing phase may also be utilized to ensure reliable accuracy consistent with operations observed in the training phase. Groundtruth annotations may comprise training groundtruth annotations used during the training phase, and testing groundtruth annotations used during the testing phase. Fig. 7 illustrates a comparison between contours estimated for detected objects, and groundtruth annotations for the same environmental data overlaid thereon. In the depicted embodiment, the contours from Fig. 5 and the groundtruth annotations from Fig. 3 are provided for illustration of the comparison, but other embodiments may comprise other comparisons of the same data, and other environmental data may result in different comparisons. Contours may be judged by their accuracy based upon their correlation to the groundtruth annotation associated with the same object. Groundtruth annotations may be generated in order to encompass the entirety of a detected object, and therefore another metric for determining accuracy may be whether the contour is encapsulated by the corresponding groundtruth annotation. In the depicted embodiment, contour 501 and contour 507 may be considered very accurate estimations when compared to the corresponding groundtruth annotation 301 and groundtruth annotation 307 respectively. In contrast, contour 509 may be determined to be less accurate because a large portion of the contour exceeds the bounds of its corresponding groundtruth annotation 309, while contour 511 may be determined to be less accurate because a large portion of the contour does not correlate well with its corresponding groundtruth annotation 311. In some embodiments, estimations may be considered poor if an erroneous contour is estimated that corresponds to no groundtruth annotation, or if a groundtruth annotation does not have a corresponding estimated contour generated by the processor. This comparative analysis of contours and corresponding groundtruth annotations may additionally be performed during testing phases without deviating from the teachings disclosed herein. Advantageously, the same training groundtruth annotations and testing groundtruth annotations may be utilized for respective training and testing phases of different versions of an environmental awareness system in order to track the relative accuracy between the different versions. In such embodiments, changes to the system may be made to the software, firmware, or hardware of the system to distinguish each version. In some such embodiments, changes may be made to attribute hierarchies that result in different attribute predictions. Changes to attribute hierarchies may comprise different weighting of input data on attribute determination, or different weighting of first-order attribute values when predicting second-order attributes. Other changes to attribute hierarchies may be utilized without deviating from the teachings disclosed herein.
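One plausible way to score an estimated contour against its groundtruth annotation, consistent with the correlation and encapsulation criteria described in paragraph [0045], is sketched below using axis-aligned 2D boxes; the box simplification and the example values are assumptions introduced for illustration.

```python
# Sketch: intersection-over-union as a correlation measure, plus an
# encapsulation check, both over axis-aligned boxes (an assumption).
def iou(a, b):
    """a, b: boxes as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def encapsulated(contour, annotation):
    """True when the contour lies entirely inside the annotation, which
    is drawn to encompass the whole object."""
    return (contour[0] >= annotation[0] and contour[1] >= annotation[1]
            and contour[2] <= annotation[2] and contour[3] <= annotation[3])

# hypothetical boxes standing in for contour 501 and annotation 301:
contour_501, annotation_301 = (2, 1, 4, 5), (1.8, 0.9, 4.2, 5.1)
print(iou(contour_501, annotation_301), encapsulated(contour_501, annotation_301))
```

Because the same scoring functions can be run against the same annotation corpus for every system version, such a metric supports the version-to-version accuracy tracking described above.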
[0046] Fig. 8 is a flowchart illustrating a method of operating an environmental awareness system to control a driving function of a vehicle, such as vehicle 100 (see Fig. 1). The method begins at step 800, and continues to steps 802 and 804 where sensor data is collected that describes the environment as perceived by the sensors. In the depicted embodiment, step 802 may correspond to a collection of lidar data, and step 804 may correspond to a collection of camera data, but other embodiments may comprise different or other steps directed to other sensor types of the environmental awareness system without deviating from the teachings disclosed herein. In the depicted embodiment, step 802 and step 804 may be performed concurrently, but other embodiments may perform the steps in a different sequence without deviating from the teachings disclosed herein.
[0047] After sensor data has been collected, the method proceeds to step 806 where the sensor data is analyzed to detect an object and a hierarchy of attributes is defined with respect to the detected object. The environmental awareness system may utilize a number of hierarchies, each hierarchy being applicable to a particular type of object, such as a vehicle, pedestrian, fixture, or the like. After detection of the object and type, a corresponding hierarchy may be assigned having junctions thereof to represent the attributes of that object. The junctions may comprise first-order junctions representing attributes that may be defined by analyzing the sensor data and second-order junctions representing attributes that may be predicted based upon the sensor data, the value of another junction, or a combination thereof.
[0048] The method proceeds to step 808, where the first-order junction values are determined using analysis of the sensor data. Once all first-order junction values have been updated, the method proceeds to step 810, where second-order junction values are predicted based upon first-order junction values, the sensor data analysis, or a combination thereof.
[0049] After the hierarchy has been completed, the attribute values may be utilized to control a vehicle function at step 812, and the method returns to the beginning to continue monitoring the environment during operation of the vehicle. In some embodiments, this method is concurrently active for each detected object within the environment without deviating from the teachings disclosed herein.
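The Fig. 8 flow might be sketched as the following loop body; the injected callables stand in for the sensor readers, the object detector, the hierarchy factory, and the control interface, and all of them are assumptions rather than the disclosed implementation.

```python
# Sketch of one cycle of the Fig. 8 method. All dependencies are passed
# in as callables so the sketch stays self-contained; their names are
# illustrative assumptions.
def run_cycle(read_lidar, read_camera, detect_objects, make_hierarchy, control):
    lidar_data = read_lidar()                  # step 802: collect lidar data
    camera_data = read_camera()                # step 804: collect camera data
    for obj in detect_objects(lidar_data, camera_data):  # step 806: detect
        hierarchy = make_hierarchy(obj)        # step 806: assign per-form hierarchy
        hierarchy.fill_first_order(obj)        # step 808: values from sensor data
        hierarchy.predict_second_order()       # step 810: predicted values
        control(obj, hierarchy)                # step 812: drive-function control
    # the surrounding loop repeats this cycle, and a cycle may run
    # concurrently for each detected object as noted in paragraph [0049]
```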
[0050] While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the disclosed apparatus and method. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure as claimed. The features of various implementing embodiments may be combined to form further embodiments of the disclosed concepts.

Claims

WHAT IS CLAIMED IS:
1. An environmental awareness system associated with a vehicle, the system
comprising:
a processor;
a first sensor in data communication with the processor and operable to generate first sensor data relating to objects within an environment surrounding the vehicle;
a second sensor different from the first sensor in data communication with the processor and operable to generate second sensor data relating to objects within the environment surrounding the vehicle; and
a memory in data communication with the processor and comprising instructions stored thereon that when executed by the processor cause the processor to detect objects within the environment using data from the first sensor and the second sensor,
wherein the processor is operable to extract at least a first attribute of a detected object from the first sensor data and operable to extract at least a second attribute of the detected object from the second sensor data,
wherein the processor is operable to predict a predicted attribute of the detected object based upon at least one of the first attribute and the second attribute,
wherein the processor is operable to predict a value of a third attribute of the detected object using a trained machine-learning model, the trained machine-learning model accepting at least one of the first attribute or second attribute as an input and trained using a corpus of groundtruth annotation data, and
wherein the processor is operable to control a driving function of the vehicle based upon the first attribute, second attribute, and third attribute of the detected object.
2. The environmental awareness system of claim 1, wherein the first sensor is a lidar sensor and the first sensor data comprises lidar detection data.
3. The environmental awareness system of claim 2, wherein the second sensor is a camera and the second sensor data comprises camera data.
4. The environmental awareness system of claim 1, wherein the processor is operable to update the instructions stored upon the memory, and wherein the processor is operable to perform a retraining of the machine-learning model using the corpus of groundtruth annotation data in response to updating the instructions stored upon the memory.
5. The environmental awareness system of claim 1, wherein the first attribute comprises dimensional data indicating a physical dimension of the detected object.
6. The environmental awareness system of claim 5, wherein the predicted attribute comprises an indication of whether or not the detected object is another vehicle.
7. The environmental awareness system of claim 6, wherein the processor is operable to predict a value of a fourth attribute of the detected object using the trained machine-learning model, the fourth attribute comprising an indication of whether or not the other vehicle is in a parked condition.
8. The environmental awareness system of claim 1, wherein the processor is operable to extract a fourth attribute from the first sensor data, and wherein the processor is operable to predict the third attribute using at least two attributes of the first attribute, second attribute, or fourth attribute.
9. The environmental awareness system of claim 8, wherein the second sensor data further comprises a fifth attribute, and wherein the processor is operable to predict the third attribute using at least two of the first attribute, second attribute, fourth attribute, and fifth attribute.
10. The environmental awareness system of claim 1, wherein the second sensor data further comprises a fourth attribute, and wherein the processor is operable to predict the third attribute using at least two of the first attribute, second attribute, and fourth attribute.
11. The environmental awareness system of claim 1, wherein one of the first attribute or the second attribute comprises a confidence associated with a condition of the detected object.
12. An environmental awareness system associated with a vehicle, the system
comprising:
a processor;
a first sensor in data communication with the processor and operable to generate first sensor data measuring objects within an environment surrounding the vehicle;
a second sensor different from the first sensor in data communication with the processor and operable to generate second sensor data measuring objects within the environment surrounding the vehicle; and
a memory in data communication with the processor and comprising instructions stored thereon that when executed by the processor cause the processor to control the first sensor and second sensor to detect objects within the environment,
wherein the processor is operable to extract a first number of attributes associated with a detected object from the first sensor data and extract a second number of attributes associated with the detected object from the second sensor data,
wherein the processor is operable to predict a value of a predicted attribute of a detected object based upon a hierarchy comprising the first number of attributes and the second number of attributes, wherein the predicted attribute is incorporated into the hierarchy; and
wherein the processor is operable to predict values of the predicted attribute using a trained machine-learning model trained using a corpus of groundtruth annotation data,
wherein the processor is operable to control a driving function of the vehicle based upon the first number of attributes, the second number of attributes, and the predicted attribute.
13. The environmental awareness system of claim 12, wherein the processor is operable to predict a plurality of predicted attributes of the detected object based upon the first number of attributes, the second number of attributes, and the hierarchy, wherein the plurality of predicted attributes are further incorporated into the hierarchy.
14. The environmental awareness system of claim 12, wherein at least one of the first number of attributes and the second number of attributes comprises a confidence associated with a condition of the detected object.
15. The environmental awareness system of claim 12, wherein the first sensor is a lidar sensor and the first sensor data further comprises lidar detection data.
16. The environmental awareness system of claim 15, wherein the second sensor is a camera sensor and the second sensor data further comprises camera data.
17. The environmental awareness system of claim 12, wherein the groundtruth annotation data comprises training annotation data used during a training phase of the processor and testing annotation data used during a testing phase of the processor.
18. A method of controlling a driving function of a vehicle within an environment surrounding the vehicle, the method comprising:
collecting first sensor data from a first sensor associated with the vehicle and operable to measure objects within the environment, the first sensor data indicating a first detected attribute of an object;
collecting second sensor data from a second sensor associated with the vehicle and operable to measure objects within the environment, the second sensor data indicating a second detected attribute of the object;
defining a hierarchy of attributes upon detection of an object within the environment by at least one of the first sensor or the second sensor, the hierarchy comprising a plurality of junctions corresponding to attributes of the detected object, each of the junctions having a contextual correlation to at least one other junction of the hierarchy;
updating at least one of a first junction of the hierarchy or a second junction of the hierarchy with data values of the first detected attribute or second detected attribute respectively;
predicting a value of a third junction of the hierarchy based upon a contextual correlation of the third junction to at least one of the first junction or the second junction and updating the value of the third junction using the predicted value; and
utilizing the updated values of the hierarchy to control a driving function during navigation of the vehicle within the environment.
19. The method of claim 18, wherein updating at least one of the first junction of the hierarchy or the second junction of the hierarchy comprises updating the first junction of the hierarchy with the first detected attribute and the second junction of the hierarchy with the second detected attribute, and
predicting the value of the third junction of the hierarchy is based upon a contextual correlation of the third junction to the first junction and the second junction.
20. The method of claim 18, wherein the first detected attribute indicates whether the detected object is a vehicle, and the third junction of the hierarchy corresponds to a third attribute indicating a parked condition of the vehicle.
PCT/EP2020/052171 2019-01-31 2020-01-29 Environmental awareness system for a vehicle WO2020157135A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962799553P 2019-01-31 2019-01-31
US62/799553 2019-01-31

Publications (1)

Publication Number Publication Date
WO2020157135A1 true WO2020157135A1 (en) 2020-08-06

Family

ID=69591597

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/052171 WO2020157135A1 (en) 2019-01-31 2020-01-29 Environmental awareness system for a vehicle

Country Status (1)

Country Link
WO (1) WO2020157135A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022258250A1 (en) * 2021-06-07 2022-12-15 Mercedes-Benz Group AG Method for detecting objects which are relevant to the safety of a vehicle

Similar Documents

Publication Publication Date Title
US11148664B2 (en) Navigation in vehicle crossing scenarios
US11851085B2 (en) Navigation and mapping based on detected arrow orientation
US11755024B2 (en) Navigation by augmented path prediction
US11254329B2 (en) Systems and methods for compression of lane data
US20220067209A1 (en) Systems and methods for anonymizing navigation information
CN109641589B (en) Route planning for autonomous vehicles
US20230005374A1 (en) Systems and methods for predicting blind spot incursions
JP6783949B2 (en) Road detection using traffic sign information
GB2620695A (en) Systems and methods for vehicle navigation
US20220027642A1 (en) Full image detection
US20190031198A1 (en) Vehicle Travel Control Method and Vehicle Travel Control Device
WO2021231906A1 (en) Systems and methods for vehicle navigation involving traffic lights and traffic signs
Zyner et al. ACFR five roundabouts dataset: Naturalistic driving at unsignalized intersections
US20240199006A1 (en) Systems and Methods for Selectively Decelerating a Vehicle
RU2660425C1 (en) Device for calculating route of motion
CN113597396A (en) On-road positioning method and apparatus using road surface characteristics
US20220397410A1 (en) Method and system for identifying confidence level of autonomous driving system
WO2023126680A1 (en) Systems and methods for analyzing and resolving image blockages
WO2020157135A1 (en) Environmental awareness system for a vehicle
US20200219399A1 (en) Lane level positioning based on neural networks
Zyner Naturalistic driver intention and path prediction using machine learning
US11837089B2 (en) Modular extensible behavioral decision system for autonomous driving
Hsu Verification and Validation of Machine Learning Applications in Advanced Driving Assistance Systems and Automated Driving Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20705294

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20705294

Country of ref document: EP

Kind code of ref document: A1