WO2021176031A1 - Method and system for determining visibility region of different object types for an autonomous vehicle - Google Patents

Method and system for determining visibility region of different object types for an autonomous vehicle Download PDF

Info

Publication number
WO2021176031A1
WO2021176031A1 (PCT/EP2021/055543 · EP2021055543W)
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
visibility region
customized
sensors
object type
Prior art date
Application number
PCT/EP2021/055543
Other languages
French (fr)
Inventor
Oliver Schwindt
Dominik Nuss
Manuel Schier
Benjamin Ulmer
Original Assignee
Robert Bosch Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch Gmbh filed Critical Robert Bosch Gmbh
Publication of WO2021176031A1 publication Critical patent/WO2021176031A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/865Combination of radar systems with lidar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/87Combinations of radar systems, e.g. primary radar and secondary radar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/87Combinations of systems using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/411Identification of targets based on measurements of radar reflectivity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00Input parameters relating to objects
    • B60W2554/40Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404Characteristics
    • B60W2554/4048Field of view, e.g. obstructed view or direction of gaze
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/25Data precision
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/35Data fusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle

Definitions

  • TITLE “METHOD AND SYSTEM FOR DETERMINING VISIBILITY REGION OF DIFFERENT OBJECT TYPES FOR AN AUTONOMOUS VEHICLE”
  • the present subject matter is related, in general, to autonomous driving technology and, more particularly but not exclusively, to a system and method for determining a visibility region of different object types for an autonomous vehicle.
  • Visibility region: a viewable area of a vehicle’s external environment captured/recorded by a plurality of sensors on an autonomous vehicle.
  • Obstacle: any object that is detected by a sensor in the path of the autonomous vehicle.
  • Customized visibility region: a visibility region observed by a specific sensor of the autonomous vehicle with respect to a specific obstacle type.
  • Intersecting visibility region: a common visibility region observed by a plurality of sensors of different types with respect to a specific obstacle type.
  • General visibility region: the visibility region observed by a plurality of sensors of different types mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles and obstacle type differentiation does not occur.
  • Unified visibility region: the visibility region observed by a plurality of sensors of a specific sensor type mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles and obstacle type differentiation occurs.
  • Autonomous vehicles rely on a series of sensors that help the vehicles understand the external environment in real time to avoid collisions, navigate autonomously, spot signs of danger and drive safely. Sensors not only help to determine the actual environment and present dangers; they also help the vehicle to provide appropriate responses that range from accelerating/decelerating to turning, emergency stopping and evasive maneuvers. These responses could be determined by detecting the obstacles using information provided by various sensors integrated within the autonomous vehicle. The visibility region of an autonomous vehicle can be understood as a complement to detection of existing objects or obstacles by the sensor. Object tracking, or detection techniques, generally estimate the existence of an object only if the object has been detected at least once.
  • a method of determining a visibility region of different object types for an autonomous vehicle comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types.
  • the customized measurements are specific measurements associated with the sensor type of a corresponding sensor.
  • the method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensors.
  • Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to an obstacle of a specific object type.
  • the method determines a unified visibility region of each sensor type, for the obstacle of each object type.
  • a method of determining a visibility region of different object types for an autonomous vehicle comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types.
  • the customized measurements are specific measurements associated with the sensor type of a corresponding sensor.
  • the method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensors.
  • Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to an obstacle of a specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and predetermined object type characteristics.
  • the method further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of a corresponding sensor type for a corresponding object type. Further, the method identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types.
  • the method determines a likelihood of non-existence of one or more obstacles of specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of the sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
  • a system for determining the visibility region of different object types for an autonomous vehicle comprises a processor communicatively coupled to the system and a memory, which is communicatively coupled to the processor.
  • the memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types.
  • the customized measurements are specific measurements associated with the sensor type of corresponding sensor.
  • the processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type.
  • the processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.
  • a system for determining visibility region of different object types for an autonomous vehicle comprises a processor communicatively coupled to the system and a memory communicatively coupled to the processor.
  • the memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types.
  • the customized measurements are specific measurements associated with the sensor type of corresponding sensor.
  • the processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from the corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by at least one sensor of the plurality of sensors with respect to an obstacle of a specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and predetermined object type characteristics.
  • the processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.
  • the processor further identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types and determines a likelihood of non-existence of one or more obstacles of a specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
  • Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for autonomous vehicle in accordance with an embodiment of the present disclosure
  • Figure 2 is an exemplary block diagram illustrating various components of a visibility region determination system of Figure 1 in accordance with an embodiment of the present disclosure
  • Figure 3a depicts a flowchart of an exemplary method of describing visibility region of different object types in accordance with an embodiment of the present disclosure
  • Figure 3b depicts exemplary representation of customized visibility regions of LIDAR sensor type in accordance with an embodiment of the present disclosure
  • Figure 3c depicts exemplary representation of customized visibility regions of RADAR sensor type in accordance with an embodiment of the present disclosure
  • Figure 3d depicts exemplary representation of unified visibility region of LIDAR sensor type in accordance with an embodiment of the present disclosure
  • Figure 3e depicts exemplary representation of unified visibility region of RADAR sensor type in accordance with an embodiment of the present disclosure.
  • Figure 3f depicts exemplary representation of intersecting visibility region of an object type for LIDAR and RADAR sensor types in accordance with an embodiment of the present disclosure.
  • Embodiments of the present disclosure relate to a method and a system for determining a visibility region of different object types for an autonomous vehicle.
  • the system receives a sensor input from each of a plurality of sensors associated with the autonomous vehicle.
  • the sensor input includes dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environment conditions, etc. and customized measurements associated with the visibility region for each sensor.
  • Upon receiving the sensor input, the system generates one or more customized visibility region for each sensor at a plurality of time frames using the sensor input from the corresponding sensor.
  • the customized visibility region is the visibility region observed by a specific sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type at a current time frame of the plurality of time frames. Further, the system determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of the corresponding sensor type for the corresponding object type. In one example, for the obstacle of one object type, the unified visibility region for the sensor type is obtained by determining the union of one or more customized visibility regions of the plurality of sensors of the corresponding sensor type for the corresponding object type. The system further identifies an intersecting visibility region for each object type using the unified visibility regions of one or more sensor types.
  • the system determines a likelihood of non-existence of one or more obstacles of certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame.
  • the estimated likelihood may be fed to an autonomous driving system to make appropriate decisions while driving.
  • Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for autonomous vehicle in accordance with an embodiment of the present disclosure.
  • the object type may be defined by the size of the object, material of the object, velocity of the object, semantics of the object, etc.
  • the exemplary system 100 comprises one or more components configured for determining visibility region for autonomous vehicle.
  • the system 100 may be implemented using a single computer or a network of computers including cloud-based computer implementations.
  • the exemplary system 100 comprises a visibility region determination system (hereinafter referred to as VRDS) 102, one or more sensors 109 associated with an autonomous vehicle 103, a data repository 104 and an autonomous driving system 106 connected via a communication network (alternatively referred as network) 105.
  • the data repository 104 may be a cloud-implemented repository capable of storing sensor related information 110 including sensor type, capabilities of sensor types and so on.
  • the data repository 104 also stores object type characteristics 111 of different possible obstacles on road. In one embodiment, the object type characteristics 111 may be predefined and stored in the data repository 104.
  • the object type characteristics 111 for obstacles of different types may be defined as at least one from a set including, but not limited to: (a) any obstacle larger than 10x10x10 centimeters, (b) any obstacle larger than 1 meter in height, 30 centimeters in width and 30 centimeters in length, such as an upright pedestrian or bike, (c) any motorized obstacle larger than 1.20 meters in height, 40 centimeters in width and 1.5 meters in length, such as a motorbike, (d) any obstacle moving faster than 2 meters per second, and (e) any motorized obstacle moving faster than 2 meters per second.
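For illustration only, the object type characteristics listed above can be read as simple predicates over measured obstacle attributes. The Python sketch below assumes hypothetical attribute names (dimensions in meters, speed in m/s, a motorized flag) and type labels; none of these identifiers come from the patent.

```python
# Illustrative sketch only: one possible encoding of the predefined object type
# characteristics 111 as predicates over hypothetical obstacle attributes.
from dataclasses import dataclass

@dataclass
class Obstacle:
    length_m: float
    width_m: float
    height_m: float
    speed_mps: float
    motorized: bool

OBJECT_TYPE_CHARACTERISTICS = {
    # (a) any obstacle larger than 10x10x10 centimeters
    "small_static": lambda o: o.length_m > 0.10 and o.width_m > 0.10 and o.height_m > 0.10,
    # (b) any obstacle larger than 1 m height, 30 cm width, 30 cm length (e.g. upright pedestrian or bike)
    "upright": lambda o: o.height_m > 1.0 and o.width_m > 0.30 and o.length_m > 0.30,
    # (c) any motorized obstacle larger than 1.20 m height, 40 cm width, 1.5 m length (e.g. motorbike)
    "motorized_large": lambda o: o.motorized and o.height_m > 1.20 and o.width_m > 0.40 and o.length_m > 1.5,
    # (d) any obstacle moving faster than 2 m/s
    "fast": lambda o: o.speed_mps > 2.0,
    # (e) any motorized obstacle moving faster than 2 m/s
    "fast_motorized": lambda o: o.motorized and o.speed_mps > 2.0,
}

def classify(obstacle: Obstacle) -> list[str]:
    """Return every object type whose characteristics the obstacle satisfies."""
    return [name for name, check in OBJECT_TYPE_CHARACTERISTICS.items() if check(obstacle)]

# Example: a pedestrian-sized obstacle walking at 1.4 m/s.
print(classify(Obstacle(length_m=0.4, width_m=0.5, height_m=1.7, speed_mps=1.4, motorized=False)))
```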
  • the autonomous vehicle 103 comprises a plurality of sensors 109-1, 109-2, ..., 109-N (collectively referred to as sensors 109) capable of detecting or recording visibility region dimensions of the external environment of the autonomous vehicle 103.
  • the plurality of sensors 109 may be associated with one or more sensor types including Radio Detection and Ranging (RADAR) sensor type, Light Detection and Ranging (LIDAR) sensor type, ultrasonic sensor, camera sensor, speed sensor and so on.
  • the plurality of sensors 109 is configured to identify a visibility region for the autonomous vehicle 103 and record or detect dimensional parameters of the visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environment conditions, etc. and customized measurements based on the sensor type of the sensor.
  • the plurality of sensors 109 may also detect speed information, brake pressure details, any obstructions like pothole, bump, debris or abnormal level of roughness on the road surface.
  • the autonomous driving system 106 is coupled with the VRDS 102 and is configured to make appropriate decisions while driving based on information provided by the VRDS 102.
  • the autonomous driving system 106 may be integrated within the autonomous vehicle 103.
  • the autonomous driving system 106 is configured to act based on information received from the VRDS 102 by accelerating/decelerating, turning, emergency stopping and so on. In the context of self-driving and collision avoidance, the functionality of the autonomous driving system 106 is based on the information provided by the VRDS 102.
  • the VRDS 102 is configured to determine visibility region of different object types based on sensor input provided by the sensors 109 associated with various sensor types.
  • the VRDS 102 may be configured as a standalone system.
  • the VRDS 102 may be configured in cloud environment.
  • the VRDS 102 may include any Wireless Application Protocol (WAP) enabled device or any other computing device capable of interfacing directly or indirectly to a network connection.
  • the VRDS 102 also includes a graphical user interface (GUI) provided therein for interacting with the data repository 104 and autonomous driving system 106.
  • the VRDS 102 comprises at least a processor 150 and a memory 152 coupled with the processor 150.
  • the VRDS 102 further comprises a visibility region generation module 156, a unified region determination module 158, an intersecting region determination module 159 and a reasoning module 160.
  • the VRDS 102 may be a typical visibility region determination system as illustrated in Figure 2.
  • the VRDS 102 comprises the processor 150, the memory 152, and an I/O interface 202.
  • the I/O interface 202 is coupled with the processor 150 and an I/O device.
  • the I/O device is configured to receive inputs via the I/O interface 202 from sensors 109 and transmit outputs for displaying in the I/O device via the I/O interface 202.
  • the VRDS 102 further includes data 204 and modules 206.
  • the data 204 may be stored within the memory 152.
  • the data 204 may include sensor input 210, customized visibility region 212, unified visibility region 214, intersecting visibility region 215, likelihood score 216 and other data 218.
  • the sensor input 210 indicates data recorded or identified by the sensors 109.
  • the sensor input 210 in one example, is dimensions of a visibility region including, but not limiting to, location coordinates, angular dimensions of sensor with respect to the visibility region and other dimensional parameters associated with the autonomous vehicle 103.
  • the sensor input may also include speed information related to autonomous vehicle 103, brake pressure details, any obstructions like pothole, bump, debris or abnormal level of roughness on the road surface.
  • the customized visibility region 212 may be defined as the visibility region observed by a specific sensor of the plurality of sensors of the autonomous vehicle 103 with respect to a specific obstacle type at a current time frame.
  • the unified visibility region 214 is defined as the total visibility region of one sensor type, for the obstacle of one object type.
  • the unified visibility region 214, in one example, may be defined as the visibility region obtained by the union of the customized visibility regions of the plurality of sensors 109 of one sensor type for the obstacle of one object type.
  • the intersecting visibility region 215 is defined as a common visibility region observed by a plurality of sensors 109 of various sensor types with respect to the obstacle of one object type.
  • the likelihood score 216 may be defined as a probabilistic estimation of non-existence of obstacles of at least one object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in the intersecting visibility region 215 of corresponding object type.
  • the data 204 may be stored in the memory 152 in form of various data structures. Additionally, the aforementioned data can be organized using data models, such as relational or hierarchical data models.
  • the other data 218 may store data, including temporary data, temporary files and data associated with visibility region, and co-ordinate databases generated by the modules 206 for performing the various functions of the VRDS 102.
  • the modules 206 may include, for example, the visibility region generation module 156, the unified region determination module 158, the intersecting region determination module 159 and the reasoning module 160.
  • the modules 206 may also comprise other modules 224 to perform various miscellaneous functionalities of the VRDS 102. It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules.
  • the modules 206 may be implemented in the form of software, hardware and/or firmware.
  • the term modules refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • the VRDS 102 is configured to receive the sensor input 210 from each of the plurality of sensors 109 and determine visibility region for each sensor for different object types based on the sensor input 210.
  • the plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records the sensor input as dimensions of the visibility region, i.e., the viewable area of the external environment of the autonomous vehicle 103.
  • the customized measurements of the sensor associated with one of the one or more sensor types comprise sensor-measurement parameters associated with corresponding sensor type.
  • the customized measurements for the sensor of RADAR sensor type comprise RADAR Cross Sections (RCS) of the identified object type, Doppler measurements including velocities and other related measurements based on capabilities of the RADAR sensor type.
  • the dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environment conditions, etc. and other dimensional parameters associated with the visibility region.
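As a non-authoritative sketch of how the sensor input 210 might be carried through the system, the record below bundles the common dimensional parameters with the sensor-type-specific customized measurements (for RADAR, e.g. RCS and Doppler velocity). All field names and example values are assumptions for illustration, not taken from the patent.

```python
# Illustrative sketch only: a hypothetical container for the per-sensor input 210.
from dataclasses import dataclass, field

@dataclass
class SensorInput:
    sensor_id: str
    sensor_type: str                 # e.g. "RADAR", "LIDAR", "ULTRASONIC", "CAMERA"
    timestamp_s: float               # time frame the measurement belongs to
    location: tuple                  # location / GPS coordinates of the sensor
    angular_fov_deg: tuple           # angular dimensions of the sensor w.r.t. the visibility region
    reflection: float                # reflection measurement of the sensor
    max_range_m: float               # sensor limitation such as range
    environment: dict = field(default_factory=dict)   # e.g. {"weather": "rain"}
    customized: dict = field(default_factory=dict)    # sensor-type-specific measurements

# Example: one RADAR reading with its customized measurements.
radar_input = SensorInput(
    sensor_id="radar_front_left",
    sensor_type="RADAR",
    timestamp_s=12.4,
    location=(48.77, 9.18),
    angular_fov_deg=(-60.0, 60.0),
    reflection=0.82,
    max_range_m=160.0,
    customized={"rcs_dbsm": 4.5, "doppler_velocity_mps": -7.2},
)
```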
  • the plurality of sensors 109 sends the sensor input 210 to the visibility region generation module 156 and the visibility region generation module 156 generates one or more customized visibility region 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from corresponding sensor using one or more known techniques for visibility region construction.
  • each of the one or more customized visibility region 212 is the visibility region with an obstacle of a specific object type observed by at least one sensor of the plurality of sensors 109 at a current time frame of the plurality of time frames.
  • the object type associated with the obstacle observed by at least one sensor is determined based on the predefined object type characteristics 111 of different object types and the customized measurements of the corresponding sensor at the current time frame.
  • the visibility region generation module 156 generates for each sensor, one or more customized visibility region 212 for at least one obstacle of at least one object type detected by the corresponding sensor using the visibility region dimensions received from the corresponding sensor at the plurality of time frames.
  • the plurality of sensors 109 is configured to directly generate the customized visibility region 212 for different object types by detecting obstacles associated with different object types and determining dimensions of the visibility region with the detected obstacle of at least one object type.
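One simple geometric reading of a customized visibility region 212 is sketched below: the region is the sensor's field-of-view sector, truncated to an object-type-dependent effective range (a small obstacle may only be reliably visible at shorter range than a motorbike). The shapely dependency, the effective-range table and all names are illustrative assumptions; the patent refers only to known visibility region construction techniques.

```python
# Illustrative sketch only: a field-of-view sector per object type for one sensor.
import math
from shapely.geometry import Polygon

def sector_polygon(origin, heading_deg, fov_deg, range_m, steps=32):
    """Approximate a sensor's field-of-view sector as a polygon."""
    ox, oy = origin
    start = math.radians(heading_deg - fov_deg / 2.0)
    end = math.radians(heading_deg + fov_deg / 2.0)
    points = [(ox, oy)]
    for i in range(steps + 1):
        a = start + (end - start) * i / steps
        points.append((ox + range_m * math.cos(a), oy + range_m * math.sin(a)))
    return Polygon(points)

# Hypothetical effective detection range of one sensor per object type.
EFFECTIVE_RANGE_M = {"small_static": 40.0, "upright": 80.0, "motorized_large": 120.0}

def customized_visibility_regions(origin, heading_deg, fov_deg, max_range_m):
    """Return one customized visibility region per object type for a single sensor."""
    return {
        obj_type: sector_polygon(origin, heading_deg, fov_deg, min(rng, max_range_m))
        for obj_type, rng in EFFECTIVE_RANGE_M.items()
    }
```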
  • the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type. In one embodiment, the unified region determination module 158 receives, for each object type, the one or more customized visibility region 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, union of the one or more customized visibility region 212 of the plurality of sensors 109 of corresponding sensor type for corresponding object type and generates the unified visibility region 214. Based on the unified visibility region 214 generated for each sensor type, the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type. In one embodiment, the intersecting region determination module 159 determines intersection of the unified visibility region 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type.
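The union and intersection steps just described map directly onto polygon set operations. A minimal sketch, assuming the regions are shapely geometries (for example produced by the sector sketch above):

```python
# Illustrative sketch only: union per sensor type, intersection across sensor types.
from shapely.ops import unary_union

def unified_visibility_region(customized_regions):
    """Unified visibility region 214: union of the customized visibility regions of
    all sensors of one sensor type, for one object type."""
    return unary_union(list(customized_regions))

def intersecting_visibility_region(unified_by_sensor_type):
    """Intersecting visibility region 215: intersection of the unified visibility
    regions of the different sensor types, for one object type."""
    regions = list(unified_by_sensor_type.values())
    result = regions[0]
    for region in regions[1:]:
        result = result.intersection(region)
    return result

# Example for one object type, e.g. four LIDAR regions and eight RADAR regions:
# lidar_unified = unified_visibility_region(lidar_regions)   # cf. Figure 3d
# radar_unified = unified_visibility_region(radar_regions)   # cf. Figure 3e
# common = intersecting_visibility_region({"LIDAR": lidar_unified, "RADAR": radar_unified})  # cf. Figure 3f
```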
  • the reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103.
  • the reasoning module 160 identifies plurality of visibility regions in the intersecting visibility region 215 using occupancy grid mapping and estimates the probability of non-existence of one or more obstacles of corresponding object type in each of the plurality of visibility regions.
  • the probability of non-existence of one or more obstacles in each visibility region is calculated, using true positive probability and false positive probability as given below in equation (1).
  • P(NX) = 1 - [P(Z/X) / (P(Z/X) + P(Z/NX))]   (1) where P(NX) is the probability of non-existence of obstacles (event NX) in the visibility region; P(Z/X) is the true positive probability, i.e., the probability that detection occurs (event Z) if the object exists (event X); and
  • P(Z/NX) is the false positive probability i.e., probability that detection occurs (event Z) if the object does not exist (event NX).
  • the reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of a certain object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, detection capabilities of the sensor type, range dependencies of the sensor and weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving.
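Read per grid cell, equation (1) and its aggregation into a likelihood score 216 could look like the sketch below. The per-cell probabilities follow the equation as stated; discretizing the intersecting visibility region into occupancy-grid cells is described above, while combining the cells by a product is an added assumption, as are all names.

```python
# Illustrative sketch only: equation (1) per grid cell, aggregated into one score.
def prob_non_existence(p_z_given_x: float, p_z_given_nx: float) -> float:
    """Equation (1): P(NX) = 1 - P(Z/X) / (P(Z/X) + P(Z/NX)),
    with P(Z/X) the true positive and P(Z/NX) the false positive probability."""
    return 1.0 - p_z_given_x / (p_z_given_x + p_z_given_nx)

def likelihood_score(cells):
    """Combine the per-cell non-existence probabilities of the occupancy-grid cells
    covering one intersecting visibility region 215 (here: their product)."""
    score = 1.0
    for p_tp, p_fp in cells:          # (true positive, false positive) per cell
        score *= prob_non_existence(p_tp, p_fp)
    return score

# Example: three cells observed with a 90% true positive rate and
# false positive rates that grow with range.
print(likelihood_score([(0.90, 0.02), (0.90, 0.05), (0.90, 0.10)]))
```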
  • Figure 3a depicts a flowchart of an exemplary method of determining visibility region of different object types for an autonomous vehicle in accordance with an embodiment of the present disclosure.
  • the method 300 comprises one or more blocks implemented by the processor 150 for determining visibility region of different object types for autonomous vehicle.
  • the method 300 may be described in the general context of computer executable instructions.
  • computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
  • the order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300. Additionally, individual blocks may be deleted from the method 300 without departing from the scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
  • the sensor input 210 from the plurality of sensors 109 of the autonomous vehicle 103 is received.
  • the visibility region generation module 156 of VRDS 102 receives the sensor input 210 comprising visibility region dimensions and customized measurements from the plurality of sensors 109.
  • the plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records dimensions of the visibility region, i.e., the viewable area of the external environment of the autonomous vehicle 103.
  • the customized measurements of the sensor associated with one of the one or more sensor types are specific measurement parameters associated with corresponding sensor type.
  • the customized measurements for the sensor of RADAR sensor type comprise RADAR Cross Sections (RCS) of the identified object type, Doppler measurements including velocities and other related measurements based on capabilities of the sensor type.
  • the dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environment conditions, etc. and other dimensional parameters associated with the visibility region.
  • the plurality of sensors 109 sends the sensor input 210 as visibility region dimensions to the visibility region generation module 156.
  • one or more customized visibility region 212 for each sensor is generated.
  • the visibility region generation module 156 generates one or more customized visibility region 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from corresponding sensor.
  • each of the one or more customized visibility region 212 is the visibility region observed by each sensor of the plurality of sensors 109 with respect to an obstacle of a specific object type at a current time frame.
  • the at least one object type of the detected obstacle is determined based on the predefined object type characteristics 111 of different obstacle types and customized measurements of the sensor.
  • the visibility region generation module 156 generates for each sensor, one or more customized visibility region 212 i.e., object specific visibility region for at least one obstacle of at least one object type detected by the corresponding sensor using the visibility region dimensions received from the corresponding sensor.
  • the plurality of sensors 109 is configured to directly generate the customized visibility region 212 for obstacles of different object types by determining dimensions of the visibility region.
  • the one or more customized visibility region 212 for plurality of sensors of sensor type LIDAR is illustrated in Figure 3b.
  • Figure 3b indicates customized visibility region 212 of four LIDAR sensors for one object type.
  • the one or more customized visibility region for plurality of sensors of sensor type RADAR is illustrated in Figure 3c.
  • Figure 3c indicates customized visibility region 212 of eight RADAR sensors for the same object type.
  • unified visibility region for each sensor type is determined.
  • the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type.
  • the unified region determination module 158 receives, for each object type, the one or more customized visibility region 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, union of the one or more customized visibility region 212 of the plurality of sensors 109 of corresponding sensor type for corresponding object type and generates the unified visibility region 214.
  • Figure 3d illustrates the unified visibility region 214 for the sensor type LIDAR obtained by determining union of customized visibility region 212 of four LIDAR sensors shown in Figure 3b.
  • Figure 3e illustrates the unified visibility region 214 for the sensor type RADAR obtained by determining the union of the customized visibility region 212 of the eight RADAR sensors shown in Figure 3c.
  • the intersecting visibility region 215 for each object type is determined.
  • the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type based on the unified visibility region 214 generated for each sensor type.
  • the intersecting region determination module 159 determines intersection of the unified visibility region 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type.
  • the intersecting region determination module 159 determines the intersecting visibility region 215 as intersection of visibility regions of the plurality of sensors of same sensor type.
  • Figure 3f illustrates the intersecting visibility region 215 obtained for one object type using unified visibility region of LIDAR and RADAR sensor type depicted in Figure 3d and Figure 3e.
  • likelihood of non-existence of obstacles of certain object type is determined.
  • the reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of a certain object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103.
  • the reasoning module 160 identifies plurality of visibility regions in the intersecting visibility region 215 and estimates the probability of non-existence of one or more obstacles of corresponding object type in each of the plurality of visibility regions.
  • the reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of the corresponding object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, detection capabilities of the sensor type, range dependencies of the sensor and weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving. Thus, the system facilitates reasoning on non-existence of obstacles for the autonomous vehicle 103 by determining discrete visibility regions for different object types.
  • a computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored.
  • a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein.
  • the term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., they are non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure relate to a method and system for determining visibility regions of different object types for an autonomous vehicle. The system receives sensor inputs of dimensions of a visibility region from sensors of various sensor types associated with the autonomous vehicle. The system generates customized visibility regions for each sensor at various time frames, for obstacles of at least one object type, using the sensor input of corresponding sensor. Further, the system determines a unified visibility region of each sensor type, for the obstacle of each object type, using the customized visibility regions of the sensors of corresponding sensor type for corresponding object type. The system identifies an intersecting visibility region for each object type using the unified visibility region of various sensor types and determines a likelihood of non-existence of obstacles of specific object type in each intersecting visibility region.

Description

TITLE: “METHOD AND SYSTEM FOR DETERMINING VISIBILITY REGION OF DIFFERENT OBJECT TYPES FOR AN AUTONOMOUS VEHICLE”
[001] PREAMBLE TO THE DESCRIPTION:
[002] The following specification particularly describes the invention and the manner in which it is to be performed:
[003] DESCRIPTION OF THE INVENTION:
[004] Technical field
[005] The present subject matter is related, in general, to autonomous driving technology and, more particularly but not exclusively, to a system and method for determining a visibility region of different object types for an autonomous vehicle.
[006] Definitions
[007] Visibility region: a viewable area of a vehicle’s external environment captured/recorded by plurality of sensors on an autonomous vehicle.
[008] Obstacle: any object that is detected by a sensor in the path of autonomous vehicle.
[009] Customized visibility region: a visibility region observed by a specific sensor of autonomous vehicle with respect to a specific obstacle type.
[0010] Intersecting visibility region: a common visibility region observed by a plurality of sensors of different types with respect to a specific obstacle type.
[0011] General visibility region: the visibility region observed by a plurality of sensors of different types mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles where obstacle type differentiation does not occur.
[0012] Unified visibility region: the visibility region observed by a plurality of sensors of specific sensor type mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles where obstacle type differentiation occurs.
[0013] BACKGROUND OF THE DISCLOSURE
Autonomous vehicles rely on a series of sensors that help the vehicles understand the external environment in real time to avoid collisions, navigate autonomously, spot signs of danger and drive safely. Sensors not only help to determine the actual environment and present dangers; they also help the vehicle to provide appropriate responses that range from accelerating/decelerating to turning, emergency stopping and evasive maneuvers. These responses could be determined by detecting the obstacles using information provided by various sensors integrated within the autonomous vehicle. The visibility region of an autonomous vehicle can be understood as a complement to detection of existing objects or obstacles by the sensor. Object tracking, or detection techniques, generally estimate the existence of an object only if the object has been detected at least once. In general, it is not possible to reason about the absence of potential objects given the absence of measurements from the sensors; thus the objects that are excluded in the visibility region remain unknown. Current techniques combine all object types detected by sensors and determine only one visibility region for the autonomous vehicle. These techniques do not provide different visibility regions for different object types and thereby do not facilitate reasoning on excluded objects. The techniques providing a single generalized visibility region are either too conservative or too aggressive, rendering them less effective. Consequently, a need exists for a method and a system that determines a visibility region of different object types for an autonomous vehicle and that overcomes the existing limitations.
[0014] SUMMARY OF THE DISCLOSURE
[0015] One or more shortcomings of the prior art are overcome and additional advantages are provided through the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.
[0016] In one non-limiting embodiment of the present disclosure, a method of determining a visibility region of different object types for an autonomous vehicle has been disclosed. The method comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of a corresponding sensor. The method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensors. Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type. Using the one or more customized visibility region of the plurality of sensors of a corresponding sensor type for a corresponding object type, the method determines a unified visibility region of each sensor type, for the obstacle of each object type.
[0017] In another non-limiting embodiment of the present disclosure, a method of determining a visibility region of different object types for an autonomous vehicle has been disclosed. The method comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of a corresponding sensor. The method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensors. Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to an obstacle of a specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and predetermined object type characteristics. The method further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of a corresponding sensor type for a corresponding object type. Further, the method identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types. Subsequently, the method determines a likelihood of non-existence of one or more obstacles of specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of the sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
[0018] In yet another non-limiting embodiment of the disclosure, a system for determining the visibility region of different object types for an autonomous vehicle has been disclosed. The system comprises a processor communicatively coupled to the system and a memory, which is communicatively coupled to the processor. The memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of corresponding sensor. The processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type. The processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.
[0019] In still another non-limiting embodiment of the disclosure, a system for determining visibility region of different object types for an autonomous vehicle has been disclosed. The system comprises a processor communicatively coupled to the system and a memory communicatively coupled to the processor. The memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of corresponding sensor. The processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by at least one sensor of the plurality of sensors with respect to an obstacle of a specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and predetermined object type characteristics. The processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type. The processor further identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types and determines a likelihood of non-existence of one or more obstacles of a specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
[0020] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
[0021] BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:

[0023] Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for an autonomous vehicle in accordance with an embodiment of the present disclosure;
[0024] Figure 2 is an exemplary block diagram illustrating various components of a visibility region determination system of Figure 1 in accordance with an embodiment of the present disclosure;
[0025] Figure 3a depicts a flowchart of an exemplary method of determining visibility region of different object types in accordance with an embodiment of the present disclosure;
[0026] Figure 3b depicts an exemplary representation of customized visibility regions of the LIDAR sensor type in accordance with an embodiment of the present disclosure;
[0027] Figure 3c depicts an exemplary representation of customized visibility regions of the RADAR sensor type in accordance with an embodiment of the present disclosure;
[0028] Figure 3d depicts an exemplary representation of the unified visibility region of the LIDAR sensor type in accordance with an embodiment of the present disclosure;
[0029] Figure 3e depicts an exemplary representation of the unified visibility region of the RADAR sensor type in accordance with an embodiment of the present disclosure; and
[0030] Figure 3f depicts an exemplary representation of the intersecting visibility region of an object type for the LIDAR and RADAR sensor types in accordance with an embodiment of the present disclosure.
[0031] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[0032] DETAILED DESCRIPTION
[0033] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
[0034] While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.
[0035] The terms “comprises”, “comprising”, “include(s)”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.
[0036] Embodiments of the present disclosure relate to a method and a system for determining visibility region of different object types for an autonomous vehicle. In one embodiment, the system receives a sensor input from each of a plurality of sensors associated with the autonomous vehicle. The sensor input includes dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environmental conditions, etc., and customized measurements associated with the visibility region for each sensor. Upon receiving the sensor input, the system generates one or more customized visibility regions for each sensor at a plurality of time frames using the sensor input from the corresponding sensor. The customized visibility region is the visibility region observed by a specific sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type at a current time frame of the plurality of time frames. Further, the system determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility regions of the plurality of sensors of the corresponding sensor type for the corresponding object type. In one example, for the obstacle of one object type, the unified visibility region for the sensor type is obtained by determining the union of the one or more customized visibility regions of the plurality of sensors of the corresponding sensor type for the corresponding object type. The system further identifies an intersecting visibility region for each object type using the unified visibility regions of one or more sensor types. In each intersecting visibility region of the autonomous vehicle, the system determines a likelihood of non-existence of one or more obstacles of a certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame. The estimated likelihood may be fed to an autonomous driving system to make appropriate decisions while driving.
[0037] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
[0038] Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for an autonomous vehicle in accordance with an embodiment of the present disclosure. In an embodiment, the object type may be defined by the size of the object, the material of the object, the velocity of the object, the semantics of the object, etc. As shown in Figure 1, the exemplary system 100 comprises one or more components configured for determining the visibility region for the autonomous vehicle. The system 100 may be implemented using a single computer or a network of computers, including cloud-based computer implementations. In one embodiment, the exemplary system 100 comprises a visibility region determination system (hereinafter referred to as VRDS) 102, one or more sensors 109 associated with an autonomous vehicle 103, a data repository 104 and an autonomous driving system 106 connected via a communication network (alternatively referred to as network) 105.

[0039] The data repository 104 may be a cloud-implemented repository capable of storing sensor related information 110, including sensor type, capabilities of sensor types and so on. The data repository 104 also stores object type characteristics 111 of different possible obstacles on the road. In one embodiment, the object type characteristics 111 may be predefined and stored in the data repository 104. In one example, the object type characteristics 111 for obstacles of different types may be defined as at least one from a set including, but not limited to: (a) any obstacle larger than 10x10x10 centimeters, (b) any obstacle larger than 1 meter height, 30 centimeters width, 30 centimeters length, such as an upright pedestrian or bike, (c) any motorized obstacle larger than 1.20 meters height, 40 centimeters width, 1.5 meters length, such as a motorbike, (d) any obstacle moving faster than 2 meters per second, and (e) any motorized obstacle moving faster than 2 meters per second.
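By way of a non-limiting illustration only, object type characteristics such as (a)-(e) above lend themselves to a simple threshold representation. The following Python sketch is not part of the disclosure; all class, field and variable names are hypothetical assumptions introduced for readability.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ObjectTypeCharacteristics:
    """Hypothetical record for one predefined object type; field names are illustrative only."""
    name: str
    min_height_m: Optional[float] = None   # minimum height in meters, None if not constrained
    min_width_m: Optional[float] = None    # minimum width in meters
    min_length_m: Optional[float] = None   # minimum length in meters
    min_speed_mps: Optional[float] = None  # minimum speed in meters per second
    motorized_only: bool = False           # True if the type covers motorized obstacles only

# Illustrative encoding of examples (a)-(e) listed above.
OBJECT_TYPE_CHARACTERISTICS: List[ObjectTypeCharacteristics] = [
    ObjectTypeCharacteristics("obstacle_larger_than_10cm", 0.10, 0.10, 0.10),
    ObjectTypeCharacteristics("upright_pedestrian_or_bike", 1.00, 0.30, 0.30),
    ObjectTypeCharacteristics("motorbike_sized_motorized", 1.20, 0.40, 1.50, motorized_only=True),
    ObjectTypeCharacteristics("moving_obstacle", min_speed_mps=2.0),
    ObjectTypeCharacteristics("moving_motorized_obstacle", min_speed_mps=2.0, motorized_only=True),
]
```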
[0040] The autonomous vehicle 103 comprises a plurality of sensors 109-1, 109-2, .., 109-N (collectively referred to as sensors 109) capable of detecting or recording visibility region dimensions of the external environment of the autonomous vehicle 103. In one embodiment, the plurality of sensors 109 may be associated with one or more sensor types including the Radio Detection and Ranging (RADAR) sensor type, the Light Detection and Ranging (LIDAR) sensor type, ultrasonic sensors, camera sensors, speed sensors and so on. The plurality of sensors 109 is configured to identify a visibility region for the autonomous vehicle 103 and record or detect dimensional parameters of the visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environmental conditions, etc., and customized measurements based on the sensor type of the sensor. In one embodiment, the plurality of sensors 109 may also detect speed information, brake pressure details, and any obstructions like a pothole, bump, debris or an abnormal level of roughness on the road surface.
[0041] The autonomous driving system 106 is coupled with the VRDS 102 and is configured to make appropriate decisions while driving based on information provided by the VRDS 102. In one example, the autonomous driving system 106 may be integrated within the autonomous vehicle 103. In one embodiment, the autonomous driving system 106 is configured to act based on the information received from the VRDS 102 by accelerating, decelerating, turning, emergency stopping and so on. In the context of self-driving and collision avoidance, the functionality of the autonomous driving system 106 is based on the information provided by the VRDS 102.
[0042] The VRDS 102 is configured to determine the visibility region of different object types based on sensor input provided by the sensors 109 associated with various sensor types. In one example, the VRDS 102 may be configured as a standalone system. In another example, the VRDS 102 may be configured in a cloud environment. In yet another example, the VRDS 102 may include any Wireless Application Protocol (WAP) enabled device or any other computing device capable of interfacing directly or indirectly to a network connection. The VRDS 102 also includes a graphical user interface (GUI) provided therein for interacting with the data repository 104 and the autonomous driving system 106. The VRDS 102 comprises at least a processor 150 and a memory 152 coupled with the processor 150. The VRDS 102 further comprises a visibility region generation module 156, a unified region determination module 158, an intersecting region determination module 159 and a reasoning module 160. In one embodiment, the VRDS 102 may be a typical visibility region determination system as illustrated in Figure 2. The VRDS 102 comprises the processor 150, the memory 152, and an I/O interface 202. The I/O interface 202 is coupled with the processor 150 and an I/O device. The I/O device is configured to receive inputs via the I/O interface 202 from the sensors 109 and transmit outputs for display in the I/O device via the I/O interface 202.
[0043] The VRDS 102 further includes data 204 and modules 206. In one implementation, the data 204 may be stored within the memory 152. In one example, the data 204 may include sensor input 210, customized visibility region 212, unified visibility region 214, intersecting visibility region 215, likelihood score 216 and other data 218. The sensor input 210 indicates data recorded or identified by the sensors 109. The sensor input 210, in one example, comprises dimensions of a visibility region including, but not limited to, location coordinates, angular dimensions of the sensor with respect to the visibility region and other dimensional parameters associated with the autonomous vehicle 103. In another example, the sensor input may also include speed information related to the autonomous vehicle 103, brake pressure details, and any obstructions like a pothole, bump, debris or an abnormal level of roughness on the road surface. The customized visibility region 212 may be defined as the visibility region observed by a specific sensor of the plurality of sensors of the autonomous vehicle 103 with respect to a specific obstacle type at a current time frame. The unified visibility region 214 is defined as the total visibility region of one sensor type, for the obstacle of one object type. The unified visibility region 214, in one example, may be defined as the visibility region obtained by the union of the customized visibility regions of the plurality of sensors 109 of one sensor type for the obstacle of one object type. The intersecting visibility region 215 is defined as a common visibility region observed by a plurality of sensors 109 of various sensor types with respect to the obstacle of one object type. The likelihood score 216 may be defined as a probabilistic estimation of non-existence of obstacles of at least one object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in the intersecting visibility region 215 of the corresponding object type. In one embodiment, the data 204 may be stored in the memory 152 in the form of various data structures. Additionally, the aforementioned data can be organized using data models, such as relational or hierarchical data models. The other data 218 may store data, including temporary data, temporary files and data associated with the visibility region, and co-ordinate databases generated by the modules 206 for performing the various functions of the VRDS 102.
[0044] The modules 206 may include, for example, the visibility region generation module 156, the unified region determination module 158, the intersecting region determination module 159 and the reasoning module 160. The modules 206 may also comprise other modules 224 to perform various miscellaneous functionalities of the VRDS 102. It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules. The modules 206 may be implemented in the form of software, hardware and/or firmware. As used herein, the term module refers to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
[0045] In operation, the VRDS 102 is configured to receive the sensor input 210 from each of the plurality of sensors 109 and determine the visibility region for each sensor for different object types based on the sensor input 210. In one embodiment, the plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records the sensor input as dimensions of the visibility region, i.e., the viewable area of the external environment of the autonomous vehicle 103. The customized measurements of the sensor associated with one of the one or more sensor types comprise sensor-measurement parameters associated with the corresponding sensor type. For example, the customized measurements for a sensor of the RADAR sensor type comprise RADAR Cross Sections (RCS) of the identified object type, Doppler measurements including velocities and other related measurements based on the capabilities of the RADAR sensor type. The dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environmental conditions, etc., and other dimensional parameters associated with the visibility region. The plurality of sensors 109 sends the sensor input 210 to the visibility region generation module 156, and the visibility region generation module 156 generates one or more customized visibility regions 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from the corresponding sensor using one or more known techniques for visibility region construction. For example, each of the one or more customized visibility regions 212 is the visibility region with an obstacle of a specific object type observed by at least one sensor of the plurality of sensors 109 at a current time frame of the plurality of time frames. The object type associated with the obstacle observed by the at least one sensor is determined based on the predefined object type characteristics 111 of different object types and the customized measurements of the corresponding sensor at the current time frame. In one embodiment, the visibility region generation module 156 generates, for each sensor, one or more customized visibility regions 212 for at least one obstacle of at least one object type detected by the corresponding sensor using the visibility region dimensions received from the corresponding sensor at the plurality of time frames. In another embodiment, the plurality of sensors 109 is configured to directly generate the customized visibility region 212 for different object types by detecting obstacles associated with different object types and determining the dimensions of the visibility region with the detected obstacle of at least one object type.
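Purely as an illustrative sketch of the object type determination described above, and not the disclosed implementation, obstacle attributes derived from a sensor's customized measurements could be compared against the predefined object type characteristics 111 with threshold tests of the following kind. The measurement fields and the helper function are assumptions; the sketch reuses the hypothetical ObjectTypeCharacteristics record from the earlier example.

```python
from dataclasses import dataclass

@dataclass
class ObstacleMeasurement:
    """Hypothetical obstacle attributes derived from a sensor's customized measurements."""
    height_m: float
    width_m: float
    length_m: float
    speed_mps: float
    is_motorized: bool  # e.g. inferred from RCS and Doppler measurements for a RADAR sensor

def matching_object_types(measurement: ObstacleMeasurement, characteristics) -> list:
    """Return the names of all predefined object types whose characteristics the obstacle satisfies."""
    matches = []
    for c in characteristics:  # e.g. the OBJECT_TYPE_CHARACTERISTICS list sketched earlier
        if c.min_height_m is not None and measurement.height_m < c.min_height_m:
            continue
        if c.min_width_m is not None and measurement.width_m < c.min_width_m:
            continue
        if c.min_length_m is not None and measurement.length_m < c.min_length_m:
            continue
        if c.min_speed_mps is not None and measurement.speed_mps < c.min_speed_mps:
            continue
        if c.motorized_only and not measurement.is_motorized:
            continue
        matches.append(c.name)
    return matches
```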
[0046] Upon generating customized visibility region 212 for each sensor, the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type. In one embodiment, the unified region determination module 158 receives, for each object type, the one or more customized visibility region 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, union of the one or more customized visibility region 212 of the plurality of sensors 109 of corresponding sensor type for corresponding object type and generates the unified visibility region 214. Based on the unified visibility region 214 generated for each sensor type, the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type. In one embodiment, the intersecting region determination module 159 determines intersection of the unified visibility region 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type.
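As a minimal sketch of the union and intersection operations described above, the customized visibility regions may be represented as 2-D polygons; the use of the shapely library, the coordinates and the sensor counts below are illustrative assumptions and not part of the disclosure.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Customized visibility regions (2-D polygons, illustrative coordinates in meters,
# vehicle at the origin) of two LIDAR and two RADAR sensors for one object type.
lidar_regions = [
    Polygon([(0, -5), (40, -20), (40, 20), (0, 5)]),        # front LIDAR
    Polygon([(0, -5), (-30, -15), (-30, 15), (0, 5)]),      # rear LIDAR
]
radar_regions = [
    Polygon([(0, -10), (60, -30), (60, 30), (0, 10)]),      # front RADAR
    Polygon([(-5, -10), (-50, -25), (-50, 25), (-5, 10)]),  # rear RADAR
]

# Unified visibility region of a sensor type: union of that type's customized regions.
unified_lidar = unary_union(lidar_regions)
unified_radar = unary_union(radar_regions)

# Intersecting visibility region for the object type: intersection across sensor types.
intersecting_region = unified_lidar.intersection(unified_radar)
print(intersecting_region.area)
```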
[0047] The reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of a certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. In one embodiment, the reasoning module 160 identifies a plurality of visibility regions in the intersecting visibility region 215 using occupancy grid mapping and estimates the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. The probability of non-existence of one or more obstacles in each visibility region, in one example, is calculated using the true positive probability and the false positive probability, as given below in equation (1).
P(NX) = 1 - [P(Z/X) / (P(Z/X) + P(Z/NX))] ... (1)

where:
P(NX) is the probability of non-existence of obstacles (event NX) in the visibility region;
P(Z/X) is the true positive probability, i.e., the probability that a detection occurs (event Z) if the object exists (event X); and
P(Z/NX) is the false positive probability, i.e., the probability that a detection occurs (event Z) if the object does not exist (event NX).
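Equation (1) can be evaluated directly, as in the short sketch below; the function name and the numeric probabilities are illustrative assumptions, not values from the disclosure.

```python
def prob_non_existence(p_z_given_x: float, p_z_given_nx: float) -> float:
    """Evaluate equation (1): probability of non-existence P(NX) of an obstacle in a
    visibility region, from the true positive probability P(Z/X) and the false
    positive probability P(Z/NX)."""
    return 1.0 - p_z_given_x / (p_z_given_x + p_z_given_nx)

# Illustrative values: a sensor that detects an existing obstacle of this object type
# with probability 0.95 and produces a false detection with probability 0.05.
print(prob_non_existence(0.95, 0.05))  # approximately 0.05
```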
The reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of a certain object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, the detection capabilities of the sensor type, range dependencies of the sensor and weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving.

[0048] Figure 3a depicts a flowchart of an exemplary method of determining visibility region of different object types for an autonomous vehicle in accordance with an embodiment of the present disclosure.
[0049] As illustrated in Figure 3a, the method 300 comprises one or more blocks implemented by the processor 150 for determining visibility region of different object types for autonomous vehicle. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
[0050] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300. Additionally, individual blocks may be deleted from the method 300 without departing from the scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0051] At block 302, the sensor input 210 from the plurality of sensors 109 of the autonomous vehicle 103 is received. In one embodiment, the visibility region generation module 156 of the VRDS 102 receives the sensor input 210 comprising visibility region dimensions and customized measurements from the plurality of sensors 109. The plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records the dimensions of the visibility region, i.e., the viewable area of the external environment of the autonomous vehicle 103. The customized measurements of the sensor associated with one of the one or more sensor types are specific measurement parameters associated with the corresponding sensor type. For example, the customized measurements for a sensor of the RADAR sensor type comprise RADAR Cross Sections (RCS) of the identified object type, Doppler measurements including velocities and other related measurements based on the capabilities of the sensor type. The dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitations such as range, environmental conditions, etc., and other dimensional parameters associated with the visibility region. The plurality of sensors 109 sends the sensor input 210 as visibility region dimensions to the visibility region generation module 156.
[0052] At block 304, one or more customized visibility regions 212 for each sensor are generated. In one embodiment, the visibility region generation module 156 generates one or more customized visibility regions 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from the corresponding sensor. For example, each of the one or more customized visibility regions 212 is the visibility region observed by each sensor of the plurality of sensors 109 with respect to an obstacle of a specific object type at a current time frame. The at least one object type of the detected obstacle is determined based on the predefined object type characteristics 111 of different obstacle types and the customized measurements of the sensor. In an embodiment, the visibility region generation module 156 generates, for each sensor, one or more customized visibility regions 212, i.e., object specific visibility regions for at least one obstacle of at least one object type detected by the corresponding sensor, using the visibility region dimensions received from the corresponding sensor. In another embodiment, the plurality of sensors 109 is configured to directly generate the customized visibility region 212 for obstacles of different object types by determining the dimensions of the visibility region. In one example, the one or more customized visibility regions 212 for a plurality of sensors of the LIDAR sensor type are illustrated in Figure 3b. Figure 3b indicates the customized visibility regions 212 of four LIDAR sensors for one object type. In another example, the one or more customized visibility regions for a plurality of sensors of the RADAR sensor type are illustrated in Figure 3c. Figure 3c indicates the customized visibility regions 212 of eight RADAR sensors for the same object type.
[0053] At block 306, the unified visibility region for each sensor type is determined. In one embodiment, the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type. The unified region determination module 158 receives, for each object type, the one or more customized visibility regions 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, the union of the one or more customized visibility regions 212 of the plurality of sensors 109 of the corresponding sensor type for the corresponding object type and generates the unified visibility region 214. In one example, Figure 3d illustrates the unified visibility region 214 for the LIDAR sensor type obtained by determining the union of the customized visibility regions 212 of the four LIDAR sensors shown in Figure 3b. In another example, Figure 3e illustrates the unified visibility region 214 for the RADAR sensor type obtained by determining the union of the customized visibility regions 212 of the eight RADAR sensors shown in Figure 3c.
[0054] At block 308, the intersecting visibility region 215 for each object type is determined. In one embodiment, the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type based on the unified visibility region 214 generated for each sensor type. In one embodiment, the intersecting region determination module 159 determines the intersection of the unified visibility regions 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type. In another embodiment, the intersecting region determination module 159 determines the intersecting visibility region 215 as the intersection of visibility regions of the plurality of sensors of the same sensor type. In one example, Figure 3f illustrates the intersecting visibility region 215 obtained for one object type using the unified visibility regions of the LIDAR and RADAR sensor types depicted in Figure 3d and Figure 3e.
[0055] At block 310, the likelihood of non-existence of obstacles of a certain object type is determined. In one embodiment, the reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of a certain object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. In one embodiment, the reasoning module 160 identifies a plurality of visibility regions in the intersecting visibility region 215 and estimates the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. The reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of the corresponding object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, the detection capabilities of the sensor type, range dependencies of the sensor and weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving. Thus, the system facilitates reasoning on the non-existence of obstacles for the autonomous vehicle 103 by determining discrete visibility regions for different object types.
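As an illustrative sketch of how the per-cell probabilities over an occupancy grid might be combined into the likelihood score 216 described at block 310, the following assumes equation (1) is evaluated per grid cell of one intersecting visibility region and that the cells are combined by taking the minimum. The combination rule, the grid size and all numeric values are assumptions introduced for illustration; the disclosure does not prescribe them.

```python
import numpy as np

def region_likelihood_score(p_tp_cells: np.ndarray, p_fp_cells: np.ndarray) -> float:
    """Combine per-cell non-existence probabilities (equation (1)) of an occupancy grid
    covering one intersecting visibility region into a single score; the minimum over
    the cells is used here as an assumed, conservative combination rule."""
    p_nx_cells = 1.0 - p_tp_cells / (p_tp_cells + p_fp_cells)
    return float(p_nx_cells.min())

# Illustrative 2x3 grid of per-cell true positive and false positive probabilities.
p_tp = np.array([[0.95, 0.90, 0.80],
                 [0.97, 0.92, 0.85]])
p_fp = np.array([[0.05, 0.05, 0.10],
                 [0.02, 0.04, 0.08]])
score = region_likelihood_score(p_tp, p_fp)  # likelihood that no obstacle of this type is present
```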
[0056] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0057] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., it is non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage media.

Claims

[0058] We Claim:
1. A method of determining a visibility region of different object types for an autonomous vehicle, the method comprising: receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type; and determining a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.
2. The method as claimed in claim 1, further comprising: identifying an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determining a likelihood of non-existence of one or more obstacles of specific object type from a current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
3. The method as claimed in claim 1, wherein generating each of the one or more customized visibility region comprises step of: determining the object type associated with the obstacle using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics.
4. A method of determining a visibility region of different object types for an autonomous vehicle, the method comprising: receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics; determining a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type; identifying an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determining a likelihood of non-existence of one or more obstacles of specific object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
5. A system for determining a visibility region of different object types for an autonomous vehicle, the system comprising: a processor; a memory, communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to: receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generate one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region with an obstacle of at least one object type observed by the at least one sensor of the plurality of sensors; and determine a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.
6. The system as claimed in claim 5, wherein the processor is further configured to: identify an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determine a likelihood of non-existence of one or more obstacles of specific object type from a current time frame of the plurality of time frames to one of plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
7. The system as claimed in claim 5, wherein the processor is configured to generate each of the one or more customized visibility region by determining the object type associated with the obstacle using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics.
8. A system for determining a visibility region of different object types for an autonomous vehicle, the system comprising: a processor; a memory, communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to: receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generate one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region with an obstacle of at least one object type observed by the at least one sensor of the plurality of sensors, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics; determine a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type; identify an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determine a likelihood of non-existence of one or more obstacles of specific object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.
PCT/EP2021/055543 2020-03-05 2021-03-05 Method and system for determining visibility region of different object types for an autonomous vehicle WO2021176031A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2003188.6 2020-03-05
GB2003188.6A GB2592640B (en) 2020-03-05 2020-03-05 Method and system for determining visibility region of different object types for an autonomous vehicle

Publications (1)

Publication Number Publication Date
WO2021176031A1 true WO2021176031A1 (en) 2021-09-10

Family ID=70278402

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/055543 WO2021176031A1 (en) 2020-03-05 2021-03-05 Method and system for determining visibility region of different object types for an autonomous vehicle

Country Status (2)

Country Link
GB (1) GB2592640B (en)
WO (1) WO2021176031A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023205931A1 (en) * 2022-04-24 2023-11-02 Robert Bosch Gmbh Sensor data processing apparatus and method
CN116653820B (en) * 2023-08-02 2023-10-20 南京中旭电子科技有限公司 Hall sensor processing method and device suitable for fault diagnosis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726278B2 (en) * 2016-09-30 2020-07-28 Samsung Electronics Co., Ltd. Method, device and system for providing notification information
US10140855B1 (en) * 2018-08-24 2018-11-27 Iteris, Inc. Enhanced traffic detection by fusing multiple sensor data
WO2020112213A2 (en) * 2018-09-13 2020-06-04 Nvidia Corporation Deep neural network processing for sensor blindness detection in autonomous machine applications

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019089015A1 (en) * 2017-10-31 2019-05-09 Nissan North America, Inc. Autonomous vehicle operation with explicit occlusion reasoning
US20190384302A1 (en) * 2018-06-18 2019-12-19 Zoox, Inc. Occulsion aware planning and control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PHILIPP LINDNER ET AL: "Multi level fusion for an automotive pre-crash safety system", MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, 2008. MFI 2008. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 20 August 2008 (2008-08-20), pages 143 - 146, XP031346330, ISBN: 978-1-4244-2143-5 *

Also Published As

Publication number Publication date
GB2592640A (en) 2021-09-08
GB202003188D0 (en) 2020-04-22
GB2592640B (en) 2024-03-20

Similar Documents

Publication Publication Date Title
JP7440013B2 (en) Vehicle environment mapping method and corresponding systems, vehicles and computer programs
RU2694154C2 (en) Generation of simulated sensor data for training and validating detection models
US10255812B2 (en) Method and apparatus for preventing collision between objects
CN107015559B (en) Probabilistic inference of target tracking using hash weighted integration and summation
US20240005674A1 (en) Road edge recognition based on laser point cloud
US8558679B2 (en) Method of analyzing the surroundings of a vehicle
EP3818393B1 (en) Autonomous vehicle control using prior radar space map
CN110286389B (en) Grid management method for obstacle identification
US8233663B2 (en) Method for object formation
US20230386225A1 (en) Method for Determining a Drivable Area
RU2757038C2 (en) Method and system for predicting a future event in a self-driving car (sdc)
WO2021176031A1 (en) Method and system for determining visibility region of different object types for an autonomous vehicle
RU2744012C1 (en) Methods and systems for automated determination of objects presence
CN113358110B (en) Method and device for constructing robot obstacle map, robot and storage medium
GB2560618A (en) Object tracking by unsupervised learning
JP7147651B2 (en) Object recognition device and vehicle control system
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
Dey et al. Robust perception architecture design for automotive cyber-physical systems
US20220126865A1 (en) Layered architecture for availability of advanced driver assistance features
US20230322236A1 (en) Vehicle pose assessment
US20230129223A1 (en) Ads perception system perceived free-space verification
US11555913B2 (en) Object recognition device and object recognition method
US20210302991A1 (en) Method and system for generating an enhanced field of view for an autonomous ground vehicle
WO2021240623A1 (en) Predictive tracking device, predictive tracking method, and predictive tracking program
CN115236612A (en) Method and device for calibrating data of multi-millimeter wave radar

Legal Events

121: Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21710441; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep: pct application non-entry in european phase (Ref document number: 21710441; Country of ref document: EP; Kind code of ref document: A1)