US20230063845A1 - Systems and methods for monocular based object detection - Google Patents

Systems and methods for monocular based object detection Download PDF

Info

Publication number
US20230063845A1
Authority
US
United States
Prior art keywords
map
information
robot
computing device
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/054,585
Inventor
Sean Foley
James Hays
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Argo AI LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Argo AI LLC filed Critical Argo AI LLC
Priority to US18/054,585 priority Critical patent/US20230063845A1/en
Assigned to Argo AI, LLC reassignment Argo AI, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FOLEY, SEAN, HAYES, JAMES
Publication of US20230063845A1 publication Critical patent/US20230063845A1/en
Assigned to FORD GLOBAL TECHNOLOGIES, LLC reassignment FORD GLOBAL TECHNOLOGIES, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Argo AI, LLC
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/3407 Route searching; Route guidance specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C 21/34 Route searching; Route guidance
    • G01C 21/36 Input/output arrangements for on-board computers
    • G01C 21/3602 Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G01C 21/3807 Creation or updating of map data characterised by the type of data
    • G01C 21/3815 Road data
    • G01C 21/3822 Road feature data, e.g. slope data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 Creation or updating of map data
    • G01C 21/3859 Differential updating map data
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D 1/0251 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D 1/0274 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 2201/00 Application
    • G05D 2201/02 Control of position of land vehicles
    • G05D 2201/0213 Road vehicle, e.g. car or truck
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]

Definitions

  • the present disclosure relates generally to object detection systems. More particularly, the present disclosure relates to implementing systems and methods for monocular based object detection.
  • Modern day vehicles have at least one on-board computer and have internet/satellite connectivity.
  • the software running on these on-board computers monitors and/or controls operations of the vehicles.
  • the vehicle also comprises LiDAR detectors for detecting objects in proximity thereto.
  • the LiDAR detectors generate LiDAR datasets that measure the distance from the vehicle to an object at a plurality of different times. These distance measurements can be used for identifying objects, tracking movements of the object, making predictions as to the object's trajectory, and planning paths of travel for the vehicle based on the predicted object's trajectory.
  • LiDAR based object detection is costly and sensitive to weather conditions.
  • the present disclosure concerns implementing systems and methods for object detection.
  • the methods comprise performing the following operations by a computing device: obtaining an image that comprises a plurality of layers superimposed on each other; identifying a center point of a robot on a map; selecting a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot; obtaining map information associated with the selected portion of the map; generating at least one additional layer using the map information; superimposing the at least one additional layer onto the image to generate a modified image; and performing an object detection algorithm to detect at least one object in the modified image.
  • the implementing systems can comprise: a processor; and a non-transitory computer-readable storage medium comprising programming instructions that are configured to cause the processor to implement a method for operating an automated system.
  • the above-described methods can also be implemented by a computer program product comprising memory and programming instructions that are configured to cause a processor to perform operations.
  • FIG. 1 is an illustration of an illustrative system.
  • FIG. 2 is an illustration of an illustrative architecture for a vehicle.
  • FIG. 3 is an illustration of an illustrative computing device.
  • FIG. 4 provides a flow diagram of an illustrative method for object detection.
  • FIG. 5 provides an illustration of an illustrative road map.
  • FIG. 6 provides images that are useful for understanding the method shown in FIG. 4 .
  • FIG. 7 provides an illustration of an illustrative grid.
  • FIG. 8 provides an illustration of an illustrative polygon.
  • FIGS. 9 - 10 each provide an illustration of an illustrative modified image.
  • FIG. 11 provides a block diagram that is useful for understanding how a vehicle is controlled in accordance with the present solution.
  • An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement.
  • the memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.
  • the terms “memory,” “memory device,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
  • the terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
  • the term “vehicle” refers to any moving form of conveyance that is capable of carrying one or more human occupants and/or cargo and is powered by any form of energy.
  • the term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like.
  • An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator.
  • An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle.
  • the present solution is described herein in the context of an autonomous vehicle.
  • the present solution is not limited to autonomous vehicle applications.
  • the present solution can be used in other applications such as robotic applications.
  • the present solution provides an alternative approach to LiDAR based object detection which is costly and sensitive to weather conditions.
  • the present solution generally involves using an object detection algorithm that leverages information contained in a road map (e.g., a pre-defined high definition 3D surface road map) for monocular 3D object detection in, for example, AV applications.
  • Road maps are well known in the art.
  • Results of the object detection may be used for object trajectory prediction, vehicle trajectory generation, and/or collision avoidance.
  • the object detection algorithm can include a machine-learning algorithm that is trained to estimate an object's position, orientation and/or spatial extent based on learned combination(s) of road map features.
  • the road map features include, but are not limited to, a ground height feature, a ground depth feature, a drivable geographical area feature, a map point distance-to-lane center feature, a lane direction feature, and an intersection feature.
  • a monocular camera of the AV captures an image (e.g., a 2D image).
  • the image comprises 3 layers (or channels) of information superimposed on each other—a Red (R) layer, a Green (G) layer and a Blue (B) layer.
  • This image is also referred to as an RGB image.
  • Road map information is projected onto the image. This projection is achieved by: obtaining AV pose information (a location defined as 3D map coordinates, an angle and a pointing direction of a vehicle to which the monocular camera is attached); using the AV pose information and a predefined map grid portion size to identify a portion of a road map to be projected into the image; and generating a modified image by superimposing road map information associated with the identified portion of the road map onto the image.
  • the modified image is generated by adding additional layers (or channels) to the image.
  • the additional layers (or channels) include, but are not limited to, a ground height layer (or channel), a ground depth layer (or channel), a drivable geographical area layer (or channel), a map point distance-to-lane center layer (or channel), a lane direction layer (or channel), and/or an intersection layer (or channel). Pixels of the layers (or channels) are aligned with each other in 2D space.
  • the modified image is then used by the object detection algorithm to estimate a position, an orientation, a spatial extent, and/or a classification for at least one object detected in the modified image.
  • the object's position, orientation, spatial extent and/or classification is then used to control operations of the AV (e.g., for object trajectory prediction, vehicle trajectory planning and/or vehicle motion control).
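  • For illustration only, the end-to-end flow described above can be sketched as follows in Python; the road_map object, its crop_around and render methods, and the feature names are hypothetical placeholders for whatever HD-map query interface is available, not part of the disclosed system:

```python
import numpy as np

def build_model_input(rgb_image, road_map, av_pose, grid_size_m=200.0):
    """Sketch of the monocular pipeline: stack map-derived channels onto an RGB image.

    rgb_image: HxWx3 uint8 array from the monocular camera.
    road_map:  object exposing the assumed HD-map queries used below.
    av_pose:   (x, y, z, yaw) of the vehicle in map coordinates.
    """
    # 1. Select the map portion centered on the AV (square of side grid_size_m).
    map_patch = road_map.crop_around(av_pose[:2], grid_size_m)

    # 2. Render each map feature into an image-aligned channel (HxW float arrays).
    channels = [
        map_patch.render("ground_height", av_pose),
        map_patch.render("ground_depth", av_pose),
        map_patch.render("drivable_area", av_pose),
        map_patch.render("dist_to_lane_center", av_pose),
        map_patch.render("lane_direction", av_pose),
        map_patch.render("intersection", av_pose),
    ]

    # 3. Superimpose: concatenate RGB with the map channels along the channel axis.
    modified = np.dstack([rgb_image.astype(np.float32) / 255.0] + channels)
    return modified  # HxWx(3+6) array fed to the object detector
```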
  • System 100 comprises a vehicle 102 1 that is traveling along a road in a semi-autonomous or autonomous manner.
  • Vehicle 102 1 is also referred to herein as an AV.
  • the AV 102 1 can include, but is not limited to, a land vehicle (as shown in FIG. 1 ), an aircraft, or a watercraft.
  • AV 102 1 is generally configured to detect objects 102 2 , 114 , 116 in proximity thereto.
  • the objects can include, but are not limited to, a vehicle 102 2 , a cyclist 114 (such as a rider of a bicycle, electric scooter, motorcycle, or the like) and/or a pedestrian 116 .
  • the object detection is achieved in accordance with a novel monocular based object detection process.
  • the novel monocular based object detection process will be described in detail below.
  • the monocular based object detection process can be performed at the AV 102 1 , at the remote computing device 110 , or partially at both the AV 102 1 and the remote computing device 110 .
  • information related to object detection may be communicated between the AV and a remote computing device 110 via a network 108 (e.g., the Internet, a cellular network and/or a radio network).
  • the object detection related information may also be stored in a database 112 .
  • When such an object detection is made, AV 102 1 performs operations to: generate one or more possible object trajectories for the detected object; and analyze at least one of the generated possible object trajectories to determine whether or not there is an undesirable level of risk that a collision will occur between the AV and the object if the AV is to follow a given trajectory. If not, the AV 102 1 is caused to follow the given vehicle trajectory. If so, the AV 102 1 is caused to (i) follow another vehicle trajectory with a relatively low risk of collision with the object or (ii) perform a maneuver to reduce the risk of collision with the object or avoid collision with the object (e.g., brakes and/or changes direction of travel).
  • FIG. 2 there is provided an illustration of an illustrative system architecture 200 for a vehicle.
  • Vehicles 102 1 and/or 102 2 of FIG. 1 can have the same or similar system architecture as that shown in FIG. 2 .
  • system architecture 200 is sufficient for understanding vehicle(s) 102 1 , 102 2 of FIG. 1 .
  • the vehicle 200 includes an engine or motor 202 and various sensors 204 - 218 for measuring various parameters of the vehicle.
  • the sensors may include, for example, an engine temperature sensor 204 , a battery voltage sensor 206 , an engine Rotations Per Minute (“RPM”) sensor 208 , and a throttle position sensor 210 .
  • the vehicle may have an electric motor, and accordingly will have sensors such as a battery monitoring system 212 (to measure current, voltage and/or temperature of the battery), motor current 214 and voltage 216 sensors, and motor position sensors such as resolvers and encoders 218 .
  • Operational parameter sensors that are common to both types of vehicles include, for example: a position sensor 236 such as an accelerometer, gyroscope and/or inertial measurement unit; a speed sensor 238 ; and an odometer sensor 240 .
  • the vehicle also may have a clock 242 that the system uses to determine vehicle time during operation.
  • the clock 242 may be encoded into the vehicle on-board computing device, it may be a separate device, or multiple clocks may be available.
  • the vehicle also will include various sensors that operate to gather information about the environment in which the vehicle is traveling. These sensors may include, for example: a location sensor 260 (e.g., a Global Positioning System (GPS) device); and object detection sensors such as one or more cameras 262 .
  • the sensors also may include environmental sensors 268 such as a precipitation sensor and/or ambient temperature sensor.
  • the object detection sensors may enable the vehicle to detect objects that are within a given distance range of the vehicle 200 in any direction, while the environmental sensors collect data about environmental conditions within the vehicle's area of travel.
  • the on-board computing device 220 analyzes the data captured by the sensors and optionally controls operations of the vehicle based on results of the analysis. For example, the on-board computing device 220 may control: braking via a brake controller 232 ; direction via a steering controller 224 ; speed and acceleration via a throttle controller 226 (in a gas-powered vehicle) or a motor speed controller 228 (such as a current level controller in an electric vehicle); a differential gear controller 230 (in vehicles with transmissions); and/or other controllers.
  • Geographic location information may be communicated from the location sensor 260 to the on-board computing device 220 , which may then access a map of the environment that corresponds to the location information to determine known fixed features of the environment such as streets, buildings, stop signs and/or stop/go signals.
  • Captured images are communicated from the cameras 262 to the on-board computing device 220 .
  • the captured images are processed by the on-board computing device 220 to detect objects in proximity to the vehicle 200 in accordance with the novel monocular based object detection algorithm of the present solution.
  • the novel monocular based object detection algorithm will be described in detail below. It should be noted that the monocular based object detection algorithm uses an object detection algorithm that leverages information contained in a road map 270 for object detection.
  • the road map 270 can include, but is not limited to, any known or to be known 3D surface road map.
  • the road map 270 is stored in a local memory of the on-board computing device 220 .
  • the object detection algorithm can employ machine-learning.
  • Machine-learning is a type of Artificial Intelligence (AI) that provides computers with the ability to learn without being explicitly programmed through the automation of analytical model building based on data analysis.
  • the machine-learning based object detection algorithm is configured to: recognize shapes of objects from various angles, relationships and trends from data; establish baseline profiles for objects based on the recognized information; and make predictions/estimations about object types, positions, orientations and spatial extents for objects detected in inputted images.
  • the baseline profiles for objects may change over time.
  • the machine-learning based object detection algorithm can employ supervised machine learning, semi-supervised machine learning, unsupervised machine learning, and/or reinforcement machine learning. Each of these listed types of machine-learning is well known in the art.
  • the machine-learning based object detection algorithm includes, but is not limited to, a decision tree learning algorithm, an association rule learning algorithm, an artificial neural network learning algorithm, a deep learning algorithm, an inductive logic programming based algorithm, a support vector machine based algorithm, a clustering based algorithm, a Bayesian network based algorithm, a representation learning algorithm, a similarity and metric learning algorithm, a sparse dictionary learning algorithm, a genetic algorithm, a rule-based machine-learning algorithm, and/or a learning classifier systems based algorithm.
  • When the on-board computing device 220 detects a moving object, the on-board computing device 220 will generate one or more possible object trajectories for the detected object, and analyze the possible object trajectories to assess the risk of a collision between the object and the AV if the AV was to follow a given vehicle trajectory. If the risk does not exceed the acceptable threshold, then the on-board computing device 220 may cause the vehicle 200 to follow the given trajectory. If the risk exceeds an acceptable threshold, the on-board computing device 220 performs operations to: (i) determine an alternative vehicle trajectory and analyze whether the collision can be avoided if the AV follows this alternative vehicle trajectory; or (ii) cause the AV to perform a maneuver (e.g., brake, accelerate, or swerve).
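  • A minimal sketch of this risk-threshold logic is shown below; the trajectory generators and the collision_risk scoring function are hypothetical stand-ins, since the text does not prescribe how risk is computed:

```python
def plan_response(av, detected_object, planned_trajectory, risk_threshold=0.1):
    # Generate one or more possible trajectories for the detected object.
    object_trajectories = predict_object_trajectories(detected_object)  # hypothetical helper

    # Assess collision risk if the AV follows its currently planned trajectory.
    risk = max(collision_risk(planned_trajectory, t) for t in object_trajectories)
    if risk <= risk_threshold:
        return planned_trajectory                      # keep the given trajectory

    alternative = generate_alternative_trajectory(av)  # hypothetical helper
    if max(collision_risk(alternative, t) for t in object_trajectories) <= risk_threshold:
        return alternative                             # follow a lower-risk trajectory
    return emergency_maneuver(av)                      # e.g., brake and/or change direction
```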
  • FIG. 3 there is provided an illustration of an illustrative architecture for a computing device 300 .
  • the computing device 110 of FIG. 1 and/or the vehicle on-board computing device 220 of FIG. 2 is/are the same as or similar to computing device 300 .
  • the discussion of computing device 300 is sufficient for understanding the computing device 110 of FIG. 1 and the vehicle on-board computing device 220 of FIG. 2 .
  • Computing device 300 may include more or fewer components than those shown in FIG. 3 . However, the components shown are sufficient to disclose an illustrative solution implementing the present solution.
  • the hardware architecture of FIG. 3 represents one implementation of a representative computing device configured to operate a vehicle, as described herein. As such, the computing device 300 of FIG. 3 implements at least a portion of the method(s) described herein.
  • the hardware includes, but is not limited to, one or more electronic circuits.
  • the electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors).
  • the passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
  • the computing device 300 comprises a user interface 302 , a Central Processing Unit (CPU) 306 , a system bus 310 , a memory 312 connected to and accessible by other portions of computing device 300 through system bus 310 , a system interface 360 , and hardware entities 314 connected to system bus 310 .
  • the user interface can include input devices and output devices, which facilitate user-software interactions for controlling operations of the computing device 300 .
  • the input devices include, but are not limited to, a physical and/or touch keyboard 350 .
  • the input devices can be connected to the computing device 300 via a wired or wireless connection (e.g., a Bluetooth® connection).
  • the output devices include, but are not limited to, a speaker 352 , a display 354 , and/or light emitting diodes 356 .
  • System interface 360 is configured to facilitate wired or wireless communications to and from external devices (e.g., network nodes such as access points, etc.).
  • Hardware entities 314 perform actions involving access to and use of memory 312 , which can be a Random Access Memory (RAM), a disk drive, flash memory, a Compact Disc Read Only Memory (CD-ROM) and/or another hardware device that is capable of storing instructions and data.
  • Hardware entities 314 can include a disk drive unit 316 comprising a computer-readable storage medium 318 on which is stored one or more sets of instructions 320 (e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein.
  • the instructions 320 can also reside, completely or at least partially, within the memory 312 and/or within the CPU 306 during execution thereof by the computing device 300 .
  • the memory 312 and the CPU 306 also can constitute machine-readable media.
  • machine-readable media refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 320 .
  • machine-readable media also refers to any medium that is capable of storing, encoding or carrying a set of instructions 320 for execution by the computing device 300 and that cause the computing device 300 to perform any one or more of the methodologies of the present disclosure.
  • Method 400 begins with 402 and continues with 404 where one or more images are captured by a camera (e.g., camera 262 of FIG. 2 ).
  • the camera may comprise a monocular camera mounted on an AV (e.g., AV 102 1 of FIG. 1 ) that captures images.
  • Each image comprises 3 layers (or channels) of information superimposed on each other—a Red (R) layer, a Green (G) layer and a Blue (B) layer. This image is also referred to as an RGB image.
  • An illustrative image 600 is shown in FIG. 6 .
  • the computing device obtains pose information and a pre-defined map grid portion size (e.g., 200 meters by 200 meters) from a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3 ).
  • the pose information can include, but is not limited to, vehicle pose information.
  • the vehicle pose information comprises a location of the vehicle defined as 3D map coordinates, an angle of the vehicle relative to a reference point, and a pointing direction of the vehicle.
  • the vehicle pose information and the pre-defined map grid portion size are used in 408 to identify a portion of the road map that is to be projected into the image captured in 404 .
  • the road map can include, but is not limited to, a 2.5D grid of a surface defined by data points each having an x-coordinate and a y-coordinate.
  • the portion of the road map is identified by: identifying the center point of the AV on the road map; and selecting the portion of the road map that encompasses the AV, where the center of a shape (e.g., a rectangle) having the pre-defined map grid portion size is set to the center point of the AV.
  • An illustration of an illustrative road map 500 is provided in FIG. 5 .
  • the center of the AV is represented by dot 502 .
  • the portion 504 of the road map is identified in 408 because it comprises a segment of the map that is included in a rectangle (i) having the same center point as the AV and (ii) having dimensions (e.g., length and width) equal to those of the pre-defined map grid portion size.
  • the present solution is not limited to the particulars of FIG. 5 .
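  • A sketch of this map-portion selection is given below, assuming the road map is stored as a regular 2.5D grid with a known origin and cell resolution (these parameters are assumptions, not stated in the text):

```python
import numpy as np

def crop_map_around_av(ground_height_grid, map_origin_xy, cell_size_m,
                       av_center_xy, patch_size_m=200.0):
    """Return the square sub-grid whose center is set to the AV's center point on the map."""
    half_cells = int((patch_size_m / 2) / cell_size_m)

    # Convert the AV's map coordinates to grid (row, col) indices.
    col = int((av_center_xy[0] - map_origin_xy[0]) / cell_size_m)
    row = int((av_center_xy[1] - map_origin_xy[1]) / cell_size_m)

    # Clamp so the selected rectangle stays inside the stored map.
    r0, r1 = max(row - half_cells, 0), min(row + half_cells, ground_height_grid.shape[0])
    c0, c1 = max(col - half_cells, 0), min(col + half_cells, ground_height_grid.shape[1])
    return ground_height_grid[r0:r1, c0:c1]
```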
  • one or more types of road map information are selected by the computing device for use in generating a modified image.
  • the types of road map information include, but are not limited to, ground, ground depth, drivable geographical area, map point distance-to-lane center, lane direction, intersection, and/or any other map information relevant to a given application.
  • the type(s) of road map information is(are) selected based on machine-learned information. For example, the computing device machine learns that a combination of ground, ground depth and lane direction provides a most accurate solution in a first scenario, and machine learns that a different combination of ground and map point distance-to-lane center provides a most accurate solution in a second scenario.
  • the present solution is not limited to the particulars of the example. In other scenarios, the type of road map information is pre-defined.
  • road map information is obtained for a plurality of geometric points p that are contained in the portion of the road map (identified in 408 ) from a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3 ).
  • This road map information includes, but is not limited to, ground information and drivable geographical area information. Illustrations of illustrative road map information obtained in 412 are provided in FIG. 6 .
  • the ground information obtained from a road map is shown plotted in an xy-coordinate system within graph 602 . Each point of a ground surface is defined by a value having an x-coordinate and a y-coordinate.
  • the drivable geographical area information is available as a binary value (drivable or non-drivable) for each point in the grid.
  • the drivable geographical area information obtained from a road map is shown plotted in an xy-coordinate system within graph 606 .
  • Each point of a drivable geographical area within the road map is defined by a value having an x-coordinate and a y-coordinate.
  • ground information and drivable geographical area information is well known.
  • additional map information is computed by the computing device.
  • the additional map information includes, but is not limited to, ground depth information, map point distance-to-lane center information, lane direction information, and/or intersection information.
  • Ground depth information is derived from information contained in the road map and other information associated with a monocular camera.
  • the ground depth information is derived from information stored in a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3 ).
  • This information includes, but is not limited to, ground height information, a known camera height, and/or a known camera location.
  • Illustrative ground depth information is shown in graph 604 of FIG. 6 .
  • the ground depth information is computed using a Euclidean distance algorithm.
  • Each Euclidean distance value represents a distance between a known location of a camera in 3-dimensional space and a given ground point location (i.e., defined by an x-coordinate and y-coordinate from the grid, and a z-coordinate from the ground height information).
  • Each Euclidean distance value is a single number relating two locations in 3-dimensional space, and thus is plotted in an xy-coordinate system on graph 604 .
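  • A minimal sketch of this ground depth computation follows; the array layout is an assumption, but the distance itself is the Euclidean distance between the known camera location and each ground point, as described above:

```python
import numpy as np

def ground_depth(ground_x, ground_y, ground_z, camera_xyz):
    """Euclidean distance from the camera to every ground point.

    ground_x, ground_y, ground_z: 2D arrays giving each grid point's map coordinates
    (z comes from the ground height information); camera_xyz: known 3D camera location.
    """
    dx = ground_x - camera_xyz[0]
    dy = ground_y - camera_xyz[1]
    dz = ground_z - camera_xyz[2]
    return np.sqrt(dx * dx + dy * dy + dz * dz)
```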
  • a map point distance-to-lane center value is computed by the computing device for each geometric point location (i.e., defined by an x-coordinate and y-coordinate) in the map.
  • the nearest centerline to a map location is defined as whichever centerline of a plurality of centerlines contains the closest map point (in Euclidean distance in (x, y); vertical distance is ignored) to the map location.
  • C is a set of centerlines, where c ⁇ C consists of a set of geometric points p ⁇ c. Distance is defined as the 2-norm.
  • the nearest centerline ⁇ to a map location l is defined by mathematical equation (1).
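  • Equation (1) does not survive in this text. Given the stated definitions (C is a set of centerlines, each c ∈ C is a set of geometric points p, distance is the 2-norm, and vertical distance is ignored), a plausible reconstruction is:

$$\hat{c} \;=\; \underset{c \in C}{\arg\min}\; \min_{p \in c} \,\bigl\lVert p_{xy} - l_{xy} \bigr\rVert_2 \tag{1}$$

  • where the xy subscript indicates that only the planar (x, y) coordinates enter the distance.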
  • map point distance-to-lane center value information is shown in graph 608 of FIG. 6 .
  • the value of the nearest centerline ⁇ for each map location l is defined by an x-coordinate and a y-coordinate, and thus is plotted in an xy-coordinate system on graph 608 .
  • a lane direction value is determined for each geometric point location l in the map. For example, a nearest centerline ĉ is identified for a given geometric point location l. The direction of the geometric point location is then set to the lane direction defined for the nearest centerline.
  • the lane direction is defined as a 2-vector with an x-component and y-component. Illustrative lane direction information is shown in graph 610 of FIG. 6 . Each lane direction is plotted in an xy-coordinate system on graph 610 .
  • An intersection value is determined for each geometric point location l in the map. For example, a nearest centerline ⁇ is identified for a given geometric point location l. A determination is then made as to whether the nearest centerline is in an intersection contained in the map. This determination can be made based on a look-up table associated with the map or based on xyz-coordinates defining the nearest centerline and xyz-coordinates defining intersections within the map. If the xyz-coordinates of the nearest centerline fall within an area of an intersection, then a determination is made that the given geometric point location is in the intersection. As such, the intersection is assigned to the given geometric point location. Each intersection is defined by an x-component and y-component. Illustrative intersection information is shown in graph 612 of FIG. 6 . Each intersection is plotted in an xy-coordinate system on graph 612 .
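  • The per-point lane features described in the preceding paragraphs can be sketched as follows; the centerline, lane-direction and intersection data structures are simplified assumptions about what an HD map would provide:

```python
import numpy as np

def lane_features(point_xy, centerlines, lane_directions, in_intersection):
    """Nearest-centerline features for one map grid point.

    centerlines:     list of (N_i, 2) arrays of centerline points in (x, y).
    lane_directions: list of unit 2-vectors, one per centerline (assumed layout).
    in_intersection: list of booleans, one per centerline (assumed layout).
    """
    best_idx, best_dist = None, np.inf
    for i, c in enumerate(centerlines):
        # 2-norm in (x, y); vertical distance is ignored.
        d = np.min(np.linalg.norm(c - np.asarray(point_xy), axis=1))
        if d < best_dist:
            best_idx, best_dist = i, d

    return {
        "dist_to_lane_center": best_dist,                # map point distance-to-lane center value
        "lane_direction": lane_directions[best_idx],     # direction of the nearest centerline
        "intersection": float(in_intersection[best_idx]) # 1.0 if the nearest centerline is in an intersection
    }
```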
  • the map information of 412 - 414 is projected to a given coordinate frame to obtain additional layers (or channels). This projection is achieved by defining a grid on each graph (e.g., graphs 602 - 612 ).
  • An illustrative grid 700 is shown in FIG. 7 .
  • the grid 700 comprises a plurality of tiles (or cells) 702 .
  • Each point 704 where two lines of the grid 700 meet (or intersect) defines a location of a geometric point of the map.
  • Each tile (or cell) has four corners. Each corner is defined by a geometric point.
  • Each geometric point location has a ground height value associated therewith.
  • Each geometric point location has an identifier assigned thereto which is defined by a row number and a column number.
  • a first point has an identifier p 11 since it resides in a first row and a first column.
  • a second point has an identifier p 12 since it resides in the first row and the second column, etc.
  • a last point has an identifier pmj since it resides in the m th row and the j th column.
  • the ground height values are used to define each tile (or cell) p 11 , . . . , pmj as a 4-polygon in 3D space.
  • An illustrative 4-polygon 800 for a given tile (or cell) is shown in FIG. 8 .
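  • For illustration, the 4-polygons can be assembled from the ground height grid as sketched below; the array layout (grid line coordinates plus a height value per grid point) is an assumption consistent with the description above:

```python
import numpy as np

def tiles_from_height_grid(xs, ys, z):
    """Build an (N, 4, 3) array of 4-polygon corners from a ground height grid.

    xs, ys: 1D arrays of grid line coordinates; z: (len(ys), len(xs)) ground heights.
    """
    corners = []
    for r in range(len(ys) - 1):
        for c in range(len(xs) - 1):
            # Four corners of one tile, each with its ground height as the z-coordinate.
            corners.append([
                (xs[c],     ys[r],     z[r, c]),
                (xs[c + 1], ys[r],     z[r, c + 1]),
                (xs[c + 1], ys[r + 1], z[r + 1, c + 1]),
                (xs[c],     ys[r + 1], z[r + 1, c]),
            ])
    return np.asarray(corners, dtype=np.float64)
```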
  • the 4-polygons are then projected into a camera frame or view using a perspective transformation algorithm, i.e., each 4-polygon is converted from an xyz-coordinate system to a uv-coordinate system.
  • Perspective transformation algorithms are well known. Perspective transformation algorithms generally involve mapping the four points in plane A to four points in a plane B. This mapping involves transforming/converting x-coordinate values to u-coordinate values, and y-coordinate values to v-coordinate values.
  • a color for each projected 4-polygon is set to a color value of a select point of the polygon or is set to an average of the color values for the points of the polygon.
  • the polygons defined by the uv-coordinates are then drawn on the camera frame in a top-to-bottom manner relative to the v-axis.
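  • One way to realize this projection-and-drawing step is sketched below with a pinhole camera model; the intrinsics K and the map-to-camera extrinsics R, t are assumed calibration values, and cv2.fillPoly is used only as a convenient rasterizer rather than as the method prescribed by the text:

```python
import numpy as np
import cv2

def render_map_channel(tiles_xyz, tile_values, K, R, t, image_hw):
    """Rasterize one map feature into a camera-aligned channel.

    tiles_xyz:   (N, 4, 3) corners of each 4-polygon tile in map coordinates.
    tile_values: (N,) feature value per tile (e.g., ground height).
    K: 3x3 intrinsics; R, t: map-to-camera rotation and translation.
    """
    h, w = image_hw
    channel = np.zeros((h, w), dtype=np.float32)

    cam = tiles_xyz @ R.T + t                      # corners in the camera frame
    in_front = np.all(cam[..., 2] > 0.1, axis=1)   # keep tiles fully in front of the camera
    cam = cam[in_front]
    vals = np.asarray(tile_values)[in_front]

    uv_h = cam @ K.T                               # homogeneous pixel coordinates
    uv = uv_h[..., :2] / uv_h[..., 2:3]            # perspective divide -> (u, v)

    # Draw top-to-bottom along the v-axis so nearer (lower) tiles overwrite farther ones.
    order = np.argsort(uv[..., 1].mean(axis=1))
    for idx in order:
        poly = np.round(uv[idx]).astype(np.int32)
        cv2.fillPoly(channel, [poly], float(vals[idx]))
    return channel
```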
  • Layer 614 comprises ground height information plotted in a uv-coordinate system.
  • Layer (or channel) 616 comprises ground depth information plotted in a uv-coordinate system.
  • Layer (or channel) 618 comprises drivable geographical area information plotted in a uv-coordinate system.
  • Layer (or channel) 620 comprises map point distance-to-lane center information plotted in a uv-coordinate system.
  • Layer (or channel) 622 comprises lane direction information plotted in a uv-coordinate system.
  • Layer (or channel) 624 comprises intersection information plotted in a uv-coordinate system. The present solution is not limited to the particulars of FIG. 6 .
  • a modified image is generated by superimposing the additional layer(s) of road map information onto the image captured in 404 .
  • An illustration of an illustrative modified image 900 is provided in FIG. 9 .
  • Another illustrative modified image 1000 is provided in FIG. 10 . As shown in FIG. 10 , the modified image 1000 comprises an R layer (or channel) 1002 , a G layer (or channel) 1004 , a B layer (or channel) 1006 , a ground height layer (or channel) 1008 , a ground depth layer (or channel) 1010 , a drivable geographical area layer (or channel) 1012 , a map point distance-to-lane center layer (or channel) 1014 , a lane direction layer (or channel) 1016 , and an intersection layer (or channel) 1018 .
  • the present solution is not limited to the particulars of FIG. 10 .
  • the modified image can include more or fewer layers (or channels) of road map information than those shown in FIGS. 9 - 10 .
  • the modified image is input into an object detection algorithm of a computing device (e.g., computing device 110 of FIG. 1 , vehicle on-board computing device 220 of FIG. 2 , and/or computing device 300 of FIG. 3 ), as shown by 420 .
  • the object detection algorithm is generally configured to estimate a position, an orientation, a spatial extent, and/or a classification for at least one object detected in the modified image, as shown by 422 .
  • one of the following machine learning algorithms is employed in 422 for 3D object detection: a Deep MANTA algorithm, a 3D-RCNN algorithm, an RoI-10D algorithm, a Mono3D++ algorithm, MonoGRNet algorithm, or MonoGRNet V2 algorithm.
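  • The cited monocular 3D detectors ordinarily expect a 3-channel RGB input; one way to consume the 9-channel modified image is to widen the backbone's first convolution, as sketched here in PyTorch (this adaptation is an assumption, since the text does not prescribe a particular network design):

```python
import torch
import torch.nn as nn

class MapAugmentedStem(nn.Module):
    """First convolution widened from 3 to 9 input channels (RGB + 6 map layers)."""

    def __init__(self, out_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(9, out_channels, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, 9, H, W) modified image tensor
        return self.relu(self.bn1(self.conv1(x)))
```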
  • 424 is performed where method 400 ends or other processing is performed (e.g., return to 402 ).
  • the position, orientation, spatial extent and object classification generated during method 400 can be used by an AV for object trajectory prediction, vehicle trajectory generation, and/or collision avoidance.
  • a block diagram is provided in FIG. 11 that is useful for understanding how vehicle control is achieved in accordance with the object related information estimated based on the modified image. All or some of the operations performed in FIG. 11 can be performed by the on-board computing device of a vehicle (e.g., AV 102 1 of FIG. 1 ) and/or a remote computing device (e.g., computing device 110 of FIG. 1 ).
  • a location of the vehicle is detected. This detection can be made based on sensor data output from a location sensor (e.g., location sensor 260 of FIG. 2 ) of the vehicle. This sensor data can include, but is not limited to, GPS data. Information 1120 specifying the detected location of the vehicle is then passed to block 1106 .
  • an object is detected within proximity of the vehicle. This detection is made based on sensor data output from a camera (e.g., camera 262 of FIG. 2 ) of the vehicle. The manner in which the object detection is achieved was discussed above in relation to FIGS. 4 - 10 .
  • Information about the detected object 1122 is passed to block 1106 . This information includes, but is not limited to, a position of an object, an orientation of the object, a spatial extent of the object, an initial predicted trajectory of the object, a speed of the object, and/or a classification of the object.
  • the initial predicted object trajectory can include, but is not limited to, a linear path pointing in the heading direction of the object.
  • a vehicle trajectory is generated using the information from blocks 1102 and 1104 .
  • Techniques for determining a vehicle trajectory are well known in the art. Any known or to be known technique for determining a vehicle trajectory can be used herein without limitation. For example, in some scenarios, such a technique involves determining a trajectory for the AV that would pass the object when the object is in front of the AV, the object has a heading direction that is aligned with the direction in which the AV is moving, and the object has a length that is greater than a threshold value. The present solution is not limited to the particulars of this scenario.
  • the vehicle trajectory 1124 can be determined based on the location information 1120 , the object detection information 1122 , and/or a road map (e.g., road map 270 of FIG. 2 which is pre-stored in a datastore of the vehicle).
  • the vehicle trajectory 1124 may represent a smooth path that does not have abrupt changes that would otherwise cause passenger discomfort.
  • the vehicle trajectory is defined by a path of travel along a given lane of a road in which the object is not predicted to travel within a given amount of time.
  • the vehicle trajectory 1124 is then provided to block 1108 .
  • a steering angle and velocity command is generated based on the vehicle trajectory 1124 .
  • the steering angle and velocity command is provided to block 1110 for vehicle dynamics control.
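  • The text does not detail how block 1108 derives the steering angle and velocity command; one common simplification is a pure-pursuit style tracker, sketched below with assumed vehicle parameters (wheelbase and lookahead distance) purely for illustration:

```python
import numpy as np

def steering_and_velocity(vehicle_pose, trajectory, wheelbase_m=2.8, lookahead_m=6.0):
    """Pure-pursuit style command from a planned trajectory (one possible realization).

    vehicle_pose: (x, y, yaw); trajectory: list of (x, y, target_speed) waypoints.
    """
    x, y, yaw = vehicle_pose
    # Pick the first waypoint at least lookahead_m ahead of the vehicle.
    target = next((wp for wp in trajectory
                   if np.hypot(wp[0] - x, wp[1] - y) >= lookahead_m), trajectory[-1])

    # Angle between the vehicle heading and the line to the target point.
    alpha = np.arctan2(target[1] - y, target[0] - x) - yaw
    steering_angle = np.arctan2(2.0 * wheelbase_m * np.sin(alpha), lookahead_m)
    return steering_angle, target[2]   # target[2] is the waypoint's commanded speed
```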

Abstract

Disclosed herein are systems, methods, and computer program products for object detection. The methods comprise performing the following operations by a computing device: obtaining an image that comprises a plurality of layers superimposed on each other; identifying a center point of a robot on a map; selecting a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot; obtaining map information associated with the selected portion of the map; generating at least one additional layer using the map information; superimposing the at least one additional layer onto the image to generate a modified image; and performing an object detection algorithm to detect at least one object in the modified image.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation application of and claims priority to U.S. patent application Ser. No. 17/105,199, which was filed on Nov. 25, 2020. The entire content of this application is incorporated herein by reference.
  • BACKGROUND Statement of the Technical Field
  • The present disclosure relates generally to object detection systems. More particularly, the present disclosure relates to implementing systems and methods for monocular based object detection.
  • Description of the Related Art
  • Modern day vehicles have at least one on-board computer and have internet/satellite connectivity. The software running on these on-board computers monitors and/or controls operations of the vehicles. The vehicle also comprises LiDAR detectors for detecting objects in proximity thereto. The LiDAR detectors generate LiDAR datasets that measure the distance from the vehicle to an object at a plurality of different times. These distance measurements can be used for identifying objects, tracking movements of the object, making predictions as to the object's trajectory, and planning paths of travel for the vehicle based on the predicted object's trajectory. LiDAR based object detection is costly and sensitive to weather conditions.
  • SUMMARY
  • The present disclosure concerns implementing systems and methods for object detection. The methods comprise performing the following operations by a computing device: obtaining an image that comprises a plurality of layers superimposed on each other; identifying a center point of a robot on a map; selecting a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot; obtaining map information associated with the selected portion of the map; generating at least one additional layer using the map information; superimposing the at least one additional layer onto the image to generate a modified image; and performing an object detection algorithm to detect at least one object in the modified image.
  • The implementing systems can comprise: a processor; and a non-transitory computer-readable storage medium comprising programming instructions that are configured to cause the processor to implement a method for operating an automated system. The above-described methods can also be implemented by a computer program product comprising memory and programming instructions that are configured to cause a processor to perform operations.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present solution will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures.
  • FIG. 1 is an illustration of an illustrative system.
  • FIG. 2 is an illustration of an illustrative architecture for a vehicle.
  • FIG. 3 is an illustration of an illustrative computing device.
  • FIG. 4 provides a flow diagram of an illustrative method for object detection.
  • FIG. 5 provides an illustration of an illustrative road map.
  • FIG. 6 provides images that are useful for understanding the method shown in FIG. 4 .
  • FIG. 7 provides an illustration of an illustrative grid.
  • FIG. 8 provides an illustration of an illustrative polygon.
  • FIGS. 9-10 each provide an illustration of an illustrative modified image.
  • FIG. 11 provides a block diagram that is useful for understanding how a vehicle is controlled in accordance with the present solution.
  • DETAILED DESCRIPTION
  • As used in this document, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to.” Definitions for additional terms that are relevant to this document are included at the end of this Detailed Description.
  • An “electronic device” or a “computing device” refers to a device that includes a processor and memory. Each device may have its own processor and/or memory, or the processor and/or memory may be shared with other devices as in a virtual machine or container arrangement. The memory will contain or receive programming instructions that, when executed by the processor, cause the electronic device to perform one or more operations according to the programming instructions.
  • The terms “memory,” “memory device,” “data store,” “data storage facility” and the like each refer to a non-transitory device on which computer-readable data, programming instructions or both are stored. Except where specifically stated otherwise, the terms “memory,” “memory device,” “data store,” “data storage facility” and the like are intended to include single device embodiments, embodiments in which multiple memory devices together or collectively store a set of data or instructions, as well as individual sectors within such devices.
  • The terms “processor” and “processing device” refer to a hardware component of an electronic device that is configured to execute programming instructions. Except where specifically stated otherwise, the singular term “processor” or “processing device” is intended to include both single-processing device embodiments and embodiments in which multiple processing devices together or collectively perform a process.
  • The term “vehicle” refers to any moving form of conveyance that is capable of carrying one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. An “autonomous vehicle” is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle.
  • In this document, when terms such as “first” and “second” are used to modify a noun, such use is simply intended to distinguish one item from another, and is not intended to require a sequential order unless specifically stated. In addition, terms of relative position such as “vertical” and “horizontal”, or “front” and “rear”, when used, are intended to be relative to each other and need not be absolute, and only refer to one possible position of the device associated with those terms depending on the device's orientation.
  • The present solution is described herein in the context of an autonomous vehicle. The present solution is not limited to autonomous vehicle applications. The present solution can be used in other applications such as robotic applications.
  • The present solution provides an alternative approach to LiDAR based object detection which is costly and sensitive to weather conditions. The present solution generally involves using an object detection algorithm that leverages information contained in a road map (e.g., a pre-defined high definition 3D surface road map) for monocular 3D object detection in, for example, AV applications. Road maps are well known in the art. Results of the object detection may be used for object trajectory prediction, vehicle trajectory generation, and/or collision avoidance. The object detection algorithm can include a machine-learning algorithm that is trained to estimate an object's position, orientation and/or spatial extent based on learned combination(s) of road map features. The road map features include, but are not limited to, a ground height feature, a ground depth feature, a drivable geographical area feature, a map point distance-to-lane center feature, a lane direction feature, and an intersection feature.
  • During operation, a monocular camera of the AV captures an image (e.g., a 2D image). The image comprises 3 layers (or channels) of information superimposed on each other—a Red (R) layer, a Green (G) layer and a Blue (B) layer. This image is also referred to as an RGB image. Road map information is projected onto the image. This projection is achieved by: obtaining AV pose information (a location defined as 3D map coordinates, an angle and a pointing direction of a vehicle to which the monocular camera is attached); using the AV pose information and a predefined map grid portion size to identify a portion of a road map to be projected into the image; and generating a modified image by superimposing road map information associated with the identified portion of the road map onto the image. The modified image is generated by adding additional layers (or channels) to the image. The additional layers (or channels) include, but are not limited to, a ground height layer (or channel), a ground depth layer (or channel), a drivable geographical area layer (or channel), a map point distance-to-lane center layer (or channel), a lane direction layer (or channel), and/or an intersection layer (or channel). Pixels of the layers (or channels) are aligned with each other in 2D space. The modified image is then used by the object detection algorithm to estimate a position, an orientation, a spatial extent, and/or a classification for at least one object detected in the modified image. The object's position, orientation, spatial extent and/or classification is then used to control operations of the AV (e.g., for object trajectory prediction, vehicle trajectory planning and/or vehicle motion control). Illustrative implementing systems of the present solution will now be described.
  • Illustrative Implementing Systems
  • Referring now to FIG. 1 , there is provided an illustration of an illustrative system 100. System 100 comprises a vehicle 102 1 that is traveling along a road in a semi-autonomous or autonomous manner. Vehicle 102 1 is also referred to herein as an AV. The AV 102 1 can include, but is not limited to, a land vehicle (as shown in FIG. 1 ), an aircraft, or a watercraft.
  • AV 102 1 is generally configured to detect objects 102 2, 114, 116 in proximity thereto. The objects can include, but are not limited to, a vehicle 102 2, a cyclist 114 (such as a rider of a bicycle, electric scooter, motorcycle, or the like) and/or a pedestrian 116. The object detection is achieved in accordance with a novel monocular based object detection process. The novel monocular based object detection process will be described in detail below. The monocular based object detection process can be performed at the AV 102 1, at the remote computing device 110, or partially at both the AV 102 1 and the remote computing device 110. Accordingly, information related to object detection may be communicated between the AV and a remote computing device 110 via a network 108 (e.g., the Internet, a cellular network and/or a radio network). The object detection related information may also be stored in a database 112.
  • When such an object detection is made, AV 102 1 performs operations to: generate one or more possible object trajectories for the detected object; and analyze at least one of the generated possible object trajectories to determine whether or not there is an undesirable level of risk that a collision will occur between the AV and object if the AV is to follow a given trajectory. If not, the AV 102 1 is caused to follow the given vehicle trajectory. If so, the AV 102 1 is caused to (i) follow another vehicle trajectory with a relatively low risk of collision with the object or (ii) perform a maneuver to reduce the risk of collision with the object or avoid collision with the object (e.g., brakes and/or changes direction of travel).
  • Referring now to FIG. 2 , there is provided an illustration of an illustrative system architecture 200 for a vehicle. Vehicles 102 1 and/or 102 2 of FIG. 1 can have the same or similar system architecture as that shown in FIG. 2 . Thus, the following discussion of system architecture 200 is sufficient for understanding vehicle(s) 102 1, 102 2 of FIG. 1 .
  • As shown in FIG. 2 , the vehicle 200 includes an engine or motor 202 and various sensors 204-218 for measuring various parameters of the vehicle. In gas-powered or hybrid vehicles having a fuel-powered engine, the sensors may include, for example, an engine temperature sensor 204, a battery voltage sensor 206, an engine Rotations Per Minute (“RPM”) sensor 208, and a throttle position sensor 210. If the vehicle is an electric or hybrid vehicle, then the vehicle may have an electric motor, and accordingly will have sensors such as a battery monitoring system 212 (to measure current, voltage and/or temperature of the battery), motor current 214 and voltage 216 sensors, and motor position sensors such as resolvers and encoders 218.
  • Operational parameter sensors that are common to both types of vehicles include, for example: a position sensor 236 such as an accelerometer, gyroscope and/or inertial measurement unit; a speed sensor 238; and an odometer sensor 240. The vehicle also may have a clock 242 that the system uses to determine vehicle time during operation. The clock 242 may be encoded into the vehicle on-board computing device, it may be a separate device, or multiple clocks may be available.
  • The vehicle also will include various sensors that operate to gather information about the environment in which the vehicle is traveling. These sensors may include, for example: a location sensor 260 (e.g., a Global Positioning System (GPS) device); and object detection sensors such as one or more cameras 262. The sensors also may include environmental sensors 268 such as a precipitation sensor and/or ambient temperature sensor. The object detection sensors may enable the vehicle to detect objects that are within a given distance range of the vehicle 200 in any direction, while the environmental sensors collect data about environmental conditions within the vehicle's area of travel.
  • During operations, information is communicated from the sensors to an on-board computing device 220. The on-board computing device 220 analyzes the data captured by the sensors and optionally controls operations of the vehicle based on results of the analysis. For example, the on-board computing device 220 may control: braking via a brake controller 232; direction via a steering controller 224; speed and acceleration via a throttle controller 226 (in a gas-powered vehicle) or a motor speed controller 228 (such as a current level controller in an electric vehicle); a differential gear controller 230 (in vehicles with transmissions); and/or other controllers.
  • Geographic location information may be communicated from the location sensor 260 to the on-board computing device 220, which may then access a map of the environment that corresponds to the location information to determine known fixed features of the environment such as streets, buildings, stop signs and/or stop/go signals.
  • Captured images are communicated from the cameras 262 to the on-board computing device 220. The captured images are processed by the on-board computing device 220 to detect objects in proximity to the vehicle 200 in accordance with the novel monocular based object detection algorithm of the present solution. The novel monocular based object detection algorithm will be described in detail below. It should be noted that the monocular based object detection algorithm uses an object detection algorithm that leverages information contained in a road map 270 for object detection. The road map 270 can include, but is not limited to, any known or to be known 3D surface road map. The road map 270 is stored in a local memory of the on-board computing device 220.
  • The object detection algorithm can employ machine learning. Machine learning is a type of Artificial Intelligence (AI) that provides computers with the ability to learn without being explicitly programmed, by automating the building of analytical models based on data analysis. In some scenarios, the machine-learning based object detection algorithm is configured to: recognize shapes of objects from various angles, relationships and trends from data; establish baseline profiles for objects based on the recognized information; and make predictions/estimations about object types, positions, orientations and spatial extents for objects detected in inputted images. The baseline profiles for objects may change over time. The machine-learning based object detection algorithm can employ supervised machine learning, semi-supervised machine learning, unsupervised machine learning, and/or reinforcement machine learning. Each of these listed types of machine learning is well known in the art.
  • In some scenarios, the machine-learning based object detection algorithm includes, but is not limited to, a decision tree learning algorithm, an association rule learning algorithm, an artificial neural network learning algorithm, a deep learning algorithm, an inductive logic programming based algorithm, a support vector machine based algorithm, a clustering based algorithm, a Bayesian network based algorithm, a representation learning algorithm, a similarity and metric learning algorithm, a sparse dictionary learning algorithm, a genetic algorithm, a rule-based machine-learning algorithm, and/or a learning classifier systems based algorithm. The machine-learning process implemented by the present solution can be built using Commercial-Off-The-Shelf (COTS) tools (e.g., SAS available from SAS Institute Inc. of Cary, N.C.).
  • When the on-board computing device 220 detects a moving object, the on-board computing device 220 will generate one or more possible object trajectories for the detected object, and analyze the possible object trajectories to assess the risk of a collision between the object and the AV if the AV was to follow a given vehicle trajectory. If the risk does not exceed an acceptable threshold, then the on-board computing device 220 may cause the vehicle 200 to follow the given trajectory. If the risk exceeds the acceptable threshold, the on-board computing device 220 performs operations to: (i) determine an alternative vehicle trajectory and analyze whether the collision can be avoided if the AV follows this alternative vehicle trajectory; or (ii) cause the AV to perform a maneuver (e.g., brake, accelerate, or swerve).
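  • The threshold comparison described above can be sketched as follows. This is a minimal illustration rather than the disclosed implementation; the risk values, the 0.2 threshold, the candidate-list representation and the function name are assumptions introduced here for clarity.

```python
from typing import Optional, Sequence, Tuple

ACCEPTABLE_RISK = 0.2  # assumed normalized threshold; not specified in this disclosure


def select_trajectory(candidates: Sequence[Tuple[str, float]]) -> Optional[str]:
    """Return the first candidate trajectory whose collision risk is acceptable.

    `candidates` holds (trajectory_id, estimated_risk) pairs, with the currently
    planned trajectory first. Returning None signals that no candidate is
    acceptable and a defensive maneuver (e.g., braking) should be performed.
    """
    for trajectory_id, risk in candidates:
        if risk <= ACCEPTABLE_RISK:
            return trajectory_id
    return None


# Example: the planned trajectory is too risky, so the alternative is selected.
print(select_trajectory([("planned", 0.6), ("alternative", 0.1)]))  # -> "alternative"
```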
  • Referring now to FIG. 3 , there is provided an illustration of an illustrative architecture for a computing device 300. The computing device 110 of FIG. 1 and/or the vehicle on-board computing device 220 of FIG. 2 is/are the same as or similar to computing device 300. As such, the discussion of computing device 300 is sufficient for understanding the computing device 110 of FIG. 1 and the vehicle on-board computing device 220 of FIG. 2 .
  • Computing device 300 may include more or fewer components than those shown in FIG. 3 . However, the components shown are sufficient to disclose an illustrative solution implementing the present solution. The hardware architecture of FIG. 3 represents one implementation of a representative computing device configured to operate a vehicle, as described herein. As such, the computing device 300 of FIG. 3 implements at least a portion of the method(s) described herein.
  • Some or all components of the computing device 300 can be implemented as hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors). The passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
  • As shown in FIG. 3 , the computing device 300 comprises a user interface 302, a Central Processing Unit (CPU) 306, a system bus 310, a memory 312 connected to and accessible by other portions of computing device 300 through system bus 310, a system interface 360, and hardware entities 314 connected to system bus 310. The user interface can include input devices and output devices, which facilitate user-software interactions for controlling operations of the computing device 300. The input devices include, but are not limited to, a physical and/or touch keyboard 350. The input devices can be connected to the computing device 300 via a wired or wireless connection (e.g., a Bluetooth® connection). The output devices include, but are not limited to, a speaker 352, a display 354, and/or light emitting diodes 356. System interface 360 is configured to facilitate wired or wireless communications to and from external devices (e.g., network nodes such as access points, etc.).
  • At least some of the hardware entities 314 perform actions involving access to and use of memory 312, which can be a Random Access Memory (RAM), a disk drive, flash memory, a Compact Disc Read Only Memory (CD-ROM) and/or another hardware device that is capable of storing instructions and data. Hardware entities 314 can include a disk drive unit 316 comprising a computer-readable storage medium 318 on which is stored one or more sets of instructions 320 (e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions 320 can also reside, completely or at least partially, within the memory 312 and/or within the CPU 306 during execution thereof by the computing device 300. The memory 312 and the CPU 306 also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 320. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions 320 for execution by the computing device 300 and that cause the computing device 300 to perform any one or more of the methodologies of the present disclosure.
  • Referring now to FIG. 4 , there is provided a flow diagram of an illustrative method 400 for object detection. Method 400 begins with 402 and continues with 404 where one or more images are captured by a camera (e.g., camera 262 of FIG. 2 ). The camera may comprise a monocular camera mounted on an AV (e.g., AV 102 1 of FIG. 1 ) that captures images. Each image comprises 3 layers (or channels) of information superimposed on each other—a Red (R) layer, a Green (G) layer and a Blue (B) layer. This image is also referred to as an RGB image. An illustrative image 600 is shown in FIG. 6 .
  • In 406, the computing device obtains pose information and a pre-defined map grid portion size (e.g., ≤200 meters by ≤200 meters) from a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3 ). The pose information can include, but is not limited to, vehicle pose information. The vehicle pose information comprises a location of the vehicle defined as 3D map coordinates, an angle of the vehicle relative to a reference point, and a pointing direction of the vehicle.
  • The vehicle pose information and the pre-defined map grid portion size are used in 408 to identify a portion of the road map that is to be projected into the image captured in 404. The road map can include, but is not limited to, a 2.5D grid of a surface defined by data points each having an x-coordinate and a y-coordinate. The portion of the road map is identified by: identifying the center point of the AV on the road map; and selecting the portion of the road map that encompasses the AV, where the center of a shape (e.g., a rectangle) having the pre-defined map grid portion size is set to the center point of the AV. An illustration of an illustrative road map 500 is provided in FIG. 5 . The center of the AV is represented by dot 502. The portion 504 of the road map is identified in 408 because it comprises a segment of the map that is included in a rectangle (i) having the same center point as the AV and (ii) having dimensions (e.g., length and width) equal to those of the pre-defined map grid portion size. The present solution is not limited to the particulars of FIG. 5 .
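  • As an illustration of 408, the sketch below crops a square window of a 2.5D ground-height grid around the AV center point. The grid resolution, coordinate conventions and function name are assumptions made for this example, not details taken from the disclosure.

```python
import numpy as np


def crop_map_around_av(ground_height: np.ndarray, resolution_m: float,
                       av_xy_m: tuple, map_origin_xy_m: tuple,
                       portion_size_m: float = 200.0) -> np.ndarray:
    """Select the map portion (at most portion_size_m per side) centered on the AV.

    ground_height is indexed as [row, col] ~ (y, x); av_xy_m and map_origin_xy_m
    are expressed in the same map frame.
    """
    half_cells = int(portion_size_m / (2.0 * resolution_m))
    col = int((av_xy_m[0] - map_origin_xy_m[0]) / resolution_m)
    row = int((av_xy_m[1] - map_origin_xy_m[1]) / resolution_m)
    r0, r1 = max(row - half_cells, 0), min(row + half_cells, ground_height.shape[0])
    c0, c1 = max(col - half_cells, 0), min(col + half_cells, ground_height.shape[1])
    return ground_height[r0:r1, c0:c1]


# Example: a 1000 x 1000 grid at 0.5 m per cell, cropped 200 m x 200 m around the AV.
grid = np.zeros((1000, 1000), dtype=np.float32)
portion = crop_map_around_av(grid, 0.5, av_xy_m=(120.0, 80.0), map_origin_xy_m=(0.0, 0.0))
print(portion.shape)  # up to (400, 400); smaller near the map boundary
```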
  • In optional 410, one or more types of road map information are selected by the computing device for use in generating a modified image. The types of road map information include, but are not limited to, ground, ground depth, drivable geographical area, map point distance-to-lane center, lane direction, intersection, and/or any other map information relevant to a given application. The type(s) of road map information is(are) selected based on machine-learned information. For example, the computing device machine learns that a combination of ground, ground depth and lane direction provides a most accurate solution in a first scenario, and machine learns that a different combination of ground and map point distance-to-lane center provides a most accurate solution in a second scenario. The present solution is not limited to the particulars of the example. In other scenarios, the type of road map information is pre-defined.
  • In 412, road map information is obtained for a plurality of geometric points p that are contained in the portion of the road map (identified in 408) from a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3 ). This road map information includes, but is not limited to, ground information and drivable geographical area information. Illustrations of illustrative road map information obtained in 412 are provided in FIG. 6 . The ground information obtained from a road map is shown plotted in an xy-coordinate system within graph 602. Each point of a ground surface is defined by a value having an x-coordinate and a y-coordinate. The drivable geographical area information is available as a binary value (drivable or non-drivable) for each point in the grid. The drivable geographical area information obtained from a road map is shown plotted in an xy-coordinate system within graph 606. Each point of a drivable geographical area within the road map is defined by a value having an x-coordinate and a y-coordinate. Such ground information and drivable geographical area information are well known.
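  • One plausible in-memory representation of the information obtained in 412 is sketched below; the container name, array dtypes and grid resolution are assumptions for illustration only.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MapPortion:
    """Raster layers pulled from the selected road map portion (hypothetical layout).

    All arrays share the same [row, col] grid over the selected portion.
    """
    ground_height: np.ndarray   # float height (meters) at each grid point
    drivable_area: np.ndarray   # bool: True where the map marks the point drivable
    resolution_m: float         # grid spacing in meters


portion = MapPortion(
    ground_height=np.zeros((400, 400), dtype=np.float32),
    drivable_area=np.ones((400, 400), dtype=bool),
    resolution_m=0.5,
)
```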
  • In 414, additional map information is computed by the computing device. The additional map information includes, but is not limited to, ground depth information, map point distance-to-lane center information, lane direction information, and/or intersection information. Ground depth information is derived from information contained in the road map and other information associated with a monocular camera. Thus, the ground depth information is derived from information stored in a datastore (e.g., datastore 112 of FIG. 1 and/or memory 312 of FIG. 3). This information includes, but is not limited to, ground height information, a known camera height, and/or a known camera location. Illustrative ground depth information is shown in graph 604 of FIG. 6 . The ground depth information is computed using a Euclidean distance algorithm. Each Euclidean distance value represents a distance between a known location of a camera in 3-dimensional space and a given ground point location (i.e., defined by an x-coordinate and y-coordinate from the grid, and a z-coordinate from the ground height information). Each Euclidean distance value is defined by a single number defining a relation between two locations in 3-dimensional space, and thus is plotted in an xy-coordinate system on graph 604.
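  • A minimal sketch of the ground-depth computation described above, assuming the grid coordinates and the camera location are expressed in the same map frame; the 1.6 m camera height used in the example is an arbitrary placeholder.

```python
import numpy as np


def ground_depth(ground_height: np.ndarray, resolution_m: float,
                 camera_xyz_m: np.ndarray) -> np.ndarray:
    """Euclidean distance from the camera to every ground point of the map portion.

    The x and y of each ground point come from its grid indices; the z comes from
    the ground height layer, as described above.
    """
    rows, cols = np.indices(ground_height.shape)
    points = np.stack([cols * resolution_m,        # x
                       rows * resolution_m,        # y
                       ground_height], axis=-1)    # z -> shape (H, W, 3)
    return np.linalg.norm(points - camera_xyz_m, axis=-1)  # one scalar per grid point


depths = ground_depth(np.zeros((400, 400)), 0.5, np.array([100.0, 100.0, 1.6]))
print(depths.min(), depths.max())  # ~1.6 m directly below the camera, ~141 m at a far corner
```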
  • A map point distance-to-lane center value is computed by the computing device for each geometric point location (i.e., defined by an x-coordinate and y-coordinate) in the map. The nearest centerline to a map location is defined as whichever centerline of a plurality of centerlines contains the closest map point (in Euclidean distance in (x, y); vertical distance is ignored) to the map location. C is a set of centerlines, where c ∈ C consists of a set of geometric points p ∈ c. Distance is defined as the 2-norm. The nearest centerline ĉ to a map location l is defined by mathematical equation (1).
  • $\hat{c} = \min_{c \in C} \left( \min_{p \in c} \lVert p - l \rVert \right) \qquad (1)$
  • where p ranges over the ordered set of geometric points in the centerline c. Illustrative map point distance-to-lane center value information is shown in graph 608 of FIG. 6 . The value of the nearest centerline ĉ for each map location l is defined by an x-coordinate and a y-coordinate, and thus is plotted in an xy-coordinate system on graph 608.
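  • Equation (1) can be sketched directly, treating each centerline as an array of (x, y) points and ignoring vertical distance as stated above; the function name and data layout are illustrative assumptions.

```python
import numpy as np


def nearest_centerline(location_xy: np.ndarray, centerlines: list) -> tuple:
    """Equation (1): the centerline containing the closest (x, y) point to a location.

    centerlines is a list of (N_i, 2) arrays of ordered (x, y) points. Returns the
    index of the nearest centerline and the 2-norm distance to its closest point
    (one natural choice for the map point distance-to-lane-center value).
    """
    best_idx, best_dist = -1, float("inf")
    for idx, points in enumerate(centerlines):
        dist = np.linalg.norm(points - location_xy, axis=1).min()
        if dist < best_dist:
            best_idx, best_dist = idx, dist
    return best_idx, best_dist


c0 = np.array([[0.0, 0.0], [10.0, 0.0]])   # toy centerline along y = 0
c1 = np.array([[0.0, 5.0], [10.0, 5.0]])   # toy centerline along y = 5
print(nearest_centerline(np.array([2.0, 1.0]), [c0, c1]))  # (0, ~2.24)
```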
  • A lane direction value is determined for each geometric point location l in the map. For example, a nearest centerline ĉ is identified for a given geometric point location l. The direction of the geometric point location is then set to the lane direction defined for the nearest centerline. The lane direction is defined as a 2-vector with an x-component and a y-component. Illustrative lane direction information is shown in graph 610 of FIG. 6 . Each lane direction is plotted in an xy-coordinate system on graph 610.
  • An intersection value is determined for each geometric point location l in the map. For example, a nearest centerline ĉ is identified for a given geometric point location l. A determination is then made as to whether the nearest centerline is in an intersection contained in the map. This determination can be made based on a look-up table associated with the map or based on xyz-coordinates defining the nearest centerline and xyz-coordinates defining intersections within the map. If the xyz-coordinates of the nearest centerline fall within an area of an intersection, then a determination is made that the given geometric point location is in the intersection. As such, the intersection is assigned to the given geometric point location. Each intersection is defined by an x-component and y-component. Illustrative intersection information is shown in graph 612 of FIG. 6 . Each intersection is plotted in an xy-coordinate system on graph 612.
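  • Continuing the same toy representation, the lane direction and the intersection flag for a map location can both be read off the nearest centerline, as described above; the per-centerline direction vectors and the intersection lookup list are illustrative assumptions.

```python
import numpy as np


def lane_attributes(location_xy: np.ndarray, centerlines: list,
                    directions: list, in_intersection: list) -> tuple:
    """Return (lane direction 2-vector, intersection flag) via the nearest centerline.

    directions[i] is the unit (x, y) lane direction of centerline i;
    in_intersection[i] is True when centerline i lies inside a mapped intersection
    (e.g., taken from a look-up table associated with the map).
    """
    # Nearest centerline by point-to-point 2-norm, as in equation (1).
    idx = min(range(len(centerlines)),
              key=lambda i: np.linalg.norm(centerlines[i] - location_xy, axis=1).min())
    return directions[idx], in_intersection[idx]


direction, flag = lane_attributes(
    np.array([2.0, 1.0]),
    [np.array([[0.0, 0.0], [10.0, 0.0]]), np.array([[0.0, 5.0], [10.0, 5.0]])],
    [np.array([1.0, 0.0]), np.array([-1.0, 0.0])],
    [False, True],
)
print(direction, flag)  # [1. 0.] False
```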
  • In 416, the map information of 412-414 is projected to a given coordinate frame to obtain additional layers (or channels). This projection is achieved by defining a grid on each graph (e.g., graphs 602-612). An illustrative grid 700 is shown in FIG. 7 . The grid 700 comprises a plurality of tiles (or cells) 702. Each point 704 where two lines of the grid 700 meet (or intersect) defines a location of a geometric point of the map. Each tile (or cell) has four corners. Each corner is defined by a geometric point. Each geometric point location has a ground height value associated therewith. Each geometric point location has an identifier assigned thereto which is defined by a row number and a column number. For example, a first point has an identifier p11 since it resides in the first row and the first column. A second point has an identifier p12 since it resides in the first row and the second column, etc. A last point has an identifier pmj since it resides in the mth row and the jth column. The present solution is not limited to the particulars of this example. The ground height values are used to define each tile (or cell), whose corners are among the geometric points p11, . . . , pmj, as a 4-polygon in 3D space. An illustrative 4-polygon 800 for a given tile (or cell) is shown in FIG. 8 . The 4-polygons are then projected into a camera frame or view using a perspective transformation algorithm, i.e., each 4-polygon is converted from an xyz-coordinate system to a uv-coordinate system. Perspective transformation algorithms are well known. Perspective transformation algorithms generally involve mapping four points in a plane A to four points in a plane B. This mapping involves transforming/converting x-coordinate values to u-coordinate values, and y-coordinate values to v-coordinate values. A color for each projected 4-polygon is set to a color value of a select point of the polygon or is set to an average of the color values for the points of the polygon. The polygons defined by the uv-coordinates are then drawn on the camera frame in a top-to-bottom manner relative to the v-axis.
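  • The projection of 416 can be sketched with a generic pinhole camera model: each grid cell becomes a 4-polygon in 3D, its corners are transformed into the camera frame and projected to (u, v) pixel coordinates, and the resulting polygons are rasterized top to bottom in v. The intrinsic/extrinsic matrices and the identity transform in the example are placeholders; any perspective transformation with known camera parameters could be substituted.

```python
import numpy as np


def project_cell(corners_xyz: np.ndarray, extrinsic: np.ndarray,
                 intrinsic: np.ndarray) -> np.ndarray:
    """Project one 4-polygon (a grid cell with per-corner ground heights) into the image.

    corners_xyz: (4, 3) corner points in map coordinates.
    extrinsic:   (4, 4) map-to-camera transform.  intrinsic: (3, 3) pinhole matrix.
    Returns (4, 2) pixel (u, v) coordinates of the projected polygon.
    """
    homogeneous = np.hstack([corners_xyz, np.ones((4, 1))])   # (4, 4)
    cam = (extrinsic @ homogeneous.T)[:3]                     # points in camera frame
    uvw = intrinsic @ cam                                     # perspective projection
    return (uvw[:2] / uvw[2]).T                               # divide by depth -> (u, v)


K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity map-to-camera transform, for illustration only
cell = np.array([[-1.0, 0.5, 5.0], [1.0, 0.5, 5.0], [1.0, 0.6, 7.0], [-1.0, 0.6, 7.0]])
print(project_cell(cell, T, K))

# Each projected polygon is then filled with its map value (e.g., the value of a
# selected corner or the average of the corner values) and the polygons are drawn
# on the camera frame top to bottom relative to the v-axis, e.g. after sorting by
# each polygon's minimum v coordinate.
```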
  • Illustrations of illustrative additional layers (or channels) are provided in FIG. 6 . Layer 614 comprises ground height information plotted in a uv-coordinate system. Layer (or channel) 616 comprises ground depth information plotted in a uv-coordinate system. Layer (or channel) 618 comprises drivable geographical area information plotted in a uv-coordinate system. Layer (or channel) 620 comprises map point distance-to-lane center information plotted in a uv-coordinate system. Layer (or channel) 622 comprises lane direction information plotted in a uv-coordinate system. Layer (or channel) 624 comprises intersection information plotted in a uv-coordinate system. The present solution is not limited to the particulars of FIG. 6 .
  • In 418 of FIG. 4 , a modified image is generated by superimposing the additional layer(s) of road map information onto the image captured in 404. An illustration of an illustrative modified image 900 is provided in FIG. 9 . Another illustrative modified image 1000 is provided in FIG. 10 . As shown in FIG. 10 , the modified image 1000 comprises an R layer (or channel) 1002, a G layer (or channel) 1004, a B layer (or channel) 1006, a ground height layer (or channel) 1008, a ground depth layer (or channel) 1010, a drivable geographical area layer (or channel) 1012, a map point distance-to-lane center layer (or channel) 1014, a lane direction layer (or channel) 1016, and an intersection layer (or channel) 1018. The present solution is not limited to the particulars of FIG. 10 . The modified image can include more or fewer layers (or channels) of road map information than those shown in FIGS. 9-10 .
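  • A minimal sketch of 418: the projected map layers are stacked with the RGB channels along the channel axis so that pixels of all layers stay aligned in 2D. The 720 x 1280 resolution is an arbitrary placeholder, and lane direction is shown as a single channel for brevity (a 2-vector form would add one more channel).

```python
import numpy as np

H, W = 720, 1280  # placeholder camera resolution
rgb = np.zeros((H, W, 3), dtype=np.float32)  # R, G, B layers of the captured image

map_layers = [
    np.zeros((H, W), dtype=np.float32),  # ground height
    np.zeros((H, W), dtype=np.float32),  # ground depth
    np.zeros((H, W), dtype=np.float32),  # drivable geographical area
    np.zeros((H, W), dtype=np.float32),  # map point distance-to-lane center
    np.zeros((H, W), dtype=np.float32),  # lane direction
    np.zeros((H, W), dtype=np.float32),  # intersection
]

modified_image = np.concatenate([rgb] + [layer[..., None] for layer in map_layers], axis=-1)
print(modified_image.shape)  # (720, 1280, 9)
```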
  • The modified image is input into an object detection algorithm of a computing device (e.g., computing device 110 of FIG. 1 , vehicle on-board computing device 220 of FIG. 2 , and/or computing device 300 of FIG. 3 ), as shown by 420. The object detection algorithm is generally configured to estimate a position, an orientation, a spatial extent, and/or a classification for at least one object detected in the modified image, as shown by 422. In some scenarios, one of the following machine learning algorithms is employed in 422 for 3D object detection: a Deep MANTA algorithm, a 3D-RCNN algorithm, an RoI-10D algorithm, a Mono3D++ algorithm, a MonoGRNet algorithm, or a MonoGRNet V2 algorithm. Subsequently, 424 is performed where method 400 ends or other processing is performed (e.g., return to 402).
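  • None of the listed detectors is reproduced here; the sketch below only illustrates one common adaptation when feeding a multi-channel modified image to a convolutional detector, namely widening the backbone's first convolution from 3 to 9 input channels. The layer sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# First convolution widened to accept the 9-channel modified image (N, C, H, W).
first_conv = nn.Conv2d(in_channels=9, out_channels=64, kernel_size=7, stride=2, padding=3)

modified_image = torch.zeros(1, 9, 720, 1280)  # batch of one modified image
features = first_conv(modified_image)          # feature map passed to the rest of the detector
print(features.shape)                          # torch.Size([1, 64, 360, 640])
```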
  • The position, orientation, spatial extent and object classification generated during method 400 can be used by an AV for object trajectory prediction, vehicle trajectory generation, and/or collision avoidance. A block diagram is provided in FIG. 11 that is useful for understanding how vehicle control is achieved in accordance with the object related information estimated based on the modified image. All or some of the operations performed in FIG. 11 can be performed by the on-board computing device of a vehicle (e.g., AV 102 1 of FIG. 1 ) and/or a remote computing device (e.g., computing device 110 of FIG. 1 ).
  • In block 1102, a location of the vehicle is detected. This detection can be made based on sensor data output from a location sensor (e.g., location sensor 260 of FIG. 2 ) of the vehicle. This sensor data can include, but is not limited to, GPS data. Information 1120 specifying the detected location of the vehicle is then passed to block 1106.
  • In block 1104, an object is detected within proximity of the vehicle. This detection is made based on sensor data output from a camera (e.g., camera 262 of FIG. 2 ) of the vehicle. The manner in which the object detection is achieved was discussed above in relation to FIGS. 4-10 . Information about the detected object 1122 is passed to block 1106. This information includes, but is not limited to, a position of the object, an orientation of the object, a spatial extent of the object, an initial predicted trajectory of the object, a speed of the object, and/or a classification of the object. The initial predicted object trajectory can include, but is not limited to, a linear path pointing in the heading direction of the object.
  • In block 1106, a vehicle trajectory is generated using the information from blocks 1102 and 1104. Techniques for determining a vehicle trajectory are well known in the art. Any known or to be known technique for determining a vehicle trajectory can be used herein without limitation. For example, in some scenarios, such a technique involves determining a trajectory for the AV that would pass the object when the object is in front of the AV, the object has a heading direction that is aligned with the direction in which the AV is moving, and the object has a length that is greater than a threshold value. The present solution is not limited to the particulars of this scenario. The vehicle trajectory 1124 can be determined based on the location information 1120, the object detection information 1122, and/or a road map (e.g., road map 270 of FIG. 2 , which is pre-stored in a datastore of the vehicle). The vehicle trajectory 1124 may represent a smooth path that does not have abrupt changes that would otherwise cause passenger discomfort. For example, the vehicle trajectory is defined by a path of travel along a given lane of a road in which the object is not predicted to travel within a given amount of time. The vehicle trajectory 1124 is then provided to block 1108.
  • In block 1108, a steering angle and velocity command is generated based on the vehicle trajectory 1124. The steering angle and velocity command is provided to block 1110 for vehicle dynamics control.
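  • The disclosure does not specify how block 1108 converts the trajectory into commands; as one example of the general idea, the sketch below uses pure-pursuit geometry (a technique substituted here purely for illustration) to turn a lookahead point on the trajectory into a steering angle. The wheelbase and lookahead values are placeholders.

```python
import math


def pure_pursuit_steering(lookahead_xy: tuple, wheelbase_m: float = 2.8) -> float:
    """Steering angle (radians) that drives the vehicle through a lookahead point.

    lookahead_xy is a point on the planned trajectory expressed in the vehicle frame
    (x forward, y left). The wheelbase value is an arbitrary placeholder.
    """
    x, y = lookahead_xy
    curvature = 2.0 * y / (x * x + y * y)      # circle through the origin and the point
    return math.atan(wheelbase_m * curvature)  # bicycle-model steering angle


print(pure_pursuit_steering((10.0, 1.0)))  # ~0.055 rad: gentle turn toward the path
```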
  • Although the present solution has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the present solution may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present solution should not be limited by any of the above described embodiments. Rather, the scope of the present solution should be defined in accordance with the following claims and their equivalents.

Claims (20)

What is claimed is:
1. A method for object detection, comprising:
obtaining, by a computing device, an image that comprises a plurality of layers superimposed on each other;
identifying, by the computing device, a center point of a robot on a map;
selecting, by the computing device, a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot;
obtaining, by the computing device, map information associated with the selected portion of the map;
generating, by the computing device, at least one additional layer using the map information;
superimposing, by the computing device, the at least one additional layer onto the image to generate a modified image; and
performing, by the computing device, an object detection algorithm to detect at least one object in the modified image.
2. The method according to claim 1, further comprising causing, by the computing device, control operations of the robot based on an output of the object detection algorithm.
3. The method according to claim 1, wherein the identifying the center point of the robot is based on pose information of the robot that comprises a location defined in 3D map coordinates, an angle of the robot relative to a reference point, and a pointing direction of the robot.
4. The method according to claim 1, wherein the selected portion of the map represents a geographic area encompassing the robot.
5. The method according to claim 1, further comprising performing machine learning operations to select a first combination of different types of map information when a first scenario exists for the robot and a second combination of different types of map information when a second scenario exists for the robot.
6. The method according to claim 1, wherein the map information comprises at least one of ground information, drivable geographical area information, ground depth information, map point distance-to-lane center information, lane direction information, and intersection information.
7. The method according to claim 1, wherein the at least one additional layer comprises a ground height layer, a ground depth layer, a drivable geographical area layer, a map point distance-to-lane center layer, a lane direction layer, or an intersection layer.
8. The method according to claim 1, wherein the generating the at least one additional layer comprises:
plotting the map information on a graph;
defining a 2D grid on the graph, the 2D grid comprising a plurality of cells with each said cell having four corners respectively associated with geometric points of the map;
using ground height values associated with the geometric points to define polygons in 3D space; and
projecting the polygons into a camera frame.
9. The method according to claim 8, further comprising setting a color value of each said projected polygon to (i) a color value associated with a select one of the geometric points used to define a corresponding one of the polygons or (ii) an average color value for the geometric points used to define a corresponding one of the polygons.
10. The method according to claim 1, wherein at least two different additional layers are generated using the map information and superimposed onto the image to generate the modified image.
11. A system, comprising:
a processor;
a non-transitory computer-readable storage medium comprising programming instructions that are configured to cause the processor to implement a method for object detection, wherein the programming instructions comprise instructions to:
obtain an image that comprises a plurality of layers superimposed on each other;
identify a center point of a robot on a map;
select a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot;
obtain map information associated with the selected portion of the map;
generate at least one additional layer using the map information;
superimpose the at least one additional layer onto the image to generate a modified image; and
perform an object detection algorithm to detect at least one object in the modified image.
12. The system according to claim 11, wherein the programming instructions comprise instructions to control operations of the robot based on an output of the object detection algorithm.
13. The system according to claim 11, wherein the center point of the robot is identified based on pose information of the robot that comprises a location defined in 3D map coordinates, an angle of the robot relative to a reference point, and a pointing direction of the robot.
14. The system according to claim 11, wherein the selected portion of the map represents a geographic area encompassing the robot.
15. The system according to claim 11, wherein the programming instructions comprise instructions to perform machine learning operations to select a first combination of different types of map information when a first scenario exists for the robot and a second combination of different types of map information when a second scenario exists for the robot.
16. The system according to claim 11, wherein the map information comprises at least one of ground information, drivable geographical area information, ground depth information, map point distance-to-lane center information, lane direction information, and intersection information.
17. The system according to claim 11, wherein the at least one additional layer comprises a ground height layer, a ground depth layer, a drivable geographical area layer, a map point distance-to-lane center layer, a lane direction layer, or an intersection layer.
18. The system according to claim 11, wherein the at least one additional layer is generated by:
plotting the map information on a graph;
defining a 2D grid on the graph, the 2D grid comprising a plurality of cells with each said cell having four corners respectively associated with geometric points of the map;
using ground height values associated with the geometric points to define polygons in 3D space; and
projecting the polygons into a camera frame.
19. The system according to claim 18, wherein the programming instructions comprise instructions to set a color value of each said projected polygon to (i) a color value associated with a select one of the geometric points used to define a corresponding one of the polygons or (ii) an average color value for the geometric points used to define a corresponding one of the polygons.
20. A non-transitory computer-readable medium that stores instructions that are configured to, when executed by at least one computing device, cause the at least one computing device to perform operations comprising:
obtaining an image that comprises a plurality of layers superimposed on each other;
identifying a center point of a robot on a map;
selecting a portion of the map contained in a geometric shape overlaid on the map so as to have a center set to the center point of the robot;
obtaining map information associated with the selected portion of the map;
generating at least one additional layer using the map information;
superimposing the at least one additional layer onto the image to generate a modified image; and
performing an object detection algorithm to detect at least one object in the modified image.
US18/054,585 2020-11-25 2022-11-11 Systems and methods for monocular based object detection Pending US20230063845A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/054,585 US20230063845A1 (en) 2020-11-25 2022-11-11 Systems and methods for monocular based object detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/105,199 US11527028B2 (en) 2020-11-25 2020-11-25 Systems and methods for monocular based object detection
US18/054,585 US20230063845A1 (en) 2020-11-25 2022-11-11 Systems and methods for monocular based object detection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/105,199 Continuation US11527028B2 (en) 2020-11-25 2020-11-25 Systems and methods for monocular based object detection

Publications (1)

Publication Number Publication Date
US20230063845A1 true US20230063845A1 (en) 2023-03-02

Family

ID=81658461

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/105,199 Active 2041-04-24 US11527028B2 (en) 2020-11-25 2020-11-25 Systems and methods for monocular based object detection
US18/054,585 Pending US20230063845A1 (en) 2020-11-25 2022-11-11 Systems and methods for monocular based object detection

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/105,199 Active 2041-04-24 US11527028B2 (en) 2020-11-25 2020-11-25 Systems and methods for monocular based object detection

Country Status (4)

Country Link
US (2) US11527028B2 (en)
CN (1) CN116529561A (en)
DE (1) DE112021006111T5 (en)
WO (1) WO2022115215A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11473917B2 (en) 2020-07-14 2022-10-18 Argo AI, LLC System for augmenting autonomous vehicle perception using smart nodes
US11403943B2 (en) * 2020-07-14 2022-08-02 Argo AI, LLC Method and system for vehicle navigation using information from smart node
US20230089897A1 (en) * 2021-09-23 2023-03-23 Motional Ad Llc Spatially and temporally consistent ground modelling with information fusion

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190220002A1 (en) * 2016-08-18 2019-07-18 SZ DJI Technology Co., Ltd. Systems and methods for augmented stereoscopic display
US10410328B1 (en) * 2016-08-29 2019-09-10 Perceptin Shenzhen Limited Visual-inertial positional awareness for autonomous and non-autonomous device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002098538A (en) * 2000-09-27 2002-04-05 Alpine Electronics Inc Navigation system and method for displaying information of pseudo three dimensional map
US9927251B2 (en) * 2009-02-24 2018-03-27 Alpine Electronics, Inc. Method and apparatus for detecting and correcting freeway-ramp-freeway situation in calculated route
US10488215B1 (en) 2018-10-26 2019-11-26 Phiar Technologies, Inc. Augmented reality interface for navigation assistance
US20210256849A1 (en) * 2020-02-19 2021-08-19 GM Global Technology Operations LLC Process and system for local traffic approximation through analysis of cloud data
IN202027039221A (en) 2020-09-10 2020-10-02 Sony Corp

Also Published As

Publication number Publication date
WO2022115215A1 (en) 2022-06-02
US11527028B2 (en) 2022-12-13
CN116529561A (en) 2023-08-01
DE112021006111T5 (en) 2023-12-14
US20220165010A1 (en) 2022-05-26


Legal Events

Date Code Title Description
AS Assignment

Owner name: ARGO AI, LLC, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FOLEY, SEAN;HAYES, JAMES;SIGNING DATES FROM 20201119 TO 20201124;REEL/FRAME:061733/0121

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: FORD GLOBAL TECHNOLOGIES, LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ARGO AI, LLC;REEL/FRAME:063025/0346

Effective date: 20230309

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED